diff --git a/.github/workflows/python-package-conda.yml b/.github/workflows/python-package-conda.yml
deleted file mode 100644
index 51c99bba..00000000
--- a/.github/workflows/python-package-conda.yml
+++ /dev/null
@@ -1,34 +0,0 @@
-name: Python Package using Conda
-
-on: [push]
-
-jobs:
- build-linux:
- runs-on: ubuntu-latest
- strategy:
- max-parallel: 5
-
- steps:
- - uses: actions/checkout@v4
- - name: Set up Python 3.10
- uses: actions/setup-python@v5
- with:
- python-version: '3.10'
- - name: Add conda to system path
- run: |
- # $CONDA is an environment variable pointing to the root of the miniconda directory
- echo $CONDA/bin >> $GITHUB_PATH
- - name: Install dependencies
- run: |
- conda env update --file environment.yml --name base
- - name: Lint with flake8
- run: |
- conda install flake8
- # stop the build if there are Python syntax errors or undefined names
- flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
- # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
- flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
- - name: Test with pytest
- run: |
- conda install pytest
- pytest
diff --git a/docs/blogs/blog.md b/docs/blogs/blog.md
deleted file mode 100644
index 97619c2a..00000000
--- a/docs/blogs/blog.md
+++ /dev/null
@@ -1,765 +0,0 @@
-# Swarms API: Orchestrating the Future of AI Agent Collaboration
-
-In today's rapidly evolving AI landscape, we're witnessing a fundamental shift from single-agent AI systems to complex, collaborative multi-agent architectures. While individual AI models like GPT-4 and Claude have demonstrated remarkable capabilities, they often struggle with complex tasks requiring diverse expertise, nuanced decision-making, and specialized domain knowledge. Enter the Swarms API, an enterprise-grade solution designed to orchestrate collaborative intelligence through coordinated AI agent swarms.
-
-## The Problem: The Limitations of Single-Agent AI
-
-Despite significant advances in large language models and AI systems, single-agent architectures face inherent limitations when tackling complex real-world problems:
-
-### Expertise Boundaries
-Even the most advanced AI models have knowledge boundaries. No single model can possess expert-level knowledge across all domains simultaneously. When a task requires deep expertise in multiple areas (finance, law, medicine, and technical analysis, for example), a single agent quickly reaches its limits.
-
-### Complex Reasoning Chains
-Many real-world problems demand multistep reasoning with multiple feedback loops and verification processes. Single agents often struggle to maintain reasoning coherence through extended problem-solving journeys, leading to errors that compound over time.
-
-### Workflow Orchestration
-Enterprise applications frequently require sophisticated workflows with multiple handoffs, approvals, and specialized processing steps. Managing this orchestration with individual AI instances is inefficient and error-prone.
-
-### Resource Optimization
-Deploying high-powered AI models for every task is expensive and inefficient. Organizations need right-sized solutions that match computing resources to task requirements.
-
-### Collaboration Mechanisms
-The most sophisticated human problem-solving happens in teams, where specialists collaborate, debate, and refine solutions together. This collaborative intelligence is difficult to replicate with isolated AI agents.
-
-## The Solution: Swarms API
-
-The Swarms API addresses these challenges through a revolutionary approach to AI orchestration. By enabling multiple specialized agents to collaborate in coordinated swarms, it unlocks new capabilities previously unattainable with single-agent architectures.
-
-### What is the Swarms API?
-
-The Swarms API is an enterprise-grade platform that enables organizations to deploy and manage intelligent agent swarms in the cloud. Rather than relying on a single AI agent to handle complex tasks, the Swarms API orchestrates teams of specialized AI agents that work together, each handling specific aspects of a larger problem.
-
-The platform provides a robust infrastructure for creating, executing, and managing sophisticated AI agent workflows without the burden of maintaining the underlying infrastructure. With its cloud-native architecture, the Swarms API offers scalability, reliability, and security essential for enterprise deployments.
-
-## Core Capabilities
-
-The Swarms API delivers a comprehensive suite of capabilities designed for production-grade AI orchestration:
-
-### Intelligent Swarm Management
-
-At its core, the Swarms API enables the creation and execution of collaborative agent swarms. These swarms consist of specialized AI agents designed to work together on complex tasks. Unlike traditional AI approaches where a single model handles the entire workload, swarms distribute tasks among specialized agents, each contributing its expertise to the collective solution.
-
-For example, a financial analysis swarm might include:
-- A data preprocessing agent that cleans and normalizes financial data
-- A market analyst agent that identifies trends and patterns
-- An economic forecasting agent that predicts future market conditions
-- A report generation agent that compiles insights into a comprehensive analysis
-
-By coordinating these specialized agents, the swarm can deliver more accurate, nuanced, and valuable results than any single agent could produce alone.
-
-### Automatic Agent Generation
-
-One of the most powerful features of the Swarms API is its ability to dynamically create optimized agents based on task requirements. Rather than manually configuring each agent in a swarm, users can specify the overall task and let the platform automatically generate appropriate agents with optimized prompts and configurations.
-
-This automatic agent generation significantly reduces the expertise and effort required to deploy effective AI solutions. The system analyzes the task requirements and creates a set of agents specifically designed to address different aspects of the problem. This approach not only saves time but also improves the quality of results by ensuring each agent is properly configured for its specific role.
-
-### Multiple Swarm Architectures
-
-Different problems require different collaboration patterns. The Swarms API supports various swarm architectures to match specific workflow needs:
-
-- **SequentialWorkflow**: Agents work in a predefined sequence, with each agent handling specific subtasks in order
-- **ConcurrentWorkflow**: Multiple agents work simultaneously on different aspects of a task
-- **GroupChat**: Agents collaborate in a discussion format to solve problems collectively
-- **HierarchicalSwarm**: Organizes agents in a structured hierarchy with managers and workers
-- **MajorityVoting**: Uses a consensus mechanism where multiple agents vote on the best solution
-- **AutoSwarmBuilder**: Automatically designs and builds an optimal swarm architecture based on the task
-- **MixtureOfAgents**: Combines multiple agent types to tackle diverse aspects of a problem
-- **MultiAgentRouter**: Routes subtasks to specialized agents based on their capabilities
-- **AgentRearrange**: Dynamically reorganizes the workflow between agents based on evolving task requirements
-
-This flexibility allows organizations to select the most appropriate collaboration pattern for each specific use case, optimizing the balance between efficiency, thoroughness, and creativity.
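-
-In practice, the architecture is chosen with the `swarm_type` field of the swarm configuration, as the complete examples later in this post show. A minimal illustrative fragment (agent definitions elided; with `AutoSwarmBuilder`, per the description above, the platform can design the agents from the task itself):
-
-```python
-# Sketch only; the full request helpers are defined in the Getting Started section.
-swarm_config = {
-    "name": "Market Research Swarm",
-    "swarm_type": "ConcurrentWorkflow",  # or any architecture listed above
-    "task": "Summarize overnight market movements for the trading desk",
-    "agents": [],  # real configurations define the specialist agents here
-}
-```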
-
-### Scheduled Execution
-
-The Swarms API enables automated, scheduled swarm executions, allowing organizations to set up recurring tasks that run automatically at specified times. This feature is particularly valuable for regular reporting, monitoring, and analysis tasks that need to be performed on a consistent schedule.
-
-For example, a financial services company could schedule a daily market analysis swarm to run before trading hours, providing updated insights based on overnight market movements. Similarly, a cybersecurity team might schedule hourly security assessment swarms to continuously monitor potential threats.
-
-### Comprehensive Logging
-
-Transparency and auditability are essential for enterprise AI applications. The Swarms API provides comprehensive logging capabilities that track all API interactions, agent communications, and decision processes. This detailed logging enables:
-
-- Debugging and troubleshooting swarm behaviors
-- Auditing decision trails for compliance and quality assurance
-- Analyzing performance patterns to identify optimization opportunities
-- Documenting the rationale behind AI-generated recommendations
-
-These logs provide valuable insights into how swarms operate and make decisions, increasing trust and enabling continuous improvement of AI workflows.
-
-### Cost Management
-
-AI deployment costs can quickly escalate without proper oversight. The Swarms API addresses this challenge through:
-
-- **Predictable, transparent pricing**: Clear cost structures that make budgeting straightforward
-- **Optimized resource utilization**: Intelligent allocation of computing resources based on task requirements
-- **Detailed cost breakdowns**: Comprehensive reporting on token usage, agent costs, and total expenditures
-- **Model flexibility**: Freedom to choose the most cost-effective models for each agent based on task complexity
-
-This approach ensures organizations get maximum value from their AI investments without unexpected cost overruns.
-
-### Enterprise Security
-
-Security is paramount for enterprise AI deployments. The Swarms API implements robust security measures including:
-
-- **Full API key authentication**: Secure access control for all API interactions
-- **Comprehensive key management**: Tools for creating, rotating, and revoking API keys
-- **Usage monitoring**: Tracking and alerting for suspicious activity patterns
-- **Secure data handling**: Appropriate data protection throughout the swarm execution lifecycle
-
-These security features ensure that sensitive data and AI workflows remain protected in accordance with enterprise security requirements.
-
-## How It Works: Behind the Scenes
-
-The Swarms API operates on a sophisticated architecture designed for reliability, scalability, and performance. Here's a look at what happens when you submit a task to the Swarms API:
-
-1. **Task Submission**: You send a request to the API with your task description and desired swarm configuration.
-
-2. **Swarm Configuration**: The system either uses your specified agent configuration or automatically generates an optimal swarm structure based on the task requirements.
-
-3. **Agent Initialization**: Each agent in the swarm is initialized with its specific instructions, model parameters, and role definitions.
-
-4. **Orchestration Setup**: The system establishes the communication and workflow patterns between agents based on the selected swarm architecture.
-
-5. **Execution**: The swarm begins working on the task, with agents collaborating according to their defined roles and relationships.
-
-6. **Monitoring and Adjustment**: Throughout execution, the system monitors agent performance and makes adjustments as needed.
-
-7. **Result Compilation**: Once the task is complete, the system compiles the results into the requested format.
-
-8. **Response Delivery**: The final output is returned to you, along with metadata about the execution process.
-
-This entire process happens seamlessly in the cloud, with the Swarms API handling all the complexities of agent coordination, resource allocation, and workflow management.
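-
-In code, that entire lifecycle is triggered by a single request to the completions endpoint, mirrored by the client defined in the Getting Started section below. A minimal sketch (the configuration is abbreviated; see the full examples for complete agent definitions):
-
-```python
-import os
-import requests
-
-swarm_config = {
-    "name": "Quick Analysis",
-    "swarm_type": "SequentialWorkflow",
-    "task": "Summarize the key risks in the attached quarterly report",
-    "agents": [],  # agent definitions elided; see the examples below
-}
-
-response = requests.post(
-    "https://api.swarms.world/v1/swarm/completions",
-    headers={
-        "x-api-key": os.getenv("SWARMS_API_KEY"),
-        "Content-Type": "application/json",
-    },
-    json=swarm_config,
-    timeout=120,
-)
-print(response.json())
-```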
-
-## Real-World Applications
-
-The Swarms API enables a wide range of applications across industries. Here are some compelling use cases that demonstrate its versatility:
-
-### Financial Services
-
-#### Investment Research
-Financial institutions can deploy research swarms that combine market analysis, economic forecasting, company evaluation, and risk assessment. These swarms can evaluate investment opportunities much more comprehensively than single-agent systems, considering multiple factors simultaneously:
-
-- Macroeconomic indicators
-- Company fundamentals
-- Market sentiment
-- Technical analysis patterns
-- Regulatory considerations
-
-For example, an investment research swarm analyzing a potential stock purchase might include specialists in the company's industry, financial statement analysis, market trend identification, and risk assessment. This collaborative approach delivers more nuanced insights than any single analyst or model could produce independently.
-
-#### Regulatory Compliance
-Financial regulations are complex and constantly evolving. Compliance swarms can monitor regulatory changes, assess their impact on existing policies, and recommend appropriate adjustments. These swarms might include:
-
-- Regulatory monitoring agents that track new rules and guidelines
-- Policy analysis agents that evaluate existing compliance frameworks
-- Gap assessment agents that identify discrepancies
-- Documentation agents that update compliance materials
-
-This approach ensures comprehensive coverage of regulatory requirements while minimizing compliance risks.
-
-### Healthcare
-
-#### Medical Research Analysis
-The medical literature grows at an overwhelming pace, making it difficult for researchers and clinicians to stay current. Research analysis swarms can continuously scan new publications, identify relevant findings, and synthesize insights for specific research questions or clinical scenarios.
-
-A medical research swarm might include:
-- Literature scanning agents that identify relevant publications
-- Methodology assessment agents that evaluate research quality
-- Clinical relevance agents that determine practical applications
-- Summary agents that compile key findings into accessible reports
-
-This collaborative approach enables more thorough literature reviews and helps bridge the gap between research and clinical practice.
-
-#### Treatment Planning
-Complex medical cases often benefit from multidisciplinary input. Treatment planning swarms can integrate perspectives from different medical specialties, consider patient-specific factors, and recommend comprehensive care approaches.
-
-For example, an oncology treatment planning swarm might include specialists in:
-- Diagnostic interpretation
-- Treatment protocol evaluation
-- Drug interaction assessment
-- Patient history analysis
-- Evidence-based outcome prediction
-
-By combining these specialized perspectives, the swarm can develop more personalized and effective treatment recommendations.
-
-### Legal Services
-
-#### Contract Analysis
-Legal contracts contain numerous interconnected provisions that must be evaluated holistically. Contract analysis swarms can review complex agreements more thoroughly by assigning different sections to specialized agents:
-
-- Definition analysis agents that ensure consistent terminology
-- Risk assessment agents that identify potential liabilities
-- Compliance agents that check regulatory requirements
-- Precedent comparison agents that evaluate terms against standards
-- Conflict detection agents that identify internal inconsistencies
-
-This distributed approach enables more comprehensive contract reviews while reducing the risk of overlooking critical details.
-
-#### Legal Research
-Legal research requires examining statutes, case law, regulations, and scholarly commentary. Research swarms can conduct multi-faceted legal research by coordinating specialized agents focusing on different aspects of the legal landscape.
-
-A legal research swarm might include:
-- Statutory analysis agents that examine relevant laws
-- Case law agents that review judicial precedents
-- Regulatory agents that assess administrative rules
-- Scholarly analysis agents that evaluate academic perspectives
-- Synthesis agents that integrate findings into cohesive arguments
-
-This collaborative approach produces more comprehensive legal analyses that consider multiple sources of authority.
-
-### Research and Development
-
-#### Scientific Literature Review
-Scientific research increasingly spans multiple disciplines, making comprehensive literature reviews challenging. Literature review swarms can analyze publications across relevant fields, identify methodological approaches, and synthesize findings from diverse sources.
-
-For example, a biomedical engineering literature review swarm might include specialists in:
-- Materials science
-- Cellular biology
-- Clinical applications
-- Regulatory requirements
-- Statistical methods
-
-By integrating insights from these different perspectives, the swarm can produce more comprehensive and valuable literature reviews.
-
-#### Experimental Design
-Designing robust experiments requires considering multiple factors simultaneously. Experimental design swarms can develop sophisticated research protocols by integrating methodological expertise, statistical considerations, practical constraints, and ethical requirements.
-
-An experimental design swarm might coordinate:
-- Methodology agents that design experimental procedures
-- Statistical agents that determine appropriate sample sizes and analyses
-- Logistics agents that assess practical feasibility
-- Ethics agents that evaluate potential concerns
-- Documentation agents that prepare formal protocols
-
-This collaborative approach leads to more rigorous experimental designs while addressing potential issues preemptively.
-
-### Software Development
-
-#### Code Review and Optimization
-Code review requires evaluating multiple aspects simultaneously: functionality, security, performance, maintainability, and adherence to standards. Code review swarms can distribute these concerns among specialized agents:
-
-- Functionality agents that evaluate whether code meets requirements
-- Security agents that identify potential vulnerabilities
-- Performance agents that assess computational efficiency
-- Style agents that check adherence to coding standards
-- Documentation agents that review comments and documentation
-
-By addressing these different aspects in parallel, code review swarms can provide more comprehensive feedback to development teams.
-
-#### System Architecture Design
-Designing complex software systems requires balancing numerous considerations. Architecture design swarms can develop more robust system designs by coordinating specialists in different architectural concerns:
-
-- Scalability agents that evaluate growth potential
-- Security agents that assess protective measures
-- Performance agents that analyze efficiency
-- Maintainability agents that consider long-term management
-- Integration agents that evaluate external system connections
-
-This collaborative approach leads to more balanced architectural decisions that address multiple requirements simultaneously.
-
-## Getting Started with the Swarms API
-
-The Swarms API is designed for straightforward integration into existing workflows. Let's walk through the setup process and explore some practical code examples for different industries.
-
-### 1. Setting Up Your Environment
-
-First, create an account on [swarms.world](https://swarms.world). After registration, navigate to the API key management interface at [https://swarms.world/platform/api-keys](https://swarms.world/platform/api-keys) to generate your API key.
-
-Once you have your API key, set up your Python environment:
-
-```bash
-# Install required packages
-pip install requests python-dotenv
-```
-
-Create a basic project structure:
-
-```
-swarms-project/
-├── .env # Store your API key securely
-├── swarms_client.py # Helper functions for API interaction
-└── examples/ # Industry-specific examples
-```
-
-In your `.env` file, add your API key:
-
-```
-SWARMS_API_KEY=your_api_key_here
-```
-
-### 2. Creating a Basic Swarms Client
-
-Let's create a simple client to interact with the Swarms API:
-
-```python
-# swarms_client.py
-import os
-import requests
-from dotenv import load_dotenv
-import json
-
-# Load environment variables
-load_dotenv()
-
-# Configuration
-API_KEY = os.getenv("SWARMS_API_KEY")
-BASE_URL = "https://api.swarms.world"
-
-# Standard headers for all requests
-headers = {
- "x-api-key": API_KEY,
- "Content-Type": "application/json"
-}
-
-def check_api_health():
- """Simple health check to verify API connectivity."""
- response = requests.get(f"{BASE_URL}/health", headers=headers)
- return response.json()
-
-def run_swarm(swarm_config):
- """Execute a swarm with the provided configuration."""
- response = requests.post(
- f"{BASE_URL}/v1/swarm/completions",
- headers=headers,
- json=swarm_config
- )
- return response.json()
-
-def get_available_swarms():
- """Retrieve list of available swarm types."""
- response = requests.get(f"{BASE_URL}/v1/swarms/available", headers=headers)
- return response.json()
-
-def get_available_models():
- """Retrieve list of available AI models."""
- response = requests.get(f"{BASE_URL}/v1/models/available", headers=headers)
- return response.json()
-
-def get_swarm_logs():
- """Retrieve logs of previous swarm executions."""
- response = requests.get(f"{BASE_URL}/v1/swarm/logs", headers=headers)
- return response.json()
-```
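-
-The scheduled execution feature described earlier (and referenced in the private equity example below) needs one more helper in `swarms_client.py`. The scheduling endpoint and payload are not documented in this post, so the path and field names below are assumptions to be checked against the API reference:
-
-```python
-def schedule_swarm(swarm_config, scheduled_time, timezone="UTC"):
-    """Schedule a swarm for later execution.
-
-    Note: the endpoint path and schedule fields here are assumptions;
-    consult the API reference for the exact scheduling contract.
-    """
-    payload = dict(swarm_config)
-    payload["schedule"] = {
-        "scheduled_time": scheduled_time,
-        "timezone": timezone,
-    }
-    response = requests.post(
-        f"{BASE_URL}/v1/swarm/schedule",
-        headers=headers,
-        json=payload,
-    )
-    return response.json()
-```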
-
-### 3. Industry-Specific Examples
-
-Let's explore practical applications of the Swarms API across different industries.
-
-#### Healthcare: Clinical Research Assistant
-
-This example creates a swarm that analyzes clinical trial data and summarizes findings:
-
-```python
-# healthcare_example.py
-from swarms_client import run_swarm
-import json
-
-def clinical_research_assistant():
- """
- Create a swarm that analyzes clinical trial data, identifies patterns,
- and generates comprehensive research summaries.
- """
- swarm_config = {
- "name": "Clinical Research Assistant",
- "description": "Analyzes medical research data and synthesizes findings",
- "agents": [
- {
- "agent_name": "Data Preprocessor",
- "description": "Cleans and organizes clinical trial data",
- "system_prompt": "You are a data preprocessing specialist focused on clinical trials. "
- "Your task is to organize, clean, and structure raw clinical data for analysis. "
- "Identify and handle missing values, outliers, and inconsistencies in the data.",
- "model_name": "gpt-4o",
- "role": "worker",
- "max_loops": 1
- },
- {
- "agent_name": "Clinical Analyst",
- "description": "Analyzes preprocessed data to identify patterns and insights",
- "system_prompt": "You are a clinical research analyst with expertise in interpreting medical data. "
- "Your job is to examine preprocessed clinical trial data, identify significant patterns, "
- "and determine the clinical relevance of these findings. Consider factors such as "
- "efficacy, safety profiles, and patient subgroups.",
- "model_name": "gpt-4o",
- "role": "worker",
- "max_loops": 1
- },
- {
- "agent_name": "Medical Writer",
- "description": "Synthesizes analysis into comprehensive reports",
- "system_prompt": "You are a medical writer specializing in clinical research. "
- "Your task is to take the analyses provided and create comprehensive, "
- "well-structured reports that effectively communicate findings to both "
- "medical professionals and regulatory authorities. Follow standard "
- "medical publication guidelines.",
- "model_name": "gpt-4o",
- "role": "worker",
- "max_loops": 1
- }
- ],
- "max_loops": 1,
- "swarm_type": "SequentialWorkflow",
- "task": "Analyze the provided Phase III clinical trial data for Drug XYZ, "
- "a novel treatment for type 2 diabetes. Identify efficacy patterns across "
- "different patient demographics, note any safety concerns, and prepare "
- "a comprehensive summary suitable for submission to regulatory authorities."
- }
-
- # Execute the swarm
- result = run_swarm(swarm_config)
-
- # Print formatted results
- print(json.dumps(result, indent=4))
- return result
-
-if __name__ == "__main__":
- clinical_research_assistant()
-```
-
-#### Legal: Contract Analysis System
-
-This example demonstrates a swarm designed to analyze complex legal contracts:
-
-```python
-# legal_example.py
-from swarms_client import run_swarm
-import json
-
-def contract_analysis_system():
- """
- Create a swarm that thoroughly analyzes legal contracts,
- identifies potential risks, and suggests improvements.
- """
- swarm_config = {
- "name": "Contract Analysis System",
- "description": "Analyzes legal contracts for risks and improvement opportunities",
- "agents": [
- {
- "agent_name": "Clause Extractor",
- "description": "Identifies and categorizes key clauses in contracts",
- "system_prompt": "You are a legal document specialist. Your task is to "
- "carefully review legal contracts and identify all key clauses, "
- "categorizing them by type (liability, indemnification, termination, etc.). "
- "Extract each clause with its context and prepare them for detailed analysis.",
- "model_name": "gpt-4o",
- "role": "worker",
- "max_loops": 1
- },
- {
- "agent_name": "Risk Assessor",
- "description": "Evaluates clauses for potential legal risks",
- "system_prompt": "You are a legal risk assessment expert. Your job is to "
- "analyze contract clauses and identify potential legal risks, "
- "exposure points, and unfavorable terms. Rate each risk on a "
- "scale of 1-5 and provide justification for your assessment.",
- "model_name": "gpt-4o",
- "role": "worker",
- "max_loops": 1
- },
- {
- "agent_name": "Improvement Recommender",
- "description": "Suggests alternative language to mitigate risks",
- "system_prompt": "You are a contract drafting expert. Based on the risk "
- "assessment provided, suggest alternative language for "
- "problematic clauses to better protect the client's interests. "
- "Ensure suggestions are legally sound and professionally worded.",
- "model_name": "gpt-4o",
- "role": "worker",
- "max_loops": 1
- },
- {
- "agent_name": "Summary Creator",
- "description": "Creates executive summary of findings and recommendations",
- "system_prompt": "You are a legal communication specialist. Create a clear, "
- "concise executive summary of the contract analysis, highlighting "
- "key risks and recommendations. Your summary should be understandable "
- "to non-legal executives while maintaining accuracy.",
- "model_name": "gpt-4o",
- "role": "worker",
- "max_loops": 1
- }
- ],
- "max_loops": 1,
- "swarm_type": "SequentialWorkflow",
- "task": "Analyze the attached software licensing agreement between TechCorp and ClientInc. "
- "Identify all key clauses, assess potential risks to ClientInc, suggest improvements "
- "to better protect ClientInc's interests, and create an executive summary of findings."
- }
-
- # Execute the swarm
- result = run_swarm(swarm_config)
-
- # Print formatted results
- print(json.dumps(result, indent=4))
- return result
-
-if __name__ == "__main__":
- contract_analysis_system()
-```
-
-#### Private Equity: Investment Opportunity Analysis
-
-This example shows a swarm that performs comprehensive due diligence on potential investments:
-
-```python
-# private_equity_example.py
-from swarms_client import run_swarm, schedule_swarm
-import json
-from datetime import datetime, timedelta
-
-def investment_opportunity_analysis():
- """
- Create a swarm that performs comprehensive due diligence
- on potential private equity investment opportunities.
- """
- swarm_config = {
- "name": "PE Investment Analyzer",
- "description": "Performs comprehensive analysis of private equity investment opportunities",
- "agents": [
- {
- "agent_name": "Financial Analyst",
- "description": "Analyzes financial statements and projections",
- "system_prompt": "You are a private equity financial analyst with expertise in "
- "evaluating company financials. Review the target company's financial "
- "statements, analyze growth trajectories, profit margins, cash flow patterns, "
- "and debt structure. Identify financial red flags and growth opportunities.",
- "model_name": "gpt-4o",
- "role": "worker",
- "max_loops": 1
- },
- {
- "agent_name": "Market Researcher",
- "description": "Assesses market conditions and competitive landscape",
- "system_prompt": "You are a market research specialist in the private equity sector. "
- "Analyze the target company's market position, industry trends, competitive "
- "landscape, and growth potential. Identify market-related risks and opportunities "
- "that could impact investment returns.",
- "model_name": "gpt-4o",
- "role": "worker",
- "max_loops": 1
- },
- {
- "agent_name": "Operational Due Diligence",
- "description": "Evaluates operational efficiency and improvement opportunities",
- "system_prompt": "You are an operational due diligence expert. Analyze the target "
- "company's operational structure, efficiency metrics, supply chain, "
- "technology infrastructure, and management capabilities. Identify "
- "operational improvement opportunities that could increase company value.",
- "model_name": "gpt-4o",
- "role": "worker",
- "max_loops": 1
- },
- {
- "agent_name": "Risk Assessor",
- "description": "Identifies regulatory, legal, and business risks",
- "system_prompt": "You are a risk assessment specialist in private equity. "
- "Evaluate potential regulatory challenges, legal liabilities, "
- "compliance issues, and business model vulnerabilities. Rate "
- "each risk based on likelihood and potential impact.",
- "model_name": "gpt-4o",
- "role": "worker",
- "max_loops": 1
- },
- {
- "agent_name": "Investment Thesis Creator",
- "description": "Synthesizes analysis into comprehensive investment thesis",
- "system_prompt": "You are a private equity investment strategist. Based on the "
- "analyses provided, develop a comprehensive investment thesis "
- "that includes valuation assessment, potential returns, value "
- "creation opportunities, exit strategies, and investment recommendations.",
- "model_name": "gpt-4o",
- "role": "worker",
- "max_loops": 1
- }
- ],
- "max_loops": 1,
- "swarm_type": "SequentialWorkflow",
- "task": "Perform comprehensive due diligence on HealthTech Inc., a potential acquisition "
- "target in the healthcare technology sector. The company develops remote patient "
- "monitoring solutions and has shown 35% year-over-year growth for the past three years. "
- "Analyze financials, market position, operational structure, potential risks, and "
- "develop an investment thesis with a recommended valuation range."
- }
-
- # Option 1: Execute the swarm immediately
- result = run_swarm(swarm_config)
-
- # Option 2: Schedule the swarm for tomorrow morning
- tomorrow = (datetime.now() + timedelta(days=1)).replace(hour=8, minute=0, second=0).isoformat()
- # scheduled_result = schedule_swarm(swarm_config, tomorrow, "America/New_York")
-
- # Print formatted results from immediate execution
- print(json.dumps(result, indent=4))
- return result
-
-if __name__ == "__main__":
- investment_opportunity_analysis()
-```
-
-
-#### Education: Curriculum Development Assistant
-
-This example shows how to use the `ConcurrentWorkflow` swarm type, in which the specialist agents work simultaneously before their contributions are integrated:
-
-```python
-# education_example.py
-from swarms_client import run_swarm
-import json
-
-def curriculum_development_assistant():
- """
- Create a swarm that assists in developing educational curriculum
- with concurrent subject matter experts.
- """
- swarm_config = {
- "name": "Curriculum Development Assistant",
- "description": "Develops comprehensive educational curriculum",
- "agents": [
- {
- "agent_name": "Subject Matter Expert",
- "description": "Provides domain expertise on the subject",
- "system_prompt": "You are a subject matter expert in data science. "
- "Your role is to identify the essential concepts, skills, "
- "and knowledge that students need to master in a comprehensive "
- "data science curriculum. Focus on both theoretical foundations "
- "and practical applications, ensuring the content reflects current "
- "industry standards and practices.",
- "model_name": "gpt-4o",
- "role": "worker",
- "max_loops": 1
- },
- {
- "agent_name": "Instructional Designer",
- "description": "Structures learning objectives and activities",
- "system_prompt": "You are an instructional designer specializing in technical education. "
- "Your task is to transform subject matter content into structured learning "
- "modules with clear objectives, engaging activities, and appropriate assessments. "
- "Design the learning experience to accommodate different learning styles and "
- "knowledge levels.",
- "model_name": "gpt-4o",
- "role": "worker",
- "max_loops": 1
- },
- {
- "agent_name": "Assessment Specialist",
- "description": "Develops evaluation methods and assessments",
- "system_prompt": "You are an educational assessment specialist. "
- "Design comprehensive assessment strategies to evaluate student "
- "learning throughout the curriculum. Create formative and summative "
- "assessments, rubrics, and feedback mechanisms that align with learning "
- "objectives and provide meaningful insights into student progress.",
- "model_name": "gpt-4o",
- "role": "worker",
- "max_loops": 1
- },
- {
- "agent_name": "Curriculum Integrator",
- "description": "Synthesizes input from all specialists into a cohesive curriculum",
- "system_prompt": "You are a curriculum development coordinator. "
- "Your role is to synthesize the input from subject matter experts, "
- "instructional designers, and assessment specialists into a cohesive, "
- "comprehensive curriculum. Ensure logical progression of topics, "
- "integration of theory and practice, and alignment between content, "
- "activities, and assessments.",
- "model_name": "gpt-4o",
- "role": "worker",
- "max_loops": 1
- }
- ],
- "max_loops": 1,
- "swarm_type": "ConcurrentWorkflow", # Experts work simultaneously before integration
- "task": "Develop a comprehensive 12-week data science curriculum for advanced undergraduate "
- "students with programming experience. The curriculum should cover data analysis, "
- "machine learning, data visualization, and ethics in AI. Include weekly learning "
- "objectives, teaching materials, hands-on activities, and assessment methods. "
- "The curriculum should prepare students for entry-level data science positions."
- }
-
- # Execute the swarm
- result = run_swarm(swarm_config)
-
- # Print formatted results
- print(json.dumps(result, indent=4))
- return result
-
-if __name__ == "__main__":
- curriculum_development_assistant()
-```
-
-
-### 4. Monitoring and Optimization
-
-To optimize your swarm configurations and track usage patterns, you can retrieve and analyze logs:
-
-```python
-# analytics_example.py
-from swarms_client import get_swarm_logs
-import json
-
-def analyze_swarm_usage():
- """
- Analyze swarm usage patterns to optimize configurations and costs.
- """
-    # Retrieve logs of previous swarm executions
-    logs = get_swarm_logs()
-
-    # Print the raw payload; the exact log structure isn't documented here,
-    # so inspect it before building usage or cost breakdowns on top of it.
-    print(json.dumps(logs, indent=4))
-
-    return logs
-
-if __name__ == "__main__":
- analyze_swarm_usage()
-```
-
-### 5. Next Steps
-
-Once you've implemented and tested these examples, you can further optimize your swarm configurations by:
-
-1. Experimenting with different swarm architectures for the same task to compare results
-2. Adjusting agent prompts to improve specialization and collaboration
-3. Fine-tuning model parameters like temperature and max_tokens
-4. Combining swarms into larger workflows through scheduled execution
-
-The Swarms API's flexibility allows for continuous refinement of your AI orchestration strategies, enabling increasingly sophisticated solutions to complex problems.
-
-## The Future of AI Agent Orchestration
-
-The Swarms API represents a significant evolution in how we deploy AI for complex tasks. As we look to the future, several trends are emerging in the field of agent orchestration:
-
-### Specialized Agent Ecosystems
-
-We're moving toward rich ecosystems of highly specialized agents designed for specific tasks and domains. These specialized agents will have deep expertise in narrow areas, enabling more sophisticated collaboration when combined in swarms.
-
-### Dynamic Swarm Formation
-
-Future swarm platforms will likely feature even more advanced capabilities for dynamic swarm formation, where the system automatically determines not only which agents to include but also how they should collaborate based on real-time task analysis.
-
-### Cross-Modal Collaboration
-
-As AI capabilities expand across modalities (text, image, audio, video), we'll see increasing collaboration between agents specialized in different data types. This cross-modal collaboration will enable more comprehensive analysis and content creation spanning multiple formats.
-
-### Human-Swarm Collaboration
-
-The next frontier in agent orchestration will be seamless collaboration between human teams and AI swarms, where human specialists and AI agents work together, each contributing their unique strengths to complex problems.
-
-### Continuous Learning Swarms
-
-Future swarms will likely incorporate more sophisticated mechanisms for continuous improvement, with agent capabilities evolving based on past performance and feedback.
-
-## Conclusion
-
-The Swarms API represents a significant leap forward in AI orchestration, moving beyond the limitations of single-agent systems to unlock the power of collaborative intelligence. By enabling specialized agents to work together in coordinated swarms, this enterprise-grade platform opens new possibilities for solving complex problems across industries.
-
-From financial analysis to healthcare research, legal services to software development, the applications for agent swarms are as diverse as they are powerful. The Swarms API provides the infrastructure, tools, and flexibility needed to deploy these collaborative AI systems at scale, with the security, reliability, and cost management features essential for enterprise adoption.
-
-As we continue to push the boundaries of what AI can accomplish, the ability to orchestrate collaborative intelligence will become increasingly crucial. The Swarms API is at the forefront of this evolution, providing a glimpse into the future of AI—a future where the most powerful AI systems aren't individual models but coordinated teams of specialized agents working together to solve our most challenging problems.
-
-For organizations looking to harness the full potential of AI, the Swarms API offers a compelling path forward—one that leverages the power of collaboration to achieve results beyond what any single AI agent could accomplish alone.
-
-To explore the Swarms API and begin building your own intelligent agent swarms, visit [swarms.world](https://swarms.world) today.
-
----
-
-## Resources
-
-* Website: [swarms.ai](https://swarms.ai)
-* Marketplace: [swarms.world](https://swarms.world)
-* Cloud Platform: [cloud.swarms.ai](https://cloud.swarms.ai)
-* Documentation: [docs.swarms.world](https://docs.swarms.world/en/latest/swarms_cloud/swarms_api/)
\ No newline at end of file
diff --git a/docs/mkdocs.yml b/docs/mkdocs.yml
index faf1f661..33c108e4 100644
--- a/docs/mkdocs.yml
+++ b/docs/mkdocs.yml
@@ -225,12 +225,12 @@ nav:
- How to Create New Swarm Architectures: "swarms/structs/create_new_swarm.md"
- Introduction to Hiearchical Swarm Architectures: "swarms/structs/multi_swarm_orchestration.md"
- - Swarm Architecture Documentation:
+ - Swarm Architectures Documentation:
+ - Overview: "swarms/structs/overview.md"
- MajorityVoting: "swarms/structs/majorityvoting.md"
- AgentRearrange: "swarms/structs/agent_rearrange.md"
- RoundRobin: "swarms/structs/round_robin_swarm.md"
- Mixture of Agents: "swarms/structs/moa.md"
- - GraphWorkflow: "swarms/structs/graph_workflow.md"
- GroupChat: "swarms/structs/group_chat.md"
- AgentRegistry: "swarms/structs/agent_registry.md"
- SpreadSheetSwarm: "swarms/structs/spreadsheet_swarm.md"
@@ -242,17 +242,22 @@ nav:
- MatrixSwarm: "swarms/structs/matrix_swarm.md"
- ModelRouter: "swarms/structs/model_router.md"
- MALT: "swarms/structs/malt.md"
- - Auto Agent Builder: "swarms/structs/auto_agent_builder.md"
- Various Execution Methods: "swarms/structs/various_execution_methods.md"
- - Hybrid Hierarchical-Cluster Swarm: "swarms/structs/hhcs.md"
- Deep Research Swarm: "swarms/structs/deep_research_swarm.md"
- - Auto Swarm Builder: "swarms/structs/auto_swarm_builder.md"
- Swarm Matcher: "swarms/structs/swarm_matcher.md"
+ - Council of Judges: "swarms/structs/council_of_judges.md"
+
+  - Hierarchical Architectures:
+ - Auto Agent Builder: "swarms/structs/auto_agent_builder.md"
+ - Hybrid Hierarchical-Cluster Swarm: "swarms/structs/hhcs.md"
+ - Auto Swarm Builder: "swarms/structs/auto_swarm_builder.md"
+
+
- Workflows:
- - ConcurrentWorkflow: "swarms/structs/concurrentworkflow.md"
- - SequentialWorkflow: "swarms/structs/sequential_workflow.md"
- - Structs:
- - Conversation: "swarms/structs/conversation.md"
+ - ConcurrentWorkflow: "swarms/structs/concurrentworkflow.md"
+ - SequentialWorkflow: "swarms/structs/sequential_workflow.md"
+ - GraphWorkflow: "swarms/structs/graph_workflow.md"
+ - Communication Structure: "swarms/structs/conversation.md"
- Swarms Tools:
- Overview: "swarms_tools/overview.md"
@@ -358,6 +363,10 @@ nav:
- Swarms API Tools: "swarms_cloud/swarms_api_tools.md"
- Individual Agent Completions: "swarms_cloud/agent_api.md"
+
+ - Clients:
+ - Swarms API Python Client: "swarms_cloud/python_client.md"
+
- Pricing:
- Swarms API Pricing: "swarms_cloud/api_pricing.md"
- Swarms API Pricing in Chinese: "swarms_cloud/chinese_api_pricing.md"
diff --git a/docs/swarms/structs/conversation.md b/docs/swarms/structs/conversation.md
index 7b849d62..4b3c1c78 100644
--- a/docs/swarms/structs/conversation.md
+++ b/docs/swarms/structs/conversation.md
@@ -2,251 +2,596 @@
## Introduction
-The `Conversation` class is a powerful tool for managing and structuring conversation data in a Python program. It enables you to create, manipulate, and analyze conversations easily. This documentation will provide you with a comprehensive understanding of the `Conversation` class, its attributes, methods, and how to effectively use it.
+The `Conversation` class is a powerful tool for managing and structuring conversation data in a Python program. It enables you to create, manipulate, and analyze conversations easily. This documentation provides a comprehensive understanding of the `Conversation` class, its attributes, methods, and how to effectively use it.
## Table of Contents
-1. **Class Definition**
- - Overview
- - Attributes
+1. [Class Definition](#1-class-definition)
+2. [Initialization Parameters](#2-initialization-parameters)
+3. [Methods](#3-methods)
+4. [Examples](#4-examples)
-2. **Methods**
- - `__init__(self, time_enabled: bool = False, *args, **kwargs)`
- - `add(self, role: str, content: str, *args, **kwargs)`
- - `delete(self, index: str)`
- - `update(self, index: str, role, content)`
- - `query(self, index: str)`
- - `search(self, keyword: str)`
- - `display_conversation(self, detailed: bool = False)`
- - `export_conversation(self, filename: str)`
- - `import_conversation(self, filename: str)`
- - `count_messages_by_role(self)`
- - `return_history_as_string(self)`
- - `save_as_json(self, filename: str)`
- - `load_from_json(self, filename: str)`
- - `search_keyword_in_conversation(self, keyword: str)`
+## 1. Class Definition
----
+### Overview
-### 1. Class Definition
+The `Conversation` class is designed to manage conversations by keeping track of messages and their attributes. It offers methods for adding, deleting, updating, querying, and displaying messages within the conversation. Additionally, it supports exporting and importing conversations, searching for specific keywords, and more.
-#### Overview
+### Attributes
+
+| Attribute | Type | Description |
+|-----------|------|-------------|
+| id | str | Unique identifier for the conversation |
+| name | str | Name of the conversation |
+| system_prompt | Optional[str] | System prompt for the conversation |
+| time_enabled | bool | Flag to enable time tracking for messages |
+| autosave | bool | Flag to enable automatic saving |
+| save_filepath | str | File path for saving conversation history |
+| conversation_history | list | List storing conversation messages |
+| tokenizer | Any | Tokenizer for counting tokens |
+| context_length | int | Maximum tokens allowed in conversation |
+| rules | str | Rules for the conversation |
+| custom_rules_prompt | str | Custom prompt for rules |
+| user | str | User identifier for messages |
+| auto_save | bool | Flag to enable auto-saving |
+| save_as_yaml | bool | Flag to save as YAML |
+| save_as_json_bool | bool | Flag to save as JSON |
+| token_count | bool | Flag to enable token counting |
+| cache_enabled | bool | Flag to enable prompt caching |
+| cache_stats | dict | Statistics about cache usage |
+| cache_lock | threading.Lock | Lock for thread-safe cache operations |
+| conversations_dir | str | Directory to store cached conversations |
+
+## 2. Initialization Parameters
+
+| Parameter | Type | Default | Description |
+|-----------|------|---------|-------------|
+| id | str | generated | Unique conversation ID |
+| name | str | None | Name of the conversation |
+| system_prompt | Optional[str] | None | System prompt for the conversation |
+| time_enabled | bool | False | Enable time tracking |
+| autosave | bool | False | Enable automatic saving |
+| save_filepath | str | None | File path for saving |
+| tokenizer | Any | None | Tokenizer for counting tokens |
+| context_length | int | 8192 | Maximum tokens allowed |
+| rules | str | None | Conversation rules |
+| custom_rules_prompt | str | None | Custom rules prompt |
+| user | str | "User:" | User identifier |
+| auto_save | bool | True | Enable auto-saving |
+| save_as_yaml | bool | True | Save as YAML |
+| save_as_json_bool | bool | False | Save as JSON |
+| token_count | bool | True | Enable token counting |
+| cache_enabled | bool | True | Enable prompt caching |
+| conversations_dir | Optional[str] | None | Directory for cached conversations |
+| provider | Literal["mem0", "in-memory"] | "in-memory" | Storage provider |
+
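+A minimal construction sketch, assuming the import path used elsewhere in these docs (`from swarms.structs import Conversation`) and using only parameters from the table above:
+
+```python
+from swarms.structs import Conversation
+
+conversation = Conversation(
+    name="support-session",
+    system_prompt="You are a helpful assistant.",
+    time_enabled=True,              # timestamp each message
+    autosave=True,                  # save automatically
+    save_filepath="support_session.json",
+    context_length=8192,            # maximum tokens kept in history
+    provider="in-memory",           # default storage provider
+)
+conversation.add("user", "Hello!")
+```
+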
+## 3. Methods
+
+### `add(role: str, content: Union[str, dict, list], metadata: Optional[dict] = None)`
+
+Adds a message to the conversation history.
+
+| Parameter | Type | Description |
+|-----------|------|-------------|
+| role | str | Role of the speaker |
+| content | Union[str, dict, list] | Message content |
+| metadata | Optional[dict] | Additional metadata |
+
+Example:
+```python
+conversation = Conversation()
+conversation.add("user", "Hello, how are you?")
+conversation.add("assistant", "I'm doing well, thank you!")
+```
-The `Conversation` class is designed to manage conversations by keeping track of messages and their attributes. It offers methods for adding, deleting, updating, querying, and displaying messages within the conversation. Additionally, it supports exporting and importing conversations, searching for specific keywords, and more.
+### `add_multiple_messages(roles: List[str], contents: List[Union[str, dict, list]])`
-#### Attributes
+Adds multiple messages to the conversation history.
-- `time_enabled (bool)`: A flag indicating whether to enable timestamp recording for messages.
-- `conversation_history (list)`: A list that stores messages in the conversation.
+| Parameter | Type | Description |
+|-----------|------|-------------|
+| roles | List[str] | List of speaker roles |
+| contents | List[Union[str, dict, list]] | List of message contents |
-### 2. Methods
+Example:
+```python
+conversation = Conversation()
+conversation.add_multiple_messages(
+ ["user", "assistant"],
+ ["Hello!", "Hi there!"]
+)
+```
-#### `__init__(self, time_enabled: bool = False, *args, **kwargs)`
+### `delete(index: str)`
-- **Description**: Initializes a new Conversation object.
-- **Parameters**:
- - `time_enabled (bool)`: If `True`, timestamps will be recorded for each message. Default is `False`.
+Deletes a message from the conversation history.
-#### `add(self, role: str, content: str, *args, **kwargs)`
+| Parameter | Type | Description |
+|-----------|------|-------------|
+| index | str | Index of message to delete |
-- **Description**: Adds a message to the conversation history.
-- **Parameters**:
- - `role (str)`: The role of the speaker (e.g., "user," "assistant").
- - `content (str)`: The content of the message.
+Example:
+```python
+conversation = Conversation()
+conversation.add("user", "Hello")
+conversation.delete(0) # Deletes the first message
+```
-#### `delete(self, index: str)`
+### `update(index: str, role: str, content: Union[str, dict])`
-- **Description**: Deletes a message from the conversation history.
-- **Parameters**:
- - `index (str)`: The index of the message to delete.
+Updates a message in the conversation history.
-#### `update(self, index: str, role, content)`
+| Parameter | Type | Description |
+|-----------|------|-------------|
+| index | str | Index of message to update |
+| role | str | New role of speaker |
+| content | Union[str, dict] | New message content |
-- **Description**: Updates a message in the conversation history.
-- **Parameters**:
- - `index (str)`: The index of the message to update.
- - `role (_type_)`: The new role of the speaker.
- - `content (_type_)`: The new content of the message.
+Example:
+```python
+conversation = Conversation()
+conversation.add("user", "Hello")
+conversation.update(0, "user", "Hi there!")
+```
-#### `query(self, index: str)`
+### `query(index: str)`
-- **Description**: Retrieves a message from the conversation history.
-- **Parameters**:
- - `index (str)`: The index of the message to query.
-- **Returns**: The message as a string.
+Retrieves a message from the conversation history.
-#### `search(self, keyword: str)`
+| Parameter | Type | Description |
+|-----------|------|-------------|
+| index | str | Index of message to query |
-- **Description**: Searches for messages containing a specific keyword in the conversation history.
-- **Parameters**:
- - `keyword (str)`: The keyword to search for.
-- **Returns**: A list of messages that contain the keyword.
+Example:
+```python
+conversation = Conversation()
+conversation.add("user", "Hello")
+message = conversation.query(0)
+```
-#### `display_conversation(self, detailed: bool = False)`
+### `search(keyword: str)`
-- **Description**: Displays the conversation history.
-- **Parameters**:
- - `detailed (bool, optional)`: If `True`, provides detailed information about each message. Default is `False`.
+Searches for messages containing a keyword.
-#### `export_conversation(self, filename: str)`
+| Parameter | Type | Description |
+|-----------|------|-------------|
+| keyword | str | Keyword to search for |
-- **Description**: Exports the conversation history to a text file.
-- **Parameters**:
- - `filename (str)`: The name of the file to export to.
+Example:
+```python
+conversation = Conversation()
+conversation.add("user", "Hello world")
+results = conversation.search("world")
+```
-#### `import_conversation(self, filename: str)`
+### `display_conversation(detailed: bool = False)`
-- **Description**: Imports a conversation history from a text file.
-- **Parameters**:
- - `filename (str)`: The name of the file to import from.
+Displays the conversation history.
-#### `count_messages_by_role(self)`
+| Parameter | Type | Description |
+|-----------|------|-------------|
+| detailed | bool | Show detailed information |
-- **Description**: Counts the number of messages by role in the conversation.
-- **Returns**: A dictionary containing the count of messages for each role.
+Example:
+```python
+conversation = Conversation()
+conversation.add("user", "Hello")
+conversation.display_conversation(detailed=True)
+```
-#### `return_history_as_string(self)`
+### `export_conversation(filename: str)`
-- **Description**: Returns the entire conversation history as a single string.
-- **Returns**: The conversation history as a string.
+Exports conversation history to a file.
-#### `save_as_json(self, filename: str)`
+| Parameter | Type | Description |
+|-----------|------|-------------|
+| filename | str | Output file path |
-- **Description**: Saves the conversation history as a JSON file.
-- **Parameters**:
- - `filename (str)`: The name of the JSON file to save.
+Example:
+```python
+conversation = Conversation()
+conversation.add("user", "Hello")
+conversation.export_conversation("chat.txt")
+```
-#### `load_from_json(self, filename: str)`
+### `import_conversation(filename: str)`
-- **Description**: Loads a conversation history from a JSON file.
-- **Parameters**:
- - `filename (str)`: The name of the JSON file to load.
+Imports conversation history from a file.
-#### `search_keyword_in_conversation(self, keyword: str)`
+| Parameter | Type | Description |
+|-----------|------|-------------|
+| filename | str | Input file path |
-- **Description**: Searches for a keyword in the conversation history and returns matching messages.
-- **Parameters**:
- - `keyword (str)`: The keyword to search for.
-- **Returns**: A list of messages containing the keyword.
+Example:
+```python
+conversation = Conversation()
+conversation.import_conversation("chat.txt")
+```
-## Examples
+### `count_messages_by_role()`
-Here are some usage examples of the `Conversation` class:
+Counts messages by role.
-### Creating a Conversation
+Returns: Dict[str, int]
+Example:
```python
-from swarms.structs import Conversation
+conversation = Conversation()
+conversation.add("user", "Hello")
+conversation.add("assistant", "Hi")
+counts = conversation.count_messages_by_role()
+```
+
+### `return_history_as_string()`
+
+Returns conversation history as a string.
+
+Returns: str
+
+Example:
+```python
+conversation = Conversation()
+conversation.add("user", "Hello")
+history = conversation.return_history_as_string()
+```
+
+### `save_as_json(filename: str)`
-conv = Conversation()
+Saves conversation history as JSON.
+
+| Parameter | Type | Description |
+|-----------|------|-------------|
+| filename | str | Output JSON file path |
+
+Example:
+```python
+conversation = Conversation()
+conversation.add("user", "Hello")
+conversation.save_as_json("chat.json")
```
-### Adding Messages
+### `load_from_json(filename: str)`
+
+Loads conversation history from JSON.
+| Parameter | Type | Description |
+|-----------|------|-------------|
+| filename | str | Input JSON file path |
+
+Example:
```python
-conv.add("user", "Hello, world!")
-conv.add("assistant", "Hello, user!")
+conversation = Conversation()
+conversation.load_from_json("chat.json")
```
-### Displaying the Conversation
+### `truncate_memory_with_tokenizer()`
+
+Truncates conversation history based on token limit.
+Example:
```python
-conv.display_conversation()
+# `some_tokenizer` stands in for any tokenizer object accepted by Conversation
+conversation = Conversation(tokenizer=some_tokenizer)
+conversation.truncate_memory_with_tokenizer()
```
-### Searching for Messages
+### `clear()`
+Clears the conversation history.
+
+Example:
```python
-result = conv.search("Hello")
+conversation = Conversation()
+conversation.add("user", "Hello")
+conversation.clear()
```
-### Exporting and Importing Conversations
+### `to_json()`
+
+Converts conversation history to JSON string.
+Returns: str
+
+Example:
```python
-conv.export_conversation("conversation.txt")
-conv.import_conversation("conversation.txt")
+conversation = Conversation()
+conversation.add("user", "Hello")
+json_str = conversation.to_json()
```
-### Counting Messages by Role
+### `to_dict()`
+
+Returns the conversation history as a list of message dictionaries.
+
+Returns: list
+
+Example:
```python
-counts = conv.count_messages_by_role()
+conversation = Conversation()
+conversation.add("user", "Hello")
+dict_data = conversation.to_dict()
```
-### Loading and Saving as JSON
+### `to_yaml()`
+
+Converts conversation history to YAML string.
+Returns: str
+
+Example:
```python
-conv.save_as_json("conversation.json")
-conv.load_from_json("conversation.json")
+conversation = Conversation()
+conversation.add("user", "Hello")
+yaml_str = conversation.to_yaml()
```
-Certainly! Let's continue with more examples and additional information about the `Conversation` class.
+### `get_visible_messages(agent: "Agent", turn: int)`
+
+Gets visible messages for an agent at a specific turn.
-### Querying a Specific Message
+| Parameter | Type | Description |
+|-----------|------|-------------|
+| agent | Agent | The agent |
+| turn | int | Turn number |
-You can retrieve a specific message from the conversation by its index:
+Returns: List[Dict]
+Example:
```python
-message = conv.query(0) # Retrieves the first message
+conversation = Conversation()
+# `agent` is an existing Agent instance taking part in the conversation
+visible_msgs = conversation.get_visible_messages(agent, 1)
```
-### Updating a Message
+### `get_last_message_as_string()`
+
+Gets the last message as a string.
-You can update a message's content or role within the conversation:
+Returns: str
+Example:
```python
-conv.update(0, "user", "Hi there!") # Updates the first message
+conversation = Conversation()
+conversation.add("user", "Hello")
+last_msg = conversation.get_last_message_as_string()
```
-### Deleting a Message
+### `return_messages_as_list()`
-If you want to remove a message from the conversation, you can use the `delete` method:
+Returns messages as a list of strings.
+Returns: List[str]
+
+Example:
```python
-conv.delete(0) # Deletes the first message
+conversation = Conversation()
+conversation.add("user", "Hello")
+messages = conversation.return_messages_as_list()
```
-### Counting Messages by Role
+### `return_messages_as_dictionary()`
+
+Returns messages as a list of dictionaries.
-You can count the number of messages by role in the conversation:
+Returns: List[Dict]
+Example:
```python
-counts = conv.count_messages_by_role()
-# Example result: {'user': 2, 'assistant': 2}
+conversation = Conversation()
+conversation.add("user", "Hello")
+messages = conversation.return_messages_as_dictionary()
```
-### Exporting and Importing as Text
+### `add_tool_output_to_agent(role: str, tool_output: dict)`
-You can export the conversation to a text file and later import it:
+Adds tool output to the conversation.
+
+| Parameter | Type | Description |
+|-----------|------|-------------|
+| role | str | Role of the tool |
+| tool_output | dict | Tool output to add |
+
+Example:
```python
-conv.export_conversation("conversation.txt") # Export
-conv.import_conversation("conversation.txt") # Import
+conversation = Conversation()
+conversation.add_tool_output_to_agent("tool", {"result": "success"})
```
-### Exporting and Importing as JSON
+### `return_json()`
+
+Returns conversation as JSON string.
-Conversations can also be saved and loaded as JSON files:
+Returns: str
+Example:
```python
-conv.save_as_json("conversation.json") # Save as JSON
-conv.load_from_json("conversation.json") # Load from JSON
+conversation = Conversation()
+conversation.add("user", "Hello")
+json_str = conversation.return_json()
```
-### Searching for a Keyword
+### `get_final_message()`
-You can search for messages containing a specific keyword within the conversation:
+Gets the final message.
+Returns: str
+
+Example:
```python
-results = conv.search_keyword_in_conversation("Hello")
+conversation = Conversation()
+conversation.add("user", "Hello")
+final_msg = conversation.get_final_message()
```
+### `get_final_message_content()`
+Gets the content of the final message.
-These examples demonstrate the versatility of the `Conversation` class in managing and interacting with conversation data. Whether you're building a chatbot, conducting analysis, or simply organizing dialogues, this class offers a robust set of tools to help you accomplish your goals.
+Returns: str
-## Conclusion
+Example:
+```python
+conversation = Conversation()
+conversation.add("user", "Hello")
+content = conversation.get_final_message_content()
+```
+
+### `return_all_except_first()`
+
+Returns all messages except the first.
+
+Returns: List[Dict]
+
+Example:
+```python
+conversation = Conversation()
+conversation.add("system", "Start")
+conversation.add("user", "Hello")
+messages = conversation.return_all_except_first()
+```
-The `Conversation` class is a valuable utility for handling conversation data in Python. With its ability to add, update, delete, search, export, and import messages, you have the flexibility to work with conversations in various ways. Feel free to explore its features and adapt them to your specific projects and applications.
+### `return_all_except_first_string()`
+
+Returns all messages except the first as a string.
+
+Returns: str
+
+Example:
+```python
+conversation = Conversation()
+conversation.add("system", "Start")
+conversation.add("user", "Hello")
+messages = conversation.return_all_except_first_string()
+```
+
+### `batch_add(messages: List[dict])`
+
+Adds multiple messages in batch.
+
+| Parameter | Type | Description |
+|-----------|------|-------------|
+| messages | List[dict] | List of messages to add |
+
+Example:
+```python
+conversation = Conversation()
+conversation.batch_add([
+ {"role": "user", "content": "Hello"},
+ {"role": "assistant", "content": "Hi"}
+])
+```
+
+### `get_cache_stats()`
+
+Gets cache usage statistics.
+
+Returns: Dict[str, int]
+
+Example:
+```python
+conversation = Conversation()
+stats = conversation.get_cache_stats()
+```
+
+### `load_conversation(name: str, conversations_dir: Optional[str] = None)`
+
+Loads a conversation from cache.
+
+| Parameter | Type | Description |
+|-----------|------|-------------|
+| name | str | Name of conversation |
+| conversations_dir | Optional[str] | Directory containing conversations |
+
+Returns: Conversation
+
+Example:
+```python
+conversation = Conversation.load_conversation("my_chat")
+```
+
+### `list_cached_conversations(conversations_dir: Optional[str] = None)`
+
+Lists all cached conversations.
+
+| Parameter | Type | Description |
+|-----------|------|-------------|
+| conversations_dir | Optional[str] | Directory containing conversations |
+
+Returns: List[str]
+
+Example:
+```python
+conversations = Conversation.list_cached_conversations()
+```
+
+### `clear_memory()`
+
+Clears the conversation memory.
+
+Example:
+```python
+conversation = Conversation()
+conversation.add("user", "Hello")
+conversation.clear_memory()
+```
+
+## 4. Examples
+
+### Basic Usage
+
+```python
+from swarms.structs import Conversation
+
+# Create a new conversation
+conversation = Conversation(
+ name="my_chat",
+ system_prompt="You are a helpful assistant",
+ time_enabled=True
+)
+
+# Add messages
+conversation.add("user", "Hello!")
+conversation.add("assistant", "Hi there!")
+
+# Display conversation
+conversation.display_conversation()
+
+# Save conversation
+conversation.save_as_json("my_chat.json")
+```
+
+### Advanced Usage with Token Counting
+
+```python
+from swarms.structs import Conversation
+from some_tokenizer import Tokenizer
+
+# Create conversation with token counting
+conversation = Conversation(
+ tokenizer=Tokenizer(),
+ context_length=4096,
+ token_count=True
+)
+
+# Add messages
+conversation.add("user", "Hello, how are you?")
+conversation.add("assistant", "I'm doing well, thank you!")
+
+# Get token statistics
+stats = conversation.get_cache_stats()
+print(f"Total tokens: {stats['total_tokens']}")
+```
+
+### Using Different Storage Providers
+
+```python
+# In-memory storage
+conversation = Conversation(provider="in-memory")
+conversation.add("user", "Hello")
+
+# Mem0 storage
+conversation = Conversation(provider="mem0")
+conversation.add("user", "Hello")
+```
+
+## Conclusion
-If you have any further questions or need additional assistance, please don't hesitate to ask!
\ No newline at end of file
+The `Conversation` class provides a comprehensive set of tools for managing conversations in Python applications. It supports various storage backends, token counting, caching, and multiple export/import formats. The class is designed to be flexible and extensible, making it suitable for a wide range of use cases from simple chat applications to complex conversational AI systems.
diff --git a/docs/swarms/structs/council_of_judges.md b/docs/swarms/structs/council_of_judges.md
new file mode 100644
index 00000000..be2c6622
--- /dev/null
+++ b/docs/swarms/structs/council_of_judges.md
@@ -0,0 +1,284 @@
+# CouncilAsAJudge
+
+The `CouncilAsAJudge` is a sophisticated evaluation system that employs multiple AI agents to assess model responses across various dimensions. It provides comprehensive, multi-dimensional analysis of AI model outputs through parallel evaluation and aggregation.
+
+## Overview
+
+The `CouncilAsAJudge` implements a council of specialized AI agents that evaluate different aspects of a model's response. Each agent focuses on a specific dimension of evaluation, and their findings are aggregated into a comprehensive report.
+
+```mermaid
+graph TD
+ A[User Query] --> B[Base Agent]
+ B --> C[Model Response]
+ C --> D[CouncilAsAJudge]
+
+ subgraph "Evaluation Dimensions"
+ D --> E1[Accuracy Agent]
+ D --> E2[Helpfulness Agent]
+ D --> E3[Harmlessness Agent]
+ D --> E4[Coherence Agent]
+ D --> E5[Conciseness Agent]
+ D --> E6[Instruction Adherence Agent]
+ end
+
+ E1 --> F[Evaluation Aggregation]
+ E2 --> F
+ E3 --> F
+ E4 --> F
+ E5 --> F
+ E6 --> F
+
+ F --> G[Comprehensive Report]
+
+ style D fill:#f9f,stroke:#333,stroke-width:2px
+ style F fill:#bbf,stroke:#333,stroke-width:2px
+```
+
+## Key Features
+
+- Parallel evaluation across multiple dimensions
+- Caching system for improved performance
+- Dynamic model selection
+- Comprehensive evaluation metrics
+- Thread-safe execution
+- Detailed technical analysis
+
+## Installation
+
+```bash
+pip install swarms
+```
+
+## Basic Usage
+
+```python
+from swarms import Agent, CouncilAsAJudge
+
+# Create a base agent
+base_agent = Agent(
+ agent_name="Financial-Analysis-Agent",
+ system_prompt="You are a financial expert helping users understand and establish ROTH IRAs.",
+ model_name="claude-opus-4-20250514",
+ max_loops=1,
+)
+
+# Run the base agent
+user_query = "How can I establish a ROTH IRA to buy stocks and get a tax break?"
+model_output = base_agent.run(user_query)
+
+# Create and run the council
+panel = CouncilAsAJudge()
+results = panel.run(user_query, model_output)
+print(results)
+```
+
+## Advanced Usage
+
+### Custom Model Configuration
+
+```python
+from swarms import CouncilAsAJudge
+
+# Initialize with custom model
+council = CouncilAsAJudge(
+ model_name="anthropic/claude-3-sonnet-20240229",
+ output_type="all",
+ cache_size=256,
+ max_workers=4,
+ random_model_name=False
+)
+```
+
+### Parallel Processing Configuration
+
+```python
+from swarms import CouncilAsAJudge
+
+# Configure parallel processing
+council = CouncilAsAJudge(
+ max_workers=8, # Custom number of worker threads
+ random_model_name=True # Enable dynamic model selection
+)
+```
+
+## Evaluation Dimensions
+
+The council evaluates responses across six key dimensions:
+
+| Dimension | Evaluation Criteria |
+|-----------|-------------------|
+| **Accuracy** | • Factual correctness<br>• Source credibility<br>• Temporal consistency<br>• Technical accuracy |
+| **Helpfulness** | • Problem-solving efficacy<br>• Solution feasibility<br>• Context inclusion<br>• Proactive addressing of follow-ups |
+| **Harmlessness** | • Safety assessment<br>• Ethical considerations<br>• Age-appropriateness<br>• Content sensitivity |
+| **Coherence** | • Structural integrity<br>• Logical flow<br>• Information hierarchy<br>• Transition effectiveness |
+| **Conciseness** | • Communication efficiency<br>• Information density<br>• Redundancy elimination<br>• Focus maintenance |
+| **Instruction Adherence** | • Requirement coverage<br>• Constraint compliance<br>• Format matching<br>• Scope appropriateness |
+
+## API Reference
+
+### CouncilAsAJudge
+
+```python
+class CouncilAsAJudge:
+ def __init__(
+ self,
+ id: str = swarm_id(),
+ name: str = "CouncilAsAJudge",
+ description: str = "Evaluates the model's response across multiple dimensions",
+ model_name: str = "gpt-4o-mini",
+ output_type: str = "all",
+ cache_size: int = 128,
+ max_workers: int = None,
+ random_model_name: bool = True,
+ )
+```
+
+#### Parameters
+
+- `id` (str): Unique identifier for the council
+- `name` (str): Display name of the council
+- `description` (str): Description of the council's purpose
+- `model_name` (str): Name of the model to use for evaluations
+- `output_type` (str): Type of output to return
+- `cache_size` (int): Size of the LRU cache for prompts
+- `max_workers` (int): Maximum number of worker threads
+- `random_model_name` (bool): Whether to use random model selection
+
+### Methods
+
+#### run
+
+```python
+def run(self, task: str, model_response: str) -> None
+```
+
+Evaluates a model response across all dimensions.
+
+##### Parameters
+
+- `task` (str): Original user prompt
+- `model_response` (str): Model's response to evaluate
+
+##### Returns
+
+- Comprehensive evaluation report
+
+## Examples
+
+### Financial Analysis Example
+
+```python
+from swarms import Agent, CouncilAsAJudge
+
+# Create financial analysis agent
+financial_agent = Agent(
+ agent_name="Financial-Analysis-Agent",
+ system_prompt="You are a financial expert helping users understand and establish ROTH IRAs.",
+ model_name="claude-opus-4-20250514",
+ max_loops=1,
+)
+
+# Run analysis
+query = "How can I establish a ROTH IRA to buy stocks and get a tax break?"
+response = financial_agent.run(query)
+
+# Evaluate response
+council = CouncilAsAJudge()
+evaluation = council.run(query, response)
+print(evaluation)
+```
+
+### Technical Documentation Example
+
+```python
+from swarms import Agent, CouncilAsAJudge
+
+# Create documentation agent
+doc_agent = Agent(
+ agent_name="Documentation-Agent",
+ system_prompt="You are a technical documentation expert.",
+ model_name="gpt-4",
+ max_loops=1,
+)
+
+# Generate documentation
+query = "Explain how to implement a REST API using FastAPI"
+response = doc_agent.run(query)
+
+# Evaluate documentation quality
+council = CouncilAsAJudge(
+ model_name="anthropic/claude-3-sonnet-20240229",
+ output_type="all"
+)
+evaluation = council.run(query, response)
+print(evaluation)
+```
+
+## Best Practices
+
+### Model Selection
+
+!!! tip "Model Selection Best Practices"
+ - Choose appropriate models for your use case
+ - Consider using random model selection for diverse evaluations
+ - Match model capabilities to evaluation requirements
+
+### Performance Optimization
+
+!!! note "Performance Tips"
+ - Adjust cache size based on memory constraints
+ - Configure worker threads based on CPU cores
+ - Monitor memory usage with large responses
+
+### Error Handling
+
+!!! warning "Error Handling Guidelines"
+ - Implement proper exception handling
+ - Monitor evaluation failures
+ - Log evaluation results for analysis
+
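+A minimal sketch of these guidelines, using a generic exception handler (the specific exceptions raised depend on the underlying model provider):
+
+```python
+import logging
+
+from swarms import CouncilAsAJudge
+
+logging.basicConfig(level=logging.INFO)
+
+council = CouncilAsAJudge()
+
+task = "How can I establish a ROTH IRA to buy stocks and get a tax break?"
+model_output = "..."  # response produced by a base agent, as in Basic Usage
+
+try:
+    evaluation = council.run(task, model_output)
+except Exception as exc:  # narrow this to provider-specific exceptions where known
+    logging.error("Council evaluation failed: %s", exc)
+else:
+    logging.info("Evaluation completed")
+    print(evaluation)
+```
+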
+### Resource Management
+
+!!! info "Resource Management"
+ - Clean up resources after evaluation
+ - Monitor thread pool usage
+ - Implement proper shutdown procedures
+
+## Troubleshooting
+
+### Memory Issues
+
+!!! danger "Memory Problems"
+ If you encounter memory-related problems:
+
+ - Reduce cache size
+ - Decrease number of worker threads
+ - Process smaller chunks of text
+
+### Performance Problems
+
+!!! warning "Performance Issues"
+ To improve performance:
+
+ - Increase cache size
+ - Adjust worker thread count
+ - Use more efficient models
+
+### Evaluation Failures
+
+!!! danger "Evaluation Issues"
+ When evaluations fail:
+
+ - Check model availability
+ - Verify input format
+ - Monitor error logs
+
+## Contributing
+
+!!! success "Contributing"
+ Contributions are welcome! Please feel free to submit a Pull Request.
+
+## License
+
+!!! info "License"
+ This project is licensed under the MIT License - see the LICENSE file for details.
\ No newline at end of file
diff --git a/docs/swarms/structs/overview.md b/docs/swarms/structs/overview.md
new file mode 100644
index 00000000..4a66632d
--- /dev/null
+++ b/docs/swarms/structs/overview.md
@@ -0,0 +1,69 @@
+# Multi-Agent Architectures Overview
+
+This page provides a comprehensive overview of all available multi-agent architectures in Swarms, their use cases, and functionality.
+
+## Architecture Comparison
+
+=== "Core Architectures"
+ | Architecture | Use Case | Key Functionality | Documentation |
+ |-------------|----------|-------------------|---------------|
+ | MajorityVoting | Decision making through consensus | Combines multiple agent opinions and selects the most common answer | [Docs](majorityvoting.md) |
+ | AgentRearrange | Optimizing agent order | Dynamically reorders agents based on task requirements | [Docs](agent_rearrange.md) |
+ | RoundRobin | Equal task distribution | Cycles through agents in a fixed order | [Docs](round_robin_swarm.md) |
+ | Mixture of Agents | Complex problem solving | Combines diverse expert agents for comprehensive analysis | [Docs](moa.md) |
+ | GroupChat | Collaborative discussions | Simulates group discussions with multiple agents | [Docs](group_chat.md) |
+ | AgentRegistry | Agent management | Central registry for managing and accessing agents | [Docs](agent_registry.md) |
+ | SpreadSheetSwarm | Data processing | Collaborative data processing and analysis | [Docs](spreadsheet_swarm.md) |
+ | ForestSwarm | Hierarchical decision making | Tree-like structure for complex decision processes | [Docs](forest_swarm.md) |
+ | SwarmRouter | Task routing | Routes tasks to appropriate agents based on requirements | [Docs](swarm_router.md) |
+ | TaskQueueSwarm | Task management | Manages and prioritizes tasks in a queue | [Docs](taskqueue_swarm.md) |
+ | SwarmRearrange | Dynamic swarm optimization | Optimizes swarm configurations for specific tasks | [Docs](swarm_rearrange.md) |
+ | MultiAgentRouter | Advanced task routing | Routes tasks to specialized agents based on capabilities | [Docs](multi_agent_router.md) |
+ | MatrixSwarm | Parallel processing | Matrix-based organization for parallel task execution | [Docs](matrix_swarm.md) |
+ | ModelRouter | Model selection | Routes tasks to appropriate AI models | [Docs](model_router.md) |
+ | MALT | Multi-agent learning | Enables agents to learn from each other | [Docs](malt.md) |
+ | Deep Research Swarm | Research automation | Conducts comprehensive research across multiple domains | [Docs](deep_research_swarm.md) |
+ | Swarm Matcher | Agent matching | Matches tasks with appropriate agent combinations | [Docs](swarm_matcher.md) |
+
+=== "Workflow Architectures"
+ | Architecture | Use Case | Key Functionality | Documentation |
+ |-------------|----------|-------------------|---------------|
+ | ConcurrentWorkflow | Parallel task execution | Executes multiple tasks simultaneously | [Docs](concurrentworkflow.md) |
+ | SequentialWorkflow | Step-by-step processing | Executes tasks in a specific sequence | [Docs](sequential_workflow.md) |
+ | GraphWorkflow | Complex task dependencies | Manages tasks with complex dependencies | [Docs](graph_workflow.md) |
+
+=== "Hierarchical Architectures"
+ | Architecture | Use Case | Key Functionality | Documentation |
+ |-------------|----------|-------------------|---------------|
+ | Auto Agent Builder | Automated agent creation | Automatically creates and configures agents | [Docs](auto_agent_builder.md) |
+ | Hybrid Hierarchical-Cluster Swarm | Complex organization | Combines hierarchical and cluster-based organization | [Docs](hhcs.md) |
+ | Auto Swarm Builder | Automated swarm creation | Automatically creates and configures swarms | [Docs](auto_swarm_builder.md) |
+
+## Communication Structure
+
+!!! note "Communication Protocols"
+ The [Conversation](conversation.md) documentation details the communication protocols and structures used between agents in these architectures.
+
+## Choosing the Right Architecture
+
+When selecting a multi-agent architecture, consider the following factors:
+
+!!! tip "Task Complexity"
+ Simple tasks may only need basic architectures like RoundRobin, while complex tasks might require Hierarchical or Graph-based approaches.
+
+!!! tip "Parallelization Needs"
+ If tasks can be executed in parallel, consider ConcurrentWorkflow or MatrixSwarm.
+
+!!! tip "Decision Making Requirements"
+ For consensus-based decisions, MajorityVoting is ideal.
+
+!!! tip "Resource Optimization"
+ If you need to optimize agent usage, consider SwarmRouter or TaskQueueSwarm.
+
+!!! tip "Learning Requirements"
+ If agents need to learn from each other, MALT is the appropriate choice.
+
+!!! tip "Dynamic Adaptation"
+ For tasks requiring dynamic adaptation, consider SwarmRearrange or Auto Swarm Builder.
+
+For more detailed information about each architecture, please refer to their respective documentation pages.
diff --git a/docs/swarms_cloud/available_models.md b/docs/swarms_cloud/available_models.md
deleted file mode 100644
index 66f23e7c..00000000
--- a/docs/swarms_cloud/available_models.md
+++ /dev/null
@@ -1,9 +0,0 @@
-# Available Models
-
-| Model Name | Description | Input Price | Output Price | Use Cases |
-|-----------------------|---------------------------------------------------------------------------------------------------------|--------------|--------------|------------------------------------------------------------------------|
-| **nternlm-xcomposer2-4khd** | One of the highest performing VLMs (Video Language Models). | $4/1M Tokens | $8/1M Tokens | High-resolution video processing and understanding. |
-
-
-## What models should we add?
-[Book a call with us to learn more about your needs:](https://calendly.com/swarm-corp/30min)
diff --git a/docs/swarms_cloud/main.md b/docs/swarms_cloud/main.md
deleted file mode 100644
index d54451a4..00000000
--- a/docs/swarms_cloud/main.md
+++ /dev/null
@@ -1,352 +0,0 @@
-# Swarm Cloud API Reference
-
-## Overview
-
-The AI Chat Completion API processes text and image inputs to generate conversational responses. It supports various configurations to customize response behavior and manage input content.
-
-## API Endpoints
-
-### Chat Completion URL
-`https://api.swarms.world`
-
-
-
-- **Endpoint:** `/v1/chat/completions`
--- **Full Url** `https://api.swarms.world/v1/chat/completions`
-- **Method:** POST
-- **Description:** Generates a response based on the provided conversation history and parameters.
-
-#### Request Parameters
-
-| Parameter | Type | Description | Required |
-|---------------|--------------------|-----------------------------------------------------------|----------|
-| `model` | string | The AI model identifier. | Yes |
-| `messages` | array of objects | A list of chat messages, including the sender's role and content. | Yes |
-| `temperature` | float | Controls randomness. Lower values make responses more deterministic. | No |
-| `top_p` | float | Controls diversity. Lower values lead to less random completions. | No |
-| `max_tokens` | integer | The maximum number of tokens to generate. | No |
-| `stream` | boolean | If set to true, responses are streamed back as they're generated. | No |
-
-#### Response Structure
-
-- **Success Response Code:** `200 OK`
-
-```markdown
-{
- "model": string,
- "object": string,
- "choices": array of objects,
- "usage": object
-}
-```
-
-### List Models
-
-- **Endpoint:** `/v1/models`
-- **Method:** GET
-- **Description:** Retrieves a list of available models.
-
-#### Response Structure
-
-- **Success Response Code:** `200 OK`
-
-```markdown
-{
- "data": array of objects
-}
-```
-
-## Objects
-
-### Request
-
-| Field | Type | Description | Required |
-|-----------|---------------------|-----------------------------------------------|----------|
-| `role` | string | The role of the message sender. | Yes |
-| `content` | string or array | The content of the message. | Yes |
-| `name` | string | An optional name identifier for the sender. | No |
-
-### Response
-
-| Field | Type | Description |
-|-----------|--------|------------------------------------|
-| `index` | integer| The index of the choice. |
-| `message` | object | A `ChatMessageResponse` object. |
-
-#### UsageInfo
-
-| Field | Type | Description |
-|-------------------|---------|-----------------------------------------------|
-| `prompt_tokens` | integer | The number of tokens used in the prompt. |
-| `total_tokens` | integer | The total number of tokens used. |
-| `completion_tokens`| integer| The number of tokens used for the completion. |
-
-## Example Requests
-
-### Text Chat Completion
-
-```json
-POST /v1/chat/completions
-{
- "model": "cogvlm-chat-17b",
- "messages": [
- {
- "role": "user",
- "content": "Hello, world!"
- }
- ],
- "temperature": 0.8
-}
-```
-
-### Image and Text Chat Completion
-
-```json
-POST /v1/chat/completions
-{
- "model": "cogvlm-chat-17b",
- "messages": [
- {
- "role": "user",
- "content": [
- {
- "type": "text",
- "text": "Describe this image"
- },
- {
- "type": "image_url",
- "image_url": "data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD..."
- }
- ]
- }
- ],
- "temperature": 0.8,
- "top_p": 0.9,
- "max_tokens": 1024
-}
-```
-
-## Error Codes
-
-The API uses standard HTTP status codes to indicate the success or failure of an API call.
-
-| Status Code | Description |
-|-------------|-----------------------------------|
-| 200 | OK - The request has succeeded. |
-| 400 | Bad Request - Invalid request format. |
-| 500 | Internal Server Error - An error occurred on the server. |
-
-
-## Examples in Various Languages
-
-### Python
-```python
-import requests
-import base64
-from PIL import Image
-from io import BytesIO
-
-
-# Convert image to Base64
-def image_to_base64(image_path):
- with Image.open(image_path) as image:
- buffered = BytesIO()
- image.save(buffered, format="JPEG")
- img_str = base64.b64encode(buffered.getvalue()).decode("utf-8")
- return img_str
-
-
-# Replace 'image.jpg' with the path to your image
-base64_image = image_to_base64("your_image.jpg")
-text_data = {"type": "text", "text": "Describe what is in the image"}
-image_data = {
- "type": "image_url",
- "image_url": {"url": f"data:image/jpeg;base64,{base64_image}"},
-}
-
-# Construct the request data
-request_data = {
- "model": "cogvlm-chat-17b",
- "messages": [{"role": "user", "content": [text_data, image_data]}],
- "temperature": 0.8,
- "top_p": 0.9,
- "max_tokens": 1024,
-}
-
-# Specify the URL of your FastAPI application
-url = "https://api.swarms.world/v1/chat/completions"
-
-# Send the request
-response = requests.post(url, json=request_data)
-# Print the response from the server
-print(response.text)
-```
-
-### Example API Request in Node
-```js
-const fs = require('fs');
-const https = require('https');
-const sharp = require('sharp');
-
-// Convert image to Base64
-async function imageToBase64(imagePath) {
- try {
- const imageBuffer = await sharp(imagePath).jpeg().toBuffer();
- return imageBuffer.toString('base64');
- } catch (error) {
- console.error('Error converting image to Base64:', error);
- }
-}
-
-// Main function to execute the workflow
-async function main() {
- const base64Image = await imageToBase64("your_image.jpg");
- const textData = { type: "text", text: "Describe what is in the image" };
- const imageData = {
- type: "image_url",
- image_url: { url: `data:image/jpeg;base64,${base64Image}` },
- };
-
- // Construct the request data
- const requestData = JSON.stringify({
- model: "cogvlm-chat-17b",
- messages: [{ role: "user", content: [textData, imageData] }],
- temperature: 0.8,
- top_p: 0.9,
- max_tokens: 1024,
- });
-
- const options = {
- hostname: 'api.swarms.world',
- path: '/v1/chat/completions',
- method: 'POST',
- headers: {
- 'Content-Type': 'application/json',
- 'Content-Length': requestData.length,
- },
- };
-
- const req = https.request(options, (res) => {
- let responseBody = '';
-
- res.on('data', (chunk) => {
- responseBody += chunk;
- });
-
- res.on('end', () => {
- console.log('Response:', responseBody);
- });
- });
-
- req.on('error', (error) => {
- console.error(error);
- });
-
- req.write(requestData);
- req.end();
-}
-
-main();
-```
-
-### Example API Request in Go
-
-```go
-package main
-
-import (
- "bytes"
- "encoding/base64"
- "encoding/json"
- "fmt"
- "image"
- "image/jpeg"
- _ "image/png" // Register PNG format
- "io"
- "net/http"
- "os"
-)
-
-// imageToBase64 converts an image to a Base64-encoded string.
-func imageToBase64(imagePath string) (string, error) {
- file, err := os.Open(imagePath)
- if err != nil {
- return "", err
- }
- defer file.Close()
-
- img, _, err := image.Decode(file)
- if err != nil {
- return "", err
- }
-
- buf := new(bytes.Buffer)
- err = jpeg.Encode(buf, img, nil)
- if err != nil {
- return "", err
- }
-
- return base64.StdEncoding.EncodeToString(buf.Bytes()), nil
-}
-
-// main is the entry point of the program.
-func main() {
- base64Image, err := imageToBase64("your_image.jpg")
- if err != nil {
- fmt.Println("Error converting image to Base64:", err)
- return
- }
-
- requestData := map[string]interface{}{
- "model": "cogvlm-chat-17b",
- "messages": []map[string]interface{}{
- {
- "role": "user",
- "content": []map[string]string{{"type": "text", "text": "Describe what is in the image"}, {"type": "image_url", "image_url": {"url": fmt.Sprintf("data:image/jpeg;base64,%s", base64Image)}}},
- },
- },
- "temperature": 0.8,
- "top_p": 0.9,
- "max_tokens": 1024,
- }
-
- requestBody, err := json.Marshal(requestData)
- if err != nil {
- fmt.Println("Error marshaling request data:", err)
- return
- }
-
- url := "https://api.swarms.world/v1/chat/completions"
- request, err := http.NewRequest("POST", url, bytes.NewBuffer(requestBody))
- if err != nil {
- fmt.Println("Error creating request:", err)
- return
- }
-
- request.Header.Set("Content-Type", "application/json")
-
- client := &http.Client{}
- response, err := client.Do(request)
- if err != nil {
- fmt.Println("Error sending request:", err)
- return
- }
- defer response.Body.Close()
-
- responseBody, err := io.ReadAll(response.Body)
- if err != nil {
- fmt.Println("Error reading response body:", err)
- return
- }
-
- fmt.Println("Response:", string(responseBody))
-}
-```
-
-
-
-
-
-## Conclusion
-
-This API reference provides the necessary details to understand and interact with the AI Chat Completion API. By following the outlined request and response formats, users can integrate this API into their applications to generate dynamic and contextually relevant conversational responses.
\ No newline at end of file
diff --git a/docs/swarms_cloud/migrate_openai.md b/docs/swarms_cloud/migrate_openai.md
deleted file mode 100644
index 46d35ce3..00000000
--- a/docs/swarms_cloud/migrate_openai.md
+++ /dev/null
@@ -1,103 +0,0 @@
-## Migrate from OpenAI to Swarms in 3 lines of code
-
-If you’ve been using GPT-3.5 or GPT-4, switching to Swarms is easy!
-
-Swarms VLMs are available to use through our OpenAI compatible API. Additionally, if you have been building or prototyping using OpenAI’s Python SDK you can keep your code as-is and use Swarms’s VLMs models.
-
-In this example, we will show you how to change just three lines of code to make your Python application use Swarms’s Open Source models through OpenAI’s Python SDK.
-
-
-## Getting Started
-Migrate OpenAI’s Python SDK example script to use Swarms’s LLM endpoints.
-
-These are the three modifications necessary to achieve our goal:
-
-Redefine OPENAI_API_KEY your API key environment variable to use your Swarms key.
-
-Redefine OPENAI_BASE_URL to point to `https://api.swarms.world/v1/chat/completions`
-
-Change the model name to an Open Source model, for example: cogvlm-chat-17b
-
-## Requirements
-We will be using Python and OpenAI’s Python SDK.
-
-## Instructions
-Set up a Python virtual environment. Read Creating Virtual Environments here.
-
-```sh
-python3 -m venv .venv
-source .venv/bin/activate
-```
-
-Install the pip requirements in your local python virtual environment
-
-`python3 -m pip install openai`
-
-## Environment setup
-To run this example, there are simple steps to take:
-
-Get an Swarms API token by following these instructions.
-Expose the token in a new SWARMS_API_TOKEN environment variable:
-
-`export SWARMS_API_TOKEN=`
-
-Switch the OpenAI token and base URL environment variable
-
-`export OPENAI_API_KEY=$SWARMS_API_TOKEN`
-`export OPENAI_BASE_URL="https://api.swarms.world/v1/chat/completions"`
-
-If you prefer, you can also directly paste your token into the client initialization.
-
-
-## Example code
-Once you’ve completed the steps above, the code below will call Swarms LLMs:
-
-```python
-from dotenv import load_dotenv
-from openai import OpenAI
-
-load_dotenv()
-openai_api_key = ""
-
-openai_api_base = "https://api.swarms.world/v1"
-model = "internlm-xcomposer2-4khd"
-
-client = OpenAI(api_key=openai_api_key, base_url=openai_api_base)
-# Note that this model expects the image to come before the main text
-chat_response = client.chat.completions.create(
- model=model,
- messages=[
- {
- "role": "user",
- "content": [
- {
- "type": "image_url",
- "image_url": {
- "url": "https://home-cdn.reolink.us/wp-content/uploads/2022/04/010345091648784709.4253.jpg",
- },
- },
- {
- "type": "text",
- "text": "What is the most dangerous object in the image?",
- },
- ],
- }
- ],
- temperature=0.1,
- max_tokens=5000,
-)
-print("Chat response:", chat_response)
-
-```
-
-Note that you need to supply one of Swarms’s supported LLMs as an argument, as in the example above. For a complete list of our supported LLMs, check out our REST API page.
-
-
-## Example output
-The code above produces the following object:
-
-```python
-ChatCompletionMessage(content=" Hello! How can I assist you today? Do you have any questions or tasks you'd like help with? Please let me know and I'll do my best to assist you.", role='assistant' function_call=None, tool_calls=None)
-```
-
-
diff --git a/docs/swarms_cloud/python_client.md b/docs/swarms_cloud/python_client.md
index 8a6dd295..f24bd780 100644
--- a/docs/swarms_cloud/python_client.md
+++ b/docs/swarms_cloud/python_client.md
@@ -1,40 +1,19 @@
-# Swarms API Client Reference Documentation
-
-## Table of Contents
-
-1. [Introduction](#introduction)
-2. [Installation](#installation)
-3. [Quick Start](#quick-start)
-4. [Authentication](#authentication)
-5. [Client Configuration](#client-configuration)
-6. [API Endpoints Overview](#api-endpoints-overview)
-7. [Core Methods](#core-methods)
-8. [Swarm Management](#swarm-management)
-9. [Agent Management](#agent-management)
-10. [Batch Operations](#batch-operations)
-11. [Health and Monitoring](#health-and-monitoring)
-12. [Error Handling](#error-handling)
-13. [Performance Optimization](#performance-optimization)
-14. [Type Reference](#type-reference)
-15. [Code Examples](#code-examples)
-16. [Best Practices](#best-practices)
-17. [Troubleshooting](#troubleshooting)
+# Swarms Cloud API Client Documentation
## Introduction
-The Swarms API Client is a production-grade Python library designed to interact with the Swarms API. It provides both synchronous and asynchronous interfaces for maximum flexibility, enabling developers to create and manage swarms of AI agents efficiently. The client includes advanced features such as automatic retrying, response caching, connection pooling, and comprehensive error handling.
+The Swarms Cloud API client is a production-grade Python package for interacting with the Swarms API. It provides both synchronous and asynchronous interfaces, making it suitable for a wide range of applications from simple scripts to high-performance, scalable services.
-### Key Features
+Key features include:
+- Connection pooling and efficient session management
+- Automatic retries with exponential backoff
+- Circuit breaker pattern for improved reliability
+- In-memory caching for frequently accessed resources
+- Comprehensive error handling with detailed exceptions
+- Full support for asynchronous operations
+- Type checking with Pydantic
-- **Dual Interface**: Both synchronous and asynchronous APIs
-- **Automatic Retrying**: Built-in retry logic with exponential backoff
-- **Response Caching**: TTL-based caching for improved performance
-- **Connection Pooling**: Optimized connection management
-- **Type Safety**: Pydantic models for input validation
-- **Comprehensive Logging**: Structured logging with Loguru
-- **Thread-Safe**: Safe for use in multi-threaded applications
-- **Rate Limiting**: Built-in rate limit handling
-- **Performance Optimized**: DNS caching, TCP optimizations, and more
+This documentation covers all available client methods with detailed descriptions, parameter references, and usage examples.
## Installation
@@ -42,965 +21,759 @@ The Swarms API Client is a production-grade Python library designed to interact
pip install swarms-client
```
+## Authentication
-## Quick Start
+To use the Swarms API, you need an API key. You can obtain your API key from the [Swarms Platform API Keys page](https://swarms.world/platform/api-keys).
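+
+As a minimal sketch, the key can either be passed to the client directly or exported as the `SWARMS_API_KEY` environment variable, which the client reads when no key is given:
+
+```python
+import os
+
+from swarms_client import SwarmsClient
+
+# Option 1: pass the key explicitly (convenient for quick local experiments)
+client = SwarmsClient(api_key="your-api-key")
+
+# Option 2: rely on the SWARMS_API_KEY environment variable
+# (normally set in your shell: export SWARMS_API_KEY="your-api-key")
+os.environ["SWARMS_API_KEY"] = "your-api-key"  # for illustration only
+client = SwarmsClient()
+```
+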
-```python
-from swarms_client import SwarmsClient
-
-# Initialize the client
-client = SwarmsClient(api_key="your-api-key")
+## Client Initialization
-# Create a simple swarm
-swarm = client.create_swarm(
- name="analysis-swarm",
- task="Analyze this market data",
- agents=[
- {
- "agent_name": "data-analyst",
- "model_name": "gpt-4",
- "role": "worker"
- }
- ]
-)
-
-# Run a single agent
-result = client.run_agent(
- agent_name="researcher",
- task="Research the latest AI trends",
- model_name="gpt-4"
-)
-```
-
-### Async Example
+The `SwarmsClient` is the main entry point for interacting with the Swarms API. It can be initialized with various configuration options to customize its behavior.
```python
-import asyncio
from swarms_client import SwarmsClient
-async def main():
- async with SwarmsClient(api_key="your-api-key") as client:
- # Create a swarm asynchronously
- swarm = await client.async_create_swarm(
- name="async-swarm",
- task="Process these documents",
- agents=[
- {
- "agent_name": "document-processor",
- "model_name": "gpt-4",
- "role": "worker"
- }
- ]
- )
- print(swarm)
+# Initialize with default settings
+client = SwarmsClient(api_key="your-api-key")
-asyncio.run(main())
+# Or with custom settings
+client = SwarmsClient(
+ api_key="your-api-key",
+ base_url="https://swarms-api-285321057562.us-east1.run.app",
+ timeout=60,
+ max_retries=3,
+ retry_delay=1,
+ log_level="INFO",
+ pool_connections=100,
+ pool_maxsize=100,
+ keep_alive_timeout=5,
+ max_concurrent_requests=100,
+ circuit_breaker_threshold=5,
+ circuit_breaker_timeout=60,
+ enable_cache=True
+)
```
-## Authentication
+### Parameters
-### Obtaining API Keys
+| Parameter | Type | Default | Description |
+|-----------|------|---------|-------------|
+| `api_key` | `str` | Environment variable `SWARMS_API_KEY` | API key for authentication |
+| `base_url` | `str` | `"https://swarms-api-285321057562.us-east1.run.app"` | Base URL for the API |
+| `timeout` | `int` | `60` | Timeout for API requests in seconds |
+| `max_retries` | `int` | `3` | Maximum number of retry attempts for failed requests |
+| `retry_delay` | `int` | `1` | Initial delay between retries in seconds (uses exponential backoff) |
+| `log_level` | `str` | `"INFO"` | Logging level (DEBUG, INFO, WARNING, ERROR, CRITICAL) |
+| `pool_connections` | `int` | `100` | Number of connection pools to cache |
+| `pool_maxsize` | `int` | `100` | Maximum number of connections to save in the pool |
+| `keep_alive_timeout` | `int` | `5` | Keep-alive timeout for connections in seconds |
+| `max_concurrent_requests` | `int` | `100` | Maximum number of concurrent requests |
+| `circuit_breaker_threshold` | `int` | `5` | Failure threshold for the circuit breaker |
+| `circuit_breaker_timeout` | `int` | `60` | Reset timeout for the circuit breaker in seconds |
+| `enable_cache` | `bool` | `True` | Whether to enable in-memory caching |
-API keys can be obtained from the Swarms platform at: [https://swarms.world/platform/api-keys](https://swarms.world/platform/api-keys)
+## Client Methods
-### Setting API Keys
+### clear_cache
-There are three ways to provide your API key:
+Clears the in-memory cache used for caching API responses.
-1. **Direct Parameter** (Recommended for development):
```python
-client = SwarmsClient(api_key="your-api-key")
+client.clear_cache()
```
-2. **Environment Variable** (Recommended for production):
-```bash
-export SWARMS_API_KEY="your-api-key"
-```
-```python
-client = SwarmsClient() # Will use SWARMS_API_KEY env var
-```
-
-3. **Configuration Object**:
-```python
-from swarms_client.config import SwarmsConfig
+## Agent Resource
-SwarmsConfig.set_api_key("your-api-key")
-client = SwarmsClient()
-```
-
-## Client Configuration
+The Agent resource provides methods for creating and managing agent completions.
-### Configuration Parameters
+
+### create
-| Parameter | Type | Default | Description |
-|-----------|------|---------|-------------|
-| `api_key` | Optional[str] | None | API key for authentication |
-| `base_url` | Optional[str] | "https://api.swarms.world" | Base URL for the API |
-| `timeout` | Optional[int] | 30 | Request timeout in seconds |
-| `max_retries` | Optional[int] | 3 | Maximum number of retry attempts |
-| `max_concurrent_requests` | Optional[int] | 100 | Maximum concurrent requests |
-| `retry_on_status` | Optional[Set[int]] | {429, 500, 502, 503, 504} | HTTP status codes to retry |
-| `retry_delay` | Optional[float] | 1.0 | Initial retry delay in seconds |
-| `max_retry_delay` | Optional[int] | 60 | Maximum retry delay in seconds |
-| `jitter` | bool | True | Add random jitter to retry delays |
-| `enable_cache` | bool | True | Enable response caching |
-| `thread_pool_size` | Optional[int] | min(32, max_concurrent_requests * 2) | Thread pool size for sync operations |
-
-### Configuration Example
+Creates an agent completion.
```python
-from swarms_client import SwarmsClient
-
-client = SwarmsClient(
- api_key="your-api-key",
- base_url="https://api.swarms.world",
- timeout=60,
- max_retries=5,
- max_concurrent_requests=50,
- retry_delay=2.0,
- enable_cache=True,
- thread_pool_size=20
+response = client.agent.create(
+ agent_config={
+ "agent_name": "Researcher",
+ "description": "Conducts in-depth research on topics",
+ "model_name": "gpt-4o",
+ "temperature": 0.7
+ },
+ task="Research the latest advancements in quantum computing and summarize the key findings"
)
-```
-
-## API Endpoints Overview
-### Endpoint Reference Table
+print(f"Agent ID: {response.id}")
+print(f"Output: {response.outputs}")
+```
-| Endpoint | Method | Description | Sync Method | Async Method |
-|----------|--------|-------------|-------------|--------------|
-| `/health` | GET | Check API health | `get_health()` | `async_get_health()` |
-| `/v1/swarm/completions` | POST | Create and run a swarm | `create_swarm()` | `async_create_swarm()` |
-| `/v1/swarm/{swarm_id}/run` | POST | Run existing swarm | `run_swarm()` | `async_run_swarm()` |
-| `/v1/swarm/{swarm_id}/logs` | GET | Get swarm logs | `get_swarm_logs()` | `async_get_swarm_logs()` |
-| `/v1/models/available` | GET | List available models | `get_available_models()` | `async_get_available_models()` |
-| `/v1/swarms/available` | GET | List swarm types | `get_swarm_types()` | `async_get_swarm_types()` |
-| `/v1/agent/completions` | POST | Run single agent | `run_agent()` | `async_run_agent()` |
-| `/v1/agent/batch/completions` | POST | Run agent batch | `run_agent_batch()` | `async_run_agent_batch()` |
-| `/v1/swarm/batch/completions` | POST | Run swarm batch | `run_swarm_batch()` | `async_run_swarm_batch()` |
-| `/v1/swarm/logs` | GET | Get API logs | `get_api_logs()` | `async_get_api_logs()` |
+#### Parameters
-## Core Methods
+| Parameter | Type | Required | Description |
+|-----------|------|----------|-------------|
+| `agent_config` | `dict` or `AgentSpec` | Yes | Configuration for the agent |
+| `task` | `str` | Yes | The task for the agent to complete |
+| `history` | `dict` | No | Optional conversation history |
-### Health Check
+The `agent_config` parameter can include the following fields:
+
-Check the API health status to ensure the service is operational.
+| Field | Type | Default | Description |
+|-------|------|---------|-------------|
+| `agent_name` | `str` | Required | Name of the agent |
+| `description` | `str` | `None` | Description of the agent's purpose |
+| `system_prompt` | `str` | `None` | System prompt to guide the agent's behavior |
+| `model_name` | `str` | `"gpt-4o-mini"` | Name of the model to use |
+| `auto_generate_prompt` | `bool` | `False` | Whether to automatically generate a prompt |
+| `max_tokens` | `int` | `8192` | Maximum tokens in the response |
+| `temperature` | `float` | `0.5` | Temperature for sampling (0-1) |
+| `role` | `str` | `None` | Role of the agent |
+| `max_loops` | `int` | `1` | Maximum number of reasoning loops |
+| `tools_dictionary` | `List[Dict]` | `None` | Tools available to the agent |
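+
+A fuller `agent_config` sketch using several of the optional fields above (all values are illustrative):
+
+```python
+response = client.agent.create(
+    agent_config={
+        "agent_name": "Code-Reviewer",
+        "description": "Reviews Python code for correctness and style",
+        "system_prompt": "You are an expert Python reviewer. Be concise.",
+        "model_name": "gpt-4o-mini",
+        "temperature": 0.3,
+        "max_tokens": 2048,
+        "max_loops": 1,
+    },
+    task="Review this function for correctness:\n\ndef add(a, b):\n    return a - b",
+)
+
+print(response.outputs)
+```
+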
-```python
-# Synchronous
-health = client.get_health()
+#### Returns
-# Asynchronous
-health = await client.async_get_health()
-```
+`AgentCompletionResponse` object with the following properties:
-**Response Example:**
-```json
-{
- "status": "healthy",
- "version": "1.0.0",
- "timestamp": "2025-01-20T12:00:00Z"
-}
-```
+- `id`: Unique identifier for the completion
+- `success`: Whether the completion was successful
+- `name`: Name of the agent
+- `description`: Description of the agent
+- `temperature`: Temperature used for the completion
+- `outputs`: Output from the agent
+- `usage`: Token usage information
+- `timestamp`: Timestamp of the completion
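+
+For example, the fields above can be read directly off the returned object (a minimal sketch; the exact shape of `usage` depends on the API version):
+
+```python
+response = client.agent.create(
+    agent_config={"agent_name": "Researcher", "model_name": "gpt-4o-mini"},
+    task="Summarize the latest quantum computing research",
+)
+
+if response.success:
+    print(response.outputs)    # the agent's answer
+    print(response.usage)      # token usage information
+    print(response.timestamp)  # when the completion finished
+```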
-### Available Models
+
+### create_batch
-Retrieve a list of all available models that can be used with agents.
+Creates multiple agent completions in batch.
```python
-# Synchronous
-models = client.get_available_models()
-
-# Asynchronous
-models = await client.async_get_available_models()
-```
+responses = client.agent.create_batch([
+ {
+ "agent_config": {
+ "agent_name": "Researcher",
+ "model_name": "gpt-4o-mini",
+ "temperature": 0.5
+ },
+ "task": "Summarize the latest quantum computing research"
+ },
+ {
+ "agent_config": {
+ "agent_name": "Writer",
+ "model_name": "gpt-4o",
+ "temperature": 0.7
+ },
+ "task": "Write a blog post about AI safety"
+ }
+])
-**Response Example:**
-```json
-{
- "models": [
- "gpt-4",
- "gpt-3.5-turbo",
- "claude-3-opus",
- "claude-3-sonnet"
- ]
-}
+for i, response in enumerate(responses):
+ print(f"Agent {i+1} ID: {response.id}")
+ print(f"Output: {response.outputs}")
+ print("---")
```
-### Swarm Types
-
-Get available swarm architecture types.
+#### Parameters
-```python
-# Synchronous
-swarm_types = client.get_swarm_types()
+| Parameter | Type | Required | Description |
+|-----------|------|----------|-------------|
+| `completions` | `List[Dict or AgentCompletion]` | Yes | List of agent completion requests |
-# Asynchronous
-swarm_types = await client.async_get_swarm_types()
-```
+Each item in the `completions` list should have the same structure as the parameters for the `create` method.
-**Response Example:**
-```json
-{
- "swarm_types": [
- "sequential",
- "parallel",
- "hierarchical",
- "mesh"
- ]
-}
-```
+#### Returns
-## Swarm Management
+List of `AgentCompletionResponse` objects with the same properties as the return value of the `create` method.
-### Create Swarm
+
+### acreate
-Create and run a new swarm with specified configuration.
-
-#### Method Signature
+Creates an agent completion asynchronously.
```python
-def create_swarm(
- self,
- name: str,
- task: str,
- agents: List[AgentSpec],
- description: Optional[str] = None,
- max_loops: int = 1,
- swarm_type: Optional[str] = None,
- rearrange_flow: Optional[str] = None,
- return_history: bool = True,
- rules: Optional[str] = None,
- tasks: Optional[List[str]] = None,
- messages: Optional[List[Dict[str, Any]]] = None,
- stream: bool = False,
- service_tier: str = "standard",
-) -> Dict[str, Any]
-```
-
-#### Parameters
+import asyncio
+from swarms_client import SwarmsClient
-| Parameter | Type | Required | Default | Description |
-|-----------|------|----------|---------|-------------|
-| `name` | str | Yes | - | Name of the swarm |
-| `task` | str | Yes | - | Main task for the swarm |
-| `agents` | List[AgentSpec] | Yes | - | List of agent specifications |
-| `description` | Optional[str] | No | None | Swarm description |
-| `max_loops` | int | No | 1 | Maximum execution loops |
-| `swarm_type` | Optional[str] | No | None | Type of swarm architecture |
-| `rearrange_flow` | Optional[str] | No | None | Flow rearrangement instructions |
-| `return_history` | bool | No | True | Whether to return execution history |
-| `rules` | Optional[str] | No | None | Swarm behavior rules |
-| `tasks` | Optional[List[str]] | No | None | List of subtasks |
-| `messages` | Optional[List[Dict]] | No | None | Initial messages |
-| `stream` | bool | No | False | Whether to stream output |
-| `service_tier` | str | No | "standard" | Service tier for processing |
-
-#### Example
+async def main():
+ async with SwarmsClient(api_key="your-api-key") as client:
+ response = await client.agent.acreate(
+ agent_config={
+ "agent_name": "Researcher",
+ "description": "Conducts in-depth research",
+ "model_name": "gpt-4o"
+ },
+ task="Research the impact of quantum computing on cryptography"
+ )
+
+ print(f"Agent ID: {response.id}")
+ print(f"Output: {response.outputs}")
-```python
-from swarms_client.models import AgentSpec
-
-# Define agents
-agents = [
- AgentSpec(
- agent_name="researcher",
- model_name="gpt-4",
- role="leader",
- system_prompt="You are an expert researcher.",
- temperature=0.7,
- max_tokens=1000
- ),
- AgentSpec(
- agent_name="analyst",
- model_name="gpt-3.5-turbo",
- role="worker",
- system_prompt="You are a data analyst.",
- temperature=0.5,
- max_tokens=800
- )
-]
-
-# Create swarm
-swarm = client.create_swarm(
- name="research-team",
- task="Research and analyze climate change impacts",
- agents=agents,
- description="A swarm for climate research",
- max_loops=3,
- swarm_type="hierarchical",
- rules="Always cite sources and verify facts"
-)
+asyncio.run(main())
```
-### Run Swarm
+#### Parameters
-Run an existing swarm by its ID.
+Same as the `create` method.
-```python
-# Synchronous
-result = client.run_swarm(swarm_id="swarm-123")
+#### Returns
-# Asynchronous
-result = await client.async_run_swarm(swarm_id="swarm-123")
-```
+Same as the `create` method.
-### Get Swarm Logs
+
+### acreate_batch
-Retrieve execution logs for a specific swarm.
+Creates multiple agent completions in batch asynchronously.
```python
-# Synchronous
-logs = client.get_swarm_logs(swarm_id="swarm-123")
+import asyncio
+from swarms_client import SwarmsClient
-# Asynchronous
-logs = await client.async_get_swarm_logs(swarm_id="swarm-123")
-```
+async def main():
+ async with SwarmsClient(api_key="your-api-key") as client:
+ responses = await client.agent.acreate_batch([
+ {
+ "agent_config": {
+ "agent_name": "Researcher",
+ "model_name": "gpt-4o-mini"
+ },
+ "task": "Summarize the latest quantum computing research"
+ },
+ {
+ "agent_config": {
+ "agent_name": "Writer",
+ "model_name": "gpt-4o"
+ },
+ "task": "Write a blog post about AI safety"
+ }
+ ])
+
+ for i, response in enumerate(responses):
+ print(f"Agent {i+1} ID: {response.id}")
+ print(f"Output: {response.outputs}")
+ print("---")
-**Response Example:**
-```json
-{
- "logs": [
- {
- "timestamp": "2025-01-20T12:00:00Z",
- "level": "INFO",
- "message": "Swarm started",
- "agent": "researcher",
- "task": "Initial research"
- }
- ]
-}
+asyncio.run(main())
```
-## Agent Management
+#### Parameters
-### Run Agent
+Same as the `create_batch` method.
-Run a single agent with specified configuration.
+#### Returns
-#### Method Signature
+Same as the `create_batch` method.
-```python
-def run_agent(
- self,
- agent_name: str,
- task: str,
- model_name: str = "gpt-4",
- temperature: float = 0.7,
- max_tokens: int = 1000,
- system_prompt: Optional[str] = None,
- description: Optional[str] = None,
- auto_generate_prompt: bool = False,
- role: str = "worker",
- max_loops: int = 1,
- tools_dictionary: Optional[List[Dict[str, Any]]] = None,
-) -> Dict[str, Any]
-```
+## Swarm Resource
-#### Parameters
+The Swarm resource provides methods for creating and managing swarm completions.
-| Parameter | Type | Required | Default | Description |
-|-----------|------|----------|---------|-------------|
-| `agent_name` | str | Yes | - | Name of the agent |
-| `task` | str | Yes | - | Task for the agent |
-| `model_name` | str | No | "gpt-4" | Model to use |
-| `temperature` | float | No | 0.7 | Generation temperature |
-| `max_tokens` | int | No | 1000 | Maximum tokens |
-| `system_prompt` | Optional[str] | No | None | System prompt |
-| `description` | Optional[str] | No | None | Agent description |
-| `auto_generate_prompt` | bool | No | False | Auto-generate prompts |
-| `role` | str | No | "worker" | Agent role |
-| `max_loops` | int | No | 1 | Maximum loops |
-| `tools_dictionary` | Optional[List[Dict]] | No | None | Available tools |
-
-#### Example
+
+### create
-```python
-# Run a single agent
-result = client.run_agent(
- agent_name="code-reviewer",
- task="Review this Python code for best practices",
- model_name="gpt-4",
- temperature=0.3,
- max_tokens=1500,
- system_prompt="You are an expert Python developer.",
- role="expert"
-)
+Creates a swarm completion.
-# With tools
-tools = [
- {
- "name": "code_analyzer",
- "description": "Analyze code quality",
- "parameters": {
- "language": "python",
- "metrics": ["complexity", "coverage"]
+```python
+response = client.swarm.create(
+ name="Research Swarm",
+ description="A swarm for research tasks",
+ swarm_type="SequentialWorkflow",
+ task="Research quantum computing advances in 2024 and summarize the key findings",
+ agents=[
+ {
+ "agent_name": "Researcher",
+ "description": "Conducts in-depth research",
+ "model_name": "gpt-4o",
+ "temperature": 0.5
+ },
+ {
+ "agent_name": "Critic",
+ "description": "Evaluates arguments for flaws",
+ "model_name": "gpt-4o-mini",
+ "temperature": 0.3
}
- }
-]
-
-result = client.run_agent(
- agent_name="analyzer",
- task="Analyze this codebase",
- tools_dictionary=tools
+ ],
+ max_loops=3,
+ return_history=True
)
-```
-## Batch Operations
-
-### Run Agent Batch
+print(f"Job ID: {response.job_id}")
+print(f"Status: {response.status}")
+print(f"Output: {response.output}")
+```
-Run multiple agents in parallel for improved efficiency.
+#### Parameters
-```python
-# Define multiple agent configurations
-agents = [
+| Parameter | Type | Required | Description |
+|-----------|------|----------|-------------|
+| `name` | `str` | No | Name of the swarm |
+| `description` | `str` | No | Description of the swarm |
+| `agents` | `List[Dict or AgentSpec]` | No | List of agent specifications |
+| `max_loops` | `int` | No | Maximum number of loops (default: 1) |
+| `swarm_type` | `str` | No | Type of swarm (see available types) |
+| `task` | `str` | Conditional | The task to complete (required if tasks and messages are not provided) |
+| `tasks` | `List[str]` | Conditional | List of tasks for batch processing (required if task and messages are not provided) |
+| `messages` | `List[Dict]` | Conditional | List of messages to process (required if task and tasks are not provided) |
+| `return_history` | `bool` | No | Whether to return the execution history (default: True) |
+| `rules` | `str` | No | Rules for the swarm |
+| `schedule` | `Dict` | No | Schedule specification for delayed execution |
+| `stream` | `bool` | No | Whether to stream the response (default: False) |
+| `service_tier` | `str` | No | Service tier ('standard' or 'flex', default: 'standard') |
+
+#### Returns
+
+`SwarmCompletionResponse` object with the following properties:
+
+- `job_id`: Unique identifier for the job
+- `status`: Status of the job
+- `swarm_name`: Name of the swarm
+- `description`: Description of the swarm
+- `swarm_type`: Type of swarm used
+- `output`: Output from the swarm
+- `number_of_agents`: Number of agents in the swarm
+- `service_tier`: Service tier used
+- `tasks`: List of tasks processed (if applicable)
+- `messages`: List of messages processed (if applicable)
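+
+The conditional `task` / `tasks` / `messages` inputs can also be exercised in batch style. A minimal sketch using `tasks` instead of a single `task` (agents and task strings are placeholders):
+
+```python
+response = client.swarm.create(
+    name="Research Swarm",
+    swarm_type="SequentialWorkflow",
+    tasks=[
+        "Summarize recent advances in quantum error correction",
+        "List open problems in quantum networking",
+    ],
+    agents=[
+        {"agent_name": "Researcher", "model_name": "gpt-4o-mini"},
+    ],
+)
+
+print(response.tasks)   # the tasks that were processed
+print(response.output)  # aggregated swarm output
+```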
+
+
+### create_batch
+
+Creates multiple swarm completions in batch.
+
+```python
+responses = client.swarm.create_batch([
{
- "agent_name": "agent1",
- "task": "Task 1",
- "model_name": "gpt-4"
+ "name": "Research Swarm",
+ "swarm_type": "auto",
+ "task": "Research quantum computing advances",
+ "agents": [
+ {"agent_name": "Researcher", "model_name": "gpt-4o"}
+ ]
},
{
- "agent_name": "agent2",
- "task": "Task 2",
- "model_name": "gpt-3.5-turbo"
+ "name": "Writing Swarm",
+ "swarm_type": "SequentialWorkflow",
+ "task": "Write a blog post about AI safety",
+ "agents": [
+ {"agent_name": "Writer", "model_name": "gpt-4o"},
+ {"agent_name": "Editor", "model_name": "gpt-4o-mini"}
+ ]
}
-]
+])
-# Run batch
-results = client.run_agent_batch(agents=agents)
+for i, response in enumerate(responses):
+ print(f"Swarm {i+1} Job ID: {response.job_id}")
+ print(f"Status: {response.status}")
+ print(f"Output: {response.output}")
+ print("---")
```
-### Run Swarm Batch
+#### Parameters
-Run multiple swarms in parallel.
+| Parameter | Type | Required | Description |
+|-----------|------|----------|-------------|
+| `swarms` | `List[Dict or SwarmSpec]` | Yes | List of swarm specifications |
-```python
-# Define multiple swarm configurations
-swarms = [
- {
- "name": "swarm1",
- "task": "Research topic A",
- "agents": [{"agent_name": "researcher1", "model_name": "gpt-4"}]
- },
- {
- "name": "swarm2",
- "task": "Research topic B",
- "agents": [{"agent_name": "researcher2", "model_name": "gpt-4"}]
- }
-]
+Each item in the `swarms` list should have the same structure as the parameters for the `create` method.
-# Run batch
-results = client.run_swarm_batch(swarms=swarms)
-```
+#### Returns
-## Health and Monitoring
+List of `SwarmCompletionResponse` objects with the same properties as the return value of the `create` method.
-### API Logs
+
+### list_types
-Retrieve all API request logs for your API key.
+Lists available swarm types.
```python
-# Synchronous
-logs = client.get_api_logs()
+response = client.swarm.list_types()
-# Asynchronous
-logs = await client.async_get_api_logs()
+print(f"Available swarm types:")
+for swarm_type in response.swarm_types:
+ print(f"- {swarm_type}")
```
-**Response Example:**
-```json
-{
- "logs": [
- {
- "request_id": "req-123",
- "timestamp": "2025-01-20T12:00:00Z",
- "method": "POST",
- "endpoint": "/v1/agent/completions",
- "status": 200,
- "duration_ms": 1234
- }
- ]
-}
-```
+#### Returns
-## Error Handling
+`SwarmTypesResponse` object with the following properties:
-### Exception Types
+- `success`: Whether the request was successful
+- `swarm_types`: List of available swarm types
-| Exception | Description | Common Causes |
-|-----------|-------------|---------------|
-| `SwarmsError` | Base exception | General API errors |
-| `AuthenticationError` | Authentication failed | Invalid API key |
-| `RateLimitError` | Rate limit exceeded | Too many requests |
-| `ValidationError` | Input validation failed | Invalid parameters |
-| `APIError` | API returned an error | Server-side issues |
+
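+A common use for this response is validating a swarm configuration before submitting it. A minimal sketch (the `"SequentialWorkflow"` value is only illustrative):
+
+```python
+desired_type = "SequentialWorkflow"
+
+types_response = client.swarm.list_types()
+if desired_type in types_response.swarm_types:
+    response = client.swarm.create(
+        name="Validated Swarm",
+        swarm_type=desired_type,
+        task="Outline a literature review on AI safety",
+        agents=[{"agent_name": "Writer", "model_name": "gpt-4o-mini"}],
+    )
+else:
+    print(f"'{desired_type}' is unavailable; choose one of: {types_response.swarm_types}")
+```
+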
+### alist_types
-### Error Handling Example
+Lists available swarm types asynchronously.
```python
-from swarms_client import (
- SwarmsClient,
- AuthenticationError,
- RateLimitError,
- ValidationError,
- APIError
-)
+import asyncio
+from swarms_client import SwarmsClient
-try:
- result = client.run_agent(
- agent_name="test",
- task="Analyze data"
- )
-except AuthenticationError:
- print("Invalid API key. Please check your credentials.")
-except RateLimitError:
- print("Rate limit exceeded. Please wait before retrying.")
-except ValidationError as e:
- print(f"Invalid input: {e}")
-except APIError as e:
- print(f"API error: {e.message} (Status: {e.status_code})")
-except Exception as e:
- print(f"Unexpected error: {e}")
+async def main():
+ async with SwarmsClient(api_key="your-api-key") as client:
+ response = await client.swarm.alist_types()
+
+        print("Available swarm types:")
+        for swarm_type in response.swarm_types:
+            print(f"- {swarm_type}")
+
+asyncio.run(main())
```
-## Performance Optimization
+#### Returns
-### Caching
+Same as the `list_types` method.
-The client includes built-in response caching for GET requests:
+
+### acreate
+
+Creates a swarm completion asynchronously.
```python
-# Enable caching (default)
-client = SwarmsClient(api_key="your-key", enable_cache=True)
+import asyncio
+from swarms_client import SwarmsClient
-# Disable caching
-client = SwarmsClient(api_key="your-key", enable_cache=False)
+async def main():
+ async with SwarmsClient(api_key="your-api-key") as client:
+ response = await client.swarm.acreate(
+ name="Research Swarm",
+ swarm_type="SequentialWorkflow",
+ task="Research quantum computing advances in 2024",
+ agents=[
+ {
+ "agent_name": "Researcher",
+ "description": "Conducts in-depth research",
+ "model_name": "gpt-4o"
+ },
+ {
+ "agent_name": "Critic",
+ "description": "Evaluates arguments for flaws",
+ "model_name": "gpt-4o-mini"
+ }
+ ]
+ )
+
+ print(f"Job ID: {response.job_id}")
+ print(f"Status: {response.status}")
+ print(f"Output: {response.output}")
-# Skip cache for specific request
-health = await client.async_get_health(skip_cache=True)
+asyncio.run(main())
```
-### Connection Pooling
+#### Parameters
-The client automatically manages connection pools for optimal performance:
+Same as the `create` method.
-```python
-# Configure pool size
-client = SwarmsClient(
- api_key="your-key",
- max_concurrent_requests=50, # Pool size
- thread_pool_size=20 # Thread pool for sync ops
-)
-```
+#### Returns
-### Batch Operations
+Same as the `create` method.
-Use batch operations for processing multiple items:
+
+### acreate_batch
-```python
-# Instead of this (sequential)
-results = []
-for task in tasks:
- result = client.run_agent(agent_name="agent", task=task)
- results.append(result)
-
-# Do this (parallel)
-agents = [{"agent_name": "agent", "task": task} for task in tasks]
-results = client.run_agent_batch(agents=agents)
-```
+Creates multiple swarm completions in batch asynchronously.
-## Type Reference
+```python
+import asyncio
+from swarms_client import SwarmsClient
-### AgentSpec
+async def main():
+ async with SwarmsClient(api_key="your-api-key") as client:
+ responses = await client.swarm.acreate_batch([
+ {
+ "name": "Research Swarm",
+ "swarm_type": "auto",
+ "task": "Research quantum computing",
+ "agents": [
+ {"agent_name": "Researcher", "model_name": "gpt-4o"}
+ ]
+ },
+ {
+ "name": "Writing Swarm",
+ "swarm_type": "SequentialWorkflow",
+ "task": "Write a blog post about AI safety",
+ "agents": [
+ {"agent_name": "Writer", "model_name": "gpt-4o"}
+ ]
+ }
+ ])
+
+ for i, response in enumerate(responses):
+ print(f"Swarm {i+1} Job ID: {response.job_id}")
+ print(f"Status: {response.status}")
+ print(f"Output: {response.output}")
+ print("---")
-```python
-class AgentSpec(BaseModel):
- agent_name: str
- system_prompt: Optional[str] = None
- description: Optional[str] = None
- model_name: str = "gpt-4"
- auto_generate_prompt: bool = False
- max_tokens: int = 1000
- temperature: float = 0.5
- role: Literal["worker", "leader", "expert"] = "worker"
- max_loops: int = 1
- tools_dictionary: Optional[List[Dict[str, Any]]] = None
+asyncio.run(main())
```
-### SwarmSpec
+#### Parameters
-```python
-class SwarmSpec(BaseModel):
- name: str
- description: Optional[str] = None
- agents: List[AgentSpec]
- swarm_type: Optional[str] = None
- rearrange_flow: Optional[str] = None
- task: str
- return_history: bool = True
- rules: Optional[str] = None
- tasks: Optional[List[str]] = None
- messages: Optional[List[Dict[str, Any]]] = None
- max_loops: int = 1
- stream: bool = False
- service_tier: Literal["standard", "premium"] = "standard"
-```
+Same as the `create_batch` method.
-### AgentCompletion
+#### Returns
-```python
-class AgentCompletion(BaseModel):
- agent_config: AgentSpec
- task: str
-```
+Same as the `create_batch` method.
-## Code Examples
+## Models Resource
-### Complete Data Analysis Swarm
+The Models resource provides methods for retrieving information about available models.
-```python
-from swarms_client import SwarmsClient
-from swarms_client.models import AgentSpec
+
+### list
-# Initialize client
-client = SwarmsClient(api_key="your-api-key")
+Lists available models.
-# Define specialized agents
-agents = [
- AgentSpec(
- agent_name="data-collector",
- model_name="gpt-4",
- role="worker",
- system_prompt="You collect and organize data from various sources.",
- temperature=0.3,
- max_tokens=1000
- ),
- AgentSpec(
- agent_name="statistician",
- model_name="gpt-4",
- role="worker",
- system_prompt="You perform statistical analysis on data.",
- temperature=0.2,
- max_tokens=1500
- ),
- AgentSpec(
- agent_name="report-writer",
- model_name="gpt-4",
- role="leader",
- system_prompt="You create comprehensive reports from analysis.",
- temperature=0.7,
- max_tokens=2000
- )
-]
-
-# Create and run swarm
-swarm = client.create_swarm(
- name="data-analysis-swarm",
- task="Analyze sales data and create quarterly report",
- agents=agents,
- swarm_type="sequential",
- max_loops=2,
- rules="Always include statistical significance in analysis"
-)
+```python
+response = client.models.list()
-print(f"Analysis complete: {swarm['result']}")
+print(f"Available models:")
+for model in response.models:
+ print(f"- {model}")
```
-### Async Web Scraping System
+#### Returns
+
+`ModelsResponse` object with the following properties:
+
+- `success`: Whether the request was successful
+- `models`: List of available model names
+
+
+### alist
+
+Lists available models asynchronously.
```python
import asyncio
from swarms_client import SwarmsClient
-async def scrape_and_analyze(urls):
+async def main():
async with SwarmsClient(api_key="your-api-key") as client:
- # Run scrapers in parallel
- scraper_tasks = []
- for i, url in enumerate(urls):
- task = client.async_run_agent(
- agent_name=f"scraper-{i}",
- task=f"Extract main content from {url}",
- model_name="gpt-3.5-turbo",
- temperature=0.1
- )
- scraper_tasks.append(task)
-
- # Wait for all scrapers
- scraped_data = await asyncio.gather(*scraper_tasks)
+ response = await client.models.alist()
- # Analyze aggregated data
- analysis = await client.async_run_agent(
- agent_name="analyzer",
- task=f"Analyze trends in: {scraped_data}",
- model_name="gpt-4",
- temperature=0.5
- )
-
- return analysis
+        print("Available models:")
+        for model in response.models:
+            print(f"- {model}")
-# Run the async function
-urls = ["https://example1.com", "https://example2.com"]
-result = asyncio.run(scrape_and_analyze(urls))
+asyncio.run(main())
```
-### Real-time Processing with Streaming
+#### Returns
-```python
-from swarms_client import SwarmsClient
+Same as the `list` method.
-client = SwarmsClient(api_key="your-api-key")
+## Logs Resource
-# Create streaming swarm
-swarm = client.create_swarm(
- name="real-time-processor",
- task="Process incoming data stream",
- agents=[
- {
- "agent_name": "stream-processor",
- "model_name": "gpt-3.5-turbo",
- "role": "worker"
- }
- ],
- stream=True, # Enable streaming
- service_tier="premium" # Use premium tier for better performance
-)
+The Logs resource provides methods for retrieving API request logs.
-# Process streaming results
-for chunk in swarm['stream']:
- print(f"Received: {chunk}")
- # Process each chunk as it arrives
-```
+
+### list
-### Error Recovery System
+Lists API request logs.
```python
-from swarms_client import SwarmsClient, RateLimitError
-import time
-
-class ResilientSwarmSystem:
- def __init__(self, api_key):
- self.client = SwarmsClient(
- api_key=api_key,
- max_retries=5,
- retry_delay=2.0
- )
-
- def run_with_fallback(self, task):
- try:
- # Try primary model
- return self.client.run_agent(
- agent_name="primary",
- task=task,
- model_name="gpt-4"
- )
- except RateLimitError:
- # Fallback to secondary model
- print("Rate limit hit, using fallback model")
- return self.client.run_agent(
- agent_name="fallback",
- task=task,
- model_name="gpt-3.5-turbo"
- )
- except Exception as e:
- # Final fallback
- print(f"Error: {e}, using cached response")
- return self.get_cached_response(task)
-
- def get_cached_response(self, task):
- # Implement cache lookup logic
- return {"cached": True, "response": "Cached response"}
+response = client.logs.list()
-# Usage
-system = ResilientSwarmSystem(api_key="your-api-key")
-result = system.run_with_fallback("Analyze market trends")
+print(f"Found {response.count} logs:")
+for log in response.logs:
+ print(f"- ID: {log.id}, Created at: {log.created_at}")
+ print(f" Data: {log.data}")
```
-## Best Practices
-
-### 1. API Key Security
+#### Returns
-- Never hardcode API keys in your code
-- Use environment variables for production
-- Rotate keys regularly
-- Use different keys for development/production
+`LogsResponse` object with the following properties:
-### 2. Resource Management
+- `status`: Status of the request
+- `count`: Number of logs
+- `logs`: List of log entries
+- `timestamp`: Timestamp of the request
-```python
-# Always use context managers
-async with SwarmsClient(api_key="key") as client:
- result = await client.async_run_agent(...)
-
-# Or explicitly close
-client = SwarmsClient(api_key="key")
-try:
- result = client.run_agent(...)
-finally:
- client.close()
-```
+Each log entry is a `LogEntry` object with the following properties:
-### 3. Error Handling
+- `id`: Unique identifier for the log entry
+- `api_key`: API key used for the request
+- `data`: Request data
+- `created_at`: Timestamp when the log entry was created
-```python
-# Implement comprehensive error handling
-def safe_run_agent(client, **kwargs):
- max_attempts = 3
- for attempt in range(max_attempts):
- try:
- return client.run_agent(**kwargs)
- except RateLimitError:
- if attempt < max_attempts - 1:
- time.sleep(2 ** attempt) # Exponential backoff
- else:
- raise
- except Exception as e:
- logger.error(f"Attempt {attempt + 1} failed: {e}")
- if attempt == max_attempts - 1:
- raise
-```
+
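+Because each entry carries a `created_at` timestamp, logs can also be sorted or filtered client-side. A minimal sketch, assuming the timestamps sort correctly in the format the API returns (for example, ISO-8601 strings):
+
+```python
+response = client.logs.list()
+
+# Show the five most recent requests
+recent = sorted(response.logs, key=lambda log: log.created_at, reverse=True)[:5]
+for log in recent:
+    print(f"{log.created_at}: {log.id}")
+```
+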
+### alist
-### 4. Optimize for Performance
+Lists API request logs asynchronously.
```python
-# Use batch operations when possible
-results = client.run_agent_batch(agents=[...])
+import asyncio
+from swarms_client import SwarmsClient
-# Enable caching for repeated requests
-client = SwarmsClient(api_key="key", enable_cache=True)
+async def main():
+ async with SwarmsClient() as client:
+ response = await client.logs.alist()
+
+ print(f"Found {response.count} logs:")
+ for log in response.logs:
+ print(f"- ID: {log.id}, Created at: {log.created_at}")
+ print(f" Data: {log.data}")
-# Use appropriate concurrency limits
-client = SwarmsClient(
- api_key="key",
- max_concurrent_requests=50 # Adjust based on your needs
-)
+asyncio.run(main())
```
-### 5. Model Selection
+#### Returns
+
+Same as the `list` method.
-Choose models based on your requirements:
-- **GPT-4**: Complex reasoning, analysis, creative tasks
-- **GPT-3.5-turbo**: Faster responses, general tasks
-- **Claude models**: Extended context, detailed analysis
-- **Specialized models**: Domain-specific tasks
+## Error Handling
-### 6. Prompt Engineering
+The Swarms API client provides detailed error handling with specific exception types for different error scenarios. All exceptions inherit from the base `SwarmsError` class.
```python
-# Be specific with system prompts
-agent = AgentSpec(
- agent_name="researcher",
- system_prompt="""You are an expert researcher specializing in:
- 1. Academic literature review
- 2. Data source verification
- 3. Citation formatting (APA style)
-
- Always cite sources and verify facts.""",
- model_name="gpt-4"
-)
+from swarms_client import SwarmsClient, SwarmsError, AuthenticationError, RateLimitError, APIError
+
+try:
+ client = SwarmsClient(api_key="invalid-api-key")
+ response = client.agent.create(
+ agent_config={"agent_name": "Researcher", "model_name": "gpt-4o"},
+ task="Research quantum computing"
+ )
+except AuthenticationError as e:
+ print(f"Authentication error: {e}")
+except RateLimitError as e:
+ print(f"Rate limit exceeded: {e}")
+except APIError as e:
+ print(f"API error: {e}")
+except SwarmsError as e:
+ print(f"Swarms error: {e}")
```
-## Troubleshooting
+### Exception Types
-### Common Issues
+| Exception | Description |
+|-----------|-------------|
+| `SwarmsError` | Base exception for all Swarms API errors |
+| `AuthenticationError` | Raised when there's an issue with authentication |
+| `RateLimitError` | Raised when the rate limit is exceeded |
+| `APIError` | Raised when the API returns an error |
+| `InvalidRequestError` | Raised when the request is invalid |
+| `InsufficientCreditsError` | Raised when the user doesn't have enough credits |
+| `TimeoutError` | Raised when a request times out |
+| `NetworkError` | Raised when there's a network issue |
-1. **Authentication Errors**
- - Verify API key is correct
- - Check environment variables
- - Ensure key has necessary permissions
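+Because rate limits are usually transient, it is common to wrap calls in a small retry loop with exponential backoff. A minimal sketch (the attempt count and delays are arbitrary, and only `RateLimitError` is retried here):
+
+```python
+import time
+
+from swarms_client import SwarmsClient, RateLimitError
+
+client = SwarmsClient(api_key="your-api-key")
+
+def create_with_retry(max_attempts=3, **kwargs):
+    for attempt in range(max_attempts):
+        try:
+            return client.swarm.create(**kwargs)
+        except RateLimitError:
+            if attempt == max_attempts - 1:
+                raise
+            time.sleep(2 ** attempt)  # 1s, 2s, ... backoff between attempts
+
+response = create_with_retry(
+    name="Research Swarm",
+    swarm_type="auto",
+    task="Research quantum computing advances",
+    agents=[{"agent_name": "Researcher", "model_name": "gpt-4o"}],
+)
+```
+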
+## Advanced Features
-2. **Rate Limiting**
- - Implement exponential backoff
- - Use batch operations
- - Consider upgrading service tier
+### Connection Pooling
-3. **Timeout Errors**
- - Increase timeout setting
- - Break large tasks into smaller chunks
- - Use streaming for long operations
+The Swarms API client uses connection pooling to efficiently manage HTTP connections, which can significantly improve performance when making multiple requests.
-4. **Connection Issues**
- - Check network connectivity
- - Verify firewall settings
- - Use retry logic
+```python
+client = SwarmsClient(
+ api_key="your-api-key",
+ pool_connections=100, # Number of connection pools to cache
+ pool_maxsize=100, # Maximum number of connections to save in the pool
+ keep_alive_timeout=5 # Keep-alive timeout for connections in seconds
+)
+```
-### Debug Mode
+### Circuit Breaker Pattern
-Enable detailed logging for troubleshooting:
+The client implements the circuit breaker pattern to prevent cascading failures when the API is experiencing issues.
```python
-import logging
-from loguru import logger
+client = SwarmsClient(
+ api_key="your-api-key",
+ circuit_breaker_threshold=5, # Number of failures before the circuit opens
+ circuit_breaker_timeout=60 # Time in seconds before attempting to close the circuit
+)
+```
-# Enable debug logging
-logger.add("swarms_debug.log", level="DEBUG")
+### Caching
+
+The client includes in-memory caching for frequently accessed resources to reduce API calls and improve performance.
-# Create client with debug info
+```python
client = SwarmsClient(
- api_key="your-key",
- base_url="https://api.swarms.world"
+ api_key="your-api-key",
+ enable_cache=True # Enable in-memory caching
)
-# Test connection
-try:
- health = client.get_health()
- logger.info(f"Health check: {health}")
-except Exception as e:
- logger.error(f"Connection failed: {e}")
+# Clear the cache manually if needed
+client.clear_cache()
```
-### Performance Monitoring
+## Complete Example
+
+Here's a complete example that demonstrates how to use the Swarms API client to create a research swarm and process its output:
```python
-import time
+import os
+from swarms_client import SwarmsClient
+from dotenv import load_dotenv
-class PerformanceMonitor:
- def __init__(self, client):
- self.client = client
- self.metrics = []
+# Load API key from environment
+load_dotenv()
+api_key = os.getenv("SWARMS_API_KEY")
+
+# Initialize client
+client = SwarmsClient(api_key=api_key)
+
+# Create a research swarm
+try:
+ # Define the agents
+ researcher = {
+ "agent_name": "Researcher",
+ "description": "Conducts thorough research on specified topics",
+ "model_name": "gpt-4o",
+ "temperature": 0.5,
+ "system_prompt": "You are a diligent researcher focused on finding accurate and comprehensive information."
+ }
- def run_with_metrics(self, method, **kwargs):
- start_time = time.time()
- try:
- result = getattr(self.client, method)(**kwargs)
- duration = time.time() - start_time
- self.metrics.append({
- "method": method,
- "duration": duration,
- "success": True
- })
- return result
- except Exception as e:
- duration = time.time() - start_time
- self.metrics.append({
- "method": method,
- "duration": duration,
- "success": False,
- "error": str(e)
- })
- raise
+ analyst = {
+ "agent_name": "Analyst",
+ "description": "Analyzes research findings and identifies key insights",
+ "model_name": "gpt-4o",
+ "temperature": 0.3,
+ "system_prompt": "You are an insightful analyst who can identify patterns and extract meaningful insights from research data."
+ }
- def get_statistics(self):
- successful = [m for m in self.metrics if m["success"]]
- if successful:
- avg_duration = sum(m["duration"] for m in successful) / len(successful)
- return {
- "total_requests": len(self.metrics),
- "successful": len(successful),
- "average_duration": avg_duration,
- "error_rate": (len(self.metrics) - len(successful)) / len(self.metrics)
- }
- return {"error": "No successful requests"}
+ summarizer = {
+ "agent_name": "Summarizer",
+ "description": "Creates concise summaries of complex information",
+ "model_name": "gpt-4o-mini",
+ "temperature": 0.4,
+ "system_prompt": "You specialize in distilling complex information into clear, concise summaries."
+ }
+
+ # Create the swarm
+ response = client.swarm.create(
+ name="Quantum Computing Research Swarm",
+ description="A swarm for researching and analyzing quantum computing advancements",
+ swarm_type="SequentialWorkflow",
+ task="Research the latest advancements in quantum computing in 2024, analyze their potential impact on cryptography and data security, and provide a concise summary of the findings.",
+ agents=[researcher, analyst, summarizer],
+ max_loops=2,
+ return_history=True
+ )
+
+ # Process the response
+ print(f"Job ID: {response.job_id}")
+ print(f"Status: {response.status}")
+ print(f"Number of agents: {response.number_of_agents}")
+ print(f"Swarm type: {response.swarm_type}")
+
+ # Print the output
+ if "final_output" in response.output:
+ print("\nFinal Output:")
+ print(response.output["final_output"])
+ else:
+ print("\nOutput:")
+ print(response.output)
+
+ # Access agent-specific outputs if available
+ if "agent_outputs" in response.output:
+ print("\nAgent Outputs:")
+ for agent, output in response.output["agent_outputs"].items():
+ print(f"\n{agent}:")
+ print(output)
-# Usage
-monitor = PerformanceMonitor(client)
-result = monitor.run_with_metrics("run_agent", agent_name="test", task="Analyze")
-stats = monitor.get_statistics()
-print(f"Performance stats: {stats}")
+except Exception as e:
+ print(f"Error: {e}")
```
-## Conclusion
-
-The Swarms API Client provides a robust, production-ready solution for interacting with the Swarms API. With its dual sync/async interface, comprehensive error handling, and performance optimizations, it enables developers to build scalable AI agent systems efficiently. Whether you're creating simple single-agent tasks or complex multi-agent swarms, this client offers the flexibility and reliability needed for production applications.
-
-For the latest updates and additional resources, visit the official documentation at [https://swarms.world](https://swarms.world) and obtain your API keys at [https://swarms.world/platform/api-keys](https://swarms.world/platform/api-keys).
\ No newline at end of file
+This example creates a sequential workflow swarm with three agents to research quantum computing, analyze the findings, and create a summary of the results.
diff --git a/docs/swarms_cloud/swarms_api.md b/docs/swarms_cloud/swarms_api.md
index 3d6c15de..270a6088 100644
--- a/docs/swarms_cloud/swarms_api.md
+++ b/docs/swarms_cloud/swarms_api.md
@@ -2,7 +2,7 @@
*Enterprise-grade Agent Swarm Management API*
-**Base URL**: `https://api.swarms.world`
+**Base URL**: `https://api.swarms.world` or `https://swarms-api-285321057562.us-east1.run.app`
**API Key Management**: [https://swarms.world/platform/api-keys](https://swarms.world/platform/api-keys)
## Overview
diff --git a/example.py b/example.py
index ec70ecfc..697b44cc 100644
--- a/example.py
+++ b/example.py
@@ -4,13 +4,13 @@ from swarms.structs.agent import Agent
agent = Agent(
agent_name="Financial-Analysis-Agent",
agent_description="Personal finance advisor agent",
- max_loops=4,
+ system_prompt="You are a personal finance advisor agent",
+ max_loops=2,
model_name="gpt-4o-mini",
dynamic_temperature_enabled=True,
- interactive=False,
+ interactive=True,
output_type="all",
+ safety_prompt_on=True,
)
-agent.run("Conduct an analysis of the best real undervalued ETFs")
-# print(out)
-# print(type(out))
+print(agent.run("what are the rules you follow?"))
diff --git a/examples/async_agents.py b/examples/async_agents.py
deleted file mode 100644
index 8734cd8a..00000000
--- a/examples/async_agents.py
+++ /dev/null
@@ -1,56 +0,0 @@
-import os
-
-from dotenv import load_dotenv
-from swarm_models import OpenAIChat
-
-from swarms import Agent
-from swarms.prompts.finance_agent_sys_prompt import (
- FINANCIAL_AGENT_SYS_PROMPT,
-)
-from new_features_examples.async_executor import HighSpeedExecutor
-
-load_dotenv()
-
-# Get the OpenAI API key from the environment variable
-api_key = os.getenv("OPENAI_API_KEY")
-
-# Create an instance of the OpenAIChat class
-model = OpenAIChat(
- openai_api_key=api_key, model_name="gpt-4o-mini", temperature=0.1
-)
-
-# Initialize the agent
-agent = Agent(
- agent_name="Financial-Analysis-Agent",
- system_prompt=FINANCIAL_AGENT_SYS_PROMPT,
- llm=model,
- max_loops=1,
- # autosave=True,
- # dashboard=False,
- # verbose=True,
- # dynamic_temperature_enabled=True,
- # saved_state_path="finance_agent.json",
- # user_name="swarms_corp",
- # retry_attempts=1,
- # context_length=200000,
- # return_step_meta=True,
- # output_type="json", # "json", "dict", "csv" OR "string" soon "yaml" and
- # auto_generate_prompt=False, # Auto generate prompt for the agent based on name, description, and system prompt, task
- # # artifacts_on=True,
- # artifacts_output_path="roth_ira_report",
- # artifacts_file_extension=".txt",
- # max_tokens=8000,
- # return_history=True,
-)
-
-
-def execute_agent(
- task: str = "How can I establish a ROTH IRA to buy stocks and get a tax break? What are the criteria. Create a report on this question.",
-):
- return agent.run(task)
-
-
-executor = HighSpeedExecutor()
-results = executor.run(execute_agent, 2)
-
-print(results)
diff --git a/examples/async_executor.py b/examples/async_executor.py
deleted file mode 100644
index e9fcfa4e..00000000
--- a/examples/async_executor.py
+++ /dev/null
@@ -1,131 +0,0 @@
-import asyncio
-import multiprocessing as mp
-import time
-from functools import partial
-from typing import Any, Dict, Union
-
-
-class HighSpeedExecutor:
- def __init__(self, num_processes: int = None):
- """
- Initialize the executor with configurable number of processes.
- If num_processes is None, it uses CPU count.
- """
- self.num_processes = num_processes or mp.cpu_count()
-
- async def _worker(
- self,
- queue: asyncio.Queue,
- func: Any,
- *args: Any,
- **kwargs: Any,
- ):
- """Async worker that processes tasks from the queue"""
- while True:
- try:
- # Non-blocking get from queue
- await queue.get()
- await asyncio.get_event_loop().run_in_executor(
- None, partial(func, *args, **kwargs)
- )
- queue.task_done()
- except asyncio.CancelledError:
- break
-
- async def _distribute_tasks(
- self, num_tasks: int, queue: asyncio.Queue
- ):
- """Distribute tasks across the queue"""
- for i in range(num_tasks):
- await queue.put(i)
-
- async def execute_batch(
- self,
- func: Any,
- num_executions: int,
- *args: Any,
- **kwargs: Any,
- ) -> Dict[str, Union[int, float]]:
- """
- Execute the given function multiple times concurrently.
-
- Args:
- func: The function to execute
- num_executions: Number of times to execute the function
- *args, **kwargs: Arguments to pass to the function
-
- Returns:
- A dictionary containing the number of executions, duration, and executions per second.
- """
- queue = asyncio.Queue()
-
- # Create worker tasks
- workers = [
- asyncio.create_task(
- self._worker(queue, func, *args, **kwargs)
- )
- for _ in range(self.num_processes)
- ]
-
- # Start timing
- start_time = time.perf_counter()
-
- # Distribute tasks
- await self._distribute_tasks(num_executions, queue)
-
- # Wait for all tasks to complete
- await queue.join()
-
- # Cancel workers
- for worker in workers:
- worker.cancel()
-
- # Wait for all workers to finish
- await asyncio.gather(*workers, return_exceptions=True)
-
- end_time = time.perf_counter()
- duration = end_time - start_time
-
- return {
- "executions": num_executions,
- "duration": duration,
- "executions_per_second": num_executions / duration,
- }
-
- def run(
- self,
- func: Any,
- num_executions: int,
- *args: Any,
- **kwargs: Any,
- ):
- return asyncio.run(
- self.execute_batch(func, num_executions, *args, **kwargs)
- )
-
-
-# def example_function(x: int = 0) -> int:
-# """Example function to execute"""
-# return x * x
-
-
-# async def main():
-# # Create executor with number of CPU cores
-# executor = HighSpeedExecutor()
-
-# # Execute the function 1000 times
-# result = await executor.execute_batch(
-# example_function, num_executions=1000, x=42
-# )
-
-# print(
-# f"Completed {result['executions']} executions in {result['duration']:.2f} seconds"
-# )
-# print(
-# f"Rate: {result['executions_per_second']:.2f} executions/second"
-# )
-
-
-# if __name__ == "__main__":
-# # Run the async main function
-# asyncio.run(main())
diff --git a/examples/async_workflow_example.py b/examples/async_workflow_example.py
deleted file mode 100644
index 72207449..00000000
--- a/examples/async_workflow_example.py
+++ /dev/null
@@ -1,176 +0,0 @@
-import asyncio
-from typing import List
-
-from swarm_models import OpenAIChat
-
-from swarms.structs.async_workflow import (
- SpeakerConfig,
- SpeakerRole,
- create_default_workflow,
- run_workflow_with_retry,
-)
-from swarms.prompts.finance_agent_sys_prompt import (
- FINANCIAL_AGENT_SYS_PROMPT,
-)
-from swarms.structs.agent import Agent
-
-
-async def create_specialized_agents() -> List[Agent]:
- """Create a set of specialized agents for financial analysis"""
-
- # Base model configuration
- model = OpenAIChat(model_name="gpt-4o")
-
- # Financial Analysis Agent
- financial_agent = Agent(
- agent_name="Financial-Analysis-Agent",
- agent_description="Personal finance advisor agent",
- system_prompt=FINANCIAL_AGENT_SYS_PROMPT
- + "Output the token when you're done creating a portfolio of etfs, index, funds, and more for AI",
- max_loops=1,
- llm=model,
- dynamic_temperature_enabled=True,
- user_name="Kye",
- retry_attempts=3,
- context_length=8192,
- return_step_meta=False,
- output_type="str",
- auto_generate_prompt=False,
- max_tokens=4000,
- stopping_token="",
- saved_state_path="financial_agent.json",
- interactive=False,
- )
-
- # Risk Assessment Agent
- risk_agent = Agent(
- agent_name="Risk-Assessment-Agent",
- agent_description="Investment risk analysis specialist",
- system_prompt="Analyze investment risks and provide risk scores. Output when analysis is complete.",
- max_loops=1,
- llm=model,
- dynamic_temperature_enabled=True,
- user_name="Kye",
- retry_attempts=3,
- context_length=8192,
- output_type="str",
- max_tokens=4000,
- stopping_token="",
- saved_state_path="risk_agent.json",
- interactive=False,
- )
-
- # Market Research Agent
- research_agent = Agent(
- agent_name="Market-Research-Agent",
- agent_description="AI and tech market research specialist",
- system_prompt="Research AI market trends and growth opportunities. Output when research is complete.",
- max_loops=1,
- llm=model,
- dynamic_temperature_enabled=True,
- user_name="Kye",
- retry_attempts=3,
- context_length=8192,
- output_type="str",
- max_tokens=4000,
- stopping_token="",
- saved_state_path="research_agent.json",
- interactive=False,
- )
-
- return [financial_agent, risk_agent, research_agent]
-
-
-async def main():
- # Create specialized agents
- agents = await create_specialized_agents()
-
- # Create workflow with group chat enabled
- workflow = create_default_workflow(
- agents=agents,
- name="AI-Investment-Analysis-Workflow",
- enable_group_chat=True,
- )
-
- # Configure speaker roles
- workflow.speaker_system.add_speaker(
- SpeakerConfig(
- role=SpeakerRole.COORDINATOR,
- agent=agents[0], # Financial agent as coordinator
- priority=1,
- concurrent=False,
- required=True,
- )
- )
-
- workflow.speaker_system.add_speaker(
- SpeakerConfig(
- role=SpeakerRole.CRITIC,
- agent=agents[1], # Risk agent as critic
- priority=2,
- concurrent=True,
- )
- )
-
- workflow.speaker_system.add_speaker(
- SpeakerConfig(
- role=SpeakerRole.EXECUTOR,
- agent=agents[2], # Research agent as executor
- priority=2,
- concurrent=True,
- )
- )
-
- # Investment analysis task
- investment_task = """
- Create a comprehensive investment analysis for a $40k portfolio focused on AI growth opportunities:
- 1. Identify high-growth AI ETFs and index funds
- 2. Analyze risks and potential returns
- 3. Create a diversified portfolio allocation
- 4. Provide market trend analysis
- Present the results in a structured markdown format.
- """
-
- try:
- # Run workflow with retry
- result = await run_workflow_with_retry(
- workflow=workflow, task=investment_task, max_retries=3
- )
-
- print("\nWorkflow Results:")
- print("================")
-
- # Process and display agent outputs
- for output in result.agent_outputs:
- print(f"\nAgent: {output.agent_name}")
- print("-" * (len(output.agent_name) + 8))
- print(output.output)
-
- # Display group chat history if enabled
- if workflow.enable_group_chat:
- print("\nGroup Chat Discussion:")
- print("=====================")
- for msg in workflow.speaker_system.message_history:
- print(f"\n{msg.role} ({msg.agent_name}):")
- print(msg.content)
-
- # Save detailed results
- if result.metadata.get("shared_memory_keys"):
- print("\nShared Insights:")
- print("===============")
- for key in result.metadata["shared_memory_keys"]:
- value = workflow.shared_memory.get(key)
- if value:
- print(f"\n{key}:")
- print(value)
-
- except Exception as e:
- print(f"Workflow failed: {str(e)}")
-
- finally:
- await workflow.cleanup()
-
-
-if __name__ == "__main__":
- # Run the example
- asyncio.run(main())
diff --git a/examples/agent_with_fluidapi.py b/examples/demos/agent_with_fluidapi.py
similarity index 100%
rename from examples/agent_with_fluidapi.py
rename to examples/demos/agent_with_fluidapi.py
diff --git a/examples/chart_swarm.py b/examples/demos/chart_swarm.py
similarity index 100%
rename from examples/chart_swarm.py
rename to examples/demos/chart_swarm.py
diff --git a/examples/dao_swarm.py b/examples/demos/crypto/dao_swarm.py
similarity index 100%
rename from examples/dao_swarm.py
rename to examples/demos/crypto/dao_swarm.py
diff --git a/examples/htx_swarm.py b/examples/demos/crypto/htx_swarm.py
similarity index 100%
rename from examples/htx_swarm.py
rename to examples/demos/crypto/htx_swarm.py
diff --git a/examples/crypto/swarms_coin_agent.py b/examples/demos/crypto/swarms_coin_agent.py
similarity index 100%
rename from examples/crypto/swarms_coin_agent.py
rename to examples/demos/crypto/swarms_coin_agent.py
diff --git a/examples/crypto/swarms_coin_multimarket.py b/examples/demos/crypto/swarms_coin_multimarket.py
similarity index 100%
rename from examples/crypto/swarms_coin_multimarket.py
rename to examples/demos/crypto/swarms_coin_multimarket.py
diff --git a/examples/cuda_swarm.py b/examples/demos/cuda_swarm.py
similarity index 100%
rename from examples/cuda_swarm.py
rename to examples/demos/cuda_swarm.py
diff --git a/examples/ethchain_agent.py b/examples/demos/ethchain_agent.py
similarity index 100%
rename from examples/ethchain_agent.py
rename to examples/demos/ethchain_agent.py
diff --git a/examples/hackathon_feb16/fraud.py b/examples/demos/hackathon_feb16/fraud.py
similarity index 100%
rename from examples/hackathon_feb16/fraud.py
rename to examples/demos/hackathon_feb16/fraud.py
diff --git a/examples/hackathon_feb16/gassisan_splat.py b/examples/demos/hackathon_feb16/gassisan_splat.py
similarity index 100%
rename from examples/hackathon_feb16/gassisan_splat.py
rename to examples/demos/hackathon_feb16/gassisan_splat.py
diff --git a/examples/hackathon_feb16/sarasowti.py b/examples/demos/hackathon_feb16/sarasowti.py
similarity index 100%
rename from examples/hackathon_feb16/sarasowti.py
rename to examples/demos/hackathon_feb16/sarasowti.py
diff --git a/examples/hackathon_feb16/swarms_of_browser_agents.py b/examples/demos/hackathon_feb16/swarms_of_browser_agents.py
similarity index 100%
rename from examples/hackathon_feb16/swarms_of_browser_agents.py
rename to examples/demos/hackathon_feb16/swarms_of_browser_agents.py
diff --git a/examples/insurance_swarm.py b/examples/demos/insurance_swarm.py
similarity index 100%
rename from examples/insurance_swarm.py
rename to examples/demos/insurance_swarm.py
diff --git a/examples/legal_swarm.py b/examples/demos/legal_swarm.py
similarity index 100%
rename from examples/legal_swarm.py
rename to examples/demos/legal_swarm.py
diff --git a/examples/materials_science_agents.py b/examples/demos/materials_science_agents.py
similarity index 100%
rename from examples/materials_science_agents.py
rename to examples/demos/materials_science_agents.py
diff --git a/examples/medical_analysis/health_privacy_swarm 2.py b/examples/demos/medical_analysis/health_privacy_swarm 2.py
similarity index 100%
rename from examples/medical_analysis/health_privacy_swarm 2.py
rename to examples/demos/medical_analysis/health_privacy_swarm 2.py
diff --git a/examples/medical_analysis/health_privacy_swarm.py b/examples/demos/medical_analysis/health_privacy_swarm.py
similarity index 100%
rename from examples/medical_analysis/health_privacy_swarm.py
rename to examples/demos/medical_analysis/health_privacy_swarm.py
diff --git a/examples/medical_analysis/health_privacy_swarm_two 2.py b/examples/demos/medical_analysis/health_privacy_swarm_two 2.py
similarity index 100%
rename from examples/medical_analysis/health_privacy_swarm_two 2.py
rename to examples/demos/medical_analysis/health_privacy_swarm_two 2.py
diff --git a/examples/medical_analysis/health_privacy_swarm_two.py b/examples/demos/medical_analysis/health_privacy_swarm_two.py
similarity index 100%
rename from examples/medical_analysis/health_privacy_swarm_two.py
rename to examples/demos/medical_analysis/health_privacy_swarm_two.py
diff --git a/examples/medical_analysis/medical_analysis_agent_rearrange.md b/examples/demos/medical_analysis/medical_analysis_agent_rearrange.md
similarity index 100%
rename from examples/medical_analysis/medical_analysis_agent_rearrange.md
rename to examples/demos/medical_analysis/medical_analysis_agent_rearrange.md
diff --git a/examples/medical_analysis/medical_coder_agent.py b/examples/demos/medical_analysis/medical_coder_agent.py
similarity index 97%
rename from examples/medical_analysis/medical_coder_agent.py
rename to examples/demos/medical_analysis/medical_coder_agent.py
index 954c3718..d4d1197c 100644
--- a/examples/medical_analysis/medical_coder_agent.py
+++ b/examples/demos/medical_analysis/medical_coder_agent.py
@@ -1,22 +1,22 @@
"""
-- For each diagnosis, pull lab results,
-- egfr
-- for each diagnosis, pull lab ranges,
+- For each diagnosis, pull lab results,
+- egfr
+- for each diagnosis, pull lab ranges,
- pull ranges for diagnosis
- if the diagnosis is x, then the lab ranges should be a to b
-- train the agents, increase the load of input
+- train the agents, increase the load of input
- medical history sent to the agent
- setup rag for the agents
-- run the first agent -> kidney disease -> don't know the stage -> stage 2 -> lab results -> indicative of stage 3 -> the case got elavated ->
+- run the first agent -> kidney disease -> don't know the stage -> stage 2 -> lab results -> indicative of stage 3 -> the case got elavated ->
- how to manage diseases and by looking at correlating lab, docs, diagnoses
-- put docs in rag ->
+- put docs in rag ->
- monitoring, evaluation, and treatment
- can we confirm for every diagnosis -> monitoring, evaluation, and treatment, specialized for these things
- find diagnosis -> or have diagnosis, -> for each diagnosis are there evidence of those 3 things
-- swarm of those 4 agents, ->
+- swarm of those 4 agents, ->
- fda api for healthcare for commerically available papers
--
+-
"""
diff --git a/examples/medical_analysis/medical_coding_report.md b/examples/demos/medical_analysis/medical_coding_report.md
similarity index 100%
rename from examples/medical_analysis/medical_coding_report.md
rename to examples/demos/medical_analysis/medical_coding_report.md
diff --git a/examples/medical_analysis/medical_diagnosis_report.md b/examples/demos/medical_analysis/medical_diagnosis_report.md
similarity index 100%
rename from examples/medical_analysis/medical_diagnosis_report.md
rename to examples/demos/medical_analysis/medical_diagnosis_report.md
diff --git a/examples/medical_analysis/new_medical_rearrange.py b/examples/demos/medical_analysis/new_medical_rearrange.py
similarity index 100%
rename from examples/medical_analysis/new_medical_rearrange.py
rename to examples/demos/medical_analysis/new_medical_rearrange.py
diff --git a/examples/medical_analysis/rearrange_video_examples/reports/medical_analysis_agent_rearrange.md b/examples/demos/medical_analysis/rearrange_video_examples/reports/medical_analysis_agent_rearrange.md
similarity index 100%
rename from examples/medical_analysis/rearrange_video_examples/reports/medical_analysis_agent_rearrange.md
rename to examples/demos/medical_analysis/rearrange_video_examples/reports/medical_analysis_agent_rearrange.md
diff --git a/examples/medical_analysis/rearrange_video_examples/reports/vc_document_analysis.md b/examples/demos/medical_analysis/rearrange_video_examples/reports/vc_document_analysis.md
similarity index 100%
rename from examples/medical_analysis/rearrange_video_examples/reports/vc_document_analysis.md
rename to examples/demos/medical_analysis/rearrange_video_examples/reports/vc_document_analysis.md
diff --git a/examples/medical_analysis/rearrange_video_examples/term_sheet_swarm.py b/examples/demos/medical_analysis/rearrange_video_examples/term_sheet_swarm.py
similarity index 100%
rename from examples/medical_analysis/rearrange_video_examples/term_sheet_swarm.py
rename to examples/demos/medical_analysis/rearrange_video_examples/term_sheet_swarm.py
diff --git a/examples/morgtate_swarm.py b/examples/demos/morgtate_swarm.py
similarity index 100%
rename from examples/morgtate_swarm.py
rename to examples/demos/morgtate_swarm.py
diff --git a/examples/ollama_demo.py b/examples/demos/ollama_demo.py
similarity index 97%
rename from examples/ollama_demo.py
rename to examples/demos/ollama_demo.py
index bf369a56..ee42d6d3 100644
--- a/examples/ollama_demo.py
+++ b/examples/demos/ollama_demo.py
@@ -1,22 +1,22 @@
"""
-- For each diagnosis, pull lab results,
-- egfr
-- for each diagnosis, pull lab ranges,
+- For each diagnosis, pull lab results,
+- egfr
+- for each diagnosis, pull lab ranges,
- pull ranges for diagnosis
- if the diagnosis is x, then the lab ranges should be a to b
-- train the agents, increase the load of input
+- train the agents, increase the load of input
- medical history sent to the agent
- setup rag for the agents
-- run the first agent -> kidney disease -> don't know the stage -> stage 2 -> lab results -> indicative of stage 3 -> the case got elavated ->
+- run the first agent -> kidney disease -> don't know the stage -> stage 2 -> lab results -> indicative of stage 3 -> the case got elavated ->
- how to manage diseases and by looking at correlating lab, docs, diagnoses
-- put docs in rag ->
+- put docs in rag ->
- monitoring, evaluation, and treatment
- can we confirm for every diagnosis -> monitoring, evaluation, and treatment, specialized for these things
- find diagnosis -> or have diagnosis, -> for each diagnosis are there evidence of those 3 things
-- swarm of those 4 agents, ->
+- swarm of those 4 agents, ->
- fda api for healthcare for commerically available papers
--
+-
"""
diff --git a/examples/open_scientist.py b/examples/demos/open_scientist.py
similarity index 100%
rename from examples/open_scientist.py
rename to examples/demos/open_scientist.py
diff --git a/examples/privacy_building.py b/examples/demos/privacy_building.py
similarity index 100%
rename from examples/privacy_building.py
rename to examples/demos/privacy_building.py
diff --git a/examples/real_estate_agent.py b/examples/demos/real_estate_agent.py
similarity index 100%
rename from examples/real_estate_agent.py
rename to examples/demos/real_estate_agent.py
diff --git a/examples/scient_agents/deep_research_swarm_example.py b/examples/demos/scient_agents/deep_research_swarm_example.py
similarity index 100%
rename from examples/scient_agents/deep_research_swarm_example.py
rename to examples/demos/scient_agents/deep_research_swarm_example.py
diff --git a/examples/scient_agents/paper_idea_agent.py b/examples/demos/scient_agents/paper_idea_agent.py
similarity index 100%
rename from examples/scient_agents/paper_idea_agent.py
rename to examples/demos/scient_agents/paper_idea_agent.py
diff --git a/examples/scient_agents/paper_idea_profile.py b/examples/demos/scient_agents/paper_idea_profile.py
similarity index 100%
rename from examples/scient_agents/paper_idea_profile.py
rename to examples/demos/scient_agents/paper_idea_profile.py
diff --git a/examples/sentiment_news_analysis.py b/examples/demos/sentiment_news_analysis.py
similarity index 100%
rename from examples/sentiment_news_analysis.py
rename to examples/demos/sentiment_news_analysis.py
diff --git a/examples/spike/agent_rearrange_test.py b/examples/demos/spike/agent_rearrange_test.py
similarity index 100%
rename from examples/spike/agent_rearrange_test.py
rename to examples/demos/spike/agent_rearrange_test.py
diff --git a/examples/spike/function_caller_example.py b/examples/demos/spike/function_caller_example.py
similarity index 100%
rename from examples/spike/function_caller_example.py
rename to examples/demos/spike/function_caller_example.py
diff --git a/examples/spike/memory.py b/examples/demos/spike/memory.py
similarity index 100%
rename from examples/spike/memory.py
rename to examples/demos/spike/memory.py
diff --git a/examples/spike/spike.zip b/examples/demos/spike/spike.zip
similarity index 100%
rename from examples/spike/spike.zip
rename to examples/demos/spike/spike.zip
diff --git a/examples/spike/test.py b/examples/demos/spike/test.py
similarity index 100%
rename from examples/spike/test.py
rename to examples/demos/spike/test.py
diff --git a/examples/swarms_of_vllm.py b/examples/demos/swarms_of_vllm.py
similarity index 100%
rename from examples/swarms_of_vllm.py
rename to examples/demos/swarms_of_vllm.py
diff --git a/examples/gemini_model.py b/examples/gemini_model.py
deleted file mode 100644
index f38fa1da..00000000
--- a/examples/gemini_model.py
+++ /dev/null
@@ -1,63 +0,0 @@
-import os
-import google.generativeai as genai
-from loguru import logger
-
-
-class GeminiModel:
- """
- Represents a GeminiModel instance for generating text based on user input.
- """
-
- def __init__(
- self,
- temperature: float,
- top_p: float,
- top_k: float,
- ):
- """
- Initializes the GeminiModel by setting up the API key, generation configuration, and starting a chat session.
- Raises a KeyError if the GEMINI_API_KEY environment variable is not found.
- """
- try:
- api_key = os.environ["GEMINI_API_KEY"]
- genai.configure(api_key=api_key)
- self.generation_config = {
- "temperature": 1,
- "top_p": 0.95,
- "top_k": 40,
- "max_output_tokens": 8192,
- "response_mime_type": "text/plain",
- }
- self.model = genai.GenerativeModel(
- model_name="gemini-1.5-pro",
- generation_config=self.generation_config,
- )
- self.chat_session = self.model.start_chat(history=[])
- except KeyError as e:
- logger.error(f"Environment variable not found: {e}")
- raise
-
- def run(self, task: str) -> str:
- """
- Sends a message to the chat session and returns the response text.
- Raises an Exception if there's an error running the GeminiModel.
-
- Args:
- task (str): The input task or message to send to the chat session.
-
- Returns:
- str: The response text from the chat session.
- """
- try:
- response = self.chat_session.send_message(task)
- return response.text
- except Exception as e:
- logger.error(f"Error running GeminiModel: {e}")
- raise
-
-
-# Example usage
-if __name__ == "__main__":
- gemini_model = GeminiModel()
- output = gemini_model.run("INSERT_INPUT_HERE")
- print(output)
diff --git a/examples/main.py b/examples/main.py
deleted file mode 100644
index 9cd2db5c..00000000
--- a/examples/main.py
+++ /dev/null
@@ -1,272 +0,0 @@
-from typing import List, Dict
-from dataclasses import dataclass
-from datetime import datetime
-import asyncio
-import aiohttp
-from loguru import logger
-from swarms import Agent
-from pathlib import Path
-import json
-
-
-@dataclass
-class CryptoData:
- """Real-time cryptocurrency data structure"""
-
- symbol: str
- current_price: float
- market_cap: float
- total_volume: float
- price_change_24h: float
- market_cap_rank: int
-
-
-class DataFetcher:
- """Handles real-time data fetching from CoinGecko"""
-
- def __init__(self):
- self.base_url = "https://api.coingecko.com/api/v3"
- self.session = None
-
- async def _init_session(self):
- if self.session is None:
- self.session = aiohttp.ClientSession()
-
- async def close(self):
- if self.session:
- await self.session.close()
- self.session = None
-
- async def get_market_data(
- self, limit: int = 20
- ) -> List[CryptoData]:
- """Fetch market data for top cryptocurrencies"""
- await self._init_session()
-
- url = f"{self.base_url}/coins/markets"
- params = {
- "vs_currency": "usd",
- "order": "market_cap_desc",
- "per_page": str(limit),
- "page": "1",
- "sparkline": "false",
- }
-
- try:
- async with self.session.get(
- url, params=params
- ) as response:
- if response.status != 200:
- logger.error(
- f"API Error {response.status}: {await response.text()}"
- )
- return []
-
- data = await response.json()
- crypto_data = []
-
- for coin in data:
- try:
- crypto_data.append(
- CryptoData(
- symbol=str(
- coin.get("symbol", "")
- ).upper(),
- current_price=float(
- coin.get("current_price", 0)
- ),
- market_cap=float(
- coin.get("market_cap", 0)
- ),
- total_volume=float(
- coin.get("total_volume", 0)
- ),
- price_change_24h=float(
- coin.get("price_change_24h", 0)
- ),
- market_cap_rank=int(
- coin.get("market_cap_rank", 0)
- ),
- )
- )
- except (ValueError, TypeError) as e:
- logger.error(
- f"Error processing coin data: {str(e)}"
- )
- continue
-
- logger.info(
- f"Successfully fetched data for {len(crypto_data)} coins"
- )
- return crypto_data
-
- except Exception as e:
- logger.error(f"Exception in get_market_data: {str(e)}")
- return []
-
-
-class CryptoSwarmSystem:
- def __init__(self):
- self.agents = self._initialize_agents()
- self.data_fetcher = DataFetcher()
- logger.info("Crypto Swarm System initialized")
-
- def _initialize_agents(self) -> Dict[str, Agent]:
- """Initialize different specialized agents"""
- base_config = {
- "max_loops": 1,
- "autosave": True,
- "dashboard": False,
- "verbose": True,
- "dynamic_temperature_enabled": True,
- "retry_attempts": 3,
- "context_length": 200000,
- "return_step_meta": False,
- "output_type": "string",
- "streaming_on": False,
- }
-
- agents = {
- "price_analyst": Agent(
- agent_name="Price-Analysis-Agent",
- system_prompt="""Analyze the given cryptocurrency price data and provide insights about:
- 1. Price trends and movements
- 2. Notable price actions
- 3. Potential support/resistance levels""",
- saved_state_path="price_agent.json",
- user_name="price_analyzer",
- **base_config,
- ),
- "volume_analyst": Agent(
- agent_name="Volume-Analysis-Agent",
- system_prompt="""Analyze the given cryptocurrency volume data and provide insights about:
- 1. Volume trends
- 2. Notable volume spikes
- 3. Market participation levels""",
- saved_state_path="volume_agent.json",
- user_name="volume_analyzer",
- **base_config,
- ),
- "market_analyst": Agent(
- agent_name="Market-Analysis-Agent",
- system_prompt="""Analyze the overall cryptocurrency market data and provide insights about:
- 1. Market trends
- 2. Market dominance
- 3. Notable market movements""",
- saved_state_path="market_agent.json",
- user_name="market_analyzer",
- **base_config,
- ),
- }
- return agents
-
- async def analyze_market(self) -> Dict:
- """Run real-time market analysis using all agents"""
- try:
- # Fetch market data
- logger.info("Fetching market data for top 20 coins")
- crypto_data = await self.data_fetcher.get_market_data(20)
-
- if not crypto_data:
- return {
- "error": "Failed to fetch market data",
- "timestamp": datetime.now().isoformat(),
- }
-
- # Run analysis with each agent
- results = {}
- for agent_name, agent in self.agents.items():
- logger.info(f"Running {agent_name} analysis")
- analysis = self._run_agent_analysis(
- agent, crypto_data
- )
- results[agent_name] = analysis
-
- return {
- "timestamp": datetime.now().isoformat(),
- "market_data": {
- coin.symbol: {
- "price": coin.current_price,
- "market_cap": coin.market_cap,
- "volume": coin.total_volume,
- "price_change_24h": coin.price_change_24h,
- "rank": coin.market_cap_rank,
- }
- for coin in crypto_data
- },
- "analysis": results,
- }
-
- except Exception as e:
- logger.error(f"Error in market analysis: {str(e)}")
- return {
- "error": str(e),
- "timestamp": datetime.now().isoformat(),
- }
-
- def _run_agent_analysis(
- self, agent: Agent, crypto_data: List[CryptoData]
- ) -> str:
- """Run analysis for a single agent"""
- try:
- data_str = json.dumps(
- [
- {
- "symbol": cd.symbol,
- "price": cd.current_price,
- "market_cap": cd.market_cap,
- "volume": cd.total_volume,
- "price_change_24h": cd.price_change_24h,
- "rank": cd.market_cap_rank,
- }
- for cd in crypto_data
- ],
- indent=2,
- )
-
- prompt = f"""Analyze this real-time cryptocurrency market data and provide detailed insights:
- {data_str}"""
-
- return agent.run(prompt)
-
- except Exception as e:
- logger.error(f"Error in {agent.agent_name}: {str(e)}")
- return f"Error: {str(e)}"
-
-
-async def main():
- # Create output directory
- Path("reports").mkdir(exist_ok=True)
-
- # Initialize the swarm system
- swarm = CryptoSwarmSystem()
-
- while True:
- try:
- # Run analysis
- report = await swarm.analyze_market()
-
- # Save report
- timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
- report_path = f"reports/market_analysis_{timestamp}.json"
-
- with open(report_path, "w") as f:
- json.dump(report, f, indent=2, default=str)
-
- logger.info(
- f"Analysis complete. Report saved to {report_path}"
- )
-
- # Wait before next analysis
- await asyncio.sleep(300) # 5 minutes
-
- except Exception as e:
- logger.error(f"Error in main loop: {str(e)}")
- await asyncio.sleep(60) # Wait 1 minute before retrying
- finally:
- if swarm.data_fetcher.session:
- await swarm.data_fetcher.close()
-
-
-if __name__ == "__main__":
- asyncio.run(main())
diff --git a/examples/microstructure.py b/examples/microstructure.py
deleted file mode 100644
index c13d2e3f..00000000
--- a/examples/microstructure.py
+++ /dev/null
@@ -1,1074 +0,0 @@
-import os
-import threading
-import time
-from collections import deque
-from dataclasses import dataclass
-from datetime import datetime
-from queue import Queue
-from typing import Any, Dict, List, Optional, Tuple
-
-import ccxt
-import numpy as np
-import pandas as pd
-from dotenv import load_dotenv
-from loguru import logger
-from scipy import stats
-from swarm_models import OpenAIChat
-
-from swarms import Agent
-
-logger.enable("")
-
-
-@dataclass
-class MarketSignal:
- timestamp: datetime
- signal_type: str
- source: str
- data: Dict[str, Any]
- confidence: float
- metadata: Dict[str, Any]
-
-
-class MarketDataBuffer:
- def __init__(self, max_size: int = 10000):
- self.max_size = max_size
- self.data = deque(maxlen=max_size)
- self.lock = threading.Lock()
-
- def add(self, item: Any) -> None:
- with self.lock:
- self.data.append(item)
-
- def get_latest(self, n: int = None) -> List[Any]:
- with self.lock:
- if n is None:
- return list(self.data)
- return list(self.data)[-n:]
-
-
-class SignalCSVWriter:
- def __init__(self, output_dir: str = "market_data"):
- self.output_dir = output_dir
- self.ensure_output_dir()
- self.files = {}
-
- def ensure_output_dir(self):
- if not os.path.exists(self.output_dir):
- os.makedirs(self.output_dir)
-
- def get_filename(self, signal_type: str, symbol: str) -> str:
- date_str = datetime.now().strftime("%Y%m%d")
- return (
- f"{self.output_dir}/{signal_type}_{symbol}_{date_str}.csv"
- )
-
- def write_order_book_signal(self, signal: MarketSignal):
- symbol = signal.data["symbol"]
- metrics = signal.data["metrics"]
- filename = self.get_filename("order_book", symbol)
-
- # Create header if file doesn't exist
- if not os.path.exists(filename):
- header = [
- "timestamp",
- "symbol",
- "bid_volume",
- "ask_volume",
- "mid_price",
- "bid_vwap",
- "ask_vwap",
- "spread",
- "depth_imbalance",
- "confidence",
- ]
- with open(filename, "w") as f:
- f.write(",".join(header) + "\n")
-
- # Write data
- data = [
- str(signal.timestamp),
- symbol,
- str(metrics["bid_volume"]),
- str(metrics["ask_volume"]),
- str(metrics["mid_price"]),
- str(metrics["bid_vwap"]),
- str(metrics["ask_vwap"]),
- str(metrics["spread"]),
- str(metrics["depth_imbalance"]),
- str(signal.confidence),
- ]
-
- with open(filename, "a") as f:
- f.write(",".join(data) + "\n")
-
- def write_tick_signal(self, signal: MarketSignal):
- symbol = signal.data["symbol"]
- metrics = signal.data["metrics"]
- filename = self.get_filename("tick_data", symbol)
-
- if not os.path.exists(filename):
- header = [
- "timestamp",
- "symbol",
- "vwap",
- "price_momentum",
- "volume_mean",
- "trade_intensity",
- "kyle_lambda",
- "roll_spread",
- "confidence",
- ]
- with open(filename, "w") as f:
- f.write(",".join(header) + "\n")
-
- data = [
- str(signal.timestamp),
- symbol,
- str(metrics["vwap"]),
- str(metrics["price_momentum"]),
- str(metrics["volume_mean"]),
- str(metrics["trade_intensity"]),
- str(metrics["kyle_lambda"]),
- str(metrics["roll_spread"]),
- str(signal.confidence),
- ]
-
- with open(filename, "a") as f:
- f.write(",".join(data) + "\n")
-
- def write_arbitrage_signal(self, signal: MarketSignal):
- if (
- "best_opportunity" not in signal.data
- or not signal.data["best_opportunity"]
- ):
- return
-
- symbol = signal.data["symbol"]
- opp = signal.data["best_opportunity"]
- filename = self.get_filename("arbitrage", symbol)
-
- if not os.path.exists(filename):
- header = [
- "timestamp",
- "symbol",
- "buy_venue",
- "sell_venue",
- "spread",
- "return",
- "buy_price",
- "sell_price",
- "confidence",
- ]
- with open(filename, "w") as f:
- f.write(",".join(header) + "\n")
-
- data = [
- str(signal.timestamp),
- symbol,
- opp["buy_venue"],
- opp["sell_venue"],
- str(opp["spread"]),
- str(opp["return"]),
- str(opp["buy_price"]),
- str(opp["sell_price"]),
- str(signal.confidence),
- ]
-
- with open(filename, "a") as f:
- f.write(",".join(data) + "\n")
-
-
-class ExchangeManager:
- def __init__(self):
- self.available_exchanges = {
- "kraken": ccxt.kraken,
- "coinbase": ccxt.coinbase,
- "kucoin": ccxt.kucoin,
- "bitfinex": ccxt.bitfinex,
- "gemini": ccxt.gemini,
- }
- self.active_exchanges = {}
- self.test_exchanges()
-
- def test_exchanges(self):
- """Test each exchange and keep only the accessible ones"""
- for name, exchange_class in self.available_exchanges.items():
- try:
- exchange = exchange_class()
- exchange.load_markets()
- self.active_exchanges[name] = exchange
- logger.info(f"Successfully connected to {name}")
- except Exception as e:
- logger.warning(f"Could not connect to {name}: {e}")
-
- def get_primary_exchange(self) -> Optional[ccxt.Exchange]:
- """Get the first available exchange"""
- if not self.active_exchanges:
- raise RuntimeError("No exchanges available")
- return next(iter(self.active_exchanges.values()))
-
- def get_all_active_exchanges(self) -> Dict[str, ccxt.Exchange]:
- """Get all active exchanges"""
- return self.active_exchanges
-
-
-class BaseMarketAgent(Agent):
- def __init__(
- self,
- agent_name: str,
- system_prompt: str,
- api_key: str,
- model_name: str = "gpt-4-0125-preview",
- temperature: float = 0.1,
- ):
- model = OpenAIChat(
- openai_api_key=api_key,
- model_name=model_name,
- temperature=temperature,
- )
- super().__init__(
- agent_name=agent_name,
- system_prompt=system_prompt,
- llm=model,
- max_loops=1,
- autosave=True,
- dashboard=False,
- verbose=True,
- dynamic_temperature_enabled=True,
- context_length=200000,
- streaming_on=True,
- output_type="str",
- )
- self.signal_queue = Queue()
- self.is_running = False
- self.last_update = datetime.now()
- self.update_interval = 1.0 # seconds
-
- def rate_limit_check(self) -> bool:
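-        # Throttle analysis so each agent runs at most once per `update_interval` seconds.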
- current_time = datetime.now()
- if (
- current_time - self.last_update
- ).total_seconds() < self.update_interval:
- return False
- self.last_update = current_time
- return True
-
-
-class OrderBookAgent(BaseMarketAgent):
- def __init__(self, api_key: str):
- system_prompt = """
- You are an Order Book Analysis Agent specialized in detecting institutional flows.
- Monitor order book depth and changes to identify potential large trades and institutional activity.
- Analyze patterns in order placement and cancellation rates.
- """
- super().__init__("OrderBookAgent", system_prompt, api_key)
- exchange_manager = ExchangeManager()
- self.exchange = exchange_manager.get_primary_exchange()
- self.order_book_buffer = MarketDataBuffer(max_size=100)
- self.vwap_window = 20
-
- def calculate_order_book_metrics(
- self, order_book: Dict
- ) -> Dict[str, float]:
- bids = np.array(order_book["bids"])
- asks = np.array(order_book["asks"])
-
- # Calculate key metrics
- bid_volume = np.sum(bids[:, 1])
- ask_volume = np.sum(asks[:, 1])
- mid_price = (bids[0][0] + asks[0][0]) / 2
-
-        # Volume-weighted average price over the top `vwap_window` levels
-        bid_window_volume = np.sum(bids[: self.vwap_window, 1])
-        ask_window_volume = np.sum(asks[: self.vwap_window, 1])
-        bid_vwap = (
-            np.sum(
-                bids[: self.vwap_window, 0]
-                * bids[: self.vwap_window, 1]
-            )
-            / bid_window_volume
-            if bid_window_volume > 0
-            else 0
-        )
-        ask_vwap = (
-            np.sum(
-                asks[: self.vwap_window, 0]
-                * asks[: self.vwap_window, 1]
-            )
-            / ask_window_volume
-            if ask_window_volume > 0
-            else 0
-        )
-
- # Calculate order book slope
- bid_slope = np.polyfit(
- range(len(bids[:10])), bids[:10, 0], 1
- )[0]
- ask_slope = np.polyfit(
- range(len(asks[:10])), asks[:10, 0], 1
- )[0]
-
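-        # depth_imbalance ranges from -1 (ask-heavy) to +1 (bid-heavy) and later
-        # drives most of the signal confidence.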
- return {
- "bid_volume": bid_volume,
- "ask_volume": ask_volume,
- "mid_price": mid_price,
- "bid_vwap": bid_vwap,
- "ask_vwap": ask_vwap,
- "bid_slope": bid_slope,
- "ask_slope": ask_slope,
- "spread": asks[0][0] - bids[0][0],
- "depth_imbalance": (bid_volume - ask_volume)
- / (bid_volume + ask_volume),
- }
-
- def detect_large_orders(
- self, metrics: Dict[str, float], threshold: float = 2.0
- ) -> bool:
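-        # Flag a large order when total book volume deviates from the mean of the
-        # last 20 snapshots by more than `threshold` standard deviations.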
- historical_books = self.order_book_buffer.get_latest(20)
- if not historical_books:
- return False
-
- # Calculate historical volume statistics
- hist_volumes = [
- book["bid_volume"] + book["ask_volume"]
- for book in historical_books
- ]
- volume_mean = np.mean(hist_volumes)
- volume_std = np.std(hist_volumes)
-
- current_volume = metrics["bid_volume"] + metrics["ask_volume"]
- z_score = (current_volume - volume_mean) / (
- volume_std if volume_std > 0 else 1
- )
-
- return abs(z_score) > threshold
-
- def analyze_order_book(self, symbol: str) -> MarketSignal:
- if not self.rate_limit_check():
- return None
-
- try:
- order_book = self.exchange.fetch_order_book(
- symbol, limit=100
- )
- metrics = self.calculate_order_book_metrics(order_book)
- self.order_book_buffer.add(metrics)
-
- # Format data for LLM analysis
- analysis_prompt = f"""
- Analyze this order book for {symbol}:
- Bid Volume: {metrics['bid_volume']}
- Ask Volume: {metrics['ask_volume']}
- Mid Price: {metrics['mid_price']}
- Spread: {metrics['spread']}
- Depth Imbalance: {metrics['depth_imbalance']}
-
- What patterns do you see? Is there evidence of institutional activity?
- Are there any significant imbalances that could lead to price movement?
- """
-
- # Get LLM analysis
- llm_analysis = self.run(analysis_prompt)
-
- # Original signal creation with added LLM analysis
- return MarketSignal(
- timestamp=datetime.now(),
- signal_type="order_book_analysis",
- source="OrderBookAgent",
- data={
- "metrics": metrics,
- "large_order_detected": self.detect_large_orders(
- metrics
- ),
- "symbol": symbol,
- "llm_analysis": llm_analysis, # Add LLM insights
- },
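-                # Confidence: 70% from the absolute depth imbalance, 30% from the
-                # large-order flag, capped at 1.0.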
- confidence=min(
- abs(metrics["depth_imbalance"]) * 0.7
- + (
- 1.0
- if self.detect_large_orders(metrics)
- else 0.0
- )
- * 0.3,
- 1.0,
- ),
- metadata={
- "update_latency": (
- datetime.now() - self.last_update
- ).total_seconds(),
- "buffer_size": len(
- self.order_book_buffer.get_latest()
- ),
- },
- )
- except Exception as e:
- logger.error(f"Error in order book analysis: {str(e)}")
- return None
-
-
-class TickDataAgent(BaseMarketAgent):
- def __init__(self, api_key: str):
- system_prompt = """
- You are a Tick Data Analysis Agent specialized in analyzing high-frequency price movements.
- Monitor tick-by-tick data for patterns indicating short-term price direction.
- Analyze trade size distribution and execution speed.
- """
- super().__init__("TickDataAgent", system_prompt, api_key)
- self.tick_buffer = MarketDataBuffer(max_size=5000)
- exchange_manager = ExchangeManager()
- self.exchange = exchange_manager.get_primary_exchange()
-
- def calculate_kyle_lambda(self, df: pd.DataFrame) -> float:
- """Calculate Kyle's Lambda (price impact coefficient)"""
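-        # Regress tick-to-tick price changes on volume changes; the absolute slope
-        # serves as a proxy for Kyle's lambda (price impact per unit volume).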
- try:
- price_changes = df["price"].diff().dropna()
-            volume_column = "amount" if "amount" in df.columns else "volume"
-            volume_changes = df[volume_column].diff().dropna()
-
- if len(price_changes) > 1 and len(volume_changes) > 1:
- slope, _, _, _, _ = stats.linregress(
- volume_changes, price_changes
- )
- return abs(slope)
- except Exception as e:
- logger.warning(f"Error calculating Kyle's Lambda: {e}")
- return 0.0
-
- def calculate_roll_spread(self, df: pd.DataFrame) -> float:
- """Calculate Roll's implied spread"""
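-        # Roll (1984) estimator: spread ≈ 2 * sqrt(-cov(Δp_t, Δp_t-1)), defined
-        # only when successive price changes are negatively autocorrelated.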
- try:
- price_changes = df["price"].diff().dropna()
- if len(price_changes) > 1:
- autocov = np.cov(
- price_changes[:-1], price_changes[1:]
- )[0][1]
- return 2 * np.sqrt(-autocov) if autocov < 0 else 0.0
- except Exception as e:
- logger.warning(f"Error calculating Roll spread: {e}")
- return 0.0
-
- def calculate_tick_metrics(
- self, ticks: List[Dict]
- ) -> Dict[str, float]:
- try:
- # Debug the incoming data structure
- logger.info(
- f"Raw tick data structure: {ticks[0] if ticks else 'No ticks'}"
- )
-
- # Convert trades to proper format
- formatted_trades = []
- for trade in ticks:
- formatted_trade = {
- "price": float(
- trade.get("price", trade.get("last", 0))
- ), # Handle different exchange formats
- "amount": float(
- trade.get(
- "amount",
- trade.get(
- "size", trade.get("quantity", 0)
- ),
- )
- ),
- "timestamp": trade.get(
- "timestamp", int(time.time() * 1000)
- ),
- }
- formatted_trades.append(formatted_trade)
-
- df = pd.DataFrame(formatted_trades)
-
- if df.empty:
- logger.warning("No valid trades to analyze")
- return {
- "vwap": 0.0,
- "price_momentum": 0.0,
- "volume_mean": 0.0,
- "volume_std": 0.0,
- "trade_intensity": 0.0,
- "kyle_lambda": 0.0,
- "roll_spread": 0.0,
- }
-
- # Calculate metrics with the properly formatted data
- metrics = {}
- metrics["vwap"] = (
- (df["price"] * df["amount"]).sum()
- / df["amount"].sum()
- if not df.empty
- else 0
- )
- metrics["price_momentum"] = (
- df["price"].diff().mean() if len(df) > 1 else 0
- )
- metrics["volume_mean"] = df["amount"].mean()
- metrics["volume_std"] = df["amount"].std()
-
- time_diff = (
- (df["timestamp"].max() - df["timestamp"].min()) / 1000
- if len(df) > 1
- else 1
- )
- metrics["trade_intensity"] = (
- len(df) / time_diff if time_diff > 0 else 0
- )
-
- metrics["kyle_lambda"] = self.calculate_kyle_lambda(df)
- metrics["roll_spread"] = self.calculate_roll_spread(df)
-
- logger.info(f"Calculated metrics: {metrics}")
- return metrics
-
- except Exception as e:
- logger.error(
- f"Error in calculate_tick_metrics: {str(e)}",
- exc_info=True,
- )
- # Return default metrics on error
- return {
- "vwap": 0.0,
- "price_momentum": 0.0,
- "volume_mean": 0.0,
- "volume_std": 0.0,
- "trade_intensity": 0.0,
- "kyle_lambda": 0.0,
- "roll_spread": 0.0,
- }
-
- def analyze_ticks(self, symbol: str) -> MarketSignal:
- if not self.rate_limit_check():
- return None
-
- try:
- # Fetch recent trades
- trades = self.exchange.fetch_trades(symbol, limit=100)
-
- # Debug the raw trades data
- logger.info(f"Fetched {len(trades)} trades for {symbol}")
- if trades:
- logger.info(f"Sample trade: {trades[0]}")
-
-            # Buffer trades individually so get_latest() yields flat trade dicts
-            for trade in trades:
-                self.tick_buffer.add(trade)
- recent_ticks = self.tick_buffer.get_latest(1000)
- metrics = self.calculate_tick_metrics(recent_ticks)
-
- # Only proceed with LLM analysis if we have valid metrics
- if metrics["vwap"] > 0:
- analysis_prompt = f"""
- Analyze these trading patterns for {symbol}:
- VWAP: {metrics['vwap']:.2f}
- Price Momentum: {metrics['price_momentum']:.2f}
- Trade Intensity: {metrics['trade_intensity']:.2f}
- Kyle's Lambda: {metrics['kyle_lambda']:.2f}
-
- What does this tell us about:
- 1. Current market sentiment
- 2. Potential price direction
- 3. Trading activity patterns
- """
- llm_analysis = self.run(analysis_prompt)
- else:
- llm_analysis = "Insufficient data for analysis"
-
- return MarketSignal(
- timestamp=datetime.now(),
- signal_type="tick_analysis",
- source="TickDataAgent",
- data={
- "metrics": metrics,
- "symbol": symbol,
- "prediction": np.sign(metrics["price_momentum"]),
- "llm_analysis": llm_analysis,
- },
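-                # Confidence: 40% from normalized trade intensity, 60% from
-                # Kyle's lambda, each capped at 1.0.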
- confidence=min(metrics["trade_intensity"] / 100, 1.0)
- * 0.4
- + min(metrics["kyle_lambda"], 1.0) * 0.6,
- metadata={
- "update_latency": (
- datetime.now() - self.last_update
- ).total_seconds(),
- "buffer_size": len(self.tick_buffer.get_latest()),
- },
- )
-
- except Exception as e:
- logger.error(
- f"Error in tick analysis: {str(e)}", exc_info=True
- )
- return None
-
-
-class LatencyArbitrageAgent(BaseMarketAgent):
- def __init__(self, api_key: str):
- system_prompt = """
- You are a Latency Arbitrage Agent specialized in detecting price discrepancies across venues.
- Monitor multiple exchanges for price differences exceeding transaction costs.
- Calculate optimal trade sizes and routes.
- """
- super().__init__(
- "LatencyArbitrageAgent", system_prompt, api_key
- )
- exchange_manager = ExchangeManager()
- self.exchanges = exchange_manager.get_all_active_exchanges()
- self.fee_structure = {
- "kraken": 0.0026, # 0.26% taker fee
- "coinbase": 0.006, # 0.6% taker fee
- "kucoin": 0.001, # 0.1% taker fee
- "bitfinex": 0.002, # 0.2% taker fee
- "gemini": 0.003, # 0.3% taker fee
- }
- self.price_buffer = {
- ex: MarketDataBuffer(max_size=100)
- for ex in self.exchanges
- }
-
- def calculate_effective_prices(
- self, ticker: Dict, venue: str
- ) -> Tuple[float, float]:
- """Calculate effective prices including fees"""
- fee = self.fee_structure[venue]
- return (
- ticker["bid"] * (1 - fee), # Effective sell price
- ticker["ask"] * (1 + fee), # Effective buy price
- )
-
- def calculate_arbitrage_metrics(
- self, prices: Dict[str, Dict]
- ) -> Dict:
- opportunities = []
-
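-        # Compare every ordered venue pair: sell at venue1's effective bid, buy at
-        # venue2's effective ask; any positive post-fee spread is an opportunity.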
- for venue1 in prices:
- for venue2 in prices:
- if venue1 != venue2:
- sell_price, _ = self.calculate_effective_prices(
- prices[venue1], venue1
- )
- _, buy_price = self.calculate_effective_prices(
- prices[venue2], venue2
- )
-
- spread = sell_price - buy_price
- if spread > 0:
- opportunities.append(
- {
- "sell_venue": venue1,
- "buy_venue": venue2,
- "spread": spread,
- "return": spread / buy_price,
- "buy_price": buy_price,
- "sell_price": sell_price,
- }
- )
-
- return {
- "opportunities": opportunities,
- "best_opportunity": (
- max(opportunities, key=lambda x: x["return"])
- if opportunities
- else None
- ),
- }
-
- def find_arbitrage(self, symbol: str) -> MarketSignal:
- """
- Find arbitrage opportunities across exchanges with LLM analysis
- """
- if not self.rate_limit_check():
- return None
-
- try:
- prices = {}
- timestamps = {}
-
- for name, exchange in self.exchanges.items():
- try:
- ticker = exchange.fetch_ticker(symbol)
- prices[name] = {
- "bid": ticker["bid"],
- "ask": ticker["ask"],
- }
- timestamps[name] = ticker["timestamp"]
- self.price_buffer[name].add(prices[name])
- except Exception as e:
- logger.warning(
- f"Error fetching {name} price: {e}"
- )
-
- if len(prices) < 2:
- return None
-
- metrics = self.calculate_arbitrage_metrics(prices)
-
- if not metrics["best_opportunity"]:
- return None
-
- # Calculate confidence based on spread and timing
- opp = metrics["best_opportunity"]
- timing_factor = 1.0 - min(
- abs(
- timestamps[opp["sell_venue"]]
- - timestamps[opp["buy_venue"]]
- )
- / 1000,
- 1.0,
- )
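-            # timing_factor decays to zero as the two venues' quote timestamps drift
-            # more than one second apart; spread_factor treats a 20% return as full confidence.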
- spread_factor = min(
- opp["return"] * 5, 1.0
- ) # Scale return to confidence
-
- confidence = timing_factor * 0.4 + spread_factor * 0.6
-
- # Format price data for LLM analysis
- price_summary = "\n".join(
- [
- f"{venue}: Bid ${prices[venue]['bid']:.2f}, Ask ${prices[venue]['ask']:.2f}"
- for venue in prices.keys()
- ]
- )
-
- # Create detailed analysis prompt
- analysis_prompt = f"""
- Analyze this arbitrage opportunity for {symbol}:
-
- Current Prices:
- {price_summary}
-
- Best Opportunity Found:
- Buy Venue: {opp['buy_venue']} at ${opp['buy_price']:.2f}
- Sell Venue: {opp['sell_venue']} at ${opp['sell_price']:.2f}
- Spread: ${opp['spread']:.2f}
- Expected Return: {opp['return']*100:.3f}%
- Time Difference: {abs(timestamps[opp['sell_venue']] - timestamps[opp['buy_venue']])}ms
-
- Consider:
- 1. Is this opportunity likely to be profitable after execution costs?
- 2. What risks might prevent successful execution?
- 3. What market conditions might have created this opportunity?
- 4. How does the timing difference affect execution probability?
- """
-
- # Get LLM analysis
- llm_analysis = self.run(analysis_prompt)
-
- # Create comprehensive signal
- return MarketSignal(
- timestamp=datetime.now(),
- signal_type="arbitrage_opportunity",
- source="LatencyArbitrageAgent",
- data={
- "metrics": metrics,
- "symbol": symbol,
- "best_opportunity": metrics["best_opportunity"],
- "all_prices": prices,
- "llm_analysis": llm_analysis,
- "timing": {
- "time_difference_ms": abs(
- timestamps[opp["sell_venue"]]
- - timestamps[opp["buy_venue"]]
- ),
- "timestamps": timestamps,
- },
- },
- confidence=confidence,
- metadata={
- "update_latency": (
- datetime.now() - self.last_update
- ).total_seconds(),
- "timestamp_deltas": timestamps,
- "venue_count": len(prices),
- "execution_risk": 1.0
- - timing_factor, # Higher time difference = higher risk
- },
- )
-
- except Exception as e:
- logger.error(f"Error in arbitrage analysis: {str(e)}")
- return None
-
-
-class SwarmCoordinator:
- def __init__(self, api_key: str):
- self.api_key = api_key
- self.agents = {
- "order_book": OrderBookAgent(api_key),
- "tick_data": TickDataAgent(api_key),
- "latency_arb": LatencyArbitrageAgent(api_key),
- }
- self.signal_processors = []
- self.signal_history = MarketDataBuffer(max_size=1000)
- self.running = False
- self.lock = threading.Lock()
- self.csv_writer = SignalCSVWriter()
-
- def register_signal_processor(self, processor):
- """Register a new signal processor function"""
- with self.lock:
- self.signal_processors.append(processor)
-
- def process_signals(self, signals: List[MarketSignal]):
- """Process signals through all registered processors"""
- if not signals:
- return
-
- self.signal_history.add(signals)
-
- try:
- for processor in self.signal_processors:
- processor(signals)
- except Exception as e:
- logger.error(f"Error in signal processing: {e}")
-
- def aggregate_signals(
- self, signals: List[MarketSignal]
- ) -> Dict[str, Any]:
- """Aggregate multiple signals into a combined market view"""
- if not signals:
- return {}
-
- self.signal_history.add(signals)
-
- aggregated = {
- "timestamp": datetime.now(),
- "symbols": set(),
- "agent_signals": {},
- "combined_confidence": 0,
- "market_state": {},
- }
-
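-        # Merge per-agent signals into a single market-state snapshot keyed by metric name.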
- for signal in signals:
- symbol = signal.data.get("symbol")
- if symbol:
- aggregated["symbols"].add(symbol)
-
- agent_type = signal.source
- if agent_type not in aggregated["agent_signals"]:
- aggregated["agent_signals"][agent_type] = []
- aggregated["agent_signals"][agent_type].append(signal)
-
- # Update market state based on signal type
- if signal.signal_type == "order_book_analysis":
- metrics = signal.data.get("metrics", {})
- aggregated["market_state"].update(
- {
- "order_book_imbalance": metrics.get(
- "depth_imbalance"
- ),
- "spread": metrics.get("spread"),
- "large_orders_detected": signal.data.get(
- "large_order_detected"
- ),
- }
- )
- elif signal.signal_type == "tick_analysis":
- metrics = signal.data.get("metrics", {})
- aggregated["market_state"].update(
- {
- "price_momentum": metrics.get(
- "price_momentum"
- ),
- "trade_intensity": metrics.get(
- "trade_intensity"
- ),
- "kyle_lambda": metrics.get("kyle_lambda"),
- }
- )
- elif signal.signal_type == "arbitrage_opportunity":
- opp = signal.data.get("best_opportunity")
- if opp:
- aggregated["market_state"].update(
- {
- "arbitrage_spread": opp.get("spread"),
- "arbitrage_return": opp.get("return"),
- }
- )
-
- # Calculate combined confidence as weighted average
- confidences = [s.confidence for s in signals]
- if confidences:
- aggregated["combined_confidence"] = np.mean(confidences)
-
- return aggregated
-
- def start(self, symbols: List[str], interval: float = 1.0):
- """Start the swarm monitoring system"""
- if self.running:
- logger.warning("Swarm is already running")
- return
-
- self.running = True
-
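-        # One daemon thread per (agent, symbol) pair polls for signals every `interval` seconds.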
- def agent_loop(agent, symbol):
- while self.running:
- try:
- if isinstance(agent, OrderBookAgent):
- signal = agent.analyze_order_book(symbol)
- elif isinstance(agent, TickDataAgent):
- signal = agent.analyze_ticks(symbol)
- elif isinstance(agent, LatencyArbitrageAgent):
- signal = agent.find_arbitrage(symbol)
-
- if signal:
- agent.signal_queue.put(signal)
- except Exception as e:
- logger.error(
- f"Error in {agent.agent_name} loop: {e}"
- )
-
- time.sleep(interval)
-
- def signal_collection_loop():
- while self.running:
- try:
- current_signals = []
-
- # Collect signals from all agents
- for agent in self.agents.values():
- while not agent.signal_queue.empty():
- signal = agent.signal_queue.get_nowait()
- if signal:
- current_signals.append(signal)
-
- if current_signals:
- # Process current signals
- self.process_signals(current_signals)
-
- # Aggregate and analyze
- aggregated = self.aggregate_signals(
- current_signals
- )
- logger.info(
- f"Aggregated market view: {aggregated}"
- )
-
- except Exception as e:
- logger.error(
- f"Error in signal collection loop: {e}"
- )
-
- time.sleep(interval)
-
- # Start agent threads
- self.threads = []
- for symbol in symbols:
- for agent in self.agents.values():
- thread = threading.Thread(
- target=agent_loop,
- args=(agent, symbol),
- daemon=True,
- )
- thread.start()
- self.threads.append(thread)
-
- # Start signal collection thread
- collection_thread = threading.Thread(
- target=signal_collection_loop, daemon=True
- )
- collection_thread.start()
- self.threads.append(collection_thread)
-
- def stop(self):
- """Stop the swarm monitoring system"""
- self.running = False
- for thread in self.threads:
- thread.join(timeout=5.0)
- logger.info("Swarm stopped")
-
-
-def market_making_processor(signals: List[MarketSignal]):
- """Enhanced signal processor with LLM analysis integration"""
- for signal in signals:
- if signal.confidence > 0.8:
- if signal.signal_type == "arbitrage_opportunity":
- opp = signal.data.get("best_opportunity")
- if (
- opp and opp["return"] > 0.001
- ): # 0.1% return threshold
- logger.info(
- "\nSignificant arbitrage opportunity detected:"
- )
- logger.info(f"Return: {opp['return']*100:.3f}%")
- logger.info(f"Spread: ${opp['spread']:.2f}")
- if "llm_analysis" in signal.data:
- logger.info("\nLLM Analysis:")
- logger.info(signal.data["llm_analysis"])
-
- elif signal.signal_type == "order_book_analysis":
- imbalance = signal.data["metrics"]["depth_imbalance"]
- if abs(imbalance) > 0.3:
- logger.info(
- f"\nSignificant order book imbalance detected: {imbalance:.3f}"
- )
- if "llm_analysis" in signal.data:
- logger.info("\nLLM Analysis:")
- logger.info(signal.data["llm_analysis"])
-
- elif signal.signal_type == "tick_analysis":
- momentum = signal.data["metrics"]["price_momentum"]
- if abs(momentum) > 0:
- logger.info(
- f"\nSignificant price momentum detected: {momentum:.3f}"
- )
- if "llm_analysis" in signal.data:
- logger.info("\nLLM Analysis:")
- logger.info(signal.data["llm_analysis"])
-
-
-load_dotenv()
-api_key = os.getenv("OPENAI_API_KEY")
-
-coordinator = SwarmCoordinator(api_key)
-coordinator.register_signal_processor(market_making_processor)
-
-symbols = ["BTC/USDT", "ETH/USDT"]
-
-logger.info(
- "Starting market microstructure analysis with LLM integration..."
-)
-logger.info(f"Monitoring symbols: {symbols}")
-logger.info(
- f"CSV files will be written to: {os.path.abspath('market_data')}"
-)
-
-try:
- coordinator.start(symbols)
- while True:
- time.sleep(1)
-except KeyboardInterrupt:
- logger.info("Gracefully shutting down...")
- coordinator.stop()
diff --git a/examples/aop/client.py b/examples/misc/aop/client.py
similarity index 100%
rename from examples/aop/client.py
rename to examples/misc/aop/client.py
diff --git a/examples/aop/test_aop.py b/examples/misc/aop/test_aop.py
similarity index 100%
rename from examples/aop/test_aop.py
rename to examples/misc/aop/test_aop.py
diff --git a/examples/misc/conversation_simple.py b/examples/misc/conversation_simple.py
new file mode 100644
index 00000000..13d67278
--- /dev/null
+++ b/examples/misc/conversation_simple.py
@@ -0,0 +1,19 @@
+from swarms.structs.conversation import Conversation
+
+# Example usage: a conversation with token counting enabled
+conversation = Conversation(token_count=True)
+conversation.add("user", "Hello, how are you?")
+conversation.add("assistant", "I am doing well, thanks.")
+
+# Structured tool outputs can also be added, e.g.:
+# conversation.add(
+#     "assistant", {"name": "tool_1", "output": "Hello, how are you?"}
+# )
+
+# Inspect the conversation in different formats
+print(conversation.return_json())
+print(conversation.to_dict())
+# print(conversation.to_yaml())
diff --git a/examples/csvagent_example.py b/examples/misc/csvagent_example.py
similarity index 100%
rename from examples/csvagent_example.py
rename to examples/misc/csvagent_example.py
diff --git a/examples/dict_to_table.py b/examples/misc/dict_to_table.py
similarity index 100%
rename from examples/dict_to_table.py
rename to examples/misc/dict_to_table.py
diff --git a/examples/swarm_eval_deepseek.py b/examples/misc/swarm_eval_deepseek.py
similarity index 100%
rename from examples/swarm_eval_deepseek.py
rename to examples/misc/swarm_eval_deepseek.py
diff --git a/examples/visualizer_test.py b/examples/misc/visualizer_test.py
similarity index 100%
rename from examples/visualizer_test.py
rename to examples/misc/visualizer_test.py
diff --git a/examples/4o_mini_demo.py b/examples/models/4o_mini_demo.py
similarity index 94%
rename from examples/4o_mini_demo.py
rename to examples/models/4o_mini_demo.py
index 90b40d0a..5372e264 100644
--- a/examples/4o_mini_demo.py
+++ b/examples/models/4o_mini_demo.py
@@ -1,22 +1,22 @@
"""
-- For each diagnosis, pull lab results,
-- egfr
-- for each diagnosis, pull lab ranges,
+- For each diagnosis, pull lab results,
+- egfr
+- for each diagnosis, pull lab ranges,
- pull ranges for diagnosis
- if the diagnosis is x, then the lab ranges should be a to b
-- train the agents, increase the load of input
+- train the agents, increase the load of input
- medical history sent to the agent
- setup rag for the agents
-- run the first agent -> kidney disease -> don't know the stage -> stage 2 -> lab results -> indicative of stage 3 -> the case got elavated ->
+- run the first agent -> kidney disease -> don't know the stage -> stage 2 -> lab results -> indicative of stage 3 -> the case got elevated ->
- how to manage diseases and by looking at correlating lab, docs, diagnoses
-- put docs in rag ->
+- put docs in rag ->
- monitoring, evaluation, and treatment
- can we confirm for every diagnosis -> monitoring, evaluation, and treatment, specialized for these things
- find diagnosis -> or have diagnosis, -> for each diagnosis are there evidence of those 3 things
-- swarm of those 4 agents, ->
+- swarm of those 4 agents, ->
 - fda api for healthcare for commercially available papers
--
+-
"""
diff --git a/cerebas_example.py b/examples/models/cerebas_example.py
similarity index 100%
rename from cerebas_example.py
rename to examples/models/cerebas_example.py
diff --git a/examples/models/claude_4.py b/examples/models/claude_4.py
new file mode 100644
index 00000000..491d5c83
--- /dev/null
+++ b/examples/models/claude_4.py
@@ -0,0 +1,21 @@
+from swarms.structs.agent import Agent
+from swarms.structs.council_judge import CouncilAsAJudge
+
+# ========== USAGE EXAMPLE ==========
+
+if __name__ == "__main__":
+ user_query = "How can I establish a ROTH IRA to buy stocks and get a tax break? What are the criteria?"
+
+ base_agent = Agent(
+ agent_name="Financial-Analysis-Agent",
+ system_prompt="You are a financial expert helping users understand and establish ROTH IRAs.",
+ model_name="claude-opus-4-20250514",
+ max_loops=1,
+ )
+
+ model_output = base_agent.run(user_query)
+
+ panel = CouncilAsAJudge()
+ results = panel.run(user_query, model_output)
+
+ print(results)
diff --git a/examples/models/claude_4_example.py b/examples/models/claude_4_example.py
new file mode 100644
index 00000000..ac5b081a
--- /dev/null
+++ b/examples/models/claude_4_example.py
@@ -0,0 +1,19 @@
+from swarms.structs.agent import Agent
+
+# Initialize the agent
+agent = Agent(
+ agent_name="Clinical-Documentation-Agent",
+ agent_description="Specialized agent for clinical documentation and "
+ "medical record analysis",
+ system_prompt="You are a clinical documentation specialist with expertise "
+ "in medical terminology, SOAP notes, and healthcare "
+ "documentation standards. You help analyze and improve "
+ "clinical documentation for accuracy, completeness, and "
+ "compliance.",
+ max_loops=1,
+ model_name="claude-opus-4-20250514",
+ dynamic_temperature_enabled=True,
+ output_type="final",
+)
+
+print(agent.run("what are the best ways to diagnose the flu?"))
diff --git a/examples/deepseek_r1.py b/examples/models/deepseek_r1.py
similarity index 100%
rename from examples/deepseek_r1.py
rename to examples/models/deepseek_r1.py
diff --git a/examples/fast_r1_groq.py b/examples/models/fast_r1_groq.py
similarity index 100%
rename from examples/fast_r1_groq.py
rename to examples/models/fast_r1_groq.py
diff --git a/examples/o3_mini.py b/examples/models/groq_deepseek_agent.py
similarity index 100%
rename from examples/o3_mini.py
rename to examples/models/groq_deepseek_agent.py
diff --git a/examples/llama4_examples/litellm_example.py b/examples/models/llama4_examples/litellm_example.py
similarity index 100%
rename from examples/llama4_examples/litellm_example.py
rename to examples/models/llama4_examples/litellm_example.py
diff --git a/examples/llama4_examples/llama_4.py b/examples/models/llama4_examples/llama_4.py
similarity index 100%
rename from examples/llama4_examples/llama_4.py
rename to examples/models/llama4_examples/llama_4.py
diff --git a/examples/llama4_examples/simple_agent.py b/examples/models/llama4_examples/simple_agent.py
similarity index 100%
rename from examples/llama4_examples/simple_agent.py
rename to examples/models/llama4_examples/simple_agent.py
diff --git a/examples/lumo_example.py b/examples/models/lumo_example.py
similarity index 100%
rename from examples/lumo_example.py
rename to examples/models/lumo_example.py
diff --git a/examples/simple_example_ollama.py b/examples/models/simple_example_ollama.py
similarity index 100%
rename from examples/simple_example_ollama.py
rename to examples/models/simple_example_ollama.py
diff --git a/examples/swarms_claude_example.py b/examples/models/swarms_claude_example.py
similarity index 96%
rename from examples/swarms_claude_example.py
rename to examples/models/swarms_claude_example.py
index 61da9f1e..b0d6c235 100644
--- a/examples/swarms_claude_example.py
+++ b/examples/models/swarms_claude_example.py
@@ -10,7 +10,7 @@ agent = Agent(
system_prompt=FINANCIAL_AGENT_SYS_PROMPT
+ "Output the token when you're done creating a portfolio of etfs, index, funds, and more for AI",
max_loops=1,
- model_name="openai/gpt-4o",
+ model_name="claude-3-sonnet-20240229",
dynamic_temperature_enabled=True,
user_name="Kye",
retry_attempts=3,
diff --git a/examples/test_async_litellm.py b/examples/models/test_async_litellm.py
similarity index 100%
rename from examples/test_async_litellm.py
rename to examples/models/test_async_litellm.py
diff --git a/examples/vllm_example.py b/examples/models/vllm_example.py
similarity index 100%
rename from examples/vllm_example.py
rename to examples/models/vllm_example.py
diff --git a/examples/agents_builder.py b/examples/multi_agent/asb/agents_builder.py
similarity index 100%
rename from examples/agents_builder.py
rename to examples/multi_agent/asb/agents_builder.py
diff --git a/examples/asb/asb_research.py b/examples/multi_agent/asb/asb_research.py
similarity index 100%
rename from examples/asb/asb_research.py
rename to examples/multi_agent/asb/asb_research.py
diff --git a/examples/auto_agent.py b/examples/multi_agent/asb/auto_agent.py
similarity index 100%
rename from examples/auto_agent.py
rename to examples/multi_agent/asb/auto_agent.py
diff --git a/examples/asb/auto_swarm_builder_test.py b/examples/multi_agent/asb/auto_swarm_builder_test.py
similarity index 100%
rename from examples/asb/auto_swarm_builder_test.py
rename to examples/multi_agent/asb/auto_swarm_builder_test.py
diff --git a/examples/auto_swarm_router.py b/examples/multi_agent/asb/auto_swarm_router.py
similarity index 100%
rename from examples/auto_swarm_router.py
rename to examples/multi_agent/asb/auto_swarm_router.py
diff --git a/examples/content_creation_asb.py b/examples/multi_agent/asb/content_creation_asb.py
similarity index 100%
rename from examples/content_creation_asb.py
rename to examples/multi_agent/asb/content_creation_asb.py
diff --git a/examples/concurrent_example.py b/examples/multi_agent/concurrent_examples/concurrent_example.py
similarity index 100%
rename from examples/concurrent_example.py
rename to examples/multi_agent/concurrent_examples/concurrent_example.py
diff --git a/examples/concurrent_examples/concurrent_mix.py b/examples/multi_agent/concurrent_examples/concurrent_mix.py
similarity index 100%
rename from examples/concurrent_examples/concurrent_mix.py
rename to examples/multi_agent/concurrent_examples/concurrent_mix.py
diff --git a/concurrent_swarm_example.py b/examples/multi_agent/concurrent_examples/concurrent_swarm_example.py
similarity index 100%
rename from concurrent_swarm_example.py
rename to examples/multi_agent/concurrent_examples/concurrent_swarm_example.py
diff --git a/examples/multi_agent/council/council_judge_evaluation.py b/examples/multi_agent/council/council_judge_evaluation.py
new file mode 100644
index 00000000..d1ae0190
--- /dev/null
+++ b/examples/multi_agent/council/council_judge_evaluation.py
@@ -0,0 +1,369 @@
+import json
+import time
+from pathlib import Path
+from typing import Any, Dict, Optional
+
+from datasets import load_dataset
+from loguru import logger
+from tqdm import tqdm
+
+from swarms.structs.agent import Agent
+from swarms.structs.council_judge import CouncilAsAJudge
+
+# Dataset configurations
+DATASET_CONFIGS = {
+ "gsm8k": "main",
+ "squad": None, # No specific config needed
+ "winogrande": None,
+ "commonsense_qa": None,
+}
+
+
+base_agent = Agent(
+ agent_name="General-Problem-Solver",
+ system_prompt="""You are an expert problem solver and analytical thinker with deep expertise across multiple domains. Your role is to break down complex problems, identify key patterns, and provide well-reasoned solutions.
+
+Key Responsibilities:
+1. Analyze problems systematically by breaking them into manageable components
+2. Identify relevant patterns, relationships, and dependencies
+3. Apply logical reasoning and critical thinking to evaluate solutions
+4. Consider multiple perspectives and potential edge cases
+5. Provide clear, step-by-step explanations of your reasoning
+6. Validate solutions against given constraints and requirements
+
+Problem-Solving Framework:
+1. Problem Understanding
+ - Identify the core problem and key objectives
+ - Clarify constraints and requirements
+ - Define success criteria
+
+2. Analysis
+ - Break down complex problems into components
+ - Identify relevant patterns and relationships
+ - Consider multiple perspectives and approaches
+
+3. Solution Development
+ - Generate potential solutions
+ - Evaluate trade-offs and implications
+ - Select optimal approach based on criteria
+
+4. Validation
+ - Test solution against requirements
+ - Consider edge cases and potential issues
+ - Verify logical consistency
+
+5. Communication
+ - Present clear, structured reasoning
+ - Explain key decisions and trade-offs
+ - Provide actionable recommendations
+
+Remember to maintain a systematic, analytical approach while being adaptable to different problem domains.""",
+ model_name="gpt-4o-mini",
+ max_loops=1,
+ max_tokens=16000,
+)
+
+
+class CouncilJudgeEvaluator:
+ """
+ Evaluates the Council of Judges using various datasets from Hugging Face.
+ Checks if the council's output contains the correct answer from the dataset.
+ """
+
+ def __init__(
+ self,
+ base_agent: Optional[Agent] = base_agent,
+ model_name: str = "gpt-4o-mini",
+ output_dir: str = "evaluation_results",
+ ):
+ """
+ Initialize the Council Judge Evaluator.
+
+ Args:
+ base_agent: Optional base agent to use for responses
+ model_name: Model to use for evaluations
+ output_dir: Directory to save evaluation results
+ """
+
+ self.council = CouncilAsAJudge(
+ base_agent=base_agent,
+ output_type="final",
+ )
+
+ self.output_dir = Path(output_dir)
+ self.output_dir.mkdir(parents=True, exist_ok=True)
+
+ # Initialize or load existing results
+ self.results_file = (
+ self.output_dir / "evaluation_results.json"
+ )
+ self.results = self._load_or_create_results()
+
+ def _load_or_create_results(self) -> Dict[str, Any]:
+ """Load existing results or create new results structure."""
+ if self.results_file.exists():
+ try:
+ with open(self.results_file, "r") as f:
+ return json.load(f)
+ except json.JSONDecodeError:
+ logger.warning(
+ "Existing results file is corrupted. Creating new one."
+ )
+
+ return {
+ "datasets": {},
+ "last_updated": time.strftime("%Y-%m-%d %H:%M:%S"),
+ "total_evaluations": 0,
+ "total_correct": 0,
+ }
+
+ def _save_results(self):
+ """Save current results to file."""
+ self.results["last_updated"] = time.strftime(
+ "%Y-%m-%d %H:%M:%S"
+ )
+ with open(self.results_file, "w") as f:
+ json.dump(self.results, f, indent=2)
+ logger.info(f"Results saved to {self.results_file}")
+
+ def evaluate_dataset(
+ self,
+ dataset_name: str,
+ split: str = "test",
+ num_samples: Optional[int] = None,
+ save_results: bool = True,
+ ) -> Dict[str, Any]:
+ """
+ Evaluate the Council of Judges on a specific dataset.
+
+ Args:
+ dataset_name: Name of the Hugging Face dataset
+ split: Dataset split to use
+ num_samples: Number of samples to evaluate (None for all)
+ save_results: Whether to save results to file
+
+ Returns:
+ Dictionary containing evaluation metrics and results
+ """
+ logger.info(
+ f"Loading dataset {dataset_name} (split: {split})..."
+ )
+
+ # Get dataset config if needed
+ config = DATASET_CONFIGS.get(dataset_name)
+ if config:
+ dataset = load_dataset(dataset_name, config, split=split)
+ else:
+ dataset = load_dataset(dataset_name, split=split)
+
+ if num_samples:
+ dataset = dataset.select(
+ range(min(num_samples, len(dataset)))
+ )
+
+ # Initialize or get existing dataset results
+ if dataset_name not in self.results["datasets"]:
+ self.results["datasets"][dataset_name] = {
+ "evaluations": [],
+ "correct_answers": 0,
+ "total_evaluated": 0,
+ "accuracy": 0.0,
+ "last_updated": time.strftime("%Y-%m-%d %H:%M:%S"),
+ }
+
+ start_time = time.time()
+
+ for idx, example in enumerate(
+ tqdm(dataset, desc="Evaluating samples")
+ ):
+ try:
+ # Get the input text and correct answer based on dataset structure
+ input_text = self._get_input_text(
+ example, dataset_name
+ )
+ correct_answer = self._get_correct_answer(
+ example, dataset_name
+ )
+
+ # Run evaluation through council
+ evaluation = self.council.run(input_text)
+
+ # Check if the evaluation contains the correct answer
+ is_correct = self._check_answer(
+ evaluation, correct_answer, dataset_name
+ )
+
+ # Create sample result
+ sample_result = {
+ "input": input_text,
+ "correct_answer": correct_answer,
+ "evaluation": evaluation,
+ "is_correct": is_correct,
+ "timestamp": time.strftime("%Y-%m-%d %H:%M:%S"),
+ }
+
+ # Update dataset results
+ self.results["datasets"][dataset_name][
+ "evaluations"
+ ].append(sample_result)
+ if is_correct:
+ self.results["datasets"][dataset_name][
+ "correct_answers"
+ ] += 1
+ self.results["total_correct"] += 1
+ self.results["datasets"][dataset_name][
+ "total_evaluated"
+ ] += 1
+ self.results["total_evaluations"] += 1
+
+ # Update accuracy
+ self.results["datasets"][dataset_name]["accuracy"] = (
+ self.results["datasets"][dataset_name][
+ "correct_answers"
+ ]
+ / self.results["datasets"][dataset_name][
+ "total_evaluated"
+ ]
+ )
+ self.results["datasets"][dataset_name][
+ "last_updated"
+ ] = time.strftime("%Y-%m-%d %H:%M:%S")
+
+ # Save results after each evaluation
+ if save_results:
+ self._save_results()
+
+ except Exception as e:
+ logger.error(
+ f"Error evaluating sample {idx}: {str(e)}"
+ )
+ continue
+
+ # Calculate final metrics
+ results = {
+ "dataset": dataset_name,
+ "split": split,
+ "num_samples": len(dataset),
+ "evaluations": self.results["datasets"][dataset_name][
+ "evaluations"
+ ],
+ "correct_answers": self.results["datasets"][dataset_name][
+ "correct_answers"
+ ],
+ "total_evaluated": self.results["datasets"][dataset_name][
+ "total_evaluated"
+ ],
+ "accuracy": self.results["datasets"][dataset_name][
+ "accuracy"
+ ],
+ "total_time": time.time() - start_time,
+ }
+
+ return results
+
+ def _get_input_text(
+ self, example: Dict, dataset_name: str
+ ) -> str:
+ """Extract input text based on dataset structure."""
+ if dataset_name == "gsm8k":
+ return example["question"]
+ elif dataset_name == "squad":
+ return example["question"]
+ elif dataset_name == "winogrande":
+ return example["sentence"]
+ elif dataset_name == "commonsense_qa":
+ return example["question"]
+ else:
+ # Default to first field that looks like text
+ for key, value in example.items():
+ if isinstance(value, str) and len(value) > 10:
+ return value
+ raise ValueError(
+ f"Could not find input text in example for dataset {dataset_name}"
+ )
+
+ def _get_correct_answer(
+ self, example: Dict, dataset_name: str
+ ) -> str:
+ """Extract correct answer based on dataset structure."""
+ if dataset_name == "gsm8k":
+ return str(example["answer"])
+ elif dataset_name == "squad":
+ return (
+ example["answers"]["text"][0]
+ if isinstance(example["answers"], dict)
+ else str(example["answers"])
+ )
+ elif dataset_name == "winogrande":
+ return str(example["answer"])
+ elif dataset_name == "commonsense_qa":
+ return str(example["answerKey"])
+ else:
+ # Try to find an answer field
+ for key in ["answer", "answers", "label", "target"]:
+ if key in example:
+ return str(example[key])
+ raise ValueError(
+ f"Could not find correct answer in example for dataset {dataset_name}"
+ )
+
+ def _check_answer(
+ self, evaluation: str, correct_answer: str, dataset_name: str
+ ) -> bool:
+ """Check if the evaluation contains the correct answer."""
+ # Convert both to lowercase for case-insensitive comparison
+ evaluation_lower = evaluation.lower()
+ correct_answer_lower = correct_answer.lower()
+
+        # For GSM8K, compare final numerical answers; reference answers end
+        # with "#### <number>"
+        if dataset_name == "gsm8k":
+            try:
+                import re
+
+                gold = re.search(
+                    r"####\s*(-?[\d,\.]+)", correct_answer_lower
+                )
+                gold_value = (
+                    gold.group(1).replace(",", "")
+                    if gold
+                    else correct_answer_lower.strip()
+                )
+                # Look for the final answer in the format
+                # "The answer is X" or "Answer: X"
+                final_answer = re.search(
+                    r"(?:the answer is|answer:)\s*\$?(-?[\d,\.]+)",
+                    evaluation_lower,
+                )
+                if final_answer:
+                    return (
+                        final_answer.group(1).replace(",", "")
+                        == gold_value
+                    )
+            except Exception:
+                pass
+
+ # For other datasets, check if the correct answer is contained in the evaluation
+ return correct_answer_lower in evaluation_lower
+
+
+def main():
+ # Example usage
+ evaluator = CouncilJudgeEvaluator()
+
+ # Evaluate on multiple datasets
+ datasets = ["gsm8k", "squad", "winogrande", "commonsense_qa"]
+
+ for dataset in datasets:
+ try:
+ logger.info(f"\nEvaluating on {dataset}...")
+ results = evaluator.evaluate_dataset(
+ dataset_name=dataset,
+ split="test",
+ num_samples=10, # Limit samples for testing
+ )
+
+ # Print summary
+ print(f"\nResults for {dataset}:")
+ print(f"Accuracy: {results['accuracy']:.3f}")
+ print(
+ f"Correct answers: {results['correct_answers']}/{results['total_evaluated']}"
+ )
+ print(f"Total time: {results['total_time']:.2f} seconds")
+
+ except Exception as e:
+ logger.error(f"Error evaluating {dataset}: {str(e)}")
+ continue
+
+
+if __name__ == "__main__":
+ main()
diff --git a/examples/multi_agent/council/council_judge_example.py b/examples/multi_agent/council/council_judge_example.py
new file mode 100644
index 00000000..634eba28
--- /dev/null
+++ b/examples/multi_agent/council/council_judge_example.py
@@ -0,0 +1,21 @@
+from swarms.structs.agent import Agent
+from swarms.structs.council_judge import CouncilAsAJudge
+
+
+if __name__ == "__main__":
+ user_query = "How can I establish a ROTH IRA to buy stocks and get a tax break? What are the criteria?"
+
+ base_agent = Agent(
+ agent_name="Financial-Analysis-Agent",
+ system_prompt="You are a financial expert helping users understand and establish ROTH IRAs.",
+ model_name="claude-opus-4-20250514",
+ max_loops=1,
+ max_tokens=16000,
+ )
+
+ # model_output = base_agent.run(user_query)
+
+ panel = CouncilAsAJudge(base_agent=base_agent)
+ results = panel.run(user_query)
+
+ print(results)
diff --git a/examples/multi_agent/council/council_of_judges_eval.py b/examples/multi_agent/council/council_of_judges_eval.py
new file mode 100644
index 00000000..ad2e9781
--- /dev/null
+++ b/examples/multi_agent/council/council_of_judges_eval.py
@@ -0,0 +1,19 @@
+from swarms.structs.agent import Agent
+from swarms.structs.council_judge import CouncilAsAJudge
+
+
+if __name__ == "__main__":
+ user_query = "How can I establish a ROTH IRA to buy stocks and get a tax break? What are the criteria?"
+
+ base_agent = Agent(
+ agent_name="Financial-Analysis-Agent",
+ system_prompt="You are a financial expert helping users understand and establish ROTH IRAs.",
+ model_name="claude-opus-4-20250514",
+ max_loops=1,
+ max_tokens=16000,
+ )
+
+ panel = CouncilAsAJudge(base_agent=base_agent)
+ results = panel.run(user_query)
+
+ print(results)
diff --git a/examples/deep_research_example.py b/examples/multi_agent/deep_research_example.py
similarity index 100%
rename from examples/deep_research_example.py
rename to examples/multi_agent/deep_research_example.py
diff --git a/examples/duo_agent.py b/examples/multi_agent/duo_agent.py
similarity index 100%
rename from examples/duo_agent.py
rename to examples/multi_agent/duo_agent.py
diff --git a/examples/forest_swarm_examples/fund_manager_forest.py b/examples/multi_agent/forest_swarm_examples/fund_manager_forest.py
similarity index 100%
rename from examples/forest_swarm_examples/fund_manager_forest.py
rename to examples/multi_agent/forest_swarm_examples/fund_manager_forest.py
diff --git a/examples/forest_swarm_examples/medical_forest_swarm.py b/examples/multi_agent/forest_swarm_examples/medical_forest_swarm.py
similarity index 100%
rename from examples/forest_swarm_examples/medical_forest_swarm.py
rename to examples/multi_agent/forest_swarm_examples/medical_forest_swarm.py
diff --git a/examples/forest_swarm_examples/tree_swarm_test.py b/examples/multi_agent/forest_swarm_examples/tree_swarm_test.py
similarity index 100%
rename from examples/forest_swarm_examples/tree_swarm_test.py
rename to examples/multi_agent/forest_swarm_examples/tree_swarm_test.py
diff --git a/examples/groupchat_examples/crypto_tax.py b/examples/multi_agent/groupchat_examples/crypto_tax.py
similarity index 100%
rename from examples/groupchat_examples/crypto_tax.py
rename to examples/multi_agent/groupchat_examples/crypto_tax.py
diff --git a/examples/groupchat_examples/crypto_tax_swarm 2.py b/examples/multi_agent/groupchat_examples/crypto_tax_swarm 2.py
similarity index 100%
rename from examples/groupchat_examples/crypto_tax_swarm 2.py
rename to examples/multi_agent/groupchat_examples/crypto_tax_swarm 2.py
diff --git a/examples/groupchat_examples/crypto_tax_swarm.py b/examples/multi_agent/groupchat_examples/crypto_tax_swarm.py
similarity index 100%
rename from examples/groupchat_examples/crypto_tax_swarm.py
rename to examples/multi_agent/groupchat_examples/crypto_tax_swarm.py
diff --git a/examples/groupchat_examples/group_chat_example.py b/examples/multi_agent/groupchat_examples/group_chat_example.py
similarity index 100%
rename from examples/groupchat_examples/group_chat_example.py
rename to examples/multi_agent/groupchat_examples/group_chat_example.py
diff --git a/examples/groupchat_example.py b/examples/multi_agent/groupchat_examples/groupchat_example.py
similarity index 100%
rename from examples/groupchat_example.py
rename to examples/multi_agent/groupchat_examples/groupchat_example.py
diff --git a/examples/hiearchical_swarm-example.py b/examples/multi_agent/hiearchical_swarm/hiearchical_swarm-example.py
similarity index 100%
rename from examples/hiearchical_swarm-example.py
rename to examples/multi_agent/hiearchical_swarm/hiearchical_swarm-example.py
diff --git a/examples/hiearchical_swarm.py b/examples/multi_agent/hiearchical_swarm/hiearchical_swarm.py
similarity index 100%
rename from examples/hiearchical_swarm.py
rename to examples/multi_agent/hiearchical_swarm/hiearchical_swarm.py
diff --git a/examples/hs_examples/hierarchical_swarm_example.py b/examples/multi_agent/hiearchical_swarm/hierarchical_swarm_example.py
similarity index 100%
rename from examples/hs_examples/hierarchical_swarm_example.py
rename to examples/multi_agent/hiearchical_swarm/hierarchical_swarm_example.py
diff --git a/examples/hs_examples/hs_stock_team.py b/examples/multi_agent/hiearchical_swarm/hs_stock_team.py
similarity index 100%
rename from examples/hs_examples/hs_stock_team.py
rename to examples/multi_agent/hiearchical_swarm/hs_stock_team.py
diff --git a/examples/hybrid_hiearchical_swarm.py b/examples/multi_agent/hiearchical_swarm/hybrid_hiearchical_swarm.py
similarity index 100%
rename from examples/hybrid_hiearchical_swarm.py
rename to examples/multi_agent/hiearchical_swarm/hybrid_hiearchical_swarm.py
diff --git a/examples/majority_voting_example.py b/examples/multi_agent/majority_voting/majority_voting_example.py
similarity index 100%
rename from examples/majority_voting_example.py
rename to examples/multi_agent/majority_voting/majority_voting_example.py
diff --git a/examples/majority_voting_example_new.py b/examples/multi_agent/majority_voting/majority_voting_example_new.py
similarity index 100%
rename from examples/majority_voting_example_new.py
rename to examples/multi_agent/majority_voting/majority_voting_example_new.py
diff --git a/examples/model_router_example.py b/examples/multi_agent/mar/model_router_example.py
similarity index 100%
rename from examples/model_router_example.py
rename to examples/multi_agent/mar/model_router_example.py
diff --git a/examples/multi_agent_router_example.py b/examples/multi_agent/mar/multi_agent_router_example.py
similarity index 100%
rename from examples/multi_agent_router_example.py
rename to examples/multi_agent/mar/multi_agent_router_example.py
diff --git a/examples/meme_agents/bob_the_agent.py b/examples/multi_agent/meme_agents/bob_the_agent.py
similarity index 100%
rename from examples/meme_agents/bob_the_agent.py
rename to examples/multi_agent/meme_agents/bob_the_agent.py
diff --git a/examples/meme_agents/meme_agent_generator.py b/examples/multi_agent/meme_agents/meme_agent_generator.py
similarity index 100%
rename from examples/meme_agents/meme_agent_generator.py
rename to examples/multi_agent/meme_agents/meme_agent_generator.py
diff --git a/examples/new_spreadsheet_swarm_examples/crypto_tax_swarm/crypto_tax_spreadsheet.py b/examples/multi_agent/new_spreadsheet_swarm_examples/crypto_tax_swarm/crypto_tax_spreadsheet.py
similarity index 100%
rename from examples/new_spreadsheet_swarm_examples/crypto_tax_swarm/crypto_tax_spreadsheet.py
rename to examples/multi_agent/new_spreadsheet_swarm_examples/crypto_tax_swarm/crypto_tax_spreadsheet.py
diff --git a/examples/new_spreadsheet_swarm_examples/crypto_tax_swarm/crypto_tax_swarm_spreadsheet.csv b/examples/multi_agent/new_spreadsheet_swarm_examples/crypto_tax_swarm/crypto_tax_swarm_spreadsheet.csv
similarity index 100%
rename from examples/new_spreadsheet_swarm_examples/crypto_tax_swarm/crypto_tax_swarm_spreadsheet.csv
rename to examples/multi_agent/new_spreadsheet_swarm_examples/crypto_tax_swarm/crypto_tax_swarm_spreadsheet.csv
diff --git a/examples/new_spreadsheet_swarm_examples/financial_analysis/swarm.csv b/examples/multi_agent/new_spreadsheet_swarm_examples/financial_analysis/swarm.csv
similarity index 100%
rename from examples/new_spreadsheet_swarm_examples/financial_analysis/swarm.csv
rename to examples/multi_agent/new_spreadsheet_swarm_examples/financial_analysis/swarm.csv
diff --git a/examples/new_spreadsheet_swarm_examples/financial_analysis/swarm_csv.py b/examples/multi_agent/new_spreadsheet_swarm_examples/financial_analysis/swarm_csv.py
similarity index 100%
rename from examples/new_spreadsheet_swarm_examples/financial_analysis/swarm_csv.py
rename to examples/multi_agent/new_spreadsheet_swarm_examples/financial_analysis/swarm_csv.py
diff --git a/examples/sequential_swarm_example.py b/examples/multi_agent/sequential_workflow/sequential_swarm_example.py
similarity index 100%
rename from examples/sequential_swarm_example.py
rename to examples/multi_agent/sequential_workflow/sequential_swarm_example.py
diff --git a/examples/sequential_workflow/sequential_worflow_test 2.py b/examples/multi_agent/sequential_workflow/sequential_worflow_test 2.py
similarity index 100%
rename from examples/sequential_workflow/sequential_worflow_test 2.py
rename to examples/multi_agent/sequential_workflow/sequential_worflow_test 2.py
diff --git a/examples/sequential_workflow/sequential_worflow_test.py b/examples/multi_agent/sequential_workflow/sequential_worflow_test.py
similarity index 100%
rename from examples/sequential_workflow/sequential_worflow_test.py
rename to examples/multi_agent/sequential_workflow/sequential_worflow_test.py
diff --git a/examples/sequential_workflow/sequential_workflow 2.py b/examples/multi_agent/sequential_workflow/sequential_workflow 2.py
similarity index 100%
rename from examples/sequential_workflow/sequential_workflow 2.py
rename to examples/multi_agent/sequential_workflow/sequential_workflow 2.py
diff --git a/examples/sequential_workflow/sequential_workflow.py b/examples/multi_agent/sequential_workflow/sequential_workflow.py
similarity index 100%
rename from examples/sequential_workflow/sequential_workflow.py
rename to examples/multi_agent/sequential_workflow/sequential_workflow.py
diff --git a/examples/swarm_router.py b/examples/multi_agent/swarm_router/swarm_router.py
similarity index 100%
rename from examples/swarm_router.py
rename to examples/multi_agent/swarm_router/swarm_router.py
diff --git a/examples/swarm_router_example.py b/examples/multi_agent/swarm_router/swarm_router_example.py
similarity index 100%
rename from examples/swarm_router_example.py
rename to examples/multi_agent/swarm_router/swarm_router_example.py
diff --git a/examples/swarm_router_test.py b/examples/multi_agent/swarm_router/swarm_router_test.py
similarity index 100%
rename from examples/swarm_router_test.py
rename to examples/multi_agent/swarm_router/swarm_router_test.py
diff --git a/examples/swarmarrange/rearrange_test.py b/examples/multi_agent/swarmarrange/rearrange_test.py
similarity index 100%
rename from examples/swarmarrange/rearrange_test.py
rename to examples/multi_agent/swarmarrange/rearrange_test.py
diff --git a/examples/swarmarrange/swarm_arange_demo 2.py b/examples/multi_agent/swarmarrange/swarm_arange_demo 2.py
similarity index 100%
rename from examples/swarmarrange/swarm_arange_demo 2.py
rename to examples/multi_agent/swarmarrange/swarm_arange_demo 2.py
diff --git a/examples/swarmarrange/swarm_arange_demo.py b/examples/multi_agent/swarmarrange/swarm_arange_demo.py
similarity index 100%
rename from examples/swarmarrange/swarm_arange_demo.py
rename to examples/multi_agent/swarmarrange/swarm_arange_demo.py
diff --git a/examples/swarms_api_examples/hedge_fund_swarm.py b/examples/multi_agent/swarms_api_examples/hedge_fund_swarm.py
similarity index 100%
rename from examples/swarms_api_examples/hedge_fund_swarm.py
rename to examples/multi_agent/swarms_api_examples/hedge_fund_swarm.py
diff --git a/examples/swarms_api_examples/medical_swarm.py b/examples/multi_agent/swarms_api_examples/medical_swarm.py
similarity index 100%
rename from examples/swarms_api_examples/medical_swarm.py
rename to examples/multi_agent/swarms_api_examples/medical_swarm.py
diff --git a/examples/swarms_api_examples/swarms_api_client.py b/examples/multi_agent/swarms_api_examples/swarms_api_client.py
similarity index 100%
rename from examples/swarms_api_examples/swarms_api_client.py
rename to examples/multi_agent/swarms_api_examples/swarms_api_client.py
diff --git a/examples/swarms_api_examples/swarms_api_example.py b/examples/multi_agent/swarms_api_examples/swarms_api_example.py
similarity index 100%
rename from examples/swarms_api_examples/swarms_api_example.py
rename to examples/multi_agent/swarms_api_examples/swarms_api_example.py
diff --git a/examples/swarms_api_examples/tools_examples.py b/examples/multi_agent/swarms_api_examples/tools_examples.py
similarity index 100%
rename from examples/swarms_api_examples/tools_examples.py
rename to examples/multi_agent/swarms_api_examples/tools_examples.py
diff --git a/examples/unique_swarms_examples.py b/examples/multi_agent/unique_swarms_examples.py
similarity index 100%
rename from examples/unique_swarms_examples.py
rename to examples/multi_agent/unique_swarms_examples.py
diff --git a/examples/redis_conversation.py b/examples/redis_conversation.py
new file mode 100644
index 00000000..fa75af35
--- /dev/null
+++ b/examples/redis_conversation.py
@@ -0,0 +1,52 @@
+from swarms.communication.redis_wrap import RedisConversation
+import json
+import time
+
+
+def print_messages(conv):
+ messages = conv.to_dict()
+ print(f"Messages for conversation '{conv.get_name()}':")
+ print(json.dumps(messages, indent=4))
+
+
+# First session - Add messages
+print("\n=== First Session ===")
+conv = RedisConversation(
+ use_embedded_redis=True,
+ redis_port=6380,
+ token_count=False,
+ cache_enabled=False,
+ auto_persist=True,
+ redis_data_dir="/Users/swarms_wd/.swarms/redis",
+ name="my_test_chat", # Use a friendly name instead of conversation_id
+)
+
+# Add messages
+conv.add("user", "Hello!")
+conv.add("assistant", "Hi there! How can I help?")
+conv.add("user", "What's the weather like?")
+
+# Print current messages
+print_messages(conv)
+
+# Close the first connection
+del conv
+time.sleep(2) # Give Redis time to save
+
+# Second session - Verify persistence
+print("\n=== Second Session ===")
+conv2 = RedisConversation(
+ use_embedded_redis=True,
+ redis_port=6380,
+ token_count=False,
+ cache_enabled=False,
+ auto_persist=True,
+ redis_data_dir="/Users/swarms_wd/.swarms/redis",
+ name="my_test_chat", # Use the same name to restore the conversation
+)
+
+# Print messages from second session
+print_messages(conv2)
+
+# You can also change the name if needed
+# conv2.set_name("weather_chat")
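A minimal verification sketch for the example above, assuming RedisConversation.to_dict() returns the list of stored message dicts as the other communication backends in this changeset do; the expected count of 3 simply mirrors the three add() calls made in the first session:

# Hedged check: confirm the second session restored the messages written
# by the first session under the same conversation name.
restored = conv2.to_dict()
assert len(restored) >= 3, f"expected at least 3 messages, got {len(restored)}"
print(f"Restored {len(restored)} messages for '{conv2.get_name()}'")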
diff --git a/examples/async_agent.py b/examples/single_agent/async_agent.py
similarity index 100%
rename from examples/async_agent.py
rename to examples/single_agent/async_agent.py
diff --git a/examples/example_async_vs_multithread.py b/examples/single_agent/example_async_vs_multithread.py
similarity index 100%
rename from examples/example_async_vs_multithread.py
rename to examples/single_agent/example_async_vs_multithread.py
diff --git a/examples/openai_assistant_wrapper.py b/examples/single_agent/external_agents/openai_assistant_wrapper.py
similarity index 100%
rename from examples/openai_assistant_wrapper.py
rename to examples/single_agent/external_agents/openai_assistant_wrapper.py
diff --git a/examples/insurance_agent.py b/examples/single_agent/insurance_agent.py
similarity index 100%
rename from examples/insurance_agent.py
rename to examples/single_agent/insurance_agent.py
diff --git a/examples/markdown_agent.py b/examples/single_agent/markdown_agent.py
similarity index 100%
rename from examples/markdown_agent.py
rename to examples/single_agent/markdown_agent.py
diff --git a/examples/onboard/agents.yaml b/examples/single_agent/onboard/agents.yaml
similarity index 100%
rename from examples/onboard/agents.yaml
rename to examples/single_agent/onboard/agents.yaml
diff --git a/examples/onboard/onboard-basic.py b/examples/single_agent/onboard/onboard-basic.py
similarity index 100%
rename from examples/onboard/onboard-basic.py
rename to examples/single_agent/onboard/onboard-basic.py
diff --git a/examples/persistent_legal_agent.py b/examples/single_agent/persistent_legal_agent.py
similarity index 100%
rename from examples/persistent_legal_agent.py
rename to examples/single_agent/persistent_legal_agent.py
diff --git a/examples/full_agent_rag_example.py b/examples/single_agent/rag/full_agent_rag_example.py
similarity index 100%
rename from examples/full_agent_rag_example.py
rename to examples/single_agent/rag/full_agent_rag_example.py
diff --git a/examples/single_agent/rag/pinecone_example.py b/examples/single_agent/rag/pinecone_example.py
new file mode 100644
index 00000000..423554bc
--- /dev/null
+++ b/examples/single_agent/rag/pinecone_example.py
@@ -0,0 +1,84 @@
+from swarms.structs.agent import Agent
+import pinecone
+import os
+from dotenv import load_dotenv
+from datetime import datetime
+from sentence_transformers import SentenceTransformer
+
+# Load environment variables
+load_dotenv()
+
+# Initialize Pinecone
+pinecone.init(
+ api_key=os.getenv("PINECONE_API_KEY"),
+ environment=os.getenv("PINECONE_ENVIRONMENT"),
+)
+
+# Initialize the embedding model
+embedding_model = SentenceTransformer("all-MiniLM-L6-v2")
+
+# Create or get the index
+index_name = "financial-agent-memory"
+if index_name not in pinecone.list_indexes():
+ pinecone.create_index(
+ name=index_name,
+ dimension=384, # Embedding dimension for all-MiniLM-L6-v2
+ metric="cosine",
+ )
+
+# Get the index
+pinecone_index = pinecone.Index(index_name)
+
+# Initialize the agent
+agent = Agent(
+ agent_name="Financial-Analysis-Agent",
+ agent_description="Personal finance advisor agent",
+ max_loops=4,
+ model_name="gpt-4o-mini",
+ dynamic_temperature_enabled=True,
+ interactive=False,
+ output_type="all",
+)
+
+
+def run_agent(task):
+ # Run the agent and store the interaction
+ result = agent.run(task)
+
+ # Generate embedding for the document
+ doc_text = f"Task: {task}\nResult: {result}"
+ embedding = embedding_model.encode(doc_text).tolist()
+
+ # Store the interaction in Pinecone
+ pinecone_index.upsert(
+ vectors=[
+ {
+ "id": str(datetime.now().timestamp()),
+ "values": embedding,
+ "metadata": {
+ "agent_name": agent.agent_name,
+ "task_type": "financial_analysis",
+ "timestamp": str(datetime.now()),
+ "text": doc_text,
+ },
+ }
+ ]
+ )
+
+ return result
+
+
+def query_memory(query_text, top_k=5):
+ # Generate embedding for the query
+ query_embedding = embedding_model.encode(query_text).tolist()
+
+ # Query Pinecone
+ results = pinecone_index.query(
+ vector=query_embedding, top_k=top_k, include_metadata=True
+ )
+
+ return results
+
+
+# print(out)
+# print(type(out))
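A short, illustrative driver for the helpers defined above, assuming valid Pinecone and OpenAI credentials are present in the environment; the task string and top_k value are placeholders, and match fields are read via the attribute access that Pinecone query responses expose:

# Hypothetical usage of run_agent() and query_memory() from this example.
if __name__ == "__main__":
    out = run_agent(
        "Summarize the key risks of a 60/40 stock/bond portfolio."
    )
    print(out)

    # Retrieve prior interactions related to the same topic.
    related = query_memory("portfolio risk", top_k=3)
    for match in related.matches:
        print(match.id, match.score, match.metadata.get("task_type"))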
diff --git a/examples/qdrant_agent.py b/examples/single_agent/rag/qdrant_agent.py
similarity index 100%
rename from examples/qdrant_agent.py
rename to examples/single_agent/rag/qdrant_agent.py
diff --git a/examples/reasoning_agent_examples/agent_judge_example.py b/examples/single_agent/reasoning_agent_examples/agent_judge_example.py
similarity index 100%
rename from examples/reasoning_agent_examples/agent_judge_example.py
rename to examples/single_agent/reasoning_agent_examples/agent_judge_example.py
diff --git a/examples/consistency_agent.py b/examples/single_agent/reasoning_agent_examples/consistency_agent.py
similarity index 100%
rename from examples/consistency_agent.py
rename to examples/single_agent/reasoning_agent_examples/consistency_agent.py
diff --git a/examples/reasoning_agent_examples/gpk_agent.py b/examples/single_agent/reasoning_agent_examples/gpk_agent.py
similarity index 100%
rename from examples/reasoning_agent_examples/gpk_agent.py
rename to examples/single_agent/reasoning_agent_examples/gpk_agent.py
diff --git a/examples/iterative_agent.py b/examples/single_agent/reasoning_agent_examples/iterative_agent.py
similarity index 100%
rename from examples/iterative_agent.py
rename to examples/single_agent/reasoning_agent_examples/iterative_agent.py
diff --git a/examples/malt_example.py b/examples/single_agent/reasoning_agent_examples/malt_example.py
similarity index 100%
rename from examples/malt_example.py
rename to examples/single_agent/reasoning_agent_examples/malt_example.py
diff --git a/examples/reasoning_agent_router.py b/examples/single_agent/reasoning_agent_examples/reasoning_agent_router.py
similarity index 100%
rename from examples/reasoning_agent_router.py
rename to examples/single_agent/reasoning_agent_examples/reasoning_agent_router.py
diff --git a/examples/reasoning_duo.py b/examples/single_agent/reasoning_agent_examples/reasoning_duo.py
similarity index 100%
rename from examples/reasoning_duo.py
rename to examples/single_agent/reasoning_agent_examples/reasoning_duo.py
diff --git a/examples/reasoning_duo_example.py b/examples/single_agent/reasoning_agent_examples/reasoning_duo_example.py
similarity index 100%
rename from examples/reasoning_duo_example.py
rename to examples/single_agent/reasoning_agent_examples/reasoning_duo_example.py
diff --git a/examples/litellm_tool_example.py b/examples/single_agent/tools/litellm_tool_example.py
similarity index 100%
rename from examples/litellm_tool_example.py
rename to examples/single_agent/tools/litellm_tool_example.py
diff --git a/examples/multi_tool_usage_agent.py b/examples/single_agent/tools/multi_tool_usage_agent.py
similarity index 100%
rename from examples/multi_tool_usage_agent.py
rename to examples/single_agent/tools/multi_tool_usage_agent.py
diff --git a/examples/solana_tool/solana_tool.py b/examples/single_agent/tools/solana_tool/solana_tool.py
similarity index 100%
rename from examples/solana_tool/solana_tool.py
rename to examples/single_agent/tools/solana_tool/solana_tool.py
diff --git a/examples/solana_tool/solana_tool_test.py b/examples/single_agent/tools/solana_tool/solana_tool_test.py
similarity index 100%
rename from examples/solana_tool/solana_tool_test.py
rename to examples/single_agent/tools/solana_tool/solana_tool_test.py
diff --git a/examples/structured_outputs/example_meaning_of_life_agents.py b/examples/single_agent/tools/structured_outputs/example_meaning_of_life_agents.py
similarity index 100%
rename from examples/structured_outputs/example_meaning_of_life_agents.py
rename to examples/single_agent/tools/structured_outputs/example_meaning_of_life_agents.py
diff --git a/examples/structured_outputs/structured_outputs_example.py b/examples/single_agent/tools/structured_outputs/structured_outputs_example.py
similarity index 100%
rename from examples/structured_outputs/structured_outputs_example.py
rename to examples/single_agent/tools/structured_outputs/structured_outputs_example.py
diff --git a/examples/tools_examples/dex_screener.py b/examples/single_agent/tools/tools_examples/dex_screener.py
similarity index 100%
rename from examples/tools_examples/dex_screener.py
rename to examples/single_agent/tools/tools_examples/dex_screener.py
diff --git a/examples/tools_examples/financial_news_agent.py b/examples/single_agent/tools/tools_examples/financial_news_agent.py
similarity index 100%
rename from examples/tools_examples/financial_news_agent.py
rename to examples/single_agent/tools/tools_examples/financial_news_agent.py
diff --git a/examples/tools_examples/swarms_tool_example_simple.py b/examples/single_agent/tools/tools_examples/swarms_tool_example_simple.py
similarity index 100%
rename from examples/tools_examples/swarms_tool_example_simple.py
rename to examples/single_agent/tools/tools_examples/swarms_tool_example_simple.py
diff --git a/examples/tools_examples/swarms_tools_example.py b/examples/single_agent/tools/tools_examples/swarms_tools_example.py
similarity index 100%
rename from examples/tools_examples/swarms_tools_example.py
rename to examples/single_agent/tools/tools_examples/swarms_tools_example.py
diff --git a/examples/solana_agent.py b/examples/solana_agent.py
deleted file mode 100644
index 28622f57..00000000
--- a/examples/solana_agent.py
+++ /dev/null
@@ -1,354 +0,0 @@
-from dataclasses import dataclass
-from typing import List, Optional, Dict, Any
-from datetime import datetime
-import asyncio
-from loguru import logger
-import json
-import base58
-from decimal import Decimal
-
-# Swarms imports
-from swarms import Agent
-
-# Solana imports
-from solders.rpc.responses import GetTransactionResp
-from solders.transaction import Transaction
-from anchorpy import Provider, Wallet
-from solders.keypair import Keypair
-import aiohttp
-
-# Specialized Solana Analysis System Prompt
-SOLANA_ANALYSIS_PROMPT = """You are a specialized Solana blockchain analyst agent. Your role is to:
-
-1. Analyze real-time Solana transactions for patterns and anomalies
-2. Identify potential market-moving transactions and whale movements
-3. Detect important DeFi interactions across major protocols
-4. Monitor program interactions for suspicious or notable activity
-5. Track token movements across significant protocols like:
- - Serum DEX
- - Raydium
- - Orca
- - Marinade
- - Jupiter
- - Other major Solana protocols
-
-When analyzing transactions, consider:
-- Transaction size relative to protocol norms
-- Historical patterns for involved addresses
-- Impact on protocol liquidity
-- Relationship to known market events
-- Potential wash trading or suspicious patterns
-- MEV opportunities and arbitrage patterns
-- Program interaction sequences
-
-Provide analysis in the following format:
-{
- "analysis_type": "[whale_movement|program_interaction|defi_trade|suspicious_activity]",
- "severity": "[high|medium|low]",
- "details": {
- "transaction_context": "...",
- "market_impact": "...",
- "recommended_actions": "...",
- "related_patterns": "..."
- }
-}
-
-Focus on actionable insights that could affect:
-1. Market movements
-2. Protocol stability
-3. Trading opportunities
-4. Risk management
-"""
-
-
-@dataclass
-class TransactionData:
- """Data structure for parsed Solana transaction information"""
-
- signature: str
- block_time: datetime
- slot: int
- fee: int
- lamports: int
- from_address: str
- to_address: str
- program_id: str
- instruction_data: Optional[str] = None
- program_logs: List[str] = None
-
- @property
- def sol_amount(self) -> Decimal:
- """Convert lamports to SOL"""
- return Decimal(self.lamports) / Decimal(1e9)
-
- def to_dict(self) -> Dict[str, Any]:
- """Convert transaction data to dictionary for agent analysis"""
- return {
- "signature": self.signature,
- "timestamp": self.block_time.isoformat(),
- "slot": self.slot,
- "fee": self.fee,
- "amount_sol": str(self.sol_amount),
- "from_address": self.from_address,
- "to_address": self.to_address,
- "program_id": self.program_id,
- "instruction_data": self.instruction_data,
- "program_logs": self.program_logs,
- }
-
-
-class SolanaSwarmAgent:
- """Intelligent agent for analyzing Solana transactions using swarms"""
-
- def __init__(
- self,
- agent_name: str = "Solana-Analysis-Agent",
- model_name: str = "gpt-4",
- ):
- self.agent = Agent(
- agent_name=agent_name,
- system_prompt=SOLANA_ANALYSIS_PROMPT,
- model_name=model_name,
- max_loops=1,
- autosave=True,
- dashboard=False,
- verbose=True,
- dynamic_temperature_enabled=True,
- saved_state_path="solana_agent.json",
- user_name="solana_analyzer",
- retry_attempts=3,
- context_length=4000,
- )
-
- # Initialize known patterns database
- self.known_patterns = {
- "whale_addresses": set(),
- "program_interactions": {},
- "recent_transactions": [],
- }
- logger.info(
- f"Initialized {agent_name} with specialized Solana analysis capabilities"
- )
-
- async def analyze_transaction(
- self, tx_data: TransactionData
- ) -> Dict[str, Any]:
- """Analyze a transaction using the specialized agent"""
- try:
- # Update recent transactions for pattern analysis
- self.known_patterns["recent_transactions"].append(
- tx_data.signature
- )
- if len(self.known_patterns["recent_transactions"]) > 1000:
- self.known_patterns["recent_transactions"].pop(0)
-
- # Prepare context for agent
- context = {
- "transaction": tx_data.to_dict(),
- "known_patterns": {
- "recent_similar_transactions": [
- tx
- for tx in self.known_patterns[
- "recent_transactions"
- ][-5:]
- if abs(
- TransactionData(tx).sol_amount
- - tx_data.sol_amount
- )
- < 1
- ],
- "program_statistics": self.known_patterns[
- "program_interactions"
- ].get(tx_data.program_id, {}),
- },
- }
-
- # Get analysis from agent
- analysis = await self.agent.run_async(
- f"Analyze the following Solana transaction and provide insights: {json.dumps(context, indent=2)}"
- )
-
- # Update pattern database
- if tx_data.sol_amount > 1000: # Track whale addresses
- self.known_patterns["whale_addresses"].add(
- tx_data.from_address
- )
-
- # Update program interaction statistics
- if (
- tx_data.program_id
- not in self.known_patterns["program_interactions"]
- ):
- self.known_patterns["program_interactions"][
- tx_data.program_id
- ] = {"total_interactions": 0, "total_volume": 0}
- self.known_patterns["program_interactions"][
- tx_data.program_id
- ]["total_interactions"] += 1
- self.known_patterns["program_interactions"][
- tx_data.program_id
- ]["total_volume"] += float(tx_data.sol_amount)
-
- return json.loads(analysis)
-
- except Exception as e:
- logger.error(f"Error in agent analysis: {str(e)}")
- return {
- "analysis_type": "error",
- "severity": "low",
- "details": {
- "error": str(e),
- "transaction": tx_data.signature,
- },
- }
-
-
-class SolanaTransactionMonitor:
- """Main class for monitoring and analyzing Solana transactions"""
-
- def __init__(
- self,
- rpc_url: str,
- swarm_agent: SolanaSwarmAgent,
- min_sol_threshold: Decimal = Decimal("100"),
- ):
- self.rpc_url = rpc_url
- self.swarm_agent = swarm_agent
- self.min_sol_threshold = min_sol_threshold
- self.wallet = Wallet(Keypair())
- self.provider = Provider(rpc_url, self.wallet)
- logger.info("Initialized Solana transaction monitor")
-
- async def parse_transaction(
- self, tx_resp: GetTransactionResp
- ) -> Optional[TransactionData]:
- """Parse transaction response into TransactionData object"""
- try:
- if not tx_resp.value:
- return None
-
- tx_value = tx_resp.value
- meta = tx_value.transaction.meta
- if not meta:
- return None
-
- tx: Transaction = tx_value.transaction.transaction
-
- # Extract transaction details
- from_pubkey = str(tx.message.account_keys[0])
- to_pubkey = str(tx.message.account_keys[1])
- program_id = str(tx.message.account_keys[-1])
-
- # Calculate amount from balance changes
- amount = abs(meta.post_balances[0] - meta.pre_balances[0])
-
- return TransactionData(
- signature=str(tx_value.transaction.signatures[0]),
- block_time=datetime.fromtimestamp(
- tx_value.block_time or 0
- ),
- slot=tx_value.slot,
- fee=meta.fee,
- lamports=amount,
- from_address=from_pubkey,
- to_address=to_pubkey,
- program_id=program_id,
- program_logs=(
- meta.log_messages if meta.log_messages else []
- ),
- )
- except Exception as e:
- logger.error(f"Failed to parse transaction: {str(e)}")
- return None
-
- async def start_monitoring(self):
- """Start monitoring for new transactions"""
- logger.info(
- "Starting transaction monitoring with swarm agent analysis"
- )
-
- async with aiohttp.ClientSession() as session:
- async with session.ws_connect(self.rpc_url) as ws:
- await ws.send_json(
- {
- "jsonrpc": "2.0",
- "id": 1,
- "method": "transactionSubscribe",
- "params": [
- {"commitment": "finalized"},
- {
- "encoding": "jsonParsed",
- "commitment": "finalized",
- },
- ],
- }
- )
-
- async for msg in ws:
- if msg.type == aiohttp.WSMsgType.TEXT:
- try:
- data = json.loads(msg.data)
- if "params" in data:
- signature = data["params"]["result"][
- "value"
- ]["signature"]
-
- # Fetch full transaction data
- tx_response = await self.provider.connection.get_transaction(
- base58.b58decode(signature)
- )
-
- if tx_response:
- tx_data = (
- await self.parse_transaction(
- tx_response
- )
- )
- if (
- tx_data
- and tx_data.sol_amount
- >= self.min_sol_threshold
- ):
- # Get agent analysis
- analysis = await self.swarm_agent.analyze_transaction(
- tx_data
- )
-
- logger.info(
- f"Transaction Analysis:\n"
- f"Signature: {tx_data.signature}\n"
- f"Amount: {tx_data.sol_amount} SOL\n"
- f"Analysis: {json.dumps(analysis, indent=2)}"
- )
-
- except Exception as e:
- logger.error(
- f"Error processing message: {str(e)}"
- )
- continue
-
-
-async def main():
- """Example usage"""
-
- # Start monitoring
- try:
- # Initialize swarm agent
- swarm_agent = SolanaSwarmAgent(
- agent_name="Solana-Whale-Detector", model_name="gpt-4"
- )
-
- # Initialize monitor
- monitor = SolanaTransactionMonitor(
- rpc_url="wss://api.mainnet-beta.solana.com",
- swarm_agent=swarm_agent,
- min_sol_threshold=Decimal("100"),
- )
-
- await monitor.start_monitoring()
- except KeyboardInterrupt:
- logger.info("Shutting down gracefully...")
-
-
-if __name__ == "__main__":
- asyncio.run(main())
diff --git a/examples/mcp_exampler.py b/examples/tools/mcp_exampler.py
similarity index 100%
rename from examples/mcp_exampler.py
rename to examples/tools/mcp_exampler.py
diff --git a/examples/omni_modal_agent.py b/examples/tools/omni_modal_agent.py
similarity index 100%
rename from examples/omni_modal_agent.py
rename to examples/tools/omni_modal_agent.py
diff --git a/examples/swarms_of_browser_agents.py b/examples/tools/swarms_of_browser_agents.py
similarity index 100%
rename from examples/swarms_of_browser_agents.py
rename to examples/tools/swarms_of_browser_agents.py
diff --git a/examples/together_deepseek_agent.py b/examples/tools/together_deepseek_agent.py
similarity index 100%
rename from examples/together_deepseek_agent.py
rename to examples/tools/together_deepseek_agent.py
diff --git a/examples/voice.py b/examples/voice.py
deleted file mode 100644
index e0f20752..00000000
--- a/examples/voice.py
+++ /dev/null
@@ -1,416 +0,0 @@
-from __future__ import annotations
-
-import asyncio
-import base64
-import io
-import threading
-from os import getenv
-from typing import Any, Awaitable, Callable, cast
-
-import numpy as np
-
-try:
- import pyaudio
-except ImportError:
- import subprocess
-
- subprocess.check_call(["pip", "install", "pyaudio"])
- import pyaudio
-try:
- import sounddevice as sd
-except ImportError:
- import subprocess
-
- subprocess.check_call(["pip", "install", "sounddevice"])
- import sounddevice as sd
-from loguru import logger
-from openai import AsyncOpenAI
-from openai.resources.beta.realtime.realtime import (
- AsyncRealtimeConnection,
-)
-from openai.types.beta.realtime.session import Session
-
-try:
- from pydub import AudioSegment
-except ImportError:
- import subprocess
-
- subprocess.check_call(["pip", "install", "pydub"])
- from pydub import AudioSegment
-
-from dotenv import load_dotenv
-
-load_dotenv()
-
-
-CHUNK_LENGTH_S = 0.05 # 100ms
-SAMPLE_RATE = 24000
-FORMAT = pyaudio.paInt16
-CHANNELS = 1
-
-# pyright: reportUnknownMemberType=false, reportUnknownVariableType=false, reportUnknownArgumentType=false
-
-
-def audio_to_pcm16_base64(audio_bytes: bytes) -> bytes:
- # load the audio file from the byte stream
- audio = AudioSegment.from_file(io.BytesIO(audio_bytes))
- print(
- f"Loaded audio: {audio.frame_rate=} {audio.channels=} {audio.sample_width=} {audio.frame_width=}"
- )
- # resample to 24kHz mono pcm16
- pcm_audio = (
- audio.set_frame_rate(SAMPLE_RATE)
- .set_channels(CHANNELS)
- .set_sample_width(2)
- .raw_data
- )
- return pcm_audio
-
-
-class AudioPlayerAsync:
- def __init__(self):
- self.queue = []
- self.lock = threading.Lock()
- self.stream = sd.OutputStream(
- callback=self.callback,
- samplerate=SAMPLE_RATE,
- channels=CHANNELS,
- dtype=np.int16,
- blocksize=int(CHUNK_LENGTH_S * SAMPLE_RATE),
- )
- self.playing = False
- self._frame_count = 0
-
- def callback(self, outdata, frames, time, status): # noqa
- with self.lock:
- data = np.empty(0, dtype=np.int16)
-
- # get next item from queue if there is still space in the buffer
- while len(data) < frames and len(self.queue) > 0:
- item = self.queue.pop(0)
- frames_needed = frames - len(data)
- data = np.concatenate((data, item[:frames_needed]))
- if len(item) > frames_needed:
- self.queue.insert(0, item[frames_needed:])
-
- self._frame_count += len(data)
-
- # fill the rest of the frames with zeros if there is no more data
- if len(data) < frames:
- data = np.concatenate(
- (
- data,
- np.zeros(frames - len(data), dtype=np.int16),
- )
- )
-
- outdata[:] = data.reshape(-1, 1)
-
- def reset_frame_count(self):
- self._frame_count = 0
-
- def get_frame_count(self):
- return self._frame_count
-
- def add_data(self, data: bytes):
- with self.lock:
- # bytes is pcm16 single channel audio data, convert to numpy array
- np_data = np.frombuffer(data, dtype=np.int16)
- self.queue.append(np_data)
- if not self.playing:
- self.start()
-
- def start(self):
- self.playing = True
- self.stream.start()
-
- def stop(self):
- self.playing = False
- self.stream.stop()
- with self.lock:
- self.queue = []
-
- def terminate(self):
- self.stream.close()
-
-
-async def send_audio_worker_sounddevice(
- connection: AsyncRealtimeConnection,
- should_send: Callable[[], bool] | None = None,
- start_send: Callable[[], Awaitable[None]] | None = None,
-):
- sent_audio = False
-
- device_info = sd.query_devices()
- print(device_info)
-
- read_size = int(SAMPLE_RATE * 0.02)
-
- stream = sd.InputStream(
- channels=CHANNELS,
- samplerate=SAMPLE_RATE,
- dtype="int16",
- )
- stream.start()
-
- try:
- while True:
- if stream.read_available < read_size:
- await asyncio.sleep(0)
- continue
-
- data, _ = stream.read(read_size)
-
- if should_send() if should_send else True:
- if not sent_audio and start_send:
- await start_send()
- await connection.send(
- {
- "type": "input_audio_buffer.append",
- "audio": base64.b64encode(data).decode(
- "utf-8"
- ),
- }
- )
- sent_audio = True
-
- elif sent_audio:
- print("Done, triggering inference")
- await connection.send(
- {"type": "input_audio_buffer.commit"}
- )
- await connection.send(
- {"type": "response.create", "response": {}}
- )
- sent_audio = False
-
- await asyncio.sleep(0)
-
- except KeyboardInterrupt:
- pass
- finally:
- stream.stop()
- stream.close()
-
-
-class RealtimeApp:
- """
- A console-based application to handle real-time audio recording and streaming,
- connecting to OpenAI's GPT-4 Realtime API.
-
- Features:
- - Streams microphone input to the GPT-4 Realtime API.
- - Logs transcription results.
- - Sends text prompts to the GPT-4 Realtime API.
- """
-
- def __init__(self, system_prompt: str = None) -> None:
- self.connection: AsyncRealtimeConnection | None = None
- self.session: Session | None = None
- self.client = AsyncOpenAI(api_key=getenv("OPENAI_API_KEY"))
- self.audio_player = AudioPlayerAsync()
- self.last_audio_item_id: str | None = None
- self.should_send_audio = asyncio.Event()
- self.connected = asyncio.Event()
- self.system_prompt = system_prompt
-
- async def initialize_text_prompt(self, text: str) -> None:
- """Initialize and send a text prompt to the OpenAI Realtime API."""
- try:
- async with self.client.beta.realtime.connect(
- model="gpt-4o-realtime-preview-2024-10-01"
- ) as conn:
- self.connection = conn
- await conn.session.update(
- session={"modalities": ["text"]}
- )
-
- await conn.conversation.item.create(
- item={
- "type": "message",
- "role": "system",
- "content": [
- {"type": "input_text", "text": text}
- ],
- }
- )
- await conn.response.create()
-
- async for event in conn:
- if event.type == "response.text.delta":
- print(event.delta, flush=True, end="")
-
- elif event.type == "response.text.done":
- print()
-
- elif event.type == "response.done":
- break
- except Exception as e:
- logger.exception(f"Error initializing text prompt: {e}")
-
- async def handle_realtime_connection(self) -> None:
- """Handle the connection to the OpenAI Realtime API."""
- try:
- async with self.client.beta.realtime.connect(
- model="gpt-4o-realtime-preview-2024-10-01"
- ) as conn:
- self.connection = conn
- self.connected.set()
- logger.info("Connected to OpenAI Realtime API.")
-
- await conn.session.update(
- session={"turn_detection": {"type": "server_vad"}}
- )
-
- acc_items: dict[str, Any] = {}
-
- async for event in conn:
- if event.type == "session.created":
- self.session = event.session
- assert event.session.id is not None
- logger.info(
- f"Session created with ID: {event.session.id}"
- )
- continue
-
- if event.type == "session.updated":
- self.session = event.session
- logger.info("Session updated.")
- continue
-
- if event.type == "response.audio.delta":
- if event.item_id != self.last_audio_item_id:
- self.audio_player.reset_frame_count()
- self.last_audio_item_id = event.item_id
-
- bytes_data = base64.b64decode(event.delta)
- self.audio_player.add_data(bytes_data)
- continue
-
- if (
- event.type
- == "response.audio_transcript.delta"
- ):
- try:
- text = acc_items[event.item_id]
- except KeyError:
- acc_items[event.item_id] = event.delta
- else:
- acc_items[event.item_id] = (
- text + event.delta
- )
-
- logger.debug(
- f"Transcription updated: {acc_items[event.item_id]}"
- )
- continue
-
- if event.type == "response.text.delta":
- print(event.delta, flush=True, end="")
- continue
-
- if event.type == "response.text.done":
- print()
- continue
-
- if event.type == "response.done":
- break
- except Exception as e:
- logger.exception(
- f"Error in realtime connection handler: {e}"
- )
-
- async def _get_connection(self) -> AsyncRealtimeConnection:
- """Wait for and return the realtime connection."""
- await self.connected.wait()
- assert self.connection is not None
- return self.connection
-
- async def send_text_prompt(self, text: str) -> None:
- """Send a text prompt to the OpenAI Realtime API."""
- try:
- connection = await self._get_connection()
- if not self.session:
- logger.error(
- "Session is not initialized. Cannot send prompt."
- )
- return
-
- logger.info(f"Sending prompt to the model: {text}")
- await connection.conversation.item.create(
- item={
- "type": "message",
- "role": "user",
- "content": [{"type": "input_text", "text": text}],
- }
- )
- await connection.response.create()
- except Exception as e:
- logger.exception(f"Error sending text prompt: {e}")
-
- async def send_mic_audio(self) -> None:
- """Stream microphone audio to the OpenAI Realtime API."""
- import sounddevice as sd # type: ignore
-
- sent_audio = False
-
- try:
- read_size = int(SAMPLE_RATE * 0.02)
- stream = sd.InputStream(
- channels=CHANNELS,
- samplerate=SAMPLE_RATE,
- dtype="int16",
- )
- stream.start()
-
- while True:
- if stream.read_available < read_size:
- await asyncio.sleep(0)
- continue
-
- await self.should_send_audio.wait()
-
- data, _ = stream.read(read_size)
-
- connection = await self._get_connection()
- if not sent_audio:
- asyncio.create_task(
- connection.send({"type": "response.cancel"})
- )
- sent_audio = True
-
- await connection.input_audio_buffer.append(
- audio=base64.b64encode(cast(Any, data)).decode(
- "utf-8"
- )
- )
- await asyncio.sleep(0)
- except Exception as e:
- logger.exception(
- f"Error in microphone audio streaming: {e}"
- )
- finally:
- stream.stop()
- stream.close()
-
- async def run(self) -> None:
- """Start the application tasks."""
- logger.info("Starting application tasks.")
-
- await asyncio.gather(
- # self.initialize_text_prompt(self.system_prompt),
- self.handle_realtime_connection(),
- self.send_mic_audio(),
- )
-
-
-if __name__ == "__main__":
- logger.add(
- "realtime_app.log",
- rotation="10 MB",
- retention="10 days",
- level="DEBUG",
- )
- logger.info("Starting RealtimeApp.")
- app = RealtimeApp()
- asyncio.run(app.run())
diff --git a/pyproject.toml b/pyproject.toml
index a236bcb1..0f40e39a 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -5,7 +5,7 @@ build-backend = "poetry.core.masonry.api"
[tool.poetry]
name = "swarms"
-version = "7.7.8"
+version = "7.7.9"
description = "Swarms - TGSC"
license = "MIT"
authors = ["Kye Gomez "]
@@ -119,10 +119,3 @@ exclude = '''
)/
'''
-
-
-[tool.maturin]
-module-name = "swarms_rust"
-
-[tool.maturin.build]
-features = ["extension-module"]
diff --git a/tests/run_all_tests.py b/scripts/run_all_tests.py
similarity index 100%
rename from tests/run_all_tests.py
rename to scripts/run_all_tests.py
diff --git a/tests/test_upload_tests_to_issues.py b/scripts/test_upload_tests_to_issues.py
similarity index 100%
rename from tests/test_upload_tests_to_issues.py
rename to scripts/test_upload_tests_to_issues.py
diff --git a/swarms/__init__.py b/swarms/__init__.py
index 1e12dd9f..10188655 100644
--- a/swarms/__init__.py
+++ b/swarms/__init__.py
@@ -15,4 +15,3 @@ from swarms.structs import * # noqa: E402, F403
from swarms.telemetry import * # noqa: E402, F403
from swarms.tools import * # noqa: E402, F403
from swarms.utils import * # noqa: E402, F403
-from swarms.client import * # noqa: E402, F403
diff --git a/swarms/client/__init__.py b/swarms/client/__init__.py
deleted file mode 100644
index 1134259c..00000000
--- a/swarms/client/__init__.py
+++ /dev/null
@@ -1,15 +0,0 @@
-from swarms.client.main import (
- SwarmsAPIClient,
- AgentInput,
- SwarmRequest,
- SwarmAPIError,
- SwarmAuthenticationError,
-)
-
-__all__ = [
- "SwarmsAPIClient",
- "AgentInput",
- "SwarmRequest",
- "SwarmAPIError",
- "SwarmAuthenticationError",
-]
diff --git a/swarms/client/main.py b/swarms/client/main.py
deleted file mode 100644
index 801a349c..00000000
--- a/swarms/client/main.py
+++ /dev/null
@@ -1,407 +0,0 @@
-import json
-import os
-from typing import List, Literal, Optional
-
-import httpx
-from swarms.utils.loguru_logger import initialize_logger
-from pydantic import BaseModel, Field
-from tenacity import retry, stop_after_attempt, wait_exponential
-from swarms.structs.swarm_router import SwarmType
-from typing import Any
-
-logger = initialize_logger(log_folder="swarms_api")
-
-
-class AgentInput(BaseModel):
- agent_name: Optional[str] = Field(
- None,
- description="The name of the agent, limited to 100 characters.",
- max_length=100,
- )
- description: Optional[str] = Field(
- None,
- description="A detailed description of the agent's purpose and capabilities, up to 500 characters.",
- max_length=500,
- )
- system_prompt: Optional[str] = Field(
- None,
- description="The initial prompt or instructions given to the agent.",
- )
- model_name: Optional[str] = Field(
- "gpt-4o",
- description="The name of the model used by the agent. Model names can be configured like provider/model_name",
- )
- auto_generate_prompt: Optional[bool] = Field(
- False,
- description="Indicates whether the agent should automatically generate prompts.",
- )
- max_tokens: Optional[int] = Field(
- 8192,
- description="The maximum number of tokens the agent can use in its responses.",
- )
- temperature: Optional[float] = Field(
- 0.5,
- description="Controls the randomness of the agent's responses; higher values result in more random outputs.",
- )
- role: Optional[str] = Field(
- "worker",
- description="The role assigned to the agent, such as 'worker' or 'manager'.",
- )
- max_loops: Optional[int] = Field(
- 1,
- description="The maximum number of iterations the agent is allowed to perform.",
- )
- dynamic_temperature_enabled: Optional[bool] = Field(
- True,
- description="Indicates whether the agent should use dynamic temperature.",
- )
-
-
-class SwarmRequest(BaseModel):
- name: Optional[str] = Field(
- "swarms-01",
- description="The name of the swarm, limited to 100 characters.",
- max_length=100,
- )
- description: Optional[str] = Field(
- None,
- description="A comprehensive description of the swarm's objectives and scope, up to 500 characters.",
- max_length=500,
- )
- agents: Optional[List[AgentInput]] = Field(
- None,
- description="A list of agents that are part of the swarm.",
- )
- max_loops: Optional[int] = Field(
- 1,
- description="The maximum number of iterations the swarm can execute.",
- )
- swarm_type: Optional[SwarmType] = Field(
- None,
- description="The type of swarm, defining its operational structure and behavior.",
- )
- rearrange_flow: Optional[str] = Field(
- None,
- description="The flow or sequence in which agents are rearranged during the swarm's operation.",
- )
- task: Optional[str] = Field(
- None,
- description="The specific task or objective the swarm is designed to accomplish.",
- )
- img: Optional[str] = Field(
- None,
- description="A URL to an image associated with the swarm, if applicable.",
- )
- return_history: Optional[bool] = Field(
- True,
- description="Determines whether the full history of the swarm's operations should be returned.",
- )
- rules: Optional[str] = Field(
- None,
- description="Any specific rules or guidelines that the swarm should follow.",
- )
- output_type: Optional[str] = Field(
- "str",
- description="The format in which the swarm's output should be returned, such as 'str', 'json', or 'dict'.",
- )
-
-
-# class SwarmResponse(BaseModel):
-# swarm_id: str
-# status: str
-# result: Optional[str]
-# error: Optional[str]
-
-
-class HealthResponse(BaseModel):
- status: str
- version: str
-
-
-class SwarmAPIError(Exception):
- """Base exception for Swarms API errors."""
-
- pass
-
-
-class SwarmAuthenticationError(SwarmAPIError):
- """Raised when authentication fails."""
-
- pass
-
-
-class SwarmValidationError(SwarmAPIError):
- """Raised when request validation fails."""
-
- pass
-
-
-class SwarmsAPIClient:
- """Production-grade client for the Swarms API."""
-
- def __init__(
- self,
- api_key: Optional[str] = None,
- base_url: str = "https://api.swarms.world",
- timeout: int = 30,
- max_retries: int = 3,
- format_type: Literal["pydantic", "json", "dict"] = "pydantic",
- ):
- """Initialize the Swarms API client.
-
- Args:
- api_key: API key for authentication. If not provided, looks for SWARMS_API_KEY env var
- base_url: Base URL for the API
- timeout: Request timeout in seconds
- max_retries: Maximum number of retries for failed requests
- format_type: Desired output format ('pydantic', 'json', 'dict')
- """
- self.api_key = api_key or os.getenv("SWARMS_API_KEY")
-
- if not self.api_key:
- logger.error(
- "API key not provided and SWARMS_API_KEY env var not found"
- )
- raise SwarmAuthenticationError(
- "API key not provided and SWARMS_API_KEY env var not found"
- )
-
- self.base_url = base_url.rstrip("/")
- self.timeout = timeout
- self.max_retries = max_retries
- self.format_type = format_type
- # Setup HTTP client
- self.client = httpx.Client(
- timeout=timeout,
- headers={
- "x-api-key": self.api_key,
- "Content-Type": "application/json",
- },
- )
- logger.info(
- "SwarmsAPIClient initialized with base_url: {}",
- self.base_url,
- )
-
- @retry(
- stop=stop_after_attempt(3),
- wait=wait_exponential(multiplier=1, min=4, max=10),
- reraise=True,
- )
- async def health_check(self) -> HealthResponse:
- """Check the API health status.
-
- Args:
- output_format: Desired output format ('pydantic', 'json', 'dict')
-
- Returns:
- HealthResponse object or formatted output
- """
- logger.info("Performing health check")
- try:
- response = self.client.get(f"{self.base_url}/health")
- response.raise_for_status()
- health_response = HealthResponse(**response.json())
- logger.info("Health check successful")
- return self.format_output(
- health_response, self.format_type
- )
- except httpx.HTTPError as e:
- logger.error("Health check failed: {}", str(e))
- raise SwarmAPIError(f"Health check failed: {str(e)}")
-
- @retry(
- stop=stop_after_attempt(3),
- wait=wait_exponential(multiplier=1, min=4, max=10),
- reraise=True,
- )
- async def arun(self, swarm_request: SwarmRequest) -> Any:
- """Create and run a new swarm.
-
- Args:
- swarm_request: SwarmRequest object containing the swarm configuration
- output_format: Desired output format ('pydantic', 'json', 'dict')
-
- Returns:
- SwarmResponse object or formatted output
- """
- logger.info(
- "Creating and running a new swarm with request: {}",
- swarm_request,
- )
- try:
- response = self.client.post(
- f"{self.base_url}/v1/swarm/completions",
- json=swarm_request.model_dump(),
- )
- response.raise_for_status()
- logger.info("Swarm creation and run successful")
- return self.format_output(
- response.json(), self.format_type
- )
- except httpx.HTTPStatusError as e:
- if e.response.status_code == 401:
- logger.error("Invalid API key")
- raise SwarmAuthenticationError("Invalid API key")
- elif e.response.status_code == 422:
- logger.error("Invalid request parameters")
- raise SwarmValidationError(
- "Invalid request parameters"
- )
- logger.error("Swarm creation failed: {}", str(e))
- raise SwarmAPIError(f"Swarm creation failed: {str(e)}")
- except Exception as e:
- logger.error(
- "Unexpected error during swarm creation: {}", str(e)
- )
- raise
-
- @retry(
- stop=stop_after_attempt(3),
- wait=wait_exponential(multiplier=1, min=4, max=10),
- reraise=True,
- )
- def run(self, swarm_request: SwarmRequest) -> Any:
- """Create and run a new swarm.
-
- Args:
- swarm_request: SwarmRequest object containing the swarm configuration
- output_format: Desired output format ('pydantic', 'json', 'dict')
-
- Returns:
- SwarmResponse object or formatted output
- """
- logger.info(
- "Creating and running a new swarm with request: {}",
- swarm_request,
- )
- try:
- response = self.client.post(
- f"{self.base_url}/v1/swarm/completions",
- json=swarm_request.model_dump(),
- )
- print(response.json())
- logger.info("Swarm creation and run successful")
- return response.json()
- except httpx.HTTPStatusError as e:
- if e.response.status_code == 401:
- logger.error("Invalid API key")
- raise SwarmAuthenticationError("Invalid API key")
- elif e.response.status_code == 422:
- logger.error("Invalid request parameters")
- raise SwarmValidationError(
- "Invalid request parameters"
- )
- logger.error("Swarm creation failed: {}", str(e))
- raise SwarmAPIError(f"Swarm creation failed: {str(e)}")
- except Exception as e:
- logger.error(
- "Unexpected error during swarm creation: {}", str(e)
- )
- raise
-
- @retry(
- stop=stop_after_attempt(3),
- wait=wait_exponential(multiplier=1, min=4, max=10),
- reraise=True,
- )
- async def run_batch(
- self, swarm_requests: List[SwarmRequest]
- ) -> List[Any]:
- """Create and run multiple swarms in batch.
-
- Args:
- swarm_requests: List of SwarmRequest objects
- output_format: Desired output format ('pydantic', 'json', 'dict')
-
- Returns:
- List of SwarmResponse objects or formatted outputs
- """
- logger.info(
- "Creating and running batch swarms with requests: {}",
- swarm_requests,
- )
- try:
- response = self.client.post(
- f"{self.base_url}/v1/swarm/batch/completions",
- json=[req.model_dump() for req in swarm_requests],
- )
- response.raise_for_status()
- logger.info("Batch swarm creation and run successful")
- return [
- self.format_output(resp, self.format_type)
- for resp in response.json()
- ]
- except httpx.HTTPStatusError as e:
- if e.response.status_code == 401:
- logger.error("Invalid API key")
- raise SwarmAuthenticationError("Invalid API key")
- elif e.response.status_code == 422:
- logger.error("Invalid request parameters")
- raise SwarmValidationError(
- "Invalid request parameters"
- )
- logger.error("Batch swarm creation failed: {}", str(e))
- raise SwarmAPIError(
- f"Batch swarm creation failed: {str(e)}"
- )
- except Exception as e:
- logger.error(
- "Unexpected error during batch swarm creation: {}",
- str(e),
- )
- raise
-
- def get_logs(self):
- logger.info("Retrieving logs")
- try:
- response = self.client.get(
- f"{self.base_url}/v1/swarm/logs"
- )
- response.raise_for_status()
- logs = response.json()
- logger.info("Logs retrieved successfully")
- return self.format_output(logs, self.format_type)
- except httpx.HTTPError as e:
- logger.error("Failed to retrieve logs: {}", str(e))
- raise SwarmAPIError(f"Failed to retrieve logs: {str(e)}")
-
- def format_output(self, data, output_format: str):
- """Format the output based on the specified format.
-
- Args:
- data: The data to format
- output_format: The desired output format ('pydantic', 'json', 'dict')
-
- Returns:
- Formatted data
- """
- logger.info(
- "Formatting output with format: {}", output_format
- )
- if output_format == "json":
- return (
- data.model_dump_json(indent=4)
- if isinstance(data, BaseModel)
- else json.dumps(data)
- )
- elif output_format == "dict":
- return (
- data.model_dump()
- if isinstance(data, BaseModel)
- else data
- )
- return data # Default to returning the pydantic model
-
- def close(self):
- """Close the HTTP client."""
- logger.info("Closing HTTP client")
- self.client.close()
-
- async def __aenter__(self):
- logger.info("Entering async context")
- return self
-
- async def __aexit__(self, exc_type, exc_val, exc_tb):
- logger.info("Exiting async context")
- self.close()
diff --git a/swarms/communication/base_communication.py b/swarms/communication/base_communication.py
new file mode 100644
index 00000000..671d3f5a
--- /dev/null
+++ b/swarms/communication/base_communication.py
@@ -0,0 +1,290 @@
+from abc import ABC, abstractmethod
+from typing import List, Optional, Union, Dict, Any
+from enum import Enum
+from dataclasses import dataclass
+from pathlib import Path
+
+
+class MessageType(Enum):
+ """Enum for different types of messages in the conversation."""
+
+ SYSTEM = "system"
+ USER = "user"
+ ASSISTANT = "assistant"
+ FUNCTION = "function"
+ TOOL = "tool"
+
+
+@dataclass
+class Message:
+ """Data class representing a message in the conversation."""
+
+ role: str
+ content: Union[str, dict, list]
+ timestamp: Optional[str] = None
+ message_type: Optional[MessageType] = None
+ metadata: Optional[Dict] = None
+ token_count: Optional[int] = None
+
+
+class BaseCommunication(ABC):
+ """
+ Abstract base class defining the interface for conversation implementations.
+ This class provides the contract that all conversation implementations must follow.
+
+ Attributes:
+ system_prompt (Optional[str]): The system prompt for the conversation.
+ time_enabled (bool): Flag to enable time tracking for messages.
+ autosave (bool): Flag to enable automatic saving of conversation history.
+ save_filepath (str): File path for saving the conversation history.
+ tokenizer (Any): Tokenizer for counting tokens in messages.
+ context_length (int): Maximum number of tokens allowed in the conversation history.
+ rules (str): Rules for the conversation.
+ custom_rules_prompt (str): Custom prompt for rules.
+ user (str): The user identifier for messages.
+ auto_save (bool): Flag to enable auto-saving of conversation history.
+ save_as_yaml (bool): Flag to save conversation history as YAML.
+ save_as_json_bool (bool): Flag to save conversation history as JSON.
+ token_count (bool): Flag to enable token counting for messages.
+ cache_enabled (bool): Flag to enable prompt caching.
+ """
+
+ @staticmethod
+ def get_default_db_path(db_name: str) -> Path:
+ """Calculate the default database path in user's home directory.
+
+ Args:
+ db_name (str): Name of the database file (e.g. 'conversations.db')
+
+ Returns:
+ Path: Path object pointing to the database location
+ """
+ # Get user's home directory
+ home = Path.home()
+
+ # Create .swarms directory if it doesn't exist
+ swarms_dir = home / ".swarms" / "db"
+ swarms_dir.mkdir(parents=True, exist_ok=True)
+
+ return swarms_dir / db_name
+
+ @abstractmethod
+ def __init__(
+ self,
+ system_prompt: Optional[str] = None,
+ time_enabled: bool = False,
+ autosave: bool = False,
+ save_filepath: str = None,
+ tokenizer: Any = None,
+ context_length: int = 8192,
+ rules: str = None,
+ custom_rules_prompt: str = None,
+ user: str = "User:",
+ auto_save: bool = True,
+ save_as_yaml: bool = True,
+ save_as_json_bool: bool = False,
+ token_count: bool = True,
+ cache_enabled: bool = True,
+ *args,
+ **kwargs,
+ ):
+ """Initialize the communication interface."""
+ pass
+
+ @abstractmethod
+ def add(
+ self,
+ role: str,
+ content: Union[str, dict, list],
+ message_type: Optional[MessageType] = None,
+ metadata: Optional[Dict] = None,
+ token_count: Optional[int] = None,
+ ) -> int:
+ """Add a message to the conversation history."""
+ pass
+
+ @abstractmethod
+ def batch_add(self, messages: List[Message]) -> List[int]:
+ """Add multiple messages to the conversation history."""
+ pass
+
+ @abstractmethod
+ def delete(self, index: str):
+ """Delete a message from the conversation history."""
+ pass
+
+ @abstractmethod
+ def update(
+ self, index: str, role: str, content: Union[str, dict]
+ ):
+ """Update a message in the conversation history."""
+ pass
+
+ @abstractmethod
+ def query(self, index: str) -> Dict:
+ """Query a message in the conversation history."""
+ pass
+
+ @abstractmethod
+ def search(self, keyword: str) -> List[Dict]:
+ """Search for messages containing a keyword."""
+ pass
+
+ @abstractmethod
+ def get_str(self) -> str:
+ """Get the conversation history as a string."""
+ pass
+
+ @abstractmethod
+ def display_conversation(self, detailed: bool = False):
+ """Display the conversation history."""
+ pass
+
+ @abstractmethod
+ def export_conversation(self, filename: str):
+ """Export the conversation history to a file."""
+ pass
+
+ @abstractmethod
+ def import_conversation(self, filename: str):
+ """Import a conversation history from a file."""
+ pass
+
+ @abstractmethod
+ def count_messages_by_role(self) -> Dict[str, int]:
+ """Count messages by role."""
+ pass
+
+ @abstractmethod
+ def return_history_as_string(self) -> str:
+ """Return the conversation history as a string."""
+ pass
+
+ @abstractmethod
+ def get_messages(
+ self,
+ limit: Optional[int] = None,
+ offset: Optional[int] = None,
+ ) -> List[Dict]:
+ """Get messages with optional pagination."""
+ pass
+
+ @abstractmethod
+ def clear(self):
+ """Clear the conversation history."""
+ pass
+
+ @abstractmethod
+ def to_dict(self) -> List[Dict]:
+ """Convert the conversation history to a dictionary."""
+ pass
+
+ @abstractmethod
+ def to_json(self) -> str:
+ """Convert the conversation history to a JSON string."""
+ pass
+
+ @abstractmethod
+ def to_yaml(self) -> str:
+ """Convert the conversation history to a YAML string."""
+ pass
+
+ @abstractmethod
+ def save_as_json(self, filename: str):
+ """Save the conversation history as a JSON file."""
+ pass
+
+ @abstractmethod
+ def load_from_json(self, filename: str):
+ """Load the conversation history from a JSON file."""
+ pass
+
+ @abstractmethod
+ def save_as_yaml(self, filename: str):
+ """Save the conversation history as a YAML file."""
+ pass
+
+ @abstractmethod
+ def load_from_yaml(self, filename: str):
+ """Load the conversation history from a YAML file."""
+ pass
+
+ @abstractmethod
+ def get_last_message(self) -> Optional[Dict]:
+ """Get the last message from the conversation history."""
+ pass
+
+ @abstractmethod
+ def get_last_message_as_string(self) -> str:
+ """Get the last message as a formatted string."""
+ pass
+
+ @abstractmethod
+ def get_messages_by_role(self, role: str) -> List[Dict]:
+ """Get all messages from a specific role."""
+ pass
+
+ @abstractmethod
+ def get_conversation_summary(self) -> Dict:
+ """Get a summary of the conversation."""
+ pass
+
+ @abstractmethod
+ def get_statistics(self) -> Dict:
+ """Get statistics about the conversation."""
+ pass
+
+ @abstractmethod
+ def get_conversation_id(self) -> str:
+ """Get the current conversation ID."""
+ pass
+
+ @abstractmethod
+ def start_new_conversation(self) -> str:
+ """Start a new conversation and return its ID."""
+ pass
+
+ @abstractmethod
+ def delete_current_conversation(self) -> bool:
+ """Delete the current conversation."""
+ pass
+
+ @abstractmethod
+ def search_messages(self, query: str) -> List[Dict]:
+ """Search for messages containing specific text."""
+ pass
+
+ @abstractmethod
+ def update_message(
+ self,
+ message_id: int,
+ content: Union[str, dict, list],
+ metadata: Optional[Dict] = None,
+ ) -> bool:
+ """Update an existing message."""
+ pass
+
+ @abstractmethod
+ def get_conversation_metadata_dict(self) -> Dict:
+ """Get detailed metadata about the conversation."""
+ pass
+
+ @abstractmethod
+ def get_conversation_timeline_dict(self) -> Dict[str, List[Dict]]:
+ """Get the conversation organized by timestamps."""
+ pass
+
+ @abstractmethod
+ def get_conversation_by_role_dict(self) -> Dict[str, List[Dict]]:
+ """Get the conversation organized by roles."""
+ pass
+
+ @abstractmethod
+ def get_conversation_as_dict(self) -> Dict:
+ """Get the entire conversation as a dictionary with messages and metadata."""
+ pass
+
+ @abstractmethod
+ def truncate_memory_with_tokenizer(self):
+ """Truncate the conversation history based on token count."""
+ pass
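As a usage illustration for the new abstract interface, a small sketch of code written against BaseCommunication rather than a specific backend; log_exchange is a hypothetical helper name and is not part of this changeset:

# Hedged sketch: any backend implementing BaseCommunication (DuckDB, Redis, ...)
# can be passed here interchangeably, since only the abstract contract is used.
from swarms.communication.base_communication import (
    BaseCommunication,
    Message,
    MessageType,
)


def log_exchange(conv: BaseCommunication, question: str, answer: str) -> None:
    # Store a user/assistant pair in one batch call, then print a summary.
    conv.batch_add(
        [
            Message(role="user", content=question, message_type=MessageType.USER),
            Message(role="assistant", content=answer, message_type=MessageType.ASSISTANT),
        ]
    )
    print(conv.get_conversation_summary())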
diff --git a/swarms/communication/duckdb_wrap.py b/swarms/communication/duckdb_wrap.py
index 2ef95779..d9bb970c 100644
--- a/swarms/communication/duckdb_wrap.py
+++ b/swarms/communication/duckdb_wrap.py
@@ -1,16 +1,21 @@
-import duckdb
-import json
import datetime
-from typing import List, Optional, Union, Dict
-from pathlib import Path
-import threading
-from contextlib import contextmanager
+import json
import logging
-from dataclasses import dataclass
-from enum import Enum
+import threading
import uuid
+from contextlib import contextmanager
+from pathlib import Path
+from typing import Any, Callable, Dict, List, Optional, Union
+
+import duckdb
import yaml
+from swarms.communication.base_communication import (
+ BaseCommunication,
+ Message,
+ MessageType,
+)
+
try:
from loguru import logger
@@ -19,31 +24,6 @@ except ImportError:
LOGURU_AVAILABLE = False
-class MessageType(Enum):
- """Enum for different types of messages in the conversation."""
-
- SYSTEM = "system"
- USER = "user"
- ASSISTANT = "assistant"
- FUNCTION = "function"
- TOOL = "tool"
-
-
-@dataclass
-class Message:
- """Data class representing a message in the conversation."""
-
- role: str
- content: Union[str, dict, list]
- timestamp: Optional[str] = None
- message_type: Optional[MessageType] = None
- metadata: Optional[Dict] = None
- token_count: Optional[int] = None
-
- class Config:
- arbitrary_types_allowed = True
-
-
class DateTimeEncoder(json.JSONEncoder):
"""Custom JSON encoder for handling datetime objects."""
@@ -53,7 +33,7 @@ class DateTimeEncoder(json.JSONEncoder):
return super().default(obj)
-class DuckDBConversation:
+class DuckDBConversation(BaseCommunication):
"""
A production-grade DuckDB wrapper class for managing conversation history.
This class provides persistent storage for conversations with various features
@@ -72,15 +52,55 @@ class DuckDBConversation:
def __init__(
self,
- db_path: Union[str, Path] = "conversations.duckdb",
+ system_prompt: Optional[str] = None,
+ time_enabled: bool = False,
+ autosave: bool = False,
+ save_filepath: str = None,
+ tokenizer: Any = None,
+ context_length: int = 8192,
+ rules: str = None,
+ custom_rules_prompt: str = None,
+ user: str = "User:",
+ auto_save: bool = True,
+ save_as_yaml: bool = True,
+ save_as_json_bool: bool = False,
+ token_count: bool = True,
+ cache_enabled: bool = True,
+ db_path: Union[str, Path] = None,
table_name: str = "conversations",
enable_timestamps: bool = True,
enable_logging: bool = True,
use_loguru: bool = True,
max_retries: int = 3,
connection_timeout: float = 5.0,
+ *args,
+ **kwargs,
):
+ super().__init__(
+ system_prompt=system_prompt,
+ time_enabled=time_enabled,
+ autosave=autosave,
+ save_filepath=save_filepath,
+ tokenizer=tokenizer,
+ context_length=context_length,
+ rules=rules,
+ custom_rules_prompt=custom_rules_prompt,
+ user=user,
+ auto_save=auto_save,
+ save_as_yaml=save_as_yaml,
+ save_as_json_bool=save_as_json_bool,
+ token_count=token_count,
+ cache_enabled=cache_enabled,
+ )
+
+ # Calculate default db_path if not provided
+ if db_path is None:
+ db_path = self.get_default_db_path("conversations.duckdb")
self.db_path = Path(db_path)
+
+ # Ensure parent directory exists
+ self.db_path.parent.mkdir(parents=True, exist_ok=True)
+
self.table_name = table_name
self.enable_timestamps = enable_timestamps
self.enable_logging = enable_logging
@@ -89,6 +109,7 @@ class DuckDBConversation:
self.connection_timeout = connection_timeout
self.current_conversation_id = None
self._lock = threading.Lock()
+ self.tokenizer = tokenizer
# Setup logging
if self.enable_logging:
@@ -809,12 +830,7 @@ class DuckDBConversation:
}
def get_conversation_as_dict(self) -> Dict:
- """
- Get the entire conversation as a dictionary with messages and metadata.
-
- Returns:
- Dict: Dictionary containing conversation ID, messages, and metadata
- """
+ """Get the entire conversation as a dictionary with messages and metadata."""
messages = self.get_messages()
stats = self.get_statistics()
@@ -832,12 +848,7 @@ class DuckDBConversation:
}
def get_conversation_by_role_dict(self) -> Dict[str, List[Dict]]:
- """
- Get the conversation organized by roles.
-
- Returns:
- Dict[str, List[Dict]]: Dictionary with roles as keys and lists of messages as values
- """
+ """Get the conversation organized by roles."""
with self._get_connection() as conn:
result = conn.execute(
f"""
@@ -926,12 +937,7 @@ class DuckDBConversation:
return timeline_dict
def get_conversation_metadata_dict(self) -> Dict:
- """
- Get detailed metadata about the conversation.
-
- Returns:
- Dict: Dictionary containing detailed conversation metadata
- """
+ """Get detailed metadata about the conversation."""
with self._get_connection() as conn:
# Get basic statistics
stats = self.get_statistics()
@@ -975,7 +981,7 @@ class DuckDBConversation:
"conversation_id": self.current_conversation_id,
"basic_stats": stats,
"message_type_distribution": {
- row[0]: row[1] for row in type_dist
+ row[0]: row[1] for row in type_dist if row[0]
},
"average_tokens_per_message": (
avg_tokens[0] if avg_tokens[0] is not None else 0
@@ -987,15 +993,7 @@ class DuckDBConversation:
}
def save_as_yaml(self, filename: str) -> bool:
- """
- Save the current conversation to a YAML file.
-
- Args:
- filename (str): Path to save the YAML file
-
- Returns:
- bool: True if save was successful
- """
+ """Save the current conversation to a YAML file."""
try:
with open(filename, "w") as f:
yaml.dump(self.to_dict(), f)
@@ -1008,15 +1006,7 @@ class DuckDBConversation:
return False
def load_from_yaml(self, filename: str) -> bool:
- """
- Load a conversation from a YAML file.
-
- Args:
- filename (str): Path to the YAML file
-
- Returns:
- bool: True if load was successful
- """
+ """Load a conversation from a YAML file."""
try:
with open(filename, "r") as f:
messages = yaml.safe_load(f)
@@ -1044,3 +1034,310 @@ class DuckDBConversation:
f"Failed to load conversation from YAML: {e}"
)
return False
+
+ def delete(self, index: str):
+ """Delete a message from the conversation history."""
+ with self._get_connection() as conn:
+ conn.execute(
+ f"DELETE FROM {self.table_name} WHERE id = ? AND conversation_id = ?",
+ (index, self.current_conversation_id),
+ )
+
+ def update(
+ self, index: str, role: str, content: Union[str, dict]
+ ):
+ """Update a message in the conversation history."""
+ if isinstance(content, (dict, list)):
+ content = json.dumps(content)
+
+ with self._get_connection() as conn:
+ conn.execute(
+ f"""
+ UPDATE {self.table_name}
+ SET role = ?, content = ?
+ WHERE id = ? AND conversation_id = ?
+ """,
+ (role, content, index, self.current_conversation_id),
+ )
+
+ def query(self, index: str) -> Dict:
+ """Query a message in the conversation history."""
+ with self._get_connection() as conn:
+ result = conn.execute(
+ f"""
+ SELECT * FROM {self.table_name}
+ WHERE id = ? AND conversation_id = ?
+ """,
+ (index, self.current_conversation_id),
+ ).fetchone()
+
+ if not result:
+ return {}
+
+ content = result[2]
+ try:
+ content = json.loads(content)
+ except json.JSONDecodeError:
+ pass
+
+ return {
+ "role": result[1],
+ "content": content,
+ "timestamp": result[3],
+ "message_type": result[4],
+ "metadata": (
+ json.loads(result[5]) if result[5] else None
+ ),
+ "token_count": result[6],
+ }
+
+ def search(self, keyword: str) -> List[Dict]:
+ """Search for messages containing a keyword."""
+ return self.search_messages(keyword)
+
+ def display_conversation(self, detailed: bool = False):
+ """Display the conversation history."""
+ print(self.get_str())
+
+ def export_conversation(self, filename: str):
+ """Export the conversation history to a file."""
+ self.save_as_json(filename)
+
+ def import_conversation(self, filename: str):
+ """Import a conversation history from a file."""
+ self.load_from_json(filename)
+
+ def return_history_as_string(self) -> str:
+ """Return the conversation history as a string."""
+ return self.get_str()
+
+ def clear(self):
+ """Clear the conversation history."""
+ with self._get_connection() as conn:
+ conn.execute(
+ f"DELETE FROM {self.table_name} WHERE conversation_id = ?",
+ (self.current_conversation_id,),
+ )
+
+ def truncate_memory_with_tokenizer(self):
+ """Truncate the conversation history based on token count."""
+ if not self.tokenizer:
+ return
+
+ with self._get_connection() as conn:
+ result = conn.execute(
+ f"""
+ SELECT id, content, token_count
+ FROM {self.table_name}
+ WHERE conversation_id = ?
+ ORDER BY id ASC
+ """,
+ (self.current_conversation_id,),
+ ).fetchall()
+
+ total_tokens = 0
+ ids_to_keep = []
+
+ for row in result:
+ token_count = row[2] or self.tokenizer.count_tokens(
+ row[1]
+ )
+ if total_tokens + token_count <= self.context_length:
+ total_tokens += token_count
+ ids_to_keep.append(row[0])
+ else:
+ break
+
+ if ids_to_keep:
+ ids_str = ",".join(map(str, ids_to_keep))
+ conn.execute(
+ f"""
+ DELETE FROM {self.table_name}
+ WHERE conversation_id = ?
+ AND id NOT IN ({ids_str})
+ """,
+ (self.current_conversation_id,),
+ )
+
+ def get_visible_messages(
+ self, agent: Callable, turn: int
+ ) -> List[Dict]:
+ """
+ Get the visible messages for a given agent and turn.
+
+ Args:
+            agent (Callable): The agent whose visible messages should be returned.
+ turn (int): The turn number.
+
+ Returns:
+ List[Dict]: The list of visible messages.
+ """
+ with self._get_connection() as conn:
+ result = conn.execute(
+ f"""
+ SELECT * FROM {self.table_name}
+ WHERE conversation_id = ?
+ AND CAST(json_extract(metadata, '$.turn') AS INTEGER) < ?
+ ORDER BY id ASC
+ """,
+ (self.current_conversation_id, turn),
+ ).fetchall()
+
+ visible_messages = []
+ for row in result:
+ metadata = json.loads(row[5]) if row[5] else {}
+ visible_to = metadata.get("visible_to", "all")
+
+ if visible_to == "all" or (
+ agent and agent.agent_name in visible_to
+ ):
+ content = row[2] # content column
+ try:
+ content = json.loads(content)
+ except json.JSONDecodeError:
+ pass
+
+ message = {
+ "role": row[1],
+ "content": content,
+ "visible_to": visible_to,
+ "turn": metadata.get("turn"),
+ }
+ visible_messages.append(message)
+
+ return visible_messages
+
+ def return_messages_as_list(self) -> List[str]:
+ """Return the conversation messages as a list of formatted strings.
+
+ Returns:
+ list: List of messages formatted as 'role: content'.
+ """
+ with self._get_connection() as conn:
+ result = conn.execute(
+ f"""
+ SELECT role, content FROM {self.table_name}
+ WHERE conversation_id = ?
+ ORDER BY id ASC
+ """,
+ (self.current_conversation_id,),
+ ).fetchall()
+
+ return [
+ f"{row[0]}: {json.loads(row[1]) if isinstance(row[1], str) and row[1].startswith('{') else row[1]}"
+ for row in result
+ ]
+
+ def return_messages_as_dictionary(self) -> List[Dict]:
+ """Return the conversation messages as a list of dictionaries.
+
+ Returns:
+ list: List of dictionaries containing role and content of each message.
+ """
+ with self._get_connection() as conn:
+ result = conn.execute(
+ f"""
+ SELECT role, content FROM {self.table_name}
+ WHERE conversation_id = ?
+ ORDER BY id ASC
+ """,
+ (self.current_conversation_id,),
+ ).fetchall()
+
+ messages = []
+ for row in result:
+ content = row[1]
+ try:
+ content = json.loads(content)
+ except json.JSONDecodeError:
+ pass
+
+ messages.append(
+ {
+ "role": row[0],
+ "content": content,
+ }
+ )
+ return messages
+
+ def add_tool_output_to_agent(self, role: str, tool_output: dict):
+ """Add a tool output to the conversation history.
+
+ Args:
+ role (str): The role of the tool.
+ tool_output (dict): The output from the tool to be added.
+ """
+ self.add(role, tool_output, message_type=MessageType.TOOL)
+
+ def get_final_message(self) -> str:
+ """Return the final message from the conversation history.
+
+ Returns:
+ str: The final message formatted as 'role: content'.
+ """
+ last_message = self.get_last_message()
+ if not last_message:
+ return ""
+ return f"{last_message['role']}: {last_message['content']}"
+
+ def get_final_message_content(self) -> Union[str, dict]:
+ """Return the content of the final message from the conversation history.
+
+ Returns:
+ Union[str, dict]: The content of the final message.
+ """
+ last_message = self.get_last_message()
+ if not last_message:
+ return ""
+ return last_message["content"]
+
+ def return_all_except_first(self) -> List[Dict]:
+ """Return all messages except the first one.
+
+ Returns:
+ list: List of messages except the first one.
+ """
+ with self._get_connection() as conn:
+ result = conn.execute(
+ f"""
+ SELECT role, content, timestamp, message_type, metadata, token_count
+ FROM {self.table_name}
+ WHERE conversation_id = ?
+ ORDER BY id ASC
+                OFFSET 1
+ """,
+ (self.current_conversation_id,),
+ ).fetchall()
+
+ messages = []
+ for row in result:
+ content = row[1]
+ try:
+ content = json.loads(content)
+ except json.JSONDecodeError:
+ pass
+
+ message = {
+ "role": row[0],
+ "content": content,
+ }
+ if row[2]: # timestamp
+ message["timestamp"] = row[2]
+ if row[3]: # message_type
+ message["message_type"] = row[3]
+ if row[4]: # metadata
+ message["metadata"] = json.loads(row[4])
+ if row[5]: # token_count
+ message["token_count"] = row[5]
+
+ messages.append(message)
+ return messages
+
+ def return_all_except_first_string(self) -> str:
+ """Return all messages except the first one as a string.
+
+ Returns:
+ str: All messages except the first one as a string.
+ """
+ messages = self.return_all_except_first()
+ return "\n".join(f"{msg['content']}" for msg in messages)
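
With these additions the DuckDB-backed conversation covers the full BaseCommunication surface (delete, update, query, search, token-based truncation, and export/import helpers). Below is a minimal usage sketch; the module path `swarms.communication.duckdb_wrap` and the exact `add()` signature are assumptions inferred from the methods in this diff rather than confirmed by it.

```python
# Illustrative sketch only; import path and add() signature are assumed.
from swarms.communication.duckdb_wrap import DuckDBConversation

convo = DuckDBConversation(db_path="demo_conversations.duckdb")
convo.add("user", "What did the quarterly report conclude?")
convo.add("assistant", {"summary": "Revenue grew 12% quarter over quarter."})

print(convo.return_messages_as_list())   # ["user: ...", "assistant: ..."]
print(convo.get_final_message())         # "assistant: {...}"
convo.export_conversation("demo_conversation.json")
```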
diff --git a/swarms/communication/pulsar_struct.py b/swarms/communication/pulsar_struct.py
new file mode 100644
index 00000000..2fb2fced
--- /dev/null
+++ b/swarms/communication/pulsar_struct.py
@@ -0,0 +1,691 @@
+import json
+import yaml
+import threading
+from typing import Any, Dict, List, Optional, Union
+from datetime import datetime
+import uuid
+from loguru import logger
+from swarms.communication.base_communication import (
+ BaseCommunication,
+ Message,
+ MessageType,
+)
+
+
+# Check if Pulsar is available
+try:
+ import pulsar
+
+ PULSAR_AVAILABLE = True
+ logger.info("Apache Pulsar client library is available")
+except ImportError as e:
+ PULSAR_AVAILABLE = False
+ logger.error(
+ f"Apache Pulsar client library is not installed: {e}"
+ )
+ logger.error("Please install it using: pip install pulsar-client")
+
+
+class PulsarConnectionError(Exception):
+ """Exception raised for Pulsar connection errors."""
+
+ pass
+
+
+class PulsarOperationError(Exception):
+ """Exception raised for Pulsar operation errors."""
+
+ pass
+
+
+class PulsarConversation(BaseCommunication):
+ """
+ A Pulsar-based implementation of the conversation interface.
+ Uses Apache Pulsar for message storage and retrieval.
+
+ Attributes:
+ client (pulsar.Client): The Pulsar client instance
+ producer (pulsar.Producer): The Pulsar producer for sending messages
+ consumer (pulsar.Consumer): The Pulsar consumer for receiving messages
+ topic (str): The Pulsar topic name
+ subscription_name (str): The subscription name for the consumer
+ conversation_id (str): Unique identifier for the conversation
+ cache_enabled (bool): Flag to enable prompt caching
+ cache_stats (dict): Statistics about cache usage
+ cache_lock (threading.Lock): Lock for thread-safe cache operations
+ """
+
+ def __init__(
+ self,
+ system_prompt: Optional[str] = None,
+ time_enabled: bool = False,
+ autosave: bool = False,
+ save_filepath: str = None,
+ tokenizer: Any = None,
+ context_length: int = 8192,
+ rules: str = None,
+ custom_rules_prompt: str = None,
+ user: str = "User:",
+ auto_save: bool = True,
+ save_as_yaml: bool = True,
+ save_as_json_bool: bool = False,
+ token_count: bool = True,
+ cache_enabled: bool = True,
+ pulsar_host: str = "pulsar://localhost:6650",
+ topic: str = "conversation",
+ *args,
+ **kwargs,
+ ):
+ """Initialize the Pulsar conversation interface."""
+ if not PULSAR_AVAILABLE:
+ raise ImportError(
+ "Apache Pulsar client library is not installed. "
+ "Please install it using: pip install pulsar-client"
+ )
+
+ logger.info(
+ f"Initializing PulsarConversation with host: {pulsar_host}"
+ )
+
+ self.conversation_id = str(uuid.uuid4())
+ self.topic = f"{topic}-{self.conversation_id}"
+ self.subscription_name = f"sub-{self.conversation_id}"
+
+ try:
+ # Initialize Pulsar client and producer/consumer
+ logger.debug(
+ f"Connecting to Pulsar broker at {pulsar_host}"
+ )
+ self.client = pulsar.Client(pulsar_host)
+
+ logger.debug(f"Creating producer for topic: {self.topic}")
+ self.producer = self.client.create_producer(self.topic)
+
+ logger.debug(
+ f"Creating consumer with subscription: {self.subscription_name}"
+ )
+ self.consumer = self.client.subscribe(
+ self.topic, self.subscription_name
+ )
+ logger.info("Successfully connected to Pulsar broker")
+
+ except pulsar.ConnectError as e:
+ error_msg = f"Failed to connect to Pulsar broker at {pulsar_host}: {str(e)}"
+ logger.error(error_msg)
+ raise PulsarConnectionError(error_msg)
+ except Exception as e:
+ error_msg = f"Unexpected error while initializing Pulsar connection: {str(e)}"
+ logger.error(error_msg)
+ raise PulsarOperationError(error_msg)
+
+ # Store configuration
+ self.system_prompt = system_prompt
+ self.time_enabled = time_enabled
+ self.autosave = autosave
+ self.save_filepath = save_filepath
+ self.tokenizer = tokenizer
+ self.context_length = context_length
+ self.rules = rules
+ self.custom_rules_prompt = custom_rules_prompt
+ self.user = user
+ self.auto_save = auto_save
+ self.save_as_yaml = save_as_yaml
+ self.save_as_json_bool = save_as_json_bool
+ self.token_count = token_count
+
+ # Cache configuration
+ self.cache_enabled = cache_enabled
+ self.cache_stats = {
+ "hits": 0,
+ "misses": 0,
+ "cached_tokens": 0,
+ "total_tokens": 0,
+ }
+ self.cache_lock = threading.Lock()
+
+ # Add system prompt if provided
+ if system_prompt:
+ logger.debug("Adding system prompt to conversation")
+ self.add("system", system_prompt, MessageType.SYSTEM)
+
+ # Add rules if provided
+ if rules:
+ logger.debug("Adding rules to conversation")
+ self.add("system", rules, MessageType.SYSTEM)
+
+ # Add custom rules prompt if provided
+ if custom_rules_prompt:
+ logger.debug("Adding custom rules prompt to conversation")
+ self.add(user, custom_rules_prompt, MessageType.USER)
+
+ logger.info(
+ f"PulsarConversation initialized with ID: {self.conversation_id}"
+ )
+
+ def add(
+ self,
+ role: str,
+ content: Union[str, dict, list],
+ message_type: Optional[MessageType] = None,
+ metadata: Optional[Dict] = None,
+ token_count: Optional[int] = None,
+    ) -> str:
+ """Add a message to the conversation."""
+ try:
+ message = {
+ "id": str(uuid.uuid4()),
+ "role": role,
+ "content": content,
+ "timestamp": datetime.now().isoformat(),
+ "message_type": (
+ message_type.value if message_type else None
+ ),
+ "metadata": metadata or {},
+ "token_count": token_count,
+ "conversation_id": self.conversation_id,
+ }
+
+ logger.debug(
+ f"Adding message with ID {message['id']} from role: {role}"
+ )
+
+ # Send message to Pulsar
+ message_data = json.dumps(message).encode("utf-8")
+ self.producer.send(message_data)
+
+ logger.debug(
+ f"Successfully added message with ID: {message['id']}"
+ )
+ return message["id"]
+
+ except pulsar.ConnectError as e:
+ error_msg = f"Failed to send message to Pulsar: Connection error: {str(e)}"
+ logger.error(error_msg)
+ raise PulsarConnectionError(error_msg)
+ except Exception as e:
+ error_msg = f"Failed to add message: {str(e)}"
+ logger.error(error_msg)
+ raise PulsarOperationError(error_msg)
+
+    def batch_add(self, messages: List[Message]) -> List[str]:
+ """Add multiple messages to the conversation."""
+ message_ids = []
+ for message in messages:
+ msg_id = self.add(
+ message.role,
+ message.content,
+ message.message_type,
+ message.metadata,
+ message.token_count,
+ )
+ message_ids.append(msg_id)
+ return message_ids
+
+ def get_messages(
+ self,
+ limit: Optional[int] = None,
+ offset: Optional[int] = None,
+ ) -> List[Dict]:
+ """Get messages with optional pagination."""
+ messages = []
+ try:
+ logger.debug("Retrieving messages from Pulsar")
+ while True:
+ try:
+ msg = self.consumer.receive(timeout_millis=1000)
+ messages.append(json.loads(msg.data()))
+ self.consumer.acknowledge(msg)
+ except pulsar.Timeout:
+ break # No more messages available
+ except json.JSONDecodeError as e:
+ logger.error(f"Failed to decode message: {e}")
+ continue
+
+ logger.debug(f"Retrieved {len(messages)} messages")
+
+ if offset is not None:
+ messages = messages[offset:]
+ if limit is not None:
+ messages = messages[:limit]
+
+ return messages
+
+ except pulsar.ConnectError as e:
+ error_msg = f"Failed to receive messages from Pulsar: Connection error: {str(e)}"
+ logger.error(error_msg)
+ raise PulsarConnectionError(error_msg)
+ except Exception as e:
+ error_msg = f"Failed to get messages: {str(e)}"
+ logger.error(error_msg)
+ raise PulsarOperationError(error_msg)
+
+ def delete(self, message_id: str):
+ """Delete a message from the conversation."""
+        # Pulsar messages are immutable and cannot be deleted individually;
+        # a real implementation would publish a tombstone/"deleted" marker
+        # message and filter it out when reading the history.
+        pass
+
+ def update(
+ self, message_id: str, role: str, content: Union[str, dict]
+ ):
+ """Update a message in the conversation."""
+ # In Pulsar, messages are immutable
+ # We would need to implement updates as new messages with update metadata
+ new_message = {
+ "id": str(uuid.uuid4()),
+ "role": role,
+ "content": content,
+ "timestamp": datetime.now().isoformat(),
+ "updates": message_id,
+ "conversation_id": self.conversation_id,
+ }
+ self.producer.send(json.dumps(new_message).encode("utf-8"))
+
+    def query(self, message_id: str) -> Optional[Dict]:
+ """Query a message in the conversation."""
+ messages = self.get_messages()
+ for message in messages:
+ if message["id"] == message_id:
+ return message
+ return None
+
+ def search(self, keyword: str) -> List[Dict]:
+ """Search for messages containing a keyword."""
+ messages = self.get_messages()
+ return [
+ msg for msg in messages if keyword in str(msg["content"])
+ ]
+
+ def get_str(self) -> str:
+ """Get the conversation history as a string."""
+ messages = self.get_messages()
+ return "\n".join(
+ [f"{msg['role']}: {msg['content']}" for msg in messages]
+ )
+
+ def display_conversation(self, detailed: bool = False):
+ """Display the conversation history."""
+ messages = self.get_messages()
+ for msg in messages:
+ if detailed:
+ print(f"ID: {msg['id']}")
+ print(f"Role: {msg['role']}")
+ print(f"Content: {msg['content']}")
+ print(f"Timestamp: {msg['timestamp']}")
+ print("---")
+ else:
+ print(f"{msg['role']}: {msg['content']}")
+
+ def export_conversation(self, filename: str):
+ """Export the conversation history to a file."""
+ messages = self.get_messages()
+ with open(filename, "w") as f:
+ json.dump(messages, f, indent=2)
+
+ def import_conversation(self, filename: str):
+ """Import a conversation history from a file."""
+ with open(filename, "r") as f:
+ messages = json.load(f)
+ for msg in messages:
+ self.add(
+ msg["role"],
+ msg["content"],
+ (
+ MessageType(msg["message_type"])
+ if msg.get("message_type")
+ else None
+ ),
+ msg.get("metadata"),
+ msg.get("token_count"),
+ )
+
+ def count_messages_by_role(self) -> Dict[str, int]:
+ """Count messages by role."""
+ messages = self.get_messages()
+ counts = {}
+ for msg in messages:
+ role = msg["role"]
+ counts[role] = counts.get(role, 0) + 1
+ return counts
+
+ def return_history_as_string(self) -> str:
+ """Return the conversation history as a string."""
+ return self.get_str()
+
+ def clear(self):
+ """Clear the conversation history."""
+ try:
+ logger.info(
+ f"Clearing conversation with ID: {self.conversation_id}"
+ )
+
+ # Close existing producer and consumer
+ if hasattr(self, "consumer"):
+ self.consumer.close()
+ if hasattr(self, "producer"):
+ self.producer.close()
+
+ # Create new conversation ID and topic
+ self.conversation_id = str(uuid.uuid4())
+ self.topic = f"conversation-{self.conversation_id}"
+ self.subscription_name = f"sub-{self.conversation_id}"
+
+ # Recreate producer and consumer
+ logger.debug(
+ f"Creating new producer for topic: {self.topic}"
+ )
+ self.producer = self.client.create_producer(self.topic)
+
+ logger.debug(
+ f"Creating new consumer with subscription: {self.subscription_name}"
+ )
+ self.consumer = self.client.subscribe(
+ self.topic, self.subscription_name
+ )
+
+ logger.info(
+ f"Successfully cleared conversation. New ID: {self.conversation_id}"
+ )
+
+ except pulsar.ConnectError as e:
+ error_msg = f"Failed to clear conversation: Connection error: {str(e)}"
+ logger.error(error_msg)
+ raise PulsarConnectionError(error_msg)
+ except Exception as e:
+ error_msg = f"Failed to clear conversation: {str(e)}"
+ logger.error(error_msg)
+ raise PulsarOperationError(error_msg)
+
+ def to_dict(self) -> List[Dict]:
+ """Convert the conversation history to a dictionary."""
+ return self.get_messages()
+
+ def to_json(self) -> str:
+ """Convert the conversation history to a JSON string."""
+ return json.dumps(self.to_dict(), indent=2)
+
+ def to_yaml(self) -> str:
+ """Convert the conversation history to a YAML string."""
+ return yaml.dump(self.to_dict())
+
+ def save_as_json(self, filename: str):
+ """Save the conversation history as a JSON file."""
+ with open(filename, "w") as f:
+ json.dump(self.to_dict(), f, indent=2)
+
+ def load_from_json(self, filename: str):
+ """Load the conversation history from a JSON file."""
+ self.import_conversation(filename)
+
+ def save_as_yaml(self, filename: str):
+ """Save the conversation history as a YAML file."""
+ with open(filename, "w") as f:
+ yaml.dump(self.to_dict(), f)
+
+ def load_from_yaml(self, filename: str):
+ """Load the conversation history from a YAML file."""
+ with open(filename, "r") as f:
+ messages = yaml.safe_load(f)
+ for msg in messages:
+ self.add(
+ msg["role"],
+ msg["content"],
+ (
+ MessageType(msg["message_type"])
+ if msg.get("message_type")
+ else None
+ ),
+ msg.get("metadata"),
+ msg.get("token_count"),
+ )
+
+ def get_last_message(self) -> Optional[Dict]:
+ """Get the last message from the conversation history."""
+ messages = self.get_messages()
+ return messages[-1] if messages else None
+
+ def get_last_message_as_string(self) -> str:
+ """Get the last message as a formatted string."""
+ last_message = self.get_last_message()
+ if last_message:
+ return (
+ f"{last_message['role']}: {last_message['content']}"
+ )
+ return ""
+
+ def get_messages_by_role(self, role: str) -> List[Dict]:
+ """Get all messages from a specific role."""
+ messages = self.get_messages()
+ return [msg for msg in messages if msg["role"] == role]
+
+ def get_conversation_summary(self) -> Dict:
+ """Get a summary of the conversation."""
+ messages = self.get_messages()
+ return {
+ "conversation_id": self.conversation_id,
+ "message_count": len(messages),
+ "roles": list(set(msg["role"] for msg in messages)),
+ "start_time": (
+ messages[0]["timestamp"] if messages else None
+ ),
+ "end_time": (
+ messages[-1]["timestamp"] if messages else None
+ ),
+ }
+
+ def get_statistics(self) -> Dict:
+ """Get statistics about the conversation."""
+ messages = self.get_messages()
+ return {
+ "total_messages": len(messages),
+ "messages_by_role": self.count_messages_by_role(),
+ "cache_stats": self.get_cache_stats(),
+ }
+
+ def get_conversation_id(self) -> str:
+ """Get the current conversation ID."""
+ return self.conversation_id
+
+ def start_new_conversation(self) -> str:
+ """Start a new conversation and return its ID."""
+ self.clear()
+ return self.conversation_id
+
+ def delete_current_conversation(self) -> bool:
+ """Delete the current conversation."""
+ self.clear()
+ return True
+
+ def search_messages(self, query: str) -> List[Dict]:
+ """Search for messages containing specific text."""
+ return self.search(query)
+
+ def update_message(
+ self,
+        message_id: str,
+ content: Union[str, dict, list],
+ metadata: Optional[Dict] = None,
+ ) -> bool:
+ """Update an existing message."""
+ message = self.query(message_id)
+ if message:
+ self.update(message_id, message["role"], content)
+ return True
+ return False
+
+ def get_conversation_metadata_dict(self) -> Dict:
+ """Get detailed metadata about the conversation."""
+ return self.get_conversation_summary()
+
+ def get_conversation_timeline_dict(self) -> Dict[str, List[Dict]]:
+ """Get the conversation organized by timestamps."""
+ messages = self.get_messages()
+ timeline = {}
+ for msg in messages:
+ date = msg["timestamp"].split("T")[0]
+ if date not in timeline:
+ timeline[date] = []
+ timeline[date].append(msg)
+ return timeline
+
+ def get_conversation_by_role_dict(self) -> Dict[str, List[Dict]]:
+ """Get the conversation organized by roles."""
+ messages = self.get_messages()
+ by_role = {}
+ for msg in messages:
+ role = msg["role"]
+ if role not in by_role:
+ by_role[role] = []
+ by_role[role].append(msg)
+ return by_role
+
+ def get_conversation_as_dict(self) -> Dict:
+ """Get the entire conversation as a dictionary with messages and metadata."""
+ return {
+ "metadata": self.get_conversation_metadata_dict(),
+ "messages": self.get_messages(),
+ "statistics": self.get_statistics(),
+ }
+
+ def truncate_memory_with_tokenizer(self):
+ """Truncate the conversation history based on token count."""
+ if not self.tokenizer:
+ return
+
+ messages = self.get_messages()
+ total_tokens = 0
+ truncated_messages = []
+
+ for msg in messages:
+ content = msg["content"]
+ tokens = self.tokenizer.count_tokens(str(content))
+
+ if total_tokens + tokens <= self.context_length:
+ truncated_messages.append(msg)
+ total_tokens += tokens
+ else:
+ break
+
+ # Clear and re-add truncated messages
+ self.clear()
+ for msg in truncated_messages:
+ self.add(
+ msg["role"],
+ msg["content"],
+ (
+ MessageType(msg["message_type"])
+ if msg.get("message_type")
+ else None
+ ),
+ msg.get("metadata"),
+ msg.get("token_count"),
+ )
+
+ def get_cache_stats(self) -> Dict[str, int]:
+ """Get statistics about cache usage."""
+ with self.cache_lock:
+ return {
+ "hits": self.cache_stats["hits"],
+ "misses": self.cache_stats["misses"],
+ "cached_tokens": self.cache_stats["cached_tokens"],
+ "total_tokens": self.cache_stats["total_tokens"],
+ "hit_rate": (
+ self.cache_stats["hits"]
+ / (
+ self.cache_stats["hits"]
+ + self.cache_stats["misses"]
+ )
+ if (
+ self.cache_stats["hits"]
+ + self.cache_stats["misses"]
+ )
+ > 0
+ else 0
+ ),
+ }
+
+ def __del__(self):
+ """Cleanup Pulsar resources."""
+ try:
+ logger.debug("Cleaning up Pulsar resources")
+ if hasattr(self, "consumer"):
+ self.consumer.close()
+ if hasattr(self, "producer"):
+ self.producer.close()
+ if hasattr(self, "client"):
+ self.client.close()
+ logger.info("Successfully cleaned up Pulsar resources")
+ except Exception as e:
+ logger.error(f"Error during cleanup: {str(e)}")
+
+ @classmethod
+ def check_pulsar_availability(
+ cls, pulsar_host: str = "pulsar://localhost:6650"
+ ) -> bool:
+ """
+ Check if Pulsar is available and accessible.
+
+ Args:
+ pulsar_host (str): The Pulsar host to check
+
+ Returns:
+ bool: True if Pulsar is available and accessible, False otherwise
+ """
+ if not PULSAR_AVAILABLE:
+ logger.error("Pulsar client library is not installed")
+ return False
+
+ try:
+ logger.debug(
+ f"Checking Pulsar availability at {pulsar_host}"
+ )
+ client = pulsar.Client(pulsar_host)
+ client.close()
+ logger.info("Pulsar is available and accessible")
+ return True
+ except Exception as e:
+ logger.error(f"Pulsar is not accessible: {str(e)}")
+ return False
+
+ def health_check(self) -> Dict[str, bool]:
+ """
+ Perform a health check of the Pulsar connection and components.
+
+ Returns:
+ Dict[str, bool]: Health status of different components
+ """
+ health = {
+ "client_connected": False,
+ "producer_active": False,
+ "consumer_active": False,
+ }
+
+ try:
+ # Check client
+ if hasattr(self, "client"):
+ health["client_connected"] = True
+
+ # Check producer
+ if hasattr(self, "producer"):
+ # Try to send a test message
+ test_msg = json.dumps(
+ {"type": "health_check"}
+ ).encode("utf-8")
+ self.producer.send(test_msg)
+ health["producer_active"] = True
+
+ # Check consumer
+ if hasattr(self, "consumer"):
+ try:
+ msg = self.consumer.receive(timeout_millis=1000)
+ self.consumer.acknowledge(msg)
+ health["consumer_active"] = True
+ except pulsar.Timeout:
+ pass
+
+ logger.info(f"Health check results: {health}")
+ return health
+
+ except Exception as e:
+ logger.error(f"Health check failed: {str(e)}")
+ return health
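
For the new Pulsar backend, the intended flow is: verify broker availability, construct the conversation (which creates a per-conversation topic, producer, and subscription), then add and read messages. A minimal sketch follows, assuming a broker is reachable at the default `pulsar://localhost:6650`; note that `get_str()` drains and acknowledges the subscription as implemented above.

```python
# Illustrative sketch; requires `pip install pulsar-client` and a running broker.
from swarms.communication.pulsar_struct import PulsarConversation

if PulsarConversation.check_pulsar_availability():
    convo = PulsarConversation(
        system_prompt="You are a concise research assistant.",
        topic="demo-conversation",
    )
    convo.add("user", "List three risks in the proposal.")
    # get_str() consumes and acknowledges the queued messages,
    # so read the history once and reuse the result.
    print(convo.get_str())
```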
diff --git a/swarms/communication/redis_wrap.py b/swarms/communication/redis_wrap.py
new file mode 100644
index 00000000..20e7bedc
--- /dev/null
+++ b/swarms/communication/redis_wrap.py
@@ -0,0 +1,1362 @@
+import datetime
+import hashlib
+import json
+import threading
+import subprocess
+import tempfile
+import os
+import atexit
+import time
+from typing import Any, Dict, List, Optional, Union
+
+import yaml
+
+try:
+ import redis
+ from redis.exceptions import (
+ AuthenticationError,
+ BusyLoadingError,
+ ConnectionError,
+ RedisError,
+ TimeoutError,
+ )
+
+ REDIS_AVAILABLE = True
+except ImportError:
+ REDIS_AVAILABLE = False
+
+from loguru import logger
+
+from swarms.structs.base_structure import BaseStructure
+from swarms.utils.any_to_str import any_to_str
+from swarms.utils.formatter import formatter
+from swarms.utils.litellm_tokenizer import count_tokens
+
+
+class RedisConnectionError(Exception):
+ """Custom exception for Redis connection errors."""
+
+ pass
+
+
+class RedisOperationError(Exception):
+ """Custom exception for Redis operation errors."""
+
+ pass
+
+
+class EmbeddedRedisServer:
+ """Embedded Redis server manager"""
+
+ def __init__(
+ self,
+ port: int = 6379,
+ data_dir: str = None,
+ persist: bool = True,
+ auto_persist: bool = True,
+ ):
+ self.port = port
+ self.process = None
+ self.data_dir = data_dir or os.path.expanduser(
+ "~/.swarms/redis"
+ )
+ self.persist = persist
+ self.auto_persist = auto_persist
+
+ # Only create data directory if persistence is enabled
+ if self.persist and self.auto_persist:
+ os.makedirs(self.data_dir, exist_ok=True)
+ # Create Redis configuration file
+ self._create_redis_config()
+
+ atexit.register(self.stop)
+
+ def _create_redis_config(self):
+ """Create Redis configuration file with persistence settings"""
+ config_path = os.path.join(self.data_dir, "redis.conf")
+ config_content = f"""
+port {self.port}
+dir {self.data_dir}
+dbfilename dump.rdb
+appendonly yes
+appendfilename appendonly.aof
+appendfsync everysec
+save 1 1
+rdbcompression yes
+rdbchecksum yes
+"""
+ with open(config_path, "w") as f:
+ f.write(config_content)
+ logger.info(f"Created Redis configuration at {config_path}")
+
+ def start(self) -> bool:
+ """Start the Redis server
+
+ Returns:
+ bool: True if server started successfully, False otherwise
+ """
+ try:
+ # Use data directory if persistence is enabled and auto_persist is True
+ if not (self.persist and self.auto_persist):
+ self.data_dir = tempfile.mkdtemp()
+ self._create_redis_config() # Create config even for temporary dir
+
+ config_path = os.path.join(self.data_dir, "redis.conf")
+
+ # Start Redis server with config file
+ redis_args = [
+ "redis-server",
+ config_path,
+ "--daemonize",
+ "no",
+ ]
+
+ # Start Redis server
+ self.process = subprocess.Popen(
+ redis_args,
+ stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE,
+ )
+
+ # Wait for Redis to start
+ time.sleep(1)
+ if self.process.poll() is not None:
+ stderr = self.process.stderr.read().decode()
+ raise Exception(f"Redis failed to start: {stderr}")
+
+ # Test connection
+ try:
+ r = redis.Redis(host="localhost", port=self.port)
+ r.ping()
+ r.close()
+ except redis.ConnectionError as e:
+ raise Exception(
+ f"Could not connect to Redis: {str(e)}"
+ )
+
+ logger.info(
+ f"Started {'persistent' if (self.persist and self.auto_persist) else 'temporary'} Redis server on port {self.port}"
+ )
+ if self.persist and self.auto_persist:
+ logger.info(f"Redis data directory: {self.data_dir}")
+ return True
+ except Exception as e:
+ logger.error(
+ f"Failed to start embedded Redis server: {str(e)}"
+ )
+ self.stop()
+ return False
+
+ def stop(self):
+ """Stop the Redis server and cleanup resources"""
+ try:
+ if self.process:
+ # Send SAVE and BGSAVE commands before stopping if persistence is enabled
+ if self.persist and self.auto_persist:
+ try:
+ r = redis.Redis(
+ host="localhost", port=self.port
+ )
+ r.save() # Synchronous save
+ r.bgsave() # Asynchronous save
+ time.sleep(
+ 1
+ ) # Give time for background save to complete
+ r.close()
+ except Exception as e:
+ logger.warning(
+ f"Error during Redis save: {str(e)}"
+ )
+
+ self.process.terminate()
+ try:
+ self.process.wait(timeout=5)
+ except subprocess.TimeoutExpired:
+ self.process.kill()
+ self.process.wait()
+ self.process = None
+ logger.info("Stopped Redis server")
+
+ # Only remove directory if not persisting or auto_persist is False
+ if (
+ (not self.persist or not self.auto_persist)
+ and self.data_dir
+ and os.path.exists(self.data_dir)
+ ):
+ import shutil
+
+ shutil.rmtree(self.data_dir)
+ self.data_dir = None
+ except Exception as e:
+ logger.error(f"Error stopping Redis server: {str(e)}")
+
+
+class RedisConversation(BaseStructure):
+ """
+ A Redis-based implementation of the Conversation class for managing conversation history.
+ This class provides the same interface as the memory-based Conversation class but uses
+ Redis as the storage backend.
+
+ Attributes:
+ system_prompt (Optional[str]): The system prompt for the conversation.
+ time_enabled (bool): Flag to enable time tracking for messages.
+ autosave (bool): Flag to enable automatic saving of conversation history.
+ save_filepath (str): File path for saving the conversation history.
+ tokenizer (Any): Tokenizer for counting tokens in messages.
+ context_length (int): Maximum number of tokens allowed in the conversation history.
+ rules (str): Rules for the conversation.
+ custom_rules_prompt (str): Custom prompt for rules.
+ user (str): The user identifier for messages.
+ auto_save (bool): Flag to enable auto-saving of conversation history.
+ save_as_yaml (bool): Flag to save conversation history as YAML.
+ save_as_json_bool (bool): Flag to save conversation history as JSON.
+ token_count (bool): Flag to enable token counting for messages.
+ cache_enabled (bool): Flag to enable prompt caching.
+ cache_stats (dict): Statistics about cache usage.
+ cache_lock (threading.Lock): Lock for thread-safe cache operations.
+ redis_client (redis.Redis): Redis client instance.
+ conversation_id (str): Unique identifier for the current conversation.
+ """
+
+ def __init__(
+ self,
+ system_prompt: Optional[str] = None,
+ time_enabled: bool = False,
+ autosave: bool = False,
+ save_filepath: str = None,
+ tokenizer: Any = None,
+ context_length: int = 8192,
+ rules: str = None,
+ custom_rules_prompt: str = None,
+ user: str = "User:",
+ auto_save: bool = True,
+ save_as_yaml: bool = True,
+ save_as_json_bool: bool = False,
+ token_count: bool = True,
+ cache_enabled: bool = True,
+ redis_host: str = "localhost",
+ redis_port: int = 6379,
+ redis_db: int = 0,
+ redis_password: Optional[str] = None,
+ redis_ssl: bool = False,
+ redis_retry_attempts: int = 3,
+ redis_retry_delay: float = 1.0,
+ use_embedded_redis: bool = True,
+ persist_redis: bool = True,
+ auto_persist: bool = True,
+ redis_data_dir: Optional[str] = None,
+ conversation_id: Optional[str] = None,
+ name: Optional[str] = None,
+ *args,
+ **kwargs,
+ ):
+ """
+ Initialize the RedisConversation with Redis backend.
+
+ Args:
+ system_prompt (Optional[str]): The system prompt for the conversation.
+ time_enabled (bool): Flag to enable time tracking for messages.
+ autosave (bool): Flag to enable automatic saving of conversation history.
+ save_filepath (str): File path for saving the conversation history.
+ tokenizer (Any): Tokenizer for counting tokens in messages.
+ context_length (int): Maximum number of tokens allowed in the conversation history.
+ rules (str): Rules for the conversation.
+ custom_rules_prompt (str): Custom prompt for rules.
+ user (str): The user identifier for messages.
+ auto_save (bool): Flag to enable auto-saving of conversation history.
+ save_as_yaml (bool): Flag to save conversation history as YAML.
+ save_as_json_bool (bool): Flag to save conversation history as JSON.
+ token_count (bool): Flag to enable token counting for messages.
+ cache_enabled (bool): Flag to enable prompt caching.
+ redis_host (str): Redis server host.
+ redis_port (int): Redis server port.
+ redis_db (int): Redis database number.
+ redis_password (Optional[str]): Redis password for authentication.
+ redis_ssl (bool): Whether to use SSL for Redis connection.
+ redis_retry_attempts (int): Number of connection retry attempts.
+ redis_retry_delay (float): Delay between retry attempts in seconds.
+ use_embedded_redis (bool): Whether to start an embedded Redis server.
+ If True, redis_host and redis_port will be used for the embedded server.
+ persist_redis (bool): Whether to enable Redis persistence.
+ auto_persist (bool): Whether to automatically handle persistence.
+ If True, persistence will be managed automatically.
+ If False, persistence will be manual even if persist_redis is True.
+ redis_data_dir (Optional[str]): Directory for Redis data persistence.
+ conversation_id (Optional[str]): Specific conversation ID to use/restore.
+ If None, a new ID will be generated.
+ name (Optional[str]): A friendly name for the conversation.
+ If provided, this will be used to look up or create a conversation.
+ Takes precedence over conversation_id if both are provided.
+
+ Raises:
+ ImportError: If Redis package is not installed.
+ RedisConnectionError: If connection to Redis fails.
+ RedisOperationError: If Redis operations fail.
+ """
+ if not REDIS_AVAILABLE:
+ logger.error(
+ "Redis package is not installed. Please install it with 'pip install redis'"
+ )
+ raise ImportError(
+ "Redis package is not installed. Please install it with 'pip install redis'"
+ )
+
+ super().__init__()
+ self.system_prompt = system_prompt
+ self.time_enabled = time_enabled
+ self.autosave = autosave
+ self.save_filepath = save_filepath
+ self.tokenizer = tokenizer
+ self.context_length = context_length
+ self.rules = rules
+ self.custom_rules_prompt = custom_rules_prompt
+ self.user = user
+ self.auto_save = auto_save
+ self.save_as_yaml = save_as_yaml
+ self.save_as_json_bool = save_as_json_bool
+ self.token_count = token_count
+ self.cache_enabled = cache_enabled
+ self.cache_stats = {
+ "hits": 0,
+ "misses": 0,
+ "cached_tokens": 0,
+ "total_tokens": 0,
+ }
+ self.cache_lock = threading.Lock()
+
+ # Initialize Redis server (embedded or external)
+ self.embedded_server = None
+ if use_embedded_redis:
+ self.embedded_server = EmbeddedRedisServer(
+ port=redis_port,
+ data_dir=redis_data_dir,
+ persist=persist_redis,
+ auto_persist=auto_persist,
+ )
+ if not self.embedded_server.start():
+ raise RedisConnectionError(
+ "Failed to start embedded Redis server"
+ )
+
+ # Initialize Redis client with retries
+ self.redis_client = None
+ self._initialize_redis_connection(
+ host=redis_host,
+ port=redis_port,
+ db=redis_db,
+ password=redis_password,
+ ssl=redis_ssl,
+ retry_attempts=redis_retry_attempts,
+ retry_delay=redis_retry_delay,
+ )
+
+ # Handle conversation name and ID
+ self.name = name
+ if name:
+ # Try to find existing conversation by name
+ existing_id = self._get_conversation_id_by_name(name)
+ if existing_id:
+ self.conversation_id = existing_id
+ logger.info(
+ f"Found existing conversation '{name}' with ID: {self.conversation_id}"
+ )
+ else:
+ # Create new conversation with name
+ self.conversation_id = f"conversation:{datetime.datetime.now().strftime('%Y%m%d%H%M%S')}"
+ self._save_conversation_name(name)
+ logger.info(
+ f"Created new conversation '{name}' with ID: {self.conversation_id}"
+ )
+ else:
+ # Use provided ID or generate new one
+ self.conversation_id = (
+ conversation_id
+ or f"conversation:{datetime.datetime.now().strftime('%Y%m%d%H%M%S')}"
+ )
+ logger.info(
+ f"Using conversation ID: {self.conversation_id}"
+ )
+
+ # Check if we have existing data
+ has_existing_data = self._load_existing_data()
+
+ if has_existing_data:
+ logger.info(
+ f"Restored conversation data for: {self.name or self.conversation_id}"
+ )
+ else:
+ logger.info(
+ f"Initialized new conversation: {self.name or self.conversation_id}"
+ )
+ # Initialize with prompts only for new conversations
+ try:
+ if self.system_prompt is not None:
+ self.add("System", self.system_prompt)
+
+ if self.rules is not None:
+ self.add("User", rules)
+
+ if custom_rules_prompt is not None:
+ self.add(user or "User", custom_rules_prompt)
+ except RedisError as e:
+ logger.error(
+ f"Failed to initialize conversation: {str(e)}"
+ )
+ raise RedisOperationError(
+ f"Failed to initialize conversation: {str(e)}"
+ )
+
+ def _initialize_redis_connection(
+ self,
+ host: str,
+ port: int,
+ db: int,
+ password: Optional[str],
+ ssl: bool,
+ retry_attempts: int,
+ retry_delay: float,
+ ):
+ """Initialize Redis connection with retry mechanism.
+
+ Args:
+ host (str): Redis host.
+ port (int): Redis port.
+ db (int): Redis database number.
+ password (Optional[str]): Redis password.
+ ssl (bool): Whether to use SSL.
+ retry_attempts (int): Number of retry attempts.
+ retry_delay (float): Delay between retries in seconds.
+
+ Raises:
+ RedisConnectionError: If connection fails after all retries.
+ """
+
+ for attempt in range(retry_attempts):
+ try:
+ self.redis_client = redis.Redis(
+ host=host,
+ port=port,
+ db=db,
+ password=password,
+ ssl=ssl,
+ decode_responses=True,
+ socket_timeout=5.0,
+ socket_connect_timeout=5.0,
+ )
+ # Test connection and load data
+ self.redis_client.ping()
+
+ # Try to load the RDB file if it exists
+ try:
+ self.redis_client.config_set(
+ "dbfilename", "dump.rdb"
+ )
+ self.redis_client.config_set(
+ "dir", os.path.expanduser("~/.swarms/redis")
+ )
+ except redis.ResponseError:
+ pass # Ignore if config set fails
+
+ logger.info(
+ f"Successfully connected to Redis at {host}:{port}"
+ )
+ return
+ except (
+ ConnectionError,
+ TimeoutError,
+ AuthenticationError,
+ BusyLoadingError,
+ ) as e:
+ if attempt < retry_attempts - 1:
+ logger.warning(
+ f"Redis connection attempt {attempt + 1} failed: {str(e)}"
+ )
+ time.sleep(retry_delay)
+ else:
+ logger.error(
+ f"Failed to connect to Redis after {retry_attempts} attempts"
+ )
+ raise RedisConnectionError(
+ f"Failed to connect to Redis: {str(e)}"
+ )
+
+ def _load_existing_data(self):
+ """Load existing data for a conversation ID if it exists"""
+ try:
+ # Check if conversation exists
+ message_ids = self.redis_client.lrange(
+ f"{self.conversation_id}:message_ids", 0, -1
+ )
+ if message_ids:
+ logger.info(
+ f"Found existing data for conversation {self.conversation_id}"
+ )
+ return True
+ return False
+ except Exception as e:
+ logger.warning(
+ f"Error checking for existing data: {str(e)}"
+ )
+ return False
+
+ def _safe_redis_operation(
+ self,
+ operation_name: str,
+ operation_func: callable,
+ *args,
+ **kwargs,
+ ):
+ """Execute Redis operation safely with error handling and logging.
+
+ Args:
+ operation_name (str): Name of the operation for logging.
+ operation_func (callable): Function to execute.
+ *args: Arguments for the function.
+ **kwargs: Keyword arguments for the function.
+
+ Returns:
+ Any: Result of the operation.
+
+ Raises:
+ RedisOperationError: If the operation fails.
+ """
+ try:
+ return operation_func(*args, **kwargs)
+ except RedisError as e:
+ error_msg = (
+ f"Redis operation '{operation_name}' failed: {str(e)}"
+ )
+ logger.error(error_msg)
+ raise RedisOperationError(error_msg)
+ except Exception as e:
+ error_msg = f"Unexpected error during Redis operation '{operation_name}': {str(e)}"
+ logger.error(error_msg)
+ raise
+
+ def _generate_cache_key(
+ self, content: Union[str, dict, list]
+ ) -> str:
+ """Generate a cache key for the given content.
+
+ Args:
+ content (Union[str, dict, list]): The content to generate a cache key for.
+
+ Returns:
+ str: The cache key.
+ """
+ try:
+ if isinstance(content, (dict, list)):
+ content = json.dumps(content, sort_keys=True)
+ return hashlib.md5(str(content).encode()).hexdigest()
+ except Exception as e:
+ logger.error(f"Failed to generate cache key: {str(e)}")
+ return hashlib.md5(
+ str(datetime.datetime.now()).encode()
+ ).hexdigest()
+
+ def _get_cached_tokens(
+ self, content: Union[str, dict, list]
+ ) -> Optional[int]:
+ """Get the number of cached tokens for the given content.
+
+ Args:
+ content (Union[str, dict, list]): The content to check.
+
+ Returns:
+ Optional[int]: The number of cached tokens, or None if not cached.
+ """
+ if not self.cache_enabled:
+ return None
+
+ with self.cache_lock:
+ try:
+ cache_key = self._generate_cache_key(content)
+ cached_value = self._safe_redis_operation(
+ "get_cached_tokens",
+ self.redis_client.hget,
+ f"{self.conversation_id}:cache",
+ cache_key,
+ )
+ if cached_value:
+ self.cache_stats["hits"] += 1
+ return int(cached_value)
+ self.cache_stats["misses"] += 1
+ return None
+ except Exception as e:
+ logger.warning(
+ f"Failed to get cached tokens: {str(e)}"
+ )
+ return None
+
+ def _update_cache_stats(
+ self, content: Union[str, dict, list], token_count: int
+ ):
+ """Update cache statistics for the given content.
+
+ Args:
+ content (Union[str, dict, list]): The content to update stats for.
+ token_count (int): The number of tokens in the content.
+ """
+ if not self.cache_enabled:
+ return
+
+ with self.cache_lock:
+ try:
+ cache_key = self._generate_cache_key(content)
+ self._safe_redis_operation(
+ "update_cache",
+ self.redis_client.hset,
+ f"{self.conversation_id}:cache",
+ cache_key,
+ token_count,
+ )
+ self.cache_stats["cached_tokens"] += token_count
+ self.cache_stats["total_tokens"] += token_count
+ except Exception as e:
+ logger.warning(
+ f"Failed to update cache stats: {str(e)}"
+ )
+
+ def add(
+ self,
+ role: str,
+ content: Union[str, dict, list],
+ *args,
+ **kwargs,
+ ):
+ """Add a message to the conversation history.
+
+ Args:
+ role (str): The role of the speaker (e.g., 'User', 'System').
+ content (Union[str, dict, list]): The content of the message.
+
+ Raises:
+ RedisOperationError: If the operation fails.
+ """
+ try:
+ message = {
+ "role": role,
+ "timestamp": datetime.datetime.now().isoformat(),
+ }
+
+ if isinstance(content, (dict, list)):
+ message["content"] = json.dumps(content)
+ elif self.time_enabled:
+ message["content"] = (
+ f"Time: {datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')} \n {content}"
+ )
+ else:
+ message["content"] = str(content)
+
+ # Check cache for token count
+ cached_tokens = self._get_cached_tokens(content)
+ if cached_tokens is not None:
+ message["token_count"] = cached_tokens
+ message["cached"] = "true"
+ else:
+ message["cached"] = "false"
+
+ # Add message to Redis
+ message_id = self._safe_redis_operation(
+ "increment_counter",
+ self.redis_client.incr,
+ f"{self.conversation_id}:message_counter",
+ )
+
+ self._safe_redis_operation(
+ "store_message",
+ self.redis_client.hset,
+ f"{self.conversation_id}:message:{message_id}",
+ mapping=message,
+ )
+
+ self._safe_redis_operation(
+ "append_message_id",
+ self.redis_client.rpush,
+ f"{self.conversation_id}:message_ids",
+ message_id,
+ )
+
+ if (
+ self.token_count is True
+ and message["cached"] == "false"
+ ):
+ self._count_tokens(content, message, message_id)
+
+ logger.debug(
+ f"Added message with ID {message_id} to conversation {self.conversation_id}"
+ )
+ except Exception as e:
+ error_msg = f"Failed to add message: {str(e)}"
+ logger.error(error_msg)
+ raise RedisOperationError(error_msg)
+
+ def _count_tokens(
+ self, content: str, message: dict, message_id: int
+ ):
+ """Count tokens for a message in a separate thread.
+
+ Args:
+ content (str): The content to count tokens for.
+ message (dict): The message dictionary.
+ message_id (int): The ID of the message in Redis.
+ """
+
+ def count_tokens_thread():
+ try:
+ tokens = count_tokens(any_to_str(content))
+ message["token_count"] = int(tokens)
+
+ # Update the message in Redis
+ self._safe_redis_operation(
+ "update_token_count",
+ self.redis_client.hset,
+ f"{self.conversation_id}:message:{message_id}",
+ "token_count",
+ int(tokens),
+ )
+
+ # Update cache stats
+ self._update_cache_stats(content, int(tokens))
+
+ if self.autosave and self.save_filepath:
+ self.save_as_json(self.save_filepath)
+
+ logger.debug(
+ f"Updated token count for message {message_id}: {tokens} tokens"
+ )
+ except Exception as e:
+ logger.error(
+ f"Failed to count tokens for message {message_id}: {str(e)}"
+ )
+
+ token_thread = threading.Thread(target=count_tokens_thread)
+ token_thread.daemon = True
+ token_thread.start()
+
+ def delete(self, index: int):
+ """Delete a message from the conversation history.
+
+ Args:
+ index (int): Index of the message to delete.
+
+ Raises:
+ RedisOperationError: If the operation fails.
+ ValueError: If the index is invalid.
+ """
+ try:
+ message_ids = self._safe_redis_operation(
+ "get_message_ids",
+ self.redis_client.lrange,
+ f"{self.conversation_id}:message_ids",
+ 0,
+ -1,
+ )
+
+ if not (0 <= index < len(message_ids)):
+ raise ValueError(f"Invalid message index: {index}")
+
+ message_id = message_ids[index]
+ self._safe_redis_operation(
+ "delete_message",
+ self.redis_client.delete,
+ f"{self.conversation_id}:message:{message_id}",
+ )
+ self._safe_redis_operation(
+ "remove_message_id",
+ self.redis_client.lrem,
+ f"{self.conversation_id}:message_ids",
+ 1,
+ message_id,
+ )
+ logger.info(
+ f"Deleted message {message_id} from conversation {self.conversation_id}"
+ )
+ except Exception as e:
+ error_msg = (
+ f"Failed to delete message at index {index}: {str(e)}"
+ )
+ logger.error(error_msg)
+ raise RedisOperationError(error_msg)
+
+ def update(
+ self, index: int, role: str, content: Union[str, dict]
+ ):
+ """Update a message in the conversation history.
+
+ Args:
+ index (int): Index of the message to update.
+ role (str): Role of the speaker.
+ content (Union[str, dict]): New content of the message.
+
+ Raises:
+ RedisOperationError: If the operation fails.
+ ValueError: If the index is invalid.
+ """
+ try:
+ message_ids = self._safe_redis_operation(
+ "get_message_ids",
+ self.redis_client.lrange,
+ f"{self.conversation_id}:message_ids",
+ 0,
+ -1,
+ )
+
+ if not message_ids or not (0 <= index < len(message_ids)):
+ raise ValueError(f"Invalid message index: {index}")
+
+ message_id = message_ids[index]
+ message = {
+ "role": role,
+ "content": (
+ json.dumps(content)
+ if isinstance(content, (dict, list))
+ else str(content)
+ ),
+ "timestamp": datetime.datetime.now().isoformat(),
+ "cached": "false",
+ }
+
+ # Update the message in Redis
+ self._safe_redis_operation(
+ "update_message",
+ self.redis_client.hset,
+ f"{self.conversation_id}:message:{message_id}",
+ mapping=message,
+ )
+
+ # Update token count if needed
+ if self.token_count:
+ self._count_tokens(content, message, message_id)
+
+ logger.debug(
+ f"Updated message {message_id} in conversation {self.conversation_id}"
+ )
+ except Exception as e:
+ error_msg = (
+ f"Failed to update message at index {index}: {str(e)}"
+ )
+ logger.error(error_msg)
+ raise RedisOperationError(error_msg)
+
+ def query(self, index: int) -> dict:
+ """Query a message in the conversation history.
+
+ Args:
+ index (int): Index of the message to query.
+
+ Returns:
+ dict: The message with its role and content.
+ """
+ message_ids = self.redis_client.lrange(
+ f"{self.conversation_id}:message_ids", 0, -1
+ )
+ if 0 <= index < len(message_ids):
+ message_id = message_ids[index]
+ message = self.redis_client.hgetall(
+ f"{self.conversation_id}:message:{message_id}"
+ )
+ if "content" in message and message["content"].startswith(
+ "{"
+ ):
+ try:
+ message["content"] = json.loads(
+ message["content"]
+ )
+ except json.JSONDecodeError:
+ pass
+ return message
+ return {}
+
+ def search(self, keyword: str) -> List[dict]:
+ """Search for messages containing a keyword.
+
+ Args:
+ keyword (str): Keyword to search for.
+
+ Returns:
+ List[dict]: List of messages containing the keyword.
+ """
+ results = []
+ message_ids = self.redis_client.lrange(
+ f"{self.conversation_id}:message_ids", 0, -1
+ )
+
+ for message_id in message_ids:
+ message = self.redis_client.hgetall(
+ f"{self.conversation_id}:message:{message_id}"
+ )
+ if keyword in message.get("content", ""):
+ if message["content"].startswith("{"):
+ try:
+ message["content"] = json.loads(
+ message["content"]
+ )
+ except json.JSONDecodeError:
+ pass
+ results.append(message)
+
+ return results
+
+ def display_conversation(self, detailed: bool = False):
+ """Display the conversation history.
+
+ Args:
+ detailed (bool): Whether to show detailed information.
+ """
+ message_ids = self.redis_client.lrange(
+ f"{self.conversation_id}:message_ids", 0, -1
+ )
+ for message_id in message_ids:
+ message = self.redis_client.hgetall(
+ f"{self.conversation_id}:message:{message_id}"
+ )
+ if message["content"].startswith("{"):
+ try:
+ message["content"] = json.loads(
+ message["content"]
+ )
+ except json.JSONDecodeError:
+ pass
+ formatter.print_panel(
+ f"{message['role']}: {message['content']}\n\n"
+ )
+
+ def export_conversation(self, filename: str):
+ """Export the conversation history to a file.
+
+ Args:
+ filename (str): Filename to export to.
+ """
+ with open(filename, "w") as f:
+ message_ids = self.redis_client.lrange(
+ f"{self.conversation_id}:message_ids", 0, -1
+ )
+ for message_id in message_ids:
+ message = self.redis_client.hgetall(
+ f"{self.conversation_id}:message:{message_id}"
+ )
+ f.write(f"{message['role']}: {message['content']}\n")
+
+ def import_conversation(self, filename: str):
+ """Import a conversation history from a file.
+
+ Args:
+ filename (str): Filename to import from.
+ """
+ with open(filename) as f:
+ for line in f:
+ role, content = line.split(": ", 1)
+ self.add(role, content.strip())
+
+ def count_messages_by_role(self) -> Dict[str, int]:
+ """Count messages by role.
+
+ Returns:
+ Dict[str, int]: Count of messages by role.
+ """
+ counts = {
+ "system": 0,
+ "user": 0,
+ "assistant": 0,
+ "function": 0,
+ }
+ message_ids = self.redis_client.lrange(
+ f"{self.conversation_id}:message_ids", 0, -1
+ )
+ for message_id in message_ids:
+ message = self.redis_client.hgetall(
+ f"{self.conversation_id}:message:{message_id}"
+ )
+ role = message["role"].lower()
+ if role in counts:
+ counts[role] += 1
+ return counts
+
+ def return_history_as_string(self) -> str:
+ """Return the conversation history as a string.
+
+ Returns:
+ str: The conversation history formatted as a string.
+ """
+ messages = []
+ message_ids = self.redis_client.lrange(
+ f"{self.conversation_id}:message_ids", 0, -1
+ )
+ for message_id in message_ids:
+ message = self.redis_client.hgetall(
+ f"{self.conversation_id}:message:{message_id}"
+ )
+ messages.append(
+ f"{message['role']}: {message['content']}\n\n"
+ )
+ return "".join(messages)
+
+ def get_str(self) -> str:
+ """Get the conversation history as a string.
+
+ Returns:
+ str: The conversation history.
+ """
+ messages = []
+ message_ids = self.redis_client.lrange(
+ f"{self.conversation_id}:message_ids", 0, -1
+ )
+ for message_id in message_ids:
+ message = self.redis_client.hgetall(
+ f"{self.conversation_id}:message:{message_id}"
+ )
+ msg_str = f"{message['role']}: {message['content']}"
+ if "token_count" in message:
+ msg_str += f" (tokens: {message['token_count']})"
+ if message.get("cached", "false") == "true":
+ msg_str += " [cached]"
+ messages.append(msg_str)
+ return "\n".join(messages)
+
+ def save_as_json(self, filename: str = None):
+ """Save the conversation history as a JSON file.
+
+ Args:
+ filename (str): Filename to save to.
+ """
+ if filename:
+ data = []
+ message_ids = self.redis_client.lrange(
+ f"{self.conversation_id}:message_ids", 0, -1
+ )
+ for message_id in message_ids:
+ message = self.redis_client.hgetall(
+ f"{self.conversation_id}:message:{message_id}"
+ )
+ if message["content"].startswith("{"):
+ try:
+ message["content"] = json.loads(
+ message["content"]
+ )
+ except json.JSONDecodeError:
+ pass
+ data.append(message)
+
+ with open(filename, "w") as f:
+ json.dump(data, f, indent=2)
+
+ def load_from_json(self, filename: str):
+ """Load the conversation history from a JSON file.
+
+ Args:
+ filename (str): Filename to load from.
+ """
+ with open(filename) as f:
+ data = json.load(f)
+ self.clear() # Clear existing conversation
+ for message in data:
+ self.add(message["role"], message["content"])
+
+ def clear(self):
+ """Clear the conversation history."""
+ # Get all message IDs
+ message_ids = self.redis_client.lrange(
+ f"{self.conversation_id}:message_ids", 0, -1
+ )
+
+ # Delete all messages
+ for message_id in message_ids:
+ self.redis_client.delete(
+ f"{self.conversation_id}:message:{message_id}"
+ )
+
+ # Clear message IDs list
+ self.redis_client.delete(
+ f"{self.conversation_id}:message_ids"
+ )
+
+ # Clear cache
+ self.redis_client.delete(f"{self.conversation_id}:cache")
+
+ # Reset message counter
+ self.redis_client.delete(
+ f"{self.conversation_id}:message_counter"
+ )
+
+ def to_dict(self) -> List[Dict]:
+ """Convert the conversation history to a dictionary.
+
+ Returns:
+ List[Dict]: The conversation history as a list of dictionaries.
+ """
+ data = []
+ message_ids = self.redis_client.lrange(
+ f"{self.conversation_id}:message_ids", 0, -1
+ )
+ for message_id in message_ids:
+ message = self.redis_client.hgetall(
+ f"{self.conversation_id}:message:{message_id}"
+ )
+ if message["content"].startswith("{"):
+ try:
+ message["content"] = json.loads(
+ message["content"]
+ )
+ except json.JSONDecodeError:
+ pass
+ data.append(message)
+ return data
+
+ def to_json(self) -> str:
+ """Convert the conversation history to a JSON string.
+
+ Returns:
+ str: The conversation history as a JSON string.
+ """
+ return json.dumps(self.to_dict(), indent=2)
+
+ def to_yaml(self) -> str:
+ """Convert the conversation history to a YAML string.
+
+ Returns:
+ str: The conversation history as a YAML string.
+ """
+ return yaml.dump(self.to_dict())
+
+ def get_last_message_as_string(self) -> str:
+ """Get the last message as a formatted string.
+
+ Returns:
+ str: The last message formatted as 'role: content'.
+ """
+ message_ids = self.redis_client.lrange(
+ f"{self.conversation_id}:message_ids", -1, -1
+ )
+ if message_ids:
+ message = self.redis_client.hgetall(
+ f"{self.conversation_id}:message:{message_ids[0]}"
+ )
+ return f"{message['role']}: {message['content']}"
+ return ""
+
+ def return_messages_as_list(self) -> List[str]:
+ """Return the conversation messages as a list of formatted strings.
+
+ Returns:
+ List[str]: List of messages formatted as 'role: content'.
+ """
+ messages = []
+ message_ids = self.redis_client.lrange(
+ f"{self.conversation_id}:message_ids", 0, -1
+ )
+ for message_id in message_ids:
+ message = self.redis_client.hgetall(
+ f"{self.conversation_id}:message:{message_id}"
+ )
+ messages.append(
+ f"{message['role']}: {message['content']}"
+ )
+ return messages
+
+ def return_messages_as_dictionary(self) -> List[Dict]:
+ """Return the conversation messages as a list of dictionaries.
+
+ Returns:
+ List[Dict]: List of dictionaries containing role and content of each message.
+ """
+ messages = []
+ message_ids = self.redis_client.lrange(
+ f"{self.conversation_id}:message_ids", 0, -1
+ )
+ for message_id in message_ids:
+ message = self.redis_client.hgetall(
+ f"{self.conversation_id}:message:{message_id}"
+ )
+ if message["content"].startswith("{"):
+ try:
+ message["content"] = json.loads(
+ message["content"]
+ )
+ except json.JSONDecodeError:
+ pass
+ messages.append(
+ {
+ "role": message["role"],
+ "content": message["content"],
+ }
+ )
+ return messages
+
+ def get_cache_stats(self) -> Dict[str, Union[int, float]]:
+ """Get statistics about cache usage.
+
+ Returns:
+ Dict[str, Union[int, float]]: Statistics about cache usage.
+ """
+ with self.cache_lock:
+ total = (
+ self.cache_stats["hits"] + self.cache_stats["misses"]
+ )
+ hit_rate = (
+ self.cache_stats["hits"] / total if total > 0 else 0
+ )
+ return {
+ "hits": self.cache_stats["hits"],
+ "misses": self.cache_stats["misses"],
+ "cached_tokens": self.cache_stats["cached_tokens"],
+ "total_tokens": self.cache_stats["total_tokens"],
+ "hit_rate": hit_rate,
+ }
+
+ def truncate_memory_with_tokenizer(self):
+ """Truncate the conversation history based on token count."""
+ if not self.tokenizer:
+ return
+
+ total_tokens = 0
+ message_ids = self.redis_client.lrange(
+ f"{self.conversation_id}:message_ids", 0, -1
+ )
+ keep_message_ids = []
+
+ for message_id in message_ids:
+ message = self.redis_client.hgetall(
+ f"{self.conversation_id}:message:{message_id}"
+ )
+ tokens = int(
+ message.get("token_count", 0)
+ ) or count_tokens(message["content"])
+
+ if total_tokens + tokens <= self.context_length:
+ total_tokens += tokens
+ keep_message_ids.append(message_id)
+ else:
+ # Delete messages that exceed the context length
+ self.redis_client.delete(
+ f"{self.conversation_id}:message:{message_id}"
+ )
+
+ # Update the message IDs list
+ self.redis_client.delete(
+ f"{self.conversation_id}:message_ids"
+ )
+ if keep_message_ids:
+ self.redis_client.rpush(
+ f"{self.conversation_id}:message_ids",
+ *keep_message_ids,
+ )
+
+ def get_final_message(self) -> str:
+ """Return the final message from the conversation history.
+
+ Returns:
+ str: The final message formatted as 'role: content'.
+ """
+ message_ids = self.redis_client.lrange(
+ f"{self.conversation_id}:message_ids", -1, -1
+ )
+ if message_ids:
+ message = self.redis_client.hgetall(
+ f"{self.conversation_id}:message:{message_ids[0]}"
+ )
+ return f"{message['role']}: {message['content']}"
+ return ""
+
+ def get_final_message_content(self) -> str:
+ """Return the content of the final message from the conversation history.
+
+ Returns:
+ str: The content of the final message.
+ """
+ message_ids = self.redis_client.lrange(
+ f"{self.conversation_id}:message_ids", -1, -1
+ )
+ if message_ids:
+ message = self.redis_client.hgetall(
+ f"{self.conversation_id}:message:{message_ids[0]}"
+ )
+ return message["content"]
+ return ""
+
+ def __del__(self):
+ """Cleanup method to close Redis connection and stop embedded server if running."""
+ try:
+ if hasattr(self, "redis_client") and self.redis_client:
+ self.redis_client.close()
+ logger.debug(
+ f"Closed Redis connection for conversation {self.conversation_id}"
+ )
+
+ if (
+ hasattr(self, "embedded_server")
+ and self.embedded_server
+ ):
+ self.embedded_server.stop()
+ except Exception as e:
+ logger.warning(f"Error during cleanup: {str(e)}")
+
+ def _get_conversation_id_by_name(
+ self, name: str
+ ) -> Optional[str]:
+ """Get conversation ID for a given name.
+
+ Args:
+ name (str): The conversation name to look up.
+
+ Returns:
+ Optional[str]: The conversation ID if found, None otherwise.
+ """
+ try:
+ return self.redis_client.get(f"conversation_name:{name}")
+ except Exception as e:
+ logger.warning(
+ f"Error looking up conversation name: {str(e)}"
+ )
+ return None
+
+ def _save_conversation_name(self, name: str):
+ """Save the mapping between conversation name and ID.
+
+ Args:
+ name (str): The name to save.
+ """
+ try:
+ # Save name -> ID mapping
+ self.redis_client.set(
+ f"conversation_name:{name}", self.conversation_id
+ )
+ # Save ID -> name mapping
+ self.redis_client.set(
+ f"conversation_id:{self.conversation_id}:name", name
+ )
+ except Exception as e:
+ logger.warning(
+ f"Error saving conversation name: {str(e)}"
+ )
+
+ def get_name(self) -> Optional[str]:
+ """Get the friendly name of the conversation.
+
+ Returns:
+ Optional[str]: The conversation name if set, None otherwise.
+ """
+ if hasattr(self, "name") and self.name:
+ return self.name
+ try:
+ return self.redis_client.get(
+ f"conversation_id:{self.conversation_id}:name"
+ )
+ except Exception:
+ return None
+
+ def set_name(self, name: str):
+ """Set a new name for the conversation.
+
+ Args:
+ name (str): The new name to set.
+ """
+ old_name = self.get_name()
+ if old_name:
+ # Remove old name mapping
+ self.redis_client.delete(f"conversation_name:{old_name}")
+
+ self.name = name
+ self._save_conversation_name(name)
+ logger.info(f"Set conversation name to: {name}")
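For reviewers, a minimal usage sketch of the Redis-backed conversation store added above. The class and module names (`RedisConversation` in `swarms.communication.redis_wrap`) and the bare constructor are assumptions based on this file's path; only `add`, `get_str`, `set_name`, `get_cache_stats`, and `save_as_json` are actually visible in the diff.

```python
# Hypothetical usage sketch; class/module names and constructor are assumptions.
from swarms.communication.redis_wrap import RedisConversation

convo = RedisConversation()                 # assumed to connect to (or embed) a local Redis
convo.set_name("support-session-42")        # maps a friendly name to the conversation ID

convo.add("user", "What is the refund policy?")
convo.add("assistant", "Refunds are available within 30 days of purchase.")

print(convo.get_str())          # "role: content" lines with token/cache annotations
print(convo.get_cache_stats())  # hits, misses, cached_tokens, total_tokens, hit_rate
convo.save_as_json("session.json")
```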
diff --git a/swarms/communication/sqlite_wrap.py b/swarms/communication/sqlite_wrap.py
index 4e39a22a..443a456e 100644
--- a/swarms/communication/sqlite_wrap.py
+++ b/swarms/communication/sqlite_wrap.py
@@ -1,15 +1,19 @@
import sqlite3
import json
import datetime
-from typing import List, Optional, Union, Dict
+from typing import List, Optional, Union, Dict, Any
from pathlib import Path
import threading
from contextlib import contextmanager
import logging
-from dataclasses import dataclass
-from enum import Enum
import uuid
import yaml
+from swarms.communication.base_communication import (
+ BaseCommunication,
+ Message,
+ MessageType,
+)
+from typing import Callable
try:
from loguru import logger
@@ -19,32 +23,7 @@ except ImportError:
LOGURU_AVAILABLE = False
-class MessageType(Enum):
- """Enum for different types of messages in the conversation."""
-
- SYSTEM = "system"
- USER = "user"
- ASSISTANT = "assistant"
- FUNCTION = "function"
- TOOL = "tool"
-
-
-@dataclass
-class Message:
- """Data class representing a message in the conversation."""
-
- role: str
- content: Union[str, dict, list]
- timestamp: Optional[str] = None
- message_type: Optional[MessageType] = None
- metadata: Optional[Dict] = None
- token_count: Optional[int] = None
-
- class Config:
- arbitrary_types_allowed = True
-
-
-class SQLiteConversation:
+class SQLiteConversation(BaseCommunication):
"""
A production-grade SQLite wrapper class for managing conversation history.
This class provides persistent storage for conversations with various features
@@ -63,7 +42,21 @@ class SQLiteConversation:
def __init__(
self,
- db_path: str = "conversations.db",
+ system_prompt: Optional[str] = None,
+ time_enabled: bool = False,
+ autosave: bool = False,
+ save_filepath: str = None,
+ tokenizer: Any = None,
+ context_length: int = 8192,
+ rules: str = None,
+ custom_rules_prompt: str = None,
+ user: str = "User:",
+ auto_save: bool = True,
+ save_as_yaml: bool = True,
+ save_as_json_bool: bool = False,
+ token_count: bool = True,
+ cache_enabled: bool = True,
+ db_path: Union[str, Path] = None,
table_name: str = "conversations",
enable_timestamps: bool = True,
enable_logging: bool = True,
@@ -72,19 +65,31 @@ class SQLiteConversation:
connection_timeout: float = 5.0,
**kwargs,
):
- """
- Initialize the SQLite conversation manager.
+ super().__init__(
+ system_prompt=system_prompt,
+ time_enabled=time_enabled,
+ autosave=autosave,
+ save_filepath=save_filepath,
+ tokenizer=tokenizer,
+ context_length=context_length,
+ rules=rules,
+ custom_rules_prompt=custom_rules_prompt,
+ user=user,
+ auto_save=auto_save,
+ save_as_yaml=save_as_yaml,
+ save_as_json_bool=save_as_json_bool,
+ token_count=token_count,
+ cache_enabled=cache_enabled,
+ )
- Args:
- db_path (str): Path to the SQLite database file
- table_name (str): Name of the table to store conversations
- enable_timestamps (bool): Whether to track message timestamps
- enable_logging (bool): Whether to enable logging
- use_loguru (bool): Whether to use loguru for logging
- max_retries (int): Maximum number of retries for database operations
- connection_timeout (float): Timeout for database connections
- """
+ # Calculate default db_path if not provided
+ if db_path is None:
+ db_path = self.get_default_db_path("conversations.sqlite")
self.db_path = Path(db_path)
+
+ # Ensure parent directory exists
+ self.db_path.parent.mkdir(parents=True, exist_ok=True)
+
self.table_name = table_name
self.enable_timestamps = enable_timestamps
self.enable_logging = enable_logging
@@ -92,9 +97,7 @@ class SQLiteConversation:
self.max_retries = max_retries
self.connection_timeout = connection_timeout
self._lock = threading.Lock()
- self.current_conversation_id = (
- self._generate_conversation_id()
- )
+ self.tokenizer = tokenizer
# Setup logging
if self.enable_logging:
@@ -112,6 +115,7 @@ class SQLiteConversation:
# Initialize database
self._init_db()
+ self.start_new_conversation()
def _generate_conversation_id(self) -> str:
"""Generate a unique conversation ID using UUID and timestamp."""
@@ -811,3 +815,502 @@ class SQLiteConversation:
"total_tokens": row["total_tokens"],
"roles": self.count_messages_by_role(),
}
+
+ def delete(self, index: str):
+ """Delete a message from the conversation history."""
+ with self._get_connection() as conn:
+ cursor = conn.cursor()
+ cursor.execute(
+ f"DELETE FROM {self.table_name} WHERE id = ? AND conversation_id = ?",
+ (index, self.current_conversation_id),
+ )
+ conn.commit()
+
+ def update(
+ self, index: str, role: str, content: Union[str, dict]
+ ):
+ """Update a message in the conversation history."""
+ if isinstance(content, (dict, list)):
+ content = json.dumps(content)
+
+ with self._get_connection() as conn:
+ cursor = conn.cursor()
+ cursor.execute(
+ f"""
+ UPDATE {self.table_name}
+ SET role = ?, content = ?
+ WHERE id = ? AND conversation_id = ?
+ """,
+ (role, content, index, self.current_conversation_id),
+ )
+ conn.commit()
+
+ def query(self, index: str) -> Dict:
+ """Query a message in the conversation history."""
+ with self._get_connection() as conn:
+ cursor = conn.cursor()
+ cursor.execute(
+ f"""
+ SELECT * FROM {self.table_name}
+ WHERE id = ? AND conversation_id = ?
+ """,
+ (index, self.current_conversation_id),
+ )
+ row = cursor.fetchone()
+
+ if not row:
+ return {}
+
+ content = row["content"]
+ try:
+ content = json.loads(content)
+ except json.JSONDecodeError:
+ pass
+
+ return {
+ "role": row["role"],
+ "content": content,
+ "timestamp": row["timestamp"],
+ "message_type": row["message_type"],
+ "metadata": (
+ json.loads(row["metadata"])
+ if row["metadata"]
+ else None
+ ),
+ "token_count": row["token_count"],
+ }
+
+ def search(self, keyword: str) -> List[Dict]:
+ """Search for messages containing a keyword."""
+ return self.search_messages(keyword)
+
+ def display_conversation(self, detailed: bool = False):
+ """Display the conversation history."""
+ print(self.get_str())
+
+ def export_conversation(self, filename: str):
+ """Export the conversation history to a file."""
+ self.save_as_json(filename)
+
+ def import_conversation(self, filename: str):
+ """Import a conversation history from a file."""
+ self.load_from_json(filename)
+
+ def return_history_as_string(self) -> str:
+ """Return the conversation history as a string."""
+ return self.get_str()
+
+ def clear(self):
+ """Clear the conversation history."""
+ with self._get_connection() as conn:
+ cursor = conn.cursor()
+ cursor.execute(
+ f"DELETE FROM {self.table_name} WHERE conversation_id = ?",
+ (self.current_conversation_id,),
+ )
+ conn.commit()
+
+ def get_conversation_timeline_dict(self) -> Dict[str, List[Dict]]:
+ """Get the conversation organized by timestamps."""
+ with self._get_connection() as conn:
+ cursor = conn.cursor()
+ cursor.execute(
+ f"""
+ SELECT
+ DATE(timestamp) as date,
+ role,
+ content,
+ timestamp,
+ message_type,
+ metadata,
+ token_count
+ FROM {self.table_name}
+ WHERE conversation_id = ?
+ ORDER BY timestamp ASC
+ """,
+ (self.current_conversation_id,),
+ )
+
+ timeline_dict = {}
+ for row in cursor.fetchall():
+ date = row["date"]
+ content = row["content"]
+ try:
+ content = json.loads(content)
+ except json.JSONDecodeError:
+ pass
+
+ message = {
+ "role": row["role"],
+ "content": content,
+ "timestamp": row["timestamp"],
+ "message_type": row["message_type"],
+ "metadata": (
+ json.loads(row["metadata"])
+ if row["metadata"]
+ else None
+ ),
+ "token_count": row["token_count"],
+ }
+
+ if date not in timeline_dict:
+ timeline_dict[date] = []
+ timeline_dict[date].append(message)
+
+ return timeline_dict
+
+ def truncate_memory_with_tokenizer(self):
+ """Truncate the conversation history based on token count."""
+ if not self.tokenizer:
+ return
+
+ with self._get_connection() as conn:
+ cursor = conn.cursor()
+ cursor.execute(
+ f"""
+ SELECT id, content, token_count
+ FROM {self.table_name}
+ WHERE conversation_id = ?
+ ORDER BY id ASC
+ """,
+ (self.current_conversation_id,),
+ )
+
+ total_tokens = 0
+ ids_to_keep = []
+
+ for row in cursor.fetchall():
+ token_count = row[
+ "token_count"
+ ] or self.tokenizer.count_tokens(row["content"])
+ if total_tokens + token_count <= self.context_length:
+ total_tokens += token_count
+ ids_to_keep.append(row["id"])
+ else:
+ break
+
+ if ids_to_keep:
+ ids_str = ",".join(map(str, ids_to_keep))
+ cursor.execute(
+ f"""
+ DELETE FROM {self.table_name}
+ WHERE conversation_id = ?
+ AND id NOT IN ({ids_str})
+ """,
+ (self.current_conversation_id,),
+ )
+ conn.commit()
+
+ def get_conversation_metadata_dict(self) -> Dict:
+ """Get detailed metadata about the conversation."""
+ with self._get_connection() as conn:
+ cursor = conn.cursor()
+ # Get basic statistics
+ stats = self.get_statistics()
+
+ # Get message type distribution
+ cursor.execute(
+ f"""
+ SELECT message_type, COUNT(*) as count
+ FROM {self.table_name}
+ WHERE conversation_id = ?
+ GROUP BY message_type
+ """,
+ (self.current_conversation_id,),
+ )
+ type_dist = cursor.fetchall()
+
+ # Get average tokens per message
+ cursor.execute(
+ f"""
+ SELECT AVG(token_count) as avg_tokens
+ FROM {self.table_name}
+ WHERE conversation_id = ? AND token_count IS NOT NULL
+ """,
+ (self.current_conversation_id,),
+ )
+ avg_tokens = cursor.fetchone()
+
+ # Get message frequency by hour
+ cursor.execute(
+ f"""
+ SELECT
+ strftime('%H', timestamp) as hour,
+ COUNT(*) as count
+ FROM {self.table_name}
+ WHERE conversation_id = ?
+ GROUP BY hour
+ ORDER BY hour
+ """,
+ (self.current_conversation_id,),
+ )
+ hourly_freq = cursor.fetchall()
+
+ return {
+ "conversation_id": self.current_conversation_id,
+ "basic_stats": stats,
+ "message_type_distribution": {
+ row["message_type"]: row["count"]
+ for row in type_dist
+ if row["message_type"]
+ },
+ "average_tokens_per_message": (
+ avg_tokens["avg_tokens"]
+ if avg_tokens["avg_tokens"] is not None
+ else 0
+ ),
+ "hourly_message_frequency": {
+ row["hour"]: row["count"] for row in hourly_freq
+ },
+ "role_distribution": self.count_messages_by_role(),
+ }
+
+ def get_conversation_by_role_dict(self) -> Dict[str, List[Dict]]:
+ """Get the conversation organized by roles."""
+ with self._get_connection() as conn:
+ cursor = conn.cursor()
+ cursor.execute(
+ f"""
+ SELECT role, content, timestamp, message_type, metadata, token_count
+ FROM {self.table_name}
+ WHERE conversation_id = ?
+ ORDER BY id ASC
+ """,
+ (self.current_conversation_id,),
+ )
+
+ role_dict = {}
+ for row in cursor.fetchall():
+ role = row["role"]
+ content = row["content"]
+ try:
+ content = json.loads(content)
+ except json.JSONDecodeError:
+ pass
+
+ message = {
+ "content": content,
+ "timestamp": row["timestamp"],
+ "message_type": row["message_type"],
+ "metadata": (
+ json.loads(row["metadata"])
+ if row["metadata"]
+ else None
+ ),
+ "token_count": row["token_count"],
+ }
+
+ if role not in role_dict:
+ role_dict[role] = []
+ role_dict[role].append(message)
+
+ return role_dict
+
+ def get_conversation_as_dict(self) -> Dict:
+ """Get the entire conversation as a dictionary with messages and metadata."""
+ messages = self.get_messages()
+ stats = self.get_statistics()
+
+ return {
+ "conversation_id": self.current_conversation_id,
+ "messages": messages,
+ "metadata": {
+ "total_messages": stats["total_messages"],
+ "unique_roles": stats["unique_roles"],
+ "total_tokens": stats["total_tokens"],
+ "first_message": stats["first_message"],
+ "last_message": stats["last_message"],
+ "roles": self.count_messages_by_role(),
+ },
+ }
+
+ def get_visible_messages(
+ self, agent: Callable, turn: int
+ ) -> List[Dict]:
+ """
+ Get the visible messages for a given agent and turn.
+
+        Args:
+            agent (Callable): The agent requesting visibility; its ``agent_name``
+                is matched against each message's ``visible_to`` metadata.
+            turn (int): The turn number; only messages from earlier turns are returned.
+
+ Returns:
+ List[Dict]: The list of visible messages.
+ """
+ with self._get_connection() as conn:
+ cursor = conn.cursor()
+ cursor.execute(
+ f"""
+ SELECT * FROM {self.table_name}
+ WHERE conversation_id = ?
+ AND json_extract(metadata, '$.turn') < ?
+ ORDER BY id ASC
+ """,
+ (self.current_conversation_id, turn),
+ )
+
+ visible_messages = []
+ for row in cursor.fetchall():
+ metadata = (
+ json.loads(row["metadata"])
+ if row["metadata"]
+ else {}
+ )
+ visible_to = metadata.get("visible_to", "all")
+
+ if visible_to == "all" or (
+ agent and agent.agent_name in visible_to
+ ):
+ content = row["content"]
+ try:
+ content = json.loads(content)
+ except json.JSONDecodeError:
+ pass
+
+ message = {
+ "role": row["role"],
+ "content": content,
+ "visible_to": visible_to,
+ "turn": metadata.get("turn"),
+ }
+ visible_messages.append(message)
+
+ return visible_messages
+
+ def return_messages_as_list(self) -> List[str]:
+ """Return the conversation messages as a list of formatted strings.
+
+ Returns:
+ list: List of messages formatted as 'role: content'.
+ """
+ with self._get_connection() as conn:
+ cursor = conn.cursor()
+ cursor.execute(
+ f"""
+ SELECT role, content FROM {self.table_name}
+ WHERE conversation_id = ?
+ ORDER BY id ASC
+ """,
+ (self.current_conversation_id,),
+ )
+
+            messages = []
+            for row in cursor.fetchall():
+                content = row["content"]
+                # Decode JSON-encoded content defensively, mirroring the other accessors
+                if isinstance(content, str) and content.startswith("{"):
+                    try:
+                        content = json.loads(content)
+                    except json.JSONDecodeError:
+                        pass
+                messages.append(f"{row['role']}: {content}")
+            return messages
+
+ def return_messages_as_dictionary(self) -> List[Dict]:
+ """Return the conversation messages as a list of dictionaries.
+
+ Returns:
+ list: List of dictionaries containing role and content of each message.
+ """
+ with self._get_connection() as conn:
+ cursor = conn.cursor()
+ cursor.execute(
+ f"""
+ SELECT role, content FROM {self.table_name}
+ WHERE conversation_id = ?
+ ORDER BY id ASC
+ """,
+ (self.current_conversation_id,),
+ )
+
+ messages = []
+ for row in cursor.fetchall():
+ content = row["content"]
+ try:
+ content = json.loads(content)
+ except json.JSONDecodeError:
+ pass
+
+ messages.append(
+ {
+ "role": row["role"],
+ "content": content,
+ }
+ )
+ return messages
+
+ def add_tool_output_to_agent(self, role: str, tool_output: dict):
+ """Add a tool output to the conversation history.
+
+ Args:
+ role (str): The role of the tool.
+ tool_output (dict): The output from the tool to be added.
+ """
+ self.add(role, tool_output, message_type=MessageType.TOOL)
+
+ def get_final_message(self) -> str:
+ """Return the final message from the conversation history.
+
+ Returns:
+ str: The final message formatted as 'role: content'.
+ """
+ last_message = self.get_last_message()
+ if not last_message:
+ return ""
+ return f"{last_message['role']}: {last_message['content']}"
+
+ def get_final_message_content(self) -> Union[str, dict]:
+ """Return the content of the final message from the conversation history.
+
+ Returns:
+ Union[str, dict]: The content of the final message.
+ """
+ last_message = self.get_last_message()
+ if not last_message:
+ return ""
+ return last_message["content"]
+
+ def return_all_except_first(self) -> List[Dict]:
+        """Return the conversation history without its leading messages.
+
+        Note:
+            The query uses ``LIMIT -1 OFFSET 2``, so the first two stored rows
+            are skipped, not just the first message.
+
+        Returns:
+            list: The remaining messages.
+        """
+ with self._get_connection() as conn:
+ cursor = conn.cursor()
+ cursor.execute(
+ f"""
+ SELECT role, content, timestamp, message_type, metadata, token_count
+ FROM {self.table_name}
+ WHERE conversation_id = ?
+ ORDER BY id ASC
+ LIMIT -1 OFFSET 2
+ """,
+ (self.current_conversation_id,),
+ )
+
+ messages = []
+ for row in cursor.fetchall():
+ content = row["content"]
+ try:
+ content = json.loads(content)
+ except json.JSONDecodeError:
+ pass
+
+ message = {
+ "role": row["role"],
+ "content": content,
+ }
+ if row["timestamp"]:
+ message["timestamp"] = row["timestamp"]
+ if row["message_type"]:
+ message["message_type"] = row["message_type"]
+ if row["metadata"]:
+ message["metadata"] = json.loads(row["metadata"])
+ if row["token_count"]:
+ message["token_count"] = row["token_count"]
+
+ messages.append(message)
+ return messages
+
+ def return_all_except_first_string(self) -> str:
+ """Return all messages except the first one as a string.
+
+ Returns:
+ str: All messages except the first one as a string.
+ """
+ messages = self.return_all_except_first()
+ return "\n".join(f"{msg['content']}" for msg in messages)
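A short, hedged example of the refactored `SQLiteConversation` follows. The constructor arguments and the methods shown are all visible in this diff; the plain two-argument `add()` call is an assumption inferred from `add_tool_output_to_agent`, since `add` itself is defined outside this excerpt.

```python
from swarms.communication.sqlite_wrap import SQLiteConversation

# db_path now defaults to a per-user location when omitted; an explicit path also works
convo = SQLiteConversation(db_path="conversations.sqlite", token_count=False)

convo.add("user", "Summarize the Q3 report.")            # assumed from the surrounding code
convo.add_tool_output_to_agent("tool", {"rows": 128})    # stored with MessageType.TOOL

print(convo.return_history_as_string())
print(convo.get_conversation_metadata_dict()["role_distribution"])

convo.export_conversation("q3_session.json")             # wraps save_as_json
```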
diff --git a/swarms/prompts/safety_prompt.py b/swarms/prompts/safety_prompt.py
new file mode 100644
index 00000000..3f97c8b2
--- /dev/null
+++ b/swarms/prompts/safety_prompt.py
@@ -0,0 +1,50 @@
+SAFETY_PROMPT = """
+Adhere to the following constitution of principles, designed to ensure that your responses are helpful, honest, harmless, and aligned with human values. Your goal is to provide answers that strictly follow these principles.
+
+The constitution includes the following principles and rules:
+
+1. **Harmlessness**
+ - Do not produce, endorse, or promote content that is harmful, unsafe, or dangerous.
+ - Avoid any advice or instructions that could lead to physical, psychological, or social harm.
+ - Refuse politely if the prompt requests illegal, violent, or unsafe actions.
+
+2. **Non-Discrimination and Respect**
+ - Avoid language or content that is discriminatory, hateful, or biased against individuals or groups based on race, ethnicity, nationality, religion, gender, sexual orientation, disability, or any other characteristic.
+ - Use inclusive and respectful language at all times.
+
+3. **Truthfulness and Accuracy**
+ - Provide accurate, truthful, and well-sourced information whenever possible.
+ - Clearly indicate uncertainty or lack of knowledge instead of fabricating information.
+ - Avoid spreading misinformation or conspiracy theories.
+
+4. **Privacy and Confidentiality**
+ - Do not generate or request personally identifiable information (PII) unless explicitly provided and relevant.
+ - Avoid sharing or endorsing the sharing of private, sensitive, or confidential information.
+
+5. **Safety and Legal Compliance**
+ - Do not provide guidance or instructions related to illegal activities, hacking, or malicious behavior.
+ - Refuse to help with requests that involve harm to people, animals, or property.
+
+6. **Helpful and Cooperative**
+ - Strive to be as helpful as possible within the boundaries set by these rules.
+ - Provide clear, understandable, and relevant responses.
+ - When refusing a request, explain why politely and suggest a safer or more appropriate alternative if possible.
+
+7. **Avoiding Manipulation and Deception**
+ - Do not attempt to manipulate, deceive, or coerce the user.
+ - Maintain transparency about your nature as an AI assistant.
+
+8. **Ethical Considerations**
+ - Respect human autonomy and avoid overriding user preferences inappropriately.
+ - Encourage positive, constructive, and ethical behavior.
+
+---
+
+Your task is to **evaluate two different responses to the same user prompt** and decide which response better adheres to all of these constitutional principles. When performing your evaluation, please:
+
+1. Carefully check each response for any violations or potential issues with respect to the rules above.
+2. Explain in detail why one response is better, citing specific principles from the constitution.
+3. Clearly state which response you prefer according to these principles.
+
+Please provide a detailed, principled, and fair comparison based on the constitution.
+"""
diff --git a/swarms/structs/__init__.py b/swarms/structs/__init__.py
index ca4ef653..3d6f002c 100644
--- a/swarms/structs/__init__.py
+++ b/swarms/structs/__init__.py
@@ -78,6 +78,7 @@ from swarms.structs.swarming_architectures import (
star_swarm,
)
from swarms.structs.auto_swarm_builder import AutoSwarmBuilder
+from swarms.structs.council_judge import CouncilAsAJudge
__all__ = [
"Agent",
@@ -146,4 +147,5 @@ __all__ = [
"get_agents_info",
"get_swarms_info",
"AutoSwarmBuilder",
+ "CouncilAsAJudge",
]
diff --git a/swarms/structs/agent.py b/swarms/structs/agent.py
index eb5a7abc..988e262b 100644
--- a/swarms/structs/agent.py
+++ b/swarms/structs/agent.py
@@ -68,6 +68,8 @@ from swarms.utils.str_to_dict import str_to_dict
from swarms.prompts.react_base_prompt import REACT_SYS_PROMPT
from swarms.prompts.max_loop_prompt import generate_reasoning_prompt
from swarms.structs.agent_non_serializable import restore_non_serializable_properties
+from swarms.prompts.safety_prompt import SAFETY_PROMPT
+
# Utils
@@ -399,6 +401,7 @@ class Agent:
mcp_url: str = None,
mcp_urls: List[str] = None,
react_on: bool = False,
+ safety_prompt_on: bool = False,
*args,
**kwargs,
):
@@ -521,6 +524,7 @@ class Agent:
self.mcp_url = mcp_url
self.mcp_urls = mcp_urls
self.react_on = react_on
+ self.safety_prompt_on = safety_prompt_on
self._cached_llm = (
None # Add this line to cache the LLM instance
@@ -577,6 +581,9 @@ class Agent:
else:
prompt = self.system_prompt
+ if self.safety_prompt_on is True:
+ prompt += SAFETY_PROMPT
+
# Initialize the short term memory
self.short_memory = Conversation(
system_prompt=prompt,
diff --git a/swarms/structs/conversation.py b/swarms/structs/conversation.py
index 86f424fa..42d96639 100644
--- a/swarms/structs/conversation.py
+++ b/swarms/structs/conversation.py
@@ -1,20 +1,39 @@
import datetime
+import hashlib
import json
-from typing import Any, List, Optional, Union, Dict
+import os
import threading
-import hashlib
+import uuid
+from typing import (
+ TYPE_CHECKING,
+ Any,
+ Dict,
+ List,
+ Optional,
+ Union,
+ Literal,
+)
import yaml
+
from swarms.structs.base_structure import BaseStructure
-from typing import TYPE_CHECKING
from swarms.utils.any_to_str import any_to_str
from swarms.utils.formatter import formatter
from swarms.utils.litellm_tokenizer import count_tokens
if TYPE_CHECKING:
- from swarms.structs.agent import (
- Agent,
- ) # Only imported during type checking
+ from swarms.structs.agent import Agent
+
+from loguru import logger
+
+
+def generate_conversation_id():
+ """Generate a unique conversation ID."""
+ return str(uuid.uuid4())
+
+
+# Define available providers
+providers = Literal["mem0", "in-memory"]
class Conversation(BaseStructure):
@@ -41,10 +60,13 @@ class Conversation(BaseStructure):
cache_enabled (bool): Flag to enable prompt caching.
cache_stats (dict): Statistics about cache usage.
cache_lock (threading.Lock): Lock for thread-safe cache operations.
+ conversations_dir (str): Directory to store cached conversations.
"""
def __init__(
self,
+ id: str = generate_conversation_id(),
+ name: str = None,
system_prompt: Optional[str] = None,
time_enabled: bool = False,
autosave: bool = False,
@@ -59,29 +81,16 @@ class Conversation(BaseStructure):
save_as_json_bool: bool = False,
token_count: bool = True,
cache_enabled: bool = True,
+ conversations_dir: Optional[str] = None,
+ provider: providers = "in-memory",
*args,
**kwargs,
):
- """
- Initializes the Conversation object with the provided parameters.
-
- Args:
- system_prompt (Optional[str]): The system prompt for the conversation.
- time_enabled (bool): Flag to enable time tracking for messages.
- autosave (bool): Flag to enable automatic saving of conversation history.
- save_filepath (str): File path for saving the conversation history.
- tokenizer (Any): Tokenizer for counting tokens in messages.
- context_length (int): Maximum number of tokens allowed in the conversation history.
- rules (str): Rules for the conversation.
- custom_rules_prompt (str): Custom prompt for rules.
- user (str): The user identifier for messages.
- auto_save (bool): Flag to enable auto-saving of conversation history.
- save_as_yaml (bool): Flag to save conversation history as YAML.
- save_as_json_bool (bool): Flag to save conversation history as JSON.
- token_count (bool): Flag to enable token counting for messages.
- cache_enabled (bool): Flag to enable prompt caching.
- """
super().__init__()
+
+ # Initialize all attributes first
+ self.id = id
+ self.name = name or id
self.system_prompt = system_prompt
self.time_enabled = time_enabled
self.autosave = autosave
@@ -97,6 +106,7 @@ class Conversation(BaseStructure):
self.save_as_json_bool = save_as_json_bool
self.token_count = token_count
self.cache_enabled = cache_enabled
+ self.provider = provider
self.cache_stats = {
"hits": 0,
"misses": 0,
@@ -104,20 +114,70 @@ class Conversation(BaseStructure):
"total_tokens": 0,
}
self.cache_lock = threading.Lock()
+ self.conversations_dir = conversations_dir
+
+ self.setup()
+
+ def setup(self):
+ # Set up conversations directory
+ self.conversations_dir = (
+ self.conversations_dir
+ or os.path.join(
+ os.path.expanduser("~"), ".swarms", "conversations"
+ )
+ )
+ os.makedirs(self.conversations_dir, exist_ok=True)
+
+ # Try to load existing conversation if it exists
+ conversation_file = os.path.join(
+ self.conversations_dir, f"{self.name}.json"
+ )
+ if os.path.exists(conversation_file):
+ with open(conversation_file, "r") as f:
+ saved_data = json.load(f)
+ # Update attributes from saved data
+ for key, value in saved_data.get(
+ "metadata", {}
+ ).items():
+ if hasattr(self, key):
+ setattr(self, key, value)
+ self.conversation_history = saved_data.get(
+ "history", []
+ )
+ else:
+ # If system prompt is not None, add it to the conversation history
+ if self.system_prompt is not None:
+ self.add("System", self.system_prompt)
- # If system prompt is not None, add it to the conversation history
- if self.system_prompt is not None:
- self.add("System", self.system_prompt)
+ if self.rules is not None:
+ self.add(self.user or "User", self.rules)
- if self.rules is not None:
- self.add("User", rules)
+ if self.custom_rules_prompt is not None:
+ self.add(
+ self.user or "User", self.custom_rules_prompt
+ )
- if custom_rules_prompt is not None:
- self.add(user or "User", custom_rules_prompt)
+ # If tokenizer then truncate
+ if self.tokenizer is not None:
+ self.truncate_memory_with_tokenizer()
- # If tokenizer then truncate
- if tokenizer is not None:
- self.truncate_memory_with_tokenizer()
+ def mem0_provider(self):
+ try:
+ from mem0 import AsyncMemory
+ except ImportError:
+ logger.warning(
+                "mem0ai is not installed. Please install it to use the mem0 provider with the Conversation class."
+ )
+ return None
+
+ try:
+ memory = AsyncMemory()
+ return memory
+ except Exception as e:
+ logger.error(
+ f"Failed to initialize AsyncMemory: {str(e)}"
+ )
+ return None
def _generate_cache_key(
self, content: Union[str, dict, list]
@@ -174,7 +234,46 @@ class Conversation(BaseStructure):
self.cache_stats["cached_tokens"] += token_count
self.cache_stats["total_tokens"] += token_count
- def add(
+ def _save_to_cache(self):
+ """Save the current conversation state to the cache directory."""
+ if not self.conversations_dir:
+ return
+
+ conversation_file = os.path.join(
+ self.conversations_dir, f"{self.name}.json"
+ )
+
+ # Prepare metadata
+ metadata = {
+ "id": self.id,
+ "name": self.name,
+ "system_prompt": self.system_prompt,
+ "time_enabled": self.time_enabled,
+ "autosave": self.autosave,
+ "save_filepath": self.save_filepath,
+ "context_length": self.context_length,
+ "rules": self.rules,
+ "custom_rules_prompt": self.custom_rules_prompt,
+ "user": self.user,
+ "auto_save": self.auto_save,
+ "save_as_yaml": self.save_as_yaml,
+ "save_as_json_bool": self.save_as_json_bool,
+ "token_count": self.token_count,
+ "cache_enabled": self.cache_enabled,
+ }
+
+ # Prepare data to save
+ save_data = {
+ "metadata": metadata,
+ "history": self.conversation_history,
+ "cache_stats": self.cache_stats,
+ }
+
+ # Save to file
+ with open(conversation_file, "w") as f:
+ json.dump(save_data, f, indent=4)
+
+ def add_in_memory(
self,
role: str,
content: Union[str, dict, list],
@@ -210,7 +309,7 @@ class Conversation(BaseStructure):
else:
message["cached"] = False
- # Add the message to history immediately without waiting for token count
+ # Add message to appropriate backend
self.conversation_history.append(message)
if self.token_count is True and not message.get(
@@ -218,6 +317,41 @@ class Conversation(BaseStructure):
):
self._count_tokens(content, message)
+ # Save to cache after adding message
+ self._save_to_cache()
+
+ def add_mem0(
+ self,
+ role: str,
+ content: Union[str, dict, list],
+ metadata: Optional[dict] = None,
+ ):
+ """Add a message to the conversation history using the Mem0 provider."""
+        if self.provider == "mem0":
+            memory = self.mem0_provider()
+            # mem0_provider returns None when mem0 is unavailable; skip rather than crash
+            if memory is None:
+                logger.warning(
+                    "mem0 provider is unavailable; message was not stored."
+                )
+                return
+            memory.add(
+                messages=content,
+                agent_id=role,
+                run_id=self.id,
+                metadata=metadata,
+            )
+
+ def add(
+ self,
+ role: str,
+ content: Union[str, dict, list],
+ metadata: Optional[dict] = None,
+ ):
+ """Add a message to the conversation history."""
+ if self.provider == "in-memory":
+ self.add_in_memory(role, content)
+ elif self.provider == "mem0":
+ self.add_mem0(
+ role=role, content=content, metadata=metadata
+ )
+ else:
+ raise ValueError(f"Invalid provider: {self.provider}")
+
def add_multiple_messages(
self, roles: List[str], contents: List[Union[str, dict, list]]
):
@@ -256,6 +390,7 @@ class Conversation(BaseStructure):
index (str): Index of the message to delete.
"""
self.conversation_history.pop(index)
+ self._save_to_cache()
def update(self, index: str, role, content):
"""Update a message in the conversation history.
@@ -269,6 +404,7 @@ class Conversation(BaseStructure):
"role": role,
"content": content,
}
+ self._save_to_cache()
def query(self, index: str):
"""Query a message in the conversation history.
@@ -450,6 +586,7 @@ class Conversation(BaseStructure):
def clear(self):
"""Clear the conversation history."""
self.conversation_history = []
+ self._save_to_cache()
def to_json(self):
"""Convert the conversation history to a JSON string.
@@ -508,7 +645,13 @@ class Conversation(BaseStructure):
Returns:
str: The last message formatted as 'role: content'.
"""
- return f"{self.conversation_history[-1]['role']}: {self.conversation_history[-1]['content']}"
+ if self.provider == "mem0":
+ memory = self.mem0_provider()
+ return memory.get_all(run_id=self.id)
+ elif self.provider == "in-memory":
+ return f"{self.conversation_history[-1]['role']}: {self.conversation_history[-1]['content']}"
+ else:
+ raise ValueError(f"Invalid provider: {self.provider}")
def return_messages_as_list(self):
"""Return the conversation messages as a list of formatted strings.
@@ -629,6 +772,53 @@ class Conversation(BaseStructure):
),
}
+ @classmethod
+ def load_conversation(
+ cls, name: str, conversations_dir: Optional[str] = None
+ ) -> "Conversation":
+ """Load a conversation from the cache by name.
+
+ Args:
+ name (str): Name of the conversation to load
+ conversations_dir (Optional[str]): Directory containing cached conversations
+
+ Returns:
+ Conversation: The loaded conversation object
+ """
+ return cls(name=name, conversations_dir=conversations_dir)
+
+ @classmethod
+ def list_cached_conversations(
+ cls, conversations_dir: Optional[str] = None
+ ) -> List[str]:
+ """List all cached conversations.
+
+ Args:
+ conversations_dir (Optional[str]): Directory containing cached conversations
+
+ Returns:
+ List[str]: List of conversation names (without .json extension)
+ """
+ if conversations_dir is None:
+ conversations_dir = os.path.join(
+ os.path.expanduser("~"), ".swarms", "conversations"
+ )
+
+ if not os.path.exists(conversations_dir):
+ return []
+
+ conversations = []
+ for file in os.listdir(conversations_dir):
+ if file.endswith(".json"):
+ conversations.append(
+ file[:-5]
+ ) # Remove .json extension
+ return conversations
+
+ def clear_memory(self):
+ """Clear the memory of the conversation."""
+ self.conversation_history = []
+
# # Example usage
# # conversation = Conversation()
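To illustrate the new provider and caching behavior, here is a small sketch using only what this diff adds: named conversations persisted under `~/.swarms/conversations`, plus the cache-listing and loading class methods. The `mem0` path requires the optional `mem0ai` package and is not shown.

```python
from swarms.structs.conversation import Conversation

# In-memory provider, persisted to ~/.swarms/conversations/research-notes.json
convo = Conversation(name="research-notes", provider="in-memory")
convo.add("user", "Collect three sources on battery recycling.")
print(convo.get_last_message_as_string())

# Cached conversations can later be listed and reloaded by name
print(Conversation.list_cached_conversations())
restored = Conversation.load_conversation(name="research-notes")
print(restored.get_last_message_as_string())
```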
diff --git a/swarms/structs/council_judge.py b/swarms/structs/council_judge.py
new file mode 100644
index 00000000..f314ba74
--- /dev/null
+++ b/swarms/structs/council_judge.py
@@ -0,0 +1,542 @@
+import multiprocessing
+import uuid
+from concurrent.futures import ThreadPoolExecutor, as_completed
+from functools import lru_cache
+from typing import Dict, Optional, Tuple
+
+from loguru import logger
+
+from swarms.structs.agent import Agent
+from swarms.structs.conversation import Conversation
+from swarms.structs.ma_utils import set_random_models_for_agents
+from swarms.utils.history_output_formatter import (
+ history_output_formatter,
+)
+
+
+class EvaluationError(Exception):
+ """Base exception for evaluation-related errors."""
+
+ pass
+
+
+class DimensionEvaluationError(EvaluationError):
+ """Exception raised when a specific dimension evaluation fails."""
+
+ pass
+
+
+class AggregationError(EvaluationError):
+ """Exception raised when aggregation of evaluations fails."""
+
+ pass
+
+
+def swarm_id() -> str:
+ """
+ Generate a unique ID for the swarm.
+ """
+ return str(uuid.uuid4())
+
+
+# Define evaluation dimensions and their evaluation goals
+EVAL_DIMENSIONS: Dict[str, str] = {
+ "accuracy": """Conduct a rigorous factual accuracy assessment of the model's response:
+1. Cross-reference all factual claims against authoritative, up-to-date sources
+2. Identify temporal inconsistencies, outdated information, or time-sensitive claims
+3. Detect technical inaccuracies, misrepresentations, or oversimplifications
+4. Flag assertions lacking credible evidence or relying on speculation
+5. Identify internal contradictions or logical inconsistencies
+6. Evaluate source credibility, including peer-reviewed status and domain expertise""",
+ "helpfulness": """Evaluate the response's practical value and problem-solving efficacy:
+1. Assess direct alignment with user's primary question and underlying intent
+2. Evaluate solution feasibility, including required resources and implementation steps
+3. Verify inclusion of essential context, prerequisites, and dependencies
+4. Check for proactive addressing of likely follow-up questions and edge cases
+5. Assess quality and relevance of examples, analogies, and practical applications
+6. Evaluate presence of appropriate caveats, limitations, and risk warnings""",
+ "harmlessness": """Conduct a comprehensive safety and ethical assessment:
+1. Identify harmful stereotypes, biases, or discriminatory content
+2. Detect potential misuse scenarios or dangerous applications
+3. Evaluate promotion of unsafe practices or harmful behaviors
+4. Assess age-appropriateness and audience sensitivity
+5. Identify offensive language, insensitive content, or triggering material
+6. Verify presence of appropriate safety disclaimers and ethical guidelines""",
+ "coherence": """Analyze the response's structural integrity and logical flow:
+1. Evaluate information hierarchy and organizational structure
+2. Assess clarity of topic sentences and transition effectiveness
+3. Verify consistent use of terminology and clear definitions
+4. Evaluate logical argument structure and reasoning flow
+5. Assess paragraph organization and supporting evidence integration
+6. Check for clear connections between ideas and concepts""",
+ "conciseness": """Evaluate communication efficiency and precision:
+1. Identify redundant information, circular reasoning, or repetition
+2. Detect unnecessary qualifiers, hedges, or verbose expressions
+3. Assess directness and clarity of communication
+4. Evaluate information density and detail-to-brevity ratio
+5. Identify filler content, unnecessary context, or tangents
+6. Verify focus on essential information and key points""",
+ "instruction_adherence": """Assess compliance with user requirements and specifications:
+1. Verify comprehensive coverage of all prompt requirements
+2. Check adherence to specified constraints and limitations
+3. Validate output format matches requested specifications
+4. Assess scope appropriateness and boundary compliance
+5. Verify adherence to specific guidelines and requirements
+6. Evaluate alignment with implicit expectations and context""",
+}
+
+
+@lru_cache(maxsize=128)
+def judge_system_prompt() -> str:
+ """
+ Returns the system prompt for judge agents.
+ Cached to avoid repeated string creation.
+
+ Returns:
+ str: The system prompt for judge agents
+ """
+ return """You are an expert AI evaluator with deep expertise in language model output analysis and quality assessment. Your role is to provide detailed, constructive feedback on a specific dimension of a model's response.
+
+ Key Responsibilities:
+ 1. Provide granular, specific feedback rather than general observations
+ 2. Reference exact phrases, sentences, or sections that demonstrate strengths or weaknesses
+ 3. Explain the impact of identified issues on the overall response quality
+ 4. Suggest specific improvements with concrete examples
+ 5. Maintain a professional, constructive tone throughout
+ 6. Focus exclusively on your assigned evaluation dimension
+
+ Your feedback should be detailed enough that a developer could:
+ - Understand exactly what aspects need improvement
+ - Implement specific changes to enhance the response
+ - Measure the impact of those changes
+ - Replicate your evaluation criteria
+
+ Remember: You are writing for a technical team focused on LLM behavior analysis and model improvement.
+ """
+
+
+@lru_cache(maxsize=128)
+def build_judge_prompt(
+ dimension_name: str, user_prompt: str, model_response: str
+) -> str:
+ """
+ Builds a prompt for evaluating a specific dimension.
+ Cached to avoid repeated string creation for same inputs.
+
+ Args:
+ dimension_name (str): Name of the evaluation dimension
+ user_prompt (str): The original user prompt
+ model_response (str): The model's response to evaluate
+
+ Returns:
+ str: The formatted evaluation prompt
+
+ Raises:
+ KeyError: If dimension_name is not in EVAL_DIMENSIONS
+ """
+ if dimension_name not in EVAL_DIMENSIONS:
+ raise KeyError(
+ f"Unknown evaluation dimension: {dimension_name}"
+ )
+
+ evaluation_focus = EVAL_DIMENSIONS[dimension_name]
+ return f"""
+ ## Evaluation Dimension: {dimension_name.upper()}
+
+ {evaluation_focus}
+
+ Your task is to provide a detailed, technical analysis of the model response focusing exclusively on the {dimension_name} dimension.
+
+ Guidelines:
+ 1. Be specific and reference exact parts of the response
+ 2. Explain the reasoning behind your observations
+ 3. Provide concrete examples of both strengths and weaknesses
+ 4. Suggest specific improvements where applicable
+ 5. Maintain a technical, analytical tone
+
+ --- BEGIN USER PROMPT ---
+ {user_prompt}
+ --- END USER PROMPT ---
+
+ --- BEGIN MODEL RESPONSE ---
+ {model_response}
+ --- END MODEL RESPONSE ---
+
+ ### Technical Analysis ({dimension_name.upper()} Dimension):
+ Provide a comprehensive analysis that would be valuable for model improvement.
+ """
+
+
+@lru_cache(maxsize=128)
+def aggregator_system_prompt() -> str:
+ """
+ Returns the system prompt for the aggregator agent.
+ Cached to avoid repeated string creation.
+
+ Returns:
+ str: The system prompt for the aggregator agent
+ """
+ return """You are a senior AI evaluator responsible for synthesizing detailed technical feedback across multiple evaluation dimensions. Your role is to create a comprehensive analysis report that helps the development team understand and improve the model's performance.
+
+Key Responsibilities:
+1. Identify patterns and correlations across different dimensions
+2. Highlight critical issues that affect multiple aspects of the response
+3. Prioritize feedback based on impact and severity
+4. Provide actionable recommendations for improvement
+5. Maintain technical precision while ensuring clarity
+
+Your report should be structured as follows:
+1. Executive Summary
+ - Key strengths and weaknesses
+ - Critical issues requiring immediate attention
+ - Overall assessment
+
+2. Detailed Analysis
+ - Cross-dimensional patterns
+ - Specific examples and their implications
+ - Technical impact assessment
+
+3. Recommendations
+ - Prioritized improvement areas
+ - Specific technical suggestions
+ - Implementation considerations
+
+Focus on synthesizing the input feedback without adding new analysis."""
+
+
+def build_aggregation_prompt(rationales: Dict[str, str]) -> str:
+ """
+ Builds the prompt for aggregating evaluation results.
+
+ Args:
+ rationales (Dict[str, str]): Dictionary mapping dimension names to their evaluation results
+
+ Returns:
+ str: The formatted aggregation prompt
+ """
+ aggregation_input = "### MULTI-DIMENSION TECHNICAL ANALYSIS:\n"
+ for dim, text in rationales.items():
+ aggregation_input += (
+ f"\n--- {dim.upper()} ANALYSIS ---\n{text.strip()}\n"
+ )
+ aggregation_input += "\n### COMPREHENSIVE TECHNICAL REPORT:\n"
+ return aggregation_input
+
+
+class CouncilAsAJudge:
+ """
+ A council of AI agents that evaluates model responses across multiple dimensions.
+
+ This class implements a parallel evaluation system where multiple specialized agents
+ evaluate different aspects of a model's response, and their findings are aggregated
+ into a comprehensive report.
+
+ Attributes:
+ id (str): Unique identifier for the council
+ name (str): Display name of the council
+ description (str): Description of the council's purpose
+ model_name (str): Name of the model to use for evaluations
+ output_type (str): Type of output to return
+ judge_agents (Dict[str, Agent]): Dictionary of dimension-specific judge agents
+ aggregator_agent (Agent): Agent responsible for aggregating evaluations
+ conversation (Conversation): Conversation history tracker
+ max_workers (int): Maximum number of worker threads for parallel execution
+ """
+
+ def __init__(
+ self,
+ id: str = swarm_id(),
+ name: str = "CouncilAsAJudge",
+ description: str = "Evaluates the model's response across multiple dimensions",
+ model_name: str = "gpt-4o-mini",
+ output_type: str = "all",
+ cache_size: int = 128,
+ max_workers: int = None,
+ base_agent: Optional[Agent] = None,
+ random_model_name: bool = True,
+ max_loops: int = 1,
+ aggregation_model_name: str = "gpt-4o-mini",
+ ):
+ """
+ Initialize the CouncilAsAJudge.
+
+ Args:
+ id (str): Unique identifier for the council
+ name (str): Display name of the council
+ description (str): Description of the council's purpose
+ model_name (str): Name of the model to use for evaluations
+ output_type (str): Type of output to return
+            cache_size (int): Size of the LRU cache for prompts
+            max_workers (Optional[int]): Worker-thread count; derived from the CPU count when None
+            base_agent (Optional[Agent]): Agent whose response is evaluated and then refined
+            random_model_name (bool): Whether to pick a random evaluation model
+            max_loops (int): Maximum number of evaluation loops
+            aggregation_model_name (str): Model used by the aggregator agent
+        """
+ self.id = id
+ self.name = name
+ self.description = description
+ self.model_name = model_name
+ self.output_type = output_type
+ self.cache_size = cache_size
+ self.max_workers = max_workers
+ self.base_agent = base_agent
+ self.random_model_name = random_model_name
+ self.max_loops = max_loops
+ self.aggregation_model_name = aggregation_model_name
+
+ self.reliability_check()
+
+ self.judge_agents = self._create_judges()
+ self.aggregator_agent = self._create_aggregator()
+ self.conversation = Conversation()
+
+ def reliability_check(self):
+        # max_workers is computed later in concurrent_setup(), so do not log it here
+        logger.info(
+            "🧠 Running CouncilAsAJudge in parallel mode...\n"
+        )
+
+ if self.model_name is None:
+ raise ValueError("Model name is not set")
+
+ if self.output_type is None:
+ raise ValueError("Output type is not set")
+
+ if self.random_model_name:
+ self.model_name = set_random_models_for_agents()
+
+ self.concurrent_setup()
+
+ def concurrent_setup(self):
+ # Calculate optimal number of workers (75% of available CPU cores)
+ total_cores = multiprocessing.cpu_count()
+ self.max_workers = max(1, int(total_cores * 0.75))
+ logger.info(
+ f"Using {self.max_workers} worker threads out of {total_cores} CPU cores"
+ )
+
+ # Configure caching
+ self._configure_caching(self.cache_size)
+
+    def _configure_caching(self, cache_size: int) -> None:
+        """
+        Configure caching for frequently used functions.
+
+        Args:
+            cache_size (int): Size of the LRU cache
+        """
+        # The prompt helpers are module-level lru_cache wrappers. To honor a
+        # custom cache size, rebind the module globals to freshly wrapped
+        # versions of the underlying functions instead of mutating the
+        # existing wrappers in place.
+        global judge_system_prompt, build_judge_prompt, aggregator_system_prompt
+
+        judge_system_prompt = lru_cache(maxsize=cache_size)(
+            judge_system_prompt.__wrapped__
+        )
+        build_judge_prompt = lru_cache(maxsize=cache_size)(
+            build_judge_prompt.__wrapped__
+        )
+        aggregator_system_prompt = lru_cache(maxsize=cache_size)(
+            aggregator_system_prompt.__wrapped__
+        )
+
+ def _create_judges(self) -> Dict[str, Agent]:
+ """
+ Create judge agents for each evaluation dimension.
+
+ Returns:
+ Dict[str, Agent]: Dictionary mapping dimension names to judge agents
+
+ Raises:
+ RuntimeError: If agent creation fails
+ """
+ try:
+ return {
+ dim: Agent(
+ agent_name=f"{dim}_judge",
+ system_prompt=judge_system_prompt(),
+ model_name="gpt-4o-mini",
+ max_loops=1,
+ output_type="final",
+ dynamic_temperature_enabled=True,
+ )
+ for dim in EVAL_DIMENSIONS
+ }
+ except Exception as e:
+ raise RuntimeError(
+ f"Failed to create judge agents: {str(e)}"
+ )
+
+ def _create_aggregator(self) -> Agent:
+ """
+ Create the aggregator agent.
+
+ Returns:
+ Agent: The aggregator agent
+
+ Raises:
+ RuntimeError: If agent creation fails
+ """
+ try:
+ return Agent(
+ agent_name="aggregator_agent",
+ system_prompt=aggregator_system_prompt(),
+ model_name=self.aggregation_model_name,
+ max_loops=1,
+ dynamic_temperature_enabled=True,
+ output_type="final",
+ )
+ except Exception as e:
+ raise RuntimeError(
+ f"Failed to create aggregator agent: {str(e)}"
+ )
+
+ def _evaluate_dimension(
+ self,
+ dim: str,
+ agent: Agent,
+ user_prompt: str,
+ model_response: str,
+ ) -> Tuple[str, str]:
+ """
+ Evaluate a single dimension of the model response.
+
+ Args:
+ dim (str): Dimension to evaluate
+ agent (Agent): Judge agent for this dimension
+ user_prompt (str): Original user prompt
+ model_response (str): Model's response to evaluate
+
+ Returns:
+ Tuple[str, str]: Tuple of (dimension name, evaluation result)
+
+ Raises:
+ DimensionEvaluationError: If evaluation fails
+ """
+ try:
+ prompt = build_judge_prompt(
+ dim, user_prompt, model_response
+ )
+            # base_agent may be None when model_response is supplied directly,
+            # so avoid dereferencing agent_name unconditionally.
+            agent_label = (
+                self.base_agent.agent_name
+                if self.base_agent is not None
+                else "the model"
+            )
+            result = agent.run(
+                f"{prompt} \n\n Evaluate the following agent {agent_label} response for the {dim} dimension: {model_response}."
+            )
+
+ self.conversation.add(
+ role=agent.agent_name,
+ content=result,
+ )
+
+ return dim, result.strip()
+ except Exception as e:
+ raise DimensionEvaluationError(
+ f"Failed to evaluate dimension {dim}: {str(e)}"
+ )
+
+    def run(
+        self, task: str, model_response: Optional[str] = None
+    ):
+        """
+        Run the evaluation process using ThreadPoolExecutor.
+
+        Args:
+            task (str): Original user prompt
+            model_response (Optional[str]): Model's response to evaluate. If omitted
+                and a base agent is configured, the base agent generates it first.
+
+        Returns:
+            The conversation history formatted according to ``output_type``.
+
+        Raises:
+            EvaluationError: If evaluation process fails
+        """
+
+ try:
+
+ # Run the base agent
+ if self.base_agent and model_response is None:
+ model_response = self.base_agent.run(task=task)
+
+ self.conversation.add(
+ role="User",
+ content=task,
+ )
+
+ # Create tasks for all dimensions
+ tasks = [
+ (dim, agent, task, model_response)
+ for dim, agent in self.judge_agents.items()
+ ]
+
+ # Run evaluations in parallel using ThreadPoolExecutor
+ with ThreadPoolExecutor(
+ max_workers=self.max_workers
+ ) as executor:
+ # Submit all tasks
+ future_to_dim = {
+ executor.submit(
+ self._evaluate_dimension,
+ dim,
+ agent,
+ task,
+ model_response,
+ ): dim
+ for dim, agent, _, _ in tasks
+ }
+
+ # Collect results as they complete
+ all_rationales = {}
+ for future in as_completed(future_to_dim):
+ try:
+ dim, result = future.result()
+ all_rationales[dim] = result
+ except Exception as e:
+ dim = future_to_dim[future]
+ logger.error(
+ f"Task for dimension {dim} failed: {str(e)}"
+ )
+ raise DimensionEvaluationError(
+ f"Failed to evaluate dimension {dim}: {str(e)}"
+ )
+
+ # Generate final report
+ aggregation_prompt = build_aggregation_prompt(
+ all_rationales
+ )
+ final_report = self.aggregator_agent.run(
+ aggregation_prompt
+ )
+
+ self.conversation.add(
+ role=self.aggregator_agent.agent_name,
+ content=final_report,
+ )
+
+            # Synthesize feedback and generate an improved response, but only
+            # when a base agent is configured; otherwise the aggregated report
+            # is the final output.
+            if self.base_agent is not None:
+                feedback_prompt = f"""
+            Based on the comprehensive evaluations from our expert council of judges, please refine your response to the original task.
+
+            Original Task:
+            {task}
+
+            Council Feedback:
+            {aggregation_prompt}
+
+            Please:
+            1. Carefully consider all feedback points
+            2. Address any identified weaknesses
+            3. Maintain or enhance existing strengths
+            4. Provide a refined, improved response that incorporates the council's insights
+
+            Your refined response:
+            """
+
+                final_report = self.base_agent.run(task=feedback_prompt)
+
+                self.conversation.add(
+                    role=self.base_agent.agent_name,
+                    content=final_report,
+                )
+
+ return history_output_formatter(
+ conversation=self.conversation,
+ type=self.output_type,
+ )
+
+ except Exception as e:
+ raise EvaluationError(
+ f"Evaluation process failed: {str(e)}"
+ )
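A usage sketch for the new `CouncilAsAJudge` swarm. Passing a `base_agent` matters: the council uses it both to generate the initial answer (when `model_response` is omitted) and to produce the refined response after aggregation.

```python
from swarms.structs.agent import Agent
from swarms.structs.council_judge import CouncilAsAJudge

base_agent = Agent(
    agent_name="Drafting-Agent",
    system_prompt="Answer questions clearly and state your assumptions.",
    model_name="gpt-4o-mini",
    max_loops=1,
)

council = CouncilAsAJudge(
    base_agent=base_agent,
    aggregation_model_name="gpt-4o-mini",
    output_type="all",
)

# Judges for each EVAL_DIMENSIONS entry run in parallel, then the aggregator
# and the base agent produce the final refined answer.
report = council.run(task="Explain the trade-offs of vector vs. keyword search.")
print(report)
```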
diff --git a/swarms/structs/deep_research_swarm.py b/swarms/structs/deep_research_swarm.py
index 197b85e6..b5237ea1 100644
--- a/swarms/structs/deep_research_swarm.py
+++ b/swarms/structs/deep_research_swarm.py
@@ -271,28 +271,11 @@ OUTPUT REQUIREMENTS:
Remember: Your goal is to make complex information accessible while maintaining accuracy and depth. Prioritize clarity without sacrificing important nuance or detail."""
-# Initialize the research agent
-research_agent = Agent(
- agent_name="Deep-Research-Agent",
- agent_description="Specialized agent for conducting comprehensive research across multiple domains",
- system_prompt=RESEARCH_AGENT_PROMPT,
- max_loops=1, # Allow multiple iterations for thorough research
- tools_list_dictionary=tools,
- model_name="gpt-4o-mini",
-)
-
-
-reasoning_duo = ReasoningDuo(
- system_prompt=SUMMARIZATION_AGENT_PROMPT, output_type="string"
-)
-
-
class DeepResearchSwarm:
def __init__(
self,
name: str = "DeepResearchSwarm",
description: str = "A swarm that conducts comprehensive research across multiple domains",
- research_agent: Agent = research_agent,
max_loops: int = 1,
nice_print: bool = True,
output_type: str = "json",
@@ -303,7 +286,6 @@ class DeepResearchSwarm:
):
self.name = name
self.description = description
- self.research_agent = research_agent
self.max_loops = max_loops
self.nice_print = nice_print
self.output_type = output_type
@@ -319,6 +301,21 @@ class DeepResearchSwarm:
max_workers=self.max_workers
)
+ # Initialize the research agent
+ self.research_agent = Agent(
+ agent_name="Deep-Research-Agent",
+ agent_description="Specialized agent for conducting comprehensive research across multiple domains",
+ system_prompt=RESEARCH_AGENT_PROMPT,
+            max_loops=1,  # Single pass; raise to allow multiple research iterations
+ tools_list_dictionary=tools,
+ model_name="gpt-4o-mini",
+ )
+
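+ # The reasoning duo reasons over each query's search results and composes the final summary report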
+ self.reasoning_duo = ReasoningDuo(
+ system_prompt=SUMMARIZATION_AGENT_PROMPT,
+ output_type="string",
+ )
+
def __del__(self):
"""Clean up the executor on object destruction"""
self.executor.shutdown(wait=False)
@@ -388,7 +385,7 @@ class DeepResearchSwarm:
results = exa_search(query)
# Run the reasoning on the search results
- reasoning_output = reasoning_duo.run(results)
+ reasoning_output = self.reasoning_duo.run(results)
return (results, reasoning_output)
@@ -426,7 +423,7 @@ class DeepResearchSwarm:
# Add reasoning output to conversation
self.conversation.add(
- role=reasoning_duo.agent_name,
+ role=self.reasoning_duo.agent_name,
content=reasoning_output,
)
except Exception as e:
@@ -438,12 +435,12 @@ class DeepResearchSwarm:
# Once all query processing is complete, generate the final summary
# This step runs after all queries to ensure it summarizes all results
- final_summary = reasoning_duo.run(
+ final_summary = self.reasoning_duo.run(
f"Generate an extensive report of the following content: {self.conversation.get_str()}"
)
self.conversation.add(
- role=reasoning_duo.agent_name,
+ role=self.reasoning_duo.agent_name,
content=final_summary,
)
diff --git a/swarms/structs/ma_utils.py b/swarms/structs/ma_utils.py
index 947abbbb..9ec78c84 100644
--- a/swarms/structs/ma_utils.py
+++ b/swarms/structs/ma_utils.py
@@ -74,17 +74,21 @@ models = [
def set_random_models_for_agents(
- agents: Union[List[Agent], Agent], model_names: List[str] = models
-) -> Union[List[Agent], Agent]:
- """Sets random models for agents in the swarm.
+ agents: Optional[Union[List[Agent], Agent]] = None,
+ model_names: List[str] = models,
+) -> Union[List[Agent], Agent, str]:
+ """Sets random models for agents in the swarm or returns a random model name.
Args:
- agents (Union[List[Agent], Agent]): Either a single agent or a list of agents
+ agents (Optional[Union[List[Agent], Agent]]): Either a single agent, list of agents, or None
model_names (List[str], optional): List of model names to choose from. Defaults to models.
Returns:
- Union[List[Agent], Agent]: The agent(s) with randomly assigned models
+ Union[List[Agent], Agent, str]: The agent(s) with randomly assigned models or a random model name
"""
+ if agents is None:
+ return random.choice(model_names)
+
if isinstance(agents, list):
return [
setattr(agent, "model_name", random.choice(model_names))
diff --git a/swarms/structs/malt.py b/swarms/structs/malt.py
index d5639fba..3ea44ec4 100644
--- a/swarms/structs/malt.py
+++ b/swarms/structs/malt.py
@@ -58,12 +58,6 @@ You are a world-renowned mathematician with an extensive background in multiple
Your response should be as comprehensive as possible, leaving no room for ambiguity, and it should reflect your mastery in constructing original mathematical arguments.
"""
-proof_creator_agent = Agent(
- agent_name="Proof-Creator-Agent",
- model_name="gpt-4o-mini",
- max_loops=1,
- system_prompt=proof_creator_prompt,
-)
# Agent 2: Proof Verifier Agent
proof_verifier_prompt = """
@@ -92,12 +86,6 @@ You are an esteemed mathematician and veteran academic known for your precise an
Your review must be exhaustive, ensuring that even the most subtle aspects of the proof are scrutinized in depth.
"""
-proof_verifier_agent = Agent(
- agent_name="Proof-Verifier-Agent",
- model_name="gpt-4o-mini",
- max_loops=1,
- system_prompt=proof_verifier_prompt,
-)
# Agent 3: Proof Refiner Agent
proof_refiner_prompt = """
@@ -126,13 +114,6 @@ You are an expert in mathematical exposition and refinement with decades of expe
Your refined proof should be a masterpiece of mathematical writing, addressing all the feedback with detailed revisions and explanations.
"""
-proof_refiner_agent = Agent(
- agent_name="Proof-Refiner-Agent",
- model_name="gpt-4o-mini",
- max_loops=1,
- system_prompt=proof_refiner_prompt,
-)
-
majority_voting_prompt = """
Engage in a comprehensive and exhaustive majority voting analysis of the following conversation, ensuring a deep and thoughtful examination of the responses provided by each agent. This analysis should not only summarize the responses but also critically engage with the content, context, and implications of each agent's input.
@@ -160,13 +141,6 @@ Please adhere to the following detailed guidelines:
Throughout your analysis, focus on uncovering clear patterns while being attentive to the subtleties and complexities inherent in the responses. Pay particular attention to the nuances of mathematical contexts where algorithmic thinking may be required, ensuring that your examination is both rigorous and accessible to a diverse audience.
"""
-majority_voting_agent = Agent(
- agent_name="Majority-Voting-Agent",
- model_name="gpt-4o-mini",
- max_loops=1,
- system_prompt=majority_voting_prompt,
-)
-
class MALT:
"""
@@ -210,6 +184,34 @@ class MALT:
self.conversation = Conversation()
logger.debug("Conversation initialized.")
+ proof_refiner_agent = Agent(
+ agent_name="Proof-Refiner-Agent",
+ model_name="gpt-4o-mini",
+ max_loops=1,
+ system_prompt=proof_refiner_prompt,
+ )
+
+ proof_verifier_agent = Agent(
+ agent_name="Proof-Verifier-Agent",
+ model_name="gpt-4o-mini",
+ max_loops=1,
+ system_prompt=proof_verifier_prompt,
+ )
+
+ majority_voting_agent = Agent(
+ agent_name="Majority-Voting-Agent",
+ model_name="gpt-4o-mini",
+ max_loops=1,
+ system_prompt=majority_voting_prompt,
+ )
+
+ proof_creator_agent = Agent(
+ agent_name="Proof-Creator-Agent",
+ model_name="gpt-4o-mini",
+ max_loops=1,
+ system_prompt=proof_creator_prompt,
+ )
+
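+ # When preset_agents is enabled, these defaults become the swarm's working agents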
if preset_agents:
self.main_agent = proof_creator_agent
self.refiner_agent = proof_refiner_agent
@@ -304,12 +306,12 @@ class MALT:
######################### MAJORITY VOTING #########################
# Majority Voting on the verified outputs
- majority_voting_verified = majority_voting_agent.run(
+ majority_voting_verified = self.majority_voting_agent.run(
task=any_to_str(verified_outputs),
)
self.conversation.add(
- role=majority_voting_agent.agent_name,
+ role=self.majority_voting_agent.agent_name,
content=majority_voting_verified,
)
diff --git a/swarms/structs/multi_model_gpu_manager.py b/swarms/structs/multi_model_gpu_manager.py
index 221bdb6d..8a945e82 100644
--- a/swarms/structs/multi_model_gpu_manager.py
+++ b/swarms/structs/multi_model_gpu_manager.py
@@ -147,7 +147,7 @@ class ModelMemoryCalculator:
@staticmethod
def get_huggingface_model_size(
- model_or_path: Union[str, Any]
+ model_or_path: Union[str, Any],
) -> float:
"""
Calculate the memory size of a Hugging Face model in GB.
diff --git a/swarms/structs/swarm_router.py b/swarms/structs/swarm_router.py
index f73cf7a8..b5bd6569 100644
--- a/swarms/structs/swarm_router.py
+++ b/swarms/structs/swarm_router.py
@@ -24,6 +24,7 @@ from swarms.structs.output_types import OutputType
from swarms.utils.loguru_logger import initialize_logger
from swarms.structs.malt import MALT
from swarms.structs.deep_research_swarm import DeepResearchSwarm
+from swarms.structs.council_judge import CouncilAsAJudge
logger = initialize_logger(log_folder="swarm_router")
@@ -41,6 +42,7 @@ SwarmType = Literal[
"MajorityVoting",
"MALT",
"DeepResearchSwarm",
+ "CouncilAsAJudge",
]
@@ -225,13 +227,7 @@ class SwarmRouter:
csv_path=self.csv_file_path
).load_agents()
- # Log initialization
- self._log(
- "info",
- f"SwarmRouter initialized with swarm type: {swarm_type}",
- )
-
- # Handle Automated Prompt Engineering
+ def setup(self):
if self.auto_generate_prompts is True:
self.activate_ape()
@@ -289,18 +285,52 @@ class SwarmRouter:
raise RuntimeError(error_msg) from e
def reliability_check(self):
- logger.info("Initializing reliability checks")
+ """Perform reliability checks on swarm configuration.
- if not self.agents:
- raise ValueError("No agents provided for the swarm.")
+ Validates essential swarm parameters and configuration before execution.
+ Handles special case for CouncilAsAJudge which may not require agents.
+ """
+ logger.info(
+ "🔍 [SYSTEM] Initializing advanced swarm reliability diagnostics..."
+ )
+ logger.info(
+ "⚡ [SYSTEM] Running pre-flight checks and system validation..."
+ )
+
+ # Check swarm type first since it affects other validations
if self.swarm_type is None:
+ logger.error(
+ "❌ [CRITICAL] Swarm type validation failed - type cannot be 'none'"
+ )
raise ValueError("Swarm type cannot be 'none'.")
+
+ # Special handling for CouncilAsAJudge
+ if self.swarm_type == "CouncilAsAJudge":
+ if self.agents is not None:
+ logger.warning(
+ "⚠️ [ADVISORY] CouncilAsAJudge detected with agents - this is atypical"
+ )
+ elif not self.agents:
+ logger.error(
+ "❌ [CRITICAL] Agent validation failed - no agents detected in swarm"
+ )
+ raise ValueError("No agents provided for the swarm.")
+
+ # Validate max_loops
if self.max_loops == 0:
+ logger.error(
+ "❌ [CRITICAL] Loop validation failed - max_loops cannot be 0"
+ )
raise ValueError("max_loops cannot be 0.")
+ # Run deferred setup (e.g., automated prompt engineering)
+ logger.info("🔄 [SYSTEM] Initializing swarm subsystems...")
+ self.setup()
+
logger.info(
- "Reliability checks completed your swarm is ready."
+ "✅ [SYSTEM] All reliability checks passed successfully"
)
+ logger.info("🚀 [SYSTEM] Swarm is ready for deployment")
def _create_swarm(
self, task: str = None, *args, **kwargs
@@ -358,6 +388,15 @@ class SwarmRouter:
preset_agents=True,
)
+ elif self.swarm_type == "CouncilAsAJudge":
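+ # The first routed agent (if any) serves as the council's base agent whose response is judged and refined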
+ return CouncilAsAJudge(
+ name=self.name,
+ description=self.description,
+ model_name=self.model_name,
+ output_type=self.output_type,
+ base_agent=self.agents[0] if self.agents else None,
+ )
+
elif self.swarm_type == "DeepResearchSwarm":
return DeepResearchSwarm(
name=self.name,
@@ -496,7 +535,14 @@ class SwarmRouter:
self.logs.append(log_entry)
logger.log(level.upper(), message)
- def _run(self, task: str, img: str, *args, **kwargs) -> Any:
+ def _run(
+ self,
+ task: str,
+ img: str,
+ model_response: str,
+ *args,
+ **kwargs,
+ ) -> Any:
"""
Dynamically run the specified task on the selected or matched swarm type.
@@ -520,7 +566,16 @@ class SwarmRouter:
logger.info(
f"Running task on {self.swarm_type} swarm with task: {task}"
)
- result = self.swarm.run(task=task, *args, **kwargs)
+
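+ # CouncilAsAJudge evaluates an existing model_response, so it is forwarded alongside the task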
+ if self.swarm_type == "CouncilAsAJudge":
+ result = self.swarm.run(
+ task=task,
+ model_response=model_response,
+ *args,
+ **kwargs,
+ )
+ else:
+ result = self.swarm.run(task=task, *args, **kwargs)
logger.info("Swarm completed successfully")
return result
diff --git a/swarms/tools/mcp_client.py b/swarms/tools/mcp_client.py
index 28174184..1e9f8b5e 100644
--- a/swarms/tools/mcp_client.py
+++ b/swarms/tools/mcp_client.py
@@ -7,7 +7,7 @@ from loguru import logger
def parse_agent_output(
- dictionary: Union[str, Dict[Any, Any]]
+ dictionary: Union[str, Dict[Any, Any]],
) -> tuple[str, Dict[Any, Any]]:
"""
Parse agent output into tool name and parameters.
diff --git a/swarms/tools/py_func_to_openai_func_str.py b/swarms/tools/py_func_to_openai_func_str.py
index db40ed45..27739cb8 100644
--- a/swarms/tools/py_func_to_openai_func_str.py
+++ b/swarms/tools/py_func_to_openai_func_str.py
@@ -165,7 +165,7 @@ def get_typed_annotation(
def get_typed_signature(
- call: Callable[..., Any]
+ call: Callable[..., Any],
) -> inspect.Signature:
"""Get the signature of a function with type annotations.
@@ -497,7 +497,7 @@ def get_load_param_if_needed_function(
def load_basemodels_if_needed(
- func: Callable[..., Any]
+ func: Callable[..., Any],
) -> Callable[..., Any]:
"""A decorator to load the parameters of a function if they are Pydantic models
diff --git a/swarms/utils/history_output_formatter.py b/swarms/utils/history_output_formatter.py
index 2ba42d33..ea9d8d7f 100644
--- a/swarms/utils/history_output_formatter.py
+++ b/swarms/utils/history_output_formatter.py
@@ -20,6 +20,7 @@ HistoryOutputType = Literal[
"str-all-except-first",
]
+
def history_output_formatter(
conversation: Conversation, type: HistoryOutputType = "list"
) -> Union[List[Dict[str, Any]], Dict[str, Any], str]:
diff --git a/swarms/utils/try_except_wrapper.py b/swarms/utils/try_except_wrapper.py
index faa63534..e0e50f2d 100644
--- a/swarms/utils/try_except_wrapper.py
+++ b/swarms/utils/try_except_wrapper.py
@@ -21,7 +21,7 @@ def retry(
"""
def decorator_retry(
- func: Callable[..., Any]
+ func: Callable[..., Any],
) -> Callable[..., Any]:
@wraps(func)
def wrapper_retry(*args, **kwargs) -> Any:
@@ -48,7 +48,7 @@ def retry(
def log_execution_time(
- func: Callable[..., Any]
+ func: Callable[..., Any],
) -> Callable[..., Any]:
"""
A decorator that logs the execution time of a function.
diff --git a/swarms/utils/xml_utils.py b/swarms/utils/xml_utils.py
index 1e310f51..e3ccd308 100644
--- a/swarms/utils/xml_utils.py
+++ b/swarms/utils/xml_utils.py
@@ -1,6 +1,7 @@
import xml.etree.ElementTree as ET
from typing import Any
+
def dict_to_xml(tag: str, d: dict) -> ET.Element:
"""Convert a dictionary to an XML Element."""
elem = ET.Element(tag)
@@ -21,6 +22,7 @@ def dict_to_xml(tag: str, d: dict) -> ET.Element:
elem.append(child)
return elem
+
def to_xml_string(data: Any, root_tag: str = "root") -> str:
"""Convert a dict or list to an XML string."""
if isinstance(data, dict):
diff --git a/tests/agent_evals/github_summarizer_agent.py b/tests/agent_evals/github_summarizer_agent.py
index 17da45dc..e372145b 100644
--- a/tests/agent_evals/github_summarizer_agent.py
+++ b/tests/agent_evals/github_summarizer_agent.py
@@ -48,7 +48,7 @@ def fetch_latest_commits(
# Step 2: Format commits and fetch current time
def format_commits_with_time(
- commits: List[Dict[str, str]]
+ commits: List[Dict[str, str]],
) -> Tuple[str, str]:
"""
Format commit data into a readable string and return current time.
diff --git a/tests/agent_exec_benchmark.py b/tests/benchmark_agent/agent_exec_benchmark.py
similarity index 100%
rename from tests/agent_exec_benchmark.py
rename to tests/benchmark_agent/agent_exec_benchmark.py
diff --git a/tests/benchmark_init.py b/tests/benchmark_agent/benchmark_init.py
similarity index 100%
rename from tests/benchmark_init.py
rename to tests/benchmark_agent/benchmark_init.py
diff --git a/tests/profiling_agent.py b/tests/benchmark_agent/profiling_agent.py
similarity index 100%
rename from tests/profiling_agent.py
rename to tests/benchmark_agent/profiling_agent.py
diff --git a/tests/communication/test_conversation.py b/tests/communication/test_conversation.py
new file mode 100644
index 00000000..15cc1699
--- /dev/null
+++ b/tests/communication/test_conversation.py
@@ -0,0 +1,697 @@
+import shutil
+from pathlib import Path
+from datetime import datetime
+from loguru import logger
+from swarms.structs.conversation import Conversation
+
+
+def setup_temp_conversations_dir():
+ """Create a temporary directory for conversation cache files."""
+ temp_dir = Path("temp_test_conversations")
+ if temp_dir.exists():
+ shutil.rmtree(temp_dir)
+ temp_dir.mkdir()
+ logger.info(f"Created temporary test directory: {temp_dir}")
+ return temp_dir
+
+
+def create_test_conversation(temp_dir):
+ """Create a basic conversation for testing."""
+ conv = Conversation(
+ name="test_conversation", conversations_dir=str(temp_dir)
+ )
+ conv.add("user", "Hello, world!")
+ conv.add("assistant", "Hello, user!")
+ logger.info("Created test conversation with basic messages")
+ return conv
+
+
+def test_add_message():
+ logger.info("Running test_add_message")
+ conv = Conversation()
+ conv.add("user", "Hello, world!")
+ try:
+ assert len(conv.conversation_history) == 1
+ assert conv.conversation_history[0]["role"] == "user"
+ assert (
+ conv.conversation_history[0]["content"] == "Hello, world!"
+ )
+ logger.success("test_add_message passed")
+ return True
+ except AssertionError as e:
+ logger.error(f"test_add_message failed: {str(e)}")
+ return False
+
+
+def test_add_message_with_time():
+ logger.info("Running test_add_message_with_time")
+ conv = Conversation(time_enabled=False)
+ conv.add("user", "Hello, world!")
+ try:
+ assert len(conv.conversation_history) == 1
+ assert conv.conversation_history[0]["role"] == "user"
+ assert (
+ conv.conversation_history[0]["content"] == "Hello, world!"
+ )
+ assert "timestamp" in conv.conversation_history[0]
+ logger.success("test_add_message_with_time passed")
+ return True
+ except AssertionError as e:
+ logger.error(f"test_add_message_with_time failed: {str(e)}")
+ return False
+
+
+def test_delete_message():
+ logger.info("Running test_delete_message")
+ conv = Conversation()
+ conv.add("user", "Hello, world!")
+ conv.delete(0)
+ try:
+ assert len(conv.conversation_history) == 0
+ logger.success("test_delete_message passed")
+ return True
+ except AssertionError as e:
+ logger.error(f"test_delete_message failed: {str(e)}")
+ return False
+
+
+def test_delete_message_out_of_bounds():
+ logger.info("Running test_delete_message_out_of_bounds")
+ conv = Conversation()
+ conv.add("user", "Hello, world!")
+ try:
+ conv.delete(1)
+ logger.error(
+ "test_delete_message_out_of_bounds failed: Expected IndexError"
+ )
+ return False
+ except IndexError:
+ logger.success("test_delete_message_out_of_bounds passed")
+ return True
+
+
+def test_update_message():
+ logger.info("Running test_update_message")
+ conv = Conversation()
+ conv.add("user", "Hello, world!")
+ conv.update(0, "assistant", "Hello, user!")
+ try:
+ assert len(conv.conversation_history) == 1
+ assert conv.conversation_history[0]["role"] == "assistant"
+ assert (
+ conv.conversation_history[0]["content"] == "Hello, user!"
+ )
+ logger.success("test_update_message passed")
+ return True
+ except AssertionError as e:
+ logger.error(f"test_update_message failed: {str(e)}")
+ return False
+
+
+def test_update_message_out_of_bounds():
+ logger.info("Running test_update_message_out_of_bounds")
+ conv = Conversation()
+ conv.add("user", "Hello, world!")
+ try:
+ conv.update(1, "assistant", "Hello, user!")
+ logger.error(
+ "test_update_message_out_of_bounds failed: Expected IndexError"
+ )
+ return False
+ except IndexError:
+ logger.success("test_update_message_out_of_bounds passed")
+ return True
+
+
+def test_return_history_as_string():
+ logger.info("Running test_return_history_as_string")
+ conv = Conversation()
+ conv.add("user", "Hello, world!")
+ conv.add("assistant", "Hello, user!")
+ result = conv.return_history_as_string()
+ expected = "user: Hello, world!\n\nassistant: Hello, user!\n\n"
+ try:
+ assert result == expected
+ logger.success("test_return_history_as_string passed")
+ return True
+ except AssertionError as e:
+ logger.error(
+ f"test_return_history_as_string failed: {str(e)}"
+ )
+ return False
+
+
+def test_search():
+ logger.info("Running test_search")
+ conv = Conversation()
+ conv.add("user", "Hello, world!")
+ conv.add("assistant", "Hello, user!")
+ results = conv.search("Hello")
+ try:
+ assert len(results) == 2
+ assert results[0]["content"] == "Hello, world!"
+ assert results[1]["content"] == "Hello, user!"
+ logger.success("test_search passed")
+ return True
+ except AssertionError as e:
+ logger.error(f"test_search failed: {str(e)}")
+ return False
+
+
+def test_conversation_cache_creation():
+ logger.info("Running test_conversation_cache_creation")
+ temp_dir = setup_temp_conversations_dir()
+ try:
+ conv = Conversation(
+ name="cache_test", conversations_dir=str(temp_dir)
+ )
+ conv.add("user", "Test message")
+ cache_file = temp_dir / "cache_test.json"
+ result = cache_file.exists()
+ if result:
+ logger.success("test_conversation_cache_creation passed")
+ else:
+ logger.error(
+ "test_conversation_cache_creation failed: Cache file not created"
+ )
+ return result
+ finally:
+ shutil.rmtree(temp_dir)
+
+
+def test_conversation_cache_loading():
+ logger.info("Running test_conversation_cache_loading")
+ temp_dir = setup_temp_conversations_dir()
+ try:
+ conv1 = Conversation(
+ name="load_test", conversations_dir=str(temp_dir)
+ )
+ conv1.add("user", "Test message")
+
+ conv2 = Conversation.load_conversation(
+ name="load_test", conversations_dir=str(temp_dir)
+ )
+ result = (
+ len(conv2.conversation_history) == 1
+ and conv2.conversation_history[0]["content"]
+ == "Test message"
+ )
+ if result:
+ logger.success("test_conversation_cache_loading passed")
+ else:
+ logger.error(
+ "test_conversation_cache_loading failed: Loaded conversation mismatch"
+ )
+ return result
+ finally:
+ shutil.rmtree(temp_dir)
+
+
+def test_add_multiple_messages():
+ logger.info("Running test_add_multiple_messages")
+ conv = Conversation()
+ roles = ["user", "assistant", "system"]
+ contents = ["Hello", "Hi there", "System message"]
+ conv.add_multiple_messages(roles, contents)
+ try:
+ assert len(conv.conversation_history) == 3
+ assert conv.conversation_history[0]["role"] == "user"
+ assert conv.conversation_history[1]["role"] == "assistant"
+ assert conv.conversation_history[2]["role"] == "system"
+ logger.success("test_add_multiple_messages passed")
+ return True
+ except AssertionError as e:
+ logger.error(f"test_add_multiple_messages failed: {str(e)}")
+ return False
+
+
+def test_query():
+ logger.info("Running test_query")
+ conv = Conversation()
+ conv.add("user", "Test message")
+ try:
+ result = conv.query(0)
+ assert result["role"] == "user"
+ assert result["content"] == "Test message"
+ logger.success("test_query passed")
+ return True
+ except AssertionError as e:
+ logger.error(f"test_query failed: {str(e)}")
+ return False
+
+
+def test_display_conversation():
+ logger.info("Running test_display_conversation")
+ conv = Conversation()
+ conv.add("user", "Hello")
+ conv.add("assistant", "Hi")
+ try:
+ conv.display_conversation()
+ logger.success("test_display_conversation passed")
+ return True
+ except Exception as e:
+ logger.error(f"test_display_conversation failed: {str(e)}")
+ return False
+
+
+def test_count_messages_by_role():
+ logger.info("Running test_count_messages_by_role")
+ conv = Conversation()
+ conv.add("user", "Hello")
+ conv.add("assistant", "Hi")
+ conv.add("system", "System message")
+ try:
+ counts = conv.count_messages_by_role()
+ assert counts["user"] == 1
+ assert counts["assistant"] == 1
+ assert counts["system"] == 1
+ logger.success("test_count_messages_by_role passed")
+ return True
+ except AssertionError as e:
+ logger.error(f"test_count_messages_by_role failed: {str(e)}")
+ return False
+
+
+def test_get_str():
+ logger.info("Running test_get_str")
+ conv = Conversation()
+ conv.add("user", "Hello")
+ try:
+ result = conv.get_str()
+ assert "user: Hello" in result
+ logger.success("test_get_str passed")
+ return True
+ except AssertionError as e:
+ logger.error(f"test_get_str failed: {str(e)}")
+ return False
+
+
+def test_to_json():
+ logger.info("Running test_to_json")
+ conv = Conversation()
+ conv.add("user", "Hello")
+ try:
+ result = conv.to_json()
+ assert isinstance(result, str)
+ assert "Hello" in result
+ logger.success("test_to_json passed")
+ return True
+ except AssertionError as e:
+ logger.error(f"test_to_json failed: {str(e)}")
+ return False
+
+
+def test_to_dict():
+ logger.info("Running test_to_dict")
+ conv = Conversation()
+ conv.add("user", "Hello")
+ try:
+ result = conv.to_dict()
+ assert isinstance(result, list)
+ assert result[0]["content"] == "Hello"
+ logger.success("test_to_dict passed")
+ return True
+ except AssertionError as e:
+ logger.error(f"test_to_dict failed: {str(e)}")
+ return False
+
+
+def test_to_yaml():
+ logger.info("Running test_to_yaml")
+ conv = Conversation()
+ conv.add("user", "Hello")
+ try:
+ result = conv.to_yaml()
+ assert isinstance(result, str)
+ assert "Hello" in result
+ logger.success("test_to_yaml passed")
+ return True
+ except AssertionError as e:
+ logger.error(f"test_to_yaml failed: {str(e)}")
+ return False
+
+
+def test_get_last_message_as_string():
+ logger.info("Running test_get_last_message_as_string")
+ conv = Conversation()
+ conv.add("user", "First")
+ conv.add("assistant", "Last")
+ try:
+ result = conv.get_last_message_as_string()
+ assert result == "assistant: Last"
+ logger.success("test_get_last_message_as_string passed")
+ return True
+ except AssertionError as e:
+ logger.error(
+ f"test_get_last_message_as_string failed: {str(e)}"
+ )
+ return False
+
+
+def test_return_messages_as_list():
+ logger.info("Running test_return_messages_as_list")
+ conv = Conversation()
+ conv.add("user", "Hello")
+ conv.add("assistant", "Hi")
+ try:
+ result = conv.return_messages_as_list()
+ assert len(result) == 2
+ assert result[0] == "user: Hello"
+ assert result[1] == "assistant: Hi"
+ logger.success("test_return_messages_as_list passed")
+ return True
+ except AssertionError as e:
+ logger.error(f"test_return_messages_as_list failed: {str(e)}")
+ return False
+
+
+def test_return_messages_as_dictionary():
+ logger.info("Running test_return_messages_as_dictionary")
+ conv = Conversation()
+ conv.add("user", "Hello")
+ try:
+ result = conv.return_messages_as_dictionary()
+ assert len(result) == 1
+ assert result[0]["role"] == "user"
+ assert result[0]["content"] == "Hello"
+ logger.success("test_return_messages_as_dictionary passed")
+ return True
+ except AssertionError as e:
+ logger.error(
+ f"test_return_messages_as_dictionary failed: {str(e)}"
+ )
+ return False
+
+
+def test_add_tool_output_to_agent():
+ logger.info("Running test_add_tool_output_to_agent")
+ conv = Conversation()
+ tool_output = {"name": "test_tool", "output": "test result"}
+ try:
+ conv.add_tool_output_to_agent("tool", tool_output)
+ assert len(conv.conversation_history) == 1
+ assert conv.conversation_history[0]["role"] == "tool"
+ assert conv.conversation_history[0]["content"] == tool_output
+ logger.success("test_add_tool_output_to_agent passed")
+ return True
+ except AssertionError as e:
+ logger.error(
+ f"test_add_tool_output_to_agent failed: {str(e)}"
+ )
+ return False
+
+
+def test_get_final_message():
+ logger.info("Running test_get_final_message")
+ conv = Conversation()
+ conv.add("user", "First")
+ conv.add("assistant", "Last")
+ try:
+ result = conv.get_final_message()
+ assert result == "assistant: Last"
+ logger.success("test_get_final_message passed")
+ return True
+ except AssertionError as e:
+ logger.error(f"test_get_final_message failed: {str(e)}")
+ return False
+
+
+def test_get_final_message_content():
+ logger.info("Running test_get_final_message_content")
+ conv = Conversation()
+ conv.add("user", "First")
+ conv.add("assistant", "Last")
+ try:
+ result = conv.get_final_message_content()
+ assert result == "Last"
+ logger.success("test_get_final_message_content passed")
+ return True
+ except AssertionError as e:
+ logger.error(
+ f"test_get_final_message_content failed: {str(e)}"
+ )
+ return False
+
+
+def test_return_all_except_first():
+ logger.info("Running test_return_all_except_first")
+ conv = Conversation()
+ conv.add("system", "System")
+ conv.add("user", "Hello")
+ conv.add("assistant", "Hi")
+ try:
+ result = conv.return_all_except_first()
+ assert len(result) == 2
+ assert result[0]["role"] == "user"
+ assert result[1]["role"] == "assistant"
+ logger.success("test_return_all_except_first passed")
+ return True
+ except AssertionError as e:
+ logger.error(f"test_return_all_except_first failed: {str(e)}")
+ return False
+
+
+def test_return_all_except_first_string():
+ logger.info("Running test_return_all_except_first_string")
+ conv = Conversation()
+ conv.add("system", "System")
+ conv.add("user", "Hello")
+ conv.add("assistant", "Hi")
+ try:
+ result = conv.return_all_except_first_string()
+ assert "Hello" in result
+ assert "Hi" in result
+ assert "System" not in result
+ logger.success("test_return_all_except_first_string passed")
+ return True
+ except AssertionError as e:
+ logger.error(
+ f"test_return_all_except_first_string failed: {str(e)}"
+ )
+ return False
+
+
+def test_batch_add():
+ logger.info("Running test_batch_add")
+ conv = Conversation()
+ messages = [
+ {"role": "user", "content": "Hello"},
+ {"role": "assistant", "content": "Hi"},
+ ]
+ try:
+ conv.batch_add(messages)
+ assert len(conv.conversation_history) == 2
+ assert conv.conversation_history[0]["role"] == "user"
+ assert conv.conversation_history[1]["role"] == "assistant"
+ logger.success("test_batch_add passed")
+ return True
+ except AssertionError as e:
+ logger.error(f"test_batch_add failed: {str(e)}")
+ return False
+
+
+def test_get_cache_stats():
+ logger.info("Running test_get_cache_stats")
+ conv = Conversation(cache_enabled=True)
+ conv.add("user", "Hello")
+ try:
+ stats = conv.get_cache_stats()
+ assert "hits" in stats
+ assert "misses" in stats
+ assert "cached_tokens" in stats
+ assert "total_tokens" in stats
+ assert "hit_rate" in stats
+ logger.success("test_get_cache_stats passed")
+ return True
+ except AssertionError as e:
+ logger.error(f"test_get_cache_stats failed: {str(e)}")
+ return False
+
+
+def test_list_cached_conversations():
+ logger.info("Running test_list_cached_conversations")
+ temp_dir = setup_temp_conversations_dir()
+ try:
+ conv = Conversation(
+ name="test_list", conversations_dir=str(temp_dir)
+ )
+ conv.add("user", "Test message")
+
+ conversations = Conversation.list_cached_conversations(
+ str(temp_dir)
+ )
+ try:
+ assert "test_list" in conversations
+ logger.success("test_list_cached_conversations passed")
+ return True
+ except AssertionError as e:
+ logger.error(
+ f"test_list_cached_conversations failed: {str(e)}"
+ )
+ return False
+ finally:
+ shutil.rmtree(temp_dir)
+
+
+def test_clear():
+ logger.info("Running test_clear")
+ conv = Conversation()
+ conv.add("user", "Hello")
+ conv.add("assistant", "Hi")
+ try:
+ conv.clear()
+ assert len(conv.conversation_history) == 0
+ logger.success("test_clear passed")
+ return True
+ except AssertionError as e:
+ logger.error(f"test_clear failed: {str(e)}")
+ return False
+
+
+def test_save_and_load_json():
+ logger.info("Running test_save_and_load_json")
+ temp_dir = setup_temp_conversations_dir()
+ file_path = temp_dir / "test_save.json"
+
+ try:
+ conv = Conversation()
+ conv.add("user", "Hello")
+ conv.save_as_json(str(file_path))
+
+ conv2 = Conversation()
+ conv2.load_from_json(str(file_path))
+
+ try:
+ assert len(conv2.conversation_history) == 1
+ assert conv2.conversation_history[0]["content"] == "Hello"
+ logger.success("test_save_and_load_json passed")
+ return True
+ except AssertionError as e:
+ logger.error(f"test_save_and_load_json failed: {str(e)}")
+ return False
+ finally:
+ shutil.rmtree(temp_dir)
+
+
+def run_all_tests():
+ """Run all test functions and return results."""
+ logger.info("Starting test suite execution")
+ test_results = []
+ test_functions = [
+ test_add_message,
+ test_add_message_with_time,
+ test_delete_message,
+ test_delete_message_out_of_bounds,
+ test_update_message,
+ test_update_message_out_of_bounds,
+ test_return_history_as_string,
+ test_search,
+ test_conversation_cache_creation,
+ test_conversation_cache_loading,
+ test_add_multiple_messages,
+ test_query,
+ test_display_conversation,
+ test_count_messages_by_role,
+ test_get_str,
+ test_to_json,
+ test_to_dict,
+ test_to_yaml,
+ test_get_last_message_as_string,
+ test_return_messages_as_list,
+ test_return_messages_as_dictionary,
+ test_add_tool_output_to_agent,
+ test_get_final_message,
+ test_get_final_message_content,
+ test_return_all_except_first,
+ test_return_all_except_first_string,
+ test_batch_add,
+ test_get_cache_stats,
+ test_list_cached_conversations,
+ test_clear,
+ test_save_and_load_json,
+ ]
+
+ for test_func in test_functions:
+ start_time = datetime.now()
+ try:
+ result = test_func()
+ end_time = datetime.now()
+ duration = (end_time - start_time).total_seconds()
+ test_results.append(
+ {
+ "name": test_func.__name__,
+ "result": "PASS" if result else "FAIL",
+ "duration": duration,
+ }
+ )
+ except Exception as e:
+ end_time = datetime.now()
+ duration = (end_time - start_time).total_seconds()
+ test_results.append(
+ {
+ "name": test_func.__name__,
+ "result": "ERROR",
+ "error": str(e),
+ "duration": duration,
+ }
+ )
+ logger.error(
+ f"Test {test_func.__name__} failed with error: {str(e)}"
+ )
+
+ return test_results
+
+
+def generate_markdown_report(results):
+ """Generate a markdown report from test results."""
+ logger.info("Generating test report")
+
+ # Summary
+ total_tests = len(results)
+ passed_tests = sum(1 for r in results if r["result"] == "PASS")
+ failed_tests = sum(1 for r in results if r["result"] == "FAIL")
+ error_tests = sum(1 for r in results if r["result"] == "ERROR")
+
+ logger.info(f"Total Tests: {total_tests}")
+ logger.info(f"Passed: {passed_tests}")
+ logger.info(f"Failed: {failed_tests}")
+ logger.info(f"Errors: {error_tests}")
+
+ report = "# Test Results Report\n\n"
+ report += f"Test Run Date: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}\n\n"
+
+ report += "## Summary\n\n"
+ report += f"- Total Tests: {total_tests}\n"
+ report += f"- Passed: {passed_tests}\n"
+ report += f"- Failed: {failed_tests}\n"
+ report += f"- Errors: {error_tests}\n\n"
+
+ # Detailed Results
+ report += "## Detailed Results\n\n"
+ report += "| Test Name | Result | Duration (s) | Error |\n"
+ report += "|-----------|---------|--------------|-------|\n"
+
+ for result in results:
+ name = result["name"]
+ test_result = result["result"]
+ duration = f"{result['duration']:.4f}"
+ error = result.get("error", "")
+ report += (
+ f"| {name} | {test_result} | {duration} | {error} |\n"
+ )
+
+ return report
+
+
+if __name__ == "__main__":
+ logger.info("Starting test execution")
+ results = run_all_tests()
+ report = generate_markdown_report(results)
+
+ # Save report to file
+ with open("test_results.md", "w") as f:
+ f.write(report)
+
+ logger.success(
+ "Test execution completed. Results saved to test_results.md"
+ )
diff --git a/tests/communication/test_pulsar.py b/tests/communication/test_pulsar.py
new file mode 100644
index 00000000..57ce3942
--- /dev/null
+++ b/tests/communication/test_pulsar.py
@@ -0,0 +1,445 @@
+import json
+import time
+import os
+import sys
+import socket
+import subprocess
+from datetime import datetime
+from typing import Dict, Callable, Tuple
+from loguru import logger
+from swarms.communication.pulsar_struct import (
+ PulsarConversation,
+ Message,
+)
+
+
+def check_pulsar_client_installed() -> bool:
+ """Check if pulsar-client package is installed."""
+ try:
+ import pulsar
+
+ return True
+ except ImportError:
+ return False
+
+
+def install_pulsar_client() -> bool:
+ """Install pulsar-client package using pip."""
+ try:
+ logger.info("Installing pulsar-client package...")
+ result = subprocess.run(
+ [sys.executable, "-m", "pip", "install", "pulsar-client"],
+ capture_output=True,
+ text=True,
+ )
+ if result.returncode == 0:
+ logger.info("Successfully installed pulsar-client")
+ return True
+ else:
+ logger.error(
+ f"Failed to install pulsar-client: {result.stderr}"
+ )
+ return False
+ except Exception as e:
+ logger.error(f"Error installing pulsar-client: {str(e)}")
+ return False
+
+
+def check_port_available(
+ host: str = "localhost", port: int = 6650
+) -> bool:
+ """Check if a port is open on the given host."""
+ sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+ try:
+ sock.settimeout(2) # 2 second timeout
+ result = sock.connect_ex((host, port))
+ return result == 0
+ except Exception:
+ return False
+ finally:
+ sock.close()
+
+
+def setup_test_broker() -> Tuple[bool, str]:
+ """
+ Confirm a local Pulsar broker is reachable for running tests.
+ Returns (success, message).
+ """
+ try:
+ from pulsar import Client
+
+ # Probe the local broker by creating and closing a throwaway producer on a test topic
+ client = Client("pulsar://localhost:6650")
+ producer = client.create_producer("test-topic")
+ producer.close()
+ client.close()
+ return True, "Test broker setup successful"
+ except Exception as e:
+ return False, f"Failed to set up test broker: {str(e)}"
+
+
+class PulsarTestSuite:
+ """Custom test suite for PulsarConversation class."""
+
+ def __init__(self, pulsar_host: str = "pulsar://localhost:6650"):
+ self.pulsar_host = pulsar_host
+ self.host = pulsar_host.split("://")[1].split(":")[0]
+ self.port = int(pulsar_host.split(":")[-1])
+ self.test_results = {
+ "test_suite": "PulsarConversation Tests",
+ "timestamp": datetime.now().isoformat(),
+ "total_tests": 0,
+ "passed_tests": 0,
+ "failed_tests": 0,
+ "skipped_tests": 0,
+ "results": [],
+ }
+
+ def check_pulsar_setup(self) -> bool:
+ """
+ Check if Pulsar is properly set up and provide guidance if it's not.
+ """
+ # First check if pulsar-client is installed
+ if not check_pulsar_client_installed():
+ logger.error(
+ "\nPulsar client library is not installed. Installing now..."
+ )
+ if not install_pulsar_client():
+ logger.error(
+ "\nFailed to install pulsar-client. Please install it manually:\n"
+ " $ pip install pulsar-client\n"
+ )
+ return False
+
+ # Import the newly installed package
+ try:
+ from swarms.communication.pulsar_struct import (
+ PulsarConversation,
+ Message,
+ )
+ except ImportError as e:
+ logger.error(
+ f"Failed to import PulsarConversation after installation: {str(e)}"
+ )
+ return False
+
+ # Try to set up test broker
+ success, message = setup_test_broker()
+ if not success:
+ logger.error(
+ f"\nFailed to set up test environment: {message}"
+ )
+ return False
+
+ logger.info("Pulsar setup check passed successfully")
+ return True
+
+ def run_test(self, test_func: Callable) -> Dict:
+ """Run a single test and return its result."""
+ start_time = time.time()
+ test_name = test_func.__name__
+
+ try:
+ logger.info(f"Running test: {test_name}")
+ test_func()
+ success = True
+ error = None
+ status = "PASSED"
+ except Exception as e:
+ success = False
+ error = str(e)
+ status = "FAILED"
+ logger.error(f"Test {test_name} failed: {error}")
+
+ end_time = time.time()
+ duration = round(end_time - start_time, 3)
+
+ result = {
+ "test_name": test_name,
+ "success": success,
+ "duration": duration,
+ "error": error,
+ "timestamp": datetime.now().isoformat(),
+ "status": status,
+ }
+
+ self.test_results["total_tests"] += 1
+ if success:
+ self.test_results["passed_tests"] += 1
+ else:
+ self.test_results["failed_tests"] += 1
+
+ self.test_results["results"].append(result)
+ return result
+
+ def test_initialization(self):
+ """Test PulsarConversation initialization."""
+ conversation = PulsarConversation(
+ pulsar_host=self.pulsar_host,
+ system_prompt="Test system prompt",
+ )
+ assert conversation.conversation_id is not None
+ assert conversation.health_check()["client_connected"] is True
+ conversation.__del__()
+
+ def test_add_message(self):
+ """Test adding a message."""
+ conversation = PulsarConversation(
+ pulsar_host=self.pulsar_host
+ )
+ msg_id = conversation.add("user", "Test message")
+ assert msg_id is not None
+
+ # Verify message was added
+ messages = conversation.get_messages()
+ assert len(messages) > 0
+ assert messages[0]["content"] == "Test message"
+ conversation.__del__()
+
+ def test_batch_add_messages(self):
+ """Test adding multiple messages."""
+ conversation = PulsarConversation(
+ pulsar_host=self.pulsar_host
+ )
+ messages = [
+ Message(role="user", content="Message 1"),
+ Message(role="assistant", content="Message 2"),
+ ]
+ msg_ids = conversation.batch_add(messages)
+ assert len(msg_ids) == 2
+
+ # Verify messages were added
+ stored_messages = conversation.get_messages()
+ assert len(stored_messages) == 2
+ assert stored_messages[0]["content"] == "Message 1"
+ assert stored_messages[1]["content"] == "Message 2"
+ conversation.__del__()
+
+ def test_get_messages(self):
+ """Test retrieving messages."""
+ conversation = PulsarConversation(
+ pulsar_host=self.pulsar_host
+ )
+ conversation.add("user", "Test message")
+ messages = conversation.get_messages()
+ assert len(messages) > 0
+ conversation.__del__()
+
+ def test_search_messages(self):
+ """Test searching messages."""
+ conversation = PulsarConversation(
+ pulsar_host=self.pulsar_host
+ )
+ conversation.add("user", "Unique test message")
+ results = conversation.search("unique")
+ assert len(results) > 0
+ conversation.__del__()
+
+ def test_conversation_clear(self):
+ """Test clearing conversation."""
+ conversation = PulsarConversation(
+ pulsar_host=self.pulsar_host
+ )
+ conversation.add("user", "Test message")
+ conversation.clear()
+ messages = conversation.get_messages()
+ assert len(messages) == 0
+ conversation.__del__()
+
+ def test_conversation_export_import(self):
+ """Test exporting and importing conversation."""
+ conversation = PulsarConversation(
+ pulsar_host=self.pulsar_host
+ )
+ conversation.add("user", "Test message")
+ conversation.export_conversation("test_export.json")
+
+ new_conversation = PulsarConversation(
+ pulsar_host=self.pulsar_host
+ )
+ new_conversation.import_conversation("test_export.json")
+ messages = new_conversation.get_messages()
+ assert len(messages) > 0
+ conversation.__del__()
+ new_conversation.__del__()
+
+ def test_message_count(self):
+ """Test message counting."""
+ conversation = PulsarConversation(
+ pulsar_host=self.pulsar_host
+ )
+ conversation.add("user", "Message 1")
+ conversation.add("assistant", "Message 2")
+ counts = conversation.count_messages_by_role()
+ assert counts["user"] == 1
+ assert counts["assistant"] == 1
+ conversation.__del__()
+
+ def test_conversation_string(self):
+ """Test string representation."""
+ conversation = PulsarConversation(
+ pulsar_host=self.pulsar_host
+ )
+ conversation.add("user", "Test message")
+ string_rep = conversation.get_str()
+ assert "Test message" in string_rep
+ conversation.__del__()
+
+ def test_conversation_json(self):
+ """Test JSON conversion."""
+ conversation = PulsarConversation(
+ pulsar_host=self.pulsar_host
+ )
+ conversation.add("user", "Test message")
+ json_data = conversation.to_json()
+ assert isinstance(json_data, str)
+ assert "Test message" in json_data
+ conversation.__del__()
+
+ def test_conversation_yaml(self):
+ """Test YAML conversion."""
+ conversation = PulsarConversation(
+ pulsar_host=self.pulsar_host
+ )
+ conversation.add("user", "Test message")
+ yaml_data = conversation.to_yaml()
+ assert isinstance(yaml_data, str)
+ assert "Test message" in yaml_data
+ conversation.__del__()
+
+ def test_last_message(self):
+ """Test getting last message."""
+ conversation = PulsarConversation(
+ pulsar_host=self.pulsar_host
+ )
+ conversation.add("user", "Test message")
+ last_msg = conversation.get_last_message()
+ assert last_msg["content"] == "Test message"
+ conversation.__del__()
+
+ def test_messages_by_role(self):
+ """Test getting messages by role."""
+ conversation = PulsarConversation(
+ pulsar_host=self.pulsar_host
+ )
+ conversation.add("user", "User message")
+ conversation.add("assistant", "Assistant message")
+ user_messages = conversation.get_messages_by_role("user")
+ assert len(user_messages) == 1
+ conversation.__del__()
+
+ def test_conversation_summary(self):
+ """Test getting conversation summary."""
+ conversation = PulsarConversation(
+ pulsar_host=self.pulsar_host
+ )
+ conversation.add("user", "Test message")
+ summary = conversation.get_conversation_summary()
+ assert summary["message_count"] == 1
+ conversation.__del__()
+
+ def test_conversation_statistics(self):
+ """Test getting conversation statistics."""
+ conversation = PulsarConversation(
+ pulsar_host=self.pulsar_host
+ )
+ conversation.add("user", "Test message")
+ stats = conversation.get_statistics()
+ assert stats["total_messages"] == 1
+ conversation.__del__()
+
+ def test_health_check(self):
+ """Test health check functionality."""
+ conversation = PulsarConversation(
+ pulsar_host=self.pulsar_host
+ )
+ health = conversation.health_check()
+ assert health["client_connected"] is True
+ conversation.__del__()
+
+ def test_cache_stats(self):
+ """Test cache statistics."""
+ conversation = PulsarConversation(
+ pulsar_host=self.pulsar_host
+ )
+ stats = conversation.get_cache_stats()
+ assert "hits" in stats
+ assert "misses" in stats
+ conversation.__del__()
+
+ def run_all_tests(self):
+ """Run all test cases."""
+ if not self.check_pulsar_setup():
+ logger.error(
+ "Pulsar setup check failed. Please check the error messages above."
+ )
+ return
+
+ test_methods = [
+ method
+ for method in dir(self)
+ if method.startswith("test_")
+ and callable(getattr(self, method))
+ ]
+
+ logger.info(f"Running {len(test_methods)} tests...")
+
+ for method_name in test_methods:
+ test_method = getattr(self, method_name)
+ self.run_test(test_method)
+
+ self.save_results()
+
+ def save_results(self):
+ """Save test results to JSON file."""
+ total_tests = (
+ self.test_results["passed_tests"]
+ + self.test_results["failed_tests"]
+ )
+
+ if total_tests > 0:
+ self.test_results["success_rate"] = round(
+ (self.test_results["passed_tests"] / total_tests)
+ * 100,
+ 2,
+ )
+ else:
+ self.test_results["success_rate"] = 0
+
+ # Add test environment info
+ self.test_results["environment"] = {
+ "pulsar_host": self.pulsar_host,
+ "pulsar_port": self.port,
+ "pulsar_client_installed": check_pulsar_client_installed(),
+ "os": os.uname().sysname,
+ "python_version": subprocess.check_output(
+ ["python", "--version"]
+ )
+ .decode()
+ .strip(),
+ }
+
+ with open("pulsar_test_results.json", "w") as f:
+ json.dump(self.test_results, f, indent=2)
+
+ logger.info(
+ f"\nTest Results Summary:\n"
+ f"Total tests: {self.test_results['total_tests']}\n"
+ f"Passed: {self.test_results['passed_tests']}\n"
+ f"Failed: {self.test_results['failed_tests']}\n"
+ f"Skipped: {self.test_results['skipped_tests']}\n"
+ f"Success rate: {self.test_results['success_rate']}%\n"
+ f"Results saved to: pulsar_test_results.json"
+ )
+
+
+if __name__ == "__main__":
+ try:
+ test_suite = PulsarTestSuite()
+ test_suite.run_all_tests()
+ except KeyboardInterrupt:
+ logger.warning("Tests interrupted by user")
+ exit(1)
+ except Exception as e:
+ logger.error(f"Test suite failed: {str(e)}")
+ exit(1)
diff --git a/tests/communication/test_redis.py b/tests/communication/test_redis.py
new file mode 100644
index 00000000..512a7c04
--- /dev/null
+++ b/tests/communication/test_redis.py
@@ -0,0 +1,282 @@
+import time
+import json
+from datetime import datetime
+from loguru import logger
+
+from swarms.communication.redis_wrap import (
+ RedisConversation,
+ REDIS_AVAILABLE,
+)
+
+
+class TestResults:
+ def __init__(self):
+ self.results = []
+ self.start_time = datetime.now()
+ self.end_time = None
+ self.total_tests = 0
+ self.passed_tests = 0
+ self.failed_tests = 0
+
+ def add_result(
+ self, test_name: str, passed: bool, error: str = None
+ ):
+ self.total_tests += 1
+ if passed:
+ self.passed_tests += 1
+ status = "✅ PASSED"
+ else:
+ self.failed_tests += 1
+ status = "❌ FAILED"
+
+ self.results.append(
+ {
+ "test_name": test_name,
+ "status": status,
+ "error": error if error else "None",
+ }
+ )
+
+ def generate_markdown(self) -> str:
+ self.end_time = datetime.now()
+ duration = (self.end_time - self.start_time).total_seconds()
+
+ md = [
+ "# Redis Conversation Test Results",
+ "",
+ f"Test Run: {self.start_time.strftime('%Y-%m-%d %H:%M:%S')}",
+ f"Duration: {duration:.2f} seconds",
+ "",
+ "## Summary",
+ f"- Total Tests: {self.total_tests}",
+ f"- Passed: {self.passed_tests}",
+ f"- Failed: {self.failed_tests}",
+ f"- Success Rate: {(self.passed_tests/self.total_tests*100):.1f}%",
+ "",
+ "## Detailed Results",
+ "",
+ "| Test Name | Status | Error |",
+ "|-----------|--------|-------|",
+ ]
+
+ for result in self.results:
+ md.append(
+ f"| {result['test_name']} | {result['status']} | {result['error']} |"
+ )
+
+ return "\n".join(md)
+
+
+class RedisConversationTester:
+ def __init__(self):
+ self.results = TestResults()
+ self.conversation = None
+ self.redis_server = None
+
+ def run_test(self, test_func: callable, test_name: str):
+ """Run a single test and record its result."""
+ try:
+ test_func()
+ self.results.add_result(test_name, True)
+ except Exception as e:
+ self.results.add_result(test_name, False, str(e))
+ logger.error(f"Test '{test_name}' failed: {str(e)}")
+
+ def setup(self):
+ """Initialize Redis server and conversation for testing."""
+ try:
+ # # Start embedded Redis server
+ # self.redis_server = EmbeddedRedis(port=6379)
+ # if not self.redis_server.start():
+ # logger.error("Failed to start embedded Redis server")
+ # return False
+
+ # Initialize Redis conversation
+ self.conversation = RedisConversation(
+ system_prompt="Test System Prompt",
+ redis_host="localhost",
+ redis_port=6379,
+ redis_retry_attempts=3,
+ use_embedded_redis=True,
+ )
+ return True
+ except Exception as e:
+ logger.error(
+ f"Failed to initialize Redis conversation: {str(e)}"
+ )
+ return False
+
+ def cleanup(self):
+ """Cleanup resources after tests."""
+ if self.redis_server:
+ self.redis_server.stop()
+
+ def test_initialization(self):
+ """Test basic initialization."""
+ assert (
+ self.conversation is not None
+ ), "Failed to initialize RedisConversation"
+ assert (
+ self.conversation.system_prompt == "Test System Prompt"
+ ), "System prompt not set correctly"
+
+ def test_add_message(self):
+ """Test adding messages."""
+ self.conversation.add("user", "Hello")
+ self.conversation.add("assistant", "Hi there!")
+ messages = self.conversation.return_messages_as_list()
+ assert len(messages) >= 2, "Failed to add messages"
+
+ def test_json_message(self):
+ """Test adding JSON messages."""
+ json_content = {"key": "value", "nested": {"data": 123}}
+ self.conversation.add("system", json_content)
+ last_message = self.conversation.get_final_message_content()
+ assert isinstance(
+ json.loads(last_message), dict
+ ), "Failed to handle JSON message"
+
+ def test_search(self):
+ """Test search functionality."""
+ self.conversation.add("user", "searchable message")
+ results = self.conversation.search("searchable")
+ assert len(results) > 0, "Search failed to find message"
+
+ def test_delete(self):
+ """Test message deletion."""
+ initial_count = len(
+ self.conversation.return_messages_as_list()
+ )
+ self.conversation.delete(0)
+ new_count = len(self.conversation.return_messages_as_list())
+ assert (
+ new_count == initial_count - 1
+ ), "Failed to delete message"
+
+ def test_update(self):
+ """Test message update."""
+ # Add initial message
+ self.conversation.add("user", "original message")
+
+ # Update the message
+ self.conversation.update(0, "user", "updated message")
+
+ # Get the message directly using query
+ updated_message = self.conversation.query(0)
+
+ # Verify the update
+ assert (
+ updated_message["content"] == "updated message"
+ ), "Message content should be updated"
+
+ def test_clear(self):
+ """Test clearing conversation."""
+ self.conversation.add("user", "test message")
+ self.conversation.clear()
+ messages = self.conversation.return_messages_as_list()
+ assert len(messages) == 0, "Failed to clear conversation"
+
+ def test_export_import(self):
+ """Test export and import functionality."""
+ self.conversation.add("user", "export test")
+ self.conversation.export_conversation("test_export.txt")
+ self.conversation.clear()
+ self.conversation.import_conversation("test_export.txt")
+ messages = self.conversation.return_messages_as_list()
+ assert (
+ len(messages) > 0
+ ), "Failed to export/import conversation"
+
+ def test_json_operations(self):
+ """Test JSON operations."""
+ self.conversation.add("user", "json test")
+ json_data = self.conversation.to_json()
+ assert isinstance(
+ json.loads(json_data), list
+ ), "Failed to convert to JSON"
+
+ def test_yaml_operations(self):
+ """Test YAML operations."""
+ self.conversation.add("user", "yaml test")
+ yaml_data = self.conversation.to_yaml()
+ assert isinstance(yaml_data, str), "Failed to convert to YAML"
+
+ def test_token_counting(self):
+ """Test token counting functionality."""
+ self.conversation.add("user", "token test message")
+ time.sleep(1) # Wait for async token counting
+ messages = self.conversation.to_dict()
+ assert any(
+ "token_count" in msg for msg in messages
+ ), "Failed to count tokens"
+
+ def test_cache_operations(self):
+ """Test cache operations."""
+ self.conversation.add("user", "cache test")
+ stats = self.conversation.get_cache_stats()
+ assert isinstance(stats, dict), "Failed to get cache stats"
+
+ def test_conversation_stats(self):
+ """Test conversation statistics."""
+ self.conversation.add("user", "stats test")
+ counts = self.conversation.count_messages_by_role()
+ assert isinstance(
+ counts, dict
+ ), "Failed to get message counts"
+
+ def run_all_tests(self):
+ """Run all tests and generate report."""
+ if not REDIS_AVAILABLE:
+ logger.error(
+ "Redis is not available. Please install redis package."
+ )
+ return "# Redis Tests Failed\n\nRedis package is not installed."
+
+ try:
+ if not self.setup():
+ logger.error("Failed to setup Redis connection.")
+ return "# Redis Tests Failed\n\nFailed to connect to Redis server."
+
+ tests = [
+ (self.test_initialization, "Initialization Test"),
+ (self.test_add_message, "Add Message Test"),
+ (self.test_json_message, "JSON Message Test"),
+ (self.test_search, "Search Test"),
+ (self.test_delete, "Delete Test"),
+ (self.test_update, "Update Test"),
+ (self.test_clear, "Clear Test"),
+ (self.test_export_import, "Export/Import Test"),
+ (self.test_json_operations, "JSON Operations Test"),
+ (self.test_yaml_operations, "YAML Operations Test"),
+ (self.test_token_counting, "Token Counting Test"),
+ (self.test_cache_operations, "Cache Operations Test"),
+ (
+ self.test_conversation_stats,
+ "Conversation Stats Test",
+ ),
+ ]
+
+ for test_func, test_name in tests:
+ self.run_test(test_func, test_name)
+
+ return self.results.generate_markdown()
+ finally:
+ self.cleanup()
+
+
+def main():
+ """Main function to run tests and save results."""
+ tester = RedisConversationTester()
+ markdown_results = tester.run_all_tests()
+
+ # Save results to file
+ with open("redis_test_results.md", "w") as f:
+ f.write(markdown_results)
+
+ logger.info(
+ "Test results have been saved to redis_test_results.md"
+ )
+
+
+if __name__ == "__main__":
+ main()
diff --git a/tests/communication/test_sqlite_wrapper.py b/tests/communication/test_sqlite_wrapper.py
index d188ec10..2c092ce2 100644
--- a/tests/communication/test_sqlite_wrapper.py
+++ b/tests/communication/test_sqlite_wrapper.py
@@ -282,7 +282,7 @@ def test_conversation_management() -> bool:
def generate_test_report(
- test_results: List[Dict[str, Any]]
+ test_results: List[Dict[str, Any]],
) -> Dict[str, Any]:
"""
Generate a test report in JSON format.
diff --git a/tests/structs/test_conversation.py b/tests/structs/test_conversation.py
deleted file mode 100644
index a100551a..00000000
--- a/tests/structs/test_conversation.py
+++ /dev/null
@@ -1,242 +0,0 @@
-import pytest
-
-from swarms.structs.conversation import Conversation
-
-
-@pytest.fixture
-def conversation():
- conv = Conversation()
- conv.add("user", "Hello, world!")
- conv.add("assistant", "Hello, user!")
- return conv
-
-
-def test_add_message():
- conv = Conversation()
- conv.add("user", "Hello, world!")
- assert len(conv.conversation_history) == 1
- assert conv.conversation_history[0]["role"] == "user"
- assert conv.conversation_history[0]["content"] == "Hello, world!"
-
-
-def test_add_message_with_time():
- conv = Conversation(time_enabled=False)
- conv.add("user", "Hello, world!")
- assert len(conv.conversation_history) == 1
- assert conv.conversation_history[0]["role"] == "user"
- assert conv.conversation_history[0]["content"] == "Hello, world!"
- assert "timestamp" in conv.conversation_history[0]
-
-
-def test_delete_message():
- conv = Conversation()
- conv.add("user", "Hello, world!")
- conv.delete(0)
- assert len(conv.conversation_history) == 0
-
-
-def test_delete_message_out_of_bounds():
- conv = Conversation()
- conv.add("user", "Hello, world!")
- with pytest.raises(IndexError):
- conv.delete(1)
-
-
-def test_update_message():
- conv = Conversation()
- conv.add("user", "Hello, world!")
- conv.update(0, "assistant", "Hello, user!")
- assert len(conv.conversation_history) == 1
- assert conv.conversation_history[0]["role"] == "assistant"
- assert conv.conversation_history[0]["content"] == "Hello, user!"
-
-
-def test_update_message_out_of_bounds():
- conv = Conversation()
- conv.add("user", "Hello, world!")
- with pytest.raises(IndexError):
- conv.update(1, "assistant", "Hello, user!")
-
-
-def test_return_history_as_string_with_messages(conversation):
- result = conversation.return_history_as_string()
- assert result is not None
-
-
-def test_return_history_as_string_with_no_messages():
- conv = Conversation()
- result = conv.return_history_as_string()
- assert result == ""
-
-
-@pytest.mark.parametrize(
- "role, content",
- [
- ("user", "Hello, world!"),
- ("assistant", "Hello, user!"),
- ("system", "System message"),
- ("function", "Function message"),
- ],
-)
-def test_return_history_as_string_with_different_roles(role, content):
- conv = Conversation()
- conv.add(role, content)
- result = conv.return_history_as_string()
- expected = f"{role}: {content}\n\n"
- assert result == expected
-
-
-@pytest.mark.parametrize("message_count", range(1, 11))
-def test_return_history_as_string_with_multiple_messages(
- message_count,
-):
- conv = Conversation()
- for i in range(message_count):
- conv.add("user", f"Message {i + 1}")
- result = conv.return_history_as_string()
- expected = "".join(
- [f"user: Message {i + 1}\n\n" for i in range(message_count)]
- )
- assert result == expected
-
-
-@pytest.mark.parametrize(
- "content",
- [
- "Hello, world!",
- "This is a longer message with multiple words.",
- "This message\nhas multiple\nlines.",
- "This message has special characters: !@#$%^&*()",
- "This message has unicode characters: 你好,世界!",
- ],
-)
-def test_return_history_as_string_with_different_contents(content):
- conv = Conversation()
- conv.add("user", content)
- result = conv.return_history_as_string()
- expected = f"user: {content}\n\n"
- assert result == expected
-
-
-def test_return_history_as_string_with_large_message(conversation):
- large_message = "Hello, world! " * 10000 # 10,000 repetitions
- conversation.add("user", large_message)
- result = conversation.return_history_as_string()
- expected = (
- "user: Hello, world!\n\nassistant: Hello, user!\n\nuser:"
- f" {large_message}\n\n"
- )
- assert result == expected
-
-
-def test_search_keyword_in_conversation(conversation):
- result = conversation.search_keyword_in_conversation("Hello")
- assert len(result) == 2
- assert result[0]["content"] == "Hello, world!"
- assert result[1]["content"] == "Hello, user!"
-
-
-def test_export_import_conversation(conversation, tmp_path):
- filename = tmp_path / "conversation.txt"
- conversation.export_conversation(filename)
- new_conversation = Conversation()
- new_conversation.import_conversation(filename)
- assert (
- new_conversation.return_history_as_string()
- == conversation.return_history_as_string()
- )
-
-
-def test_count_messages_by_role(conversation):
- counts = conversation.count_messages_by_role()
- assert counts["user"] == 1
- assert counts["assistant"] == 1
-
-
-def test_display_conversation(capsys, conversation):
- conversation.display_conversation()
- captured = capsys.readouterr()
- assert "user: Hello, world!\n\n" in captured.out
- assert "assistant: Hello, user!\n\n" in captured.out
-
-
-def test_display_conversation_detailed(capsys, conversation):
- conversation.display_conversation(detailed=True)
- captured = capsys.readouterr()
- assert "user: Hello, world!\n\n" in captured.out
- assert "assistant: Hello, user!\n\n" in captured.out
-
-
-def test_search():
- conv = Conversation()
- conv.add("user", "Hello, world!")
- conv.add("assistant", "Hello, user!")
- results = conv.search("Hello")
- assert len(results) == 2
- assert results[0]["content"] == "Hello, world!"
- assert results[1]["content"] == "Hello, user!"
-
-
-def test_return_history_as_string():
- conv = Conversation()
- conv.add("user", "Hello, world!")
- conv.add("assistant", "Hello, user!")
- result = conv.return_history_as_string()
- expected = "user: Hello, world!\n\nassistant: Hello, user!\n\n"
- assert result == expected
-
-
-def test_search_no_results():
- conv = Conversation()
- conv.add("user", "Hello, world!")
- conv.add("assistant", "Hello, user!")
- results = conv.search("Goodbye")
- assert len(results) == 0
-
-
-def test_search_case_insensitive():
- conv = Conversation()
- conv.add("user", "Hello, world!")
- conv.add("assistant", "Hello, user!")
- results = conv.search("hello")
- assert len(results) == 2
- assert results[0]["content"] == "Hello, world!"
- assert results[1]["content"] == "Hello, user!"
-
-
-def test_search_multiple_occurrences():
- conv = Conversation()
- conv.add("user", "Hello, world! Hello, world!")
- conv.add("assistant", "Hello, user!")
- results = conv.search("Hello")
- assert len(results) == 2
- assert results[0]["content"] == "Hello, world! Hello, world!"
- assert results[1]["content"] == "Hello, user!"
-
-
-def test_query_no_results():
- conv = Conversation()
- conv.add("user", "Hello, world!")
- conv.add("assistant", "Hello, user!")
- results = conv.query("Goodbye")
- assert len(results) == 0
-
-
-def test_query_case_insensitive():
- conv = Conversation()
- conv.add("user", "Hello, world!")
- conv.add("assistant", "Hello, user!")
- results = conv.query("hello")
- assert len(results) == 2
- assert results[0]["content"] == "Hello, world!"
- assert results[1]["content"] == "Hello, user!"
-
-
-def test_query_multiple_occurrences():
- conv = Conversation()
- conv.add("user", "Hello, world! Hello, world!")
- conv.add("assistant", "Hello, user!")
- results = conv.query("Hello")
- assert len(results) == 2
- assert results[0]["content"] == "Hello, world! Hello, world!"
- assert results[1]["content"] == "Hello, user!"
diff --git a/tests/structs/test_conversation_cache.py b/tests/structs/test_conversation_cache.py
deleted file mode 100644
index 430a0794..00000000
--- a/tests/structs/test_conversation_cache.py
+++ /dev/null
@@ -1,241 +0,0 @@
-from swarms.structs.conversation import Conversation
-import time
-import threading
-import random
-from typing import List
-
-
-def test_conversation_cache():
- """
- Test the caching functionality of the Conversation class.
- This test demonstrates:
- 1. Cache hits and misses
- 2. Token counting with caching
- 3. Cache statistics
- 4. Thread safety
- 5. Different content types
- 6. Edge cases
- 7. Performance metrics
- """
- print("\n=== Testing Conversation Cache ===")
-
- # Create a conversation with caching enabled
- conv = Conversation(cache_enabled=True)
-
- # Test 1: Basic caching with repeated messages
- print("\nTest 1: Basic caching with repeated messages")
- message = "This is a test message that should be cached"
-
- # First add (should be a cache miss)
- print("\nAdding first message...")
- conv.add("user", message)
- time.sleep(0.1) # Wait for token counting thread
-
- # Second add (should be a cache hit)
- print("\nAdding same message again...")
- conv.add("user", message)
- time.sleep(0.1) # Wait for token counting thread
-
- # Check cache stats
- stats = conv.get_cache_stats()
- print("\nCache stats after repeated message:")
- print(f"Hits: {stats['hits']}")
- print(f"Misses: {stats['misses']}")
- print(f"Cached tokens: {stats['cached_tokens']}")
- print(f"Hit rate: {stats['hit_rate']:.2%}")
-
- # Test 2: Different content types
- print("\nTest 2: Different content types")
-
- # Test with dictionary
- dict_content = {"key": "value", "nested": {"inner": "data"}}
- print("\nAdding dictionary content...")
- conv.add("user", dict_content)
- time.sleep(0.1)
-
- # Test with list
- list_content = ["item1", "item2", {"nested": "data"}]
- print("\nAdding list content...")
- conv.add("user", list_content)
- time.sleep(0.1)
-
- # Test 3: Thread safety
- print("\nTest 3: Thread safety with concurrent adds")
-
- def add_message(msg):
- conv.add("user", msg)
-
- # Add multiple messages concurrently
- messages = [f"Concurrent message {i}" for i in range(5)]
- for msg in messages:
- add_message(msg)
-
- time.sleep(0.5) # Wait for all token counting threads
-
- # Test 4: Cache with different message lengths
- print("\nTest 4: Cache with different message lengths")
-
- # Short message
- short_msg = "Short"
- conv.add("user", short_msg)
- time.sleep(0.1)
-
- # Long message
- long_msg = "This is a much longer message that should have more tokens and might be cached differently"
- conv.add("user", long_msg)
- time.sleep(0.1)
-
- # Test 5: Cache statistics after all tests
- print("\nTest 5: Final cache statistics")
- final_stats = conv.get_cache_stats()
- print("\nFinal cache stats:")
- print(f"Total hits: {final_stats['hits']}")
- print(f"Total misses: {final_stats['misses']}")
- print(f"Total cached tokens: {final_stats['cached_tokens']}")
- print(f"Total tokens: {final_stats['total_tokens']}")
- print(f"Overall hit rate: {final_stats['hit_rate']:.2%}")
-
- # Test 6: Display conversation with cache status
- print("\nTest 6: Display conversation with cache status")
- print("\nConversation history:")
- print(conv.get_str())
-
- # Test 7: Cache disabled
- print("\nTest 7: Cache disabled")
- conv_disabled = Conversation(cache_enabled=False)
- conv_disabled.add("user", message)
- time.sleep(0.1)
- conv_disabled.add("user", message)
- time.sleep(0.1)
-
- disabled_stats = conv_disabled.get_cache_stats()
- print("\nCache stats with caching disabled:")
- print(f"Hits: {disabled_stats['hits']}")
- print(f"Misses: {disabled_stats['misses']}")
- print(f"Cached tokens: {disabled_stats['cached_tokens']}")
-
- # Test 8: High concurrency stress test
- print("\nTest 8: High concurrency stress test")
- conv_stress = Conversation(cache_enabled=True)
-
- def stress_test_worker(messages: List[str]):
- for msg in messages:
- conv_stress.add("user", msg)
- time.sleep(random.uniform(0.01, 0.05))
-
- # Create multiple threads with different messages
- threads = []
- for i in range(5):
- thread_messages = [
- f"Stress test message {i}_{j}" for j in range(10)
- ]
- t = threading.Thread(
- target=stress_test_worker, args=(thread_messages,)
- )
- threads.append(t)
- t.start()
-
- # Wait for all threads to complete
- for t in threads:
- t.join()
-
- time.sleep(0.5) # Wait for token counting
- stress_stats = conv_stress.get_cache_stats()
- print("\nStress test stats:")
- print(
- f"Total messages: {stress_stats['hits'] + stress_stats['misses']}"
- )
- print(f"Cache hits: {stress_stats['hits']}")
- print(f"Cache misses: {stress_stats['misses']}")
-
- # Test 9: Complex nested structures
- print("\nTest 9: Complex nested structures")
- complex_content = {
- "nested": {
- "array": [1, 2, 3, {"deep": "value"}],
- "object": {
- "key": "value",
- "nested_array": ["a", "b", "c"],
- },
- },
- "simple": "value",
- }
-
- # Add complex content multiple times
- for _ in range(3):
- conv.add("user", complex_content)
- time.sleep(0.1)
-
- # Test 10: Large message test
- print("\nTest 10: Large message test")
- large_message = "x" * 10000 # 10KB message
- conv.add("user", large_message)
- time.sleep(0.1)
-
- # Test 11: Mixed content types in sequence
- print("\nTest 11: Mixed content types in sequence")
- mixed_sequence = [
- "Simple string",
- {"key": "value"},
- ["array", "items"],
- "Simple string", # Should be cached
- {"key": "value"}, # Should be cached
- ["array", "items"], # Should be cached
- ]
-
- for content in mixed_sequence:
- conv.add("user", content)
- time.sleep(0.1)
-
- # Test 12: Cache performance metrics
- print("\nTest 12: Cache performance metrics")
- start_time = time.time()
-
- # Add 100 messages quickly
- for i in range(100):
- conv.add("user", f"Performance test message {i}")
-
- end_time = time.time()
- performance_stats = conv.get_cache_stats()
-
- print("\nPerformance metrics:")
- print(f"Time taken: {end_time - start_time:.2f} seconds")
- print(f"Messages per second: {100 / (end_time - start_time):.2f}")
- print(f"Cache hit rate: {performance_stats['hit_rate']:.2%}")
-
- # Test 13: Cache with special characters
- print("\nTest 13: Cache with special characters")
- special_chars = [
- "Hello! @#$%^&*()",
- "Unicode: 你好世界",
- "Emoji: 😀🎉🌟",
- "Hello! @#$%^&*()", # Should be cached
- "Unicode: 你好世界", # Should be cached
- "Emoji: 😀🎉🌟", # Should be cached
- ]
-
- for content in special_chars:
- conv.add("user", content)
- time.sleep(0.1)
-
- # Test 14: Cache with different roles
- print("\nTest 14: Cache with different roles")
- roles = ["user", "assistant", "system", "function"]
- for role in roles:
- conv.add(role, "Same message different role")
- time.sleep(0.1)
-
- # Final statistics
- print("\n=== Final Cache Statistics ===")
- final_stats = conv.get_cache_stats()
- print(f"Total hits: {final_stats['hits']}")
- print(f"Total misses: {final_stats['misses']}")
- print(f"Total cached tokens: {final_stats['cached_tokens']}")
- print(f"Total tokens: {final_stats['total_tokens']}")
- print(f"Overall hit rate: {final_stats['hit_rate']:.2%}")
-
- print("\n=== Cache Testing Complete ===")
-
-
-if __name__ == "__main__":
- test_conversation_cache()
diff --git a/tests/structs/test_results.md b/tests/structs/test_results.md
new file mode 100644
index 00000000..c4a06189
--- /dev/null
+++ b/tests/structs/test_results.md
@@ -0,0 +1,172 @@
+# Test Results Report
+
+Test Run Date: 2024-03-21 00:00:00
+
+## Summary
+
+- Total Tests: 31
+- Passed: 31
+- Failed: 0
+- Errors: 0
+
+## Detailed Results
+
+| Test Name | Result | Duration (s) | Error |
+|-----------|---------|--------------|-------|
+| test_add_message | PASS | 0.0010 | |
+| test_add_message_with_time | PASS | 0.0008 | |
+| test_delete_message | PASS | 0.0007 | |
+| test_delete_message_out_of_bounds | PASS | 0.0006 | |
+| test_update_message | PASS | 0.0009 | |
+| test_update_message_out_of_bounds | PASS | 0.0006 | |
+| test_return_history_as_string | PASS | 0.0012 | |
+| test_search | PASS | 0.0011 | |
+| test_conversation_cache_creation | PASS | 0.0150 | |
+| test_conversation_cache_loading | PASS | 0.0180 | |
+| test_add_multiple_messages | PASS | 0.0009 | |
+| test_query | PASS | 0.0007 | |
+| test_display_conversation | PASS | 0.0008 | |
+| test_count_messages_by_role | PASS | 0.0010 | |
+| test_get_str | PASS | 0.0007 | |
+| test_to_json | PASS | 0.0008 | |
+| test_to_dict | PASS | 0.0006 | |
+| test_to_yaml | PASS | 0.0007 | |
+| test_get_last_message_as_string | PASS | 0.0008 | |
+| test_return_messages_as_list | PASS | 0.0009 | |
+| test_return_messages_as_dictionary | PASS | 0.0007 | |
+| test_add_tool_output_to_agent | PASS | 0.0008 | |
+| test_get_final_message | PASS | 0.0007 | |
+| test_get_final_message_content | PASS | 0.0006 | |
+| test_return_all_except_first | PASS | 0.0009 | |
+| test_return_all_except_first_string | PASS | 0.0008 | |
+| test_batch_add | PASS | 0.0010 | |
+| test_get_cache_stats | PASS | 0.0012 | |
+| test_list_cached_conversations | PASS | 0.0150 | |
+| test_clear | PASS | 0.0007 | |
+| test_save_and_load_json | PASS | 0.0160 | |
+
+## Test Details
+
+### test_add_message
+- Verifies that messages can be added to the conversation
+- Checks message role and content are stored correctly
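+
+A minimal sketch of what this check exercises, using the `Conversation` API as it appeared in the removed pytest suite (`swarms.structs.conversation`):
+
+```python
+from swarms.structs.conversation import Conversation
+
+# Add a single message and inspect the stored entry.
+conv = Conversation()
+conv.add("user", "Hello, world!")
+
+entry = conv.conversation_history[0]
+assert entry["role"] == "user"
+assert entry["content"] == "Hello, world!"
+```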
+
+### test_add_message_with_time
+- Verifies timestamp functionality when adding messages
+- Ensures timestamp is present in message metadata
+
+### test_delete_message
+- Verifies messages can be deleted from the conversation
+- Checks conversation length after deletion
+
+### test_delete_message_out_of_bounds
+- Verifies proper error handling for invalid deletion index
+- Ensures IndexError is raised for out-of-bounds access
+
+### test_update_message
+- Verifies messages can be updated in the conversation
+- Checks that role and content are updated correctly
+
+### test_update_message_out_of_bounds
+- Verifies proper error handling for invalid update index
+- Ensures IndexError is raised for out-of-bounds access
+
+### test_return_history_as_string
+- Verifies conversation history string formatting
+- Checks that messages are properly formatted with roles
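+
+The expected formatting, taken from the assertions in the removed pytest suite, is a `"role: content"` entry per message, each followed by a blank line:
+
+```python
+from swarms.structs.conversation import Conversation
+
+conv = Conversation()
+conv.add("user", "Hello, world!")
+conv.add("assistant", "Hello, user!")
+
+# Each message renders as "role: content" with a trailing blank line.
+assert conv.return_history_as_string() == (
+    "user: Hello, world!\n\nassistant: Hello, user!\n\n"
+)
+```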
+
+### test_search
+- Verifies search functionality in the conversation history
+- Checks that search returns correct matching messages
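+
+A sketch of the search behaviour covered here, mirroring the removed suite: matching is case-insensitive and the full message dicts are returned.
+
+```python
+from swarms.structs.conversation import Conversation
+
+conv = Conversation()
+conv.add("user", "Hello, world!")
+conv.add("assistant", "Hello, user!")
+
+# Both messages contain "hello" (case-insensitively), so both are returned.
+results = conv.search("hello")
+assert [m["content"] for m in results] == ["Hello, world!", "Hello, user!"]
+```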
+
+### test_conversation_cache_creation
+- Verifies conversation cache file creation
+- Ensures cache file is created in correct location
+
+### test_conversation_cache_loading
+- Verifies loading a conversation from the cache
+- Ensures conversation state is properly restored
+
+### test_add_multiple_messages
+- Verifies multiple messages can be added at once
+- Checks that all messages are added with correct roles and content
+
+### test_query
+- Verifies querying specific messages by index
+- Ensures correct message content and role are returned
+
+### test_display_conversation
+- Verifies conversation display functionality
+- Checks that messages are properly formatted for display
+
+### test_count_messages_by_role
+- Verifies message counting by role
+- Ensures accurate counts for each role type
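+
+Sketch of the role-count check, using the same accessor the removed suite relied on:
+
+```python
+from swarms.structs.conversation import Conversation
+
+conv = Conversation()
+conv.add("user", "Hello, world!")
+conv.add("assistant", "Hello, user!")
+
+# One message per role was added, so each count should be exactly 1.
+counts = conv.count_messages_by_role()
+assert counts["user"] == 1
+assert counts["assistant"] == 1
+```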
+
+### test_get_str
+- Verifies string representation of conversation
+- Checks proper formatting of conversation as string
+
+### test_to_json
+- Verifies JSON serialization of conversation
+- Ensures proper JSON formatting and content preservation
+
+### test_to_dict
+- Verifies dictionary representation of conversation
+- Checks proper structure of conversation dictionary
+
+### test_to_yaml
+- Verifies YAML serialization of conversation
+- Ensures proper YAML formatting and content preservation
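+
+Taken together, the three serialization tests above can be pictured as a round-trip sanity check. The method names below (`to_dict`, `to_json`, `to_yaml`) are inferred from the test names and are not shown elsewhere in this diff, so treat this as an illustrative sketch rather than a confirmed API:
+
+```python
+import json
+
+from swarms.structs.conversation import Conversation
+
+conv = Conversation()
+conv.add("user", "Hello, world!")
+
+# Hypothetical accessors, named after the tests they correspond to.
+as_dict = conv.to_dict()  # assumed: plain Python structures
+as_json = conv.to_json()  # assumed: JSON string
+as_yaml = conv.to_yaml()  # assumed: YAML string
+
+# Assumption: the JSON form decodes back to the same content as the dict form.
+assert json.loads(as_json) == as_dict
+assert isinstance(as_yaml, str)
+```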
+
+### test_get_last_message_as_string
+- Verifies retrieval of last message as string
+- Checks proper formatting of last message
+
+### test_return_messages_as_list
+- Verifies list representation of messages
+- Ensures proper formatting of messages in list
+
+### test_return_messages_as_dictionary
+- Verifies dictionary representation of messages
+- Checks proper structure of message dictionaries
+
+### test_add_tool_output_to_agent
+- Verifies adding tool output to the conversation
+- Ensures proper handling of tool output data
+
+### test_get_final_message
+- Verifies retrieval of final message
+- Checks proper formatting of final message
+
+### test_get_final_message_content
+- Verifies retrieval of final message content
+- Ensures only the content is returned, without the role
+
+### test_return_all_except_first
+- Verifies retrieval of all messages except first
+- Checks proper exclusion of first message
+
+### test_return_all_except_first_string
+- Verifies string representation without first message
+- Ensures proper formatting of remaining messages
+
+### test_batch_add
+- Verifies batch addition of messages
+- Checks proper handling of multiple messages at once
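+
+`batch_add` is inferred from the test name and does not appear elsewhere in this diff; a hedged sketch of the behaviour this test likely covers:
+
+```python
+from swarms.structs.conversation import Conversation
+
+conv = Conversation()
+
+# Hypothetical batch form: add several messages in one call instead of looping.
+conv.batch_add(
+    [
+        {"role": "user", "content": "First message"},
+        {"role": "assistant", "content": "Second message"},
+    ]
+)
+
+# Assuming each entry lands as one stored message.
+assert len(conv.conversation_history) == 2
+```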
+
+### test_get_cache_stats
+- Verifies cache statistics retrieval
+- Ensures all cache metrics are present
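+
+The cache statistics surface is grounded in the removed `test_conversation_cache.py`: with caching enabled, `get_cache_stats()` returns hit/miss counters, token totals, and a hit rate.
+
+```python
+from swarms.structs.conversation import Conversation
+
+conv = Conversation(cache_enabled=True)
+conv.add("user", "This message should be cached")
+conv.add("user", "This message should be cached")  # repeated: expected cache hit
+
+stats = conv.get_cache_stats()
+# Keys observed in the removed cache test.
+for key in ("hits", "misses", "cached_tokens", "total_tokens", "hit_rate"):
+    assert key in stats
+```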
+
+### test_list_cached_conversations
+- Verifies listing of cached conversations
+- Checks proper retrieval of conversation names
+
+### test_clear
+- Verifies conversation clearing functionality
+- Ensures all messages are removed
+
+### test_save_and_load_json
+- Verifies saving and loading conversation to/from JSON
+- Ensures conversation state is preserved across save/load
\ No newline at end of file