{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Welcome to Swarms Docs Home","text":""},{"location":"#what-is-swarms","title":"What is Swarms?","text":"
Swarms is the first and most reliable multi-agent production-grade framework designed to orchestrate intelligent AI agents at scale. Built for enterprise applications, Swarms enables you to create sophisticated multi-agent systems that can handle complex tasks through collaboration, parallel processing, and intelligent task distribution.
"},{"location":"#key-capabilities","title":"Key Capabilities","text":"Swarms stands out as the most reliable multi-agent framework because it was built from the ground up for production environments. Unlike other frameworks that focus on research or simple demos, Swarms provides the infrastructure, tooling, and best practices needed to deploy multi-agent systems in real-world applications.
Whether you're building financial analysis systems, healthcare diagnostics, manufacturing optimization, or any other complex multi-agent application, Swarms provides the foundation you need to succeed.
"},{"location":"#swarms-installation","title":"Swarms Installation","text":"pip3 install swarms\n
"},{"location":"#update-swarms","title":"Update Swarms","text":"pip3 install -U swarms\n
"},{"location":"#get-started-building-production-grade-multi-agent-applications","title":"Get Started Building Production-Grade Multi-Agent Applications","text":""},{"location":"#onboarding","title":"Onboarding","text":"Section Links Installation Installation Quickstart Get Started Environment Setup Environment Configuration Environment Variables Environment Variables Swarms CLI CLI Documentation Agent Internal Mechanisms Agent Architecture Agent API Agent API Managing Prompts in Production Prompts Management Integrating External Agents External Agents Integration Creating Agents from YAML YAML Agent Creation Why You Need Swarms Why MultiAgent Collaboration Swarm Architectures Analysis Swarm Architectures Choosing the Right Swarm How to Choose Swarms Full API Reference API Reference AgentRearrange Docs AgentRearrange"},{"location":"#ecosystem","title":"Ecosystem","text":"Here you'll find references about the Swarms framework, marketplace, community, and more to enable you to build your multi-agent applications.
Section Links Swarms Python Framework Docs Framework Docs Swarms Cloud API Cloud API Swarms Marketplace API Marketplace API Swarms Memory Systems Memory Systems Available Models Models Overview Swarms Tools Tools Overview Example Applications Examples Swarms Corp Github Swarms Corp GitHub"},{"location":"#join-the-swarms-community","title":"Join the Swarms Community","text":"Platform Link Description \ud83d\udcda Documentation docs.swarms.world Official documentation and guides \ud83d\udcdd Blog Medium Latest updates and technical articles \ud83d\udcac Discord Join Discord Live chat and community support \ud83d\udc26 Twitter @kyegomez Latest news and announcements \ud83d\udc65 LinkedIn The Swarm Corporation Professional network and updates \ud83d\udcfa YouTube Swarms Channel Tutorials and demos \ud83c\udfab Events Sign up here Join our community events"},{"location":"#get-support","title":"Get Support","text":"Want to get in touch with the Swarms team? Open an issue on GitHub or reach out to us via email. We're here to help!
"},{"location":"docs_structure/","title":"Class/function","text":"Brief description \u2193
\u2193
"},{"location":"docs_structure/#overview","title":"Overview","text":"\u2193
"},{"location":"docs_structure/#architecture-mermaid-diagram","title":"Architecture (Mermaid diagram)","text":"\u2193
"},{"location":"docs_structure/#class-reference-constructor-methods","title":"Class Reference (Constructor + Methods)","text":"\u2193
"},{"location":"docs_structure/#examples","title":"Examples","text":"\u2193
"},{"location":"docs_structure/#conclusion","title":"Conclusion","text":"Benefits of class/structure, and more
"},{"location":"quickstart/","title":"Welcome to Swarms Docs Home","text":""},{"location":"quickstart/#what-is-swarms","title":"What is Swarms?","text":"Swarms is the first and most reliable multi-agent production-grade framework designed to orchestrate intelligent AI agents at scale. Built for enterprise applications, Swarms enables you to create sophisticated multi-agent systems that can handle complex tasks through collaboration, parallel processing, and intelligent task distribution.
"},{"location":"quickstart/#key-capabilities","title":"Key Capabilities","text":"Swarms stands out as the most reliable multi-agent framework because it was built from the ground up for production environments. Unlike other frameworks that focus on research or simple demos, Swarms provides the infrastructure, tooling, and best practices needed to deploy multi-agent systems in real-world applications.
Whether you're building financial analysis systems, healthcare diagnostics, manufacturing optimization, or any other complex multi-agent application, Swarms provides the foundation you need to succeed.
Get started learning Swarms with the following examples.
"},{"location":"quickstart/#install","title":"Install \ud83d\udcbb","text":"$ pip3 install -U swarms\n
"},{"location":"quickstart/#using-uv-recommended","title":"Using uv (Recommended)","text":"uv is a fast Python package installer and resolver, written in Rust.
# Install uv\n$ curl -LsSf https://astral.sh/uv/install.sh | sh\n\n# Install swarms using uv\n$ uv pip install swarms\n
"},{"location":"quickstart/#using-poetry","title":"Using poetry","text":"# Install poetry if you haven't already\n$ curl -sSL https://install.python-poetry.org | python3 -\n\n# Add swarms to your project\n$ poetry add swarms\n
"},{"location":"quickstart/#from-source","title":"From source","text":"# Clone the repository\n$ git clone https://github.com/kyegomez/swarms.git\n$ cd swarms\n\n# Install with pip\n$ pip install -e .\n
"},{"location":"quickstart/#environment-configuration","title":"Environment Configuration","text":"Learn more about the environment configuration here
OPENAI_API_KEY=\"\"\nWORKSPACE_DIR=\"agent_workspace\"\nANTHROPIC_API_KEY=\"\"\nGROQ_API_KEY=\"\"\n
"},{"location":"quickstart/#your-first-agent","title":"\ud83e\udd16 Your First Agent","text":"An Agent is the fundamental building block of a swarm\u2014an autonomous entity powered by an LLM + Tools + Memory. Learn more Here
from swarms import Agent\n\n# Initialize a new agent\nagent = Agent(\n model_name=\"gpt-4o-mini\", # Specify the LLM\n max_loops=1, # Set the number of interactions\n interactive=True, # Enable interactive mode for real-time feedback\n)\n\n# Run the agent with a task\nagent.run(\"What are the key benefits of using a multi-agent system?\")\n
"},{"location":"quickstart/#your-first-swarm-multi-agent-collaboration","title":"\ud83e\udd1d Your First Swarm: Multi-Agent Collaboration","text":"A Swarm consists of multiple agents working together. This simple example creates a two-agent workflow for researching and writing a blog post. Learn More About SequentialWorkflow
from swarms import Agent, SequentialWorkflow\n\n# Agent 1: The Researcher\nresearcher = Agent(\n agent_name=\"Researcher\",\n system_prompt=\"Your job is to research the provided topic and provide a detailed summary.\",\n model_name=\"gpt-4o-mini\",\n)\n\n# Agent 2: The Writer\nwriter = Agent(\n agent_name=\"Writer\",\n system_prompt=\"Your job is to take the research summary and write a beautiful, engaging blog post about it.\",\n model_name=\"gpt-4o-mini\",\n)\n\n# Create a sequential workflow where the researcher's output feeds into the writer's input\nworkflow = SequentialWorkflow(agents=[researcher, writer])\n\n# Run the workflow on a task\nfinal_post = workflow.run(\"The history and future of artificial intelligence\")\nprint(final_post)\n
"},{"location":"quickstart/#multi-agent-architectures-for-production-deployments","title":"\ud83c\udfd7\ufe0f Multi-Agent Architectures For Production Deployments","text":"swarms
provides a variety of powerful, pre-built multi-agent architectures enabling you to orchestrate agents in various ways. Choose the right structure for your specific problem to build efficient and reliable production systems.
AgentRearrange Defines complex, non-linear relationships (e.g., a -> b, c) between agents. Flexible and adaptive workflows, task distribution, dynamic routing. GraphWorkflow Orchestrates agents as nodes in a Directed Acyclic Graph (DAG). Complex projects with intricate dependencies, like software builds. MixtureOfAgents (MoA) Utilizes multiple expert agents in parallel and synthesizes their outputs. Complex problem-solving, achieving state-of-the-art performance through collaboration. GroupChat Agents collaborate and make decisions through a conversational interface. Real-time collaborative decision-making, negotiations, brainstorming. ForestSwarm Dynamically selects the most suitable agent or tree of agents for a given task. Task routing, optimizing for expertise, complex decision-making trees. SpreadSheetSwarm Manages thousands of agents concurrently, tracking tasks and outputs in a structured format. Massive-scale parallel operations, large-scale data generation and analysis. SwarmRouter Universal orchestrator that provides a single interface to run any type of swarm with dynamic selection. Simplifying complex workflows, switching between swarm strategies, unified multi-agent management."},{"location":"quickstart/#sequentialworkflow","title":"SequentialWorkflow","text":"A SequentialWorkflow
executes tasks in a strict order, forming a pipeline where each agent builds upon the work of the previous one. SequentialWorkflow
is ideal for processes that have clear, ordered steps. This ensures that tasks with dependencies are handled correctly.
from swarms import Agent, SequentialWorkflow\n\n# Initialize agents for a 3-step process\n# 1. Generate an idea\nidea_generator = Agent(agent_name=\"IdeaGenerator\", system_prompt=\"Generate a unique startup idea.\", model_name=\"gpt-4o-mini\")\n# 2. Validate the idea\nvalidator = Agent(agent_name=\"Validator\", system_prompt=\"Take this startup idea and analyze its market viability.\", model_name=\"gpt-4o-mini\")\n# 3. Create a pitch\npitch_creator = Agent(agent_name=\"PitchCreator\", system_prompt=\"Write a 3-sentence elevator pitch for this validated startup idea.\", model_name=\"gpt-4o-mini\")\n\n# Create the sequential workflow\nworkflow = SequentialWorkflow(agents=[idea_generator, validator, pitch_creator])\n\n# Run the workflow with an initial task for the first agent\nelevator_pitch = workflow.run(\"Generate and refine a startup idea.\")\nprint(elevator_pitch)\n
"},{"location":"quickstart/#concurrentworkflow-with-spreadsheetswarm","title":"ConcurrentWorkflow (with SpreadSheetSwarm
)","text":"A concurrent workflow runs multiple agents simultaneously. SpreadSheetSwarm
is a powerful implementation that can manage thousands of concurrent agents and log their outputs to a CSV file. Use this architecture for high-throughput tasks that can be performed in parallel, drastically reducing execution time.
from swarms import Agent, SpreadSheetSwarm\n\n# Define a list of tasks (e.g., social media posts to generate)\nplatforms = [\"Twitter\", \"LinkedIn\", \"Instagram\"]\n\n# Create an agent for each task\nagents = [\n Agent(\n agent_name=f\"{platform}-Marketer\",\n system_prompt=f\"Generate a real estate marketing post for {platform}.\",\n model_name=\"gpt-4o-mini\",\n )\n for platform in platforms\n]\n\n# Initialize the swarm to run these agents concurrently\nswarm = SpreadSheetSwarm(\n agents=agents,\n autosave_on=True,\n save_file_path=\"marketing_posts.csv\",\n)\n\n# Run the swarm with a single, shared task description\nproperty_description = \"A beautiful 3-bedroom house in sunny California.\"\nswarm.run(task=f\"Generate a post about: {property_description}\")\n# Check marketing_posts.csv for the results!\n
"},{"location":"quickstart/#agentrearrange","title":"AgentRearrange","text":"Inspired by einsum
, AgentRearrange
lets you define complex, non-linear relationships between agents using a simple string-based syntax. Learn more. This architecture is perfect for orchestrating dynamic workflows where agents might work in parallel, sequence, or a combination of both.
from swarms import Agent, AgentRearrange\n\n# Define agents\nresearcher = Agent(agent_name=\"researcher\", model_name=\"gpt-4o-mini\")\nwriter = Agent(agent_name=\"writer\", model_name=\"gpt-4o-mini\")\neditor = Agent(agent_name=\"editor\", model_name=\"gpt-4o-mini\")\n\n# Define a flow: researcher sends work to both writer and editor simultaneously\n# This is a one-to-many relationship\nflow = \"researcher -> writer, editor\"\n\n# Create the rearrangement system\nrearrange_system = AgentRearrange(\n agents=[researcher, writer, editor],\n flow=flow,\n)\n\n# Run the system\n# The researcher will generate content, and then both the writer and editor\n# will process that content in parallel.\noutputs = rearrange_system.run(\"Analyze the impact of AI on modern cinema.\")\nprint(outputs)\n
"},{"location":"quickstart/#graphworkflow","title":"GraphWorkflow","text":"GraphWorkflow
orchestrates tasks using a Directed Acyclic Graph (DAG), allowing you to manage complex dependencies where some tasks must wait for others to complete.
Essential for building sophisticated pipelines, like in software development or complex project management, where task order and dependencies are critical.
from swarms import Agent, GraphWorkflow, Node, Edge, NodeType\n\n# Define agents and a simple python function as nodes\ncode_generator = Agent(agent_name=\"CodeGenerator\", system_prompt=\"Write Python code for the given task.\", model_name=\"gpt-4o-mini\")\ncode_tester = Agent(agent_name=\"CodeTester\", system_prompt=\"Test the given Python code and find bugs.\", model_name=\"gpt-4o-mini\")\n\n# Create nodes for the graph\nnode1 = Node(id=\"generator\", agent=code_generator)\nnode2 = Node(id=\"tester\", agent=code_tester)\n\n# Create the graph and define the dependency\ngraph = GraphWorkflow()\ngraph.add_nodes([node1, node2])\ngraph.add_edge(Edge(source=\"generator\", target=\"tester\")) # Tester runs after generator\n\n# Set entry and end points\ngraph.set_entry_points([\"generator\"])\ngraph.set_end_points([\"tester\"])\n\n# Run the graph workflow\nresults = graph.run(\"Create a function that calculates the factorial of a number.\")\nprint(results)\n\n----\n\n### SwarmRouter: The Universal Swarm Orchestrator\n\nThe `SwarmRouter` simplifies building complex workflows by providing a single interface to run any type of swarm. Instead of importing and managing different swarm classes, you can dynamically select the one you need just by changing the `swarm_type` parameter. [Read the full documentation](https://docs.swarms.world/en/latest/swarms/structs/swarm_router/)\n\nThis makes your code cleaner and more flexible, allowing you to switch between different multi-agent strategies with ease. 
Here's a complete example that shows how to define agents and then use `SwarmRouter` to execute the same task using different collaborative strategies.\n\n```python\nfrom swarms import Agent\nfrom swarms.structs.swarm_router import SwarmRouter, SwarmType\n\n# Define a few generic agents\nwriter = Agent(agent_name=\"Writer\", system_prompt=\"You are a creative writer.\", model_name=\"gpt-4o-mini\")\neditor = Agent(agent_name=\"Editor\", system_prompt=\"You are an expert editor for stories.\", model_name=\"gpt-4o-mini\")\nreviewer = Agent(agent_name=\"Reviewer\", system_prompt=\"You are a final reviewer who gives a score.\", model_name=\"gpt-4o-mini\")\n\n# The agents and task will be the same for all examples\nagents = [writer, editor, reviewer]\ntask = \"Write a short story about a robot who discovers music.\"\n\n# --- Example 1: SequentialWorkflow ---\n# Agents run one after another in a chain: Writer -> Editor -> Reviewer.\nprint(\"Running a Sequential Workflow...\")\nsequential_router = SwarmRouter(swarm_type=SwarmType.SequentialWorkflow, agents=agents)\nsequential_output = sequential_router.run(task)\nprint(f\"Final Sequential Output:\\n{sequential_output}\\n\")\n\n# --- Example 2: ConcurrentWorkflow ---\n# All agents receive the same initial task and run at the same time.\nprint(\"Running a Concurrent Workflow...\")\nconcurrent_router = SwarmRouter(swarm_type=SwarmType.ConcurrentWorkflow, agents=agents)\nconcurrent_outputs = concurrent_router.run(task)\n# This returns a dictionary of each agent's output\nfor agent_name, output in concurrent_outputs.items():\n print(f\"Output from {agent_name}:\\n{output}\\n\")\n\n# --- Example 3: MixtureOfAgents ---\n# All agents run in parallel, and a special 'aggregator' agent synthesizes their outputs.\nprint(\"Running a Mixture of Agents Workflow...\")\naggregator = Agent(\n agent_name=\"Aggregator\",\n system_prompt=\"Combine the story, edits, and review into a final document.\",\n 
model_name=\"gpt-4o-mini\"\n)\nmoa_router = SwarmRouter(\n    swarm_type=SwarmType.MixtureOfAgents,\n    agents=agents,\n    aggregator_agent=aggregator, # MoA requires an aggregator\n)\naggregated_output = moa_router.run(task)\nprint(f\"Final Aggregated Output:\\n{aggregated_output}\\n\")\n```\n
The SwarmRouter
is a powerful tool for simplifying multi-agent orchestration. It provides a consistent and flexible way to deploy different collaborative strategies, allowing you to build more sophisticated applications with less code.
The MixtureOfAgents
architecture processes tasks by feeding them to multiple \"expert\" agents in parallel. Their diverse outputs are then synthesized by an aggregator agent to produce a final, high-quality result. Learn more here
from swarms import Agent, MixtureOfAgents\n\n# Define expert agents\nfinancial_analyst = Agent(agent_name=\"FinancialAnalyst\", system_prompt=\"Analyze financial data.\", model_name=\"gpt-4o-mini\")\nmarket_analyst = Agent(agent_name=\"MarketAnalyst\", system_prompt=\"Analyze market trends.\", model_name=\"gpt-4o-mini\")\nrisk_analyst = Agent(agent_name=\"RiskAnalyst\", system_prompt=\"Analyze investment risks.\", model_name=\"gpt-4o-mini\")\n\n# Define the aggregator agent\naggregator = Agent(\n agent_name=\"InvestmentAdvisor\",\n system_prompt=\"Synthesize the financial, market, and risk analyses to provide a final investment recommendation.\",\n model_name=\"gpt-4o-mini\"\n)\n\n# Create the MoA swarm\nmoa_swarm = MixtureOfAgents(\n agents=[financial_analyst, market_analyst, risk_analyst],\n aggregator_agent=aggregator,\n)\n\n# Run the swarm\nrecommendation = moa_swarm.run(\"Should we invest in NVIDIA stock right now?\")\nprint(recommendation)\n
"},{"location":"quickstart/#groupchat","title":"GroupChat","text":"GroupChat
creates a conversational environment where multiple agents can interact, discuss, and collaboratively solve a problem. You can define the speaking order or let it be determined dynamically. This architecture is ideal for tasks that benefit from debate and multi-perspective reasoning, such as contract negotiation, brainstorming, or complex decision-making.
from swarms import Agent, GroupChat\n\n# Define agents for a debate\ntech_optimist = Agent(agent_name=\"TechOptimist\", system_prompt=\"Argue for the benefits of AI in society.\", model_name=\"gpt-4o-mini\")\ntech_critic = Agent(agent_name=\"TechCritic\", system_prompt=\"Argue against the unchecked advancement of AI.\", model_name=\"gpt-4o-mini\")\n\n# Create the group chat\nchat = GroupChat(\n agents=[tech_optimist, tech_critic],\n max_loops=4, # Limit the number of turns in the conversation\n)\n\n# Run the chat with an initial topic\nconversation_history = chat.run(\n \"Let's discuss the societal impact of artificial intelligence.\"\n)\n\n# Print the full conversation\nfor message in conversation_history:\n print(f\"[{message['agent_name']}]: {message['content']}\")\n
"},{"location":"applications/azure_openai/","title":"Deploying Azure OpenAI in Production: A Comprehensive Guide","text":"In today's fast-paced digital landscape, leveraging cutting-edge technologies has become essential for businesses to stay competitive and provide exceptional services to their customers. One such technology that has gained significant traction is Azure OpenAI, a powerful platform that allows developers to integrate advanced natural language processing (NLP) capabilities into their applications. Whether you're building a chatbot, a content generation system, or any other AI-powered solution, Azure OpenAI offers a robust and scalable solution for production-grade deployment.
In this comprehensive guide, we'll walk through the process of setting up and deploying Azure OpenAI in a production environment. We'll dive deep into the code, provide clear explanations, and share best practices to ensure a smooth and successful implementation.
"},{"location":"applications/azure_openai/#prerequisites","title":"Prerequisites:","text":"Before we begin, it's essential to have the following prerequisites in place:
python-dotenv
and swarms
. To kick things off, we'll set up our development environment and install the necessary dependencies.
venv
or any other virtual environment management tool of your choice.python -m venv myenv\n
source myenv/bin/activate # On Windows, use `myenv\\Scripts\\activate`\n
python-dotenv
and swarms
packages using pip.pip install python-dotenv swarms\n
.env
File: In the root directory of your project, create a new file called .env
. This file will store your Azure OpenAI credentials and configuration settings.AZURE_OPENAI_ENDPOINT=<your_azure_openai_endpoint>\nAZURE_OPENAI_DEPLOYMENT=<your_azure_openai_deployment_name>\nOPENAI_API_VERSION=<your_openai_api_version>\nAZURE_OPENAI_API_KEY=<your_azure_openai_api_key>\nAZURE_OPENAI_AD_TOKEN=<your_azure_openai_ad_token>\n
Replace the placeholders with your actual Azure OpenAI credentials and configuration settings.
"},{"location":"applications/azure_openai/#connecting-to-azure-openai","title":"Connecting to Azure OpenAI:","text":"Now that we've set up our environment, let's dive into the code that connects to Azure OpenAI and interacts with the language model.
import os\nfrom dotenv import load_dotenv\nfrom swarms import AzureOpenAI\n\n# Load the environment variables\nload_dotenv()\n\n# Create an instance of the AzureOpenAI class\nmodel = AzureOpenAI(\n azure_endpoint=os.getenv(\"AZURE_OPENAI_ENDPOINT\"),\n deployment_name=os.getenv(\"AZURE_OPENAI_DEPLOYMENT\"),\n openai_api_version=os.getenv(\"OPENAI_API_VERSION\"),\n openai_api_key=os.getenv(\"AZURE_OPENAI_API_KEY\"),\n azure_ad_token=os.getenv(\"AZURE_OPENAI_AD_TOKEN\")\n)\n
"},{"location":"applications/azure_openai/#lets-break-down-this-code","title":"Let's break down this code:","text":"Import Statements: We import the necessary modules, including os
for interacting with the operating system, load_dotenv
from python-dotenv
to load environment variables, and AzureOpenAI
from swarms
to interact with the Azure OpenAI service.
Load Environment Variables: We use load_dotenv()
to load the environment variables stored in the .env
file we created earlier.
Create AzureOpenAI Instance: We create an instance of the AzureOpenAI
class by passing in the required configuration parameters:
azure_endpoint
: The endpoint URL for your Azure OpenAI resource.deployment_name
: The name of the deployment you want to use.openai_api_version
: The version of the OpenAI API you want to use.openai_api_key
: Your Azure OpenAI API key, which authenticates your requests.azure_ad_token
: An optional Azure Active Directory (AAD) token for additional security.Querying the Language Model: With our connection to Azure OpenAI established, we can now query the language model and receive responses.
# Define the prompt\nprompt = \"Analyze this load document, assess it for any risks, and create a table in markdown format.\"\n\n# Generate a response\nresponse = model(prompt)\nprint(response)\n
"},{"location":"applications/azure_openai/#heres-whats-happening","title":"Here's what's happening:","text":"Define the Prompt: We define a prompt, which is the input text or question we want to feed into the language model.
Generate a Response: We call the model
instance with the prompt
as an argument. This triggers the Azure OpenAI service to process the prompt and generate a response.
Print the Response: Finally, we print the response received from the language model.
Running the Code: To run the code, save it in a Python file (e.g., main.py
) and execute it from the command line:
python main.py\n
"},{"location":"applications/azure_openai/#best-practices-for-production-deployment","title":"Best Practices for Production Deployment:","text":"While the provided code serves as a basic example, there are several best practices to consider when deploying Azure OpenAI in a production environment:
Secure Credentials Management: Instead of storing sensitive credentials like API keys in your codebase, consider using secure storage solutions like Azure Key Vault or environment variables managed by your cloud provider.
Error Handling and Retries: Implement robust error handling and retry mechanisms to handle potential failures or rate-limiting scenarios.
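As a minimal sketch of such a retry mechanism, the helper below wraps any callable with exponential backoff and jitter. The function and exception here are illustrative stand-ins, not part of the swarms or Azure OpenAI APIs; in production you would catch your client library's specific rate-limit and transient-error exceptions.

```python
import time
import random

def retry_with_backoff(fn, max_attempts=5, base_delay=1.0, max_delay=30.0):
    """Call `fn`, retrying on failure with exponential backoff plus jitter."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of retries; surface the error to the caller
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(delay + random.uniform(0, 0.1))

# Hypothetical flaky call that fails twice (e.g., HTTP 429), then succeeds.
attempts = {"n": 0}

def flaky_call():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("transient error")
    return "ok"

result = retry_with_backoff(flaky_call, base_delay=0.01)
```

The same wrapper can be applied around the `model(prompt)` call shown earlier.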
Logging and Monitoring: Implement comprehensive logging and monitoring strategies to track application performance, identify issues, and gather insights for optimization.
Scalability and Load Testing: Conduct load testing to ensure your application can handle anticipated traffic volumes and scale appropriately based on demand.
Caching and Optimization: Explore caching strategies and performance optimizations to improve response times and reduce the load on the Azure OpenAI service.
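One hedged example of such a caching strategy: for deterministic, repeated prompts, an in-process cache via functools.lru_cache avoids redundant calls to the service. A production deployment would more likely use a shared store such as Redis keyed on a normalized prompt; `fake_model` below is a stand-in for the real Azure OpenAI call.

```python
from functools import lru_cache

calls = {"n": 0}  # counts real (uncached) model invocations

def fake_model(prompt: str) -> str:
    # Stand-in for the actual Azure OpenAI request.
    return f"response to: {prompt}"

@lru_cache(maxsize=1024)
def cached_completion(prompt: str) -> str:
    calls["n"] += 1
    return fake_model(prompt)

first = cached_completion("Summarize the contract.")
second = cached_completion("Summarize the contract.")  # served from cache
```

Note that caching only makes sense when identical prompts should yield identical answers; skip it for conversational or time-sensitive queries.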
Integration with Other Services: Depending on your use case, you may need to integrate Azure OpenAI with other Azure services or third-party tools for tasks like data processing, storage, or analysis.
Compliance and Security: Ensure your application adheres to relevant compliance standards and security best practices, especially when handling sensitive data.
Azure OpenAI is a powerful platform that enables developers to integrate advanced natural language processing capabilities into their applications. By following the steps outlined in this guide, you can set up a production-ready environment for deploying Azure OpenAI and start leveraging its capabilities in your projects.
Remember, this guide serves as a starting point, and there are numerous additional features and capabilities within Azure OpenAI that you can explore to enhance your applications further. As with any production deployment, it's crucial to follow best practices, conduct thorough testing, and implement robust monitoring and security measures.
With the right approach and careful planning, you can successfully deploy Azure OpenAI in a production environment and unlock the power of cutting-edge language models to drive innovation and provide exceptional experiences for your users.
"},{"location":"applications/blog/","title":"The Future of Manufacturing: Leveraging Autonomous LLM Agents for Cost Reduction and Revenue Growth","text":""},{"location":"applications/blog/#table-of-contents","title":"Table of Contents","text":"In today's rapidly evolving manufacturing landscape, executives and CEOs face unprecedented challenges and opportunities. The key to maintaining a competitive edge lies in embracing cutting-edge technologies that can revolutionize operations, reduce costs, and drive revenue growth. One such transformative technology is the integration of autonomous Large Language Model (LLM) agents equipped with Retrieval-Augmented Generation (RAG) embedding databases, function calling capabilities, and access to external tools.
This comprehensive blog post aims to explore how these advanced AI systems can be leveraged to address the most pressing issues in manufacturing enterprises. We will delve into the intricacies of these technologies, provide concrete examples of their applications, and offer insights into implementation strategies. By the end of this article, you will have a clear understanding of how autonomous LLM agents can become a cornerstone of your manufacturing business's digital transformation journey.
"},{"location":"applications/blog/#2-understanding-autonomous-llm-agents","title":"2. Understanding Autonomous LLM Agents","text":"Autonomous LLM agents represent the cutting edge of artificial intelligence in the manufacturing sector. These sophisticated systems are built upon large language models, which are neural networks trained on vast amounts of text data. What sets them apart is their ability to operate autonomously, making decisions and taking actions with minimal human intervention.
Key features of autonomous LLM agents include:
Natural Language Processing (NLP): They can understand and generate human-like text, enabling seamless communication with employees across all levels of the organization.
Contextual Understanding: These agents can grasp complex scenarios and nuanced information, making them ideal for handling intricate manufacturing processes.
Adaptive Learning: Through continuous interaction and feedback, they can improve their performance over time, becoming more efficient and accurate.
Multi-modal Input Processing: Advanced agents can process not only text but also images, audio, and sensor data, providing a holistic view of manufacturing operations.
Task Automation: They can automate a wide range of tasks, from data analysis to decision-making, freeing up human resources for more strategic activities.
The integration of autonomous LLM agents in manufacturing environments opens up new possibilities for optimization, innovation, and growth. As we explore their applications throughout this blog, it's crucial to understand that these agents are not meant to replace human workers but to augment their capabilities and drive overall productivity.
"},{"location":"applications/blog/#3-rag-embedding-databases-the-knowledge-foundation","title":"3. RAG Embedding Databases: The Knowledge Foundation","text":"At the heart of effective autonomous LLM agents lies the Retrieval-Augmented Generation (RAG) embedding database. This technology serves as the knowledge foundation, enabling agents to access and utilize vast amounts of relevant information quickly and accurately.
RAG embedding databases work by:
Vectorizing Information: Converting textual data into high-dimensional vectors that capture semantic meaning.
Efficient Storage: Organizing these vectors in a way that allows for rapid retrieval of relevant information.
Contextual Retrieval: Enabling the agent to pull relevant information based on the current context or query.
Dynamic Updates: Allowing for continuous updates to the knowledge base, ensuring the agent always has access to the most current information.
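The retrieval step described above can be sketched in a few lines. This toy example uses tiny hand-made vectors in place of a real embedding model and vector store, purely to make the cosine-similarity mechanics visible; the document names are illustrative.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Document snippets paired with (pretend) embedding vectors.
knowledge_base = [
    ("Press 4 maintenance manual", [0.9, 0.1, 0.0]),
    ("Q3 supplier scorecards",     [0.1, 0.9, 0.1]),
    ("ISO safety procedures",      [0.0, 0.2, 0.9]),
]

def retrieve(query_vec, k=1):
    """Return the k documents whose embeddings are closest to the query."""
    ranked = sorted(knowledge_base,
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [doc for doc, _ in ranked[:k]]

# A query "about maintenance" embeds near the first document.
top = retrieve([0.8, 0.2, 0.1])
```

In a real RAG pipeline the retrieved snippets are then prepended to the agent's prompt as context before generation.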
In the manufacturing context, RAG embedding databases can store a wealth of information, including:
By leveraging RAG embedding databases, autonomous LLM agents can make informed decisions based on a comprehensive understanding of the manufacturing ecosystem. This leads to more accurate predictions, better problem-solving capabilities, and the ability to generate innovative solutions.
For example, when faced with a production bottleneck, an agent can quickly retrieve relevant historical data, equipment specifications, and best practices to propose an optimal solution. This rapid access to contextual information significantly reduces decision-making time and improves the quality of outcomes.
"},{"location":"applications/blog/#4-function-calling-and-external-tools-enhancing-capabilities","title":"4. Function Calling and External Tools: Enhancing Capabilities","text":"The true power of autonomous LLM agents in manufacturing environments is realized through their ability to interact with external systems and tools. This is achieved through function calling and integration with specialized external tools.
Function calling allows the agent to:
Execute Specific Tasks: Trigger predefined functions to perform complex operations or calculations.
Interact with Databases: Query and update various databases within the manufacturing ecosystem.
Control Equipment: Send commands to machinery or robotic systems on the production floor.
Generate Reports: Automatically compile and format data into meaningful reports for different stakeholders.
A wide range of specialized external tools can be integrated in this way.
By combining the cognitive abilities of LLM agents with the specialized functionalities of external tools, manufacturing enterprises can create a powerful ecosystem that drives efficiency and innovation.
For instance, an autonomous agent could chain several of these capabilities together: querying a database, adjusting equipment parameters, and compiling a report in a single workflow.
This level of integration and automation can lead to significant improvements in operational efficiency, cost reduction, and overall productivity.
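As a deliberately simplified sketch of the function-calling pattern, the snippet below dispatches a JSON function call (the shape many function-calling LLM APIs emit) to a registry of plain Python functions. The tool names and inventory data are hypothetical.

```python
import json

# Hypothetical tools the agent is allowed to invoke.
def query_inventory(part_id):
    stock = {"bearing-204": 120, "gasket-77": 8}    # stand-in for a real database
    return stock.get(part_id, 0)

def schedule_maintenance(machine, hours_from_now):
    return f"Maintenance for {machine} scheduled in {hours_from_now}h"

TOOLS = {
    "query_inventory": query_inventory,
    "schedule_maintenance": schedule_maintenance,
}

def dispatch(call_json):
    """Execute a function call the LLM emitted as a JSON payload."""
    call = json.loads(call_json)
    fn = TOOLS[call["name"]]                         # look up the registered tool
    return fn(**call["arguments"])                   # run it with the model's arguments

# A function-calling model would emit a payload like this:
print(dispatch('{"name": "query_inventory", "arguments": {"part_id": "gasket-77"}}'))
```

The key design point is that the model never executes anything itself; it only names a tool and its arguments, and the surrounding runtime validates and runs the call.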
"},{"location":"applications/blog/#5-cost-reduction-strategies","title":"5. Cost Reduction Strategies","text":"One of the primary benefits of implementing autonomous LLM agents in manufacturing is the potential for substantial cost reductions across various aspects of operations. Let's explore some key areas where these agents can drive down expenses:
"},{"location":"applications/blog/#51-optimizing-supply-chain-management","title":"5.1. Optimizing Supply Chain Management","text":"Autonomous LLM agents can revolutionize supply chain management by:
Predictive Inventory Management: Analyzing historical data, market trends, and production schedules to optimize inventory levels, reducing carrying costs and minimizing stockouts.
Supplier Selection and Negotiation: Evaluating supplier performance, market conditions, and contract terms to recommend the most cost-effective suppliers and negotiate better deals.
Logistics Optimization: Analyzing transportation routes, warehouse locations, and delivery schedules to minimize logistics costs and improve delivery times.
Example: A large automotive manufacturer implemented an autonomous LLM agent to optimize its global supply chain. The agent analyzed data from multiple sources, including production schedules, supplier performance metrics, and global shipping trends. By optimizing inventory levels and renegotiating supplier contracts, the company reduced supply chain costs by 15% in the first year, resulting in savings of over $100 million.
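The predictive inventory idea above can be illustrated with a classic reorder-point calculation. This is a minimal sketch: the demand history, lead time, and service factor are all hypothetical, and an agent in production would estimate them from live data rather than hard-code them.

```python
import statistics

# Hypothetical weekly demand history for one part (units).
demand_history = [120, 135, 128, 140, 132, 125, 138, 130]

lead_time_weeks = 2        # assumed supplier lead time
service_factor = 1.65      # z-score for roughly a 95% service level

avg_demand = statistics.mean(demand_history)
demand_std = statistics.stdev(demand_history)

# Reorder point = expected demand over the lead time + safety stock.
safety_stock = service_factor * demand_std * lead_time_weeks ** 0.5
reorder_point = avg_demand * lead_time_weeks + safety_stock

print(round(reorder_point))
```

Ordering whenever stock falls to this level balances carrying cost against stockout risk; an LLM agent adds value by re-estimating the inputs continuously as conditions change.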
"},{"location":"applications/blog/#52-enhancing-quality-control","title":"5.2. Enhancing Quality Control","text":"Quality control is a critical aspect of manufacturing that directly impacts costs. Autonomous LLM agents can significantly improve quality control processes by:
Real-time Defect Detection: Integrating with computer vision systems to identify and classify defects in real-time, reducing waste and rework.
Root Cause Analysis: Analyzing production data to identify the root causes of quality issues and recommending corrective actions.
Predictive Quality Management: Leveraging historical data and machine learning models to predict potential quality issues before they occur.
Example: A semiconductor manufacturer deployed an autonomous LLM agent to enhance its quality control processes. The agent analyzed data from multiple sensors on the production line, historical quality records, and equipment maintenance logs. By identifying subtle patterns that led to defects, the agent helped reduce scrap rates by 30% and improved overall yield by 5%, resulting in annual savings of $50 million.
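As a deliberately simple stand-in for the statistical side of defect detection, the sketch below flags sensor readings that drift far from a learned baseline. The readings and threshold are hypothetical; a real deployment would pair statistics like this with computer-vision models and richer features.

```python
import statistics

# Hypothetical baseline sensor readings from a stable production run (e.g. temperature).
baseline = [248, 250, 249, 251, 250, 249, 250, 248, 251, 250]
mean = statistics.mean(baseline)
std = statistics.stdev(baseline)

def flag_defect_risk(reading, threshold=3.0):
    """Flag a reading whose z-score against the baseline exceeds the threshold."""
    return abs(reading - mean) / std > threshold

print(flag_defect_risk(250))   # a nominal reading
print(flag_defect_risk(262))   # far outside the baseline band
```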
"},{"location":"applications/blog/#53-streamlining-maintenance-and-repairs","title":"5.3. Streamlining Maintenance and Repairs","text":"Effective maintenance is crucial for minimizing downtime and extending the lifespan of expensive manufacturing equipment. Autonomous LLM agents can optimize maintenance processes by:
Predictive Maintenance: Analyzing equipment sensor data, maintenance history, and production schedules to predict when maintenance is needed, reducing unplanned downtime.
Maintenance Scheduling Optimization: Balancing maintenance needs with production schedules to minimize disruptions and maximize equipment availability.
Repair Knowledge Management: Creating and maintaining a comprehensive knowledge base of repair procedures, making it easier for technicians to quickly address issues.
Example: A paper mill implemented an autonomous LLM agent to manage its maintenance operations. The agent analyzed vibration data from critical equipment, historical maintenance records, and production schedules. By implementing a predictive maintenance strategy, the mill reduced unplanned downtime by 40% and extended the lifespan of key equipment by 25%, resulting in annual savings of $15 million in maintenance costs and lost production time.
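The predictive maintenance pattern reduces, at its simplest, to watching a degradation signal and acting before it crosses a limit. The vibration readings and limit below are hypothetical; real systems use learned failure models over many sensors.

```python
# Hypothetical daily vibration RMS readings (mm/s) from a bearing sensor.
readings = [2.1, 2.2, 2.1, 2.3, 2.4, 2.6, 2.9, 3.3, 3.8, 4.4]

def needs_maintenance(series, window=3, limit=3.5):
    """Trigger maintenance when the recent rolling average crosses a limit."""
    recent = series[-window:]
    return sum(recent) / len(recent) > limit

print(needs_maintenance(readings[:6]))   # early in the trend: still healthy
print(needs_maintenance(readings))       # degradation detected
```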
"},{"location":"applications/blog/#54-improving-energy-efficiency","title":"5.4. Improving Energy Efficiency","text":"Energy consumption is a significant cost factor in manufacturing. Autonomous LLM agents can help reduce energy costs by:
Real-time Energy Monitoring: Analyzing energy consumption data across the facility to identify inefficiencies and anomalies.
Process Optimization for Energy Efficiency: Recommending changes to production processes to reduce energy consumption without impacting output.
Demand Response Management: Integrating with smart grid systems to optimize energy usage based on variable electricity prices and demand.
Example: A large chemical manufacturing plant deployed an autonomous LLM agent to optimize its energy consumption. The agent analyzed data from thousands of sensors across the facility, weather forecasts, and electricity price fluctuations. By optimizing process parameters and scheduling energy-intensive operations during off-peak hours, the plant reduced its energy costs by 18%, saving $10 million annually.
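The demand-response idea above comes down to simple arithmetic: shift flexible, energy-intensive jobs to the cheapest hours. The prices and job size here are invented for illustration.

```python
# Hypothetical hourly electricity prices ($/kWh) for the flexible scheduling window.
prices = {8: 0.18, 12: 0.22, 18: 0.25, 23: 0.09, 3: 0.07}   # hour -> price
job_kwh = 5000                                               # an energy-intensive batch job

# Demand-response scheduling: run the job at the cheapest available hour.
best_hour = min(prices, key=prices.get)
baseline_cost = job_kwh * prices[12]          # naive midday run
optimized_cost = job_kwh * prices[best_hour]

print(best_hour, round(baseline_cost - optimized_cost, 2))
```

A real agent would solve this with demand forecasts and process constraints rather than a fixed price table, but the objective is the same.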
"},{"location":"applications/blog/#6-revenue-growth-opportunities","title":"6. Revenue Growth Opportunities","text":"While cost reduction is crucial, autonomous LLM agents also present significant opportunities for revenue growth in manufacturing enterprises. Let's explore how these advanced AI systems can drive top-line growth:
"},{"location":"applications/blog/#61-product-innovation-and-development","title":"6.1. Product Innovation and Development","text":"Autonomous LLM agents can accelerate and enhance the product innovation process by:
Market Trend Analysis: Analyzing vast amounts of market data, customer feedback, and industry reports to identify emerging trends and unmet needs.
Design Optimization: Leveraging generative design techniques and historical performance data to suggest optimal product designs that balance functionality, manufacturability, and cost.
Rapid Prototyping Assistance: Guiding engineers through the prototyping process, suggesting materials and manufacturing techniques based on design requirements and cost constraints.
Example: A consumer electronics manufacturer utilized an autonomous LLM agent to enhance its product development process. The agent analyzed social media trends, customer support tickets, and competitor product features to identify key areas for innovation. By suggesting novel features and optimizing designs for manufacturability, the company reduced time-to-market for new products by 30% and increased the success rate of new product launches by 25%, resulting in a 15% increase in annual revenue.
"},{"location":"applications/blog/#62-personalized-customer-experiences","title":"6.2. Personalized Customer Experiences","text":"In the age of mass customization, providing personalized experiences can significantly boost customer satisfaction and revenue. Autonomous LLM agents can facilitate this by:
Customer Preference Analysis: Analyzing historical purchase data, customer interactions, and market trends to predict individual customer preferences.
Dynamic Product Configuration: Enabling real-time product customization based on customer inputs and preferences, while ensuring manufacturability.
Personalized Marketing and Sales Support: Generating tailored marketing content and sales recommendations for each customer or market segment.
Example: A high-end furniture manufacturer implemented an autonomous LLM agent to power its online customization platform. The agent analyzed customer behavior, design trends, and production capabilities to offer personalized product recommendations and customization options. This led to a 40% increase in online sales and a 20% increase in average order value, driving significant revenue growth.
"},{"location":"applications/blog/#63-market-analysis-and-trend-prediction","title":"6.3. Market Analysis and Trend Prediction","text":"Staying ahead of market trends is crucial for maintaining a competitive edge. Autonomous LLM agents can provide valuable insights by:
Competitive Intelligence: Analyzing competitor activities, product launches, and market positioning to identify threats and opportunities.
Demand Forecasting: Combining historical sales data, economic indicators, and market trends to predict future demand more accurately.
Emerging Market Identification: Analyzing global economic data, demographic trends, and industry reports to identify promising new markets for expansion.
Example: A global automotive parts manufacturer employed an autonomous LLM agent to enhance its market intelligence capabilities. The agent analyzed data from industry reports, social media, patent filings, and economic indicators to predict the growth of electric vehicle adoption in different regions. This insight allowed the company to strategically invest in EV component manufacturing, resulting in a 30% year-over-year growth in this high-margin segment.
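Demand forecasting in practice blends many indicators, but the core trend-extrapolation step can be shown with a least-squares line over made-up quarterly sales figures:

```python
# Hypothetical quarterly unit sales for a growing product line.
sales = [1000, 1150, 1300, 1450, 1600, 1750]

def linear_forecast(series, steps=1):
    """Fit a least-squares trend line and extrapolate `steps` periods ahead."""
    n = len(series)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(series) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, series))
             / sum((x - x_mean) ** 2 for x in xs))
    intercept = y_mean - slope * x_mean
    return intercept + slope * (n - 1 + steps)

print(linear_forecast(sales))
```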
"},{"location":"applications/blog/#64-optimizing-pricing-strategies","title":"6.4. Optimizing Pricing Strategies","text":"Pricing is a critical lever for revenue growth. Autonomous LLM agents can optimize pricing strategies by:
Dynamic Pricing Models: Analyzing market conditions, competitor pricing, and demand fluctuations to suggest optimal pricing in real-time.
Value-based Pricing Analysis: Assessing customer perceived value through sentiment analysis and willingness-to-pay studies to maximize revenue.
Bundle and Discount Optimization: Recommending product bundles and discount structures that maximize overall revenue and profitability.
Example: An industrial equipment manufacturer implemented an autonomous LLM agent to optimize its pricing strategy. The agent analyzed historical sales data, competitor pricing, economic indicators, and customer sentiment to recommend dynamic pricing models for different product lines and markets. This resulted in a 10% increase in profit margins and a 7% boost in overall revenue within the first year of implementation.
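At its core, dynamic pricing is an optimization over a demand model. The sketch below grid-searches the revenue-maximizing price over a hypothetical linear demand curve; real systems estimate the demand function from data and add cost and competitive constraints.

```python
# Hypothetical linear demand curve: units sold falls as price rises.
def demand(price):
    return max(0, 1000 - 7 * price)

# Grid-search the revenue-maximizing price over candidate price points ($40..$90).
candidates = [40 + 5 * i for i in range(11)]
best_price = max(candidates, key=lambda p: p * demand(p))

print(best_price, best_price * demand(best_price))
```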
"},{"location":"applications/blog/#7-implementation-strategies","title":"7. Implementation Strategies","text":"Successfully implementing autonomous LLM agents in a manufacturing environment requires a strategic approach. Here are key steps and considerations for executives and CEOs:
Define clear objectives and identify key performance indicators (KPIs) to measure success.
Conduct a Comprehensive Readiness Assessment: identify potential integration points with existing systems and processes.
Build a Cross-functional Implementation Team: consider partnering with external AI and manufacturing technology experts.
Develop a Phased Implementation Plan: scale successful pilots across the organization.
Invest in Data Infrastructure and Quality: implement data cleaning and standardization processes.
Choose the Right LLM and RAG Technologies: select RAG embedding databases that can efficiently handle the scale and complexity of manufacturing data.
Develop a Robust Integration Strategy: ensure proper API development and management for connecting with external tools and databases.
Prioritize Security and Compliance: ensure compliance with industry regulations and data privacy laws.
Invest in Change Management and Training: communicate the benefits and address concerns about AI implementation.
Establish Governance and Oversight.
Plan for Continuous Improvement.
Example: A leading automotive manufacturer implemented autonomous LLM agents across its global operations using a phased approach. They started with a pilot project in predictive maintenance at a single plant, which reduced downtime by 25%. Building on this success, they expanded to supply chain optimization and quality control. Within three years, the company had deployed AI agents across all major operations, resulting in a 12% reduction in overall production costs and a 9% increase in productivity.
"},{"location":"applications/blog/#8-overcoming-challenges-and-risks","title":"8. Overcoming Challenges and Risks","text":"While the benefits of autonomous LLM agents in manufacturing are substantial, there are several challenges and risks that executives must address:
"},{"location":"applications/blog/#data-quality-and-availability","title":"Data Quality and Availability","text":"Challenge: Manufacturing environments often have siloed, inconsistent, or incomplete data, which can hinder the effectiveness of AI systems.
Solution: - Invest in data infrastructure and standardization across the organization. - Implement data governance policies to ensure consistent data collection and management. - Use data augmentation techniques to address gaps in historical data.
"},{"location":"applications/blog/#integration-with-legacy-systems","title":"Integration with Legacy Systems","text":"Challenge: Many manufacturing facilities rely on legacy systems that may not easily integrate with modern AI technologies.
Solution: - Develop custom APIs and middleware to facilitate communication between legacy systems and AI agents. - Consider a gradual modernization strategy, replacing legacy systems over time. - Use edge computing devices to bridge the gap between old equipment and new AI systems.
"},{"location":"applications/blog/#workforce-adaptation-and-resistance","title":"Workforce Adaptation and Resistance","text":"Challenge: Employees may resist AI implementation due to fear of job displacement or lack of understanding.
Solution: - Emphasize that AI is a tool to augment human capabilities, not replace workers. - Provide comprehensive training programs to upskill employees. - Involve workers in the AI implementation process to gain buy-in and valuable insights.
"},{"location":"applications/blog/#ethical-considerations-and-bias","title":"Ethical Considerations and Bias","text":"Challenge: AI systems may inadvertently perpetuate biases present in historical data or decision-making processes.
Solution: - Implement rigorous testing for bias in AI models and decisions. - Establish an ethics committee to oversee AI implementations. - Regularly audit AI systems for fairness and unintended consequences.
"},{"location":"applications/blog/#security-and-intellectual-property-protection","title":"Security and Intellectual Property Protection","text":"Challenge: AI systems may be vulnerable to cyber attacks or could potentially expose sensitive manufacturing processes.
Solution: - Implement robust cybersecurity measures, including encryption and access controls. - Develop clear policies on data handling and AI model ownership. - Regularly conduct security audits and penetration testing.
Example: A pharmaceutical manufacturer faced challenges integrating AI agents with its highly regulated production processes. They addressed this by creating a cross-functional team of IT specialists, process engineers, and compliance officers. This team developed a custom integration layer that allowed AI agents to interact with existing systems while maintaining regulatory compliance. They also implemented a rigorous change management process, which included extensive training and a phased rollout. As a result, they successfully deployed AI agents that optimized production scheduling and quality control, leading to a 15% increase in throughput and a 30% reduction in quality-related issues.
"},{"location":"applications/blog/#9-case-studies","title":"9. Case Studies","text":"To illustrate the transformative potential of autonomous LLM agents in manufacturing, let's examine several real-world case studies:
"},{"location":"applications/blog/#case-study-1-global-electronics-manufacturer","title":"Case Study 1: Global Electronics Manufacturer","text":"Challenge: A leading electronics manufacturer was struggling with supply chain disruptions and rising production costs.
Solution: They implemented an autonomous LLM agent integrated with their supply chain management system and production planning tools.
Results: - 22% reduction in inventory carrying costs - 18% improvement in on-time deliveries - 15% decrease in production lead times - $200 million annual cost savings
Key Factors for Success: - Comprehensive integration with existing systems - Real-time data processing capabilities - Continuous learning and optimization algorithms
"},{"location":"applications/blog/#case-study-2-automotive-parts-supplier","title":"Case Study 2: Automotive Parts Supplier","text":"Challenge: An automotive parts supplier needed to improve quality control and reduce warranty claims.
Solution: They deployed an AI-powered quality control system using computer vision and an autonomous LLM agent for defect analysis and prediction.
Results: - 40% reduction in defect rates - 60% decrease in warranty claims - 25% improvement in overall equipment effectiveness (OEE) - $75 million annual savings in quality-related costs
Key Factors for Success: - High-quality image data collection system - Integration of domain expertise into the AI model - Continuous feedback loop for model improvement
"},{"location":"applications/blog/#case-study-3-food-and-beverage-manufacturer","title":"Case Study 3: Food and Beverage Manufacturer","text":"Challenge: A large food and beverage manufacturer wanted to optimize its energy consumption and reduce waste in its production processes.
Solution: They implemented an autonomous LLM agent that integrated with their energy management systems and production equipment.
Results: - 20% reduction in energy consumption - 30% decrease in production waste - 12% increase in overall production efficiency - $50 million annual cost savings - Significant progress towards sustainability goals
Key Factors for Success: - Comprehensive sensor network for real-time data collection - Integration with smart grid systems for dynamic energy management - Collaboration with process engineers to refine AI recommendations
"},{"location":"applications/blog/#case-study-4-aerospace-component-manufacturer","title":"Case Study 4: Aerospace Component Manufacturer","text":"Challenge: An aerospace component manufacturer needed to accelerate product development and improve first-time-right rates for new designs.
Solution: They implemented an autonomous LLM agent to assist in the design process, leveraging historical data, simulation results, and industry standards.
Results: - 35% reduction in design cycle time - 50% improvement in first-time-right rates for new designs - 20% increase in successful patent applications - $100 million increase in annual revenue from new products
Key Factors for Success: - Integration of CAD systems with the AI agent - Incorporation of aerospace industry standards and regulations into the AI knowledge base - Collaborative approach between AI and human engineers
These case studies demonstrate the wide-ranging benefits of autonomous LLM agents across various manufacturing sectors. The key takeaway is that successful implementation requires a holistic approach, combining technology integration, process redesign, and a focus on continuous improvement.
"},{"location":"applications/blog/#10-future-outlook","title":"10. Future Outlook","text":"As we look to the future of manufacturing, the role of autonomous LLM agents is set to become even more critical. Here are some key trends and developments that executives should keep on their radar:
"},{"location":"applications/blog/#1-advanced-natural-language-interfaces","title":"1. Advanced Natural Language Interfaces","text":"Future LLM agents will feature more sophisticated natural language interfaces, allowing workers at all levels to interact with complex manufacturing systems using conversational language. This will democratize access to AI capabilities and enhance overall operational efficiency.
"},{"location":"applications/blog/#2-enhanced-multi-modal-learning","title":"2. Enhanced Multi-modal Learning","text":"Next-generation agents will be able to process and analyze data from a wider range of sources, including text, images, video, and sensor data. This will enable more comprehensive insights and decision-making capabilities across the manufacturing ecosystem.
"},{"location":"applications/blog/#3-collaborative-ai-systems","title":"3. Collaborative AI Systems","text":"We'll see the emergence of AI ecosystems where multiple specialized agents collaborate to solve complex manufacturing challenges. For example, a design optimization agent might work in tandem with a supply chain agent and a quality control agent to develop new products that are optimized for both performance and manufacturability.
"},{"location":"applications/blog/#4-quantum-enhanced-ai","title":"4. Quantum-enhanced AI","text":"As quantum computing becomes more accessible, it will significantly enhance the capabilities of LLM agents, particularly in complex optimization problems common in manufacturing. This could lead to breakthroughs in areas such as materials science and process optimization.
"},{"location":"applications/blog/#5-augmented-reality-integration","title":"5. Augmented Reality Integration","text":"LLM agents will increasingly be integrated with augmented reality (AR) systems, providing real-time guidance and information to workers on the factory floor. This could revolutionize training, maintenance, and quality control processes.
"},{"location":"applications/blog/#6-autonomous-factories","title":"6. Autonomous Factories","text":"The ultimate vision is the development of fully autonomous factories where LLM agents orchestrate entire production processes with minimal human intervention. While this is still on the horizon, progressive implementation of autonomous systems will steadily move the industry in this direction.
"},{"location":"applications/blog/#7-ethical-ai-and-explainable-decision-making","title":"7. Ethical AI and Explainable Decision-Making","text":"As AI systems become more prevalent in critical manufacturing decisions, there will be an increased focus on developing ethical AI frameworks and enhancing the explainability of AI decision-making processes. This will be crucial for maintaining trust and meeting regulatory requirements.
"},{"location":"applications/blog/#8-circular-economy-optimization","title":"8. Circular Economy Optimization","text":"Future LLM agents will play a key role in optimizing manufacturing processes for sustainability and circular economy principles. This will include enhancing recycling processes, optimizing resource use, and designing products for easy disassembly and reuse.
To stay ahead in this rapidly evolving landscape, manufacturing executives should:
Foster a Culture of Innovation: Encourage experimentation with new AI technologies and applications.
Invest in Continuous Learning: Ensure your workforce is constantly upskilling to work effectively with advanced AI systems.
Collaborate with AI Research Institutions: Partner with universities and research labs to stay at the forefront of AI advancements in manufacturing.
Participate in Industry Consortiums: Join manufacturing technology consortiums to share knowledge and shape industry standards for AI adoption.
Develop Flexible and Scalable AI Infrastructure: Build systems that can easily incorporate new AI capabilities as they emerge.
Monitor Regulatory Developments: Stay informed about evolving regulations related to AI in manufacturing to ensure compliance and competitive advantage.
By embracing these future trends and preparing their organizations accordingly, manufacturing executives can position their companies to thrive in the AI-driven future of industry.
"},{"location":"applications/blog/#11-conclusion","title":"11. Conclusion","text":"The integration of autonomous LLM agents with RAG embedding databases, function calling, and external tools represents a paradigm shift in manufacturing. This technology has the potential to dramatically reduce costs, drive revenue growth, and revolutionize how manufacturing enterprises operate.
Key takeaways for executives and CEOs:
Transformative Potential: Autonomous LLM agents can impact every aspect of manufacturing, from supply chain optimization to product innovation.
Data-Driven Decision Making: These AI systems enable more informed, real-time decision-making based on comprehensive data analysis.
Competitive Advantage: Early adopters of this technology are likely to gain significant competitive advantages in terms of efficiency, quality, and innovation.
Holistic Implementation: Success requires a strategic approach that addresses technology, processes, and people.
Continuous Evolution: The field of AI in manufacturing is rapidly advancing, necessitating ongoing investment and adaptation.
Ethical Considerations: As AI becomes more prevalent, addressing ethical concerns and maintaining transparency will be crucial.
Future Readiness: Preparing for future developments, such as quantum-enhanced AI and autonomous factories, will be key to long-term success.
The journey to implement autonomous LLM agents in manufacturing is complex but potentially transformative. It requires vision, commitment, and a willingness to reimagine traditional manufacturing processes. However, the potential rewards \u2013 in terms of cost savings, revenue growth, and competitive advantage \u2013 are substantial.
As a manufacturing executive or CEO, your role is to lead this transformation, fostering a culture of innovation and continuous improvement. By embracing the power of autonomous LLM agents, you can position your organization at the forefront of the next industrial revolution, driving sustainable growth and success in an increasingly competitive global marketplace.
The future of manufacturing is intelligent, autonomous, and data-driven. The time to act is now. Embrace the potential of autonomous LLM agents and lead your organization into a new era of manufacturing excellence.
"},{"location":"applications/business-analyst-agent/","title":"Business analyst agent","text":""},{"location":"applications/business-analyst-agent/#building-analyst-agents-with-swarms-to-write-business-reports","title":"Building Analyst Agents with Swarms to write Business Reports","text":"Jupyter Notebook accompanying this post is accessible at: Business Analyst Agent Notebook
Solving a business problem often involves preparing a Business Case Report. This report comprehensively analyzes the problem, evaluates potential solutions, and provides evidence-based recommendations and an implementation plan to effectively address the issue and drive business value. While the process of preparing one requires an experienced business analyst, the workflow can be augmented using AI agents. Several stages of this workflow stand out as candidates for such augmentation.
In this post, we will explore how Swarms agents can be used to tackle a business problem by outlining the solution, conducting background research, and generating a preliminary report.
Before we proceed, note that this blog uses 3 API tools. Please obtain the following keys and store them in a .env file in the same folder as this file.
OPENAI_API_KEY
TAVILY_API_KEY
KAY_API_KEY
import dotenv\ndotenv.load_dotenv() # Load environment variables from .env file\n
"},{"location":"applications/business-analyst-agent/#developing-an-outline-to-solve-the-problem","title":"Developing an Outline to solve the problem","text":"Assume the business problem is: How do we improve Nike's revenue in Q3 2024? We first create a planning agent to break down the problem into dependent sub-problems.
"},{"location":"applications/business-analyst-agent/#step-1-defining-the-data-model-and-tool-schema","title":"Step 1. Defining the Data Model and Tool Schema","text":"Using Pydantic, we define a structure to help the agent generate sub-problems.
import enum\nfrom typing import List\nfrom pydantic import Field, BaseModel\n\nclass QueryType(str, enum.Enum):\n \"\"\"Enumeration representing the types of queries that can be asked to a question answer system.\"\"\"\n\n SINGLE_QUESTION = \"SINGLE\"\n MERGE_MULTIPLE_RESPONSES = \"MERGE_MULTIPLE_RESPONSES\"\n\nclass Query(BaseModel):\n \"\"\"Class representing a single question in a query plan.\"\"\"\n\n id: int = Field(..., description=\"Unique id of the query\")\n question: str = Field(\n ...,\n description=\"Question asked using a question answering system\",\n )\n dependencies: List[int] = Field(\n default_factory=list,\n description=\"List of sub questions that need to be answered before asking this question\",\n )\n node_type: QueryType = Field(\n default=QueryType.SINGLE_QUESTION,\n description=\"Type of question, either a single question or a multi-question merge\",\n )\n\nclass QueryPlan(BaseModel):\n \"\"\"Container class representing a tree of questions to ask a question answering system.\"\"\"\n\n query_graph: List[Query] = Field(\n ..., description=\"The query graph representing the plan\"\n )\n\n def _dependencies(self, ids: List[int]) -> List[Query]:\n \"\"\"Returns the dependencies of a query given their ids.\"\"\"\n\n return [q for q in self.query_graph if q.id in ids]\n
Also, a tool_schema needs to be defined. It is an instance of QueryPlan and is used to initialize the agent.
tool_schema = QueryPlan(\n query_graph = [query.dict() for query in [\n Query(\n id=1,\n question=\"How do we improve Nike's revenue in Q3 2024?\",\n dependencies=[2],\n node_type=QueryType('SINGLE')\n ),\n # ... other queries ...\n ]]\n)\n
"},{"location":"applications/business-analyst-agent/#step-2-defining-the-planning-agent","title":"Step 2. Defining the Planning Agent","text":"We specify the query, task specification and an appropriate system prompt.
from swarm_models import OpenAIChat\nfrom swarms import Agent\n\nquery = \"How do we improve Nike's revenue in Q3 2024?\"\ntask = f\"Consider: {query}. Generate just the correct query plan in JSON format.\"\nsystem_prompt = (\n \"You are a world class query planning algorithm \" \n \"capable of breaking apart questions into its \" \n \"dependency queries such that the answers can be \" \n \"used to inform the parent question. Do not answer \" \n \"the questions, simply provide a correct compute \" \n \"graph with good specific questions to ask and relevant \" \n \"dependencies. Before you call the function, think \" \n \"step-by-step to get a better understanding of the problem.\"\n )\nllm = OpenAIChat(\n temperature=0.0, model_name=\"gpt-4\", max_tokens=4000\n)\n
Then, we proceed with agent definition.
# Initialize the agent\nagent = Agent(\n agent_name=\"Query Planner\",\n system_prompt=system_prompt,\n # Set the tool schema to the JSON string -- this is the key difference\n tool_schema=tool_schema,\n llm=llm,\n max_loops=1,\n autosave=True,\n dashboard=False,\n streaming_on=True,\n verbose=True,\n interactive=False,\n # Set the output type to the tool schema which is a BaseModel\n output_type=tool_schema, # or dict, or str\n metadata_output_type=\"json\",\n # List of schemas that the agent can handle\n list_base_models=[tool_schema],\n function_calling_format_type=\"OpenAI\",\n function_calling_type=\"json\", # or soon yaml\n)\n
"},{"location":"applications/business-analyst-agent/#step-3-obtaining-outline-from-planning-agent","title":"Step 3. Obtaining Outline from Planning Agent","text":"We now run the agent, and since its output is in JSON format, we can load it as a dictionary.
generated_data = agent.run(task)\n
At times the agent may return extra content besides the JSON. The function below filters it out.
def process_json_output(content):\n # Find the index of the first occurrence of '```json\\n'\n start_index = content.find('```json\\n')\n if start_index == -1:\n # If '```json\\n' is not found, return the original content\n return content\n # Return the part of the content after '```json\\n' and remove the '```' at the end\n return content[start_index + len('```json\\n'):].rstrip('`')\n\n# Use the function to clean up the output\njson_content = process_json_output(generated_data.content)\n\nimport json\n\n# Load the JSON string into a Python object\njson_object = json.loads(json_content)\n\n# Convert the Python object back to a JSON string\njson_content = json.dumps(json_object, indent=2)\n\n# Print the JSON string\nprint(json_content)\n
Below is the output this produces.
{\n \"main_query\": \"How do we improve Nike's revenue in Q3 2024?\",\n \"sub_queries\": [\n {\n \"id\": \"1\",\n \"query\": \"What is Nike's current revenue trend?\"\n },\n {\n \"id\": \"2\",\n \"query\": \"What are the projected market trends for the sports apparel industry in 2024?\"\n },\n {\n \"id\": \"3\",\n \"query\": \"What are the current successful strategies being used by Nike's competitors?\",\n \"dependencies\": [\n \"2\"\n ]\n },\n {\n \"id\": \"4\",\n \"query\": \"What are the current and projected economic conditions in Nike's major markets?\",\n \"dependencies\": [\n \"2\"\n ]\n },\n {\n \"id\": \"5\",\n \"query\": \"What are the current consumer preferences in the sports apparel industry?\",\n \"dependencies\": [\n \"2\"\n ]\n },\n {\n \"id\": \"6\",\n \"query\": \"What are the potential areas of improvement in Nike's current business model?\",\n \"dependencies\": [\n \"1\"\n ]\n },\n {\n \"id\": \"7\",\n \"query\": \"What are the potential new markets for Nike to explore in 2024?\",\n \"dependencies\": [\n \"2\",\n \"4\"\n ]\n },\n {\n \"id\": \"8\",\n \"query\": \"What are the potential new products or services Nike could introduce in 2024?\",\n \"dependencies\": [\n \"5\"\n ]\n },\n {\n \"id\": \"9\",\n \"query\": \"What are the potential marketing strategies Nike could use to increase its revenue in Q3 2024?\",\n \"dependencies\": [\n \"3\",\n \"5\",\n \"7\",\n \"8\"\n ]\n },\n {\n \"id\": \"10\",\n \"query\": \"What are the potential cost-saving strategies Nike could implement to increase its net revenue in Q3 2024?\",\n \"dependencies\": [\n \"6\"\n ]\n }\n ]\n}\n
The raw JSON is not convenient for humans to read, so we turn it into a directed graph.
import networkx as nx\nimport matplotlib.pyplot as plt\nimport textwrap\nimport random\n\n# Create a directed graph\nG = nx.DiGraph()\n\n# Define a color map\ncolor_map = {}\n\n# Add nodes and edges to the graph\nfor sub_query in json_object['sub_queries']:\n # Check if 'dependencies' key exists in sub_query, if not, initialize it as an empty list\n if 'dependencies' not in sub_query:\n sub_query['dependencies'] = []\n # Assign a random color for each node\n color_map[sub_query['id']] = \"#{:06x}\".format(random.randint(0, 0xFFFFFF))\n G.add_node(sub_query['id'], label=textwrap.fill(sub_query['query'], width=20))\n for dependency in sub_query['dependencies']:\n G.add_edge(dependency, sub_query['id'])\n\n# Draw the graph\npos = nx.spring_layout(G)\nnx.draw(G, pos, with_labels=True, node_size=800, node_color=[color_map[node] for node in G.nodes()], node_shape=\"o\", alpha=0.5, linewidths=40)\n\n# Prepare labels for legend\nlabels = nx.get_node_attributes(G, 'label')\nhandles = [plt.Line2D([0], [0], marker='o', color=color_map[node], label=f\"{node}: {label}\", markersize=10, linestyle='None') for node, label in labels.items()]\n\n# Create a legend\nplt.legend(handles=handles, title=\"Queries\", bbox_to_anchor=(1.05, 1), loc='upper left')\n\nplt.show()\n
This produces the diagram below, which makes the plan much easier to understand.
"},{"location":"applications/business-analyst-agent/#doing-background-research-and-gathering-data","title":"Doing Background Research and Gathering Data","text":"At this point, we have solved the first half of the problem. We have an outline consisting of sub-problems to be tackled to solve our business problem. This will form the overall structure of our report. We now need to research information for each sub-problem in order to write an informed report. This is mechanically intensive and is the aspect that will benefit most from agentic intervention.
Essentially, we can spawn parallel agents to gather the data. Each agent will have two tools: a web search tool and a financial data retriever.
As they run in parallel, they will add their knowledge to a common long-term memory. We will then spawn a separate report-writing agent with access to this memory to generate our business case report.
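The overall pattern can be sketched independently of Swarms: workers run concurrently and append their findings to one shared store. In the sketch below, SharedMemory is a hypothetical stand-in for the vector database defined later:

```python
from concurrent.futures import ThreadPoolExecutor
import threading

class SharedMemory:
    # Stand-in for the vector database: a list guarded by a lock
    def __init__(self):
        self._lock = threading.Lock()
        self._docs = []

    def add(self, doc):
        with self._lock:
            self._docs.append(doc)

    def all(self):
        with self._lock:
            return list(self._docs)

def worker(query, memory):
    # A real worker would call its tools here; we just record the query
    memory.add(f"findings for: {query}")

memory = SharedMemory()
queries = ["revenue trend", "market trends", "competitor strategies"]
with ThreadPoolExecutor(max_workers=3) as pool:
    for q in queries:
        pool.submit(worker, q, memory)
```

Exiting the `with` block waits for all workers, after which the shared store holds every finding regardless of completion order.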
"},{"location":"applications/business-analyst-agent/#step-4-defining-tools-for-worker-agents","title":"Step 4. Defining Tools for Worker Agents","text":"Let us first define the 2 tools.
import os\nfrom typing import List, Dict\n\nfrom swarms import tool\n\nos.environ['TAVILY_API_KEY'] = os.getenv('TAVILY_API_KEY')\nos.environ[\"KAY_API_KEY\"] = os.getenv('KAY_API_KEY')\n\nfrom langchain_community.tools.tavily_search import TavilySearchResults\nfrom langchain_core.pydantic_v1 import BaseModel, Field\n\nfrom kay.rag.retrievers import KayRetriever\n\ndef browser(query: str) -> str:\n \"\"\"\n Search the query in the browser with the Tavily API tool.\n Args:\n query (str): The query to search in the browser.\n Returns:\n str: The search results\n \"\"\"\n internet_search = TavilySearchResults()\n results = internet_search.invoke({\"query\": query})\n response = '' \n for result in results:\n response += (result['content'] + '\\n')\n return response\n\ndef kay_retriever(query: str) -> str:\n \"\"\"\n Search the financial data query with the KayAI API tool.\n Args:\n query (str): The query to search in the KayRetriever.\n Returns:\n str: The first context retrieved as a string.\n \"\"\"\n # Initialize the retriever\n retriever = KayRetriever(dataset_id = \"company\", data_types=[\"10-K\", \"10-Q\", \"8-K\", \"PressRelease\"])\n # Query the retriever\n context = retriever.query(query=query,num_context=1)\n return context[0]['chunk_embed_text']\n
"},{"location":"applications/business-analyst-agent/#step-5-defining-long-term-memory","title":"Step 5. Defining Long-Term Memory","text":"As mentioned previously, the worker agents, running in parallel, will pool their knowledge into a common memory. Let us define it.
import logging\nimport os\nimport uuid\nfrom typing import Callable, List, Optional\n\nimport chromadb\nimport numpy as np\nfrom dotenv import load_dotenv\n\nfrom swarms.utils.data_to_text import data_to_text\nfrom swarms.utils.markdown_message import display_markdown_message\nfrom swarms_memory import AbstractVectorDatabase\n\n\n# Results storage using local ChromaDB\nclass ChromaDB(AbstractVectorDatabase):\n \"\"\"\n\n ChromaDB database\n\n Args:\n metric (str): The similarity metric to use.\n output_dir (str): The name of the collection to store the results in.\n limit_tokens (int, optional): The maximum number of tokens to use for the query. Defaults to 1000.\n n_results (int, optional): The number of results to retrieve. Defaults to 3.\n\n Methods:\n add: Add a document to the collection.\n query: Query documents from the collection.\n\n Examples:\n >>> chromadb = ChromaDB(\n >>> metric=\"cosine\",\n >>> output=\"results\",\n >>> llm=\"gpt3\",\n >>> openai_api_key=OPENAI_API_KEY,\n >>> )\n >>> chromadb.add(task, result, result_id)\n \"\"\"\n\n def __init__(\n self,\n metric: str = \"cosine\",\n output_dir: str = \"swarms\",\n limit_tokens: Optional[int] = 1000,\n n_results: int = 3,\n embedding_function: Callable = None,\n docs_folder: str = None,\n verbose: bool = False,\n *args,\n **kwargs,\n ):\n self.metric = metric\n self.output_dir = output_dir\n self.limit_tokens = limit_tokens\n self.n_results = n_results\n self.docs_folder = docs_folder\n self.verbose = verbose\n\n # Enable ChromaDB logging when verbose\n if verbose:\n logging.getLogger(\"chromadb\").setLevel(logging.INFO)\n\n # Create the persistent Chroma client\n chroma_persist_dir = \"chroma\"\n chroma_client = chromadb.PersistentClient(\n settings=chromadb.config.Settings(\n persist_directory=chroma_persist_dir,\n ),\n *args,\n **kwargs,\n )\n\n # Embedding model\n if embedding_function:\n self.embedding_function = embedding_function\n else:\n self.embedding_function = None\n\n # Create Chroma collection\n self.collection = chroma_client.get_or_create_collection(\n name=output_dir,\n metadata={\"hnsw:space\": metric},\n embedding_function=self.embedding_function,\n *args,\n **kwargs,\n )\n display_markdown_message(\n \"ChromaDB collection created:\"\n f\" {self.collection.name} with metric: {self.metric} and\"\n f\" output directory: {self.output_dir}\"\n )\n\n # If a docs folder is given, ingest its files\n if docs_folder:\n display_markdown_message(\n f\"Traversing directory: {docs_folder}\"\n )\n self.traverse_directory()\n\n def add(\n self,\n document: str,\n *args,\n **kwargs,\n ):\n \"\"\"\n Add a document to the ChromaDB collection.\n\n Args:\n document (str): The document to be added.\n\n Returns:\n str: The ID of the added document.\n \"\"\"\n try:\n doc_id = str(uuid.uuid4())\n self.collection.add(\n ids=[doc_id],\n documents=[document],\n *args,\n **kwargs,\n )\n print('-----------------')\n print(\"Document added successfully\")\n print('-----------------')\n return doc_id\n except Exception as e:\n raise Exception(f\"Failed to add document: {str(e)}\")\n\n def query(\n self,\n query_text: str,\n *args,\n **kwargs,\n ):\n \"\"\"\n Query documents from the ChromaDB collection.\n\n Args:\n query_text (str): The query string.\n\n Returns:\n list: The retrieved documents.\n \"\"\"\n try:\n docs = self.collection.query(\n query_texts=[query_text],\n n_results=self.n_results,\n *args,\n **kwargs,\n )[\"documents\"]\n return docs[0]\n except Exception as e:\n raise Exception(f\"Failed to query documents: {str(e)}\")\n\n def traverse_directory(self):\n \"\"\"\n Traverse every file in the docs folder and its subdirectories,\n converting each file to text and adding it to the collection.\n\n Returns:\n str: The ID of the last added document.\n \"\"\"\n added_to_db = False\n\n for root, dirs, files in os.walk(self.docs_folder):\n for file in files:\n file = os.path.join(root, file)\n data = data_to_text(file)\n added_to_db = self.add(data)\n print(f\"{file} added to Database\")\n\n return added_to_db\n
We can now proceed to initialize the memory.
from chromadb.utils import embedding_functions\ndefault_ef = embedding_functions.DefaultEmbeddingFunction()\n\nmemory = ChromaDB(\n metric=\"cosine\",\n n_results=3,\n output_dir=\"results\",\n embedding_function=default_ef\n)\n
"},{"location":"applications/business-analyst-agent/#step-6-defining-worker-agents","title":"Step 6. Defining Worker Agents","text":"The Worker Agent subclasses the Agent class. The only difference between the two is in how the run() method works. In the Agent class, run() simply returns the set of tool commands to run, but does not execute them. We, however, want them executed. In addition, the tools return relevant information as output, which we want to add to our memory. To incorporate these two changes, we define WorkerAgent as follows.
class WorkerAgent(Agent):\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n\n def run(self, task, *args, **kwargs):\n response = super().run(task, *args, **kwargs)\n print(response.content)\n\n json_dict = json.loads(process_json_output(response.content))\n\n if response is not None:\n try:\n commands = json_dict[\"commands\"]\n except KeyError:\n # The agent sometimes emits a single command instead of a list\n commands = [json_dict['command']]\n\n for command in commands:\n tool_name = command[\"name\"]\n\n if tool_name not in ['browser', 'kay_retriever']:\n continue\n\n query = command[\"args\"][\"query\"]\n\n # Get the tool by its name\n tool = globals()[tool_name]\n tool_response = tool(query)\n\n # Add tool's output to long term memory\n self.long_term_memory.add(tool_response)\n
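The globals() lookup works but is fragile in larger codebases; a small registry dict is a safer dispatch pattern. A sketch with hypothetical stand-ins for the two tools:

```python
def browser(query):
    # Hypothetical stand-in for the real Tavily-backed tool
    return f"web results for {query}"

def kay_retriever(query):
    # Hypothetical stand-in for the real Kay-backed tool
    return f"financial data for {query}"

TOOLS = {"browser": browser, "kay_retriever": kay_retriever}

def execute(command):
    # Dispatch a parsed tool command via an explicit registry
    tool = TOOLS.get(command["name"])
    if tool is None:
        raise KeyError(f"Unknown tool: {command['name']}")
    return tool(command["args"]["query"])

result = execute({"name": "kay_retriever", "args": {"query": "Nike revenue trend"}})
```

The registry makes the set of callable tools explicit and fails loudly on unknown names rather than depending on whatever happens to be in the module's global scope.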
We can then instantiate an object of the WorkerAgent class.
worker_agent = WorkerAgent(\n agent_name=\"Worker Agent\",\n system_prompt=(\n \"Autonomous agent that can interact with browser, \"\n \"financial data retriever and other agents. Be Helpful \" \n \"and Kind. Use the tools provided to assist the user. \"\n \"Generate the plan with list of commands in JSON format.\"\n ),\n llm=OpenAIChat(\n temperature=0.0, model_name=\"gpt-4\", max_tokens=4000\n),\n max_loops=\"auto\",\n autosave=True,\n dashboard=False,\n streaming_on=True,\n verbose=True,\n stopping_token=\"<DONE>\",\n interactive=True,\n tools=[browser, kay_retriever],\n long_term_memory=memory,\n code_interpreter=True,\n)\n
"},{"location":"applications/business-analyst-agent/#step-7-running-the-worker-agents","title":"Step 7. Running the Worker Agents","text":"At this point, we need to set up a concurrent workflow. While the order of adding tasks to the workflow doesn't matter (they will all run concurrently when executed), it is worth defining an order for these tasks. This order will come in handy later when writing the report with our Writer Agent.
The order we will follow is a breadth-first traversal (BFT) of the sub-queries in the graph we made earlier (shown below again for reference). BFT makes sense here because we want all the parent questions a query depends on to be answered before the query itself. Also, since the plan can contain independent subgraphs, we perform BFT separately on each subgraph.
Below is the code that produces the order of processing sub-queries.
from collections import deque, defaultdict\n\n# Define the graph nodes\nnodes = json_object['sub_queries']\n\n# Create a graph from the nodes\ngraph = defaultdict(list)\nfor node in nodes:\n for dependency in node['dependencies']:\n graph[dependency].append(node['id'])\n\n# Find all nodes with no dependencies (potential starting points)\nstart_nodes = [node['id'] for node in nodes if not node['dependencies']]\n\n# Adjust the BFT function to handle dependencies correctly\ndef bft_corrected(start, graph, nodes_info):\n visited = set()\n queue = deque([start])\n order = []\n\n while queue:\n node = queue.popleft()\n if node not in visited:\n # Check if all dependencies of the current node are visited\n node_dependencies = [n['id'] for n in nodes if n['id'] == node][0]\n dependencies_met = all(dep in visited for dep in nodes_info[node_dependencies]['dependencies'])\n\n if dependencies_met:\n visited.add(node)\n order.append(node)\n # Add only nodes to the queue whose dependencies are fully met\n for next_node in graph[node]:\n if all(dep in visited for dep in nodes_info[next_node]['dependencies']):\n queue.append(next_node)\n else:\n # Requeue the node to check dependencies later\n queue.append(node)\n\n return order\n\n# Dictionary to access node information quickly\nnodes_info = {node['id']: node for node in nodes}\n\n# Perform BFT for each unvisited start node using the corrected BFS function\nvisited_global = set()\nbfs_order = []\n\nfor start in start_nodes:\n if start not in visited_global:\n order = bft_corrected(start, graph, nodes_info)\n bfs_order.extend(order)\n visited_global.update(order)\n\nprint(\"BFT Order:\", bfs_order)\n
This produces the following output.
BFT Order: ['1', '6', '10', '2', '3', '4', '5', '7', '8', '9']\n
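The dependency-respecting traversal above is, in effect, a topological sort. As a cross-check, Kahn's algorithm gives the same guarantee (every query appears after all of its dependencies); a compact sketch on a tiny hypothetical graph:

```python
from collections import deque, defaultdict

def topological_order(nodes):
    # nodes: list of {"id": ..., "dependencies": [...]}
    indegree = {n["id"]: len(n.get("dependencies", [])) for n in nodes}
    children = defaultdict(list)
    for n in nodes:
        for dep in n.get("dependencies", []):
            children[dep].append(n["id"])
    # Start from all nodes with no unmet dependencies
    queue = deque(sorted(i for i, d in indegree.items() if d == 0))
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for child in children[node]:
            indegree[child] -= 1
            if indegree[child] == 0:
                queue.append(child)
    return order

toy_nodes = [
    {"id": "1"},
    {"id": "2"},
    {"id": "3", "dependencies": ["2"]},
    {"id": "4", "dependencies": ["1", "3"]},
]
order = topological_order(toy_nodes)
```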
Now, let's define our ConcurrentWorkflow and run it.
import os\nfrom dotenv import load_dotenv\nfrom swarms import Agent, ConcurrentWorkflow, OpenAIChat, Task\n\n# Create a workflow\nworkflow = ConcurrentWorkflow(max_workers=5)\ntask_list = []\n\nfor node in bfs_order:\n sub_query = nodes_info[node]['query']\n task = Task(worker_agent, sub_query)\n print('-----------------')\n print(\"Added task: \", sub_query)\n print('-----------------')\n task_list.append(task)\n\nworkflow.add(tasks=task_list)\n\n# Run the workflow\nworkflow.run()\n
Below is part of the output this workflow produces. We can clearly see the agent's thought process and the plan it came up with to solve a particular sub-query. In addition, we see the tool-calling schema it produces in \"command\".
...\n...\ncontent='\\n{\\n \"thoughts\": {\\n \"text\": \"To find out Nike\\'s current revenue trend, I will use the financial data retriever tool to search for \\'Nike revenue trend\\'.\",\\n \"reasoning\": \"The financial data retriever tool allows me to search for specific financial data, so I can look up the current revenue trend of Nike.\", \\n \"plan\": \"Use the financial data retriever tool to search for \\'Nike revenue trend\\'. Parse the result to get the current revenue trend and format that into a readable report.\"\\n },\\n \"command\": {\\n \"name\": \"kay_retriever\", \\n \"args\": {\\n \"query\": \"Nike revenue trend\"\\n }\\n }\\n}\\n```' response_metadata={'token_usage': {'completion_tokens': 152, 'prompt_tokens': 1527, 'total_tokens': 1679}, 'model_name': 'gpt-4', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}\nSaved agent state to: Worker Agent_state.json\n\n{\n \"thoughts\": {\n \"text\": \"To find out Nike's current revenue trend, I will use the financial data retriever tool to search for 'Nike revenue trend'.\",\n \"reasoning\": \"The financial data retriever tool allows me to search for specific financial data, so I can look up the current revenue trend of Nike.\", \n \"plan\": \"Use the financial data retriever tool to search for 'Nike revenue trend'. Parse the result to get the current revenue trend and format that into a readable report.\"\n },\n \"command\": {\n \"name\": \"kay_retriever\", \n \"args\": {\n \"query\": \"Nike revenue trend\"\n }\n }\n}\n\n-----------------\nDocument added successfully\n-----------------\n...\n...\n
Here, \"name\" is the name of the tool to be called and \"args\" holds the arguments to pass to the tool call. As mentioned before, we modified Agent's default behaviour in WorkerAgent, so the tool call is executed here and its results (information from web pages and the Kay Retriever API) are added to long-term memory. We get confirmation of this from the message Document added successfully.
At this point, our Worker Agents have gathered all the background information required to generate the report. We have also defined a coherent structure for the report: the BFT order of the sub-queries. Now it's time to define a Writer Agent and call it sequentially in that order.
from swarms import Agent, OpenAIChat, tool\n\nagent = Agent(\n agent_name=\"Writer Agent\",\n agent_description=(\n \"This agent writes reports based on information in long-term memory\"\n ),\n system_prompt=(\n \"You are a world-class financial report writer. \"\n \"Write analytical and accurate responses using memory to answer the query. \"\n \"Do not mention use of long-term memory in the report. \"\n \"Do not mention Writer Agent in response. \"\n \"Return only response content in strict markdown format.\"\n ),\n llm=OpenAIChat(temperature=0.2, model='gpt-3.5-turbo'),\n max_loops=1,\n autosave=True,\n verbose=True,\n long_term_memory=memory,\n)\n
The individual sections of the report will be collected in a list.
report = []\n
Let us now run the writer agent.
for node in bfs_order:\n sub_query = nodes_info[node]['query']\n print(\"Running task: \", sub_query)\n out = agent.run(f\"Consider: {sub_query}. Write response in strict markdown format using long-term memory. Do not mention Writer Agent in response.\")\n print(out)\n try:\n report.append(out.content)\n except AttributeError:\n # The LLM wrapper may return a plain string instead of a message object\n report.append(out)\n
Now, we need to clean up the report a bit so that it renders professionally.
# Remove any content before the first \"#\" as that signals start of heading\n# Anything before this usually contains filler content\nstripped_report = [entry[entry.find('#'):] if '#' in entry else entry for entry in report]\nreport = stripped_report\n\n# At times the LLM outputs \\\\n instead of \\n\ncleaned_report = [entry.replace(\"\\\\n\", \"\\n\") for entry in report]\nimport re\n\n# Function to clean up unnecessary metadata from the report entries\ndef clean_report(report):\n cleaned_report = []\n for entry in report:\n # This pattern matches 'response_metadata={' followed by any characters that are not '}' (non-greedy), \n # possibly nested inside other braces, until the closing '}'.\n cleaned_entry = re.sub(r\"response_metadata=\\{[^{}]*(?:\\{[^{}]*\\}[^{}]*)*\\}\", \"\", entry, flags=re.DOTALL)\n cleaned_report.append(cleaned_entry)\n return cleaned_report\n\n# Apply the cleaning function to the markdown report\ncleaned_report = clean_report(cleaned_report)\n
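The metadata-stripping regex can be checked in isolation (the function is repeated here so the snippet is self-contained; the sample entry is hypothetical):

```python
import re

def clean_report(report):
    # Strip 'response_metadata={...}' blobs, allowing one level of nested braces
    pattern = r"response_metadata=\{[^{}]*(?:\{[^{}]*\}[^{}]*)*\}"
    return [re.sub(pattern, "", entry, flags=re.DOTALL) for entry in report]

sample = ["# Heading\nBody text. response_metadata={'token_usage': {'total_tokens': 10}}"]
out = clean_report(sample)
```

Note the inner non-capturing group handles one level of brace nesting, which is enough for the token-usage dicts seen in the output above; deeper nesting would need a different approach.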
After cleaning, we join the parts of the report together to get our final report.
final_report = ' \\n '.join(cleaned_report)\n
In a Jupyter Notebook, we can use the code below to render it as Markdown.
from IPython.display import display, Markdown\n\ndisplay(Markdown(final_report))\n
"},{"location":"applications/business-analyst-agent/#final-generated-report","title":"Final Generated Report","text":""},{"location":"applications/business-analyst-agent/#nikes-current-revenue-trend","title":"Nike's Current Revenue Trend","text":"Nike's current revenue trend has been steadily increasing over the past few years. In the most recent fiscal year, Nike reported a revenue of $37.4 billion, which was a 7% increase from the previous year. This growth can be attributed to strong sales in key markets, successful marketing campaigns, and a focus on innovation in product development. Overall, Nike continues to demonstrate strong financial performance and is well-positioned for future growth. ### Potential Areas of Improvement in Nike's Business Model
Sustainability Practices: Nike could further enhance its sustainability efforts by reducing its carbon footprint, using more eco-friendly materials, and ensuring ethical labor practices throughout its supply chain.
Diversification of Product Portfolio: While Nike is known for its athletic footwear and apparel, diversifying into new product categories or expanding into untapped markets could help drive growth and mitigate risks associated with a single product line.
E-commerce Strategy: Improving the online shopping experience, investing in digital marketing, and leveraging data analytics to personalize customer interactions could boost online sales and customer loyalty.
Innovation and R&D: Continuously investing in research and development to stay ahead of competitors, introduce new technologies, and enhance product performance could help maintain Nike's competitive edge in the market.
Brand Image and Reputation: Strengthening brand image through effective marketing campaigns, community engagement, and transparent communication with stakeholders can help build trust and loyalty among consumers. ### Potential Cost-Saving Strategies for Nike to Increase Net Revenue in Q3 2024
Supply Chain Optimization: Streamlining the supply chain, reducing transportation costs, and improving inventory management can lead to significant cost savings for Nike.
Operational Efficiency: Implementing lean manufacturing practices, reducing waste, and optimizing production processes can help lower production costs and improve overall efficiency.
Outsourcing Non-Core Functions: Outsourcing non-core functions such as IT services, customer support, or logistics can help reduce overhead costs and focus resources on core business activities.
Energy Efficiency: Investing in energy-efficient technologies, renewable energy sources, and sustainable practices can lower utility costs and demonstrate a commitment to environmental responsibility.
Negotiating Supplier Contracts: Negotiating better terms with suppliers, leveraging economies of scale, and exploring alternative sourcing options can help lower procurement costs and improve margins.
By implementing these cost-saving strategies, Nike can improve its bottom line and increase net revenue in Q3 2024. ### Projected Market Trends for the Sports Apparel Industry in 2024
Sustainable Fashion: Consumers are increasingly demanding eco-friendly and sustainable products, leading to a rise in sustainable sportswear options in the market.
Digital Transformation: The sports apparel industry is expected to continue its shift towards digital platforms, with a focus on e-commerce, personalized shopping experiences, and digital marketing strategies.
Athleisure Wear: The trend of athleisure wear, which combines athletic and leisure clothing, is projected to remain popular in 2024 as consumers seek comfort and versatility in their apparel choices.
Innovative Materials: Advances in technology and material science are likely to drive the development of innovative fabrics and performance-enhancing materials in sports apparel, catering to the demand for high-quality and functional products.
Health and Wellness Focus: With a growing emphasis on health and wellness, sports apparel brands are expected to incorporate features that promote comfort, performance, and overall well-being in their products.
Overall, the sports apparel industry in 2024 is anticipated to be characterized by sustainability, digitalization, innovation, and a focus on consumer health and lifestyle trends. ### Current Successful Strategies Used by Nike's Competitors
Adidas: Adidas has been successful in leveraging collaborations with celebrities and designers to create limited-edition collections that generate hype and drive sales. They have also focused on sustainability initiatives, such as using recycled materials in their products, to appeal to environmentally conscious consumers.
Under Armour: Under Armour has differentiated itself by targeting performance-driven athletes and emphasizing technological innovation in their products. They have also invested heavily in digital marketing and e-commerce to reach a wider audience and enhance the customer shopping experience.
Puma: Puma has successfully capitalized on the athleisure trend by offering stylish and versatile sportswear that can be worn both in and out of the gym. They have also focused on building partnerships with influencers and sponsoring high-profile athletes to increase brand visibility and credibility.
Lululemon: Lululemon has excelled in creating a strong community around its brand, hosting events, classes, and collaborations to engage with customers beyond just selling products. They have also prioritized customer experience by offering personalized services and creating a seamless omnichannel shopping experience.
New Balance: New Balance has carved out a niche in the market by emphasizing quality craftsmanship, heritage, and authenticity in their products. They have also focused on customization and personalization options for customers, allowing them to create unique and tailored footwear and apparel.
Overall, Nike's competitors have found success through a combination of innovative product offerings, strategic marketing initiatives, and a focus on customer engagement and experience. ### Current and Projected Economic Conditions in Nike's Major Markets
United States: The United States, being one of Nike's largest markets, is currently experiencing moderate economic growth driven by consumer spending, low unemployment rates, and a rebound in manufacturing. However, uncertainties surrounding trade policies, inflation, and interest rates could impact consumer confidence and spending in the near future.
China: China remains a key market for Nike, with a growing middle class and increasing demand for sportswear and athletic footwear. Despite recent trade tensions with the U.S., China's economy is projected to continue expanding, driven by domestic consumption, infrastructure investments, and technological advancements.
Europe: Economic conditions in Europe vary across countries, with some experiencing sluggish growth due to Brexit uncertainties, political instability, and trade tensions. However, overall consumer confidence is improving, and the sports apparel market is expected to grow, driven by e-commerce and sustainability trends.
Emerging Markets: Nike's presence in emerging markets such as India, Brazil, and Southeast Asia provides opportunities for growth, given the rising disposable incomes, urbanization, and increasing focus on health and fitness. However, challenges such as currency fluctuations, regulatory changes, and competition from local brands could impact Nike's performance in these markets.
Overall, Nike's major markets exhibit a mix of opportunities and challenges, with economic conditions influenced by global trends, geopolitical factors, and consumer preferences. ### Current Consumer Preferences in the Sports Apparel Industry
Sustainability: Consumers are increasingly seeking eco-friendly and sustainable options in sports apparel, driving brands to focus on using recycled materials, reducing waste, and promoting ethical practices.
Athleisure: The trend of athleisure wear continues to be popular, with consumers looking for versatile and comfortable clothing that can be worn both during workouts and in everyday life.
Performance and Functionality: Consumers prioritize performance-enhancing features in sports apparel, such as moisture-wicking fabrics, breathable materials, and ergonomic designs that enhance comfort and mobility.
Personalization: Customization options, personalized fit, and unique design elements are appealing to consumers who seek individuality and exclusivity in their sports apparel choices.
Brand Transparency: Consumers value transparency in brand practices, including supply chain transparency, ethical sourcing, and clear communication on product quality and manufacturing processes.
Overall, consumer preferences in the sports apparel industry are shifting towards sustainability, versatility, performance, personalization, and transparency, influencing brand strategies and product offerings. ### Potential New Markets for Nike to Explore in 2024
India: With a growing population, increasing disposable incomes, and a rising interest in health and fitness, India presents a significant opportunity for Nike to expand its presence and tap into a large consumer base.
Africa: The African market, particularly countries with emerging economies and a young population, offers potential for Nike to introduce its products and capitalize on the growing demand for sportswear and athletic footwear.
Middle East: Countries in the Middle East, known for their luxury shopping destinations and a growing interest in sports and fitness activities, could be strategic markets for Nike to target and establish a strong foothold.
Latin America: Markets in Latin America, such as Brazil, Mexico, and Argentina, present opportunities for Nike to cater to a diverse consumer base and leverage the region's passion for sports and active lifestyles.
Southeast Asia: Rapid urbanization, increasing urban middle-class population, and a trend towards health and wellness in countries like Indonesia, Thailand, and Vietnam make Southeast Asia an attractive region for Nike to explore and expand its market reach.
By exploring these new markets in 2024, Nike can diversify its geographical presence, reach untapped consumer segments, and drive growth in emerging economies. ### Potential New Products or Services Nike Could Introduce in 2024
Smart Apparel: Nike could explore the integration of technology into its apparel, such as smart fabrics that monitor performance metrics, provide feedback, or enhance comfort during workouts.
Athletic Accessories: Introducing a line of athletic accessories like gym bags, water bottles, or fitness trackers could complement Nike's existing product offerings and provide additional value to customers.
Customization Platforms: Offering personalized design options for footwear and apparel through online customization platforms could appeal to consumers seeking unique and tailored products.
Athletic Recovery Gear: Developing recovery-focused products like compression wear, recovery sandals, or massage tools could cater to athletes and fitness enthusiasts looking to enhance post-workout recovery.
Sustainable Collections: Launching sustainable collections made from eco-friendly materials, recycled fabrics, or biodegradable components could align with consumer preferences for environmentally conscious products.
By introducing these new products or services in 2024, Nike can innovate its product portfolio, cater to evolving consumer needs, and differentiate itself in the competitive sports apparel market.
### Potential Marketing Strategies for Nike to Increase Revenue in Q3 2024
Influencer Partnerships: Collaborating with popular athletes, celebrities, or social media influencers to promote Nike products can help reach a wider audience and drive sales.
Interactive Campaigns: Launching interactive marketing campaigns, contests, or events that engage customers and create buzz around new product releases can generate excitement and increase brand visibility.
Social Media Engagement: Leveraging social media platforms to connect with consumers, share user-generated content, and respond to feedback can build brand loyalty and encourage repeat purchases.
Localized Marketing: Tailoring marketing messages, promotions, and product offerings to specific regions or target demographics can enhance relevance and appeal to diverse consumer groups.
Customer Loyalty Programs: Implementing loyalty programs, exclusive offers, or rewards for repeat customers can incentivize brand loyalty, increase retention rates, and drive higher lifetime customer value.
By employing these marketing strategies in Q3 2024, Nike can enhance its brand presence, attract new customers, and ultimately boost revenue growth.
"},{"location":"applications/customer_support/","title":"Customer support","text":""},{"location":"applications/customer_support/#applications-of-swarms-revolutionizing-customer-support","title":"Applications of Swarms: Revolutionizing Customer Support","text":"Introduction: In today's fast-paced digital world, responsive and efficient customer support is a linchpin for business success. The introduction of AI-driven swarms in the customer support domain can transform the way businesses interact with and assist their customers. By leveraging the combined power of multiple AI agents working in concert, businesses can achieve unprecedented levels of efficiency, customer satisfaction, and operational cost savings.
"},{"location":"applications/customer_support/#the-benefits-of-using-swarms-for-customer-support","title":"The Benefits of Using Swarms for Customer Support:","text":"24/7 Availability: Swarms never sleep. Customers receive instantaneous support at any hour, ensuring constant satisfaction and loyalty.
Infinite Scalability: Whether it's ten inquiries or ten thousand, swarms can handle fluctuating volumes with ease, eliminating the need for vast human teams and minimizing response times.
Adaptive Intelligence: Swarms learn collectively, meaning that a solution found for one customer can be instantly applied to benefit all. This leads to constantly improving support experiences, evolving with every interaction.
AI Inbox Monitor: Continuously scans email inboxes, identifying and categorizing support requests for swift responses.
Intelligent Debugging: Proactively helps customers by diagnosing and troubleshooting underlying issues.
Automated Refunds & Coupons: Seamless integration with payment systems like Stripe allows for instant issuance of refunds or coupons if a problem remains unresolved.
Full System Integration: Holistically connects with CRM, email systems, and payment portals, ensuring a cohesive and unified support experience.
Conversational Excellence: With advanced LLMs (Large Language Models), the swarm agents can engage in natural, human-like conversations, enhancing customer comfort and trust.
Rule-based Operation: By working with rule engines, swarms ensure that all actions adhere to company guidelines, ensuring consistent, error-free support.
Turing Test Ready: Crafted to meet and exceed the Turing Test standards, ensuring that every customer interaction feels genuine and personal.
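The capabilities above can be pictured as a small routing pipeline: a triage step classifies each incoming message, then hands it to the matching handler. A hypothetical sketch — the keyword classifier and handler names are illustrative stand-ins for LLM-backed Swarms agents, not actual framework components:

```python
def classify_request(message: str) -> str:
    """Naive keyword triage standing in for an LLM-based inbox-monitor agent."""
    text = message.lower()
    if "refund" in text or "charge" in text:
        return "billing"
    if "error" in text or "crash" in text:
        return "debugging"
    return "general"

def route(message: str, handlers: dict) -> str:
    """Dispatch the message to the handler registered for its category."""
    category = classify_request(message)
    return handlers[category](message)
```

In a real deployment each handler would itself be an agent (for example, one wired to a payment system for refunds), and the classifier would be a conversational model rather than keyword matching.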
Conclusion: Swarms are not just another technological advancement; they represent the future of customer support. Their ability to provide round-the-clock, scalable, and continuously improving support can redefine customer experience standards. By adopting swarms, businesses can stay ahead of the curve, ensuring unparalleled customer loyalty and satisfaction.
Experience the future of customer support. Dive into the swarm revolution.
"},{"location":"applications/marketing_agencies/","title":"Marketing agencies","text":""},{"location":"applications/marketing_agencies/#swarms-in-marketing-agencies-a-new-era-of-automated-media-strategy","title":"Swarms in Marketing Agencies: A New Era of Automated Media Strategy","text":""},{"location":"applications/marketing_agencies/#introduction","title":"Introduction:","text":"Definition: The challenge of creating an effective media plan that resonates with a target audience and aligns with brand objectives.
Traditional Solutions and Their Shortcomings: Manual brainstorming sessions, over-reliance on past strategies, and long turnaround times leading to inefficiency.
How Swarms Address This Problem:
Definition: The tedious task of determining where ads will be placed, considering demographics, platform specifics, and more.
Traditional Solutions and Their Shortcomings: Manual placement leading to possible misalignment with target audiences and brand objectives.
How Swarms Address This Problem:
Definition: Efficiently allocating and managing advertising budgets across multiple campaigns, platforms, and timeframes.
Traditional Solutions and Their Shortcomings: Manual budgeting using tools like Excel, prone to errors, and inefficient shifts in allocations.
How Swarms Address This Problem:
This section explores the fundamental limitations of individual AI agents and why multi-agent systems are necessary for complex tasks. Understanding these limitations is crucial for designing effective multi-agent architectures.
"},{"location":"concepts/limitations/#overview","title":"Overview","text":"graph TD\n A[Individual Agent Limitations] --> B[Context Window Limits]\n A --> C[Hallucination]\n A --> D[Single Task Execution]\n A --> E[Lack of Collaboration]\n A --> F[Accuracy Issues]\n A --> G[Processing Speed]
"},{"location":"concepts/limitations/#1-context-window-limits","title":"1. Context Window Limits","text":""},{"location":"concepts/limitations/#the-challenge","title":"The Challenge","text":"Individual agents are constrained by fixed context windows, limiting their ability to process large amounts of information simultaneously.
graph LR\n subgraph \"Context Window Limitation\"\n Input[Large Document] --> Truncation[Truncation]\n Truncation --> ProcessedPart[Processed Part]\n Truncation --> UnprocessedPart[Unprocessed Part]\n end
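The truncation problem above is commonly mitigated by splitting oversized input across multiple agent calls, each handling a chunk that fits the window, then combining the partial results. A minimal map-reduce sketch — the chunk size and the `summarize` callable are illustrative, not part of the Swarms API:

```python
def chunk_text(text: str, max_chars: int = 4000) -> list[str]:
    """Split a large document into window-sized chunks."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def map_reduce_summarize(text: str, summarize) -> str:
    """Summarize each chunk independently, then combine the partials.

    `summarize` stands in for an agent call; any str -> str function works.
    """
    partials = [summarize(chunk) for chunk in chunk_text(text)]
    return summarize("\n".join(partials))
```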
"},{"location":"concepts/limitations/#impact","title":"Impact","text":"Individual agents may generate plausible-sounding but incorrect information, especially when dealing with ambiguous or incomplete data.
graph TD\n Input[Ambiguous Input] --> Agent[AI Agent]\n Agent --> Valid[Valid Output]\n Agent --> Hallucination[Hallucinated Output]\n style Hallucination fill:#ff9999
"},{"location":"concepts/limitations/#impact_1","title":"Impact","text":"Most individual agents are optimized for specific tasks and struggle with multi-tasking or adapting to new requirements.
graph LR\n Task1[Task A] --> Agent1[Agent A]\n Task2[Task B] --> Agent2[Agent B]\n Task3[Task C] --> Agent3[Agent C]\n Agent1 --> Output1[Output A]\n Agent2 --> Output2[Output B]\n Agent3 --> Output3[Output C]
"},{"location":"concepts/limitations/#impact_2","title":"Impact","text":"Individual agents operate in isolation, unable to share insights or coordinate actions with other agents.
graph TD\n A1[Agent 1] --> O1[Output 1]\n A2[Agent 2] --> O2[Output 2]\n A3[Agent 3] --> O3[Output 3]\n style A1 fill:#f9f,stroke:#333\n style A2 fill:#f9f,stroke:#333\n style A3 fill:#f9f,stroke:#333
"},{"location":"concepts/limitations/#impact_3","title":"Impact","text":"Individual agents may produce inaccurate results due to: - Limited training data - Model biases - Lack of cross-validation - Incomplete context understanding
graph LR\n Input[Input Data] --> Processing[Processing]\n Processing --> Accurate[Accurate Output]\n Processing --> Inaccurate[Inaccurate Output]\n style Inaccurate fill:#ff9999
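One common mitigation is cross-validation across agents: run the same query through several independent agents and accept the majority answer, so a single model's bias or slip is outvoted. A minimal sketch, where each agent is simply a callable returning an answer string (illustrative, not the Swarms API):

```python
from collections import Counter

def majority_vote(agents, query: str) -> str:
    """Ask every agent the same question and return the most common answer."""
    answers = [agent(query) for agent in agents]
    winner, _count = Counter(answers).most_common(1)[0]
    return winner
```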
"},{"location":"concepts/limitations/#6-processing-speed-limitations","title":"6. Processing Speed Limitations","text":""},{"location":"concepts/limitations/#the-challenge_5","title":"The Challenge","text":"Individual agents may experience: - Slow response times - Resource constraints - Limited parallel processing - Bottlenecks in complex tasks
graph TD\n Input[Input] --> Queue[Processing Queue]\n Queue --> Processing[Sequential Processing]\n Processing --> Delay[Processing Delay]\n Delay --> Output[Delayed Output]
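A multi-agent system sidesteps the sequential queue above by fanning tasks out across agents concurrently. A sketch using Python's standard-library thread pool, with the agent as a stand-in callable:

```python
from concurrent.futures import ThreadPoolExecutor

def run_parallel(agent, tasks):
    """Run one agent callable over many tasks concurrently.

    For network-bound LLM calls, threads overlap the waiting time, so total
    latency approaches the slowest single call rather than the sum of all calls.
    Results come back in the same order as the input tasks.
    """
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(agent, tasks))
```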
"},{"location":"concepts/limitations/#best-practices-for-mitigation","title":"Best Practices for Mitigation","text":"Foster collaboration
Implement Verification
Track performance
Optimize Resource Usage
Understanding these limitations is crucial for: - Designing robust multi-agent systems - Implementing effective mitigation strategies - Optimizing system performance - Ensuring reliable outputs
The next section explores how Multi-Agent Architecture addresses these limitations through collaborative approaches and specialized agent roles.
"},{"location":"contributors/docs/","title":"Contributing to Swarms Documentation","text":"The Swarms documentation serves as the primary gateway for developer and user engagement within the Swarms ecosystem. Comprehensive, clear, and consistently updated documentation accelerates adoption, reduces support requests, and helps maintain a thriving developer community. This guide offers an in-depth, actionable framework for contributing to the Swarms documentation site, covering the full lifecycle from initial setup to the implementation of our bounty-based rewards program.
This guide is designed for first-time contributors, experienced engineers, and technical writers alike. It emphasizes professional standards, collaborative development practices, and incentivized participation through our structured rewards program. Contributors play a key role in helping us scale and evolve our ecosystem by improving the clarity, accessibility, and technical depth of our documentation.
"},{"location":"contributors/docs/#1-introduction","title":"1. Introduction","text":"Documentation in the Swarms ecosystem is not simply static text. It is a living, breathing system that guides users, developers, and enterprises in effectively utilizing our frameworks, SDKs, APIs, and tools. Whether you are documenting a new feature, refining an API call, writing a tutorial, or correcting existing information, every contribution has a direct impact on the product\u2019s usability and user satisfaction.
Objectives of this Guide:
Define a standardized contribution workflow for Swarms documentation.
Clarify documentation roles, responsibilities, and submission expectations.
Establish quality benchmarks, review procedures, and formatting rules.
Introduce the Swarms Documentation Bounty Program to incentivize excellence.
By treating documentation as a core product component, we ensure continuity, scalability, and user satisfaction.
"},{"location":"contributors/docs/#3-understanding-the-swarms-ecosystem","title":"3. Understanding the Swarms Ecosystem","text":"The Swarms ecosystem consists of multiple tightly integrated components that serve developers and enterprise clients alike:
Core Documentation Repository: The main documentation hub for all Swarms technologies GitHub.
Rust SDK (swarms_rs
): Official documentation for the Rust implementation. Repo.
Tools Documentation (swarms_tools
): Guides for CLI and GUI utilities.
Hosted API Reference: Up-to-date REST API documentation: Swarms API Docs.
Marketplace & Chat: Web platforms and communication interfaces swarms.world.
All contributions funnel through the docs/
directory in the core repo and are structured via MkDocs.
Swarms documentation is powered by MkDocs, an extensible static site generator tailored for project documentation. To contribute, you should be comfortable with:
Markdown: For formatting structure, code snippets, lists, and links.
MkDocs Configuration: mkdocs.yml
manages structure, theme, and navigation.
Version Control: GitHub for branching, version tracking, and collaboration.
Recommended Tooling:
Markdown linters to enforce syntax consistency.
Spellcheckers to ensure grammatical accuracy.
Doc generators for automated API reference extraction.
Git v2.30 or higher
Node.js and npm for related dependency management
MkDocs and Material for MkDocs theme (pip install mkdocs mkdocs-material
)
A GitHub account with permissions to fork and submit pull requests
Visit: https://github.com/kyegomez/swarms
Click on Fork to create your version of the repository
git clone https://github.com/<your-username>/swarms.git\ncd swarms/docs\ngit checkout -b feature/docs-<short-description>\n
"},{"location":"contributors/docs/#6-understanding-the-repository-structure","title":"6. Understanding the Repository Structure","text":"Explore the documentation directory:
docs/\n\u251c\u2500\u2500 index.md\n\u251c\u2500\u2500 mkdocs.yml\n\u251c\u2500\u2500 swarms_rs/\n\u2502 \u251c\u2500\u2500 overview.md\n\u2502 \u2514\u2500\u2500 ...\n\u2514\u2500\u2500 swarms_tools/\n \u251c\u2500\u2500 install.md\n \u2514\u2500\u2500 ...\n
"},{"location":"contributors/docs/#61-sdktools-directories","title":"6.1 SDK/Tools Directories","text":"Rust SDK (docs/swarms_rs
): Guides, references, and API walkthroughs for the Rust-based implementation.
Swarms Tools (docs/swarms_tools
): CLI guides, GUI usage instructions, and architecture documentation.
Add new .md
files in the folder corresponding to your documentation type.
Update mkdocs.yml
to integrate your new document:
nav:\n - Home: index.md\n - Swarms Rust:\n - Overview: swarms_rs/overview.md\n - Your Topic: swarms_rs/your_file.md\n - Swarms Tools:\n - Installation: swarms_tools/install.md\n - Your Guide: swarms_tools/your_file.md\n
"},{"location":"contributors/docs/#7-writing-and-editing-documentation","title":"7. Writing and Editing Documentation","text":""},{"location":"contributors/docs/#71-content-standards","title":"7.1 Content Standards","text":"Clarity: Explain complex ideas in simple, direct language.
Style Consistency: Match the tone and structure of existing docs.
Accuracy: Validate all technical content and code snippets.
Accessibility: Include alt text for images and use semantic Markdown.
Sequential heading levels (#
, ##
, ###
)
Use fenced code blocks with language identifiers
Use consistent line spacing and avoid unnecessary line breaks
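Put together, the formatting rules above look like this in practice (the page content is purely illustrative):

```markdown
# Stream API Overview

## Getting Started

Plain, direct sentences, with blank lines between blocks and alt text on images:

![Agent architecture diagram](assets/agent.png)

### Next Steps
```

Note the sequential heading levels (`#`, then `##`, then `###`) with no levels skipped.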
Place .md
files into the correct subdirectory:
Rust SDK Docs: docs/swarms_rs/
Tooling Docs: docs/swarms_tools/
After writing your content:
mkdocs.yml
nav
hierarchymkdocs serve\n# Open http://127.0.0.1:8000 to verify output\n
"},{"location":"contributors/docs/#9-workflow-branches-commits-pull-requests","title":"9. Workflow: Branches, Commits, Pull Requests","text":""},{"location":"contributors/docs/#91-branch-naming-guidelines","title":"9.1 Branch Naming Guidelines","text":"feature/docs-api-pagination
fix/docs-typo-tooling
Follow Conventional Commits:
docs(swarms_rs): add stream API tutorial\ndocs(swarms_tools): correct CLI usage example\n
"},{"location":"contributors/docs/#93-submitting-a-pull-request","title":"9.3 Submitting a Pull Request","text":"documentation
, bounty-eligible
)Every PR undergoes automated and human review:
CI Checks: Syntax validation, link checking, and formatting
Manual Review: Maintainers check for clarity, completeness, and relevance
Iteration: Collaborate through feedback and finalize changes
Once approved, maintainers will merge and deploy the updated documentation.
"},{"location":"contributors/docs/#11-swarms-documentation-bounty-initiative","title":"11. Swarms Documentation Bounty Initiative","text":"To foster continuous improvement, we offer structured rewards for eligible contributions:
"},{"location":"contributors/docs/#111-contribution-types","title":"11.1 Contribution Types","text":"Creating comprehensive new tutorials and deep dives
Updating outdated references and examples
Fixing typos, grammar, and formatting errors
Translating existing content
bounty-eligible
Stay Updated: Sync your fork weekly to avoid merge conflicts
Atomic PRs: Submit narrowly scoped changes for faster review
Use Visuals: Support documentation with screenshots or diagrams
Cross-Reference: Link to related documentation for completeness
Version Awareness: Specify SDK/tool versions in code examples
Voice: Informative, concise, and respectful
Terminology: Use standardized terms (Swarm
, Swarms
) consistently
Code: Format snippets using language-specific linters
Accessibility: Include alt attributes and avoid ambiguous links
We use analytics and community input to prioritize improvements:
Traffic Reports: Track most/least visited pages
Search Logs: Detect content gaps from common search terms
Feedback Forms: Collect real-world user input
Schedule quarterly audits to refine structure and content across all repositories.
"},{"location":"contributors/docs/#15-community-promotion-engagement","title":"15. Community Promotion & Engagement","text":"Promote your contributions via:
Swarms Discord: https://discord.gg/jM3Z6M9uMq
Swarms Telegram: https://t.me/swarmsgroupchat
Swarms Twitter: https://x.com/swarms_corp
Startup Program Showcases: https://www.swarms.xyz/programs/startups
Active contributors are often spotlighted for leadership roles and community awards.
"},{"location":"contributors/docs/#16-resource-index","title":"16. Resource Index","text":"Core GitHub Repo: https://github.com/kyegomez/swarms
Rust SDK Repo: https://github.com/The-Swarm-Corporation/swarms-rs
Swarms API Docs: https://docs.swarms.world/en/latest/swarms_cloud/swarms_api/
Marketplace: https://swarms.world
Join our monthly Documentation Office Hours for real-time mentorship and Q&A.
"},{"location":"contributors/docs/#17-frequently-asked-questions","title":"17. Frequently Asked Questions","text":"Q1: Is MkDocs required to contribute? A: It's recommended but not required; Markdown knowledge is sufficient to get started.
Q2: Can I rework existing sections? A: Yes, propose changes via issues first, or submit PRs with clear descriptions.
Q3: When are bounties paid? A: Within 30 days of merge, following internal validation.
"},{"location":"contributors/docs/#18-final-thoughts","title":"18. Final Thoughts","text":"The Swarms documentation is a critical piece of our technology stack. As a contributor, your improvements\u2014big or small\u2014directly impact adoption, user retention, and developer satisfaction. This guide aims to equip you with the tools, practices, and incentives to make meaningful contributions. Your work helps us deliver a more usable, scalable, and inclusive platform.
We look forward to your pull requests, feedback, and ideas.
"},{"location":"contributors/environment_setup/","title":"Environment Setup Guide for Swarms Contributors","text":"Welcome to the Swarms development environment setup guide! This comprehensive guide will walk you through setting up your development environment from scratch, whether you're a first-time contributor or an experienced developer.
\ud83d\ude80 One-Click Setup (Recommended)
New! Use our automated setup script that handles everything:
git clone https://github.com/kyegomez/swarms.git\ncd swarms\nchmod +x scripts/setup.sh\n./scripts/setup.sh\n
This script automatically installs Poetry, creates a virtual environment, installs all dependencies, sets up pre-commit hooks, and more! Manual Setup
Alternative: For manual control, install Python 3.10+, Git, and Poetry, then run:
git clone https://github.com/kyegomez/swarms.git\ncd swarms\npoetry install --with dev\n
"},{"location":"contributors/environment_setup/#prerequisites","title":"Prerequisites","text":"Before setting up your development environment, ensure you have the following installed:
"},{"location":"contributors/environment_setup/#system-requirements","title":"System Requirements","text":"Tool Version Purpose Python 3.10+ Core runtime Git 2.30+ Version control Poetry 1.4+ Dependency management (recommended) Node.js 16+ Documentation tools (optional)"},{"location":"contributors/environment_setup/#operating-system-support","title":"Operating System Support","text":"macOSUbuntu/DebianWindows# Install Homebrew if not already installed\n/bin/bash -c \"$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)\"\n\n# Install prerequisites\nbrew install python@3.10 git poetry node\n
# Update package list\nsudo apt update\n\n# Install Python 3.10 and pip\nsudo apt install python3.10 python3.10-venv python3-pip git curl\n\n# Install Poetry\ncurl -sSL https://install.python-poetry.org | python3 -\n\n# Add Poetry to PATH\nexport PATH=\"$HOME/.local/bin:$PATH\"\necho 'export PATH=\"$HOME/.local/bin:$PATH\"' >> ~/.bashrc\n
(Invoke-WebRequest -Uri https://install.python-poetry.org -UseBasicParsing).Content | python -\n
We provide a comprehensive setup script that automates the entire development environment setup process. This is the recommended approach for new contributors.
"},{"location":"contributors/environment_setup/#what-the-setup-script-does","title":"What the Setup Script Does","text":"The scripts/setup.sh
script automatically handles:
.env
file template with common variables# Clone the repository\ngit clone https://github.com/kyegomez/swarms.git\ncd swarms\n\n# Make the script executable and run it\nchmod +x scripts/setup.sh\n./scripts/setup.sh\n
"},{"location":"contributors/environment_setup/#script-features","title":"Script Features","text":"\ud83c\udfaf Smart Detection\ud83d\udd27 Comprehensive Setup\ud83d\udccb Environment Template\ud83d\udca1 Helpful Guidance The script intelligently detects your system state: - Checks if Poetry is already installed - Verifies Python version compatibility - Detects existing virtual environments - Checks for Git repository status
Installs everything you need:
# All dependency groups\npoetry install --with dev,lint,test\n\n# Pre-commit hooks\npre-commit install\npre-commit install --hook-type commit-msg\n\n# Initial verification run\npre-commit run --all-files\n
Creates a starter .env
file:
# Generated .env template\nOPENAI_API_KEY=your_openai_api_key_here\nANTHROPIC_API_KEY=your_anthropic_key_here\nLOG_LEVEL=INFO\nDEVELOPMENT=true\n
Provides next steps and useful commands: - How to activate the virtual environment - Essential Poetry commands - Testing and development workflow - Troubleshooting tips
"},{"location":"contributors/environment_setup/#when-to-use-manual-setup","title":"When to Use Manual Setup","text":"Use the manual setup approach if you: - Want full control over each step - Have specific system requirements - Are troubleshooting installation issues - Prefer to understand each component
"},{"location":"contributors/environment_setup/#repository-setup","title":"Repository Setup","text":""},{"location":"contributors/environment_setup/#step-1-fork-and-clone","title":"Step 1: Fork and Clone","text":"Fork the repository on GitHub: github.com/kyegomez/swarms
Clone your fork:
git clone https://github.com/YOUR_USERNAME/swarms.git\ncd swarms\n
Add upstream remote:
git remote add upstream https://github.com/kyegomez/swarms.git\n
Verify remotes:
git remote -v\n# origin https://github.com/YOUR_USERNAME/swarms.git (fetch)\n# origin https://github.com/YOUR_USERNAME/swarms.git (push)\n# upstream https://github.com/kyegomez/swarms.git (fetch)\n# upstream https://github.com/kyegomez/swarms.git (push)\n
Choose your preferred method for managing dependencies:
Poetry (Recommended)pip + venvPoetry provides superior dependency resolution and virtual environment management.
Traditional pip-based setup with virtual environments.
"},{"location":"contributors/environment_setup/#installation","title":"Installation","text":"# Navigate to project directory\ncd swarms\n\n# Install all dependencies including development tools\npoetry install --with dev,lint,test\n\n# Activate the virtual environment\npoetry shell\n
"},{"location":"contributors/environment_setup/#useful-poetry-commands","title":"Useful Poetry Commands","text":"# Add a new dependency\npoetry add package_name\n\n# Add a development dependency\npoetry add --group dev package_name\n\n# Update dependencies\npoetry update\n\n# Show dependency tree\npoetry show --tree\n\n# Run commands in the virtual environment\npoetry run python your_script.py\n
"},{"location":"contributors/environment_setup/#installation_1","title":"Installation","text":"# Navigate to project directory\ncd swarms\n\n# Create virtual environment\npython -m venv venv\n\n# Activate virtual environment\n# On macOS/Linux:\nsource venv/bin/activate\n# On Windows:\nvenv\\Scripts\\activate\n\n# Upgrade pip\npip install --upgrade pip\n\n# Install core dependencies\npip install -r requirements.txt\n\n# Install documentation dependencies (optional)\npip install -r docs/requirements.txt\n
"},{"location":"contributors/environment_setup/#development-tools-setup","title":"Development Tools Setup","text":""},{"location":"contributors/environment_setup/#code-quality-tools","title":"Code Quality Tools","text":"Swarms uses several tools to maintain code quality:
FormattingLintingType CheckingBlack - Code formatter
# Format code\npoetry run black swarms/\n# or with pip:\nblack swarms/\n\n# Check formatting without making changes\nblack swarms/ --check --diff\n
Ruff - Fast Python linter
# Run linter\npoetry run ruff check swarms/\n# or with pip:\nruff check swarms/\n\n# Auto-fix issues\nruff check swarms/ --fix\n
MyPy - Static type checker
# Run type checking\npoetry run mypy swarms/\n# or with pip:\nmypy swarms/\n
"},{"location":"contributors/environment_setup/#pre-commit-hooks-optional-but-recommended","title":"Pre-commit Hooks (Optional but Recommended)","text":"Set up pre-commit hooks to automatically run quality checks:
# Install pre-commit\npoetry add --group dev pre-commit\n# or with pip:\npip install pre-commit\n\n# Install git hooks\npre-commit install\n\n# Run on all files\npre-commit run --all-files\n
The project uses the latest ruff-pre-commit configuration with separate hooks for linting and formatting:
--fix
flag)This configuration ensures consistent code quality and style across the project while avoiding conflicts with Jupyter notebook files.
"},{"location":"contributors/environment_setup/#testing-setup","title":"Testing Setup","text":""},{"location":"contributors/environment_setup/#running-tests","title":"Running Tests","text":"# Run all tests\npoetry run pytest\n# or with pip:\npytest\n\n# Run tests with coverage\npoetry run pytest --cov=swarms tests/\n\n# Run specific test file\npoetry run pytest tests/test_specific_file.py\n\n# Run tests matching a pattern\npoetry run pytest -k \"test_agent\"\n
"},{"location":"contributors/environment_setup/#test-structure","title":"Test Structure","text":"The project uses pytest with the following structure:
tests/\n\u251c\u2500\u2500 agents/ # Agent-related tests\n\u251c\u2500\u2500 structs/ # Multi-agent structure tests\n\u251c\u2500\u2500 tools/ # Tool tests\n\u251c\u2500\u2500 utils/ # Utility tests\n\u2514\u2500\u2500 conftest.py # Test configuration\n
"},{"location":"contributors/environment_setup/#writing-tests","title":"Writing Tests","text":"# Example test file: tests/test_example.py\nimport pytest\nfrom swarms import Agent\n\ndef test_agent_creation():\n \"\"\"Test that an agent can be created successfully.\"\"\"\n agent = Agent(\n agent_name=\"test_agent\",\n system_prompt=\"You are a helpful assistant\"\n )\n assert agent.agent_name == \"test_agent\"\n\n@pytest.mark.parametrize(\"input_val,expected\", [\n (\"hello\", \"HELLO\"),\n (\"world\", \"WORLD\"),\n])\ndef test_uppercase(input_val, expected):\n \"\"\"Example parametrized test.\"\"\"\n assert input_val.upper() == expected\n
"},{"location":"contributors/environment_setup/#documentation-setup","title":"Documentation Setup","text":""},{"location":"contributors/environment_setup/#building-documentation-locally","title":"Building Documentation Locally","text":"# Install documentation dependencies\npip install -r docs/requirements.txt\n\n# Navigate to docs directory\ncd docs\n\n# Serve documentation locally\nmkdocs serve\n# Documentation will be available at http://127.0.0.1:8000\n
"},{"location":"contributors/environment_setup/#documentation-structure","title":"Documentation Structure","text":"docs/\n\u251c\u2500\u2500 index.md # Homepage\n\u251c\u2500\u2500 mkdocs.yml # MkDocs configuration\n\u251c\u2500\u2500 swarms/ # Core documentation\n\u251c\u2500\u2500 examples/ # Examples and tutorials\n\u251c\u2500\u2500 contributors/ # Contributor guides\n\u2514\u2500\u2500 assets/ # Images and static files\n
"},{"location":"contributors/environment_setup/#writing-documentation","title":"Writing Documentation","text":"Use Markdown with MkDocs extensions:
# Page Title\n\n!!! tip \"Pro Tip\"\n Use admonitions to highlight important information.\n\n=== \"Python\"\n ```python\n from swarms import Agent\n agent = Agent()\n ```\n\n=== \"CLI\"\n ```bash\n swarms create-agent --name myagent\n ```\n
"},{"location":"contributors/environment_setup/#environment-variables","title":"Environment Variables","text":"Create a .env
file for local development:
# Copy example environment file\ncp .env.example .env # if it exists\n\n# Or create your own .env file\ntouch .env\n
Common environment variables:
# .env file\nOPENAI_API_KEY=your_openai_api_key_here\nANTHROPIC_API_KEY=your_anthropic_api_key_here\nGROQ_API_KEY=your_groq_api_key_here\n\n# Development settings\nDEBUG=true\nLOG_LEVEL=INFO\n\n# Optional: Database settings\nDATABASE_URL=sqlite:///swarms.db\n
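In application code, these variables are typically read with `os.getenv`, optionally after loading the `.env` file with the `python-dotenv` package (an extra dependency, not installed by default):

```python
import os

# Optional: load the .env file into os.environ first (pip install python-dotenv)
# from dotenv import load_dotenv
# load_dotenv()

# os.getenv returns None (or the supplied default) when the variable is
# unset, instead of raising KeyError like os.environ["..."] would.
api_key = os.getenv("OPENAI_API_KEY")
log_level = os.getenv("LOG_LEVEL", "INFO")
```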
"},{"location":"contributors/environment_setup/#verification-steps","title":"Verification Steps","text":"Automated Verification
If you used the automated setup script (./scripts/setup.sh
), most verification steps are handled automatically. The script runs verification checks and reports any issues.
For manual setups, verify your setup is working correctly:
"},{"location":"contributors/environment_setup/#1-basic-import-test","title":"1. Basic Import Test","text":"poetry run python -c \"from swarms import Agent; print('\u2705 Import successful')\"\n
"},{"location":"contributors/environment_setup/#2-run-a-simple-agent","title":"2. Run a Simple Agent","text":"# test_setup.py\nfrom swarms import Agent\n\nagent = Agent(\n agent_name=\"setup_test\",\n system_prompt=\"You are a helpful assistant for testing setup.\",\n max_loops=1\n)\n\nresponse = agent.run(\"Say hello!\")\nprint(f\"\u2705 Agent response: {response}\")\n
"},{"location":"contributors/environment_setup/#3-code-quality-check","title":"3. Code Quality Check","text":"# Run all quality checks\npoetry run black swarms/ --check\npoetry run ruff check swarms/\npoetry run pytest tests/ -x\n
"},{"location":"contributors/environment_setup/#4-documentation-build","title":"4. Documentation Build","text":"cd docs\nmkdocs build\necho \"\u2705 Documentation built successfully\"\n
"},{"location":"contributors/environment_setup/#development-workflow","title":"Development Workflow","text":""},{"location":"contributors/environment_setup/#creating-a-feature-branch","title":"Creating a Feature Branch","text":"# Sync with upstream\ngit fetch upstream\ngit checkout master\ngit rebase upstream/master\n\n# Create feature branch\ngit checkout -b feature/your-feature-name\n\n# Make your changes...\n# Add and commit\ngit add .\ngit commit -m \"feat: add your feature description\"\n\n# Push to your fork\ngit push origin feature/your-feature-name\n
"},{"location":"contributors/environment_setup/#daily-development-commands","title":"Daily Development Commands","text":"# Start development session\ncd swarms\npoetry shell # or source venv/bin/activate\n\n# Pull latest changes\ngit fetch upstream\ngit rebase upstream/master\n\n# Run tests during development\npoetry run pytest tests/ -v\n\n# Format and lint before committing\npoetry run black swarms/\npoetry run ruff check swarms/ --fix\n\n# Run a quick smoke test\npoetry run python -c \"from swarms import Agent; print('\u2705 All good')\"\n
"},{"location":"contributors/environment_setup/#troubleshooting","title":"Troubleshooting","text":"First Step: Try the Automated Setup
If you're experiencing setup issues, try running our automated setup script first:
chmod +x scripts/setup.sh\n./scripts/setup.sh\n
This script handles most common setup problems automatically and provides helpful error messages."},{"location":"contributors/environment_setup/#common-issues-and-solutions","title":"Common Issues and Solutions","text":"Common issue categories: Poetry issues, Python version issues, import errors, and test failures. Problem: Poetry command not found
# Solution: Add Poetry to PATH\nexport PATH=\"$HOME/.local/bin:$PATH\"\n# Add to your shell profile (.bashrc, .zshrc, etc.)\n
Problem: Poetry install fails
# Solution: Clear cache and reinstall\npoetry cache clear --all pypi\npoetry install --with dev\n
Problem: Wrong Python version
# Check Python version\npython --version\n\n# Use pyenv to manage Python versions\ncurl https://pyenv.run | bash\npyenv install 3.10.12\npyenv local 3.10.12\n
Problem: Cannot import swarms modules
# Ensure you're in the virtual environment\npoetry shell\n# or\nsource venv/bin/activate\n\n# Install in development mode\npoetry install --with dev\n# or\npip install -e .\n
Problem: Tests fail due to missing dependencies
# Install test dependencies\npoetry install --with test\n# or\npip install pytest pytest-cov pytest-mock\n
"},{"location":"contributors/environment_setup/#getting-help","title":"Getting Help","text":"If you encounter issues:
Now that your environment is set up:
Explore the codebase, starting with swarms/structs/agent.py
Try the examples in the examples/ directory
Look for good-first-issue labels on GitHub
You're Ready!
Your Swarms development environment is now set up! You're ready to contribute to the most important technology for multi-agent collaboration.
"},{"location":"contributors/environment_setup/#quick-reference","title":"Quick Reference","text":""},{"location":"contributors/environment_setup/#essential-commands","title":"Essential Commands","text":"# Setup (choose one)\n./scripts/setup.sh # Automated setup (recommended)\npoetry install --with dev # Manual dependency install\n\n# Daily workflow\npoetry shell # Activate environment\npoetry run pytest # Run tests\npoetry run black swarms/ # Format code\npoetry run ruff check swarms/ # Lint code\n\n# Git workflow\ngit fetch upstream # Get latest changes\ngit rebase upstream/master # Update your branch\ngit checkout -b feature/name # Create feature branch\ngit push origin feature/name # Push your changes\n\n# Documentation\ncd docs && mkdocs serve # Serve docs locally\nmkdocs build # Build docs\n
"},{"location":"contributors/environment_setup/#project-structure","title":"Project Structure","text":"swarms/\n\u251c\u2500\u2500 swarms/ # Core package\n\u2502 \u251c\u2500\u2500 agents/ # Agent implementations\n\u2502 \u251c\u2500\u2500 structs/ # Multi-agent structures\n\u2502 \u251c\u2500\u2500 tools/ # Agent tools\n\u2502 \u2514\u2500\u2500 utils/ # Utilities\n\u251c\u2500\u2500 examples/ # Usage examples\n\u251c\u2500\u2500 tests/ # Test suite\n\u251c\u2500\u2500 docs/ # Documentation\n\u251c\u2500\u2500 pyproject.toml # Poetry configuration\n\u2514\u2500\u2500 requirements.txt # Pip dependencies\n
Happy coding! \ud83d\ude80
"},{"location":"contributors/main/","title":"Contributing to Swarms: Building the Infrastructure for The Agentic Economy","text":"Multi-agent collaboration is the most important technology in human history. It will reshape civilization by enabling billions of autonomous agents to coordinate and solve problems at unprecedented scale.
The Foundation of Tomorrow
Swarms is the foundational infrastructure powering this autonomous economy. By contributing, you're building the systems that will enable the next generation of intelligent automation.
"},{"location":"contributors/main/#what-youre-building","title":"What You're Building","text":"Autonomous SystemsIntelligence NetworksSmart MarketsProblem SolvingInfrastructureAutonomous Resource Allocation
Global supply chains and energy distribution optimized in real-time
Distributed Decision Making
Collaborative intelligence networks across industries and governments
Self-Organizing Markets
Agent-driven marketplaces that automatically balance supply and demand
Collaborative Problem Solving
Massive agent swarms tackling climate change, disease, and scientific discovery
Adaptive Infrastructure
Self-healing systems that evolve without human intervention
"},{"location":"contributors/main/#why-contribute-to-swarms","title":"Why Contribute to Swarms?","text":""},{"location":"contributors/main/#shape-the-future-of-civilization","title":"Shape the Future of Civilization","text":"Your Impact
Immediate Recognition
Career Benefits
Master cutting-edge technologies:
Technology Area Skills You'll Develop Swarm Intelligence Design sophisticated agent coordination mechanisms Distributed Computing Build scalable architectures for thousands of agents Communication Protocols Create novel interaction patterns Production AI Deploy and orchestrate enterprise-scale systems Research Implementation Turn cutting-edge papers into working code"},{"location":"contributors/main/#research-community-access","title":"Research Community Access","text":"Collaborative Environment
Essential Resources
Documentation GitHub Repository Community Channels
"},{"location":"contributors/main/#step-2-find-your-path","title":"Step 2: Find Your Path","text":"graph TD\n A[Choose Your Path] --> B[Browse Issues]\n A --> C[Review Roadmap]\n A --> D[Propose Ideas]\n B --> E[good first issue]\n B --> F[help wanted]\n C --> G[Core Features]\n C --> H[Research Areas]\n D --> I[Discussion Forums]
"},{"location":"contributors/main/#step-3-make-impact","title":"Step 3: Make Impact","text":"Instant Recognition
Benefit Description Social Media Features Every merged PR showcased publicly Community Recognition Contributor badges and documentation credits Professional References Formal acknowledgment for portfolios Direct Mentorship Access to core team guidance"},{"location":"contributors/main/#long-term-opportunities","title":"Long-term Opportunities","text":"Career Growth
Building Solutions for Humanity
Swarms enables technology that addresses critical challenges:
Research, Healthcare, Environment, Education, and Economy. Scientific Research
Accelerate collaborative research and discovery across disciplines
Healthcare Innovation
Support drug discovery and personalized medicine development
Environmental Solutions
Monitor climate and optimize sustainability initiatives
Educational Technology
Create adaptive learning systems for personalized education
Economic Innovation
Generate new opportunities and efficiency improvements
"},{"location":"contributors/main/#get-involved","title":"Get Involved","text":""},{"location":"contributors/main/#connect-with-us","title":"Connect With Us","text":"Join the Community
GitHub Repository Documentation Community Forums
The Future is Now
Multi-agent collaboration will define the next century of human progress. The autonomous economy depends on the infrastructure we build today.
Your Mission
Your contribution to Swarms helps create the foundation for billions of autonomous agents working together to solve humanity's greatest challenges.
Join us in building the most important technology of our time.
Built with ❤️ by the global Swarms community
"},{"location":"contributors/tools/","title":"Contributing Tools and Plugins to the Swarms Ecosystem","text":""},{"location":"contributors/tools/#introduction","title":"Introduction","text":"The Swarms ecosystem is a modular, intelligent framework built to support the seamless integration, execution, and orchestration of dynamic tools that perform specific functions. These tools form the foundation for how autonomous agents operate, enabling them to retrieve data, communicate with APIs, conduct computational tasks, and respond intelligently to real-world requests. By contributing to Swarms Tools, developers can empower agents with capabilities that drive practical, enterprise-ready applications.
This guide provides a comprehensive roadmap for contributing tools and plugins to the Swarms Tools repository. It is written for software engineers, data scientists, platform architects, and technologists who seek to develop modular, production-grade functionality within the Swarms agent framework.
Whether your expertise lies in finance, security, machine learning, or developer tooling, this documentation outlines the essential standards, workflows, and integration patterns to make your contributions impactful and interoperable.
"},{"location":"contributors/tools/#repository-architecture","title":"Repository Architecture","text":"The Swarms Tools GitHub repository is meticulously organized to maintain structure, scalability, and domain-specific clarity. Each folder within the repository represents a vertical where tools can be contributed and extended over time. These folders include:
finance/
: Market analytics, stock price retrievers, blockchain APIs, etc.
social/
: Sentiment analysis, engagement tracking, and media scraping utilities.
health/
: Interfaces for EHR systems, wearable device APIs, or health informatics.
ai/
: Model-serving utilities, embedding services, and prompt engineering functions.
security/
: Encryption libraries, risk scoring tools, penetration test interfaces.
devtools/
: Build tools, deployment utilities, code quality analyzers.
misc/
: General-purpose helpers or utilities that serve multiple domains.
Each tool inside these directories is implemented as a single, self-contained function. These functions are expected to adhere to Swarms-wide standards for clarity, typing, documentation, and API key handling.
"},{"location":"contributors/tools/#tool-development-specifications","title":"Tool Development Specifications","text":"To ensure long-term maintainability and smooth agent-tool integration, each contribution must strictly follow the specifications below.
"},{"location":"contributors/tools/#1-function-structure-and-api-usage","title":"1. Function Structure and API Usage","text":"import requests\nimport os\n\ndef fetch_data(symbol: str, date_range: str) -> str:\n \"\"\"\n Fetch financial data for a given symbol and date range.\n\n Args:\n symbol (str): Ticker symbol of the asset.\n date_range (str): Timeframe for the data (e.g., '1d', '1m', '1y').\n\n Returns:\n str: A string containing financial data or an error message.\n \"\"\"\n api_key = os.getenv(\"FINANCE_API_KEY\")\n url = f\"https://api.financeprovider.com/data?symbol={symbol}&range={date_range}&apikey={api_key}\"\n response = requests.get(url)\n if response.status_code == 200:\n return response.text\n return \"Error fetching data.\"\n
All logic must be encapsulated inside a single callable function, written using pure Python. Where feasible, network requests should be stateless, side-effect-free, and gracefully handle errors or timeouts.
"},{"location":"contributors/tools/#2-type-hints-and-input-validation","title":"2. Type Hints and Input Validation","text":"All function parameters must be typed using Python's type hinting system. Use built-in primitives where possible (e.g., str
, int
, float
, bool
) and make use of Optional
or Union
types when dealing with nullable parameters or multiple formats. This aids LLMs and type checkers in understanding expected input ranges.
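For instance, a hypothetical search tool with nullable and multi-format parameters might be typed as follows (all names here are illustrative, not part of the Swarms API):

```python
from typing import Optional, Union


def search_news(
    query: str,
    max_results: int = 10,
    date_filter: Optional[str] = None,  # e.g. "2024-01-01"; None means no date filter
    sources: Union[str, list, None] = None,  # one source name, a list of them, or None for all
) -> str:
    """Hypothetical tool: search news articles matching a query.

    Args:
        query (str): Search terms.
        max_results (int): Maximum number of articles to return.
        date_filter (Optional[str]): ISO date to filter from, or None.
        sources (Union[str, list, None]): One source, several, or None for all.

    Returns:
        str: A human-readable summary string.
    """
    if isinstance(sources, str):
        # Normalize a single source name into a list so both forms are accepted.
        sources = [sources]
    source_note = ", ".join(sources) if sources else "all sources"
    return f"Found up to {max_results} results for '{query}' from {source_note}."
```

The Union on sources lets callers (and LLMs generating arguments) pass either form, while the normalization step keeps the internal logic simple.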
Regardless of internal logic or complexity, tools must return outputs in a consistent string format. The string can contain plain text or a serialized JSON object (as a string), but the function must not return raw objects, dictionaries, or binary blobs. This standardization ensures all downstream agents can interpret tool output predictably.
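Under this convention, a tool that computes structured data serializes it before returning; a minimal sketch (the function name and fields are illustrative):

```python
import json


def summarize_metrics(values: list) -> str:
    """Compute basic statistics for a list of numbers.

    Returns:
        str: A JSON string with count, min, max, and mean,
        or a JSON error object if the input is empty.
    """
    if not values:
        return json.dumps({"error": "no values provided"})
    stats = {
        "count": len(values),
        "min": min(values),
        "max": max(values),
        "mean": sum(values) / len(values),
    }
    # Return a serialized JSON string, never the raw dict itself.
    return json.dumps(stats)
```

A downstream agent can always treat the result as text, or parse it with `json.loads` when structure is needed.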
"},{"location":"contributors/tools/#4-api-key-management-best-practices","title":"4. API Key Management Best Practices","text":"Security and environment isolation are paramount. Never hardcode API keys or sensitive credentials inside source code. Always retrieve them dynamically using the os.getenv(\"ENV_VAR\")
approach. If a tool requires credentials, clearly document the required environment variable names in the function docstring.
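A minimal sketch of this pattern, assuming a hypothetical ALERTS_API_KEY variable and notification service:

```python
import os


def send_alert(message: str) -> str:
    """Queue an alert via a hypothetical notification API.

    Requires:
        ALERTS_API_KEY: API key for the notification service,
        read from the environment; never hardcode it in source.
    """
    api_key = os.getenv("ALERTS_API_KEY")
    if api_key is None:
        # Fail early with a clear, user-friendly message instead of
        # making an unauthenticated request that fails later.
        return "Error: ALERTS_API_KEY environment variable is not set."
    return f"Alert queued: {message}"
```

Documenting the variable name in the docstring lets both human users and agents discover what configuration the tool needs.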
Every tool must include a detailed docstring that describes:
The function's purpose and operational scope
All parameter types and formats
A clear return type
Usage examples or sample inputs/outputs
Example usage:
result = fetch_data(\"AAPL\", \"1m\")\nprint(result)\n
Well-documented code accelerates adoption and improves LLM interpretability.
"},{"location":"contributors/tools/#contribution-workflow","title":"Contribution Workflow","text":"To submit a tool, follow the workflow below. This ensures your code integrates cleanly and is easy for maintainers to review.
"},{"location":"contributors/tools/#step-1-fork-the-repository","title":"Step 1: Fork the Repository","text":"Navigate to the Swarms Tools repository and fork it to your personal or organization\u2019s GitHub account.
"},{"location":"contributors/tools/#step-2-clone-your-fork","title":"Step 2: Clone Your Fork","text":"git clone https://github.com/YOUR_USERNAME/swarms-tools.git\ncd swarms-tools\n
"},{"location":"contributors/tools/#step-3-create-a-feature-branch","title":"Step 3: Create a Feature Branch","text":"git checkout -b feature/add-tool-<tool-name>\n
Use descriptive branch names. This is especially helpful when collaborating in teams or maintaining audit trails.
"},{"location":"contributors/tools/#step-4-build-your-tool","title":"Step 4: Build Your Tool","text":"Navigate into the appropriate category folder (e.g., finance/
, ai/
, etc.) and implement your tool according to the defined schema.
If your tool belongs in a new category, you may create a new folder with a clear, lowercase name.
"},{"location":"contributors/tools/#step-5-run-local-tests-if-applicable","title":"Step 5: Run Local Tests (if applicable)","text":"Ensure the function executes correctly and does not throw runtime errors. If feasible, test edge cases and verify consistent behavior across platforms.
"},{"location":"contributors/tools/#step-6-commit-your-changes","title":"Step 6: Commit Your Changes","text":"git add .\ngit commit -m \"Add <tool_name> under <folder_name>: API-based tool for X\"\n
"},{"location":"contributors/tools/#step-7-push-to-github","title":"Step 7: Push to GitHub","text":"git push origin feature/add-tool-<tool-name>\n
"},{"location":"contributors/tools/#step-8-submit-a-pull-request","title":"Step 8: Submit a Pull Request","text":"On GitHub, open a pull request from your fork to the main Swarms Tools repository. Your PR description should: - Summarize the tool\u2019s functionality - Reference any related issues or enhancements - Include usage notes or setup instructions (e.g., required API keys)
"},{"location":"contributors/tools/#integration-with-swarms-agents","title":"Integration with Swarms Agents","text":"Once your tool has been merged into the official repository, it can be utilized by Swarms agents as part of their available capabilities.
The example below illustrates how to embed a newly added tool into an autonomous agent:
from swarms import Agent\nfrom finance.stock_price import get_stock_price\n\nagent = Agent(\n agent_name=\"Devin\",\n system_prompt=(\n \"Autonomous agent that can interact with humans and other agents.\"\n \" Be helpful and kind. Use the tools provided to assist the user.\"\n \" Return all code in markdown format.\"\n ),\n max_loops=\"auto\",\n autosave=True,\n dashboard=False,\n streaming_on=True,\n verbose=True,\n stopping_token=\"<DONE>\",\n interactive=True,\n tools=[get_stock_price],\n metadata_output_type=\"json\",\n function_calling_format_type=\"OpenAI\",\n function_calling_type=\"json\",\n)\n\nagent.run(\"What is the current stock price of AAPL?\")\n
By registering tools in the tools
parameter during agent creation, you enable dynamic function calling. The agent interprets natural language input, selects the appropriate tool, and invokes it with valid arguments.
This agent-tool paradigm enables highly flexible and responsive behavior across workflows involving research, automation, financial analysis, social listening, and more.
"},{"location":"contributors/tools/#tool-maintenance-and-long-term-ownership","title":"Tool Maintenance and Long-Term Ownership","text":"Contributors are expected to uphold the quality of their tools post-merge. This includes:
Monitoring for issues or bugs reported by the community
Updating tools when APIs deprecate or modify their behavior
Improving efficiency, error handling, or documentation over time
If a tool becomes outdated or unsupported, maintainers may archive or revise it to maintain ecosystem integrity.
Contributors whose tools receive wide usage or demonstrate excellence in design may be offered elevated privileges or invited to maintain broader tool categories.
"},{"location":"contributors/tools/#best-practices-for-enterprise-grade-contributions","title":"Best Practices for Enterprise-Grade Contributions","text":"To ensure your tool is production-ready and enterprise-compliant, observe the following practices:
Run static type checking with mypy
Use formatters like black
and linters such as flake8
Avoid unnecessary external dependencies
Keep functions modular and readable
Prefer named parameters over positional arguments for clarity
Handle API errors gracefully and return user-friendly messages
Document limitations or assumptions in the docstring
Optional but encouraged: - Add unit tests to validate function output
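A unit test for a tool can be as simple as asserting on the returned string; a toy sketch (both the function and the test are illustrative), runnable with pytest or plain Python:

```python
def get_greeting(name: str) -> str:
    """Toy tool used only to illustrate the testing pattern."""
    return f"Hello, {name}!"


def test_get_greeting_returns_string():
    # Tools must always return strings, so assert on type as well as content.
    result = get_greeting("Swarms")
    assert isinstance(result, str)
    assert "Swarms" in result
```

For tools that call external APIs, the same pattern applies with the network layer mocked out, so tests stay fast and deterministic.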
The Swarms ecosystem is built on the principle of extensibility through community-driven contributions. By submitting modular, typed, and well-documented tools to the Swarms Tools repository, you directly enhance the problem-solving power of intelligent agents.
This documentation serves as your blueprint for contributing high-quality, reusable functionality. From idea to implementation to integration, your efforts help shape the future of collaborative, agent-powered software.
We encourage all developers, data scientists, and domain experts to contribute meaningfully. Review existing tools for inspiration, or create something entirely novel.
To begin, fork the Swarms Tools repository and start building impactful, reusable tools that can scale across agents and use cases.
"},{"location":"corporate/2024_2025_goals/","title":"Swarms Goals & Milestone Tracking: A Vision for 2024 and Beyond","text":"As we propel Swarms into a new frontier, we\u2019ve set ambitious yet achievable goals for the coming years that will solidify Swarms as a leader in multi-agent orchestration. This document outlines our vision, the goals for 2024 and 2025, and how we track our progress through meticulously designed milestones and metrics.
"},{"location":"corporate/2024_2025_goals/#our-vision-the-agentic-ecosystem","title":"Our Vision: The Agentic Ecosystem","text":"We envision an ecosystem where agents are pervasive and serve as integral collaborators in business processes, daily life, and complex problem-solving. By leveraging the collective intelligence of swarms, we believe we can achieve massive gains in productivity, scalability, and impact. Our target is to establish the Swarms platform as the go-to environment for deploying and managing agents at an unprecedented scale\u2014making agents as common and indispensable as mobile apps are today. This future will see agents integrated into nearly every digital interaction, creating a seamless extension of human capability and reducing the cognitive load on individuals and organizations.
We believe that agents will transition from being simple tools to becoming full-fledged partners that can understand user needs, predict outcomes, and adapt to changes dynamically. Our vision is not just about increasing numbers; it\u2019s about building a smarter, more interconnected agentic ecosystem where every agent has a purpose and contributes to a collective intelligence that continuously evolves. By cultivating a diverse array of agents capable of handling various specialized tasks, we aim to create an environment in which these digital collaborators function as a cohesive whole\u2014one that can amplify human ingenuity and productivity beyond current limits.
"},{"location":"corporate/2024_2025_goals/#goals-for-2024-and-2025","title":"Goals for 2024 and 2025","text":"To achieve our vision, we have laid out a structured growth trajectory for Swarms, driven by clear numerical targets:
End of 2024: 500 Million Agents Currently, our platform hosts 45 million agents. By the end of 2024, our goal is to reach 500 million agents deployed on Swarms. This means achieving sustained exponential growth, which will require doubling or even tripling the total number of agents roughly every month from now until December 2024. Such growth will necessitate not only scaling infrastructure but also improving the ease with which users can develop and deploy agents, expanding educational resources, and fostering a vibrant community that drives innovation in agent design. To achieve this milestone, we plan to invest heavily in making our platform user-friendly, including simplifying onboarding processes and providing extensive educational content. Additionally, we aim to build out our infrastructure to support the necessary scalability and ensure the seamless operation of a growing number of agents. Beyond merely scaling in numbers, we are also focused on increasing the diversity of tasks that agents can perform, thereby enhancing the practical value of deploying agents on Swarms.
End of 2025: 10 Billion+ Agents The long-term vision extends further to reach 10 billion agents by the end of 2025. This ambitious goal reflects not only the organic growth of our user base but also the increasing role of swarms in business applications, personal projects, and global problem-solving initiatives. This goal requires continuous monthly doubling of agents and a clear roadmap of user engagement and deployment. By scaling to this level, we envision Swarms as a cornerstone of automation and productivity enhancement, where agents autonomously manage everything from mundane tasks to sophisticated strategic decisions, effectively enhancing human capabilities. This expansion will rely on the development of a robust ecosystem in which users can easily create, share, and enhance agents. We will foster partnerships with industries that can benefit from scalable agentic solutions\u2014spanning healthcare, finance, education, and beyond. Our strategy includes developing domain-specific templates and specialized agents that cater to niche needs, thereby making Swarms an indispensable solution for businesses and individuals alike.
Achieving these goals is not just about reaching numerical targets but ensuring that our users are deriving tangible value from Swarms and deploying agents effectively. To measure success, we\u2019ve defined several key performance indicators (KPIs) and milestones:
"},{"location":"corporate/2024_2025_goals/#1-growth-in-agent-deployment","title":"1. Growth in Agent Deployment","text":"The number of agents deployed per month will be our primary growth metric. With our goal of doubling agent count every month, this metric serves as an overall health indicator for platform adoption and usage. Growth in deployment indicates that our platform is attracting users who see value in creating and deploying agents to solve diverse challenges.
Key Milestones:
November 2024: Surpass 250 million agents.
December 2024: Reach 500 million agents.
June 2025: Break the 5 billion agents mark.
December 2025: Hit 10 billion agents.
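The doubling assumption behind these milestones can be sanity-checked with a short compounding calculation (starting from the 45 million figure cited above):

```python
def months_to_reach(start: float, target: float, growth_factor: float = 2.0) -> int:
    """Whole months of compounding growth needed for start to reach target."""
    months = 0
    agents = start
    while agents < target:
        agents *= growth_factor
        months += 1
    return months


# Doubling every month from 45 million agents:
print(months_to_reach(45e6, 500e6))  # 4 months to pass 500 million
print(months_to_reach(45e6, 10e9))   # 8 months to pass 10 billion
```

In other words, sustained monthly doubling clears the 500 million milestone in about four months and the 10 billion milestone in about eight, which is why the roadmap hinges on maintaining that growth rate.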
To accomplish this, we must continually expand our infrastructure, maintain scalability, and create a seamless user onboarding process. We\u2019ll ensure that adding agents is frictionless and that our platform can accommodate this rapid growth. By integrating advanced orchestration capabilities, we will enable agents to form more complex collaborations and achieve tasks that previously seemed out of reach. Furthermore, we will develop analytics tools to track the success and efficiency of these agents, giving users real-time feedback to optimize their deployment strategies.
"},{"location":"corporate/2024_2025_goals/#2-agents-deployed-per-user-engagement-indicator","title":"2. Agents Deployed Per User: Engagement Indicator","text":"A core belief of Swarms is that agents are here to make life easier for their users\u2014whether it\u2019s automating mundane tasks, handling complex workflows, or enhancing creative endeavors. Therefore, we measure the number of agents deployed per user per month as a key metric for engagement. Tracking this metric allows us to understand how effectively our users are utilizing the platform, and how deeply agents are becoming embedded into their workflows.
This metric ensures that users aren\u2019t just joining Swarms, but they are actively building and deploying agents to solve real problems. Our milestone for engagement is to see increasing growth in agents deployed per user month over month, which indicates a deeper integration of Swarms into daily workflows and business processes. We want our users to view Swarms as their go-to solution for any problem they face, which means ensuring that agents are providing real, tangible benefits.
Key Milestones:
November 2024: Achieve an average of 20 agents deployed per user each month.
June 2025: Target 100-200+ agents deployed per user.
To drive these numbers, we plan to improve user support, enhance educational materials, host workshops, and create an environment that empowers users to deploy agents for increasingly complex use-cases. Additionally, we will introduce templates and pre-built agents that users can customize, reducing the barriers to entry and enabling rapid deployment for new users. We are also developing gamified elements that reward users for deploying more agents and achieving milestones, fostering a competitive and engaging community atmosphere.
"},{"location":"corporate/2024_2025_goals/#3-active-vs-inactive-agents-measuring-churn","title":"3. Active vs. Inactive Agents: Measuring Churn","text":"The number of inactive agents per user is an essential metric for understanding our churn rate. An agent is considered inactive when it remains undeployed or unused for a prolonged period, indicating that it\u2019s no longer delivering value to the user. Churn metrics provide valuable insights into the effectiveness of our agents and highlight areas where improvements are needed.
We aim to minimize the number of inactive agents, as this will be a direct reflection of how well our agents are designed, integrated, and supported. A low churn rate means that users are finding long-term utility in their agents, which is key to our mission. Our platform\u2019s success depends on users consistently deploying agents that remain active and valuable over time.
Key Milestones:
December 2024: Ensure that no more than 30% of deployed agents are inactive.
December 2025: Aim for 10% or lower, reflecting strong agent usefulness and consistent platform value delivery.
Reducing churn will require proactive measures, such as automated notifications to users about inactive agents, recommending potential uses, and implementing agent retraining features to enhance their adaptability over time. Educating users on prompt engineering, tool engineering, and RAG engineering also helps reduce these numbers, since a high count of inactive agents is evidence that the user is not automating a business operation with those agents. We will also integrate machine learning models to predict agent inactivity and take corrective actions before agents become dormant. By offering personalized recommendations to users on how to enhance or repurpose inactive agents, we hope to ensure that all deployed agents are actively contributing value.
"},{"location":"corporate/2024_2025_goals/#milestones-and-success-criteria","title":"Milestones and Success Criteria","text":"To reach these ambitious goals, we have broken our roadmap down into a series of actionable milestones:
Infrastructure Scalability (Q1 2025) We will work on ensuring that our backend infrastructure can handle the scale required to reach 500 million agents by the end of 2024. This includes expanding server capacity, improving agent orchestration capabilities, and ensuring low latency across deployments. We will also focus on enhancing our database management systems to ensure efficient storage and retrieval of agent data, enabling seamless operation at a massive scale. Our infrastructure roadmap also includes implementing advanced load balancing techniques and predictive scaling mechanisms to ensure high availability and reliability.
Improved User Experience (Q2 2025) To encourage agent deployment and reduce churn, we will introduce new onboarding flows, agent-building wizards, and intuitive user interfaces. We will also implement in-depth tutorials and documentation to simplify agent creation for new users. By making agent-building accessible even to those without programming expertise, we will open the doors to a broader audience and drive exponential growth in the number of agents deployed. Additionally, we will integrate AI-driven suggestions and contextual help to assist users at every step of the process, making the platform as intuitive as possible.
Agent Marketplace (Q3 2025) Launching the Swarms Marketplace for agents, prompts, and tools will allow users to share, discover, and even monetize their agents. This marketplace will be a crucial driver in both increasing the number of agents deployed and reducing inactive agents, as it will create an ecosystem of continuously evolving and highly useful agents. Users will have the opportunity to browse agents that others have developed, which can serve as inspiration or as a starting point for their own projects. We will also introduce ratings, reviews, and community feedback mechanisms to ensure that the most effective agents are highlighted and accessible.
Community Engagement and Swarms Education (Ongoing) Workshops, webinars, and events will be conducted throughout 2024 and 2025 to engage new users and educate them on building effective agents. The goal is to ensure that every user becomes proficient in deploying swarms of agents for meaningful tasks. We will foster an active community where users can exchange ideas, get help, and collaborate on projects, ultimately driving forward the growth of the Swarms ecosystem. We also plan to establish a mentor program where experienced users can guide newcomers, helping them get up to speed more quickly and successfully deploy agents.
1. Developer Incentives One of our most important strategies will be the introduction of developer incentives. By providing rewards for creating agents, we foster an environment of creativity and encourage rapid growth in the number of useful agents on the platform. We will host hackathons, contests, and provide financial incentives to developers whose agents provide substantial value to the community. Additionally, we plan to create a tiered rewards system that acknowledges developers for the number of active deployments and the utility of their agents, motivating continuous improvement and innovation.
2. Strategic Partnerships We plan to form partnerships with major technology providers and industry players to scale Swarms adoption. Integrating Swarms into existing business software and industrial processes will drive significant growth in agent numbers and usage. These partnerships will allow Swarms to become embedded into existing workflows, making it easier for users to understand the value and immediately apply agents to solve real-world challenges. We are also targeting partnerships with educational institutions to provide Swarms as a learning platform for AI, encouraging students and researchers to contribute to our growing ecosystem.
3. User Feedback Loop To ensure we are on track, a continuous feedback loop with our user community will help us understand what agents are effective, which require improvements, and where we need to invest our resources to maximize engagement. Users\u2019 experiences will shape our platform evolution. We will implement regular surveys, feedback forms, and user interviews to gather insights, and use this data to drive iterative development that is directly aligned with user needs. In addition, we will create an open feature request forum where users can vote on the most important features they want to see, ensuring that we are prioritizing our community\u2019s needs.
4. Marketing and Awareness Campaigns Strategic campaigns to showcase the power of swarms in specific industries will highlight the versatility and impact of our agents. We plan to create case studies demonstrating how swarms solve complex problems in marketing, finance, customer service, and other verticals, and use these to attract a wider audience. Our content marketing strategy will include blogs, video tutorials, and success stories to help potential users visualize the transformative power of Swarms. We will also leverage social media campaigns and influencer partnerships to reach a broader audience and generate buzz around Swarms\u2019 capabilities.
5. Educational Initiatives To lower the barrier to entry for new users, we will invest heavily in educational content. This includes video tutorials, comprehensive guides, and in-platform learning modules. By making the learning process easy and engaging, we ensure that users quickly become proficient in creating and deploying agents, thereby increasing user satisfaction and reducing churn. A well-educated user base will lead to more agents being deployed effectively, contributing to our overall growth targets. We are also developing certification programs for users and developers, providing a structured pathway to become proficient in Swarms technology and gain recognition for their skills.
"},{"location":"corporate/2024_2025_goals/#the-path-ahead-building-towards-10-billion-agents","title":"The Path Ahead: Building Towards 10 Billion Agents","text":"To achieve our vision of 10 billion agents by the end of 2025, it\u2019s critical that we maintain an aggressive growth strategy while ensuring that agents are providing real value to users. This requires a deep focus on scalability, community growth, and user-centric development. It also demands a continuous feedback loop where insights from agent deployments and user interactions drive platform evolution. By creating an environment where agents are easy to develop, share, and integrate, we will achieve sustainable growth that benefits not just Swarms, but the broader AI community.
We envision swarms as a catalyst for democratizing access to AI. By enabling users across industries\u2014from healthcare to education to manufacturing\u2014to deploy agents that handle specialized tasks, we empower individuals and organizations to focus on creative, strategic endeavors rather than repetitive operational tasks. The journey to 10 billion agents is not just about scale; it\u2019s about creating meaningful and effective automation that transforms how work gets done. We believe that Swarms will ultimately reshape industries by making sophisticated automation accessible to all, driving a shift toward higher productivity and innovation.
"},{"location":"corporate/2024_2025_goals/#community-and-culture","title":"Community and Culture","text":"Swarms will also be emphasizing the community aspect, building a culture of collaboration among users, developers, and businesses. By fostering open communication and enabling the sharing of agents, we encourage knowledge transfer and network effects, which help drive overall growth. Our goal is to create an environment where agents not only work individually but evolve as a collective intelligence network\u2014working towards a post-scarcity civilization where every problem can be tackled by the right combination of swarms.
We see the community as the heartbeat of Swarms, driving innovation, providing support, and expanding the use-cases for agents. Whether it\u2019s through forums, community events, or user-generated content, we want Swarms to be the hub where people come together to solve the most pressing challenges of our time. By empowering our users and encouraging collaboration, we can ensure that the platform continuously evolves and adapts to new needs and opportunities. Additionally, we plan to establish local Swarms chapters worldwide, where users can meet in person to share knowledge, collaborate on projects, and build lasting relationships that strengthen the global Swarms community.
"},{"location":"corporate/2024_2025_goals/#conclusion-measuring-success-one-milestone-at-a-time","title":"Conclusion: Measuring Success One Milestone at a Time","text":"The path to 500 million agents by the end of 2024 and 10 billion agents by the end of 2025 is paved with strategic growth, infrastructure resilience, and user-centric improvements. Each milestone is a step closer to a fully realized vision of an agentic economy\u2014one where agents are ubiquitous, assisting individuals, businesses, and entire industries in achieving their goals more efficiently.
By tracking key metrics, such as growth in agent numbers, the rate of agent deployment per user, and churn reduction, we ensure that Swarms grows not only in size but also in effectiveness, adoption, and user satisfaction. Through a combination of infrastructure development, community engagement, incentives, and constant user feedback, we will create an ecosystem where agents thrive, users are empowered, and the entire platform evolves towards our ambitious vision.
This is the journey of Swarms\u2014a journey towards redefining how we interact with AI, solve complex problems, and enhance productivity. With each milestone, we get closer to a future where swarms of agents are the bedrock of human-machine collaboration and an integral part of our daily lives. The journey ahead is one of transformation, creativity, and collaboration, as we work together to create an AI-driven world that benefits everyone, enabling us to achieve more than we ever thought possible. Our commitment to building an agentic ecosystem is unwavering, and we are excited to see the incredible impact that swarms of agents will have on the future of work, innovation, and human potential.
"},{"location":"corporate/architecture/","title":"Architecture","text":""},{"location":"corporate/architecture/#1-introduction","title":"1. Introduction","text":"In today's rapidly evolving digital world, harnessing the collaborative power of multiple computational agents is more crucial than ever. 'Swarms' represents a bold stride in this direction\u2014a scalable and dynamic framework designed to enable swarms of agents to function in harmony and tackle complex tasks. This document serves as a comprehensive guide, elucidating the underlying architecture and strategies pivotal to realizing the Swarms vision.
"},{"location":"corporate/architecture/#2-the-vision","title":"2. The Vision","text":"At its heart, the Swarms framework seeks to emulate the collaborative efficiency witnessed in natural systems, like ant colonies or bird flocks. These entities, though individually simple, achieve remarkable outcomes through collaboration. Similarly, Swarms will unleash the collective potential of numerous agents, operating cohesively.
"},{"location":"corporate/architecture/#3-architecture-overview","title":"3. Architecture Overview","text":""},{"location":"corporate/architecture/#31-agent-level","title":"3.1 Agent Level","text":"The base level that serves as the building block for all further complexity.
"},{"location":"corporate/architecture/#mechanics","title":"Mechanics:","text":"Agents interact with the external world through their model and tools. The Vectorstore aids in retaining knowledge and facilitating inter-agent communication.
"},{"location":"corporate/architecture/#32-worker-infrastructure-level","title":"3.2 Worker Infrastructure Level","text":"Building on the agent foundation, enhancing capability and readiness for swarm integration.
"},{"location":"corporate/architecture/#mechanics_1","title":"Mechanics:","text":"Each worker is an enhanced agent, capable of operating independently or in sync with its peers, allowing for dynamic, scalable operations.
"},{"location":"corporate/architecture/#33-swarm-level","title":"3.3 Swarm Level","text":"Multiple Worker Nodes orchestrated into a synchronized, collaborative entity.
"},{"location":"corporate/architecture/#mechanics_2","title":"Mechanics:","text":"Nodes collaborate under the orchestrator's guidance, ensuring tasks are partitioned appropriately, executed, and results consolidated.
"},{"location":"corporate/architecture/#34-hivemind-level","title":"3.4 Hivemind Level","text":"Envisioned as a 'Swarm of Swarms'. An upper echelon of collaboration.
"},{"location":"corporate/architecture/#mechanics_3","title":"Mechanics:","text":"Multiple swarms, each a formidable force, combine their prowess under the Hivemind. This level tackles monumental tasks by dividing them among swarms.
"},{"location":"corporate/architecture/#4-building-the-framework-a-task-checklist","title":"4. Building the Framework: A Task Checklist","text":""},{"location":"corporate/architecture/#41-foundations-agent-level","title":"4.1 Foundations: Agent Level","text":"Serving as the memory and communication backbone, the Vectorstore must: * Facilitate rapid storage and retrieval of high-dimensional vectors. * Enable similarity-based lookups: Crucial for recognizing patterns or finding similar outputs. * Scale seamlessly as agent count grows.
"},{"location":"corporate/architecture/#52-orchestrator-driven-communication","title":"5.2 Orchestrator-Driven Communication","text":"The Swarms framework, once realized, will usher in a new era of computational efficiency and collaboration. While the roadmap ahead is intricate, with diligent planning, development, and testing, Swarms will redefine the boundaries of collaborative computing.
"},{"location":"corporate/architecture/#overview","title":"Overview","text":""},{"location":"corporate/architecture/#1-model","title":"1. Model","text":"Overview: The foundational level where a trained model (e.g., OpenAI GPT model) is initialized. It's the base on which further abstraction levels build upon. It provides the core capabilities to perform tasks, answer queries, etc.
Diagram:
[ Model (openai) ]\n
"},{"location":"corporate/architecture/#2-agent-level","title":"2. Agent Level","text":"Overview: At the agent level, the raw model is coupled with tools and a vector store, allowing it to be more than just a model. The agent can now remember, use tools, and become a more versatile entity ready for integration into larger systems.
Diagram:
+-----------------+\n| Agent           |\n| +-------------+ |\n| | Model       | |\n| +-------------+ |\n| +-------------+ |\n| | VectorStore | |\n| +-------------+ |\n| +-------------+ |\n| | Tools       | |\n| +-------------+ |\n+-----------------+\n
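The agent-level composition in the diagram, a model wrapped with tools and memory, can be sketched minimally. `SketchAgent` and its "use:<tool>" dispatch convention are hypothetical names invented for illustration; the real Swarms `Agent` class differs.

```python
class SketchAgent:
    """Minimal sketch of the agent level: a model callable plus tools
    and memory. The dispatch convention ("use:<tool> <args>") is
    invented for illustration; the real Swarms Agent API differs."""

    def __init__(self, model, tools, memory=None):
        self.model = model      # callable: task string -> response string
        self.tools = tools      # dict: tool name -> callable
        self.memory = memory if memory is not None else []

    def run(self, task):
        self.memory.append(task)   # retain the task for later recall
        response = self.model(task)
        # If the model's response names a tool, dispatch to it with the
        # remainder of the response as the tool's input.
        for name, tool in self.tools.items():
            prefix = "use:" + name + " "
            if response.startswith(prefix):
                return str(tool(response[len(prefix):]))
        return response
```

The point of the sketch is the composition: the raw model gains memory and tool use without changing the model itself, which is what readies it for the worker and swarm levels above.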
"},{"location":"corporate/architecture/#3-worker-infrastructure-level","title":"3. Worker Infrastructure Level","text":"Overview: The worker infrastructure is a step above individual agents. Here, an agent is paired with additional utilities like human input and other tools, making it a more advanced, responsive unit capable of complex tasks.
Diagram:
+----------------+\n| WorkerNode |\n| +-----------+ |\n| | Agent | |\n| | +-------+ | |\n| | | Model | | |\n| | +-------+ | |\n| | +-------+ | |\n| | | Tools | | |\n| | +-------+ | |\n| +-----------+ |\n| |\n| +-----------+ |\n| |Human Input| |\n| +-----------+ |\n| |\n| +-------+ |\n| | Tools | |\n| +-------+ |\n+----------------+\n
"},{"location":"corporate/architecture/#4-swarm-level","title":"4. Swarm Level","text":"Overview: At the swarm level, the orchestrator is central. It's responsible for assigning tasks to worker nodes, monitoring their completion, and handling the communication layer (for example, through a vector store or another universal communication mechanism) between worker nodes.
Diagram:
+------------+\n |Orchestrator|\n +------------+\n |\n +---------------------------+\n | |\n | Swarm-level Communication|\n | Layer (e.g. |\n | Vector Store) |\n +---------------------------+\n / | \\ \n +---------------+ +---------------+ +---------------+\n |WorkerNode 1 | |WorkerNode 2 | |WorkerNode n |\n | | | | | |\n +---------------+ +---------------+ +---------------+\n | Task Assigned | Task Completed | Communication |\n
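The orchestrator loop in the diagram, assigning tasks to worker nodes, awaiting completion, and consolidating results, can be sketched as follows. `orchestrate` and the `.run(task)` worker interface are assumptions for illustration, not the Swarms orchestrator API.

```python
from concurrent.futures import ThreadPoolExecutor

def orchestrate(tasks, workers, max_parallel=4):
    """Sketch of the swarm-level loop: assign tasks round-robin to
    worker nodes, run them in parallel, and consolidate results in
    task order. Any object with a .run(task) method stands in for a
    WorkerNode here."""
    def dispatch(indexed):
        i, task = indexed
        worker = workers[i % len(workers)]  # round-robin partitioning
        return worker.run(task)

    with ThreadPoolExecutor(max_workers=max_parallel) as pool:
        # pool.map preserves ordering, so results[i] answers tasks[i]
        return list(pool.map(dispatch, enumerate(tasks)))
```

A real orchestrator would route results through the communication layer (e.g. the vector store) rather than returning them directly, but the partition-execute-consolidate shape is the same.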
"},{"location":"corporate/architecture/#5-hivemind-level","title":"5. Hivemind Level","text":"Overview: At the Hivemind level, it's a multi-swarm setup, with an upper-layer orchestrator managing multiple swarm-level orchestrators. The Hivemind orchestrator is responsible for broader tasks like assigning macro-tasks to swarms, handling inter-swarm communications, and ensuring the overall system is functioning smoothly.
Diagram:
+--------+\n |Hivemind|\n +--------+\n |\n +--------------+\n |Hivemind |\n |Orchestrator |\n +--------------+\n / | \\ \n +------------+ +------------+ +------------+\n |Orchestrator| |Orchestrator| |Orchestrator|\n +------------+ +------------+ +------------+\n | | |\n+--------------+ +--------------+ +--------------+\n| Swarm-level| | Swarm-level| | Swarm-level|\n|Communication| |Communication| |Communication|\n| Layer | | Layer | | Layer |\n+--------------+ +--------------+ +--------------+\n / \\ / \\ / \\\n+-------+ +-------+ +-------+ +-------+ +-------+\n|Worker | |Worker | |Worker | |Worker | |Worker |\n| Node | | Node | | Node | | Node | | Node |\n+-------+ +-------+ +-------+ +-------+ +-------+\n
This setup allows the Hivemind level to operate at a grander scale, with the capability to manage hundreds or even thousands of worker nodes across multiple swarms efficiently.
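The Hivemind's division of monumental tasks among swarms can be sketched as one chunking step layered on top of the swarm-level orchestrators. `hivemind_dispatch` and the `.run_all(tasks)` interface are illustrative assumptions, not the actual Hivemind API.

```python
def hivemind_dispatch(macro_task, swarms):
    """Sketch of the Hivemind step: split one macro-task (a list of
    subtasks) into contiguous chunks, hand one chunk to each swarm-level
    orchestrator, then merge the consolidated results. Any object with
    a .run_all(tasks) -> list method stands in for a swarm here."""
    per_swarm = -(-len(macro_task) // len(swarms))  # ceiling division
    results = []
    for i, swarm in enumerate(swarms):
        chunk = macro_task[i * per_swarm:(i + 1) * per_swarm]
        if chunk:  # trailing swarms may receive nothing
            results.extend(swarm.run_all(chunk))
    return results
```

Because each swarm already handles its own worker coordination, the Hivemind layer only has to decide the macro-level split, which is what lets it scale to thousands of worker nodes.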
"},{"location":"corporate/architecture/#swarms-framework-development-strategy-checklist","title":"Swarms Framework Development Strategy Checklist","text":""},{"location":"corporate/architecture/#introduction","title":"Introduction","text":"The development of the Swarms framework requires a systematic and granular approach to ensure that each component is robust and that the overall framework is efficient and scalable. This checklist will serve as a guide to building Swarms from the ground up, breaking down tasks into small, manageable pieces.
"},{"location":"corporate/architecture/#1-agent-level-development","title":"1. Agent Level Development","text":""},{"location":"corporate/architecture/#11-model-integration","title":"1.1 Model Integration","text":"The Swarms framework represents a monumental leap in agent-based computation. This checklist provides a thorough roadmap for the framework's development, ensuring that every facet is addressed in depth. Through diligent adherence to this guide, the Swarms vision can be realized as a powerful, scalable, and robust system ready to tackle the challenges of tomorrow.
(Note: This document, given the word limit, provides a high-level overview. A full 5000-word document would delve into even more intricate details, nuances, potential pitfalls, and include considerations for security, user experience, compatibility, etc.)
"},{"location":"corporate/bounties/","title":"Bounty Program","text":"Our bounty program is an exciting opportunity for contributors to help us build the future of Swarms. By participating, you can earn rewards while contributing to a project that aims to revolutionize digital activity.
Here's how it works:
Check out our Roadmap: We've shared our roadmap detailing our short and long-term goals. These are the areas where we're seeking contributions.
Pick a Task: Choose a task from the roadmap that aligns with your skills and interests. If you're unsure, you can reach out to our team for guidance.
Get to Work: Once you've chosen a task, start working on it. Remember, quality is key. We're looking for contributions that truly make a difference.
Submit your Contribution: Once your work is complete, submit it for review. We'll evaluate your contribution based on its quality, relevance, and the value it brings to Swarms.
Earn Rewards: If your contribution is approved, you'll earn a bounty. The amount of the bounty depends on the complexity of the task, the quality of your work, and the value it brings to Swarms.
In the first phase, our focus is on building the basic infrastructure of Swarms. This includes developing key components like the Swarms class, integrating essential tools, and establishing task completion and evaluation logic. We'll also start developing our testing and evaluation framework during this phase. If you're interested in foundational work and have a knack for building robust, scalable systems, this phase is for you.
"},{"location":"corporate/bounties/#phase-2-enhancing-the-system","title":"Phase 2: Enhancing the System","text":"In the second phase, we'll focus on enhancing Swarms by integrating more advanced features, improving the system's efficiency, and refining our testing and evaluation framework. This phase involves more complex tasks, so if you enjoy tackling challenging problems and contributing to the development of innovative features, this is the phase for you.
"},{"location":"corporate/bounties/#phase-3-towards-super-intelligence","title":"Phase 3: Towards Super-Intelligence","text":"The third phase of our bounty program is the most exciting - this is where we aim to achieve super-intelligence. In this phase, we'll be working on improving the swarm's capabilities, expanding its skills, and fine-tuning the system based on real-world testing and feedback. If you're excited about the future of AI and want to contribute to a project that could potentially transform the digital world, this is the phase for you.
Remember, our roadmap is a guide, and we encourage you to bring your own ideas and creativity to the table. We believe that every contribution, no matter how small, can make a difference. So join us on this exciting journey and help us create the future of Swarms.
To participate in our bounty program, visit the Swarms Bounty Program Page. Let's build the future together!
"},{"location":"corporate/bounties/#bounties-for-roadmap-items","title":"Bounties for Roadmap Items","text":"To accelerate the development of Swarms and to encourage more contributors to join our journey towards automating every digital activity in existence, we are announcing a Bounty Program for specific roadmap items. Each bounty will be rewarded based on the complexity and importance of the task. Below are the items available for bounty:
For each bounty task, there will be a strict evaluation process to ensure the quality of the contribution. This process includes a thorough review of the code and extensive testing to ensure it meets our standards.
"},{"location":"corporate/bounties/#3-phase-testing-framework","title":"3-Phase Testing Framework","text":"To ensure the quality and efficiency of the Swarm, we will introduce a 3-phase testing framework which will also serve as our evaluation criteria for each of the bounty tasks.
"},{"location":"corporate/bounties/#phase-1-unit-testing","title":"Phase 1: Unit Testing","text":"In this phase, individual modules will be tested to ensure that they work correctly in isolation. Unit tests will be designed for all functions and methods, with an emphasis on edge cases.
"},{"location":"corporate/bounties/#phase-2-integration-testing","title":"Phase 2: Integration Testing","text":"After passing unit tests, we will test the integration of different modules to ensure they work correctly together. This phase will also test the interoperability of the Swarm with external systems and libraries.
"},{"location":"corporate/bounties/#phase-3-benchmarking-stress-testing","title":"Phase 3: Benchmarking & Stress Testing","text":"In the final phase, we will perform benchmarking and stress tests. We'll push the limits of the Swarm under extreme conditions to ensure it performs well in real-world scenarios. This phase will measure the performance, speed, and scalability of the Swarm under high load conditions.
By following this 3-phase testing framework, we aim to develop a reliable, high-performing, and scalable Swarm that can automate all digital activities.
"},{"location":"corporate/bounties/#reverse-engineering-to-reach-phase-3","title":"Reverse Engineering to Reach Phase 3","text":"To reach the Phase 3 level, we need to reverse engineer the tasks we need to complete. Here's an example of what this might look like:
Set Clear Expectations: Define what success looks like for each task. Be clear about the outputs and outcomes we expect. This will guide our testing and development efforts.
Develop Testing Scenarios: Create a comprehensive list of testing scenarios that cover both common and edge cases. This will help us ensure that our Swarm can handle a wide range of situations.
Write Test Cases: For each scenario, write detailed test cases that outline the exact steps to be followed, the inputs to be used, and the expected outputs.
Execute the Tests: Run the test cases on our Swarm, making note of any issues or bugs that arise.
Iterate and Improve: Based on the results of our tests, iterate and improve our Swarm. This may involve fixing bugs, optimizing code, or redesigning parts of our system.
Repeat: Repeat this process until our Swarm meets our expectations and passes all test cases.
By following these steps, we will systematically build, test, and improve our Swarm until it reaches the Phase 3 level. This methodical approach will help us ensure that we create a reliable, high-performing, and scalable Swarm that can truly automate all digital activities.
Let's shape the future of digital automation together!
"},{"location":"corporate/bounty_program/","title":"Swarms Bounty Program","text":"The Swarms Bounty Program is an initiative designed to incentivize contributors to help us improve and expand the Swarms framework. With an impressive $150,000 allocated for bounties, contributors have the unique opportunity to earn generous rewards while gaining prestigious recognition in the Swarms community of over 9,000 agent engineers. This program offers more than just financial benefits; it allows contributors to play a pivotal role in advancing the field of multi-agent collaboration and AI automation, while also growing their professional skills and network. By joining the Swarms Bounty Program, you become part of an innovative movement shaping the future of technology.
"},{"location":"corporate/bounty_program/#why-contribute","title":"Why Contribute?","text":"Generous Rewards: The bounty pool totals $150,000, ensuring that contributors are fairly compensated for their valuable work on successfully completed tasks. Each task comes with its own reward, reflecting its complexity and impact.
Community Status: Gain coveted recognition as a valued and active contributor within the thriving Swarms community. This status not only highlights your contributions but also builds your reputation among a network of AI engineers.
Skill Development: Collaborate on cutting-edge AI projects, hone your expertise in agent engineering, and learn practical skills that can be applied to real-world challenges in the AI domain.
Networking Opportunities: Work side-by-side with over 9,000 agent engineers in our active and supportive community. This network fosters collaboration, knowledge sharing, and mentorship opportunities that can significantly boost your career.
Check the Swarms Project Board for prioritized tasks and ongoing milestones. This board provides a clear view of project priorities and helps contributors align their efforts with the project's immediate goals.
Claim a Bounty: Await approval from the Swarms team before commencing work. Approval ensures clarity and avoids duplication of effort by other contributors.
Submit Your Work: Engage with reviewers to refine your submission if requested.
Earn Rewards:
To ensure high-quality contributions and streamline the process, please adhere to the following guidelines: - Familiarize yourself with the Swarms Contribution Guidelines. These guidelines outline coding standards, best practices, and procedures for contributing effectively.
Ensure your code is clean, modular, and well-documented. Contributions that adhere to the project's standards are more likely to be accepted.
Actively communicate with the Swarms team and other contributors. Clear communication helps resolve uncertainties, avoids duplication, and fosters collaboration within the community.
Become an active member of the Swarms community by joining our Discord server: Join Now. The Discord server serves as a hub for discussions, updates, and support.
Stay Updated: Keep track of the latest updates, announcements, and bounty opportunities by regularly checking the Discord channel and the GitHub repository.
Start Contributing:
Beyond monetary rewards, contributors gain intangible benefits that elevate their professional journey:
Recognition: Your contributions will be showcased to a community of over 9,000 engineers, increasing your visibility and credibility in the AI field.
Portfolio Building: Add high-impact contributions to your portfolio, demonstrating your skills and experience to potential employers or collaborators.
Knowledge Sharing: Learn from and collaborate with experts in agent engineering, gaining insights into the latest advancements and best practices in the field.
For any questions, support, or clarifications, reach out to the Swarms team:
Discord: Engage directly with the team and fellow contributors in our active channels.
GitHub: Open an issue for specific questions or suggestions related to the project. We\u2019re here to guide and assist you at every step of your contribution journey.
Join us in building the future of multi-agent collaboration and AI automation. With your contributions, we can create something truly extraordinary and transformative. Together, let\u2019s pave the way for groundbreaking advancements in technology and innovation!
"},{"location":"corporate/checklist/","title":"Swarms Framework Development Strategy Checklist","text":""},{"location":"corporate/checklist/#introduction","title":"Introduction","text":"The development of the Swarms framework requires a systematic and granular approach to ensure that each component is robust and that the overall framework is efficient and scalable. This checklist will serve as a guide to building Swarms from the ground up, breaking down tasks into small, manageable pieces.
"},{"location":"corporate/checklist/#1-agent-level-development","title":"1. Agent Level Development","text":""},{"location":"corporate/checklist/#11-model-integration","title":"1.1 Model Integration","text":"The Swarms framework represents a monumental leap in agent-based computation. This checklist provides a thorough roadmap for the framework's development, ensuring that every facet is addressed in depth. Through diligent adherence to this guide, the Swarms vision can be realized as a powerful, scalable, and robust system ready to tackle the challenges of tomorrow.
(Note: This document, given the word limit, provides a high-level overview. A full 5000-word document would delve into even more intricate details, nuances, potential pitfalls, and include considerations for security, user experience, compatibility, etc.)
"},{"location":"corporate/cost_analysis/","title":"Costs Structure of Deploying Autonomous Agents","text":""},{"location":"corporate/cost_analysis/#table-of-contents","title":"Table of Contents","text":"Autonomous agents are revolutionizing various industries, from self-driving cars to chatbots and customer service solutions. The prospect of automation and improved efficiency makes these agents attractive investments. However, like any other technological solution, deploying autonomous agents involves several cost elements that organizations need to consider carefully. This comprehensive guide aims to provide an exhaustive outline of the costs associated with deploying autonomous agents.
"},{"location":"corporate/cost_analysis/#2-our-time-generating-system-prompts-and-custom-tools","title":"2. Our Time: Generating System Prompts and Custom Tools","text":""},{"location":"corporate/cost_analysis/#description","title":"Description","text":"The deployment of autonomous agents often requires a substantial investment of time to develop system prompts and custom tools tailored to specific operational needs.
"},{"location":"corporate/cost_analysis/#costs","title":"Costs","text":"Task Time Required (Hours) Cost per Hour ($) Total Cost ($) System Prompts Design 50 100 5,000 Custom Tools Development 100 100 10,000 Total 150 15,000"},{"location":"corporate/cost_analysis/#3-consultancy-fees","title":"3. Consultancy Fees","text":""},{"location":"corporate/cost_analysis/#description_1","title":"Description","text":"Consultation is often necessary for navigating the complexities of autonomous agents. This includes system assessment, customization, and other essential services.
"},{"location":"corporate/cost_analysis/#costs_1","title":"Costs","text":"Service Fees ($) Initial Assessment 5,000 System Customization 7,000 Training 3,000 Total 15,000"},{"location":"corporate/cost_analysis/#4-model-inference-infrastructure","title":"4. Model Inference Infrastructure","text":""},{"location":"corporate/cost_analysis/#description_2","title":"Description","text":"The hardware and software needed for the agent's functionality, known as the model inference infrastructure, form a significant part of the costs.
"},{"location":"corporate/cost_analysis/#costs_2","title":"Costs","text":"Component Cost ($) Hardware 10,000 Software Licenses 2,000 Cloud Services 3,000 Total 15,000"},{"location":"corporate/cost_analysis/#5-deployment-and-continual-maintenance","title":"5. Deployment and Continual Maintenance","text":""},{"location":"corporate/cost_analysis/#description_3","title":"Description","text":"Once everything is in place, deploying the autonomous agents and their ongoing maintenance are the next major cost factors.
"},{"location":"corporate/cost_analysis/#costs_3","title":"Costs","text":"Task Monthly Cost ($) Annual Cost ($) Deployment 5,000 60,000 Ongoing Maintenance 1,000 12,000 Total 6,000 72,000"},{"location":"corporate/cost_analysis/#6-output-metrics-blogs-generation-rates","title":"6. Output Metrics: Blogs Generation Rates","text":""},{"location":"corporate/cost_analysis/#description_4","title":"Description","text":"To provide a sense of what an investment in autonomous agents can yield, we offer the following data regarding blogs that can be generated as an example of output.
"},{"location":"corporate/cost_analysis/#blogs-generation-rates","title":"Blogs Generation Rates","text":"Timeframe Number of Blogs Per Day 20 Per Week 140 Per Month 600"},{"location":"corporate/culture/","title":"Swarms Corp Culture Document","text":""},{"location":"corporate/culture/#our-mission-and-purpose","title":"Our Mission and Purpose","text":"At Swarms Corp, we believe in more than just building technology. We are advancing humanity by pioneering systems that allow agents\u2014both AI and human\u2014to collaborate seamlessly, working toward the betterment of society and unlocking a future of abundance. Our mission is everything, and each of us is here because we understand the transformative potential of our work. We are not just a company; we are a movement aimed at reshaping the future. We strive to create systems that can tackle the most complex challenges facing humanity, from climate change to inequality, with solutions that are powered by collective intelligence.
Our purpose goes beyond just technological advancement. We are here to create tools that empower people, uplift communities, and set a new standard for what technology can achieve when the mission is clear and the commitment is unwavering. We see every project as a step toward something greater\u2014an abundant future where human potential is limitless and artificial intelligence serves as a powerful ally to mankind.
"},{"location":"corporate/culture/#values-we-live-by","title":"Values We Live By","text":""},{"location":"corporate/culture/#1-hard-work-no-stone-unturned","title":"1. Hard Work: No Stone Unturned","text":"We believe that hard work is the foundation of all great achievements. At Swarms Corp, each member of the team is dedicated to putting in the effort required to solve complex problems. This isn\u2019t just about long hours\u2014it\u2019s about focused, intentional work that leads to breakthroughs. We hold each other to high standards, and we don\u2019t shy away from the hard paths when the mission calls for it. Every challenge we face is an opportunity to demonstrate our resilience and our commitment to excellence. We understand that the pursuit of groundbreaking innovation demands not just effort, but a relentless curiosity and the courage to face the unknown.
At Swarms Corp, we respect the grind because we know that transformative change doesn\u2019t happen overnight. It requires continuous effort, sacrifice, and an unwavering focus on the task at hand. We celebrate hard work, not because it\u2019s difficult, but because we understand its potential to transform ambitious ideas into tangible solutions. We honor the sweat equity that goes into building something that can truly make a difference.
"},{"location":"corporate/culture/#2-mission-above-everything","title":"2. Mission Above Everything","text":"Our mission is our guiding star. Every decision, every task, and every project must align with our overarching purpose: advancing humanity and creating a post-scarcity world. This means sometimes putting the collective goal ahead of individual preferences or comfort. We\u2019re here to do something much larger than ourselves, and we prioritize the mission with relentless commitment. We know that personal sacrifices will often be necessary, and we embrace that reality because the rewards of our mission are far greater than any individual gain.
When we say \"mission above everything,\" we mean that our focus is not just on immediate success, but on creating a lasting impact that will benefit future generations. Our mission provides meaning and direction to our daily efforts, and we see every task as a small yet crucial part of our broader vision. We remind ourselves constantly of why we are here and who we are working for\u2014not just our customers or stakeholders, but humanity as a whole.
"},{"location":"corporate/culture/#3-finding-the-shortest-path","title":"3. Finding the Shortest Path","text":"Innovation thrives on efficiency. At Swarms Corp, we value finding the shortest, most effective paths to reach our goals. We encourage everyone to question the status quo, challenge existing processes, and ask, \u201cIs there a better way to do this?\u201d Creativity means finding new routes\u2014whether by leveraging automation, questioning outdated steps, or collaborating to uncover insights faster. We honor those who seek smarter paths over conventional ones. Efficiency is not just about saving time\u2014it\u2019s about maximizing impact and ensuring that every ounce of effort drives meaningful progress.
Finding the shortest path is about eliminating unnecessary complexity and focusing our energy on what truly matters. We encourage a culture of continuous improvement, where each team member is empowered to innovate on processes, tools, and methodologies. The shortest path does not mean cutting corners\u2014it means removing obstacles, optimizing workflows, and focusing on high-leverage activities that bring us closer to our mission. We celebrate those who find elegant, effective solutions that others might overlook.
"},{"location":"corporate/culture/#4-advancing-humanity","title":"4. Advancing Humanity","text":"The ultimate goal of everything we do is to elevate humanity. We envision a world where intelligence\u2014both human and artificial\u2014works in harmony to improve lives, solve global challenges, and expand possibilities. This ethos drives our work, whether it\u2019s developing advanced AI systems, collaborating with others to push technological boundaries, or thinking deeply about how our creations can impact society in positive ways. Every line of code, every idea, and every strategy should move us closer to this vision.
Advancing humanity means we always think about the ethical implications of our work. We are deeply aware that the technology we create has the power to transform lives, and with that power comes the responsibility to ensure our contributions are always positive. We seek not only to push the boundaries of what technology can do but also to ensure that these advancements are inclusive and equitable. Our focus is on building a future where every person has access to the tools and opportunities they need to thrive.
Our vision is to bridge the gap between technology and humanity\u2019s most pressing needs. We aim to democratize intelligence, making it available for everyone, regardless of their background or resources. This is how we advance humanity\u2014not just through technological feats, but by ensuring that our innovations serve the greater good and uplift everyone.
"},{"location":"corporate/culture/#our-way-of-working","title":"Our Way of Working","text":"Radical Ownership: Each team member is not just a contributor but an owner of their domain. We take full responsibility for outcomes, follow through on our promises, and ensure that nothing falls through the cracks. We don\u2019t wait for permission\u2014we act, innovate, and lead. Radical ownership means understanding that our actions have a direct impact on the success of our mission. It\u2019s about proactive problem-solving and always stepping up when we see an opportunity to make a difference.
Honesty and Respect: We communicate openly and respect each other\u2019s opinions. Tough conversations are a natural part of building something impactful. We face challenges head-on with honesty and directness while maintaining a respectful and supportive atmosphere. Honesty fosters trust, and trust is the foundation of any high-performing team. We value feedback and see it as an essential tool for growth\u2014both for individuals and for the organization as a whole.
One Team, One Mission: Collaboration isn\u2019t just encouraged\u2014it\u2019s essential. We operate as a swarm, where each agent contributes to a greater goal, learning from each other, sharing knowledge, and constantly iterating together. We celebrate wins collectively and approach obstacles with a unified spirit. No one succeeds alone; every achievement is the result of collective effort. We lift each other up, and we know that our strength lies in our unity and shared purpose.
The Future is Ours to Shape: Our work is inherently future-focused. We\u2019re not satisfied with simply keeping up\u2014we want to set the pace. Every day, we take one step closer to a future where humanity\u2019s potential is limitless, where scarcity is eliminated, and where intelligence\u2014human and machine\u2014advances society. We are not passive participants in the future; we are active shapers of it. We imagine a better tomorrow, and then we take deliberate steps to create it. Our work today will define what the world looks like tomorrow.
Be Bold: Don\u2019t be afraid to take risks. Innovation requires experimentation, and sometimes that means making mistakes. We support each other in learning from failures and taking smart, calculated risks. Boldness is at the heart of progress. We want every member of Swarms Corp to feel empowered to think outside the box, propose unconventional ideas, and drive innovation. Mistakes are seen not as setbacks, but as opportunities for learning and growth.
Keep the Mission First: Every decision we make should be with our mission in mind. Ask yourself how your work advances the cause of creating an abundant future. The mission is the yardstick against which we measure our efforts, ensuring that everything we do pushes us closer to our ultimate goals. We understand that the mission is bigger than any one of us, and we strive to contribute meaningfully every day.
Find Solutions, Not Problems: While identifying issues is important, we value those who come with solutions. Embrace challenges as opportunities to innovate and find ways to make an impact. We foster a culture of proactive problem-solving where obstacles are seen as opportunities to exercise creativity. If something\u2019s broken, we fix it. If there\u2019s a better way, we find it. We expect our team members to be solution-oriented, always seeking ways to turn challenges into stepping stones for progress.
Think Big, Act Fast: We\u2019re not here to make small changes\u2014we\u2019re here to revolutionize how we think about intelligence, automation, and society. Dream big, but work with urgency. We are tackling problems of immense scale, and we must move with intention and speed. Thinking big means envisioning a world that is radically different and better, and acting fast means executing the steps to get us there without hesitation. We value ambition and the courage to move swiftly when the time is right.
Swarms Corp is a place for dreamers and doers, for those who are driven by purpose and are unafraid of the work required to achieve it. We commit to providing you with the tools, support, and environment you need to contribute meaningfully to our mission. We are here to advance humanity together, one agent, one solution, one breakthrough at a time. We pledge to nurture an environment that encourages creativity, collaboration, and bold thinking. Here, you will find a community that celebrates your wins, supports you through challenges, and pushes you to be your best self.
Our commitment also includes ensuring that your voice is heard. We are building the future together, and every perspective matters. We strive to create an inclusive space where diversity of thought is welcomed, and where each team member feels valued for their unique contributions. At Swarms Corp, you are not just part of a team\u2014you are part of a mission that aims to change the course of humanity for the better. Together, we\u2019ll make the impossible possible, one breakthrough at a time.
"},{"location":"corporate/data_room/","title":"Swarms Data Room","text":""},{"location":"corporate/data_room/#table-of-contents","title":"Table of Contents","text":"Introduction
Overview of the Company
Vision and Mission Statement
Executive Summary
Corporate Documents
Articles of Incorporation
Bylaws
Shareholder Agreements
Board Meeting Minutes
Company Structure and Org Chart
Financial Information
Historical Financial Statements
Income Statements
Balance Sheets
Cash Flow Statements
Financial Projections and Forecasts
Cap Table
Funding History and Use of Funds
Products and Services
Detailed Descriptions of Products/Services
Product Development Roadmap
User Manuals and Technical Specifications
Case Studies and Use Cases
Swarms provides automation-as-a-service through swarms of autonomous agents that work together as a team. We enable our customers to build, deploy, and scale production-grade multi-agent applications to automate real-world tasks.
"},{"location":"corporate/data_room/#vision","title":"Vision","text":"Our vision for 2024 is to provide the most reliable infrastructure for deploying autonomous agents into the real world through the Swarm Cloud, our premier cloud platform for the scalable deployment of Multi-Modal Autonomous Agents. The platform focuses on delivering maximum value to users by only taking a small fee when utilizing the agents for the hosted compute power needed to host the agents.
"},{"location":"corporate/data_room/#executive-summary","title":"Executive Summary","text":"The Swarm Corporation aims to enable AI models to automate complex workflows and operations, not just singular low-value tasks. We believe collaboration between multiple agents can overcome limitations of individual agents for reasoning, planning, etc. This will allow automation of processes in mission-critical industries like security, logistics, and manufacturing where AI adoption is currently low.
We provide an open source framework to deploy production-grade multi-modal agents in just a few lines of code. This builds our user base, recruits talent, generates customer feedback to improve our products, and earns awareness and trust.
Our business model focuses on customer satisfaction, openness, integration with other tools/platforms, and production-grade reliability.
Our go-to-market strategy is to bring the framework to product-market fit with over 50K weekly recurring users, then secure high-value contracts in target industries. Long-term monetization will come via microtransactions, usage-based pricing, and subscriptions.
The team has spent thousands of hours building and optimizing autonomous agents. Leadership includes AI engineers, product experts, open source contributors, and community builders.
Key milestones: reach 80K framework users by January 2024, start contracts in target verticals, and introduce commercial products in 2025 with various pricing models.
"},{"location":"corporate/data_room/#resources","title":"Resources","text":"This section is dedicated entirely for corporate documents.
Cap Table
Cashflow Prediction Sheet
Swarms is an open source Python framework that enables developers to build seamless, reliable, and scalable multi-agent orchestration through modularity, customization, and precision.
We could also create an AI influencer run by a swarm, letting it build a whole identity and generate images, memes, and other content for Twitter, Reddit, and other platforms.
We also need something, whether a general-purpose tool, a swarm, or both, that connects the calendars, events, and initiatives of all the AI communities: LangChain, LAION, EleutherAI, LessWrong, Gato, Rob Miles, ChatGPT hackers, and so on.
Swarm of AI influencers to spread marketing
Delegation System to better organize teams: Start with a team of passionate humans who self-report their skills and strengths so the agent knows who to delegate to. Then feed the agent a large task list that it breaks down into actionable steps and prompts specific team members to complete. It could even suggest breakout teams of a few people with complementary skills to tackle more complex tasks. A live board could also update each time a team member completes something, encouraging momentum and tracking progress.
Our goal is to ensure that Swarms is intuitive and easy to use for all users, regardless of their level of technical expertise. This includes the developers who implement Swarms in their applications, as well as end users who interact with the implemented systems.
"},{"location":"corporate/design/#tactics","title":"Tactics","text":"Swarms should be dependable and trustworthy. Users should be able to count on Swarms to perform consistently and without error or failure.
"},{"location":"corporate/design/#tactics_1","title":"Tactics","text":"Swarms should offer high performance and rapid response times. The system should be able to handle requests and tasks swiftly.
"},{"location":"corporate/design/#tactics_2","title":"Tactics","text":"Swarms should be able to grow in capacity and complexity without compromising performance or reliability. It should be able to handle increased workloads gracefully.
"},{"location":"corporate/design/#tactics_3","title":"Tactics","text":"Swarms is designed with a philosophy of simplicity and reliability. We believe that software should be a tool that empowers users, not a hurdle that they need to overcome. Therefore, our focus is on usability, reliability, speed, and scalability. We want our users to find Swarms intuitive and dependable, fast and adaptable to their needs. This philosophy guides all of our design and development decisions.
"},{"location":"corporate/design/#swarm-architecture-design-document","title":"Swarm Architecture Design Document","text":""},{"location":"corporate/design/#overview","title":"Overview","text":"The goal of the Swarm Architecture is to provide a flexible and scalable system to build swarm intelligence models that can solve complex problems. This document details the proposed design to create a plug-and-play system, which makes it easy to create custom swarms, and provides pre-configured swarms with multi-modal agents.
"},{"location":"corporate/design/#design-principles","title":"Design Principles","text":"The BaseSwarm is an abstract base class which defines the basic structure of a swarm and the methods that need to be implemented. Any new swarm should inherit from this class and implement the required methods.
"},{"location":"corporate/design/#swarm-classes","title":"Swarm Classes","text":"Various Swarm classes can be implemented inheriting from the BaseSwarm class. Each swarm class should implement the required methods for initializing the components, worker nodes, and boss node, and running the swarm.
Pre-configured swarm classes with multi-modal agents can be provided for ease of use. These classes come with a default configuration of tools and agents, which can be used out of the box.
"},{"location":"corporate/design/#tools-and-agents","title":"Tools and Agents","text":"Tools and agents are the components that provide the actual functionality to the swarms. They can be language models, AI assistants, vector stores, or any other components that can help in problem solving.
To make the system plug-and-play, a standard interface should be defined for these components. Any new tool or agent should implement this interface, so that it can be easily plugged into the system.
"},{"location":"corporate/design/#usage","title":"Usage","text":"Users can either use pre-configured swarms or create their own custom swarms.
To use a pre-configured swarm, they can simply instantiate the corresponding swarm class and call the run method with the required objective.
To create a custom swarm, they subclass BaseSwarm and implement the required methods:
# Using a pre-configured swarm\nswarm = PreConfiguredSwarm(openai_api_key)\nswarm.run_swarms(objective)\n\n# Creating a custom swarm\nclass CustomSwarm(BaseSwarm):\n    # Implement the required methods here\n    ...\n\nswarm = CustomSwarm(openai_api_key)\nswarm.run_swarms(objective)\n
"},{"location":"corporate/design/#conclusion","title":"Conclusion","text":"This Swarm Architecture design provides a scalable and flexible system for building swarm intelligence models. The plug-and-play design allows users to easily use pre-configured swarms or create their own custom swarms.
"},{"location":"corporate/design/#swarming-architectures","title":"Swarming Architectures","text":"Sure, below are five different swarm architectures with their base requirements and an abstract class that processes these components:
Hierarchical Swarm: This architecture is characterized by a boss/worker relationship. The boss node takes high-level decisions and delegates tasks to the worker nodes. The worker nodes perform tasks and report back to the boss node.
Homogeneous Swarm: In this architecture, all nodes in the swarm are identical and contribute equally to problem-solving. Each node has the same capabilities.
Heterogeneous Swarm: This architecture contains different types of nodes, each with its specific capabilities. This diversity can lead to more robust problem-solving.
Competitive Swarm: In this architecture, nodes compete with each other to find the best solution. The system may use a selection process to choose the best solutions.
Cooperative Swarm: In this architecture, nodes work together and share information to find solutions. The focus is on cooperation rather than competition.
Grid-based Swarm: This architecture positions agents on a grid, where they can only interact with their neighbors. This is useful for simulations, especially in fields like ecology or epidemiology.
Particle Swarm Optimization (PSO) Swarm: In this architecture, each agent represents a potential solution to an optimization problem. Agents move in the solution space based on their own and their neighbors' past performance. PSO is especially useful for continuous numerical optimization problems.
Ant Colony Optimization (ACO) Swarm: Inspired by ant behavior, this architecture has agents leave a pheromone trail that other agents follow, reinforcing the best paths. It's useful for problems like the traveling salesperson problem.
Genetic Algorithm (GA) Swarm: In this architecture, agents represent potential solutions to a problem. They can 'breed' to create new solutions and can undergo 'mutations'. GA swarms are good for search and optimization problems.
Stigmergy-based Swarm: In this architecture, agents communicate indirectly by modifying the environment, and other agents react to such modifications. It's a decentralized method of coordinating tasks.
These architectures all have unique features and requirements, but they share the need for agents (often implemented as language models) and a mechanism for agents to communicate or interact, whether it's directly through messages, indirectly through the environment, or implicitly through a shared solution space. Some also require specific data structures, like a grid or problem space, and specific algorithms, like for evaluating solutions or updating agent positions.
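As one concrete example from the taxonomy above, the PSO update rule can be sketched in a few lines of plain Python. This is an illustrative toy for continuous minimization, not the Swarms library API; the function name, bounds, and coefficient values are all assumptions. Each particle's velocity blends inertia, attraction to its own best-known position, and attraction to the swarm's best-known position.

```python
import random


def pso_minimize(f, dim=2, n_particles=20, iters=200, seed=0):
    """Toy particle swarm optimization: minimize f over [-5, 5]^dim."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    best = [p[:] for p in pos]            # each particle's best position so far
    best_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: best_f[i])
    gbest, gbest_f = best[g][:], best_f[g]  # swarm-wide best

    w, c1, c2 = 0.7, 1.5, 1.5             # inertia, cognitive, social weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (best[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fx = f(pos[i])
            if fx < best_f[i]:
                best[i], best_f[i] = pos[i][:], fx
                if fx < gbest_f:
                    gbest, gbest_f = pos[i][:], fx
    return gbest, gbest_f
```

Running it on the sphere function `f(x) = sum(x_i^2)` drives the swarm toward the origin, which is the shared-solution-space interaction pattern the PSO architecture relies on.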
"},{"location":"corporate/distribution/","title":"Swarms Monetization Strategy","text":"This strategy includes a variety of business models, potential revenue streams, cashflow structures, and customer identification methods. Let's explore these further.
"},{"location":"corporate/distribution/#business-models","title":"Business Models","text":"Platform as a Service (PaaS): Provide the Swarms AI platform on a subscription basis, charged monthly or annually. This could be tiered based on usage and access to premium features.
API Usage-based Pricing: Charge customers based on their usage of the Swarms API. The more requests made, the higher the fee.
Managed Services: Offer complete end-to-end solutions where you manage the entire AI infrastructure for the clients. This could be on a contract basis with a recurring fee.
Training and Certification: Provide Swarms AI training and certification programs for interested developers and businesses. These could be monetized as separate courses or subscription-based access.
Partnerships: Collaborate with large enterprises and offer them dedicated Swarm AI services. These could be performance-based contracts, ensuring a mutually beneficial relationship.
Data as a Service (DaaS): Leverage the data generated by Swarms for insights and analytics, providing valuable business intelligence to clients.
Subscription Fees: This would be the main revenue stream from providing the Swarms platform as a service.
Usage Fees: Additional revenue can come from usage fees for businesses that have high demand for Swarms API.
Contract Fees: From offering managed services and bespoke solutions to businesses.
Training Fees: Revenue from providing training and certification programs to developers and businesses.
Partnership Contracts: Large-scale projects with enterprises, involving dedicated Swarm AI services, could provide substantial income.
Data Insights: Revenue from selling valuable business intelligence derived from Swarm's aggregated and anonymized data.
Businesses Across Sectors: Any business seeking to leverage AI for automation, efficiency, and data insights could be a potential customer. This includes sectors like finance, eCommerce, logistics, healthcare, and more.
Developers: Both freelance and those working in organizations could use Swarms to enhance their projects and services.
Enterprises: Large enterprises looking to automate and optimize their operations could greatly benefit from Swarms.
Educational Institutions: Universities and research institutions could leverage Swarms for research and teaching purposes.
Landing Page Creation: Develop a dedicated product page on apac.ai for Swarms.
Hosted Swarms API: Launch a cloud-based Swarms API service. It should be highly reliable, with robust documentation to attract daily users.
Consumer and Enterprise Subscription Service: Launch a comprehensive subscription service on The Domain. This would provide users with access to a wide array of APIs and data streams.
Dedicated Capacity Deals: Partner with large enterprises to offer them dedicated Swarm AI solutions for automating their operations.
Enterprise Partnerships: Develop partnerships with large enterprises for extensive contract-based projects.
Integration with Collaboration Platforms: Develop Swarms bots for platforms like Discord and Slack, charging users a subscription fee for access.
Personal Data Instances: Offer users dedicated instances of all their data that the Swarm can query as needed.
Browser Extension: Develop a browser extension that integrates with the Swarms platform, offering users a more seamless experience.
Remember, customer satisfaction and a value-centric approach are at the core of any successful monetization strategy. It's essential to continuously iterate and improve the product based on customer feedback and evolving market needs.
"},{"location":"corporate/distribution/#other-ideas","title":"Other ideas","text":"Platform as a Service (PaaS): Create a cloud-based platform that allows users to build, run, and manage applications without the complexity of maintaining the infrastructure. You could charge users a subscription fee for access to the platform and provide different pricing tiers based on usage levels. This could be an attractive solution for businesses that do not have the capacity to build or maintain their own swarm intelligence solutions.
Professional Services: Offer consultancy and implementation services to businesses looking to utilize the Swarm technology. This could include assisting with integration into existing systems, offering custom development services, or helping customers to build specific solutions using the framework.
Education and Training: Create a certification program for developers or companies looking to become proficient with the Swarms framework. This could be sold as standalone courses, or bundled with other services.
Managed Services: Some companies may prefer to outsource the management of their Swarm-based systems. A managed services solution could take care of all the technical aspects, from hosting the solution to ensuring it runs smoothly, allowing the customer to focus on their core business.
Data Analysis and Insights: Swarm intelligence can generate valuable data and insights. By anonymizing and aggregating this data, you could provide industry reports, trend analysis, and other valuable insights to businesses.
As for the type of platform, Swarms can be offered as a cloud-based solution given its scalability and flexibility. This would also allow you to apply a SaaS/PaaS type monetization model, which provides recurring revenue.
Potential customers could range from small to large enterprises in various sectors such as logistics, eCommerce, finance, and technology, who are interested in leveraging artificial intelligence and machine learning for complex problem solving, optimization, and decision-making.
Product Brief Monetization Strategy:
Product Name: Swarms.AI Platform
Product Description: A cloud-based AI and ML platform harnessing the power of swarm intelligence.
Platform as a Service (PaaS): Offer tiered subscription plans (Basic, Premium, Enterprise) to accommodate different usage levels and business sizes.
Professional Services: Offer consultancy and custom development services to tailor the Swarms solution to the specific needs of the business.
Education and Training: Launch an online Swarms.AI Academy with courses and certifications for developers and businesses.
Managed Services: Provide a premium, fully-managed service offering that includes hosting, maintenance, and 24/7 support.
Data Analysis and Insights: Offer industry reports and customized insights generated from aggregated and anonymized Swarm data.
Potential Customers: Enterprises in sectors such as logistics, eCommerce, finance, and technology. This can be sold globally, provided there's an internet connection.
Marketing Channels: Online marketing (SEO, Content Marketing, Social Media), Partnerships with tech companies, Direct Sales to Enterprises.
This strategy is designed to provide multiple revenue streams, while ensuring the Swarms.AI platform is accessible and useful to a range of potential customers.
AI Solution as a Service: By offering the Swarms framework as a service, businesses can access and utilize the power of multiple LLM agents without the need to maintain the infrastructure themselves. Subscription can be tiered based on usage and additional features.
Integration and Custom Development: Offer integration services to businesses wanting to incorporate the Swarms framework into their existing systems. Also, you could provide custom development for businesses with specific needs not met by the standard framework.
Training and Certification: Develop an educational platform offering courses, webinars, and certifications on using the Swarms framework. This can serve both developers seeking to broaden their skills and businesses aiming to train their in-house teams.
Managed Swarms Solutions: For businesses that prefer to outsource their AI needs, provide a complete solution which includes the development, maintenance, and continuous improvement of swarms-based applications.
Data Analytics Services: Leveraging the aggregated insights from the AI swarms, you could offer data analytics services. Businesses can use these insights to make informed decisions and predictions.
Type of Platform:
Cloud-based platform or Software as a Service (SaaS) will be a suitable model. It offers accessibility, scalability, and ease of updates.
Target Customers:
The technology can be beneficial for businesses across sectors like eCommerce, technology, logistics, finance, healthcare, and education, among others.
Product Brief Monetization Strategy:
Product Name: Swarms.AI
AI Solution as a Service: Offer different tiered subscriptions (Standard, Premium, and Enterprise) each with varying levels of usage and features.
Integration and Custom Development: Offer custom development and integration services, priced based on the scope and complexity of the project.
Training and Certification: Launch the Swarms.AI Academy with courses and certifications, available for a fee.
Managed Swarms Solutions: Offer fully managed solutions tailored to business needs, priced based on scope and service level agreements.
Data Analytics Services: Provide insightful reports and data analyses, which can be purchased on a one-off basis or through a subscription.
By offering a variety of services and payment models, Swarms.AI will be able to cater to a diverse range of business needs, from small start-ups to large enterprises. Marketing channels would include digital marketing, partnerships with technology companies, presence in tech events, and direct sales to targeted industries.
"},{"location":"corporate/distribution/#roadmap_1","title":"Roadmap","text":"Create a landing page for swarms apac.ai/product/swarms
Create a Hosted Swarms API that anybody can use without needing large-scale GPU infrastructure, with usage-based pricing. Prerequisites for success: Swarms must be extremely reliable, with world-class documentation and many daily users. How do we get many daily users? By providing a seamless and fluid experience. How do we create that experience? By writing good, modular code that gives the user feedback in times of distress and ultimately accomplishes the user's tasks.
Hosted consumer and enterprise subscription as a service on The Domain, where users can interact with 1000s of APIs and ingest 1000s of different data streams.
Hosted dedicated-capacity deals with large enterprises to automate many of their operations with Swarms, for monthly subscriptions of $300,000+.
Partnerships with enterprises: massive contracts with performance-based fees.
Build a Discord bot and/or Slack bot that works with users' personal data, monetized via a subscription, plus a browser extension.
Each user gets a dedicated Ocean instance of all their data so the swarm can query it as needed.
Swarms is a powerful AI platform leveraging the transformative potential of Swarm Intelligence. Our ambition is to monetize this groundbreaking technology in ways that generate significant cashflow while providing extraordinary value to our customers.
Here we outline our strategic monetization pathways and provide a roadmap that plots our course to future success.
"},{"location":"corporate/distribution/#i-business-models","title":"I. Business Models","text":"Platform as a Service (PaaS): We provide the Swarms platform as a service, billed on a monthly or annual basis. Subscriptions can range from $50 for basic access, to $500+ for premium features and extensive usage.
API Usage-based Pricing: Customers are billed according to their use of the Swarms API. Starting at $0.01 per request, this creates a cashflow model that rewards extensive platform usage.
Managed Services: We offer end-to-end solutions, managing clients' entire AI infrastructure. Contract fees start from $100,000 per month, offering both a sustainable cashflow and considerable savings for our clients.
Training and Certification: A Swarms AI training and certification program is available for developers and businesses. Course costs can range from $200 to $2,000, depending on course complexity and duration.
Partnerships: We forge collaborations with large enterprises, offering dedicated Swarm AI services. These performance-based contracts start from $1,000,000, creating a potentially lucrative cashflow stream.
Data as a Service (DaaS): Swarms-generated data is mined for insights and analytics, with business intelligence reports offered from $500 each.
Subscription Fees: From $50 to $500+ per month for platform access.
Usage Fees: From $0.01 per API request, generating income from high platform usage.
Contract Fees: Starting from $100,000 per month for managed services.
Training Fees: From $200 to $2,000 for individual courses or subscription access.
Partnership Contracts: Contracts starting from $100,000, offering major income potential.
Data Insights: Business intelligence reports starting from $500.
Businesses Across Sectors: Our offerings cater to businesses across finance, eCommerce, logistics, healthcare, and more.
Developers: Both freelancers and organization-based developers can leverage Swarms for their projects.
Enterprises: Swarms offers large enterprises solutions for optimizing operations.
Educational Institutions: Universities and research institutions can use Swarms for research and teaching.
Landing Page Creation: Develop a dedicated Swarms product page on apac.ai.
Hosted Swarms API: Launch a reliable, well-documented cloud-based Swarms API service.
Consumer and Enterprise Subscription Service: Launch an extensive subscription service on The Domain, providing wide-ranging access to APIs and data streams.
Dedicated Capacity Deals: Offer large enterprises dedicated Swarm AI solutions, starting from $300,000 monthly subscription.
Enterprise Partnerships: Develop performance-based contracts with large enterprises.
Integration with Collaboration Platforms: Develop Swarms bots for platforms like Discord and Slack, charging a subscription fee for access.
Personal Data Instances: Offer users dedicated data instances that the Swarm can query as needed.
Browser Extension: Develop a browser extension that integrates with the Swarms platform for seamless user experience.
Our North Star remains customer satisfaction and value provision. As we embark on this journey, we continuously refine our product based on customer feedback and evolving market needs, ensuring we lead in the age of AI-driven solutions.
"},{"location":"corporate/distribution/#platform-distribution-strategy-for-swarms","title":"Platform Distribution Strategy for Swarms","text":"*Note: This strategy aims to diversify the presence of 'Swarms' across various platforms and mediums while focusing on monetization and value creation for its users.
"},{"location":"corporate/distribution/#1-framework","title":"1. Framework:","text":""},{"location":"corporate/distribution/#objective","title":"Objective:","text":"To offer Swarms as an integrated solution within popular frameworks to ensure that developers and businesses can seamlessly incorporate its functionalities.
"},{"location":"corporate/distribution/#strategy","title":"Strategy:","text":"Language/Framework Integration:
Monetization:
Promotion:
To provide a scalable solution for developers and businesses that want direct access to Swarms' functionalities without integrating the entire framework.
"},{"location":"corporate/distribution/#strategy_1","title":"Strategy:","text":"API Endpoints:
Monetization:
Promotion:
To provide a centralized web platform where users can directly access and engage with Swarms' offerings.
"},{"location":"corporate/distribution/#strategy_2","title":"Strategy:","text":"User-Friendly Interface:
Monetization:
Promotion:
To cater to the non-developer audience, allowing them to leverage Swarms' features without any coding expertise.
"},{"location":"corporate/distribution/#strategy_3","title":"Strategy:","text":"Drag-and-Drop Interface:
Monetization:
Promotion:
To create an ecosystem where third-party developers can contribute, and users can enhance their Swarms experience.
"},{"location":"corporate/distribution/#strategy_4","title":"Strategy:","text":"Open API for Development:
Monetization:
Promotion:
Browser Extension: Athena, a browser extension for deep browser automation, monetized via subscription and usage-based pricing.
Mobile Application: Develop a mobile app version for Swarms to tap into the vast mobile user base.
E-commerce Integrations: Platforms like Shopify, WooCommerce, where Swarms can add value to sellers.
Web Browser Extensions: Chrome, Firefox, and Edge extensions that bring Swarms features directly to users.
Podcasting Platforms: Swarms-themed content on platforms like Spotify and Apple Podcasts to reach auditory learners.
Virtual Reality (VR) Platforms: Integration with VR experiences on Oculus or Viveport.
Gaming Platforms: Tools or plugins for game developers on Steam, Epic Games.
Decentralized Platforms: Using blockchain, create decentralized apps (DApps) versions of Swarms.
Chat Applications: Integrate with popular messaging platforms like WhatsApp, Telegram, Slack.
AI Assistants: Integration with Siri, Alexa, Google Assistant to provide Swarms functionalities via voice commands.
Freelancing Websites: Offer tools or services for freelancers on platforms like Upwork, Fiverr.
Online Forums: Platforms like Reddit, Quora, where users can discuss or access Swarms.
Educational Platforms: Sites like Khan Academy, Udacity where Swarms can enhance learning experiences.
Digital Art Platforms: Integrate with platforms like DeviantArt, Behance.
Open-source Repositories: Hosting Swarms on GitHub, GitLab, Bitbucket with open-source plugins.
Augmented Reality (AR) Apps: Create AR experiences powered by Swarms.
Smart Home Devices: Integrate Swarms' functionalities into smart home devices.
Newsletters: Platforms like Substack, where Swarms insights can be shared.
Interactive Kiosks: In malls, airports, and other public places.
IoT Devices: Incorporate Swarms in devices like smart fridges, smartwatches.
Collaboration Tools: Platforms like Trello, Notion, offering Swarms-enhanced productivity.
Dating Apps: An AI-enhanced matching algorithm powered by Swarms.
Music Platforms: Integrate with Spotify, SoundCloud for music-related AI functionalities.
Recipe Websites: Platforms like AllRecipes, Tasty with AI-recommended recipes.
Travel & Hospitality: Integrate with platforms like Airbnb, Tripadvisor for AI-based recommendations.
Language Learning Apps: Duolingo, Rosetta Stone integrations.
Virtual Events Platforms: Websites like Hopin, Zoom where Swarms can enhance the virtual event experience.
Social Media Management: Tools like Buffer, Hootsuite with AI insights by Swarms.
Fitness Apps: Platforms like MyFitnessPal, Strava with AI fitness insights.
Mental Health Apps: Integration into apps like Calm, Headspace for AI-driven wellness.
E-books Platforms: Amazon Kindle, Audible with AI-enhanced reading experiences.
Sports Analysis Tools: Websites like ESPN, Sky Sports where Swarms can provide insights.
Financial Tools: Integration into platforms like Mint, Robinhood for AI-driven financial advice.
Public Libraries: Digital platforms of public libraries for enhanced reading experiences.
3D Printing Platforms: Websites like Thingiverse, Shapeways with AI customization.
Meme Platforms: Websites like Memedroid, 9GAG where Swarms can suggest memes.
Astronomy Apps: Platforms like Star Walk, NASA's Eyes with AI-driven space insights.
Weather Apps: Integration into Weather.com, AccuWeather for predictive analysis.
Sustainability Platforms: Websites like Ecosia, GoodGuide with AI-driven eco-tips.
Fashion Apps: Platforms like ASOS, Zara with AI-based style recommendations.
Pet Care Apps: Integration into PetSmart, Chewy for AI-driven pet care tips.
Real Estate Platforms: Websites like Zillow, Realtor with AI-enhanced property insights.
DIY Platforms: Websites like Instructables, DIY.org with AI project suggestions.
Genealogy Platforms: Ancestry, MyHeritage with AI-driven family tree insights.
Car Rental & Sale Platforms: Integration into AutoTrader, Turo for AI-driven vehicle suggestions.
Wedding Planning Websites: Platforms like Zola, The Knot with AI-driven planning.
Craft Platforms: Websites like Etsy, Craftsy with AI-driven craft suggestions.
Gift Recommendation Platforms: AI-driven gift suggestions for websites like Gifts.com.
Study & Revision Platforms: Websites like Chegg, Quizlet with AI-driven study guides.
Local Business Directories: Yelp, Yellow Pages with AI-enhanced reviews.
Networking Platforms: LinkedIn, Meetup with AI-driven connection suggestions.
Lifestyle Magazines' Digital Platforms: Websites like Vogue, GQ with AI-curated fashion and lifestyle insights.
Endnote: Leveraging these diverse platforms ensures that Swarms becomes an integral part of multiple ecosystems, enhancing its visibility and user engagement.
"},{"location":"corporate/failures/","title":"Failure Root Cause Analysis for Langchain","text":""},{"location":"corporate/failures/#1-introduction","title":"1. Introduction","text":"Langchain is an open-source software that has gained massive popularity in the artificial intelligence ecosystem, serving as a tool for connecting different language models, especially GPT based models. However, despite its popularity and substantial investment, Langchain has shown several weaknesses that hinder its use in various projects, especially in complex and large-scale implementations. This document provides an analysis of the identified issues and proposes potential mitigation strategies.
"},{"location":"corporate/failures/#2-analysis-of-weaknesses","title":"2. Analysis of Weaknesses","text":""},{"location":"corporate/failures/#21-tool-lock-in","title":"2.1 Tool Lock-in","text":"Langchain tends to enforce tool lock-in, which could prove detrimental for developers. Its design heavily relies on specific workflows and architectures, which greatly limits flexibility. Developers may find themselves restricted to certain methodologies, impeding their freedom to implement custom solutions or integrate alternative tools.
"},{"location":"corporate/failures/#mitigation","title":"Mitigation","text":"An ideal AI framework should not be restrictive but should instead offer flexibility for users to integrate any agent on any architecture. Adopting an open architecture that allows for seamless interaction between various agents and workflows can address this issue.
"},{"location":"corporate/failures/#22-outdated-workflows","title":"2.2 Outdated Workflows","text":"Langchain's current workflows and prompt engineering, mainly based on InstructGPT, are out of date, especially compared to newer models like ChatGPT/GPT-4.
"},{"location":"corporate/failures/#mitigation_1","title":"Mitigation","text":"Keeping up with the latest AI models and workflows is crucial. The framework should have a mechanism for regular updates and seamless integration of up-to-date models and workflows.
"},{"location":"corporate/failures/#23-debugging-difficulties","title":"2.3 Debugging Difficulties","text":"Debugging in Langchain is reportedly very challenging, even with verbose output enabled, making it hard to determine what is happening under the hood.
"},{"location":"corporate/failures/#mitigation_2","title":"Mitigation","text":"The introduction of a robust debugging and logging system would help users understand the internals of the models, thus enabling them to pinpoint and rectify issues more effectively.
"},{"location":"corporate/failures/#24-limited-customization","title":"2.4 Limited Customization","text":"Langchain makes it extremely hard to deviate from documented workflows. This becomes a challenge when developers need custom workflows for their specific use-cases.
"},{"location":"corporate/failures/#mitigation_3","title":"Mitigation","text":"An ideal framework should support custom workflows and allow developers to hack and adjust the framework according to their needs.
"},{"location":"corporate/failures/#25-documentation","title":"2.5 Documentation","text":"Langchain's documentation is reportedly missing relevant details, making it difficult for users to understand the differences between various agent types, among other things.
"},{"location":"corporate/failures/#mitigation_4","title":"Mitigation","text":"Providing detailed and comprehensive documentation, including examples, FAQs, and best practices, is crucial. This will help users understand the intricacies of the framework, making it easier for them to implement it in their projects.
"},{"location":"corporate/failures/#26-negative-influence-on-ai-ecosystem","title":"2.6 Negative Influence on AI Ecosystem","text":"The extreme popularity of Langchain seems to be warping the AI ecosystem to the point of causing harm, with other AI entities shifting their operations to align with Langchain's 'magic AI' approach.
"},{"location":"corporate/failures/#mitigation_5","title":"Mitigation","text":"It's essential for any widely adopted framework to promote healthy practices in the broader ecosystem. One approach could be promoting open dialogue, inviting criticism, and being open to change based on feedback.
"},{"location":"corporate/failures/#3-conclusion","title":"3. Conclusion","text":"While Langchain has made significant contributions to the AI landscape, these challenges hinder its potential. Addressing these issues will not only improve Langchain but also foster a healthier AI ecosystem. It's important to note that criticism, when approached constructively, can be a powerful tool for growth and innovation.
"},{"location":"corporate/failures/#list-of-weaknesses-in-glangchain-and-potential-mitigations","title":"List of weaknesses in gLangchain and Potential Mitigations","text":"Mitigation Strategy: Langchain should consider designing the architecture to be more versatile and allow for the inclusion of a variety of tools. An open architecture will provide developers with more freedom and customization options.
Mitigation Strategy: Regular updates and adaptation of more recent models should be integrated into the Langchain framework.
Mitigation Strategy: Develop a comprehensive debugging tool or improve current debugging processes for clearer and more accessible error detection and resolution.
Mitigation Strategy: Improve documentation and provide guides on how to customize workflows to enhance developer flexibility.
Mitigation Strategy: Enhance and improve the documentation of Langchain to provide clarity for developers and make navigation easier.
Mitigation Strategy: Encourage diverse and balanced adoption of AI tools in the ecosystem.
Mitigation Strategy: Enhance the performance optimization of Langchain. Benchmarking against other tools can also provide performance improvement insights.
Mitigation Strategy: Focus on core features and allow greater flexibility in the interface. Adopting a modular approach where developers can pick and choose the features they want could also be helpful.
Mitigation Strategy: Adopt a more balanced approach between a library and a framework. Provide a solid core feature set with the possibility to extend it according to the developers' needs.
Mitigation Strategy: Prioritize fine-tuning and customizability for developers, limiting the focus on third-party services unless they provide substantial value.
Remember, any mitigation strategy will need to be tailored to Langchain's particular circumstances and developer feedback. It's also important to consider potential trade-offs and unintended consequences when implementing these strategies.
"},{"location":"corporate/faq/","title":"Faq","text":""},{"location":"corporate/faq/#faq-on-swarm-intelligence-and-multi-agent-systems","title":"FAQ on Swarm Intelligence and Multi-Agent Systems","text":""},{"location":"corporate/faq/#what-is-an-agent-in-the-context-of-ai-and-swarm-intelligence","title":"What is an agent in the context of AI and swarm intelligence?","text":"In artificial intelligence (AI), an agent refers to an LLM with some objective to accomplish.
In swarm intelligence, each agent interacts with other agents and possibly the environment to achieve complex collective behaviors or solve problems more efficiently than individual agents could on their own.
"},{"location":"corporate/faq/#what-do-you-need-swarms-at-all","title":"What do you need Swarms at all?","text":"Individual agents are limited by a vast array of issues such as context window loss, single task execution, hallucination, and no collaboration.
"},{"location":"corporate/faq/#how-does-a-swarm-work","title":"How does a swarm work?","text":"A swarm works through the principles of decentralized control, local interactions, and simple rules followed by each agent. Unlike centralized systems, where a single entity dictates the behavior of all components, in a swarm, each agent makes its own decisions based on local information and interactions with nearby agents. These local interactions lead to the emergence of complex, organized behaviors or solutions at the collective level, enabling the swarm to tackle tasks efficiently.
"},{"location":"corporate/faq/#why-do-you-need-more-agents-in-a-swarm","title":"Why do you need more agents in a swarm?","text":"More agents in a swarm can enhance its problem-solving capabilities, resilience, and efficiency. With more agents:
While deploying more agents can initially increase costs, especially in terms of computational resources, hosting, and potentially API usage, there are several factors and strategies that can mitigate these expenses:
Yes, swarms can make better decisions than individual agents for several reasons:
Communication in a swarm can vary based on the design and purpose of the system but generally involves either direct or indirect interactions:
While swarms are often associated with computational tasks, their applications extend far beyond. Swarms can be utilized in:
Security in swarm systems involves:
In the context of pre-trained Large Language Models (LLMs) that operate within a swarm, sharing insights typically involves explicit communication and data exchange protocols rather than direct learning mechanisms like reinforcement learning. Here's how it can work:
Shared Databases and Knowledge Bases: Agents can write to and read from a shared database or knowledge base where insights, generated content, and relevant data are stored. This allows agents to benefit from the collective experience of the swarm by accessing information that other agents have contributed.
APIs for Information Exchange: Custom APIs can facilitate the exchange of information between agents. Through these APIs, agents can request specific information or insights from others within the swarm, effectively sharing knowledge without direct learning.
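The two mechanisms above can be sketched in a few lines. This is an illustrative sketch with hypothetical class names, not the Swarms API: a shared in-memory knowledge base stands in for the shared database, and each agent reads the swarm's prior insights before contributing its own.

```python
class SharedKnowledgeBase:
    """In-memory stand-in for a shared database the whole swarm can access."""

    def __init__(self):
        self._entries: dict[str, list[str]] = {}

    def contribute(self, topic: str, insight: str) -> None:
        self._entries.setdefault(topic, []).append(insight)

    def lookup(self, topic: str) -> list[str]:
        return list(self._entries.get(topic, []))

class KBAgent:
    """A pre-trained agent that shares insights via the knowledge base."""

    def __init__(self, name: str, kb: SharedKnowledgeBase):
        self.name = name
        self.kb = kb

    def work(self, topic: str) -> str:
        # Read what other agents already learned, then add our own insight.
        prior = self.kb.lookup(topic)
        insight = f"{self.name}: built on {len(prior)} prior insight(s)"
        self.kb.contribute(topic, insight)
        return insight

kb = SharedKnowledgeBase()
agents = [KBAgent(f"agent-{i}", kb) for i in range(3)]
for agent in agents:
    agent.work("market-analysis")
# Each agent benefited from every earlier contribution without any
# direct learning mechanism -- just explicit data exchange.
```

In production, the in-memory dictionary would be replaced by a real database or vector store, and the `lookup`/`contribute` calls would become API requests between agents.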
Balancing autonomy with collective coherence in a swarm of LLMs involves:
Central Coordination Mechanism: Implementing a lightweight central coordination mechanism that can assign tasks, distribute information, and collect outputs from individual LLMs. This ensures that while each LLM operates autonomously, their actions are aligned with the swarm's overall objectives.
Standardized Communication Protocols: Developing standardized protocols for how LLMs communicate and share information ensures that even though each agent works autonomously, the information exchange remains coherent and aligned with the collective goals.
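Both ideas above fit in a short sketch. All names here are hypothetical, not the Swarms API: a lightweight coordinator assigns tasks and collects outputs, while a standardized message envelope keeps the exchange coherent even though each worker decides on its own how to handle a task.

```python
from dataclasses import dataclass

@dataclass
class Message:
    """Standardized envelope every agent in the swarm understands."""
    sender: str
    task_id: int
    payload: str

class Worker:
    """An autonomous agent: how it solves a task is entirely up to it."""

    def __init__(self, name: str):
        self.name = name

    def handle(self, msg: Message) -> Message:
        result = f"{msg.payload} handled by {self.name}"
        return Message(sender=self.name, task_id=msg.task_id, payload=result)

class Coordinator:
    """Lightweight central mechanism: assigns tasks, collects outputs."""

    def __init__(self, workers: list[Worker]):
        self.workers = workers

    def dispatch(self, tasks: list[str]) -> list[Message]:
        results = []
        for task_id, task in enumerate(tasks):
            # Round-robin assignment keeps actions aligned with swarm
            # goals without micromanaging each agent's internals.
            worker = self.workers[task_id % len(self.workers)]
            msg = Message(sender="coordinator", task_id=task_id, payload=task)
            results.append(worker.handle(msg))
        return results

coord = Coordinator([Worker("analyst"), Worker("summarizer")])
outputs = coord.dispatch(["analyze Q3 revenue", "summarize findings"])
```

The coordinator stays deliberately thin: it routes and collects, but the autonomy (the actual reasoning) lives entirely inside each worker.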
Adaptation in LLM swarms, without relying on machine learning techniques for dynamic learning, can be achieved through:
Dynamic Task Allocation: A central system or distributed algorithm can dynamically allocate tasks to different LLMs based on the changing environment or requirements. This ensures that the most suitable LLMs are addressing tasks for which they are best suited as conditions change.
Pre-trained Versatility: Utilizing a diverse set of pre-trained LLMs with different specialties or training data allows the swarm to select the most appropriate agent for a task as the requirements evolve.
In Context Learning: In context learning is another mechanism that can be employed within LLM swarms to adapt to changing environments or tasks. This approach involves leveraging the collective knowledge and experiences of the swarm to facilitate learning and improve performance. Here's how it can work:
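The adaptation mechanisms above can be combined in a small sketch. The routing rule and model identifiers here are hypothetical placeholders, not the Swarms API: a router dynamically allocates each task to the most suitable pre-trained specialist, and in-context learning injects the swarm's accumulated insights into the chosen agent's prompt instead of retraining any model.

```python
# Placeholder model identifiers for a diverse set of pre-trained specialists.
SPECIALISTS = {
    "finance": "llm-finance",
    "medical": "llm-medical",
    "general": "llm-general",
}

def route_task(task: str) -> str:
    """Dynamic task allocation: pick the matching specialist,
    falling back to a generalist when no domain keyword matches."""
    for domain in ("finance", "medical"):
        if domain in task.lower():
            return SPECIALISTS[domain]
    return SPECIALISTS["general"]

def build_prompt(task: str, shared_examples: list[str]) -> str:
    """In-context learning: prepend insights other agents contributed,
    so the agent adapts without any weight updates."""
    context = "\n".join(f"- {ex}" for ex in shared_examples)
    return f"Prior swarm insights:\n{context}\n\nTask: {task}"

examples = ["Q2 margins narrowed", "competitor raised prices"]
model = route_task("Summarize the finance outlook")
prompt = build_prompt("Summarize the finance outlook", examples)
```

A real deployment would replace the keyword match with a classifier or embedding similarity, but the shape is the same: selection adapts the swarm to the task, and the shared context adapts the agent within the task.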
LLM swarms primarily operate in digital spaces, given their nature as software entities. However, they can interact with physical environments indirectly through interfaces with sensors, actuators, or other devices connected to the Internet of Things (IoT). For example, LLMs can process data from physical sensors and control devices based on their outputs, enabling applications like smart home management or autonomous vehicle navigation.
"},{"location":"corporate/faq/#without-direct-learning-from-each-other-how-do-agents-in-a-swarm-improve-over-time","title":"Without direct learning from each other, how do agents in a swarm improve over time?","text":"Improvement over time in a swarm of pre-trained LLMs, without direct learning from each other, can be achieved through:
Human Feedback: Incorporating feedback from human operators or users can guide adjustments to the usage patterns or selection criteria of LLMs within the swarm, optimizing performance based on observed outcomes.
Periodic Re-training and Updating: The individual LLMs can be periodically re-trained or updated by their developers based on collective insights and feedback from their deployment within swarms. While this does not involve direct learning from each encounter, it allows the LLMs to improve over time based on aggregated experiences.
These adjustments to the FAQ reflect the specific context of pre-trained LLMs operating within a swarm, focusing on communication, coordination, and adaptation mechanisms that align with their capabilities and constraints.
"},{"location":"corporate/faq/#conclusion","title":"Conclusion","text":"Swarms represent a powerful paradigm in AI, offering innovative solutions to complex, dynamic problems through collective intelligence and decentralized control. While challenges exist, particularly regarding cost and security, strategic design and management can leverage the strengths of swarm intelligence to achieve remarkable efficiency, adaptability, and robustness in a wide range of applications.
"},{"location":"corporate/flywheel/","title":"The Swarms Flywheel","text":"Building a Supportive Community: Initiate by establishing an engaging and inclusive open-source community for both developers and sales freelancers around Swarms. Regular online meetups, webinars, tutorials, and sales training can make them feel welcome and encourage contributions and sales efforts.
Increased Contributions and Sales Efforts: The more engaged the community, the more developers will contribute to Swarms and the more effort sales freelancers will put into selling Swarms.
Improvement in Quality and Market Reach: More developer contributions mean better quality, reliability, and feature offerings from Swarms. Simultaneously, increased sales efforts from freelancers boost Swarms' market penetration and visibility.
Rise in User Base: As Swarms becomes more robust and more well-known, the user base grows, driving more revenue.
Greater Financial Incentives: Increased revenue can be redirected to offer more significant financial incentives to both developers and salespeople. Developers can be incentivized based on their contribution to Swarms, and salespeople can be rewarded with higher commissions.
Attract More Developers and Salespeople: These financial incentives, coupled with the recognition and experience from participating in a successful project, attract more developers and salespeople to the community.
Wider Adoption of Swarms: An ever-improving product, a growing user base, and an increasing number of passionate salespeople accelerate the adoption of Swarms.
Return to Step 1: As the community, user base, and sales network continue to grow, the cycle repeats, each time speeding up the flywheel.
+---------------------+\n | Building a |\n | Supportive | <--+\n | Community | |\n +--------+-----------+ |\n | |\n v |\n +--------+-----------+ |\n | Increased | |\n | Contributions & | |\n | Sales Efforts | |\n +--------+-----------+ |\n | |\n v |\n +--------+-----------+ |\n | Improvement in | |\n | Quality & Market | |\n | Reach | |\n +--------+-----------+ |\n | |\n v |\n +--------+-----------+ |\n | Rise in User | |\n | Base | |\n +--------+-----------+ |\n | |\n v |\n +--------+-----------+ |\n | Greater Financial | |\n | Incentives | |\n +--------+-----------+ |\n | |\n v |\n +--------+-----------+ |\n | Attract More | |\n | Developers & | |\n | Salespeople | |\n +--------+-----------+ |\n | |\n v |\n +--------+-----------+ |\n | Wider Adoption of | |\n | Swarms |----+\n +---------------------+\n
"},{"location":"corporate/flywheel/#potential-risks-and-mitigations","title":"Potential Risks and Mitigations:","text":"Mitigation: Create a robust community with clear guidelines, support, and resources. Provide incentives for quality contributions, such as a reputation system, swag, or financial rewards. Conduct thorough code reviews to ensure the quality of contributions.
Lack of Sales Results: Commission-based salespeople will only continue to sell the product if they're successful. If they aren't making enough sales, they may lose motivation and cease their efforts.
Mitigation: Provide adequate sales training and resources. Ensure the product-market fit is strong, and adjust messaging or sales tactics as necessary. Consider implementing a minimum commission or base pay to reduce risk for salespeople.
Poor User Experience or User Adoption: If users don't find the product useful or easy to use, they won't adopt it, and the user base won't grow. This could also discourage salespeople and contributors.
Mitigation: Prioritize user experience in the product development process. Regularly gather and incorporate user feedback. Ensure robust user support is in place.
Inadequate Financial Incentives: If the financial rewards don't justify the time and effort contributors and salespeople are putting in, they will likely disengage.
Mitigation: Regularly review and adjust financial incentives as needed. Ensure that the method for calculating and distributing rewards is transparent and fair.
Security and Compliance Risks: As the user base grows and the software becomes more complex, the risk of security issues increases. Moreover, as contributors from various regions join, compliance with various international laws could become an issue.
Community Building: Begin by fostering a supportive community around Swarms. Encourage early adopters to contribute and provide feedback. Create comprehensive documentation, community guidelines, and a forum for discussion and support.
Sales and Development Training: Provide resources and training for salespeople and developers. Make sure they understand the product, its value, and how to effectively contribute or sell.
Increase Contributions and Sales Efforts: Encourage increased participation by highlighting successful contributions and sales, rewarding top contributors and salespeople, and regularly communicating about the project's progress and impact.
Iterate and Improve: Continually gather and implement feedback to improve Swarms and its market reach. The better the product and its alignment with the market, the more the user base will grow.
Expand User Base: As the product improves and sales efforts continue, the user base should grow. Ensure you have the infrastructure to support this growth and maintain a positive user experience.
Increase Financial Incentives: As the user base and product grow, so too should the financial incentives. Make sure rewards continue to be competitive and attractive.
Attract More Contributors and Salespeople: As the financial incentives and success of the product increase, this should attract more contributors and salespeople, further feeding the flywheel.
Throughout this process, it's important to regularly reassess and adjust your strategy as necessary. Stay flexible and responsive to changes in the market, user feedback, and the evolving needs of the community.
"},{"location":"corporate/front_end_contributors/","title":"Frontend Contributor Guide","text":""},{"location":"corporate/front_end_contributors/#mission","title":"Mission","text":"At the heart of Swarms is the mission to democratize multi-agent technology, making it accessible to businesses of all sizes around the globe. This technology, which allows for the orchestration of multiple autonomous agents to achieve complex goals, has the potential to revolutionize industries by enhancing efficiency, scalability, and innovation. Swarms is committed to leading this charge by developing a platform that empowers businesses and individuals to harness the power of multi-agent systems without the need for specialized knowledge or resources.
"},{"location":"corporate/front_end_contributors/#understanding-your-impact-as-a-frontend-engineer","title":"Understanding Your Impact as a Frontend Engineer","text":"Crafting User Experiences: As a frontend engineer at Swarms, you play a crucial role in making multi-agent technology understandable and usable for businesses worldwide. Your work involves translating complex systems into intuitive interfaces, ensuring users can easily navigate, manage, and benefit from multi-agent solutions. By focusing on user-centric design and seamless integration, you help bridge the gap between advanced technology and practical business applications.
Skills and Attributes for Success: Successful frontend engineers at Swarms combine technical expertise with a passion for innovation and a deep understanding of user needs. Proficiency in modern frontend technologies, such as React, NextJS, and Tailwind, is just the beginning. You also need a strong grasp of usability principles, accessibility standards, and the ability to work collaboratively with cross-functional teams. Creativity, problem-solving skills, and a commitment to continuous learning are essential for developing solutions that meet diverse business needs.
"},{"location":"corporate/front_end_contributors/#joining-the-team","title":"Joining the Team","text":"As you contribute to Swarms, you become part of a collaborative effort to change the world. We value each contribution and provide constructive feedback to help you grow. Outstanding contributors who share our vision and demonstrate exceptional skill and dedication are invited to join our team, where they can have an even greater impact on our mission.
"},{"location":"corporate/front_end_contributors/#becoming-a-full-time-swarms-engineer","title":"Becoming a Full-Time Swarms Engineer:","text":"Swarms is radically devoted to open source and transparency. To join the full time team, you must first contribute to the open source repository so we can assess your technical capability and general way of working. After a series of quality contributions, we'll offer you a full time position!
Joining Swarms full-time means more than just a job. It's an opportunity to be at the forefront of technological innovation, working alongside passionate professionals dedicated to making a difference. We look for individuals who are not only skilled but also driven by the desire to make multi-agent technology accessible and beneficial to businesses worldwide.
"},{"location":"corporate/front_end_contributors/#resources","title":"Resources","text":"Linear: Our projects and tasks at a glance. Get a sense of our workflow and priorities.
Design System and UI/UX Guidelines
Figma: Dive into our design system to grasp the aesthetics and user experience objectives of Swarms.
Swarms Platform Repository
GitHub: The hub of our development activities. Familiarize yourself with our codebase and current projects.
Swarms Community
We are a team of engineers, developers, and visionaries on a mission to build the future of AI by orchestrating multi-agent collaboration. We move fast, think ambitiously, and deliver with urgency. Join us if you want to be part of building the next generation of multi-agent systems, redefining how businesses automate operations and leverage AI.
We do not yet offer any of the following benefits:
No medical, dental, or vision insurance
No paid time off
No life or AD&D insurance
No short-term or long-term disability insurance
No 401(k) plan
Working hours: 9 AM to 10 PM, every day, 7 days a week. This is not for people who seek work-life balance.
"},{"location":"corporate/hiring/#hiring-process-how-to-join-swarms","title":"Hiring Process: How to Join Swarms","text":"We have a simple 3-step hiring process:
NOTE: We do not consider applicants who have not previously submitted a PR. To be considered, you must first submit a PR containing a new feature or a bug fix.
There are no recruiters. All evaluations are done by our technical team.
"},{"location":"corporate/hiring/#location","title":"Location","text":"Palo Alto CA Our Palo Alto office houses the majority of our core research teams including our prompting, agent design, and model training
Miami: Our Miami office holds prompt engineering, agent design, and more.
Infrastructure Engineer
Build and maintain the systems that run our AI multi-agent infrastructure.
Expertise in Skypilot, AWS, Terraform.
Ensure seamless, high-availability environments for agent operations.
Agent Engineer
Design, develop, and orchestrate complex swarms of AI agents.
Extensive experience with Python, multi-agent systems, and neural networks.
Ability to create dynamic and efficient agent architectures from scratch.
Prompt Engineer
Craft highly optimized prompts that drive our LLM-based agents.
Specialize in instruction-based prompts, multi-shot examples, and production-grade deployment.
Collaborate with agents to deliver state-of-the-art solutions.
Front-End Engineer
Build sleek, intuitive interfaces for interacting with swarms of agents.
Proficiency in Next.js, FastAPI, and modern front-end technologies.
Design with the user experience in mind, integrating complex AI features into simple workflows.
In the world of Swarms, there\u2019s one metric that stands above the rest: the User-Task-Completion-Satisfaction (UTCS) rate. This metric is the heart of our system, the pulse that keeps us moving forward. It\u2019s not just a number; it\u2019s a reflection of our commitment to our users and a measure of our success.
"},{"location":"corporate/metric/#what-is-the-utcs-rate","title":"What is the UTCS Rate?","text":"The UTCS rate is a measure of how reliably and quickly Swarms can satisfy a user demand. It\u2019s calculated by dividing the number of tasks completed to the user\u2019s satisfaction by the total number of tasks. Multiply that by 100, and you\u2019ve got your UTCS rate.
But what does it mean to complete a task to the user\u2019s satisfaction? It means that the task is not only completed, but completed in a way that meets or exceeds the user\u2019s expectations. It\u2019s about quality, speed, and reliability.
"},{"location":"corporate/metric/#why-is-the-utcs-rate-important","title":"Why is the UTCS Rate Important?","text":"The UTCS rate is a direct reflection of the user experience. A high UTCS rate means that users are getting what they need from Swarms, and they\u2019re getting it quickly and reliably. It means that Swarms is doing its job, and doing it well.
But the UTCS rate is not just about user satisfaction. It\u2019s also a measure of Swarms\u2019 efficiency and effectiveness. A high UTCS rate means that Swarms is able to complete tasks quickly and accurately, with minimal errors or delays. It\u2019s a sign of a well-oiled machine.
"},{"location":"corporate/metric/#how-do-we-achieve-a-95-utcs-rate","title":"How Do We Achieve a 95% UTCS Rate?","text":"Achieving a 95% UTCS rate is no small feat. It requires a deep understanding of our users and their needs, a robust and reliable system, and a commitment to continuous improvement.
"},{"location":"corporate/metric/#here-are-some-strategies-were-implementing-to-reach-our-goal","title":"Here are some strategies we\u2019re implementing to reach our goal:","text":"Understanding User Needs: We must have agents that gain an understanding of the user's objective and break it up into it's most fundamental building blocks
Improving System Reliability: We\u2019re working to make Swarms more reliable, reducing errors and improving the accuracy of task completion. This includes improving our algorithms, refining our processes, and investing in quality assurance.
Optimizing for Speed: We\u2019re optimizing Swarms to complete tasks as quickly as possible, without sacrificing quality. This includes improving our infrastructure, streamlining our workflows, and implementing performance optimizations.
Iterating and Improving: We\u2019re committed to continuous improvement. We\u2019re constantly monitoring our UTCS rate and other key metrics, and we\u2019re always looking for ways to improve. We\u2019re not afraid to experiment, iterate, and learn from our mistakes.
Achieving a 95% UTCS rate is a challenging goal, but it\u2019s a goal worth striving for. It\u2019s a goal that will drive us to improve, innovate, and deliver the best possible experience for our users. And in the end, that\u2019s what Swarms is all about.
"},{"location":"corporate/metric/#your-feedback-matters-help-us-optimize-the-utcs-rate","title":"Your Feedback Matters: Help Us Optimize the UTCS Rate","text":"As we initiate the journey of Swarms, we seek your feedback to better guide our growth and development. Your opinions and suggestions are crucial for us, helping to mold our product, pricing, branding, and a host of other facets that influence your experience.
"},{"location":"corporate/metric/#your-insights-on-the-utcs-rate","title":"Your Insights on the UTCS Rate","text":"Our goal is to maintain a UTCS (User-Task-Completion-Satisfaction) rate of 95%. This metric is integral to the success of Swarms, indicating the efficiency and effectiveness with which we satisfy user requests. However, it's a metric that we can't optimize alone - we need your help.
Here's what we want to understand from you:
We invite you to share your experiences, thoughts, and ideas. Whether it's a simple suggestion or an in-depth critique, we appreciate and value your input.
"},{"location":"corporate/metric/#your-feedback-the-backbone-of-our-growth","title":"Your Feedback: The Backbone of our Growth","text":"Your feedback is the backbone of Swarms' evolution. It drives us to refine our strategies, fuels our innovative spirit, and, most importantly, enables us to serve you better.
As we launch, we open the conversation around these key aspects of Swarms, and we look forward to understanding your expectations, your needs, and how we can deliver the best experience for you.
So, let's start this conversation - how can we make Swarms work best for you?
Guide Our Growth: Help Optimize Swarms As we launch Swarms, your feedback is critical for enhancing our product, pricing, and branding. A key aim for us is a User-Task-Completion-Satisfaction (UTCS) rate of 95% - indicating our efficiency and effectiveness in meeting user needs. However, we need your insights to optimize this.
Here's what we're keen to understand:
Satisfaction: Your interpretation of a \"satisfactorily completed task\".
Timeliness: The importance of speed in task completion for you.
Usability: Your experiences with our platform\u2019s intuitiveness and user-friendliness.
Reliability: The significance of consistent performance to you.
Value for Money: Your thoughts on our pricing and value proposition.
We welcome your thoughts, experiences, and suggestions. Your feedback fuels our evolution, driving us to refine strategies, boost innovation, and enhance your experience.
Let's start the conversation - how can we make Swarms work best for you?
The Golden Metric Analysis: The Ultimate UTCS Paradigm for Swarms
"},{"location":"corporate/metric/#introduction","title":"Introduction","text":"In our ongoing journey to perfect Swarms, understanding how our product fares in the eyes of the end-users is paramount. Enter the User-Task-Completion-Satisfaction (UTCS) rate - our primary metric that gauges how reliably and swiftly Swarms can meet user demands. As we steer Swarms towards achieving a UTCS rate of 95%, understanding this metric's core and how to refine it becomes vital.
"},{"location":"corporate/metric/#decoding-utcs-an-analytical-overview","title":"Decoding UTCS: An Analytical Overview","text":"The UTCS rate is not merely about task completion; it's about the comprehensive experience. Therefore, its foundations lie in:
We can represent the UTCS rate with the following equation:
\\[ UTCS Rate = \\frac{(Completed Tasks \\times User Satisfaction)}{(Total Tasks)} \\times 100 \\]\n
Where: - Completed Tasks refer to the number of tasks Swarms executes without errors. - User Satisfaction is the subjective component, gauged through feedback mechanisms. This could be on a scale of 1-10 (or a percentage). - Total Tasks refer to all tasks processed by Swarms, regardless of the outcome.
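As a minimal sketch of the equation above (assuming User Satisfaction is recorded as a 0-1 fraction, e.g. 0.9 for 90%, rather than a 1-10 score):

```python
def utcs_rate(completed_tasks: int, user_satisfaction: float, total_tasks: int) -> float:
    """UTCS Rate = (Completed Tasks * User Satisfaction) / Total Tasks * 100.

    user_satisfaction is assumed to be a 0-1 fraction (e.g. 0.9 for 90%);
    a 1-10 feedback score would need to be divided by 10 first.
    """
    if total_tasks == 0:
        return 0.0
    return (completed_tasks * user_satisfaction) / total_tasks * 100


# 95 of 100 tasks completed without errors, average satisfaction 0.9
print(utcs_rate(95, 0.9, 100))  # 85.5
```

The function name and the 0-1 satisfaction convention are illustrative assumptions, not part of the Swarms codebase.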
"},{"location":"corporate/metric/#the-golden-metric-swarm-efficiency-index-sei","title":"The Golden Metric: Swarm Efficiency Index (SEI)","text":"However, this basic representation doesn't factor in a critical component: system performance. Thus, we introduce the Swarm Efficiency Index (SEI). The SEI encapsulates not just the UTCS rate but also system metrics like memory consumption, number of tasks, and time taken. By blending these elements, we aim to present a comprehensive view of Swarm's prowess.
Here\u2019s the formula:
\\[ SEI = \\frac{UTCS Rate}{(Memory Consumption + Time Window + Task Complexity)} \\]\n
Where: - Memory Consumption signifies the system resources used to accomplish tasks. - Time Window is the timeframe in which the tasks were executed. - Task Complexity could be a normalized scale that defines how intricate a task is (e.g., 1-5, with 5 being the most complex).
Rationale: - Incorporating Memory Consumption: A system that uses less memory but delivers results is more efficient. By inverting memory consumption in the formula, we emphasize that as memory usage goes down, SEI goes up.
Considering Time: Time is of the essence. The faster the results without compromising quality, the better. By adding the Time Window, we emphasize that reduced task execution time increases the SEI.
Factoring in Task Complexity: Not all tasks are equal. A system that effortlessly completes intricate tasks is more valuable. By integrating task complexity, we can normalize the SEI according to the task's nature.
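The SEI formula above can be sketched as follows. This is an illustrative implementation only: the formula leaves units unspecified, so the three cost terms are assumed to be pre-normalized onto comparable, dimensionless scales.

```python
def swarm_efficiency_index(
    utcs_rate: float,
    memory_consumption: float,
    time_window: float,
    task_complexity: float,
) -> float:
    """SEI = UTCS Rate / (Memory Consumption + Time Window + Task Complexity).

    All three cost terms are assumed normalized to comparable scales
    (e.g. task_complexity on the 1-5 scale mentioned above).
    Lower memory use and shorter execution time raise the SEI.
    """
    denominator = memory_consumption + time_window + task_complexity
    if denominator <= 0:
        raise ValueError("cost terms must sum to a positive value")
    return utcs_rate / denominator


# UTCS rate of 90 with normalized costs 2 + 3 + 4
print(swarm_efficiency_index(90.0, 2.0, 3.0, 4.0))  # 10.0
```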
Using feedback from elder-plinius, we can better understand and improve SEI and UTCS:
Feedback Across Skill Levels: By gathering feedback from users with different skill levels, we can refine our metrics, ensuring Swarms caters to all.
Simplifying Setup: Detailed guides can help newcomers swiftly get on board, thus enhancing user satisfaction.
Enhancing Workspace and Agent Management: A clearer view of the Swarm's internal structure, combined with on-the-go adjustments, can improve both the speed and quality of results.
Introducing System Suggestions: A proactive Swarms that provides real-time insights and recommendations can drastically enhance user satisfaction, thus pushing up the UTCS rate.
The UTCS rate is undeniably a pivotal metric for Swarms. However, with the introduction of the Swarm Efficiency Index (SEI), we have an opportunity to encapsulate a broader spectrum of performance indicators, leading to a more holistic understanding of Swarms' efficiency. By consistently optimizing for SEI, we can ensure that Swarms not only meets user expectations but also operates at peak system efficiency.
Research Analysis: Tracking and Ensuring Reliability of Swarm Metrics at Scale
"},{"location":"corporate/metric/#1-introduction","title":"1. Introduction","text":"In our pursuit to optimize the User-Task-Completion-Satisfaction (UTCS) rate and Swarm Efficiency Index (SEI), reliable tracking of these metrics at scale becomes paramount. This research analysis delves into methodologies, technologies, and practices that can be employed to monitor these metrics accurately and efficiently across vast data sets.
"},{"location":"corporate/metric/#2-why-tracking-at-scale-is-challenging","title":"2. Why Tracking at Scale is Challenging","text":"The primary challenges include:
Recommendation: Implement distributed systems like Prometheus or InfluxDB.
Rationale: - Ability to collect metrics from various Swarm instances concurrently. - Scalable and can handle vast data influxes.
"},{"location":"corporate/metric/#32-real-time-data-processing","title":"3.2. Real-time Data Processing","text":"Recommendation: Use stream processing systems like Apache Kafka or Apache Flink.
Rationale: - Enables real-time metric calculation. - Can handle high throughput and low-latency requirements.
"},{"location":"corporate/metric/#33-data-sampling","title":"3.3. Data Sampling","text":"Recommendation: Random or stratified sampling of user sessions.
Rationale: - Reduces the data volume to be processed. - Maintains representativeness of overall user experience.
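A stratified sample of user sessions might be drawn as below. This is a sketch under stated assumptions: the session schema and the `tier` stratification field are hypothetical, and each stratum keeps the same fraction of its sessions (at least one) to preserve representativeness.

```python
import random
from collections import defaultdict


def stratified_sample(sessions, strata_key, fraction, seed=0):
    """Sample the same fraction of sessions from each stratum.

    sessions: iterable of dicts; strata_key: field to stratify on
    (e.g. user tier or skill level); fraction: share of each stratum
    to keep (at least one session per stratum).
    """
    rng = random.Random(seed)
    strata = defaultdict(list)
    for session in sessions:
        strata[session[strata_key]].append(session)
    sample = []
    for group in strata.values():
        k = max(1, round(len(group) * fraction))
        sample.extend(rng.sample(group, k))
    return sample


# 90 free-tier and 10 pro-tier sessions, sampled at 10%
sessions = [{"tier": "free"}] * 90 + [{"tier": "pro"}] * 10
sample = stratified_sample(sessions, "tier", 0.1)
print(len(sample))  # 10 (9 free + 1 pro)
```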
"},{"location":"corporate/metric/#4-ensuring-reliability-in-data-collection","title":"4. Ensuring Reliability in Data Collection","text":""},{"location":"corporate/metric/#41-redundancy","title":"4.1. Redundancy","text":"Recommendation: Integrate redundancy into data collection nodes.
Rationale: - Ensures no single point of failure. - Data loss prevention in case of system malfunctions.
"},{"location":"corporate/metric/#42-anomaly-detection","title":"4.2. Anomaly Detection","text":"Recommendation: Implement AI-driven anomaly detection systems.
Rationale: - Identifies outliers or aberrations in metric calculations. - Ensures consistent and reliable data interpretation.
"},{"location":"corporate/metric/#43-data-validation","title":"4.3. Data Validation","text":"Recommendation: Establish automated validation checks.
Rationale: - Ensures only accurate and relevant data is considered. - Eliminates inconsistencies arising from corrupted or irrelevant data.
"},{"location":"corporate/metric/#5-feedback-loops-and-continuous-refinement","title":"5. Feedback Loops and Continuous Refinement","text":""},{"location":"corporate/metric/#51-user-feedback-integration","title":"5.1. User Feedback Integration","text":"Recommendation: Develop an in-built user feedback mechanism.
Rationale: - Helps validate the perceived vs. actual performance. - Allows for continuous refining of tracking metrics and methodologies.
"},{"location":"corporate/metric/#52-ab-testing","title":"5.2. A/B Testing","text":"Recommendation: Regularly conduct A/B tests for new tracking methods or adjustments.
Rationale: - Determines the most effective methods for data collection. - Validates new tracking techniques against established ones.
"},{"location":"corporate/metric/#6-conclusion","title":"6. Conclusion","text":"To successfully and reliably track the UTCS rate and SEI at scale, it's essential to combine robust monitoring tools, data processing methodologies, and validation techniques. By doing so, Swarms can ensure that the metrics collected offer a genuine reflection of system performance and user satisfaction. Regular feedback and iterative refinement, rooted in a culture of continuous improvement, will further enhance the accuracy and reliability of these essential metrics.
"},{"location":"corporate/monthly_formula/","title":"Monthly formula","text":"In\u00a0[\u00a0]: Copied!def calculate_monthly_charge(\n development_time_hours: float,\n hourly_rate: float,\n amortization_months: int,\n api_calls_per_month: int,\n cost_per_api_call: float,\n monthly_maintenance: float,\n additional_monthly_costs: float,\n profit_margin_percentage: float,\n) -> float:\n \"\"\"\n Calculate the monthly charge for a service based on various cost factors.\n\n Parameters:\n - development_time_hours (float): The total number of hours spent on development and setup.\n - hourly_rate (float): The rate per hour for development and setup.\n - amortization_months (int): The number of months over which to amortize the development and setup costs.\n - api_calls_per_month (int): The number of API calls made per month.\n - cost_per_api_call (float): The cost per API call.\n - monthly_maintenance (float): The monthly maintenance cost.\n - additional_monthly_costs (float): Any additional monthly costs.\n - profit_margin_percentage (float): The desired profit margin as a percentage.\n\n Returns:\n - monthly_charge (float): The calculated monthly charge for the service.\n \"\"\"\n\n # Calculate Development and Setup Costs (amortized monthly)\n development_and_setup_costs_monthly = (\n development_time_hours * hourly_rate\n ) / amortization_months\n\n # Calculate Operational Costs per Month\n operational_costs_monthly = (\n (api_calls_per_month * cost_per_api_call)\n + monthly_maintenance\n + additional_monthly_costs\n )\n\n # Calculate Total Monthly Costs\n total_monthly_costs = (\n development_and_setup_costs_monthly\n + operational_costs_monthly\n )\n\n # Calculate Pricing with Profit Margin\n monthly_charge = total_monthly_costs * (\n 1 + profit_margin_percentage / 100\n )\n\n return monthly_charge\ndef calculate_monthly_charge( development_time_hours: float, hourly_rate: float, amortization_months: int, api_calls_per_month: int, cost_per_api_call: float, 
monthly_maintenance: float, additional_monthly_costs: float, profit_margin_percentage: float, ) -> float: \"\"\" Calculate the monthly charge for a service based on various cost factors. Parameters: - development_time_hours (float): The total number of hours spent on development and setup. - hourly_rate (float): The rate per hour for development and setup. - amortization_months (int): The number of months over which to amortize the development and setup costs. - api_calls_per_month (int): The number of API calls made per month. - cost_per_api_call (float): The cost per API call. - monthly_maintenance (float): The monthly maintenance cost. - additional_monthly_costs (float): Any additional monthly costs. - profit_margin_percentage (float): The desired profit margin as a percentage. Returns: - monthly_charge (float): The calculated monthly charge for the service. \"\"\" # Calculate Development and Setup Costs (amortized monthly) development_and_setup_costs_monthly = ( development_time_hours * hourly_rate ) / amortization_months # Calculate Operational Costs per Month operational_costs_monthly = ( (api_calls_per_month * cost_per_api_call) + monthly_maintenance + additional_monthly_costs ) # Calculate Total Monthly Costs total_monthly_costs = ( development_and_setup_costs_monthly + operational_costs_monthly ) # Calculate Pricing with Profit Margin monthly_charge = total_monthly_costs * ( 1 + profit_margin_percentage / 100 ) return monthly_charge In\u00a0[\u00a0]: Copied!
# Example usage:\nmonthly_charge = calculate_monthly_charge(\n    development_time_hours=100,\n    hourly_rate=500,\n    amortization_months=12,\n    api_calls_per_month=500000,\n    cost_per_api_call=0.002,\n    monthly_maintenance=1000,\n    additional_monthly_costs=300,\n    profit_margin_percentage=10000,\n)\n
print(f\"Monthly Charge: ${monthly_charge:.2f}\")\nprint(f\"Monthly Charge: ${monthly_charge:.2f}\")"},{"location":"corporate/purpose/","title":"Purpose","text":""},{"location":"corporate/purpose/#purpose","title":"Purpose","text":"
Artificial Intelligence has grown at an exponential rate over the past decade. Yet, we are far from fully harnessing its potential. Today's AI models operate in isolation, each working separately in its own corner. But life doesn't work like that. The world doesn't work like that. Success isn't built in silos; it's built in teams.
Imagine a world where AI models work in unison. Where they can collaborate, interact, and pool their collective intelligence to achieve more than any single model could. This is the future we envision. But today, we lack a framework for AI to collaborate effectively, to form a true swarm of intelligent agents.
This is a difficult problem, one that has eluded solution. It requires sophisticated systems that can allow individual models to not just communicate but also understand each other, pool knowledge and resources, and create collective intelligence. This is the next frontier of AI.
But here at Swarms, we have a secret sauce. It's not just a technology or a breakthrough invention. It's a way of thinking - the philosophy of rapid iteration. With each cycle, we make massive progress. We experiment, we learn, and we grow. We have developed a pioneering framework that can enable AI models to work together as a swarm, combining their strengths to create richer, more powerful outputs.
We are uniquely positioned to take on this challenge with 1,500+ devoted researchers in Agora. We have assembled a team of world-class experts, experienced and driven, united by a shared vision. Our commitment to breaking barriers, pushing boundaries, and our belief in the power of collective intelligence makes us the best team to usher in this future to fundamentally advance our species, Humanity.
"},{"location":"corporate/research/","title":"Research Lists","text":"A compilation of projects, papers, blogs in autonomous agents.
"},{"location":"corporate/research/#table-of-contents","title":"Table of Contents","text":"In the first phase, our focus is on building the basic infrastructure of Swarms. This includes developing key components like the Swarms class, integrating essential tools, and establishing task completion and evaluation logic. We'll also start developing our testing and evaluation framework during this phase. If you're interested in foundational work and have a knack for building robust, scalable systems, this phase is for you.
"},{"location":"corporate/roadmap/#phase-2-optimizing-the-system","title":"Phase 2: Optimizing the System","text":"In the second phase, we'll focus on optimizng Swarms by integrating more advanced features, improving the system's efficiency, and refining our testing and evaluation framework. This phase involves more complex tasks, so if you enjoy tackling challenging problems and contributing to the development of innovative features, this is the phase for you.
"},{"location":"corporate/roadmap/#phase-3-towards-super-intelligence","title":"Phase 3: Towards Super-Intelligence","text":"The third phase of our bounty program is the most exciting - this is where we aim to achieve super-intelligence. In this phase, we'll be working on improving the swarm's capabilities, expanding its skills, and fine-tuning the system based on real-world testing and feedback. If you're excited about the future of AI and want to contribute to a project that could potentially transform the digital world, this is the phase for you.
Remember, our roadmap is a guide, and we encourage you to bring your own ideas and creativity to the table. We believe that every contribution, no matter how small, can make a difference. So join us on this exciting journey and help us create the future of Swarms.
"},{"location":"corporate/swarm_cloud/","title":"The Swarm Cloud","text":""},{"location":"corporate/swarm_cloud/#business-model-plan-for-autonomous-agent-swarm-service","title":"Business Model Plan for Autonomous Agent Swarm Service","text":""},{"location":"corporate/swarm_cloud/#service-description","title":"Service Description","text":"| Pricing Structure | Description | Details |\n| ------------------------- | ----------- | ------- |\n| Usage-Based Per Agent | Fees are charged based on the number of agents deployed and their usage duration. | - Ideal for clients needing a few agents for specific tasks. <br> - More agents or longer usage results in higher fees. |\n| Swarm Coverage Pricing | Pricing based on the coverage area or scope of the swarm deployment. | - Suitable for tasks requiring large area coverage. <br> - Price scales with the size or complexity of the area covered. |\n| Performance-Based Pricing | Fees are tied to the performance or outcomes achieved by the agents. | - Clients pay for the effectiveness or results achieved by the agents. <br> - Higher fees for more complex or high-value tasks. |\n
Pay-Per-Mission Pricing: Clients are charged for each specific task or mission completed by the agents.
Per Agent Usage Fee: Charged based on the number of agents and the duration of their deployment.
Volume Discounts: Available for large-scale deployments.
Time-Based Subscription: A subscription model where clients pay a recurring fee for continuous access to a set number of agents.
Dynamic Pricing: Prices fluctuate based on demand, time of day, or specific conditions.
Tiered Usage Levels: Different pricing tiers based on the number of agents used or the complexity of tasks.
Freemium Model: Basic services are free, but premium features or additional agents are paid.
Outcome-Based Pricing: Charges are based on the success or quality of the outcomes achieved by the agents.
Feature-Based Pricing: Different prices for different feature sets or capabilities of the agents.
Volume Discounts: Reduced per-agent price for bulk deployments or long-term contracts.
Peak Time Premiums: Higher charges during peak usage times or for emergency deployment.
Bundled Services: Combining agent services with other products or services for a comprehensive package deal.
Custom Solution Pricing: Tailor-made pricing for unique or specialized requirements.
Data Analysis Fee: Charging for the data processing and analytics provided by the agents.
Performance Tiers: Different pricing for varying levels of agent efficiency or performance.
License Model: Clients purchase a license to deploy and use a certain number of agents.
Cost-Plus Pricing: Pricing based on the cost of deployment plus a markup.
Service Level Agreement (SLA) Pricing: Higher prices for higher levels of service guarantees.
Pay-Per-Save Model: Charging based on the cost savings or value created by the agents for the client.
Revenue Sharing: Sharing a percentage of the revenue generated through the use of agents.
Geographic Pricing: Different pricing for different regions or markets.
User-Based Pricing: Charging based on the number of users accessing and controlling the agents.
Energy Usage Pricing: Prices based on the amount of energy consumed by the agents during operation.
Event-Driven Pricing: Charging for specific events or triggers during the agent's operation.
Seasonal Pricing: Adjusting prices based on seasonal demand or usage patterns.
Partnership Models: Collaborating with other businesses and sharing revenue from combined services.
Customizable Packages: Allowing clients to build their own package of services and capabilities, priced accordingly.
These diverse pricing strategies can be combined or tailored to fit different business models, client needs, and market dynamics. They also provide various methods of value extraction, ensuring flexibility and scalability in revenue generation.
"},{"location":"corporate/swarm_cloud/#icp-analysis","title":"ICP Analysis","text":""},{"location":"corporate/swarm_cloud/#ideal-customer-profile-icp-map","title":"Ideal Customer Profile (ICP) Map","text":""},{"location":"corporate/swarm_cloud/#1-manufacturing-and-industrial-automation","title":"1. Manufacturing and Industrial Automation","text":"- **Characteristics:** Major construction firms, infrastructure developers.\n- **Needs:** Site monitoring, material tracking, safety compliance.\n
"},{"location":"corporate/swarm_cloud/#potential-market-size-table-in-markdown","title":"Potential Market Size Table (in Markdown)","text":"| Customer Segment | Estimated Market Size (USD) | Notes |\n| ---------------------------- | --------------------------- | ----- |\n| Manufacturing and Industrial | $100 Billion | High automation and efficiency needs drive demand. |\n| Agriculture and Farming | $75 Billion | Growing adoption of smart farming technologies. |\n| Logistics and Supply Chain | $90 Billion | Increasing need for automation in warehousing and delivery. |\n| Energy and Utilities | $60 Billion | Focus on infrastructure monitoring and maintenance. |\n| Environmental Monitoring | $30 Billion | Rising interest in climate and ecological data collection. |\n| Smart Cities and Urban Planning | $50 Billion | Growing investment in smart city technologies. |\n| Defense and Security | $120 Billion | High demand for surveillance and reconnaissance tech. |\n| Healthcare and Medical | $85 Billion | Need for efficient hospital management and patient care. |\n| Entertainment and Event Management | $40 Billion | Innovative uses in crowd control and event safety. |\n| Construction and Infrastructure | $70 Billion | Use in monitoring and managing large construction projects. |\n
"},{"location":"corporate/swarm_cloud/#risk-analysis","title":"Risk Analysis","text":"1. Our Vision: - Revolutionize industries through scalable, intelligent swarms of autonomous agents. - Enable real-time data collection, analysis, and automated task execution.
2. Service Offering: - The Swarm Cloud Platform: Deploy and manage swarms of autonomous agents in production-grade environments. - Applications: Versatile across industries \u2013 from smart agriculture to urban planning, logistics, and beyond.
3. Key Features: - High Scalability: Tailored solutions from small-scale deployments to large industrial operations. - Real-Time Analytics: Instant data processing and actionable insights. - User-Friendly Interface: Simplified control and monitoring of agent swarms. - Robust Security: Ensuring data integrity and operational safety.
4. Revenue Streams: - Usage-Based Pricing: Charges based on the number of agents and operation duration. - Subscription Models: Recurring revenue through scalable packages. - Custom Solutions: Tailored pricing for bespoke deployments.
5. Market Opportunity: - Expansive Market: Addressing needs in a $500 billion global market spanning multiple sectors. - Competitive Edge: Advanced technology offering superior efficiency and adaptability.
6. Growth Strategy: - R&D Investment: Continuous enhancement of agent capabilities and platform features. - Strategic Partnerships: Collaborations with industry leaders for market penetration. - Marketing and Sales: Focused approach on high-potential sectors with tailored marketing strategies.
7. Why Invest in The Swarm Cloud? - Pioneering Technology: At the forefront of autonomous agent systems. - Scalable Business Model: Designed for rapid expansion and adaptation to diverse market needs. - Strong Market Demand: Positioned to capitalize on the growing trend of automation and AI.
\"Empowering industries with intelligent, autonomous solutions \u2013 The Swarm Cloud is set to redefine efficiency and innovation.\"
"},{"location":"corporate/swarm_cloud/#conclusion","title":"Conclusion","text":"The business model aims to provide a scalable, efficient, and cost-effective solution for industries looking to leverage the power of autonomous agent technology. With a structured pricing plan and a focus on continuous development and support, the service is positioned to meet diverse industry needs.
"},{"location":"corporate/swarm_memo/","title":"[Go To Market Strategy][GTM]","text":"Our vision is to become the world leader in real-world, production-grade autonomous agent deployment through open-source product development, Deep Verticalization, and unmatched value delivery to the end user.
We will focus first on accelerating the open-source framework to PMF, where it will serve as the backend for upstream products and services such as the Swarm Cloud, which will enable enterprises to deploy autonomous agents with long-term memory and tools in the cloud, alongside a no-code platform that lets users build their own swarm by dragging and dropping blocks.
Our target user segment for the framework is AI engineers looking to deploy agents into high-risk environments where reliability is crucial.
Once PMF has been achieved and the framework has been extensively benchmarked, we aim to establish high-value contracts with customers in Security, Logistics, Manufacturing, Health, and various other untapped industries.
Our growth strategy for the open-source framework can be summarized by:
As we continuously deliver value with the open framework, we will strategically position ourselves to acquire leads for high-value contracts by openly demonstrating the power, reliability, and performance of our framework.
Get full access to the memo here: TSC Memo
"},{"location":"corporate/swarms_bounty_system/","title":"The Swarms Bounty System: Get Paid to Contribute to Open Source","text":"In today's fast-paced world of software development, open source has become a driving force for innovation. Virtually every business and organization on the planet depends on open source software.
The power of collaboration and community has proven to be a potent catalyst for creating robust, cutting-edge solutions. At Swarms, we recognize the immense value that open source contributors bring to the table, and we're thrilled to introduce our Bounty System \u2013 a program designed to reward developers for their invaluable contributions to the Swarms ecosystem.
The Swarms Bounty System is a groundbreaking initiative that encourages developers from all walks of life to actively participate in the development and improvement of our suite of products, including the Swarms Python framework, Swarm Cloud, and Swarm Core. By leveraging the collective intelligence and expertise of the global developer community, we aim to foster a culture of continuous innovation and excellence.
All bounties with rewards can be found here:
"},{"location":"corporate/swarms_bounty_system/#the-power-of-collaboration","title":"The Power of Collaboration","text":"At the heart of the Swarms Bounty System lies the belief that collaboration is the key to unlocking the true potential of software development. By opening up our codebase to the vast talent pool of developers around the world, we're not only tapping into a wealth of knowledge and skills, but also fostering a sense of ownership and investment in the Swarms ecosystem.
Whether you're a seasoned developer with years of experience or a passionate newcomer eager to learn and grow, the Swarms Bounty System offers a unique opportunity to contribute to cutting-edge projects and leave your mark on the technological landscape.
"},{"location":"corporate/swarms_bounty_system/#how-the-bounty-system-works","title":"How the Bounty System Works","text":"The Swarms Bounty System is designed to be simple, transparent, and rewarding. Here's how it works:
Explore the Bounties: We maintain a comprehensive list of bounties, ranging from bug fixes and feature enhancements to entirely new projects. These bounties are categorized based on their complexity and potential impact, ensuring that there's something for everyone, regardless of their skill level or area of expertise. Bounties will be listed here
Submit Your Contributions: Once you've identified a bounty that piques your interest, you can start working on it. When you're ready, submit your contribution in the form of a pull request, following our established guidelines and best practices.
Review and Approval: Our dedicated team of reviewers will carefully evaluate your submission, ensuring that it meets our rigorous quality standards and aligns with the project's vision. They'll provide feedback and guidance, fostering a collaborative environment where you can learn and grow.
Get Rewarded: Upon successful acceptance of your contribution, you'll be rewarded with a combination of cash and/or stock incentives. The rewards are based on a tiered system, reflecting the complexity and impact of your contribution.
At Swarms, we believe in recognizing and rewarding exceptional contributions. Our tiered rewards system is designed to incentivize developers to push the boundaries of innovation and drive the Swarms ecosystem forward. Here's how the rewards are structured:
"},{"location":"corporate/swarms_bounty_system/#tier-1-bug-fixes-and-minor-enhancements","title":"Tier 1: Bug Fixes and Minor Enhancements","text":"Cash Reward: $50 - $150. Stock Reward: N/A. This tier covers minor bug fixes, documentation improvements, and small enhancements to existing features. While these contributions may seem insignificant, they play a crucial role in maintaining the stability and usability of our products.
"},{"location":"corporate/swarms_bounty_system/#tier-2-moderate-enhancements-and-new-features","title":"Tier 2: Moderate Enhancements and New Features","text":"Cash Reward: $151 - $300. Stock Reward: 10+. This tier encompasses moderate enhancements to existing features, as well as the implementation of new, non-critical features. Contributions in this tier demonstrate a deeper understanding of the project's architecture and a commitment to improving the overall user experience.
"},{"location":"corporate/swarms_bounty_system/#tier-3-major-features-and-groundbreaking-innovations","title":"Tier 3: Major Features and Groundbreaking Innovations","text":"Cash Reward: $301+. Stock Reward: 25+. This tier is reserved for truly exceptional contributions that have the potential to revolutionize the Swarms ecosystem. Major feature additions, innovative architectural improvements, and groundbreaking new projects fall under this category. Developers who contribute at this level will be recognized as thought leaders and pioneers in their respective fields.
It's important to note that the cash and stock rewards are subject to change based on the project's requirements, complexity, and overall impact. Additionally, we may introduce special bounties with higher reward tiers for particularly challenging or critical projects.
"},{"location":"corporate/swarms_bounty_system/#the-benefits-of-contributing","title":"The Benefits of Contributing","text":"Participating in the Swarms Bounty System offers numerous benefits beyond the financial incentives. By contributing to our open source projects, you'll have the opportunity to:
Expand Your Skills: Working on real-world projects with diverse challenges will help you hone your existing skills and acquire new ones, making you a more versatile and valuable developer.
Build Your Portfolio: Your contributions will become part of your professional portfolio, showcasing your expertise and dedication to the open source community.
Network with Industry Experts: Collaborate with our team of seasoned developers and gain invaluable insights and mentorship from industry leaders.
Shape the Future: Your contributions will directly impact the direction and evolution of the Swarms ecosystem, shaping the future of our products and services.
Gain Recognition: Stand out in the crowded field of software development by having your contributions acknowledged and celebrated by the Swarms community.
The Swarms Bounty System is more than just a program; it's a movement that embraces the spirit of open source and fosters a culture of collaboration, innovation, and excellence. By joining our ranks, you'll become part of a vibrant community of developers who share a passion for pushing the boundaries of what's possible.
Whether you're a seasoned veteran or a newcomer eager to make your mark, the Swarms Bounty System offers a unique opportunity to contribute to cutting-edge projects, earn rewards, and shape the future of software development.
So, what are you waiting for? Explore our bounties, find your niche, and start contributing today. Together, we can build a brighter, more innovative future for the Swarms ecosystem and the entire software development community.
Join the swarm community now:
"},{"location":"corporate/swarms_bounty_system/#resources","title":"Resources","text":"Welcome to the comprehensive Swarms Examples Index! This curated collection showcases the power and versatility of the Swarms framework for building intelligent multi-agent systems. Whether you're a beginner looking to get started or an advanced developer seeking complex implementations, you'll find practical examples to accelerate your AI development journey.
"},{"location":"examples/#what-is-swarms","title":"What is Swarms?","text":"Swarms is a cutting-edge framework for creating sophisticated multi-agent AI systems that can collaborate, reason, and solve complex problems together. From single intelligent agents to coordinated swarms of specialized AI workers, Swarms provides the tools and patterns you need to build the next generation of AI applications.
"},{"location":"examples/#what-youll-find-here","title":"What You'll Find Here","text":"This index organizes 100+ production-ready examples from our Swarms Examples Repository and the main Swarms repository, covering:
Single Agent Systems: From basic implementations to advanced reasoning agents
Multi-Agent Architectures: Collaborative swarms, hierarchical systems, and experimental topologies
Industry Applications: Real-world use cases across finance, healthcare, security, and more
Integration Examples: Connect with popular AI models, tools, and frameworks
Advanced Patterns: RAG systems, function calling, MCP integration, and more
New to Swarms? Start with the Easy Example under Single Agent Examples \u2192 Core Agents.
Looking for comprehensive tutorials? Check out The Swarms Cookbook for detailed walkthroughs and advanced patterns.
Want to see real-world applications? Explore the Industry Applications section to see how Swarms solves practical problems.
"},{"location":"examples/#quick-navigation","title":"Quick Navigation","text":"Single Agent Examples - Individual AI agents with various capabilities
Multi-Agent Examples - Collaborative systems and swarm architectures
Additional Resources - Community links and support channels
Github
Discord (https://t.co/zlLe07AqUX)
Telegram (https://t.co/dSRy143zQv)
X Community (https://x.com/i/communities/1875452887414804745)
The Swarms framework provides powerful real-time streaming capabilities for agents, allowing you to see responses being generated token by token as they're produced by the language model. This creates a more engaging and interactive experience, especially useful for long-form content generation, debugging, or when you want to provide immediate feedback to users.
"},{"location":"examples/agent_stream/#installation","title":"Installation","text":"Install the swarms package using pip:
pip install -U swarms\n
"},{"location":"examples/agent_stream/#basic-setup","title":"Basic Setup","text":"WORKSPACE_DIR=\"agent_workspace\"\nOPENAI_API_KEY=\"\"\n
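Once those values are defined, they can be read at startup. A minimal stdlib-only sketch — it assumes the .env values above have been exported into the process environment (by your shell, or with a loader such as python-dotenv; swarms may also pick them up automatically):

```python
import os

# Assumption: the .env values above are present in the process environment
# (exported by your shell, or loaded with a tool such as python-dotenv).
workspace_dir = os.environ.get("WORKSPACE_DIR", "agent_workspace")
api_key = os.environ.get("OPENAI_API_KEY", "")

if not api_key:
    # Warn early instead of failing at the first model call.
    print("Warning: OPENAI_API_KEY is not set; model calls will fail.")
```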
"},{"location":"examples/agent_stream/#step-by-step","title":"Step by Step","text":"Install and put your keys in .env
Turn on streaming in Agent() with streaming_on=True
Optional: set print_on=True if you want pretty-printed output; if not, it will print normally
from swarms import Agent\n\n# Enable real-time streaming\nagent = Agent(\n agent_name=\"StoryAgent\",\n model_name=\"gpt-4o-mini\",\n streaming_on=True, # \ud83d\udd25 This enables real streaming!\n max_loops=1,\n print_on=True, # By default, it's False for raw streaming!\n)\n\n# This will now stream in real-time with a beautiful UI!\nresponse = agent.run(\"Tell me a detailed story about humanity colonizing the stars\")\nprint(response)\n
"},{"location":"examples/agent_stream/#connect-with-us","title":"Connect With Us","text":"If you'd like technical support, join our Discord below and follow our Twitter for the latest updates!
Platform Link Description \ud83d\udcda Documentation docs.swarms.world Official documentation and guides \ud83d\udcdd Blog Medium Latest updates and technical articles \ud83d\udcac Discord Join Discord Live chat and community support \ud83d\udc26 Twitter @kyegomez Latest news and announcements \ud83d\udc65 LinkedIn The Swarm Corporation Professional network and updates \ud83d\udcfa YouTube Swarms Channel Tutorials and demos \ud83c\udfab Events Sign up here Join our community events"},{"location":"examples/cookbook_index/","title":"Swarms Cookbook Examples Index","text":"This index provides a categorized list of examples and tutorials for using the Swarms Framework across different industries. Each example demonstrates practical applications and implementations using the framework.
"},{"location":"examples/cookbook_index/#finance-trading","title":"Finance & Trading","text":"Name Description Link Tickr-Agent Financial analysis agent for stock market data using multithreaded processing and AI integration View Example CryptoAgent Real-time cryptocurrency data analysis and insights using CoinGecko integration View Example 10-K Analysis (Custom) Detailed analysis of SEC 10-K reports using specialized agents View Example 10-K Analysis (AgentRearrange) Mixed sequential and parallel analysis of 10-K reports View Example"},{"location":"examples/cookbook_index/#healthcare-medical","title":"Healthcare & Medical","text":"Name Description Link MedInsight Pro Medical research summarization and analysis using AI-driven agents View Example Athletics Diagnosis Diagnosis and treatment system for extreme athletics using AgentRearrange View Example"},{"location":"examples/cookbook_index/#marketing-content","title":"Marketing & Content","text":"Name Description Link NewsAgent Real-time news aggregation and summarization for business intelligence View Example Social Media Marketing Spreadsheet-based content generation for multi-platform marketing View Example"},{"location":"examples/cookbook_index/#accounting-finance-operations","title":"Accounting & Finance Operations","text":"Name Description Link Accounting Agents Multi-agent system for financial projections and risk assessment View Example"},{"location":"examples/cookbook_index/#workshops-tutorials","title":"Workshops & Tutorials","text":"Name Description Link GPTuesday Event Example of creating promotional content for tech events View Example"},{"location":"examples/cookbook_index/#additional-resources","title":"Additional Resources","text":"Platform Link Description \ud83d\udcda Documentation docs.swarms.world Official documentation and guides \ud83d\udcdd Blog Medium Latest updates and technical articles \ud83d\udcac Discord Join Discord Live chat and community support \ud83d\udc26 Twitter @kyegomez Latest 
news and announcements \ud83d\udc65 LinkedIn The Swarm Corporation Professional network and updates \ud83d\udcfa YouTube Swarms Channel Tutorials and demos \ud83c\udfab Events Sign up here Join our community events"},{"location":"examples/cookbook_index/#contributing","title":"Contributing","text":"We welcome contributions! If you have an example or tutorial you'd like to add, please check our contribution guidelines.
"},{"location":"examples/cookbook_index/#license","title":"License","text":"This project is licensed under the MIT License - see the LICENSE file for details.
"},{"location":"examples/paper_implementations/","title":"Multi-Agent Paper Implementations","text":"At Swarms, we are passionate about democratizing access to cutting-edge multi-agent research and making advanced AI collaboration accessible to everyone. Our mission is to bridge the gap between academic research and practical implementation by providing production-ready, open-source implementations of the most impactful multi-agent research papers.
"},{"location":"examples/paper_implementations/#why-multi-agent-research-matters","title":"Why Multi-Agent Research Matters","text":"Multi-agent systems represent the next evolution in artificial intelligence, moving beyond single-agent limitations to harness the power of collective intelligence. These systems can:
We believe that the best way to advance the field is through practical implementation and real-world validation. Our approach includes:
Faithful Reproduction: Implementing research papers with high fidelity to original methodologies
Production Enhancement: Adding enterprise-grade features like error handling, monitoring, and scalability
Open Source Commitment: Making all implementations freely available to the research community
Continuous Improvement: Iterating on implementations based on community feedback and new research
This documentation showcases our comprehensive collection of multi-agent research implementations, including:
Academic Paper Implementations: Direct implementations of published research papers
Enhanced Frameworks: Production-ready versions with additional features and optimizations
Research Compilations: Curated lists of influential multi-agent papers and resources
Practical Examples: Ready-to-use code examples and tutorials
Whether you're a researcher looking to validate findings, a developer building production systems, or a student learning about multi-agent AI, you'll find valuable resources here to advance your work.
"},{"location":"examples/paper_implementations/#join-the-multi-agent-revolution","title":"Join the Multi-Agent Revolution","text":"We invite you to explore these implementations, contribute to our research efforts, and help shape the future of collaborative AI. Together, we can unlock the full potential of multi-agent systems and create AI that truly works as a team.
"},{"location":"examples/paper_implementations/#implemented-research-papers","title":"Implemented Research Papers","text":"Paper Name Description Original Paper Implementation Status Key Features MALT (Multi-Agent Learning Task) A sophisticated orchestration framework that coordinates multiple specialized AI agents to tackle complex tasks through structured conversations. arXiv:2412.01928swarms.structs.malt
\u2705 Complete Creator-Verifier-Refiner architecture, structured conversations, reliability guarantees MAI-DxO (MAI Diagnostic Orchestrator) An open-source implementation of Microsoft Research's \"Sequential Diagnosis with Language Models\" paper, simulating a virtual panel of physician-agents for iterative medical diagnosis. Microsoft Research Paper GitHub Repository \u2705 Complete Cost-effective medical diagnosis, physician-agent panel, iterative refinement AI-CoScientist A multi-agent AI framework for collaborative scientific research, implementing the \"Towards an AI Co-Scientist\" methodology with tournament-based hypothesis evolution. \"Towards an AI Co-Scientist\" Paper GitHub Repository \u2705 Complete Tournament-based selection, peer review systems, hypothesis evolution, Elo rating system Mixture of Agents (MoA) A sophisticated multi-agent architecture that implements parallel processing with iterative refinement, combining diverse expert agents for comprehensive analysis. Multi-agent collaboration concepts swarms.structs.moa
\u2705 Complete Parallel processing, expert agent combination, iterative refinement, state-of-the-art performance Deep Research Swarm A production-grade research system that conducts comprehensive analysis across multiple domains using parallel processing and advanced AI agents. Research methodology swarms.structs.deep_research_swarm
\u2705 Complete Parallel search processing, multi-agent coordination, information synthesis, concurrent execution Agent-as-a-Judge An evaluation framework that uses agents to evaluate other agents, implementing the \"Agent-as-a-Judge: Evaluate Agents with Agents\" methodology. arXiv:2410.10934 swarms.agents.agent_judge
\u2705 Complete Agent evaluation, quality assessment, automated judging, performance metrics"},{"location":"examples/paper_implementations/#additional-research-resources","title":"Additional Research Resources","text":""},{"location":"examples/paper_implementations/#multi-agent-papers-compilation","title":"Multi-Agent Papers Compilation","text":"We maintain a comprehensive list of multi-agent research papers at: awesome-multi-agent-papers
"},{"location":"examples/paper_implementations/#research-lists","title":"Research Lists","text":"Our research compilation includes:
Projects: ModelScope-Agent, Gorilla, BMTools, LMQL, Langchain, MetaGPT, AutoGPT, and more
Research Papers: BOLAA, ToolLLM, Communicative Agents, Mind2Web, Voyager, Tree of Thoughts, and many others
Blog Articles: Latest insights and developments in autonomous agents
Talks: Presentations from leading researchers like Geoffrey Hinton and Andrej Karpathy
The MALT implementation provides:
Three-Agent Architecture: Creator, Verifier, and Refiner agents
Structured Workflow: Coordinated task execution with conversation history
Reliability Features: Error handling, validation, and quality assurance
Extensibility: Custom agent integration and configuration options
The MAI Diagnostic Orchestrator features:
Virtual Physician Panel: Multiple specialized medical agents
Cost Optimization: Efficient diagnostic workflows
Iterative Refinement: Continuous improvement of diagnoses
Medical Expertise: Domain-specific knowledge and reasoning
The AI-CoScientist implementation includes:
Tournament-Based Selection: Elo rating system for hypothesis ranking
Peer Review System: Comprehensive evaluation of scientific proposals
Hypothesis Evolution: Iterative refinement based on feedback
Diversity Control: Proximity analysis to maintain hypothesis variety
The MoA architecture provides:
Parallel Processing: Multiple agents working simultaneously
Expert Specialization: Domain-specific agent capabilities
Iterative Refinement: Continuous improvement through collaboration
State-of-the-Art Performance: Achieving superior results through collective intelligence
We welcome contributions to implement additional research papers! If you'd like to contribute:
If you use any of these implementations in your research, please cite the original papers and the Swarms framework:
@misc{SWARMS_2022,\n author = {Gomez, Kye and Pliny and More, Harshal and Swarms Community},\n title = {{Swarms: Production-Grade Multi-Agent Infrastructure Platform}},\n year = {2022},\n howpublished = {\\url{https://github.com/kyegomez/swarms}},\n note = {Documentation available at \\url{https://docs.swarms.world}},\n version = {latest}\n}\n
"},{"location":"examples/paper_implementations/#community","title":"Community","text":"Join our community to stay updated on the latest multi-agent research implementations:
Discord: Join our community
Documentation: docs.swarms.world
GitHub: kyegomez/swarms
Research Papers: awesome-multi-agent-papers
The Swarms framework is a powerful multi-agent orchestration platform that enables developers to build sophisticated AI agent systems. This documentation showcases the extensive ecosystem of templates, applications, and tools built on the Swarms framework, organized by industry and application type.
\ud83d\udd17 Main Repository: Swarms Framework
"},{"location":"examples/templates/#healthcare-medical-applications","title":"\ud83c\udfe5 Healthcare & Medical Applications","text":""},{"location":"examples/templates/#medical-diagnosis-analysis","title":"Medical Diagnosis & Analysis","text":"Name Description Type Repository MRI-Swarm Multi-agent system for MRI image analysis and diagnosis Medical Imaging Healthcare DermaSwarm Dermatology-focused agent swarm for skin condition analysis Medical Diagnosis Healthcare Multi-Modal-XRAY-Diagnosis X-ray diagnosis using multi-modal AI agents Medical Imaging Healthcare Open-MAI-Dx-Orchestrator Medical AI diagnosis orchestration platform Medical Platform Healthcare radiology-swarm Radiology-focused multi-agent system Medical Imaging Healthcare"},{"location":"examples/templates/#medical-operations-administration","title":"Medical Operations & Administration","text":"Name Description Type Repository MedicalCoderSwarm Medical coding automation using agent swarms Medical Coding Healthcare pharma-swarm Pharmaceutical research and development agents Pharmaceutical Healthcare MedGuard Medical data security and compliance system Medical Security Healthcare MedInsight-Pro Advanced medical insights and analytics platform Medical Analytics Healthcare"},{"location":"examples/templates/#financial-services-trading","title":"\ud83d\udcb0 Financial Services & Trading","text":""},{"location":"examples/templates/#trading-investment","title":"Trading & Investment","text":"Name Description Type Repository automated-crypto-fund Automated cryptocurrency trading fund management Crypto Trading Finance CryptoAgent Cryptocurrency analysis and trading agent Crypto Trading Finance AutoHedge Automated hedging strategies implementation Risk Management Finance BackTesterAgent Trading strategy backtesting automation Trading Tools Finance ForexTreeSwarm Forex trading decision tree swarm system Forex Trading Finance HTX-Swarm HTX exchange integration and trading automation Crypto Exchange 
Finance"},{"location":"examples/templates/#financial-analysis-management","title":"Financial Analysis & Management","text":"Name Description Type Repository TickrAgent Stock ticker analysis and monitoring agent Stock Analysis Finance Open-Aladdin Open-source financial risk management system Risk Management Finance CryptoTaxSwarm Cryptocurrency tax calculation and reporting Tax Management Finance"},{"location":"examples/templates/#insurance-lending","title":"Insurance & Lending","text":"Name Description Type Repository InsuranceSwarm Insurance claim processing and underwriting Insurance Finance MortgageUnderwritingSwarm Automated mortgage underwriting system Lending Finance"},{"location":"examples/templates/#research-development","title":"\ud83d\udd2c Research & Development","text":""},{"location":"examples/templates/#scientific-research","title":"Scientific Research","text":"Name Description Type Repository AI-CoScientist AI research collaboration platform Research Platform Science auto-ai-research-team Automated AI research team coordination Research Automation Science Research-Paper-Writer-Swarm Automated research paper writing system Academic Writing Science"},{"location":"examples/templates/#mathematical-analytical","title":"Mathematical & Analytical","text":"Name Description Type Repository Generalist-Mathematician-Swarm Mathematical problem-solving agent swarm Mathematics Science"},{"location":"examples/templates/#business-marketing","title":"\ud83d\udcbc Business & Marketing","text":""},{"location":"examples/templates/#marketing-content","title":"Marketing & Content","text":"Name Description Type Repository Marketing-Swarm-Template Marketing campaign automation template Marketing Automation Business Multi-Agent-Marketing-Course Educational course on multi-agent marketing Marketing Education Business NewsAgent News aggregation and analysis agent News Analysis Business"},{"location":"examples/templates/#legal-services","title":"Legal Services","text":"Name 
Description Type Repository Legal-Swarm-Template Legal document processing and analysis Legal Technology Business"},{"location":"examples/templates/#development-tools-platforms","title":"\ud83d\udee0\ufe0f Development Tools & Platforms","text":""},{"location":"examples/templates/#core-platforms-operating-systems","title":"Core Platforms & Operating Systems","text":"Name Description Type Repository AgentOS Operating system for AI agents Agent Platform Development swarm-ecosystem Complete ecosystem for swarm development Ecosystem Platform Development AgentAPIProduction Production-ready agent API system API Platform Development"},{"location":"examples/templates/#development-tools-utilities","title":"Development Tools & Utilities","text":"Name Description Type Repository DevSwarm Development-focused agent swarm Development Tools Development FluidAPI Dynamic API generation and management API Tools Development OmniParse Universal document parsing system Document Processing Development doc-master Documentation generation and management Documentation Tools Development"},{"location":"examples/templates/#templates-examples","title":"Templates & Examples","text":"Name Description Type Repository Multi-Agent-Template-App Template application for multi-agent systems Template Development swarms-examples Collection of Swarms framework examples Examples Development Phala-Deployment-Template Deployment template for Phala Network Deployment Template Development"},{"location":"examples/templates/#educational-resources","title":"\ud83d\udcda Educational Resources","text":""},{"location":"examples/templates/#courses-guides","title":"Courses & Guides","text":"Name Description Type Repository Enterprise-Grade-Agents-Course Comprehensive course on enterprise AI agents Educational Course Education Agents-Beginner-Guide Beginner's guide to AI agents Educational Guide Education"},{"location":"examples/templates/#testing-evaluation","title":"Testing & Evaluation","text":"Name Description Type 
Repository swarms-evals Evaluation framework for swarm systems Testing Framework Development"},{"location":"examples/templates/#getting-started","title":"\ud83d\ude80 Getting Started","text":""},{"location":"examples/templates/#prerequisites","title":"Prerequisites","text":"Python 3.8+
Basic understanding of AI agents and multi-agent systems
Familiarity with the Swarms framework
pip install swarms\n
"},{"location":"examples/templates/#quick-start","title":"Quick Start","text":"Choose a template from the categories above
Clone the repository
Follow the setup instructions in the README
Customize the agents for your specific use case
The Swarms ecosystem is constantly growing. To contribute:
Join our community of agent engineers and researchers for technical support, cutting-edge updates, and exclusive access to world-class agent engineering insights!
Platform Description Link \ud83c\udfe0 Main Repository Swarms Framework GitHub \ud83c\udfe2 Organization The Swarm Corporation GitHub Org \ud83c\udf10 Website Official project website swarms.ai \ud83d\udcda Documentation Official documentation and guides docs.swarms.world \ud83d\udcdd Blog Latest updates and technical articles Medium \ud83d\udcac Discord Live chat and community support Join Discord \ud83d\udc26 Twitter Latest news and announcements @kyegomez \ud83d\udc65 LinkedIn Professional network and updates The Swarm Corporation \ud83d\udcfa YouTube Tutorials and demos Swarms Channel \ud83c\udfab Events Join our community events Sign up here \ud83d\ude80 Onboarding Session Get onboarded with Kye Gomez, creator and lead maintainer of Swarms Book Session"},{"location":"examples/templates/#statistics","title":"\ud83d\udcca Statistics","text":"Total Projects: 35+
Industries Covered: Healthcare, Finance, Research, Business, Development
Project Types: Templates, Applications, Tools, Educational Resources
Active Development: Continuous updates and new additions
Welcome to the Swarms ecosystem. Click any tile below to explore our products, community, documentation, and social platforms.
\ud83d\udde3\ufe0f Swarms Chat \ud83d\udecd\ufe0f Swarms Marketplace \ud83d\udcda Swarms API Docs \ud83d\ude80 Swarms Startup Program \ud83d\udcbb GitHub: Swarms (Python) \ud83e\udd80 GitHub: Swarms (Rust) \ud83d\udcac Join Our Discord \ud83d\udcf1 Telegram Group \ud83d\udc26 Twitter / X \u270d\ufe0f Swarms Blog on Medium"},{"location":"governance/main/#quick-summary","title":"\ud83d\udca1 Quick Summary","text":"Category Link API Docs docs.swarms.world GitHub kyegomez/swarms GitHub (Rust) The-Swarm-Corporation/swarms-rs Chat UI swarms.world/platform/chat Marketplace swarms.world Startup App Apply Here Discord Join Now Telegram Group Chat Twitter/X @swarms_corp Blog medium.com/@kyeg\ud83d\udc1d Swarms is building the agentic internet. Join the movement and build the future with us.
"},{"location":"guides/agent_evals/","title":"Agent evals","text":""},{"location":"guides/agent_evals/#understanding-agent-evaluation-mechanisms","title":"Understanding Agent Evaluation Mechanisms","text":"Agent evaluation mechanisms play a crucial role in ensuring that autonomous agents, particularly in multi-agent systems, perform their tasks effectively and efficiently. This blog delves into the intricacies of agent evaluation, the importance of accuracy tracking, and the methodologies used to measure and visualize agent performance. We'll use Mermaid graphs to provide clear visual representations of these processes.
"},{"location":"guides/agent_evals/#1-introduction-to-agent-evaluation-mechanisms","title":"1. Introduction to Agent Evaluation Mechanisms","text":"Agent evaluation mechanisms refer to the processes and criteria used to assess the performance of agents within a system. These mechanisms are essential for:
To effectively evaluate agents, several components and metrics are considered:
"},{"location":"guides/agent_evals/#a-performance-metrics","title":"a. Performance Metrics","text":"These are quantitative measures used to assess how well an agent is performing. Common performance metrics include:
Evaluation criteria define the standards or benchmarks against which agent performance is measured. These criteria are often task-specific and may include:
The evaluation process involves several steps, which can be visualized using Mermaid graphs:
"},{"location":"guides/agent_evals/#a-define-evaluation-metrics","title":"a. Define Evaluation Metrics","text":"The first step is to define the metrics that will be used to evaluate the agent. This involves identifying the key performance indicators (KPIs) relevant to the agent's tasks.
graph TD\n A[Define Evaluation Metrics] --> B[Identify KPIs]\n B --> C[Accuracy]\n B --> D[Precision and Recall]\n B --> E[F1 Score]\n B --> F[Response Time]
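As a minimal sketch of how these KPIs might be computed in practice (pure Python over hypothetical binary-labeled predictions; none of these names come from the Swarms API):

```python
# Illustrative sketch: computing common evaluation KPIs from labeled
# predictions. `y_true` / `y_pred` are hypothetical binary label lists.

def compute_kpis(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

print(compute_kpis([1, 0, 1, 1, 0], [1, 0, 0, 1, 1]))
```

Response time would be measured separately (e.g., wall-clock time per agent call) rather than derived from labels.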
"},{"location":"guides/agent_evals/#b-collect-data","title":"b. Collect Data","text":"Data collection involves gathering information on the agent's performance. This data can come from logs, user feedback, or direct observations.
graph TD\n A[Collect Data] --> B[Logs]\n A --> C[User Feedback]\n A --> D[Direct Observations]
"},{"location":"guides/agent_evals/#c-analyze-performance","title":"c. Analyze Performance","text":"Once data is collected, it is analyzed to assess the agent's performance against the defined metrics. This step may involve statistical analysis, machine learning models, or other analytical techniques.
graph TD\n A[Analyze Performance] --> B[Statistical Analysis]\n A --> C[Machine Learning Models]\n A --> D[Other Analytical Techniques]
"},{"location":"guides/agent_evals/#d-generate-reports","title":"d. Generate Reports","text":"After analysis, performance reports are generated. These reports provide insights into how well the agent is performing and identify areas for improvement.
graph TD\n A[Generate Reports] --> B[Performance Insights]\n B --> C[Identify Areas for Improvement]
"},{"location":"guides/agent_evals/#4-tracking-agent-accuracy","title":"4. Tracking Agent Accuracy","text":"Accuracy tracking is a critical aspect of agent evaluation. It involves measuring how often an agent's actions or decisions are correct. The following steps outline the process of tracking agent accuracy:
"},{"location":"guides/agent_evals/#a-define-correctness-criteria","title":"a. Define Correctness Criteria","text":"The first step is to define what constitutes a correct action or decision for the agent.
graph TD\n A[Define Correctness Criteria] --> B[Task-Specific Standards]\n B --> C[Action Accuracy]\n B --> D[Decision Accuracy]
"},{"location":"guides/agent_evals/#b-monitor-agent-actions","title":"b. Monitor Agent Actions","text":"Agents' actions are continuously monitored to track their performance. This monitoring can be done in real-time or through periodic evaluations.
graph TD\n A[Monitor Agent Actions] --> B[Real-Time Monitoring]\n A --> C[Periodic Evaluations]
"},{"location":"guides/agent_evals/#c-compare-against-correctness-criteria","title":"c. Compare Against Correctness Criteria","text":"Each action or decision made by the agent is compared against the defined correctness criteria to determine its accuracy.
graph TD\n A[Compare Against Correctness Criteria] --> B[Evaluate Each Action]\n B --> C[Correct or Incorrect?]
"},{"location":"guides/agent_evals/#d-calculate-accuracy-metrics","title":"d. Calculate Accuracy Metrics","text":"Accuracy metrics are calculated based on the comparison results. These metrics provide a quantitative measure of the agent's accuracy.
graph TD\n A[Calculate Accuracy Metrics] --> B[Accuracy Percentage]\n A --> C[Error Rate]
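The bookkeeping behind accuracy percentage and error rate can be sketched as a small running tracker (this class is an illustration, not part of Swarms):

```python
# Hypothetical running tracker for accuracy percentage and error rate.

class AccuracyTracker:
    def __init__(self):
        self.correct = 0
        self.total = 0

    def record(self, was_correct: bool) -> None:
        """Log one evaluated action or decision."""
        self.total += 1
        self.correct += int(was_correct)

    @property
    def accuracy(self) -> float:
        return self.correct / self.total if self.total else 0.0

    @property
    def error_rate(self) -> float:
        return 1.0 - self.accuracy if self.total else 0.0

tracker = AccuracyTracker()
for outcome in [True, True, False, True]:
    tracker.record(outcome)
print(tracker.accuracy, tracker.error_rate)  # 0.75 0.25
```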
"},{"location":"guides/agent_evals/#5-measuring-agent-accuracy","title":"5. Measuring Agent Accuracy","text":"Measuring agent accuracy involves several steps and considerations:
"},{"location":"guides/agent_evals/#a-data-labeling","title":"a. Data Labeling","text":"To measure accuracy, the data used for evaluation must be accurately labeled. This involves annotating the data with the correct actions or decisions.
graph TD\n A[Data Labeling] --> B[Annotate Data with Correct Actions]\n B --> C[Ensure Accuracy of Labels]
"},{"location":"guides/agent_evals/#b-establish-baseline-performance","title":"b. Establish Baseline Performance","text":"A baseline performance level is established by evaluating a sample set of data. This baseline serves as a reference point for measuring improvements or declines in accuracy.
graph TD\n A[Establish Baseline Performance] --> B[Evaluate Sample Data]\n B --> C[Set Performance Benchmarks]
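One way to make the baseline concrete is to record the sample-set accuracy and classify later runs against it (the tolerance value here is an illustrative assumption):

```python
# Sketch: establishing a baseline accuracy from a labeled sample, then
# comparing later evaluation runs against it. Thresholds are illustrative.

def establish_baseline(sample_results):
    """sample_results: list of booleans (correct / incorrect)."""
    return sum(sample_results) / len(sample_results)

def compare_to_baseline(current_accuracy, baseline, tolerance=0.05):
    if current_accuracy < baseline - tolerance:
        return "regression"
    if current_accuracy > baseline + tolerance:
        return "improvement"
    return "stable"

baseline = establish_baseline([True] * 85 + [False] * 15)  # 0.85
print(compare_to_baseline(0.78, baseline))  # regression
```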
"},{"location":"guides/agent_evals/#c-regular-evaluations","title":"c. Regular Evaluations","text":"Agents are regularly evaluated to measure their accuracy over time. This helps in tracking performance trends and identifying any deviations from the expected behavior.
graph TD\n A[Regular Evaluations] --> B[Track Performance Over Time]\n B --> C[Identify Performance Trends]\n B --> D[Detect Deviations]
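A minimal sketch of the deviation-detection step, assuming periodic accuracy scores are stored in order (the z-score threshold is an assumption, not a Swarms feature):

```python
# Sketch: flag the latest periodic accuracy score if it deviates sharply
# from the historical mean (z-score heuristic; threshold is illustrative).
from statistics import mean, stdev

def detect_deviation(scores, z_threshold=2.0):
    """scores: chronological list of accuracy values; checks the last one."""
    history, latest = scores[:-1], scores[-1]
    if len(history) < 2:
        return False  # not enough history to judge
    sd = stdev(history)
    if sd == 0:
        return latest != history[0]
    return abs(latest - mean(history)) / sd > z_threshold
```

For example, a sudden drop from a steady ~0.90 accuracy to 0.50 would be flagged, while normal fluctuation would not.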
"},{"location":"guides/agent_evals/#d-feedback-and-improvement","title":"d. Feedback and Improvement","text":"Feedback from evaluations is used to improve the agent's performance. This may involve retraining the agent, adjusting its algorithms, or refining its decision-making processes.
graph TD\n A[Feedback and Improvement] --> B[Use Evaluation Feedback]\n B --> C[Retrain Agent]\n B --> D[Adjust Algorithms]\n B --> E[Refine Decision-Making Processes]
"},{"location":"guides/agent_evals/#6-visualizing-agent-evaluation-with-mermaid-graphs","title":"6. Visualizing Agent Evaluation with Mermaid Graphs","text":"Mermaid graphs provide a clear and concise way to visualize the agent evaluation process. Here are some examples of how Mermaid graphs can be used:
"},{"location":"guides/agent_evals/#a-overall-evaluation-process","title":"a. Overall Evaluation Process","text":"graph TD\n A[Define Evaluation Metrics] --> B[Collect Data]\n B --> C[Analyze Performance]\n C --> D[Generate Reports]
"},{"location":"guides/agent_evals/#b-accuracy-tracking","title":"b. Accuracy Tracking","text":"graph TD\n A[Define Correctness Criteria] --> B[Monitor Agent Actions]\n B --> C[Compare Against Correctness Criteria]\n C --> D[Calculate Accuracy Metrics]
"},{"location":"guides/agent_evals/#c-continuous-improvement-cycle","title":"c. Continuous Improvement Cycle","text":"graph TD\n A[Regular Evaluations] --> B[Track Performance Over Time]\n B --> C[Identify Performance Trends]\n C --> D[Detect Deviations]\n D --> E[Feedback and Improvement]\n E --> A
"},{"location":"guides/agent_evals/#7-case-study-evaluating-a-chatbot-agent","title":"7. Case Study: Evaluating a Chatbot Agent","text":"To illustrate the agent evaluation process, let's consider a case study involving a chatbot agent designed to assist customers in an e-commerce platform.
"},{"location":"guides/agent_evals/#a-define-evaluation-metrics_1","title":"a. Define Evaluation Metrics","text":"For the chatbot, key performance metrics might include:
Data is collected from chatbot interactions, including user queries, responses, and feedback.
"},{"location":"guides/agent_evals/#c-analyze-performance_1","title":"c. Analyze Performance","text":"Performance analysis involves comparing the chatbot's responses against a predefined set of correct responses and calculating accuracy metrics.
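This comparison step can be sketched as scoring chatbot responses against a gold set of correct answers (the normalization and the example answers are assumptions made for illustration):

```python
# Illustrative sketch of chatbot accuracy scoring against a gold set.

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so trivial differences don't count."""
    return " ".join(text.lower().split())

def score_chatbot(responses, gold):
    """responses/gold: dicts mapping query -> answer text."""
    correct = sum(
        1 for query, expected in gold.items()
        if query in responses and normalize(responses[query]) == normalize(expected)
    )
    return correct / len(gold)

gold = {"return policy?": "You have 30 days to return items."}
responses = {"return policy?": "you have 30 days  to return items."}
print(score_chatbot(responses, gold))  # 1.0
```

Exact-match scoring is only a starting point; in practice a semantic-similarity measure or human review would catch paraphrased-but-correct answers.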
"},{"location":"guides/agent_evals/#d-generate-reports_1","title":"d. Generate Reports","text":"Reports are generated to provide insights into the chatbot's performance, highlighting areas where it excels and areas needing improvement.
"},{"location":"guides/agent_evals/#8-best-practices-for-agent-evaluation","title":"8. Best Practices for Agent Evaluation","text":"Here are some best practices to ensure effective agent evaluation:
"},{"location":"guides/agent_evals/#a-use-realistic-scenarios","title":"a. Use Realistic Scenarios","text":"Evaluate agents in realistic scenarios that closely mimic real-world conditions. This ensures that the evaluation results are relevant and applicable.
"},{"location":"guides/agent_evals/#b-continuous-monitoring","title":"b. Continuous Monitoring","text":"Continuously monitor agent performance to detect and address issues promptly. This helps in maintaining high performance levels.
"},{"location":"guides/agent_evals/#c-incorporate-user-feedback","title":"c. Incorporate User Feedback","text":"User feedback is invaluable for improving agent performance. Incorporate feedback into the evaluation process to identify and rectify shortcomings.
"},{"location":"guides/agent_evals/#d-regular-updates","title":"d. Regular Updates","text":"Regularly update the evaluation metrics and criteria to keep pace with evolving tasks and requirements.
"},{"location":"guides/agent_evals/#conclusion","title":"Conclusion","text":"Agent evaluation mechanisms are vital for ensuring the reliability, efficiency, and effectiveness of autonomous agents. By defining clear evaluation metrics, continuously monitoring performance, and using feedback for improvement, we can develop agents that consistently perform at high levels. Visualizing the evaluation process with tools like Mermaid graphs further aids in understanding and communication. Through diligent evaluation and continuous improvement, we can harness the full potential of autonomous agents in various applications.
"},{"location":"guides/financial_analysis_swarm_mm/","title":"Building a Multi-Agent System for Real-Time Financial Analysis: A Comprehensive Tutorial","text":"In this tutorial, we'll walk through the process of building a sophisticated multi-agent system for real-time financial analysis using the Swarms framework. This system is designed for financial analysts and developers who want to leverage AI and multiple data sources to gain deeper insights into stock performance, market trends, and economic indicators.
Before we dive into the code, let's briefly introduce the Swarms framework. Swarms is an innovative open-source project that simplifies the creation and management of AI agents. It's particularly well-suited for complex tasks like financial analysis, where multiple specialized agents can work together to provide comprehensive insights.
For more information and to contribute to the project, visit the Swarms GitHub repository. We highly recommend exploring the documentation for a deeper understanding of Swarms' capabilities.
Additional resources: - Swarms Discord for community discussions - Swarms Twitter for updates - Swarms Spotify for podcasts - Swarms Blog for in-depth articles - Swarms Website for an overview of the project
Now, let's break down our financial analysis system step by step.
"},{"location":"guides/financial_analysis_swarm_mm/#step-1-setting-up-the-environment","title":"Step 1: Setting Up the Environment","text":"First, install the necessary packages:
$ pip3 install -U swarms yfinance swarm_models fredapi pandas\n
First, we need to set up our environment and import the necessary libraries:
import os\nimport time\nfrom datetime import datetime, timedelta\nimport yfinance as yf\nimport requests\nfrom fredapi import Fred\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom swarms import Agent, AgentRearrange\nfrom swarm_models import OpenAIChat\nimport logging\nfrom dotenv import load_dotenv\nimport asyncio\nimport aiohttp\nfrom ratelimit import limits, sleep_and_retry\n\n# Load environment variables\nload_dotenv()\n\n# Set up logging\nlogging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')\nlogger = logging.getLogger(__name__)\n\n# API keys\nPOLYGON_API_KEY = os.getenv('POLYGON_API_KEY')\nFRED_API_KEY = os.getenv('FRED_API_KEY')\nOPENAI_API_KEY = os.getenv('OPENAI_API_KEY')\n\n# Initialize FRED client\nfred_client = Fred(api_key=FRED_API_KEY)\n\n# Polygon API base URL\nPOLYGON_BASE_URL = \"https://api.polygon.io\"\n
This section sets up our environment, imports necessary libraries, and initializes our API keys and clients. We're using dotenv
to securely manage our API keys, and we've set up logging to track the execution of our script.
To respect API rate limits, we implement rate limiting decorators:
@sleep_and_retry\n@limits(calls=5, period=60) # Adjust these values based on your Polygon API tier\nasync def call_polygon_api(session, endpoint, params=None):\n url = f\"{POLYGON_BASE_URL}{endpoint}\"\n params = params or {}\n params['apiKey'] = POLYGON_API_KEY\n async with session.get(url, params=params) as response:\n response.raise_for_status()\n return await response.json()\n\n@sleep_and_retry\n@limits(calls=120, period=60) # FRED allows 120 requests per minute\ndef call_fred_api(func, *args, **kwargs):\n return func(*args, **kwargs)\n
These decorators ensure that we don't exceed the rate limits for our API calls. The call_polygon_api
function is designed to work with asynchronous code, while call_fred_api
is a wrapper for synchronous FRED API calls.
Next, we implement functions to fetch data from various sources:
"},{"location":"guides/financial_analysis_swarm_mm/#yahoo-finance-integration","title":"Yahoo Finance Integration","text":"async def get_yahoo_finance_data(session, ticker, period=\"1d\", interval=\"1m\"):\n try:\n stock = yf.Ticker(ticker)\n hist = await asyncio.to_thread(stock.history, period=period, interval=interval)\n info = await asyncio.to_thread(lambda: stock.info)\n return hist, info\n except Exception as e:\n logger.error(f\"Error fetching Yahoo Finance data for {ticker}: {e}\")\n return None, None\n\nasync def get_yahoo_finance_realtime(session, ticker):\n try:\n stock = yf.Ticker(ticker)\n return await asyncio.to_thread(lambda: stock.fast_info)\n except Exception as e:\n logger.error(f\"Error fetching Yahoo Finance realtime data for {ticker}: {e}\")\n return None\n
These functions fetch historical and real-time data from Yahoo Finance. We use asyncio.to_thread
to run the synchronous yfinance
functions in a separate thread, allowing our main event loop to continue running.
async def get_polygon_realtime_data(session, ticker):\n try:\n trades = await call_polygon_api(session, f\"/v2/last/trade/{ticker}\")\n quotes = await call_polygon_api(session, f\"/v2/last/nbbo/{ticker}\")\n return trades, quotes\n except Exception as e:\n logger.error(f\"Error fetching Polygon.io realtime data for {ticker}: {e}\")\n return None, None\n\nasync def get_polygon_news(session, ticker, limit=10):\n try:\n news = await call_polygon_api(session, f\"/v2/reference/news\", params={\"ticker\": ticker, \"limit\": limit})\n return news.get('results', [])\n except Exception as e:\n logger.error(f\"Error fetching Polygon.io news for {ticker}: {e}\")\n return []\n
These functions fetch real-time trade and quote data, as well as news articles from Polygon.io. We use our call_polygon_api
function to make these requests, ensuring we respect rate limits.
async def get_fred_data(session, series_id, start_date, end_date):\n try:\n data = await asyncio.to_thread(call_fred_api, fred_client.get_series, series_id, start_date, end_date)\n return data\n except Exception as e:\n logger.error(f\"Error fetching FRED data for {series_id}: {e}\")\n return None\n\nasync def get_fred_realtime(session, series_ids):\n try:\n data = {}\n for series_id in series_ids:\n series = await asyncio.to_thread(call_fred_api, fred_client.get_series, series_id)\n data[series_id] = series.iloc[-1] # Get the most recent value\n return data\n except Exception as e:\n logger.error(f\"Error fetching FRED realtime data: {e}\")\n return {}\n
These functions fetch historical and real-time economic data from FRED. Again, we use asyncio.to_thread
to run the synchronous FRED API calls in a separate thread.
Now we create our specialized agents using the Swarms framework:
stock_agent = Agent(\n agent_name=\"StockAgent\",\n system_prompt=\"\"\"You are an expert stock analyst. Your task is to analyze real-time stock data and provide insights. \n Consider price movements, trading volume, and any available company information. \n Provide a concise summary of the stock's current status and any notable trends or events.\"\"\",\n llm=OpenAIChat(api_key=OPENAI_API_KEY),\n max_loops=1,\n dashboard=False,\n streaming_on=True,\n verbose=True,\n)\n\nmarket_agent = Agent(\n agent_name=\"MarketAgent\",\n system_prompt=\"\"\"You are a market analysis expert. Your task is to analyze overall market conditions using real-time data. \n Consider major indices, sector performance, and market-wide trends. \n Provide a concise summary of current market conditions and any significant developments.\"\"\",\n llm=OpenAIChat(api_key=OPENAI_API_KEY),\n max_loops=1,\n dashboard=False,\n streaming_on=True,\n verbose=True,\n)\n\nmacro_agent = Agent(\n agent_name=\"MacroAgent\",\n system_prompt=\"\"\"You are a macroeconomic analysis expert. Your task is to analyze key economic indicators and provide insights on the overall economic situation. \n Consider GDP growth, inflation rates, unemployment figures, and other relevant economic data. \n Provide a concise summary of the current economic situation and any potential impacts on financial markets.\"\"\",\n llm=OpenAIChat(api_key=OPENAI_API_KEY),\n max_loops=1,\n dashboard=False,\n streaming_on=True,\n verbose=True,\n)\n\nnews_agent = Agent(\n agent_name=\"NewsAgent\",\n system_prompt=\"\"\"You are a financial news analyst. Your task is to analyze recent news articles related to specific stocks or the overall market. \n Consider the potential impact of news events on stock prices or market trends. \n Provide a concise summary of key news items and their potential market implications.\"\"\",\n llm=OpenAIChat(api_key=OPENAI_API_KEY),\n max_loops=1,\n dashboard=False,\n streaming_on=True,\n verbose=True,\n)\n
Each agent is specialized in a different aspect of financial analysis. The system_prompt
for each agent defines its role and the type of analysis it should perform.
We then combine our specialized agents into a multi-agent system:
agents = [stock_agent, market_agent, macro_agent, news_agent]\nflow = \"StockAgent -> MarketAgent -> MacroAgent -> NewsAgent\"\n\nagent_system = AgentRearrange(agents=agents, flow=flow)\n
The flow
variable defines the order in which our agents will process information. This allows for a logical progression from specific stock analysis to broader market and economic analysis.
Now we implement our main analysis function:
async def real_time_analysis(session, ticker):\n logger.info(f\"Starting real-time analysis for {ticker}\")\n\n # Fetch real-time data\n yf_data, yf_info = await get_yahoo_finance_data(session, ticker)\n yf_realtime = await get_yahoo_finance_realtime(session, ticker)\n polygon_trades, polygon_quotes = await get_polygon_realtime_data(session, ticker)\n polygon_news = await get_polygon_news(session, ticker)\n fred_data = await get_fred_realtime(session, ['GDP', 'UNRATE', 'CPIAUCSL'])\n\n # Prepare input for the multi-agent system\n input_data = f\"\"\"\n Yahoo Finance Data:\n {yf_realtime}\n\n Recent Stock History:\n {yf_data.tail().to_string() if yf_data is not None else 'Data unavailable'}\n\n Polygon.io Trade Data:\n {polygon_trades}\n\n Polygon.io Quote Data:\n {polygon_quotes}\n\n Recent News:\n {polygon_news[:3] if polygon_news else 'No recent news available'}\n\n Economic Indicators:\n {fred_data}\n\n Analyze this real-time financial data for {ticker}. Provide insights on the stock's performance, overall market conditions, relevant economic factors, and any significant news that might impact the stock or market.\n \"\"\"\n\n # Run the multi-agent analysis\n try:\n analysis = agent_system.run(input_data)\n logger.info(f\"Analysis completed for {ticker}\")\n return analysis\n except Exception as e:\n logger.error(f\"Error during multi-agent analysis for {ticker}: {e}\")\n return f\"Error during analysis: {e}\"\n
This function fetches data from all our sources, prepares it as input for our multi-agent system, and then runs the analysis. The result is a comprehensive analysis of the stock, considering individual performance, market conditions, economic factors, and relevant news.
"},{"location":"guides/financial_analysis_swarm_mm/#step-7-implementing-advanced-use-cases","title":"Step 7: Implementing Advanced Use Cases","text":"We then implement more advanced analysis functions:
"},{"location":"guides/financial_analysis_swarm_mm/#compare-stocks","title":"Compare Stocks","text":"async def compare_stocks(session, tickers):\n results = {}\n for ticker in tickers:\n results[ticker] = await real_time_analysis(session, ticker)\n\n comparison_prompt = f\"\"\"\n Compare the following stocks based on the provided analyses:\n {results}\n\n Highlight key differences and similarities. Provide a ranking of these stocks based on their current performance and future prospects.\n \"\"\"\n\n try:\n comparison = agent_system.run(comparison_prompt)\n logger.info(f\"Stock comparison completed for {tickers}\")\n return comparison\n except Exception as e:\n logger.error(f\"Error during stock comparison: {e}\")\n return f\"Error during comparison: {e}\"\n
This function compares multiple stocks by running a real-time analysis on each and then prompting our multi-agent system to compare the results.
"},{"location":"guides/financial_analysis_swarm_mm/#sector-analysis","title":"Sector Analysis","text":"async def sector_analysis(session, sector):\n sector_stocks = {\n 'Technology': ['AAPL', 'MSFT', 'GOOGL', 'AMZN', 'NVDA'],\n 'Finance': ['JPM', 'BAC', 'WFC', 'C', 'GS'],\n 'Healthcare': ['JNJ', 'UNH', 'PFE', 'ABT', 'MRK'],\n 'Consumer Goods': ['PG', 'KO', 'PEP', 'COST', 'WMT'],\n 'Energy': ['XOM', 'CVX', 'COP', 'SLB', 'EOG']\n }\n\n if sector not in sector_stocks:\n return f\"Sector '{sector}' not found. Available sectors: {', '.join(sector_stocks.keys())}\"\n\n stocks = sector_stocks[sector][:5]\n\n sector_data = {}\n for stock in stocks:\n sector_data[stock] = await real_time_analysis(session, stock)\n\n sector_prompt = f\"\"\"\n Analyze the {sector} sector based on the following data from its top stocks:\n {sector_data}\n\n Provide insights on:\n 1. Overall sector performance\n 2. Key trends within the sector\n 3. Top performing stocks and why they're outperforming\n 4. Any challenges or opportunities facing the sector\n \"\"\"\n\n try:\n analysis = agent_system.run(sector_prompt)\n logger.info(f\"Sector analysis completed for {sector}\")\n return analysis\n except Exception as e:\n logger.error(f\"Error during sector analysis for {sector}: {e}\")\n return f\"Error during sector analysis: {e}\"\n
This function analyzes an entire sector by running real-time analysis on its top stocks and then prompting our multi-agent system to provide sector-wide insights.
"},{"location":"guides/financial_analysis_swarm_mm/#economic-impact-analysis","title":"Economic Impact Analysis","text":"async def economic_impact_analysis(session, indicator, threshold):\n # Fetch historical data for the indicator\n end_date = datetime.now().strftime('%Y-%m-%d')\n start_date = (datetime.now() - timedelta(days=365)).strftime('%Y-%m-%d')\n indicator_data = await get_fred_data(session, indicator, start_date, end_date)\n\n if indicator_data is None or len(indicator_data) < 2:\n return f\"Insufficient data for indicator {indicator}\"\n\n # Check if the latest value crosses the threshold\n latest_value = indicator_data.iloc[-1]\n previous_value = indicator_data.iloc[-2]\n crossed_threshold = (latest_value > threshold and previous_value <= threshold) or (latest_value < threshold and previous_value >= threshold)\n\n if crossed_threshold:\n impact_prompt = f\"\"\"\n The economic indicator {indicator} has crossed the threshold of {threshold}. Its current value is {latest_value}.\n\n Historical data:\n {indicator_data.tail().to_string()}\n\n Analyze the potential impacts of this change on:\n 1. Overall economic conditions\n 2. Different market sectors\n 3. Specific types of stocks (e.g., growth vs. value)\n 4. Other economic indicators\n\n Provide a comprehensive analysis of the potential consequences and any recommended actions for investors.\n \"\"\"\n\n try:\n analysis = agent_system.run(impact_prompt)\n logger.info(f\"Economic impact analysis completed for {indicator}\")\n return analysis\n except Exception as e:\n logger.error(f\"Error during economic impact analysis for {indicator}: {e}\")\n return f\"Error during economic impact analysis: {e}\"\n else:\n return f\"The {indicator} indicator has not crossed the threshold of {threshold}. Current value: {latest_value}\"\n
This function analyzes the potential impact of significant changes in economic indicators. It fetches historical data, checks if a threshold has been crossed, and if so, prompts our multi-agent system to provide a comprehensive analysis of the potential consequences.
"},{"location":"guides/financial_analysis_swarm_mm/#step-8-running-the-analysis","title":"Step 8: Running the Analysis","text":"Finally, we implement our main function to run all of our analyses:
async def main():\n async with aiohttp.ClientSession() as session:\n # Example usage\n analysis_result = await real_time_analysis(session, 'AAPL')\n print(\"Single Stock Analysis:\")\n print(analysis_result)\n\n comparison_result = await compare_stocks(session, ['AAPL', 'GOOGL', 'MSFT'])\n print(\"\\nStock Comparison:\")\n print(comparison_result)\n\n tech_sector_analysis = await sector_analysis(session, 'Technology')\n print(\"\\nTechnology Sector Analysis:\")\n print(tech_sector_analysis)\n\n gdp_impact = await economic_impact_analysis(session, 'GDP', 22000)\n print(\"\\nEconomic Impact Analysis:\")\n print(gdp_impact)\n\nif __name__ == \"__main__\":\n asyncio.run(main())\n
This main
function demonstrates how to use all of our analysis functions. It runs a single stock analysis, compares multiple stocks, performs a sector analysis, and conducts an economic impact analysis.
This tutorial has walked you through the process of building a sophisticated multi-agent system for real-time financial analysis using the Swarms framework. Here's a summary of what we've accomplished:
This system provides a powerful foundation for financial analysis, but there's always room for expansion and improvement. Here are some potential next steps:
Expand data sources: Consider integrating additional financial data providers for even more comprehensive analysis.
Enhance agent specialization: You could create more specialized agents, such as a technical analysis agent or a sentiment analysis agent for social media data.
Implement a user interface: Consider building a web interface or dashboard to make the system more user-friendly for non-technical analysts.
Add visualization capabilities: Integrate data visualization tools to help interpret complex financial data more easily.
Implement a backtesting system: Develop a system to evaluate your multi-agent system's performance on historical data.
Explore advanced AI models: The Swarms framework supports various AI models. Experiment with different models to see which performs best for your specific use case.
Implement real-time monitoring: Set up a system to continuously monitor markets and alert you to significant changes or opportunities.
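The monitoring item above can be sketched as a simple polling loop around the real_time_analysis function defined earlier; the keyword heuristic and polling interval are illustrative placeholders, not part of Swarms, and monitor() is only defined here, not run:

```python
# Hedged sketch of periodic market monitoring. should_alert() is a
# placeholder keyword heuristic; monitor() reuses the tutorial's earlier
# real_time_analysis() and aiohttp session.
import asyncio

ALERT_KEYWORDS = ("sharp decline", "surge", "crossed the threshold")  # assumptions

def should_alert(analysis_text: str) -> bool:
    """Naive keyword check over an agent's text analysis."""
    text = analysis_text.lower()
    return any(keyword in text for keyword in ALERT_KEYWORDS)

async def monitor(session, tickers, interval_seconds=300):
    """Poll each ticker periodically and flag noteworthy analyses."""
    while True:
        for ticker in tickers:
            analysis = await real_time_analysis(session, ticker)  # defined earlier
            if should_alert(str(analysis)):
                print(f"ALERT for {ticker}")
        await asyncio.sleep(interval_seconds)
```

In production you would likely replace the keyword check with a dedicated alerting agent and route alerts to email, Slack, or a dashboard.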
Remember, the Swarms framework is a powerful and flexible tool that can be adapted to a wide range of complex tasks beyond just financial analysis. We encourage you to explore the Swarms GitHub repository for more examples and inspiration.
For more in-depth discussions and community support, consider joining the Swarms Discord. You can also stay updated with the latest developments by following Swarms on Twitter.
If you're interested in learning more about AI and its applications in various fields, check out the Swarms Spotify podcast and the Swarms Blog for insightful articles and discussions.
Lastly, don't forget to visit the Swarms Website for a comprehensive overview of the project and its capabilities.
By leveraging the power of multi-agent AI systems, you're well-equipped to navigate the complex world of financial markets. Happy analyzing!
"},{"location":"guides/financial_analysis_swarm_mm/#swarm-resources","title":"Swarm Resources:","text":"In the rapidly evolving landscape of quantitative finance, the integration of artificial intelligence with financial data analysis has become increasingly crucial. This blog post will explore how to leverage the power of AI agents, specifically using the Swarms framework, to analyze financial data from various top-tier data providers. We'll demonstrate how to connect these agents with different financial APIs, enabling sophisticated analysis and decision-making processes.
"},{"location":"guides/financial_data_api/#table-of-contents","title":"Table of Contents","text":"The Swarms framework is a powerful tool for building and deploying AI agents that can interact with various data sources and perform complex analyses. In the context of financial data analysis, Swarms can be used to create intelligent agents that can process large volumes of financial data, identify patterns, and make data-driven decisions. Explore our github for examples, applications, and more.
"},{"location":"guides/financial_data_api/#setting-up-the-environment","title":"Setting Up the Environment","text":"Before we dive into connecting AI agents with financial data providers, let's set up our environment:
pip install -U swarms\n
pip install requests pandas numpy matplotlib\n
Now, let's explore how to connect AI agents using the Swarms framework with different financial data providers.
"},{"location":"guides/financial_data_api/#polygonio","title":"Polygon.io","text":"First, we'll create an AI agent that can fetch and analyze stock data from Polygon.io.
import os\nfrom swarms import Agent\nfrom swarms.models import OpenAIChat\nfrom dotenv import load_dotenv\nimport requests\nimport pandas as pd\n\nload_dotenv()\n\n# Polygon.io API setup\nPOLYGON_API_KEY = os.getenv(\"POLYGON_API_KEY\")\nPOLYGON_BASE_URL = \"https://api.polygon.io/v2\"\n\n# OpenAI API setup\nOPENAI_API_KEY = os.getenv(\"OPENAI_API_KEY\")\n\n# Create an instance of the OpenAIChat class\nmodel = OpenAIChat(\n openai_api_key=OPENAI_API_KEY,\n model_name=\"gpt-4\",\n temperature=0.1\n)\n\n# Initialize the agent\nagent = Agent(\n agent_name=\"Financial-Analysis-Agent\",\n system_prompt=\"You are a financial analysis AI assistant. Your task is to analyze stock data and provide insights.\",\n llm=model,\n max_loops=1,\n dashboard=False,\n verbose=True\n)\n\ndef get_stock_data(symbol, from_date, to_date):\n endpoint = f\"{POLYGON_BASE_URL}/aggs/ticker/{symbol}/range/1/day/{from_date}/{to_date}\"\n params = {\n 'apiKey': POLYGON_API_KEY,\n 'adjusted': 'true'\n }\n response = requests.get(endpoint, params=params)\n data = response.json()\n return pd.DataFrame(data['results'])\n\n# Example usage\nsymbol = \"AAPL\"\nfrom_date = \"2023-01-01\"\nto_date = \"2023-12-31\"\n\nstock_data = get_stock_data(symbol, from_date, to_date)\n\nanalysis_request = f\"\"\"\nAnalyze the following stock data for {symbol} from {from_date} to {to_date}:\n\n{stock_data.to_string()}\n\nProvide insights on the stock's performance, including trends, volatility, and any notable events.\n\"\"\"\n\nanalysis = agent.run(analysis_request)\nprint(analysis)\n
In this example, we've created an AI agent that can fetch stock data from Polygon.io and perform an analysis based on that data. The agent uses the GPT-4 model to generate insights about the stock's performance.
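The `get_stock_data` helper above assumes every response carries a `results` key, so a rate-limit or bad-key response fails with an opaque `KeyError`. A minimal, offline sketch of a more defensive conversion step (field names follow Polygon's documented v2 aggregates schema; the sample payload below is fabricated for illustration):

```python
import pandas as pd

def results_to_dataframe(payload: dict) -> pd.DataFrame:
    # Reject error payloads up front instead of failing with a KeyError later.
    if payload.get("status") not in ("OK", "DELAYED"):
        raise ValueError(f"Polygon request failed: {payload.get('error', 'unknown error')}")
    df = pd.DataFrame(payload.get("results") or [])
    if not df.empty:
        # Polygon's 't' column holds epoch-millisecond bar timestamps.
        df["date"] = pd.to_datetime(df["t"], unit="ms")
    return df

# Fabricated payload shaped like a Polygon v2 aggregates response
ok = {"status": "OK", "results": [{"t": 1672704000000, "c": 125.07, "v": 112117500}]}
print(results_to_dataframe(ok)["date"].iloc[0].date())  # 2023-01-03
```

Swapping this in for the bare `pd.DataFrame(data['results'])` call keeps the agent pipeline from crashing on transient API errors.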
"},{"location":"guides/financial_data_api/#alpha-vantage","title":"Alpha Vantage","text":"Next, let's create an agent that can work with Alpha Vantage data to perform fundamental analysis.
import os\nfrom swarms import Agent\nfrom swarms.models import OpenAIChat\nfrom dotenv import load_dotenv\nimport requests\n\nload_dotenv()\n\n# Alpha Vantage API setup\nALPHA_VANTAGE_API_KEY = os.getenv(\"ALPHA_VANTAGE_API_KEY\")\nALPHA_VANTAGE_BASE_URL = \"https://www.alphavantage.co/query\"\n\n# OpenAI API setup\nOPENAI_API_KEY = os.getenv(\"OPENAI_API_KEY\")\n\n# Create an instance of the OpenAIChat class\nmodel = OpenAIChat(\n openai_api_key=OPENAI_API_KEY,\n model_name=\"gpt-4\",\n temperature=0.1\n)\n\n# Initialize the agent\nagent = Agent(\n agent_name=\"Fundamental-Analysis-Agent\",\n system_prompt=\"You are a financial analysis AI assistant specializing in fundamental analysis. Your task is to analyze company financials and provide insights.\",\n llm=model,\n max_loops=1,\n dashboard=False,\n verbose=True\n)\n\ndef get_income_statement(symbol):\n params = {\n 'function': 'INCOME_STATEMENT',\n 'symbol': symbol,\n 'apikey': ALPHA_VANTAGE_API_KEY\n }\n response = requests.get(ALPHA_VANTAGE_BASE_URL, params=params)\n return response.json()\n\n# Example usage\nsymbol = \"MSFT\"\n\nincome_statement = get_income_statement(symbol)\n\nanalysis_request = f\"\"\"\nAnalyze the following income statement data for {symbol}:\n\n{income_statement}\n\nProvide insights on the company's financial health, profitability trends, and any notable observations.\n\"\"\"\n\nanalysis = agent.run(analysis_request)\nprint(analysis)\n
This example demonstrates an AI agent that can fetch income statement data from Alpha Vantage and perform a fundamental analysis of a company's financials.
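Raw `INCOME_STATEMENT` payloads are verbose, and interpolating the whole JSON blob into the prompt wastes context. A sketch of a trimming step that keeps only headline fields (field names follow Alpha Vantage's documented response shape — `annualReports`, `fiscalDateEnding`, etc.; the sample figures are illustrative, not fetched):

```python
def summarize_income_statement(payload, keys=("fiscalDateEnding", "totalRevenue", "grossProfit", "netIncome")):
    # Keep only a few headline fields per annual report so the prompt stays compact.
    return [{k: report.get(k) for k in keys} for report in payload.get("annualReports", [])]

# Fabricated payload shaped like Alpha Vantage's INCOME_STATEMENT response
sample = {
    "symbol": "MSFT",
    "annualReports": [
        {"fiscalDateEnding": "2023-06-30", "totalRevenue": "211915000000",
         "grossProfit": "146052000000", "netIncome": "72361000000",
         "ebitda": "102384000000", "operatingExpenses": "57529000000"},
    ],
}
print(summarize_income_statement(sample))
```

In the example above you would then interpolate `summarize_income_statement(income_statement)` into `analysis_request` instead of the full payload.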
"},{"location":"guides/financial_data_api/#yahoo-finance","title":"Yahoo Finance","text":"Now, let's create an agent that can work with Yahoo Finance data to perform technical analysis.
import os\nfrom swarms import Agent\nfrom swarms.models import OpenAIChat\nfrom dotenv import load_dotenv\nimport yfinance as yf\nimport pandas as pd\n\nload_dotenv()\n\n# OpenAI API setup\nOPENAI_API_KEY = os.getenv(\"OPENAI_API_KEY\")\n\n# Create an instance of the OpenAIChat class\nmodel = OpenAIChat(\n openai_api_key=OPENAI_API_KEY,\n model_name=\"gpt-4\",\n temperature=0.1\n)\n\n# Initialize the agent\nagent = Agent(\n agent_name=\"Technical-Analysis-Agent\",\n system_prompt=\"You are a financial analysis AI assistant specializing in technical analysis. Your task is to analyze stock price data and provide insights on trends and potential trading signals.\",\n llm=model,\n max_loops=1,\n dashboard=False,\n verbose=True\n)\n\ndef get_stock_data(symbol, start_date, end_date):\n stock = yf.Ticker(symbol)\n data = stock.history(start=start_date, end=end_date)\n return data\n\n# Example usage\nsymbol = \"GOOGL\"\nstart_date = \"2023-01-01\"\nend_date = \"2023-12-31\"\n\nstock_data = get_stock_data(symbol, start_date, end_date)\n\n# Calculate some technical indicators\nstock_data['SMA_20'] = stock_data['Close'].rolling(window=20).mean()\nstock_data['SMA_50'] = stock_data['Close'].rolling(window=50).mean()\n\nanalysis_request = f\"\"\"\nAnalyze the following stock price data and technical indicators for {symbol} from {start_date} to {end_date}:\n\n{stock_data.tail(30).to_string()}\n\nProvide insights on the stock's price trends, potential support and resistance levels, and any notable trading signals based on the moving averages.\n\"\"\"\n\nanalysis = agent.run(analysis_request)\nprint(analysis)\n
This example shows an AI agent that can fetch stock price data from Yahoo Finance, calculate some basic technical indicators, and perform a technical analysis.
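Moving averages are only one family of indicators. If we wanted the prompt to carry a momentum signal as well, a simple-mean RSI can be computed from the same `Close` column — a sketch only (Wilder's original RSI uses exponential smoothing rather than the plain rolling mean used here):

```python
import pandas as pd

def rsi(close: pd.Series, window: int = 14) -> pd.Series:
    # Relative Strength Index using simple rolling means of gains and losses.
    delta = close.diff()
    avg_gain = delta.clip(lower=0).rolling(window).mean()
    avg_loss = (-delta.clip(upper=0)).rolling(window).mean()
    rs = avg_gain / avg_loss
    return 100 - 100 / (1 + rs)

# 30 straight up-days: no losses in the window, so RSI saturates at 100
prices = pd.Series(range(100, 130), dtype=float)
print(rsi(prices).iloc[-1])  # 100.0
```

The resulting column can be appended to `stock_data` alongside `SMA_20` and `SMA_50` before building the analysis request.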
"},{"location":"guides/financial_data_api/#iex-cloud","title":"IEX Cloud","text":"Let's create an agent that can work with IEX Cloud data to analyze company news sentiment.
import os\nfrom swarms import Agent\nfrom swarms.models import OpenAIChat\nfrom dotenv import load_dotenv\nimport requests\n\nload_dotenv()\n\n# IEX Cloud API setup\nIEX_CLOUD_API_KEY = os.getenv(\"IEX_CLOUD_API_KEY\")\nIEX_CLOUD_BASE_URL = \"https://cloud.iexapis.com/stable\"\n\n# OpenAI API setup\nOPENAI_API_KEY = os.getenv(\"OPENAI_API_KEY\")\n\n# Create an instance of the OpenAIChat class\nmodel = OpenAIChat(\n openai_api_key=OPENAI_API_KEY,\n model_name=\"gpt-4\",\n temperature=0.1\n)\n\n# Initialize the agent\nagent = Agent(\n agent_name=\"News-Sentiment-Analysis-Agent\",\n system_prompt=\"You are a financial analysis AI assistant specializing in news sentiment analysis. Your task is to analyze company news and provide insights on the overall sentiment and potential impact on the stock.\",\n llm=model,\n max_loops=1,\n dashboard=False,\n verbose=True\n)\n\ndef get_company_news(symbol, last_n):\n endpoint = f\"{IEX_CLOUD_BASE_URL}/stock/{symbol}/news/last/{last_n}\"\n params = {'token': IEX_CLOUD_API_KEY}\n response = requests.get(endpoint, params=params)\n return response.json()\n\n# Example usage\nsymbol = \"TSLA\"\nlast_n = 10\n\nnews_data = get_company_news(symbol, last_n)\n\nanalysis_request = f\"\"\"\nAnalyze the following recent news articles for {symbol}:\n\n{news_data}\n\nProvide insights on the overall sentiment of the news, potential impact on the stock price, and any notable trends or events mentioned.\n\"\"\"\n\nanalysis = agent.run(analysis_request)\nprint(analysis)\n
This example demonstrates an AI agent that can fetch recent news data from IEX Cloud and perform a sentiment analysis on the company news.
"},{"location":"guides/financial_data_api/#finnhub","title":"Finnhub","text":"Finally, let's create an agent that can work with Finnhub data to analyze earnings estimates and recommendations.
import os\nfrom swarms import Agent\nfrom swarms.models import OpenAIChat\nfrom dotenv import load_dotenv\nimport finnhub\n\nload_dotenv()\n\n# Finnhub API setup\nFINNHUB_API_KEY = os.getenv(\"FINNHUB_API_KEY\")\nfinnhub_client = finnhub.Client(api_key=FINNHUB_API_KEY)\n\n# OpenAI API setup\nOPENAI_API_KEY = os.getenv(\"OPENAI_API_KEY\")\n\n# Create an instance of the OpenAIChat class\nmodel = OpenAIChat(\n openai_api_key=OPENAI_API_KEY,\n model_name=\"gpt-4\",\n temperature=0.1\n)\n\n# Initialize the agent\nagent = Agent(\n agent_name=\"Earnings-Analysis-Agent\",\n system_prompt=\"You are a financial analysis AI assistant specializing in earnings analysis. Your task is to analyze earnings estimates and recommendations to provide insights on a company's financial outlook.\",\n llm=model,\n max_loops=1,\n dashboard=False,\n verbose=True\n)\n\ndef get_earnings_estimates(symbol):\n # finnhub-python's earnings_calendar takes _from/to keyword arguments\n return finnhub_client.earnings_calendar(_from=\"2023-01-01\", to=\"2023-12-31\", symbol=symbol)\n\ndef get_recommendations(symbol):\n return finnhub_client.recommendation_trends(symbol)\n\n# Example usage\nsymbol = \"NVDA\"\n\nearnings_estimates = get_earnings_estimates(symbol)\nrecommendations = get_recommendations(symbol)\n\nanalysis_request = f\"\"\"\nAnalyze the following earnings estimates and recommendations for {symbol}:\n\nEarnings Estimates:\n{earnings_estimates}\n\nRecommendations:\n{recommendations}\n\nProvide insights on the company's expected financial performance, analyst sentiment, and any notable trends in the recommendations.\n\"\"\"\n\nanalysis = agent.run(analysis_request)\nprint(analysis)\n
This example shows an AI agent that can fetch earnings estimates and analyst recommendations from Finnhub and perform an analysis on the company's financial outlook.
"},{"location":"guides/financial_data_api/#advanced-analysis-techniques","title":"Advanced Analysis Techniques","text":"To further enhance the capabilities of our AI agents, we can implement more advanced analysis techniques:
Multi-source analysis: Combine data from multiple providers to get a more comprehensive view of a stock or market.
Time series forecasting: Implement machine learning models for price prediction.
Sentiment analysis of social media: Incorporate data from social media platforms to gauge market sentiment.
Portfolio optimization: Use AI agents to suggest optimal portfolio allocations based on risk tolerance and investment goals.
Anomaly detection: Implement algorithms to detect unusual patterns or events in financial data.
Here's an example of how we might implement a multi-source analysis:
import os\nfrom swarms import Agent\nfrom swarms.models import OpenAIChat\nfrom dotenv import load_dotenv\nimport yfinance as yf\nimport requests\nimport pandas as pd\n\nload_dotenv()\n\n# API setup\nPOLYGON_API_KEY = os.getenv(\"POLYGON_API_KEY\")\nALPHA_VANTAGE_API_KEY = os.getenv(\"ALPHA_VANTAGE_API_KEY\")\nOPENAI_API_KEY = os.getenv(\"OPENAI_API_KEY\")\n\n# Create an instance of the OpenAIChat class\nmodel = OpenAIChat(\n openai_api_key=OPENAI_API_KEY,\n model_name=\"gpt-4\",\n temperature=0.1\n)\n\n# Initialize the agent\nagent = Agent(\n agent_name=\"Multi-Source-Analysis-Agent\",\n system_prompt=\"You are a financial analysis AI assistant capable of analyzing data from multiple sources. Your task is to provide comprehensive insights on a stock based on various data points.\",\n llm=model,\n max_loops=1,\n dashboard=False,\n verbose=True\n)\n\ndef get_stock_data_yf(symbol, start_date, end_date):\n stock = yf.Ticker(symbol)\n return stock.history(start=start_date, end=end_date)\n\ndef get_stock_data_polygon(symbol, from_date, to_date):\n endpoint = f\"https://api.polygon.io/v2/aggs/ticker/{symbol}/range/1/day/{from_date}/{to_date}\"\n params = {'apiKey': POLYGON_API_KEY, 'adjusted': 'true'}\n response = requests.get(endpoint, params=params)\n data = response.json()\n return pd.DataFrame(data['results'])\n\ndef get_company_overview_av(symbol):\n params = {\n 'function': 'OVERVIEW',\n 'symbol': symbol,\n 'apikey': ALPHA_VANTAGE_API_KEY\n }\n response = requests.get(\"https://www.alphavantage.co/query\", params=params)\n return response.json()\n\n# Example usage\nsymbol = \"AAPL\"\nstart_date = \"2023-01-01\"\nend_date = \"2023-12-31\"\n\nyf_data = get_stock_data_yf(symbol, start_date, end_date)\npolygon_data = get_stock_data_polygon(symbol, start_date, end_date)\nav_overview = get_company_overview_av(symbol)\n\nanalysis_request = f\"\"\"\nAnalyze the following data for {symbol} from {start_date} to {end_date}:\n\nYahoo Finance 
Data:\n{yf_data.tail().to_string()}\n\nPolygon.io Data:\n{polygon_data.tail().to_string()}\n\nAlpha Vantage Company Overview:\n{av_overview}\n\nProvide a comprehensive analysis of the stock, including:\n1. Price trends and volatility\n2. Trading volume analysis\n3. Fundamental analysis based on the company overview\n4. Any discrepancies between data sources and potential reasons\n5. Overall outlook and potential risks/opportunities\n\"\"\"\n\nanalysis = agent.run(analysis_request)\nprint(analysis)\n
This multi-source analysis example combines data from Yahoo Finance, Polygon.io, and Alpha Vantage to provide a more comprehensive view of a stock. The AI agent can then analyze this diverse set of data to provide deeper insights.
Now, let's explore some additional advanced analysis techniques:
"},{"location":"guides/financial_data_api/#time-series-forecasting","title":"Time Series Forecasting","text":"We can implement a simple time series forecasting model using the Prophet library and integrate it with our AI agent:
import os\nfrom swarms import Agent\nfrom swarms.models import OpenAIChat\nfrom dotenv import load_dotenv\nimport yfinance as yf\nimport pandas as pd\nfrom prophet import Prophet\nimport matplotlib.pyplot as plt\n\nload_dotenv()\n\nOPENAI_API_KEY = os.getenv(\"OPENAI_API_KEY\")\n\nmodel = OpenAIChat(\n openai_api_key=OPENAI_API_KEY,\n model_name=\"gpt-4\",\n temperature=0.1\n)\n\nagent = Agent(\n agent_name=\"Time-Series-Forecast-Agent\",\n system_prompt=\"You are a financial analysis AI assistant specializing in time series forecasting. Your task is to analyze stock price predictions and provide insights.\",\n llm=model,\n max_loops=1,\n dashboard=False,\n verbose=True\n)\n\ndef get_stock_data(symbol, start_date, end_date):\n stock = yf.Ticker(symbol)\n data = stock.history(start=start_date, end=end_date)\n return data\n\ndef forecast_stock_price(data, periods=30):\n df = data.reset_index()[['Date', 'Close']]\n df.columns = ['ds', 'y']\n\n model = Prophet()\n model.fit(df)\n\n future = model.make_future_dataframe(periods=periods)\n forecast = model.predict(future)\n\n fig = model.plot(forecast)\n plt.savefig('forecast_plot.png')\n plt.close()\n\n return forecast\n\n# Example usage\nsymbol = \"MSFT\"\nstart_date = \"2020-01-01\"\nend_date = \"2023-12-31\"\n\nstock_data = get_stock_data(symbol, start_date, end_date)\nforecast = forecast_stock_price(stock_data)\n\nanalysis_request = f\"\"\"\nAnalyze the following time series forecast for {symbol}:\n\nForecast Data:\n{forecast.tail(30).to_string()}\n\nThe forecast plot has been saved as 'forecast_plot.png'.\n\nProvide insights on:\n1. The predicted trend for the stock price\n2. Any seasonal patterns observed\n3. Potential factors that might influence the forecast\n4. Limitations of this forecasting method\n5. Recommendations for investors based on this forecast\n\"\"\"\n\nanalysis = agent.run(analysis_request)\nprint(analysis)\n
This example demonstrates how to integrate a time series forecasting model (Prophet) with our AI agent. The agent can then provide insights based on the forecasted data.
"},{"location":"guides/financial_data_api/#sentiment-analysis-of-social-media","title":"Sentiment Analysis of Social Media","text":"We can use a pre-trained sentiment analysis model to analyze tweets about a company and integrate this with our AI agent:
import os\nfrom swarms import Agent\nfrom swarms.models import OpenAIChat\nfrom dotenv import load_dotenv\nimport tweepy\nfrom textblob import TextBlob\nimport pandas as pd\n\nload_dotenv()\n\n# Twitter API setup\nTWITTER_API_KEY = os.getenv(\"TWITTER_API_KEY\")\nTWITTER_API_SECRET = os.getenv(\"TWITTER_API_SECRET\")\nTWITTER_ACCESS_TOKEN = os.getenv(\"TWITTER_ACCESS_TOKEN\")\nTWITTER_ACCESS_TOKEN_SECRET = os.getenv(\"TWITTER_ACCESS_TOKEN_SECRET\")\n\nauth = tweepy.OAuthHandler(TWITTER_API_KEY, TWITTER_API_SECRET)\nauth.set_access_token(TWITTER_ACCESS_TOKEN, TWITTER_ACCESS_TOKEN_SECRET)\napi = tweepy.API(auth)\n\n# OpenAI setup\nOPENAI_API_KEY = os.getenv(\"OPENAI_API_KEY\")\n\nmodel = OpenAIChat(\n openai_api_key=OPENAI_API_KEY,\n model_name=\"gpt-4\",\n temperature=0.1\n)\n\nagent = Agent(\n agent_name=\"Social-Media-Sentiment-Agent\",\n system_prompt=\"You are a financial analysis AI assistant specializing in social media sentiment analysis. Your task is to analyze sentiment data from tweets and provide insights on market perception.\",\n llm=model,\n max_loops=1,\n dashboard=False,\n verbose=True\n)\n\ndef get_tweets(query, count=100):\n tweets = api.search_tweets(q=query, count=count, tweet_mode=\"extended\")\n return [tweet.full_text for tweet in tweets]\n\ndef analyze_sentiment(tweets):\n sentiments = [TextBlob(tweet).sentiment.polarity for tweet in tweets]\n return pd.DataFrame({'tweet': tweets, 'sentiment': sentiments})\n\n# Example usage\nsymbol = \"TSLA\"\nquery = f\"${symbol} stock\"\n\ntweets = get_tweets(query)\nsentiment_data = analyze_sentiment(tweets)\n\nanalysis_request = f\"\"\"\nAnalyze the following sentiment data for tweets about {symbol} stock:\n\nSentiment Summary:\nPositive tweets: {sum(sentiment_data['sentiment'] > 0)}\nNegative tweets: {sum(sentiment_data['sentiment'] < 0)}\nNeutral tweets: {sum(sentiment_data['sentiment'] == 0)}\n\nAverage sentiment: {sentiment_data['sentiment'].mean()}\n\nSample tweets and their 
sentiments:\n{sentiment_data.head(10).to_string()}\n\nProvide insights on:\n1. The overall sentiment towards the stock\n2. Any notable trends or patterns in the sentiment\n3. Potential reasons for the observed sentiment\n4. How this sentiment might impact the stock price\n5. Limitations of this sentiment analysis method\n\"\"\"\n\nanalysis = agent.run(analysis_request)\nprint(analysis)\n
This example shows how to perform sentiment analysis on tweets about a stock and integrate the results with our AI agent for further analysis.
"},{"location":"guides/financial_data_api/#portfolio-optimization","title":"Portfolio Optimization","text":"We can use the PyPortfolioOpt library to perform portfolio optimization and have our AI agent provide insights:
import os\nfrom swarms import Agent\nfrom swarms.models import OpenAIChat\nfrom dotenv import load_dotenv\nimport yfinance as yf\nimport pandas as pd\nimport numpy as np\nfrom pypfopt import EfficientFrontier\nfrom pypfopt import risk_models\nfrom pypfopt import expected_returns\n\nload_dotenv()\n\nOPENAI_API_KEY = os.getenv(\"OPENAI_API_KEY\")\n\nmodel = OpenAIChat(\n openai_api_key=OPENAI_API_KEY,\n model_name=\"gpt-4\",\n temperature=0.1\n)\n\nagent = Agent(\n agent_name=\"Portfolio-Optimization-Agent\",\n system_prompt=\"You are a financial analysis AI assistant specializing in portfolio optimization. Your task is to analyze optimized portfolio allocations and provide investment advice.\",\n llm=model,\n max_loops=1,\n dashboard=False,\n verbose=True\n)\n\ndef get_stock_data(symbols, start_date, end_date):\n data = yf.download(symbols, start=start_date, end=end_date)['Adj Close']\n return data\n\ndef optimize_portfolio(data):\n mu = expected_returns.mean_historical_return(data)\n S = risk_models.sample_cov(data)\n\n ef = EfficientFrontier(mu, S)\n weights = ef.max_sharpe()\n cleaned_weights = ef.clean_weights()\n\n return cleaned_weights\n\n# Example usage\nsymbols = [\"AAPL\", \"GOOGL\", \"MSFT\", \"AMZN\", \"FB\"]\nstart_date = \"2018-01-01\"\nend_date = \"2023-12-31\"\n\nstock_data = get_stock_data(symbols, start_date, end_date)\noptimized_weights = optimize_portfolio(stock_data)\n\nanalysis_request = f\"\"\"\nAnalyze the following optimized portfolio allocation:\n\n{pd.Series(optimized_weights).to_string()}\n\nThe optimization aimed to maximize the Sharpe ratio based on historical data from {start_date} to {end_date}.\n\nProvide insights on:\n1. The recommended allocation and its potential benefits\n2. Any notable concentrations or diversification in the portfolio\n3. Potential risks associated with this allocation\n4. How this portfolio might perform in different market conditions\n5. Recommendations for an investor considering this allocation\n6. 
Limitations of this optimization method\n\"\"\"\n\nanalysis = agent.run(analysis_request)\nprint(analysis)\n
This example demonstrates how to perform portfolio optimization using the PyPortfolioOpt library and have our AI agent provide insights on the optimized allocation.
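Of the advanced techniques listed earlier, anomaly detection is the one without a worked example yet. A minimal rolling z-score detector over daily returns might look like the following sketch (synthetic data, no API calls; the three-sigma threshold is a common but arbitrary default):

```python
import pandas as pd

def flag_anomalies(close: pd.Series, window: int = 20, threshold: float = 3.0) -> pd.Series:
    # Flag days whose daily return sits more than `threshold` rolling standard
    # deviations from the rolling mean return -- a basic z-score detector.
    returns = close.pct_change()
    z = (returns - returns.rolling(window).mean()) / returns.rolling(window).std()
    return z.abs() > threshold

# Synthetic series: small alternating moves, then a single ~20% crash on day 40
prices = pd.Series([100.0 + (i % 2) for i in range(40)] + [80.0 + (i % 2) for i in range(10)])
flags = flag_anomalies(prices)
print(int(flags.idxmax()))  # 40 -- the crash day is the only flagged index
```

Flagged dates can be passed to an agent with a prompt asking it to explain what news or events might account for each outlier.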
"},{"location":"guides/financial_data_api/#best-practices-and-considerations","title":"Best Practices and Considerations","text":"When using AI agents for financial data analysis, consider the following best practices:
Data quality: Ensure that the data you're feeding into the agents is accurate and up-to-date.
Model limitations: Be aware of the limitations of both the financial models and the AI models being used.
Regulatory compliance: Ensure that your use of AI in financial analysis complies with relevant regulations.
Ethical considerations: Be mindful of potential biases in AI models and strive for fair and ethical analysis.
Continuous monitoring: Regularly evaluate the performance of your AI agents and update them as needed.
Human oversight: While AI agents can provide valuable insights, human judgment should always play a role in financial decision-making.
Privacy and security: Implement robust security measures to protect sensitive financial data.
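The first best practice above — data quality — can be at least partially automated. A sketch of a pre-flight check to run before a DataFrame reaches an agent (the `Close` column name matches the earlier examples; the checks themselves are a minimal starting set, not exhaustive):

```python
import pandas as pd

def basic_quality_report(df: pd.DataFrame, price_col: str = "Close") -> dict:
    # Cheap sanity checks: missing prices, impossible values, duplicate dates.
    return {
        "rows": len(df),
        "missing": int(df[price_col].isna().sum()),
        "non_positive": int((df[price_col] <= 0).sum()),
        "duplicate_index": int(df.index.duplicated().sum()),
    }

df = pd.DataFrame(
    {"Close": [100.0, None, -3.0, 101.0]},
    index=pd.to_datetime(["2023-01-02", "2023-01-03", "2023-01-03", "2023-01-04"]),
)
print(basic_quality_report(df))  # {'rows': 4, 'missing': 1, 'non_positive': 1, 'duplicate_index': 1}
```

A pipeline can refuse to invoke the agent, or annotate the prompt, when any of the counts are non-zero.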
The integration of AI agents with financial data APIs opens up exciting possibilities for advanced financial analysis. By leveraging the power of the Swarms framework and connecting it with various financial data providers, analysts and quants can gain deeper insights, automate complex analyses, and potentially make more informed investment decisions.
However, it's crucial to remember that while AI agents can process vast amounts of data and identify patterns that humans might miss, they should be used as tools to augment human decision-making rather than replace it entirely. The financial markets are complex systems influenced by numerous factors, many of which may not be captured in historical data or current models.
As the field of AI in finance continues to evolve, we can expect even more sophisticated analysis techniques and integrations. Staying updated with the latest developments in both AI and financial analysis will be key to leveraging these powerful tools effectively.
"},{"location":"guides/healthcare_blog/","title":"Unlocking Efficiency and Cost Savings in Healthcare: How Swarms of LLM Agents Can Revolutionize Medical Operations and Save Millions","text":"The healthcare industry is a complex ecosystem where time and money are critical. From administrative tasks to patient care, medical professionals often struggle to keep up with mounting demands, leading to inefficiencies that cost both time and money. Swarms of Large Language Model (LLM) agents represent a groundbreaking solution to these problems. By leveraging artificial intelligence in the form of swarms, healthcare organizations can automate various tasks, optimize processes, and dramatically improve both the quality of care and operational efficiency.
In this comprehensive analysis, we will explore how swarms of LLM agents can help healthcare and medical organizations save millions of dollars and thousands of hours annually. We will provide precise estimations based on industry data, calculate potential savings, and outline various use cases. Additionally, mermaid diagrams will be provided to illustrate swarm architectures, and reference links to Swarms GitHub and other resources will be included.
"},{"location":"guides/healthcare_blog/#1-administrative-automation","title":"1. Administrative Automation","text":""},{"location":"guides/healthcare_blog/#use-case-billing-and-claims-processing","title":"Use Case: Billing and Claims Processing","text":"Administrative work is a major time drain in the healthcare sector, especially when it comes to billing and claims processing. The process is traditionally labor-intensive, requiring human staff to manually review and process claims, which often results in errors, delays, and higher operational costs.
How Swarms of LLM Agents Can Help: Swarms of LLM agents can automate the entire billing and claims process, from coding procedures to filing claims with insurance companies. These agents can read medical records, understand the diagnosis codes (ICD-10), and automatically generate billing forms. With intelligent claims management, LLM agents can also follow up with insurance companies to ensure timely payment.
Estimated Savings:
Average cost per manual claim: $25
Average claims per hospital: 10,000 per month
Swarms of LLM agents can reduce processing time by 90% and errors by 95%
Estimated annual savings per hospital:
Savings per claim: $22.5 (90% reduction)
Total annual savings: 10,000 claims/month \u00d7 12 months \u00d7 $22.5 = **$2.7 million**
graph TD;\n A[Medical Records] --> B[ICD-10 Coding Agent];\n B --> C[Billing Form Agent];\n C --> D[Claims Submission Agent];\n D --> E[Insurance Follow-up Agent];\n E --> F[Payment Processing];
"},{"location":"guides/healthcare_blog/#2-enhancing-clinical-decision-support","title":"2. Enhancing Clinical Decision Support","text":""},{"location":"guides/healthcare_blog/#use-case-diagnostic-assistance","title":"Use Case: Diagnostic Assistance","text":"Doctors are increasingly turning to AI to assist in diagnosing complex medical conditions. Swarms of LLM agents can be trained to analyze patient data, laboratory results, and medical histories to assist doctors in making more accurate diagnoses.
How Swarms of LLM Agents Can Help: A swarm of LLM agents can scan through thousands of medical records, journals, and patient histories to identify patterns or suggest rare diagnoses. These agents work collaboratively to analyze test results, compare symptoms with a vast medical knowledge base, and provide doctors with a list of probable diagnoses and recommended tests.
Estimated Savings:
Time saved per diagnosis: 2 hours per patient
Average patient cases per hospital: 5,000 per year
Time saved annually: 2 \u00d7 5,000 = 10,000 hours
Doctor's hourly rate: $150
Total annual savings: 10,000 \u00d7 $150 = **$1.5 million**
graph TD;\n A[Patient Data] --> B[Lab Results];\n A --> C[Medical History];\n B --> D[Symptom Analysis Agent];\n C --> E[Pattern Recognition Agent];\n D --> F[Diagnosis Suggestion Agent];\n E --> F;\n F --> G[Doctor];
"},{"location":"guides/healthcare_blog/#3-streamlining-patient-communication","title":"3. Streamlining Patient Communication","text":""},{"location":"guides/healthcare_blog/#use-case-patient-follow-ups-and-reminders","title":"Use Case: Patient Follow-ups and Reminders","text":"Timely communication with patients is critical for maintaining healthcare quality, but it can be extremely time-consuming for administrative staff. Missed appointments and delayed follow-ups lead to poor patient outcomes and lost revenue.
How Swarms of LLM Agents Can Help: LLM agents can handle patient follow-ups by sending reminders for appointments, check-ups, and medication refills. Additionally, these agents can answer common patient queries, thereby reducing the workload for human staff. These agents can be connected to Electronic Health Record (EHR) systems to monitor patient data and trigger reminders based on predefined criteria.
Estimated Savings:
Average cost per patient follow-up: $5
Number of follow-ups: 20,000 annually per hospital
Swarm efficiency: 90% reduction in manual effort
Total annual savings: 20,000 \u00d7 $4.5 = **$90,000**
graph TD;\n A[Patient Data from EHR] --> B[Appointment Reminder Agent];\n A --> C[Medication Reminder Agent];\n B --> D[Automated Text/Email];\n C --> D;\n D --> E[Patient];
"},{"location":"guides/healthcare_blog/#4-optimizing-inventory-management","title":"4. Optimizing Inventory Management","text":""},{"location":"guides/healthcare_blog/#use-case-pharmaceutical-stock-management","title":"Use Case: Pharmaceutical Stock Management","text":"Hospitals often struggle with managing pharmaceutical inventory efficiently. Overstocking leads to wasted resources, while understocking can be a critical problem for patient care.
How Swarms of LLM Agents Can Help: A swarm of LLM agents can predict pharmaceutical needs by analyzing patient data, historical inventory usage, and supplier delivery times. These agents can dynamically adjust stock levels, automatically place orders, and ensure that hospitals have the right medications at the right time.
Estimated Savings:
Annual waste due to overstocking: $500,000 per hospital
Swarm efficiency: 80% reduction in overstocking
Total annual savings: $500,000 \u00d7 0.8 = **$400,000**
graph TD;\n A[Patient Admission Data] --> B[Inventory Prediction Agent];\n B --> C[Stock Adjustment Agent];\n C --> D[Supplier Ordering Agent];\n D --> E[Pharmacy];
"},{"location":"guides/healthcare_blog/#5-improving-clinical-research","title":"5. Improving Clinical Research","text":""},{"location":"guides/healthcare_blog/#use-case-literature-review-and-data-analysis","title":"Use Case: Literature Review and Data Analysis","text":"Medical researchers spend a significant amount of time reviewing literature and analyzing clinical trial data. Swarms of LLM agents can assist by rapidly scanning through research papers, extracting relevant information, and even suggesting areas for further investigation.
How Swarms of LLM Agents Can Help: These agents can be trained to perform literature reviews, extract relevant data, and cross-reference findings with ongoing clinical trials. LLM agents can also simulate clinical trial results by analyzing historical data, offering valuable insights before actual trials commence.
Estimated Savings:
Average time spent on literature review per paper: 5 hours
Number of papers reviewed annually: 1,000
Time saved: 80% reduction in review time
Total time saved: 1,000 \u00d7 5 \u00d7 0.8 = 4,000 hours
Researcher's hourly rate: $100
Total annual savings: 4,000 \u00d7 $100 = **$400,000**
graph TD;\n A[Research Papers] --> B[Data Extraction Agent];\n B --> C[Cross-reference Agent];\n C --> D[Simulation Agent];\n D --> E[Researcher];
"},{"location":"guides/healthcare_blog/#6-automating-medical-record-keeping","title":"6. Automating Medical Record Keeping","text":""},{"location":"guides/healthcare_blog/#use-case-ehr-management-and-documentation","title":"Use Case: EHR Management and Documentation","text":"Healthcare providers spend a significant amount of time inputting and managing Electronic Health Records (EHR). Manual entry often results in errors and takes away from the time spent with patients.
How Swarms of LLM Agents Can Help: Swarms of LLM agents can automate the documentation process by transcribing doctor-patient interactions, updating EHRs in real-time, and even detecting errors in the documentation. These agents can integrate with voice recognition systems to create seamless workflows, freeing up more time for healthcare providers to focus on patient care.
Estimated Savings:
Average time spent on EHR per patient: 20 minutes
Number of patients annually: 30,000
Time saved: 80% reduction in manual effort
Total time saved: 30,000 \u00d7 20 minutes \u00d7 0.8 = 480,000 minutes or 8,000 hours
Provider's hourly rate: $150
Total annual savings: 8,000 \u00d7 $150 = **$1.2 million**
graph TD;\n A[Doctor-Patient Interaction] --> B[Voice-to-Text Agent];\n B --> C[EHR Update Agent];\n C --> D[Error Detection Agent];\n D --> E[EHR System];
"},{"location":"guides/healthcare_blog/#7-reducing-diagnostic-errors","title":"7. Reducing Diagnostic Errors","text":""},{"location":"guides/healthcare_blog/#use-case-medical-imaging-analysis","title":"Use Case: Medical Imaging Analysis","text":"Medical imaging, such as MRI and CT scans, requires expert interpretation, which can be both time-consuming and prone to errors. Misdiagnoses or delays in interpretation can lead to prolonged treatment times and increased costs.
How Swarms of LLM Agents Can Help: Swarms of LLM agents trained in computer vision can analyze medical images more accurately and faster than human radiologists. These agents can compare current scans with historical data, detect anomalies, and provide a diagnosis within minutes. Additionally, the swarm can escalate complex cases to human experts when necessary.
Estimated Savings:
Time saved per scan: 30 minutes
Number of scans annually: 10,000
Time saved: 10,000 \u00d7 30 minutes = 5,000 hours
Radiologist's hourly rate: $200
Total annual savings: 5,000 \u00d7 $200 = **$1 million**
"},{"location":"guides/healthcare_blog/#medical-imaging-swarm","title":"Medical Imaging Swarm","text":"graph TD;\n A[Medical Image] --> B[Anomaly Detection Agent];\n B --> C[Comparison with Historical Data Agent];\n C --> D[Diagnosis Suggestion Agent];\n D --> E[Radiologist Review];
"},{"location":"guides/healthcare_blog/#conclusion-the-financial-and-time-saving-impact-of-llm-swarms-in-healthcare","title":"Conclusion: The Financial and Time-Saving Impact of LLM Swarms in Healthcare","text":"In this comprehensive analysis, we explored how swarms of LLM agents can revolutionize the healthcare and medical industries by automating complex, labor-intensive tasks that currently drain both time and resources. From billing and claims processing to diagnostic assistance, patient communication, and medical imaging analysis, these intelligent agents can work collaboratively to significantly improve efficiency while reducing costs. Through our detailed calculations, it is evident that healthcare organizations could save upwards of $7.29 million annually, along with thousands of hours in administrative and clinical work.
Swarms of LLM agents not only promise financial savings but also lead to improved patient outcomes, streamlined research, and enhanced operational workflows. By adopting these agentic solutions, healthcare organizations can focus more on their mission of providing high-quality care while ensuring their systems run seamlessly and efficiently.
To explore more about how swarms of agents can be tailored to your healthcare operations, you can visit the Swarms GitHub for code and documentation, explore our Swarms Website for further insights, and if you're ready to implement these solutions in your organization, feel free to book a call for a personalized consultation.
The future of healthcare is agentic, and by embracing swarms of LLM agents, your organization can unlock unprecedented levels of productivity and savings.
Swarms of LLM agents offer a powerful solution for medical and healthcare organizations looking to reduce costs and save time. Through automation, these agents can optimize everything from administrative tasks to clinical decision-making and inventory management. Based on the estimates provided, healthcare organizations can potentially save millions of dollars annually, all while improving the quality of care provided to patients.
The table below summarizes the estimated savings for each use case:
Use Case Estimated Annual Savings Billing and Claims Processing $2.7 million Diagnostic Assistance $1.5 million Patient Follow-ups and Reminders $90,000 Pharmaceutical Stock Management $400,000 Clinical Research $400,000 EHR Management and Documentation $1.2 million Medical Imaging Analysis $1 million Total Estimated Savings $7.29 million"},{"location":"guides/healthcare_blog/#references","title":"References","text":"Swarms GitHub
Swarms Website
book a call
Swarms Discord: https://discord.gg/jM3Z6M9uMq
Swarms Twitter: https://x.com/swarms_corp
Swarms Spotify: https://open.spotify.com/show/2HLiswhmUaMdjHC8AUHcCF?si=c831ef10c5ef4994
Swarms Blog: https://medium.com/@kyeg
Swarms Website: https://swarms.xyz
By adopting swarms of LLM agents, healthcare organizations can streamline operations, reduce inefficiencies, and focus on what truly matters\u2014delivering top-notch patient care.
"},{"location":"guides/pricing/","title":"Comparing LLM Provider Pricing: A Guide for Enterprises","text":"Large language models (LLMs) have become a cornerstone of innovation for enterprises across various industries.
As executives contemplate which model to integrate into their operations, understanding the intricacies of LLM provider pricing is crucial.
This comprehensive guide delves into the tactical business considerations, unit economics, profit margins, and ROI calculations that will empower decision-makers to deploy the right AI solution for their organization.
"},{"location":"guides/pricing/#table-of-contents","title":"Table of Contents","text":"The pricing of Large Language Models (LLMs) is a complex landscape that can significantly impact an enterprise's bottom line. As we dive into this topic, it's crucial to understand the various pricing models employed by LLM providers and how they align with different business needs.
"},{"location":"guides/pricing/#pay-per-token-model","title":"Pay-per-Token Model","text":"The most common pricing structure in the LLM market is the pay-per-token model. In this system, businesses are charged based on the number of tokens processed by the model. A token can be as short as one character or as long as one word, depending on the language and the specific tokenization method used by the model.
Advantages: - Scalability: Costs scale directly with usage, allowing for flexibility as demand fluctuates. - Transparency: Easy to track and attribute costs to specific projects or departments.
Disadvantages: - Unpredictability: Costs can vary significantly based on the verbosity of inputs and outputs. - Potential for overruns: Without proper monitoring, costs can quickly escalate.
"},{"location":"guides/pricing/#subscription-based-models","title":"Subscription-Based Models","text":"Some providers offer subscription tiers that provide a set amount of compute resources or tokens for a fixed monthly or annual fee.
Advantages: - Predictable costs: Easier budgeting and financial planning. - Potential cost savings: Can be more economical for consistent, high-volume usage.
Disadvantages: - Less flexibility: May lead to underutilization or overages. - Commitment required: Often involves longer-term contracts.
"},{"location":"guides/pricing/#custom-enterprise-agreements","title":"Custom Enterprise Agreements","text":"For large-scale deployments, providers may offer custom pricing agreements tailored to the specific needs of an enterprise.
Advantages: - Optimized for specific use cases: Can include specialized support, SLAs, and pricing structures. - Potential for significant cost savings at scale.
Disadvantages: - Complexity: Negotiating and managing these agreements can be resource-intensive. - Less standardization: Difficult to compare across providers.
"},{"location":"guides/pricing/#hybrid-models","title":"Hybrid Models","text":"Some providers are beginning to offer hybrid models that combine elements of pay-per-token and subscription-based pricing.
Advantages: - Flexibility: Can adapt to varying usage patterns. - Risk mitigation: Balances the benefits of both main pricing models.
Disadvantages: - Complexity: Can be more challenging to understand and manage. - Potential for suboptimal pricing if not carefully structured.
As we progress through this guide, we'll explore how these pricing models interact with various business considerations and how executives can leverage this understanding to make informed decisions.
"},{"location":"guides/pricing/#2-understanding-unit-economics-in-llm-deployment","title":"2. Understanding Unit Economics in LLM Deployment","text":"To make informed decisions about LLM deployment, executives must have a clear grasp of the unit economics involved. This section breaks down the components that contribute to the cost per unit of LLM usage and how they impact overall business economics.
"},{"location":"guides/pricing/#defining-the-unit","title":"Defining the Unit","text":"In the context of LLMs, a \"unit\" can be defined in several ways:
Understanding which unit is most relevant to your use case is crucial for accurate economic analysis.
"},{"location":"guides/pricing/#components-of-unit-cost","title":"Components of Unit Cost","text":"Data transfer costs
Indirect Costs (e.g., networking costs)
Operational Costs (e.g., customer support related to AI functions)
Overhead
To calculate the true unit economics, follow these steps:
Determine Total Costs: Sum all direct, indirect, operational, and overhead costs over a fixed period (e.g., monthly).
Measure Total Units: Track the total number of relevant units processed in the same period.
Calculate Cost per Unit: Divide total costs by total units.
Cost per Unit = Total Costs / Total Units\n
Analyze Revenue per Unit: If the LLM is part of a revenue-generating product, calculate the revenue attributed to each unit.
Determine Profit per Unit: Subtract the cost per unit from the revenue per unit.
Profit per Unit = Revenue per Unit - Cost per Unit\n
"},{"location":"guides/pricing/#example-calculation","title":"Example Calculation","text":"Let's consider a hypothetical customer service AI chatbot with $10,000 in monthly LLM costs, $5,000 in other monthly costs, and 100,000 interactions per month:
Cost per Interaction = ($10,000 + $5,000) / 100,000 = $0.15\n
If each interaction generates an average of $0.50 in value (through cost savings or revenue):
Profit per Interaction = $0.50 - $0.15 = $0.35\n
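The per-unit calculation above generalizes to a simple function (a minimal sketch; the cost and revenue figures below are the example's illustrative inputs):

```python
def unit_economics(total_costs: float, total_units: int,
                   revenue_per_unit: float) -> tuple[float, float]:
    """Return (cost per unit, profit per unit)."""
    cost_per_unit = total_costs / total_units
    profit_per_unit = revenue_per_unit - cost_per_unit
    return cost_per_unit, profit_per_unit

# $10,000 + $5,000 in monthly costs over 100,000 interactions at $0.50 each
cost, profit = unit_economics(10_000 + 5_000, 100_000, 0.50)
# cost ≈ $0.15 and profit ≈ $0.35 per interaction
```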
"},{"location":"guides/pricing/#economies-of-scale","title":"Economies of Scale","text":"As usage increases, unit economics often improve due to:
However, it's crucial to model how these economies of scale manifest in your specific use case, as they may plateau or even reverse at very high volumes due to increased complexity and support needs.
"},{"location":"guides/pricing/#diseconomies-of-scale","title":"Diseconomies of Scale","text":"Conversely, be aware of potential diseconomies of scale:
By thoroughly understanding these unit economics, executives can make more informed decisions about which LLM provider and pricing model best aligns with their business objectives and scale.
"},{"location":"guides/pricing/#3-profit-margins-and-cost-structures","title":"3. Profit Margins and Cost Structures","text":"Understanding profit margins and cost structures is crucial for executives evaluating LLM integration. This section explores how different pricing models and operational strategies can impact overall profitability.
"},{"location":"guides/pricing/#components-of-profit-margin","title":"Components of Profit Margin","text":"Gross Margin: The difference between revenue and the direct costs of LLM usage.
Gross Margin = Revenue - Direct LLM Costs\nGross Margin % = (Gross Margin / Revenue) * 100\n
Contribution Margin: Gross margin minus variable operational costs.
Contribution Margin = Gross Margin - Variable Operational Costs\n
Net Margin: The final profit after all costs, including fixed overheads.
Net Margin = Contribution Margin - Fixed Costs\nNet Margin % = (Net Margin / Revenue) * 100\n
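The three margin formulas above compose naturally, so they can be computed together (a minimal sketch; the per-user sample figures are hypothetical):

```python
def margin_breakdown(revenue: float, direct_llm_costs: float,
                     variable_costs: float, fixed_costs: float) -> dict:
    """Gross, contribution, and net margin, per the formulas above."""
    gross = revenue - direct_llm_costs
    contribution = gross - variable_costs
    net = contribution - fixed_costs
    return {
        "gross_margin": gross,
        "gross_margin_pct": gross / revenue * 100,
        "contribution_margin": contribution,
        "net_margin": net,
        "net_margin_pct": net / revenue * 100,
    }

# Hypothetical per-user figures: $20 revenue, $6 LLM, $4 variable, $5 fixed
m = margin_breakdown(20, 6, 4, 5)   # net margin of $5 per user (25%)
```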
Licensing fees for essential software
Variable Costs (e.g., performance-based team bonuses)
Step Costs
Let's compare how different LLM pricing models might affect profit margins for a hypothetical AI-powered writing assistant service:
Scenario: The service charges users $20/month and expects to process an average of 100,000 tokens per user per month.
Pay-per-Token Model: gross margin per user $14 (70%)
Subscription Model: gross margin per user $15 (75%)
Hybrid Model
Use compression algorithms for inputs and outputs
Leverage Economies of Scale: spread fixed costs across a larger user base
Implement Tiered Pricing: for example, Basic ($10/month, 50K tokens) and Pro ($30/month, 200K tokens)
Vertical Integration: reduce dependency on third-party providers for critical operations
Smart Caching and Pre-computation: perform batch processing during off-peak hours
Hybrid Cloud Strategies
Consider a company that initially used a pay-per-token model:
Initial State: - Revenue per user: $20 - LLM cost per user: $6 - Other variable costs: $4 - Fixed costs per user: $5 - Net margin per user: $5 (25%)
After Optimization: - Implemented efficient prompting: Reduced token usage by 20% - Negotiated volume discount: 10% reduction in per-token price - Introduced tiered pricing: Average revenue per user increased to $25 - Optimized operations: Reduced other variable costs to $3
Result: - New LLM cost per user: $4.32 - New net margin per user: $12.68 (50.7%)
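The before-and-after result can be verified directly (a sketch reproducing the case study's illustrative figures):

```python
# Before: $6.00 LLM cost per user at $20 revenue, $4 variable, $5 fixed costs.
# After: 20% fewer tokens, a 10% per-token discount, $25 revenue, $3 variable.
new_llm_cost = 6.00 * (1 - 0.20) * (1 - 0.10)   # $4.32 per user
new_net = 25.00 - new_llm_cost - 3.00 - 5.00    # $12.68 per user
new_net_pct = new_net / 25.00 * 100             # ≈ 50.7%
```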
This case study demonstrates how a holistic approach to margin improvement, addressing both revenue and various cost components, can significantly enhance profitability.
Understanding these profit margin dynamics and cost structures is essential for executives to make informed decisions about LLM integration and to continuously optimize their AI-powered services for maximum profitability.
"},{"location":"guides/pricing/#4-llm-pricing-in-action-case-studies","title":"4. LLM Pricing in Action: Case Studies","text":"To provide a concrete understanding of how LLM pricing models work in real-world scenarios, let's examine several case studies across different industries and use cases. These examples will illustrate the interplay between pricing models, usage patterns, and business outcomes.
"},{"location":"guides/pricing/#case-study-1-e-commerce-product-description-generator","title":"Case Study 1: E-commerce Product Description Generator","text":"Company: GlobalMart, a large online retailer Use Case: Automated generation of product descriptions LLM Provider: GPT-4o
Pricing Model: Pay-per-token - Input: $5.00 per 1M tokens - Output: $15.00 per 1M tokens
Usage Pattern: - Average input: 50 tokens per product (product attributes) - Average output: 200 tokens per product (generated description) - Daily products processed: 10,000
Daily Cost Calculation: 1. Input cost: (50 tokens * 10,000 products) / 1M * $5.00 = $2.50 2. Output cost: (200 tokens * 10,000 products) / 1M * $15.00 = $30.00 3. Total daily cost: $32.50
Business Impact: - Reduced time to market for new products by 70% - Improved SEO performance due to unique, keyword-rich descriptions - Estimated daily value generated: $500 (based on increased sales and efficiency)
ROI Analysis: - Daily investment: $32.50 - Daily return: $500 - ROI = (Return - Investment) / Investment * 100 = 1,438%
Key Takeaway: The pay-per-token model works well for this use case due to the predictable and moderate token usage per task. The high ROI justifies the investment in a more advanced model like GPT-4o.
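The daily-cost and ROI arithmetic used in this and the following case studies follows one pattern, sketched below (prices are per million tokens, as quoted above):

```python
def daily_token_cost(input_tokens: int, output_tokens: int, tasks_per_day: int,
                     input_price_per_m: float, output_price_per_m: float) -> float:
    """Daily spend for a per-task token profile at per-million-token prices."""
    total_in = input_tokens * tasks_per_day / 1_000_000 * input_price_per_m
    total_out = output_tokens * tasks_per_day / 1_000_000 * output_price_per_m
    return total_in + total_out

def roi_pct(daily_return: float, daily_cost: float) -> float:
    return (daily_return - daily_cost) / daily_cost * 100

# Case study 1: 50 input / 200 output tokens, 10,000 products/day, $5/$15 per 1M
cost = daily_token_cost(50, 200, 10_000, 5.00, 15.00)   # $32.50 per day
# roi_pct(500, cost) ≈ 1,438%
```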
"},{"location":"guides/pricing/#case-study-2-customer-service-chatbot","title":"Case Study 2: Customer Service Chatbot","text":"Company: TechSupport Inc., a software company Use Case: 24/7 customer support chatbot LLM Provider: Claude 3.5 Sonnet
Pricing Model: Input: $3 per 1M tokens, Output: $15 per 1M tokens
Usage Pattern: - Average conversation: 500 tokens input (customer queries + context), 1000 tokens output (bot responses) - Daily conversations: 5,000
Daily Cost Calculation: 1. Input cost: (500 tokens * 5,000 conversations) / 1M * $3 = $7.50 2. Output cost: (1000 tokens * 5,000 conversations) / 1M * $15 = $75.00 3. Total daily cost: $82.50
Business Impact: - Reduced customer wait times by 90% - Resolved 70% of queries without human intervention - Estimated daily cost savings: $2,000 (based on reduced human support hours)
ROI Analysis: - Daily investment: $82.50 - Daily return: $2,000 - ROI = (Return - Investment) / Investment * 100 = 2,324%
Key Takeaway: The higher cost of Claude 3.5 Sonnet is justified by its superior performance in handling complex customer queries, resulting in significant cost savings and improved customer satisfaction.
"},{"location":"guides/pricing/#case-study-3-financial-report-summarization","title":"Case Study 3: Financial Report Summarization","text":"Company: FinAnalyze, a financial services firm Use Case: Automated summarization of lengthy financial reports LLM Provider: GPT-3.5 Turbo
Pricing Model: Input: $0.50 per 1M tokens, Output: $1.50 per 1M tokens
Usage Pattern: - Average report: 20,000 tokens input, 2,000 tokens output - Daily reports processed: 100
Daily Cost Calculation: 1. Input cost: (20,000 tokens * 100 reports) / 1M * $0.50 = $1.00 2. Output cost: (2,000 tokens * 100 reports) / 1M * $1.50 = $0.30 3. Total daily cost: $1.30
Business Impact: - Reduced analysis time by 80% - Improved consistency in report summaries - Enabled analysts to focus on high-value tasks - Estimated daily value generated: $1,000 (based on time savings and improved decision-making)
ROI Analysis: - Daily investment: $1.30 - Daily return: $1,000 - ROI = (Return - Investment) / Investment * 100 ≈ 76,823%
Key Takeaway: The lower cost of GPT-3.5 Turbo is suitable for this task, which requires processing large volumes of text but doesn't necessarily need the most advanced language understanding. The high input token count makes the input pricing a significant factor in model selection.
"},{"location":"guides/pricing/#case-study-4-ai-powered-language-learning-app","title":"Case Study 4: AI-Powered Language Learning App","text":"Company: LinguaLeap, an edtech startup Use Case: Personalized language exercises and conversations LLM Provider: Claude 3 Haiku
Pricing Model: Input: $0.25 per 1M tokens, Output: $1.25 per 1M tokens
Usage Pattern: - Average session: 300 tokens input (user responses + context), 500 tokens output (exercises + feedback) - Daily active users: 50,000 - Average sessions per user per day: 3
Daily Cost Calculation: 1. Input cost: (300 tokens * 3 sessions * 50,000 users) / 1M * $0.25 = $11.25 2. Output cost: (500 tokens * 3 sessions * 50,000 users) / 1M * $1.25 = $93.75 3. Total daily cost: $105
Business Impact: - Increased user engagement by 40% - Improved learning outcomes, leading to higher user retention - Enabled scaling to new languages without proportional increase in human tutors - Estimated daily revenue: $5,000 (based on subscription fees and in-app purchases)
ROI Analysis: - Daily investment: $105 - Daily revenue: $5,000 - ROI = (Revenue - Investment) / Investment * 100 = 4,662%
Key Takeaway: The high-volume, relatively simple interactions in this use case make Claude 3 Haiku an excellent choice. Its low cost allows for frequent interactions without prohibitive expenses, which is crucial for an app relying on regular user engagement.
"},{"location":"guides/pricing/#case-study-5-legal-document-analysis","title":"Case Study 5: Legal Document Analysis","text":"Company: LegalEagle LLP, a large law firm Use Case: Contract review and risk assessment LLM Provider: Claude 3 Opus
Pricing Model: Input: $15 per 1M tokens, Output: $75 per 1M tokens
Usage Pattern: - Average contract: 10,000 tokens input, 3,000 tokens output (analysis and risk assessment) - Daily contracts processed: 50
Daily Cost Calculation: 1. Input cost: (10,000 tokens * 50 contracts) / 1M * $15 = $7.50 2. Output cost: (3,000 tokens * 50 contracts) / 1M * $75 = $11.25 3. Total daily cost: $18.75
Business Impact: - Reduced contract review time by 60% - Improved accuracy in identifying potential risks - Enabled handling of more complex cases - Estimated daily value: $10,000 (based on time savings and improved risk management)
ROI Analysis: - Daily investment: $18.75 - Daily value: $10,000 - ROI = (Value - Investment) / Investment * 100 = 53,233%
Key Takeaway: Despite the high cost per token, Claude 3 Opus's advanced capabilities justify its use in this high-stakes environment where accuracy and nuanced understanding are critical. The high value generated per task offsets the higher token costs.
These case studies demonstrate how different LLM providers and pricing models can be optimal for various use cases, depending on factors such as token volume, task complexity, and the value generated by the AI application. Executives should carefully consider these factors when selecting an LLM provider and pricing model for their specific needs.
"},{"location":"guides/pricing/#5-calculating-roi-for-llm-integration","title":"5. Calculating ROI for LLM Integration","text":"Calculating the Return on Investment (ROI) for LLM integration is crucial for executives to justify the expenditure and assess the business value of AI implementation. This section will guide you through the process of calculating ROI, considering both tangible and intangible benefits.
"},{"location":"guides/pricing/#the-roi-formula","title":"The ROI Formula","text":"The basic ROI formula is:
ROI = (Net Benefit / Cost of Investment) * 100\n
For LLM integration, we can expand this to:
ROI = ((Total Benefits - Total Costs) / Total Costs) * 100\n
"},{"location":"guides/pricing/#identifying-benefits","title":"Identifying Benefits","text":"Lower error-related costs
Revenue Increases (e.g., upselling and cross-selling opportunities)
Productivity Gains (e.g., improved employee efficiency)
Quality Improvements (e.g., reduced error rates)
Strategic Advantages
Subscription costs
Infrastructure Costs (e.g., networking expenses)
Integration and Development Costs (e.g., custom feature development)
Training and Support (e.g., change management initiatives)
Compliance and Security
Define the Time Period: Determine the timeframe for your ROI calculation (e.g., 1 year, 3 years).
Estimate Total Benefits:
Estimate the value of strategic advantages (this may be more subjective)
Calculate Total Costs:
Sum up all direct and indirect costs related to LLM integration
Apply the ROI Formula:
ROI = ((Total Benefits - Total Costs) / Total Costs) * 100\n
Consider Time Value of Money: For longer-term projections, use Net Present Value (NPV) to account for the time value of money.
Let's consider a hypothetical customer service chatbot implementation:
Time Period: 1 year
Benefits: - Labor cost savings: $500,000 - Increased sales from improved customer satisfaction: $300,000 - Productivity gains from faster query resolution: $200,000
Total Benefits: $1,000,000
Costs: - LLM API fees: $100,000 - Integration and development: $150,000 - Training and support: $50,000 - Infrastructure: $50,000
Total Costs: $350,000
ROI Calculation:
ROI = (($1,000,000 - $350,000) / $350,000) * 100 = 185.7%\n
This indicates a strong positive return on investment, with benefits outweighing costs by a significant margin.
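The same example expressed as a short script (reproducing the hypothetical figures above):

```python
# Hypothetical one-year figures from the chatbot example
benefits = 500_000 + 300_000 + 200_000          # labor, sales, productivity
costs = 100_000 + 150_000 + 50_000 + 50_000     # API, development, training, infra
roi = (benefits - costs) / costs * 100          # ≈ 185.7%
```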
"},{"location":"guides/pricing/#considerations-for-accurate-roi-calculation","title":"Considerations for Accurate ROI Calculation","text":"Be Conservative in Estimates: It's better to underestimate benefits and overestimate costs to provide a more realistic view.
Account for Ramp-Up Time: Full benefits may not be realized immediately. Consider a phased approach in your calculations.
Include Opportunity Costs: Consider the potential returns if the investment were made elsewhere.
Factor in Risk: Adjust your ROI based on the likelihood of achieving projected benefits.
Consider Non-Financial Benefits: Some benefits, like improved employee satisfaction or enhanced brand perception, may not have direct financial equivalents but are still valuable.
Perform Sensitivity Analysis: Calculate ROI under different scenarios (best case, worst case, most likely) to understand the range of possible outcomes.
Benchmark Against Alternatives: Compare the ROI of LLM integration against other potential investments or solutions.
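The sensitivity analysis suggested above can be sketched as a scenario table (all scenario figures below are hypothetical; the "most likely" case reuses the example's numbers):

```python
def roi(total_benefits: float, total_costs: float) -> float:
    return (total_benefits - total_costs) / total_costs * 100

# Hypothetical best / most likely / worst scenarios for a one-year projection
scenarios = {
    "worst case":  (700_000, 450_000),
    "most likely": (1_000_000, 350_000),
    "best case":   (1_300_000, 300_000),
}
rois = {name: roi(b, c) for name, (b, c) in scenarios.items()}
# Gives the range of plausible outcomes rather than a single point estimate
```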
While initial ROI calculations are crucial for decision-making, it's important to consider long-term implications:
By thoroughly calculating and analyzing the ROI of LLM integration, executives can make data-driven decisions about AI investments and set realistic expectations for the value these technologies can bring to their organizations.
"},{"location":"guides/pricing/#6-comparative-analysis-of-major-llm-providers","title":"6. Comparative Analysis of Major LLM Providers","text":"In this section, we'll compare the offerings of major LLM providers, focusing on their pricing structures, model capabilities, and unique selling points. This analysis will help executives understand the landscape and make informed decisions about which provider best suits their needs.
"},{"location":"guides/pricing/#openai","title":"OpenAI","text":"Models: GPT-4o, GPT-3.5 Turbo
Pricing Structure: - Pay-per-token model - Different rates for input and output tokens - Bulk discounts available for high-volume users
Key Features: - State-of-the-art performance on a wide range of tasks - Regular model updates and improvements - Extensive documentation and community support
Considerations: - Higher pricing compared to some competitors - Potential for rapid price changes as technology evolves - Usage limits and approval process for higher-tier models
"},{"location":"guides/pricing/#anthropic","title":"Anthropic","text":"Models: Claude 3.5 Sonnet, Claude 3 Opus, Claude 3 Haiku
Pricing Structure: - Pay-per-token model - Different rates for input and output tokens - Tiered pricing based on model capabilities
Key Features: - Strong focus on AI safety and ethics - Long context windows (200K tokens) - Specialized models for different use cases (e.g., Haiku for speed, Opus for complex tasks)
Considerations: - Newer to the market compared to OpenAI - Potentially more limited third-party integrations - Strong emphasis on responsible AI use
"},{"location":"guides/pricing/#google-vertex-ai","title":"Google (Vertex AI)","text":"Models: PaLM 2 for Chat, PaLM 2 for Text
Pricing Structure: - Pay-per-thousand characters model - Different rates for input and output - Additional charges for advanced features (e.g., semantic retrieval)
Key Features: - Integration with Google Cloud ecosystem - Multi-modal capabilities (text, image, audio) - Enterprise-grade security and compliance features
Considerations: - Pricing can be complex due to additional Google Cloud costs - Strong performance in specialized domains (e.g., coding, mathematical reasoning) - Potential for integration with other Google services
"},{"location":"guides/pricing/#amazon-bedrock","title":"Amazon (Bedrock)","text":"Models: Claude (Anthropic), Titan
Pricing Structure: - Pay-per-second of compute time - Additional charges for data transfer and storage
Key Features: - Seamless integration with AWS services - Access to multiple model providers through a single API - Fine-tuning and customization options
Considerations: - Pricing model can be less predictable for inconsistent workloads - Strong appeal for existing AWS customers - Potential for cost optimizations through AWS ecosystem
"},{"location":"guides/pricing/#microsoft-azure-openai-service","title":"Microsoft (Azure OpenAI Service)","text":"Models: GPT-4, GPT-3.5 Turbo
Pricing Structure: - Similar to OpenAI's pricing, but with Azure integration - Additional costs for Azure services (e.g., storage, networking)
Key Features: - Enterprise-grade security and compliance - Integration with Azure AI services - Access to fine-tuning and customization options
Considerations: - Attractive for organizations already using Azure - Potential for volume discounts through Microsoft Enterprise Agreements - Additional overhead for Azure management
"},{"location":"guides/pricing/#comparative-analysis","title":"Comparative Analysis","text":"Provider Pricing Model Strengths Considerations OpenAI Pay-per-token - Top performance- Regular updates- Strong community - Higher costs- Usage limits Anthropic Pay-per-token - Ethical focus- Long context- Specialized models - Newer provider- Limited integrations Google Pay-per-character - Google Cloud integration- Multi-modal- Enterprise features - Complex pricing- Google ecosystem lock-in Amazon Pay-per-compute time - AWS integration- Multiple providers- Customization options - Less predictable costs- AWS ecosystem focus Microsoft Pay-per-token (Azure-based) - Enterprise security- Azure integration- Fine-tuning options - Azure overhead- Potential lock-in"},{"location":"guides/pricing/#factors-to-consider-in-provider-selection","title":"Factors to Consider in Provider Selection","text":"Performance Requirements: Assess whether you need state-of-the-art performance or if a less advanced (and potentially cheaper) model suffices.
Pricing Predictability: Consider whether your usage patterns align better with token-based or compute-time-based pricing.
Integration Needs: Evaluate how well each provider integrates with your existing technology stack.
Scalability: Assess each provider's ability to handle your expected growth in usage.
Customization Options: Determine if you need fine-tuning or specialized model development capabilities.
Compliance and Security: Consider your industry-specific regulatory requirements and each provider's security offerings.
Support and Documentation: Evaluate the quality of documentation, community support, and enterprise-level assistance.
Ethical Considerations: Assess each provider's stance on AI ethics and responsible use.
Lock-In Concerns: Consider the long-term implications of committing to a specific provider or cloud ecosystem.
Multi-Provider Strategy: Evaluate the feasibility and benefits of using multiple providers for different use cases.
By carefully comparing these providers and considering the factors most relevant to your organization, you can make an informed decision that balances cost, performance, and strategic fit. Remember that the LLM landscape is rapidly evolving, so it's important to regularly reassess your choices and stay informed about new developments and pricing changes.
"},{"location":"guides/pricing/#7-hidden-costs-and-considerations","title":"7. Hidden Costs and Considerations","text":"When evaluating LLM providers and calculating the total cost of ownership, it's crucial to look beyond the advertised pricing and consider the hidden costs and additional factors that can significantly impact your budget and overall implementation success. This section explores these often-overlooked aspects to help executives make more comprehensive and accurate assessments.
"},{"location":"guides/pricing/#1-data-preparation-and-cleaning","title":"1. Data Preparation and Cleaning","text":"Considerations: - Cost of data collection and aggregation - Expenses related to data cleaning and normalization - Ongoing data maintenance and updates
Impact: - Can be time-consuming and labor-intensive - May require specialized tools or personnel - Critical for model performance and accuracy
"},{"location":"guides/pricing/#2-fine-tuning-and-customization","title":"2. Fine-Tuning and Customization","text":"Considerations: - Costs associated with creating custom datasets - Compute resources required for fine-tuning - Potential need for specialized ML expertise
Impact: - Can significantly improve model performance for specific tasks - May lead to better ROI in the long run - Increases initial implementation costs
"},{"location":"guides/pricing/#3-integration-and-development","title":"3. Integration and Development","text":"Considerations: - Engineering time for API integration - Development of custom interfaces or applications - Ongoing maintenance and updates
Impact: - Can be substantial, especially for complex integrations - May require hiring additional developers or consultants - Critical for seamless user experience and workflow integration
"},{"location":"guides/pricing/#4-monitoring-and-optimization","title":"4. Monitoring and Optimization","text":"Considerations: - Tools and systems for performance monitoring - Regular audits and optimizations - Costs associated with debugging and troubleshooting
Impact: - Ongoing expense that increases with scale - Essential for maintaining efficiency and cost-effectiveness - Can lead to significant savings through optimized usage
"},{"location":"guides/pricing/#5-compliance-and-security","title":"5. Compliance and Security","text":"Considerations: - Legal counsel for data privacy and AI regulations - Implementation of security measures (e.g., encryption, access controls) - Regular audits and certifications
Impact: - Can be substantial, especially in heavily regulated industries - Critical for risk management and maintaining customer trust - May limit certain use cases or require additional safeguards
"},{"location":"guides/pricing/#6-training-and-change-management","title":"6. Training and Change Management","text":"Impact: - Often underestimated but crucial for adoption - Can affect productivity during the transition period - Important for realizing the full potential of LLM integration
"},{"location":"guides/pricing/#7-scaling-costs","title":"7. Scaling Costs","text":"Considerations: - Potential price increases as usage grows - Need for additional infrastructure or resources - Costs associated with managing increased complexity
Impact: - Can lead to unexpected expenses if not properly forecasted - May require renegotiation of contracts or switching providers - Important to consider in long-term planning
"},{"location":"guides/pricing/#8-opportunity-costs","title":"8. Opportunity Costs","text":"Considerations: - Time and resources diverted from other projects - Potential missed opportunities due to focus on LLM implementation - Learning curve and productivity dips during adoption
Impact: - Difficult to quantify but important to consider - Can affect overall business strategy and priorities - May influence timing and scope of LLM integration
"},{"location":"guides/pricing/#9-vendor-lock-in","title":"9. Vendor Lock-in","text":"Considerations: - Costs associated with switching providers - Dependency on provider-specific features or integrations - Potential for price increases once deeply integrated
Impact: - Can limit flexibility and negotiating power - May affect long-term costs and strategic decisions - Important to consider multi-provider or portable implementation strategies
"},{"location":"guides/pricing/#10-ethical-and-reputational-considerations","title":"10. Ethical and Reputational Considerations","text":"Considerations: - Potential backlash from AI-related controversies - Costs of ensuring ethical AI use and transparency - Investments in responsible AI practices
Impact: - Can affect brand reputation and customer trust - May require ongoing public relations efforts - Important for long-term sustainability and social responsibility
By carefully considering these hidden costs and factors, executives can develop a more comprehensive understanding of the total investment required for successful LLM integration. This holistic approach allows for better budgeting, risk management, and strategic planning.
"},{"location":"guides/pricing/#conclusion-navigating-the-llm-pricing-landscape","title":"Conclusion: Navigating the LLM Pricing Landscape","text":"As we've explored throughout this guide, the landscape of LLM provider pricing is complex and multifaceted. From understanding the basic pricing models to calculating ROI and considering hidden costs, there are numerous factors that executives must weigh when making decisions about AI integration.
Key takeaways include:
As the AI landscape continues to evolve rapidly, staying informed and adaptable is crucial. What may be the best choice today could change as new models are released, pricing structures shift, and your organization's needs evolve.
To help you navigate these complexities and make the most informed decisions for your enterprise, we invite you to take the next steps in your AI journey:
Book a Consultation: Speak with our enterprise-grade LLM specialists who can provide personalized insights and recommendations tailored to your specific needs. Schedule a 15-minute call at https://cal.com/swarms/15min.
Join Our Community: Connect with fellow AI executives, share experiences, and stay updated on the latest developments in the LLM space. Join our Discord community at https://discord.gg/yxU9t9da.
By leveraging expert guidance and peer insights, you can position your organization to make the most of LLM technologies while optimizing costs and maximizing value. The future of AI in enterprise is bright, and with the right approach, your organization can be at the forefront of this transformative technology.
"},{"location":"misc/features/20swarms/","title":"20swarms","text":"# Swarm Alpha: Data Cruncher\n**Overview**: Processes large datasets. \n**Strengths**: Efficient data handling. \n**Weaknesses**: Requires structured data. \n\n**Pseudo Code**:\n```sql\nFOR each data_entry IN dataset:\n result = PROCESS(data_entry)\n STORE(result)\nEND FOR\nRETURN aggregated_results\n
"},{"location":"misc/features/20swarms/#swarm-beta-artistic-ally","title":"Swarm Beta: Artistic Ally","text":"Overview: Generates art pieces. Strengths: Creativity. Weaknesses: Somewhat unpredictable.
Pseudo Code:
INITIATE canvas_parameters\nSELECT art_style\nDRAW(canvas_parameters, art_style)\nRETURN finished_artwork\n
"},{"location":"misc/features/20swarms/#swarm-gamma-sound-sculptor","title":"Swarm Gamma: Sound Sculptor","text":"Overview: Crafts audio sequences. Strengths: Diverse audio outputs. Weaknesses: Complexity in refining outputs.
Pseudo Code:
DEFINE sound_parameters\nSELECT audio_style\nGENERATE_AUDIO(sound_parameters, audio_style)\nRETURN audio_sequence\n
"},{"location":"misc/features/20swarms/#swarm-delta-web-weaver","title":"Swarm Delta: Web Weaver","text":"Overview: Constructs web designs. Strengths: Modern design sensibility. Weaknesses: Limited to web interfaces.
Pseudo Code:
SELECT template\nAPPLY user_preferences(template)\nDESIGN_web(template, user_preferences)\nRETURN web_design\n
"},{"location":"misc/features/20swarms/#swarm-epsilon-code-compiler","title":"Swarm Epsilon: Code Compiler","text":"Overview: Writes and compiles code snippets. Strengths: Quick code generation. Weaknesses: Limited to certain programming languages.
Pseudo Code:
DEFINE coding_task\nWRITE_CODE(coding_task)\nCOMPILE(code)\nRETURN executable\n
"},{"location":"misc/features/20swarms/#swarm-zeta-security-shield","title":"Swarm Zeta: Security Shield","text":"Overview: Detects system vulnerabilities. Strengths: High threat detection rate. Weaknesses: Potential false positives.
Pseudo Code:
MONITOR system_activity\nIF suspicious_activity_detected:\n ANALYZE threat_level\n INITIATE mitigation_protocol\nEND IF\nRETURN system_status\n
"},{"location":"misc/features/20swarms/#swarm-eta-researcher-relay","title":"Swarm Eta: Researcher Relay","text":"Overview: Gathers and synthesizes research data. Strengths: Access to vast databases. Weaknesses: Depth of research can vary.
Pseudo Code:
DEFINE research_topic\nSEARCH research_sources(research_topic)\nSYNTHESIZE findings\nRETURN research_summary\n
"},{"location":"misc/features/20swarms/#swarm-theta-sentiment-scanner","title":"Swarm Theta: Sentiment Scanner","text":"Overview: Analyzes text for sentiment and emotional tone. Strengths: Accurate sentiment detection. Weaknesses: Contextual nuances might be missed.
Pseudo Code:
INPUT text_data\nANALYZE text_data FOR emotional_tone\nDETERMINE sentiment_value\nRETURN sentiment_value\n
"},{"location":"misc/features/20swarms/#swarm-iota-image-interpreter","title":"Swarm Iota: Image Interpreter","text":"Overview: Processes and categorizes images. Strengths: High image recognition accuracy. Weaknesses: Can struggle with abstract visuals.
Pseudo Code:
LOAD image_data\nPROCESS image_data FOR features\nCATEGORIZE image_based_on_features\nRETURN image_category\n
"},{"location":"misc/features/20swarms/#swarm-kappa-language-learner","title":"Swarm Kappa: Language Learner","text":"Overview: Translates and interprets multiple languages. Strengths: Supports multiple languages. Weaknesses: Nuances in dialects might pose challenges.
Pseudo Code:
RECEIVE input_text, target_language\nTRANSLATE input_text TO target_language\nRETURN translated_text\n
"},{"location":"misc/features/20swarms/#swarm-lambda-trend-tracker","title":"Swarm Lambda: Trend Tracker","text":"Overview: Monitors and predicts trends based on data. Strengths: Proactive trend identification. Weaknesses: Requires continuous data stream.
Pseudo Code:
COLLECT data_over_time\nANALYZE data_trends\nPREDICT upcoming_trends\nRETURN trend_forecast\n
"},{"location":"misc/features/20swarms/#swarm-mu-financial-forecaster","title":"Swarm Mu: Financial Forecaster","text":"Overview: Analyzes financial data to predict market movements. Strengths: In-depth financial analytics. Weaknesses: Market volatility can affect predictions.
Pseudo Code:
GATHER financial_data\nCOMPUTE statistical_analysis\nFORECAST market_movements\nRETURN financial_projections\n
"},{"location":"misc/features/20swarms/#swarm-nu-network-navigator","title":"Swarm Nu: Network Navigator","text":"Overview: Optimizes and manages network traffic. Strengths: Efficient traffic management. Weaknesses: Depends on network infrastructure.
Pseudo Code:
MONITOR network_traffic\nIDENTIFY congestion_points\nOPTIMIZE traffic_flow\nRETURN network_status\n
"},{"location":"misc/features/20swarms/#swarm-xi-content-curator","title":"Swarm Xi: Content Curator","text":"Overview: Gathers and presents content based on user preferences. Strengths: Personalized content delivery. Weaknesses: Limited by available content sources.
Pseudo Code:
DEFINE user_preferences\nSEARCH content_sources\nFILTER content_matching_preferences\nDISPLAY curated_content\n
"},{"location":"misc/features/SMAPS/","title":"Swarms Multi-Agent Permissions System (SMAPS)","text":""},{"location":"misc/features/SMAPS/#description","title":"Description","text":"SMAPS is a robust permissions management system designed to integrate seamlessly with Swarm's multi-agent AI framework. Drawing inspiration from Amazon's IAM, SMAPS ensures secure, granular control over agent actions while allowing for collaborative human-in-the-loop interventions.
"},{"location":"misc/features/SMAPS/#technical-specification","title":"Technical Specification","text":""},{"location":"misc/features/SMAPS/#1-components","title":"1. Components","text":"Each role has specific permissions associated with it, defining what actions can be performed on AI agents or tasks.
Dynamic Permissions: Permissions granularity ranges from broad (e.g., view all tasks) to specific (e.g., modify parameters of a particular agent).
Multiplayer Collaboration: A voting system for decision-making when human intervention is required.
Agent Supervision: Humans can intervene, if necessary, to guide agent actions based on their permissions.
Audit Trail:
Swarms Multi-Agent Permissions System (SMAPS) offers a sophisticated permissions management mechanism tailored for multi-agent AI frameworks. It combines the robustness of Amazon IAM-like permissions with a unique \"multiplayer\" feature, allowing multiple humans to collaboratively guide AI agents in real-time. This ensures not only that tasks are executed efficiently but also that they uphold the highest standards of accuracy and ethics. With SMAPS, businesses can harness the power of swarms with confidence, knowing that they have full control and transparency over their AI operations.
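The IAM-inspired role model described above can be sketched as a minimal permission check. The `Role` class, the "verb:scope" action strings, and the role names below are illustrative assumptions, not part of any actual SMAPS API:

```python
from dataclasses import dataclass, field


@dataclass
class Role:
    """An illustrative IAM-style role: a name plus a set of allowed actions.

    Actions are "verb:scope" strings, so grants can range from broad
    (e.g. "view:*") to specific (e.g. "modify:agent-42").
    """
    name: str
    allowed: set = field(default_factory=set)

    def can(self, action: str, resource: str) -> bool:
        # Exact grant on the resource, or a wildcard grant on the verb.
        return f"{action}:{resource}" in self.allowed or f"{action}:*" in self.allowed


# Hypothetical roles mirroring the broad-to-specific granularity in the text.
viewer = Role("viewer", {"view:*"})
operator = Role("operator", {"view:*", "modify:agent-42"})

print(viewer.can("view", "task-7"))        # wildcard view grant
print(viewer.can("modify", "agent-42"))    # not granted
print(operator.can("modify", "agent-42"))  # specific grant
```

A real SMAPS deployment would layer the voting and audit-trail features on top of checks like these.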
"},{"location":"misc/features/agent_archive/","title":"AgentArchive Documentation","text":""},{"location":"misc/features/agent_archive/#swarms-multi-agent-framework","title":"Swarms Multi-Agent Framework","text":"AgentArchive is an advanced feature crafted to archive, bookmark, and harness the transcripts of agent runs. It promotes the storing and leveraging of successful agent interactions, offering a powerful means for users to derive \"recipes\" for future agents. Furthermore, with its public archive feature, users can contribute to and benefit from the collective wisdom of the community.
"},{"location":"misc/features/agent_archive/#overview","title":"Overview:","text":"AgentArchive empowers users to: 1. Preserve complete transcripts of agent instances. 2. Bookmark and annotate significant runs. 3. Categorize runs using various tags. 4. Transform successful runs into actionable \"recipes\". 5. Publish and access a shared knowledge base via a public archive.
"},{"location":"misc/features/agent_archive/#features","title":"Features:","text":""},{"location":"misc/features/agent_archive/#1-archiving","title":"1. Archiving:","text":"Organize and classify agent runs via: - Prompt: The originating instruction that triggered the agent run. - Tasks: Distinct tasks or operations executed by the agent. - Model: The specific AI model or iteration used during the interaction. - Temperature (Temp): The set randomness or innovation level for the agent.
"},{"location":"misc/features/agent_archive/#4-recipe-generation","title":"4. Recipe Generation:","text":"With AgentArchive, users not only benefit from their past interactions but can also leverage the collective expertise of the Swarms community, ensuring continuous improvement and shared success.
"},{"location":"misc/features/fail_protocol/","title":"Swarms Multi-Agent Framework Documentation","text":""},{"location":"misc/features/fail_protocol/#table-of-contents","title":"Table of Contents","text":"Agent failures may arise from bugs, unexpected inputs, or external system changes. This protocol aims to diagnose, address, and prevent such failures.
"},{"location":"misc/features/fail_protocol/#2-root-cause-analysis","title":"2. Root Cause Analysis","text":"Swarm failures are more complex, often resulting from inter-agent conflicts, systemic bugs, or large-scale environmental changes. This protocol delves deep into such failures to ensure the swarm operates optimally.
"},{"location":"misc/features/fail_protocol/#2-root-cause-analysis_1","title":"2. Root Cause Analysis","text":"By following these protocols, the Swarms Multi-Agent Framework can systematically address and prevent failures, ensuring a high degree of reliability and efficiency.
"},{"location":"misc/features/human_in_loop/","title":"Human-in-the-Loop Task Handling Protocol","text":""},{"location":"misc/features/human_in_loop/#overview","title":"Overview","text":"The Swarms Multi-Agent Framework recognizes the invaluable contributions humans can make, especially in complex scenarios where nuanced judgment is required. The \"Human-in-the-Loop Task Handling Protocol\" ensures that when agents encounter challenges they cannot handle autonomously, the most capable human collaborator is engaged to provide guidance, based on their skills and expertise.
"},{"location":"misc/features/human_in_loop/#protocol-steps","title":"Protocol Steps","text":""},{"location":"misc/features/human_in_loop/#1-task-initiation-analysis","title":"1. Task Initiation & Analysis","text":"The integration of human expertise with AI capabilities is a cornerstone of the Swarms Multi-Agent Framework. This \"Human-in-the-Loop Task Handling Protocol\" ensures that tasks are executed efficiently, leveraging the best of both human judgment and AI automation. Through collaborative synergy, we can tackle challenges more effectively and drive innovation.
"},{"location":"misc/features/info_sec/","title":"Secure Communication Protocols","text":""},{"location":"misc/features/info_sec/#overview","title":"Overview","text":"The Swarms Multi-Agent Framework prioritizes the security and integrity of data, especially personal and sensitive information. Our Secure Communication Protocols ensure that all communications between agents are encrypted, authenticated, and resistant to tampering or unauthorized access.
"},{"location":"misc/features/info_sec/#features","title":"Features","text":""},{"location":"misc/features/info_sec/#1-end-to-end-encryption","title":"1. End-to-End Encryption","text":"Secure communication is paramount in the Swarms Multi-Agent Framework, especially when dealing with personal and sensitive information. Adhering to these protocols and best practices ensures the safety, privacy, and trust of all stakeholders involved.
"},{"location":"misc/features/promptimizer/","title":"Promptimizer Documentation","text":""},{"location":"misc/features/promptimizer/#swarms-multi-agent-framework","title":"Swarms Multi-Agent Framework","text":"The Promptimizer Tool stands as a cornerstone innovation within the Swarms Multi-Agent Framework, meticulously engineered to refine and supercharge prompts across diverse categories. Capitalizing on extensive libraries of best-practice prompting techniques, this tool ensures your prompts are razor-sharp, tailored, and primed for optimal outcomes.
"},{"location":"misc/features/promptimizer/#overview","title":"Overview:","text":"The Promptimizer Tool is crafted to: 1. Rigorously analyze and elevate the quality of provided prompts. 2. Furnish best-in-class recommendations rooted in proven prompting strategies. 3. Serve a spectrum of categories, from technical operations to expansive creative ventures.
"},{"location":"misc/features/promptimizer/#core-features","title":"Core Features:","text":""},{"location":"misc/features/promptimizer/#1-deep-prompt-analysis","title":"1. Deep Prompt Analysis:","text":"By integrating the Promptimizer Tool into their workflow, Swarms users stand poised to redefine the boundaries of what's possible, turning each prompt into a beacon of excellence and efficiency.
"},{"location":"misc/features/shorthand/","title":"Shorthand Communication System","text":""},{"location":"misc/features/shorthand/#swarms-multi-agent-framework","title":"Swarms Multi-Agent Framework","text":"The Enhanced Shorthand Communication System is designed to streamline agent-agent communication within the Swarms Multi-Agent Framework. This system employs concise alphanumeric notations to relay task-specific details to agents efficiently.
"},{"location":"misc/features/shorthand/#format","title":"Format:","text":"The shorthand format is structured as [AgentType]-[TaskLayer].[TaskNumber]-[Priority]-[Status]
.
C
: Code agentD
: Data processing agentM
: Monitoring agentN
: Network agentR
: Resource management agentI
: Interface agentS
: Security agent1.8
signifies Task layer 1, task number 8.H
: HighM
: MediumL
: LowI
: InitializedP
: In-progressC
: CompletedF
: FailedW
: WaitingE01
: Resource issuesE02
: Data inconsistencyE03
: Dependency malfunction ... and more as needed.+
: Denotes required collaboration.C-1.8-H-I
: A high-priority coding task that's initializing.D-2.3-M-P
: A medium-priority data task currently in-progress.M-3.5-L-P+
: A low-priority monitoring task in progress needing collaboration.By leveraging the Enhanced Shorthand Communication System, the Swarms Multi-Agent Framework can ensure swift interactions, concise communications, and effective task management.
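The shorthand format described above is regular enough to parse mechanically. The following is a minimal sketch of such a parser; the field names in the returned dict are assumptions for illustration:

```python
import re

# Pattern for [AgentType]-[TaskLayer].[TaskNumber]-[Priority]-[Status],
# with an optional trailing "+" marking required collaboration.
SHORTHAND = re.compile(r"^([A-Z])-(\d+)\.(\d+)-([HML])-([IPCFW])(\+?)$")


def parse_shorthand(code: str) -> dict:
    """Parse a shorthand code into its named fields."""
    match = SHORTHAND.match(code)
    if match is None:
        raise ValueError(f"Invalid shorthand code: {code!r}")
    agent, layer, number, priority, status, collab = match.groups()
    return {
        "agent_type": agent,
        "task_layer": int(layer),
        "task_number": int(number),
        "priority": priority,
        "status": status,
        "collaboration": collab == "+",
    }


print(parse_shorthand("C-1.8-H-I"))
print(parse_shorthand("M-3.5-L-P+")["collaboration"])  # True
```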
"},{"location":"swarms/contributing/","title":"Contribution Guidelines","text":""},{"location":"swarms/contributing/#table-of-contents","title":"Table of Contents","text":"swarms is a library focused on making it simple to orchestrate agents to automate real-world activities. The goal is to automate the world economy with these swarms of agents.
We need your help to:
Your contributions will help us push the boundaries of AI and make this library a valuable resource for the community.
"},{"location":"swarms/contributing/#getting-started","title":"Getting Started","text":""},{"location":"swarms/contributing/#installation","title":"Installation","text":"You can install swarms using pip
:
pip3 install swarms\n
Alternatively, you can clone the repository:
git clone https://github.com/kyegomez/swarms\n
"},{"location":"swarms/contributing/#project-structure","title":"Project Structure","text":"swarms/
: Contains all the source code for the library.examples/
: Includes example scripts and notebooks demonstrating how to use the library.tests/
: (To be created) Will contain unit tests for the library.docs/
: (To be maintained) Contains documentation files.If you find any bugs, inconsistencies, or have suggestions for enhancements, please open an issue on GitHub:
We welcome pull requests (PRs) for bug fixes, improvements, and new features. Please follow these guidelines:
git clone https://github.com/kyegomez/swarms.git\n
git checkout -b feature/your-feature-name\n
git commit -am \"Add feature X\"\n
git push origin feature/your-feature-name\n
Create a Pull Request:
Go to the original repository on GitHub.
Provide a clear description of your changes and reference any related issues.
Respond to Feedback: Be prepared to make changes based on code reviews.
Note: It's recommended to create small and focused PRs for easier review and faster integration.
"},{"location":"swarms/contributing/#coding-standards","title":"Coding Standards","text":"To maintain code quality and consistency, please adhere to the following standards.
"},{"location":"swarms/contributing/#type-annotations","title":"Type Annotations","text":"def add_numbers(a: int, b: int) -> int:\n return a + b\n
Raises: List any exceptions that are raised.
Example:
def calculate_mean(values: List[float]) -> float:\n \"\"\"\n Calculates the mean of a list of numbers.\n\n Args:\n values (List[float]): A list of numerical values.\n\n Returns:\n float: The mean of the input values.\n\n Raises:\n ValueError: If the input list is empty.\n \"\"\"\n if not values:\n raise ValueError(\"The input list is empty.\")\n return sum(values) / len(values)\n
Write tests with unittest, pytest, or a similar testing framework. Place them in the tests/ directory, mirroring the structure of swarms/. Run the suite with:
pytest tests/\n
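A test for the `calculate_mean` example above might look like the following pytest-style module. The function is redefined inline here so the sketch is self-contained; in a real PR it would be imported from the package:

```python
from typing import List


def calculate_mean(values: List[float]) -> float:
    """Mean of a list of numbers (mirrors the docstring example above)."""
    if not values:
        raise ValueError("The input list is empty.")
    return sum(values) / len(values)


def test_calculate_mean_basic():
    assert calculate_mean([1.0, 2.0, 3.0]) == 2.0


def test_calculate_mean_empty_raises():
    try:
        calculate_mean([])
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for empty input")


if __name__ == "__main__":
    test_calculate_mean_basic()
    test_calculate_mean_empty_raises()
    print("all tests passed")
```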
"},{"location":"swarms/contributing/#code-style","title":"Code Style","text":"flake8
, black
, or pylint
to check code style.We have several areas where contributions are particularly welcome.
"},{"location":"swarms/contributing/#writing-tests","title":"Writing Tests","text":"swarms/
.examples/
directory.docs/
directory.By contributing to swarms, you agree that your contributions will be licensed under the MIT License.
Thank you for contributing to swarms! Your efforts help make this project better for everyone.
If you have any questions or need assistance, please feel free to open an issue or reach out to the maintainers.
"},{"location":"swarms/ecosystem/","title":"Swarms Ecosystem","text":"The Complete Enterprise-Grade Multi-Agent AI Platform
"},{"location":"swarms/ecosystem/#join-the-future-of-ai-development","title":"Join the Future of AI Development","text":"We're Building the Operating System for the Agent Economy - The Swarms ecosystem represents the most comprehensive, production-ready multi-agent AI platform available today. From our flagship Python framework to high-performance Rust implementations and client libraries spanning every major programming language, we provide enterprise-grade tools that power the next generation of intelligent applications.
"},{"location":"swarms/ecosystem/#complete-product-portfolio","title":"Complete Product Portfolio","text":"Product Technology Status Repository Documentation Swarms Python Framework Python Production swarms Docs Swarms Rust Framework Rust Production swarms-rs Docs Python API Client Python Production swarms-sdk Docs TypeScript/Node.js Client TypeScript Production swarms-ts Coming Soon Go Client Go Production swarms-client-go Coming Soon Java Client Java Production swarms-java Coming Soon Kotlin Client Kotlin Q2 2025 In Development Coming Soon Ruby Client Ruby Q2 2025 In Development Coming Soon Rust Client Rust Q2 2025 In Development Coming Soon C#/.NET Client C# Q3 2025 In Development Coming Soon"},{"location":"swarms/ecosystem/#why-choose-the-swarms-ecosystem","title":"Why Choose the Swarms Ecosystem?","text":""},{"location":"swarms/ecosystem/#enterprise-grade-architecture","title":"Enterprise-Grade Architecture","text":"Production Ready: Battle-tested in enterprise environments with 99.9%+ uptime
Scalable Infrastructure: Handle millions of agent interactions with automatic scaling
Security First: End-to-end encryption, API key management, and enterprise compliance
Observability: Comprehensive logging, monitoring, and debugging capabilities
Multiple Language Support: Native clients for every major programming language
Unified API: Consistent interface across all platforms and languages
Rich Documentation: Comprehensive guides, tutorials, and API references
Active Community: 24/7 support through Discord, GitHub, and direct channels
High Throughput: Process thousands of concurrent agent requests
Low Latency: Optimized for real-time applications and user experiences
Fault Tolerance: Automatic retries, circuit breakers, and graceful degradation
Multi-Cloud: Deploy on AWS, GCP, Azure, or on-premises infrastructure
Ready to work on cutting-edge agent technology that's shaping the future? We're actively recruiting exceptional engineers, researchers, and technical leaders to join our mission of building the operating system for the agent economy.
Why Join Swarms? What We Offer Cutting-Edge Technology Work on the most powerful multi-agent systems, distributed computing, and enterprise-scale infrastructure Global Impact Your code will power agent applications used by Fortune 500 companies and millions of developers World-Class Team Collaborate with top engineers, researchers, and industry experts from Google, OpenAI, and more Fast Growth Join a rapidly scaling company with massive market opportunity and venture backing"},{"location":"swarms/ecosystem/#open-positions","title":"Open Positions","text":"Position Role Description Senior Rust Engineers Building high-performance agent infrastructure Python Framework Engineers Expanding our core multi-agent capabilities DevOps/Platform Engineers Scaling cloud infrastructure for millions of agents Technical Writers Creating world-class developer documentation Solutions Engineers Helping enterprises adopt multi-agent AIReady to Build the Future? Apply Now at swarms.ai/hiring
"},{"location":"swarms/ecosystem/#get-started-today","title":"Get Started Today","text":""},{"location":"swarms/ecosystem/#quick-start-guide","title":"Quick Start Guide","text":"Step Action Time Required 1 Install Swarms Python Framework 5 minutes 2 Run Your First Agent 10 minutes 3 Try Multi-Agent Workflows 15 minutes 4 Join Our Discord Community 2 minutes 5 Explore Enterprise Features 20 minutes"},{"location":"swarms/ecosystem/#enterprise-support-partnerships","title":"Enterprise Support & Partnerships","text":""},{"location":"swarms/ecosystem/#ready-to-scale-with-swarms","title":"Ready to Scale with Swarms?","text":"Contact Type Best For Response Time Contact Information Technical Support Development questions, troubleshooting < 24 hours Book Support Call Enterprise Sales Custom deployments, enterprise licensing < 4 hours kye@swarms.world Partnerships Integration partnerships, technology alliances < 48 hours kye@swarms.world Investor Relations Investment opportunities, funding updates By appointment kye@swarms.worldReady to build the future of AI? Start with Swarms today and join thousands of developers creating the next generation of intelligent applications.
"},{"location":"swarms/features/","title":"Feature Set","text":""},{"location":"swarms/features/#enterprise-features","title":"\u2728 Enterprise Features","text":"Swarms delivers a comprehensive, enterprise-grade multi-agent infrastructure platform designed for production-scale deployments and seamless integration with existing systems.
Category Enterprise Capabilities Business Value \ud83c\udfe2 Enterprise Architecture \u2022 Production-Ready Infrastructure\u2022 High Availability Systems\u2022 Modular Microservices Design\u2022 Comprehensive Observability\u2022 Backwards Compatibility \u2022 99.9%+ Uptime Guarantee\u2022 Reduced Operational Overhead\u2022 Seamless Legacy Integration\u2022 Enhanced System Monitoring\u2022 Risk-Free Migration Path \ud83e\udd16 Multi-Agent Orchestration \u2022 Hierarchical Agent Swarms\u2022 Parallel Processing Pipelines\u2022 Sequential Workflow Orchestration\u2022 Graph-Based Agent Networks\u2022 Dynamic Agent Composition\u2022 Agent Registry Management \u2022 Complex Business Process Automation\u2022 Scalable Task Distribution\u2022 Flexible Workflow Adaptation\u2022 Optimized Resource Utilization\u2022 Centralized Agent Governance\u2022 Enterprise-Grade Agent Lifecycle Management \ud83d\udd04 Enterprise Integration \u2022 Multi-Model Provider Support\u2022 Custom Agent Development Framework\u2022 Extensive Enterprise Tool Library\u2022 Multiple Memory Systems\u2022 Backwards Compatibility with LangChain, AutoGen, CrewAI\u2022 Standardized API Interfaces \u2022 Vendor-Agnostic Architecture\u2022 Custom Solution Development\u2022 Extended Functionality Integration\u2022 Enhanced Knowledge Management\u2022 Seamless Framework Migration\u2022 Reduced Integration Complexity \ud83d\udcc8 Enterprise Scalability \u2022 Concurrent Multi-Agent Processing\u2022 Intelligent Resource Management\u2022 Load Balancing & Auto-Scaling\u2022 Horizontal Scaling Capabilities\u2022 Performance Optimization\u2022 Capacity Planning Tools \u2022 High-Throughput Processing\u2022 Cost-Effective Resource Utilization\u2022 Elastic Scaling Based on Demand\u2022 Linear Performance Scaling\u2022 Optimized Response Times\u2022 Predictable Growth Planning \ud83d\udee0\ufe0f Developer Experience \u2022 Intuitive Enterprise API\u2022 Comprehensive Documentation\u2022 Active Enterprise 
Community\u2022 CLI & SDK Tools\u2022 IDE Integration Support\u2022 Code Generation Templates \u2022 Accelerated Development Cycles\u2022 Reduced Learning Curve\u2022 Expert Community Support\u2022 Rapid Deployment Capabilities\u2022 Enhanced Developer Productivity\u2022 Standardized Development Patterns \ud83d\udd10 Enterprise Security \u2022 Comprehensive Error Handling\u2022 Advanced Rate Limiting\u2022 Real-Time Monitoring Integration\u2022 Detailed Audit Logging\u2022 Role-Based Access Control\u2022 Data Encryption & Privacy \u2022 Enhanced System Reliability\u2022 API Security Protection\u2022 Proactive Issue Detection\u2022 Regulatory Compliance Support\u2022 Granular Access Management\u2022 Enterprise Data Protection \ud83d\udcca Advanced Enterprise Features \u2022 SpreadsheetSwarm for Mass Agent Management\u2022 Group Chat for Collaborative AI\u2022 Centralized Agent Registry\u2022 Mixture of Agents for Complex Solutions\u2022 Agent Performance Analytics\u2022 Automated Agent Optimization \u2022 Large-Scale Agent Operations\u2022 Team-Based AI Collaboration\u2022 Centralized Agent Governance\u2022 Sophisticated Problem Solving\u2022 Performance Insights & Optimization\u2022 Continuous Agent Improvement \ud83d\udd0c Provider Ecosystem \u2022 OpenAI Integration\u2022 Anthropic Claude Support\u2022 ChromaDB Vector Database\u2022 Custom Provider Framework\u2022 Multi-Cloud Deployment\u2022 Hybrid Infrastructure Support \u2022 Provider Flexibility & Independence\u2022 Advanced Vector Search Capabilities\u2022 Custom Integration Development\u2022 Cloud-Agnostic Architecture\u2022 Flexible Deployment Options\u2022 Risk Mitigation Through Diversification \ud83d\udcaa Production Readiness \u2022 Automatic Retry Mechanisms\u2022 Asynchronous Processing Support\u2022 Environment Configuration Management\u2022 Type Safety & Validation\u2022 Health Check Endpoints\u2022 Graceful Degradation \u2022 Enhanced System Reliability\u2022 Improved Performance 
Characteristics\u2022 Simplified Configuration Management\u2022 Reduced Runtime Errors\u2022 Proactive Health Monitoring\u2022 Continuous Service Availability \ud83c\udfaf Enterprise Use Cases \u2022 Industry-Specific Agent Solutions\u2022 Custom Workflow Development\u2022 Regulatory Compliance Support\u2022 Extensible Framework Architecture\u2022 Multi-Tenant Support\u2022 Enterprise SLA Guarantees \u2022 Rapid Industry Deployment\u2022 Flexible Solution Architecture\u2022 Compliance-Ready Implementations\u2022 Future-Proof Technology Investment\u2022 Scalable Multi-Client Operations\u2022 Predictable Service Quality"},{"location":"swarms/features/#missing-a-feature","title":"\ud83d\ude80 Missing a Feature?","text":"Swarms is continuously evolving to meet enterprise needs. If you don't see a specific feature or capability that your organization requires:
"},{"location":"swarms/features/#report-missing-features","title":"\ud83d\udcdd Report Missing Features","text":"Create a GitHub Issue to request new features
Describe your use case and business requirements
Our team will evaluate and prioritize based on enterprise demand
Book a call with our enterprise team for personalized guidance
Discuss your specific multi-agent architecture requirements
Get expert recommendations for your implementation strategy
Explore custom enterprise solutions and integrations
Our team is committed to ensuring Swarms meets your enterprise multi-agent infrastructure needs. We welcome feedback and collaboration to build the most comprehensive platform for production-scale AI agent deployments.
"},{"location":"swarms/glossary/","title":"Glossary of Terms","text":"Agent: An LLM (Large Language Model) equipped with tools and memory, operating with a specific objective in a loop. An agent can perform tasks, interact with other agents, and utilize external tools and memory systems to achieve its goals.
Swarms: A group of more than two agents working together and communicating to accomplish a shared objective. Swarms enable complex, collaborative tasks that leverage the strengths of multiple agents.
Tool: A Python function that is converted into a function call, allowing agents to perform specific actions or access external resources. Tools enhance the capabilities of agents by providing specialized functionalities.
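The Tool definition above can be illustrated with plain Python: a typed, documented function plus a schema derived from its signature. The schema shape below is a simplified assumption for illustration, not the exact format any particular LLM provider or the Swarms framework uses:

```python
import inspect


def get_weather(city: str, unit: str = "celsius") -> str:
    """Return a weather summary for a city (stubbed for illustration)."""
    return f"Weather in {city}: 21 degrees {unit}"


def to_function_schema(fn) -> dict:
    """Derive a simplified function-call schema from a Python function."""
    sig = inspect.signature(fn)
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn),
        "parameters": {
            name: param.annotation.__name__
            for name, param in sig.parameters.items()
        },
    }


schema = to_function_schema(get_weather)
print(schema["name"])        # get_weather
print(schema["parameters"])  # {'city': 'str', 'unit': 'str'}
```

The agent receives the schema, decides when to call the tool, and the framework invokes the underlying function with the model-supplied arguments.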
Memory System: A system for managing information retrieval and storage, often implemented as a Retrieval-Augmented Generation (RAG) system or a memory vector database. Memory systems enable agents to recall previous interactions, store new information, and improve decision-making based on historical data.
LLM (Large Language Model): A type of AI model designed to understand and generate human-like text. LLMs, such as GPT-3 or GPT-4, are used as the core computational engine for agents.
System Prompt: A predefined prompt that sets the context and instructions for an agent's task. The system prompt guides the agent's behavior and response generation.
Max Loops: The maximum number of iterations an agent will perform to complete its task. This parameter helps control the extent of an agent's processing and ensures tasks are completed efficiently.
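The agent-in-a-loop pattern with a max_loops cap can be sketched as follows. This is not the Swarms Agent API; the function names, the "DONE" completion signal, and the stubbed LLM are all illustrative assumptions:

```python
def run_agent(task: str, llm, system_prompt: str, max_loops: int = 3) -> str:
    """Minimal agent loop: call the model repeatedly until it signals
    completion or max_loops is reached."""
    context = f"{system_prompt}\n\nTask: {task}"
    response = ""
    for _ in range(max_loops):
        response = llm(context)
        if "DONE" in response:  # illustrative completion signal
            break
        context += f"\n{response}"  # feed prior output back in
    return response


# A stub standing in for a real LLM call: finishes on its third turn.
def fake_llm(prompt: str) -> str:
    return "step complete" if prompt.count("step complete") < 2 else "DONE: finished"


print(run_agent("summarize report", fake_llm, "You are a helpful analyst."))
```

The cap matters: with max_loops=1 the same agent returns after a single model call, which is how the parameter bounds cost and latency.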
Dashboard: A user interface that provides real-time monitoring and control over the agents and their activities. Dashboards can display agent status, logs, and performance metrics.
Streaming On: A setting that enables agents to stream their output incrementally, providing real-time feedback as they process tasks. This feature is useful for monitoring progress and making adjustments on the fly.
Verbose: A setting that controls the level of detail in an agent's output and logging. When verbose mode is enabled, the agent provides more detailed information about its operations and decisions.
Multi-modal: The capability of an agent to process and integrate multiple types of data, such as text, images, and audio. Multi-modal agents can handle more complex tasks that require diverse inputs.
Autosave: A feature that automatically saves the agent's state and progress at regular intervals. Autosave helps prevent data loss and allows for recovery in case of interruptions.
Flow: The predefined sequence in which agents in a swarm interact and process tasks. The flow ensures that each agent's output is appropriately passed to the next agent, facilitating coordinated efforts.
Long Term Memory: A component of the memory system that retains information over extended periods, enabling agents to recall and utilize past interactions and experiences.
Output Schema: A structured format for the output generated by agents, often defined using data models like Pydantic's BaseModel. Output schemas ensure consistency and clarity in the information produced by agents.
By understanding these terms, you can effectively build and orchestrate agents and swarms, leveraging their capabilities to perform complex, collaborative tasks.
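As a concrete sketch of the Tool entry above, here is a framework-free illustration of how a plain Python function can be converted into a function-call schema. The helper `to_tool_schema` and its type mapping are assumptions for illustration only, not the Swarms implementation:

```python
import inspect
from typing import get_type_hints

def add_numbers(a: int, b: int) -> int:
    """Add two integers and return the sum."""
    return a + b

def to_tool_schema(fn):
    """Illustrative conversion of a function into a function-call schema
    (hypothetical helper, not the framework's actual converter)."""
    hints = get_type_hints(fn)
    hints.pop("return", None)
    type_map = {int: "integer", float: "number", str: "string", bool: "boolean"}
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn),
        "parameters": {
            "type": "object",
            "properties": {
                name: {"type": type_map.get(tp, "string")}
                for name, tp in hints.items()
            },
            "required": list(hints),
        },
    }

schema = to_tool_schema(add_numbers)
```

An agent handed such a schema can emit a matching function call, which the framework executes and feeds back into the agent's loop.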
"},{"location":"swarms/papers/","title":"awesome-multi-agent-papers","text":"An awesome list of multi-agent papers that show you various swarm architectures and much more. Get started
"},{"location":"swarms/products/","title":"Swarms Products","text":"Welcome to the official documentation for Swarms, the first multi-agent orchestration framework enabling seamless collaboration between LLMs and other tools to automate business operations at scale. Below, you\u2019ll find detailed descriptions of all Swarms products and services to help you get started and unlock the full potential of this groundbreaking platform.
Name Description Link Swarms Marketplace A platform to discover, share, and integrate prompts, agents, and tools. swarms.world Swarms Spreadsheet A tool for managing and scaling thousands of agent outputs, with results saved to a CSV file for easy analysis. swarms.world Drag n Drop Swarm An intuitive interface to visually create and manage swarms of agents through drag-and-drop functionality. swarms.world Swarms API An API enabling seamless integration of swarms of agents into your applications and workflows. swarms.world Wallet API A secure API for managing transactions and interactions within the Swarms ecosystem. Coming Soon Swarm Exchange A marketplace for buying and selling prompts, agents, and tools within the Swarms ecosystem. Coming Soon"},{"location":"swarms/products/#swarms-marketplace","title":"Swarms Marketplace","text":"Website: swarms.world
The Swarms Marketplace is your one-stop destination for discovering, adding, and managing:
Prompts: Access and share production-ready prompts for LLMs.
Agents: Browse pre-built agents tailored for tasks in marketing, finance, programming, and more.
Commenting System: Share feedback and insights with the Swarms community.
Coming Soon: Buy and sell prompts, agents, and tools directly within the marketplace.
Website: swarms.world
The Swarms Spreadsheet is a powerful tool for managing outputs from thousands of agents efficiently. Ideal for businesses needing scalable solutions, it provides:
"},{"location":"swarms/products/#key-features_1","title":"Key Features:","text":"Batch Task Execution: Assign tasks to multiple agents simultaneously.
CSV Integration: Automatically save agent outputs to CSV files for easy analysis.
Customizable Agents: Upload single or multiple agents and run repeat tasks with ease.
Marketing: Generate and analyze campaign ideas at scale.
Finance: Process financial models and scenarios quickly.
Operations: Automate repetitive tasks across multiple domains.
Website: swarms.world
The Drag-n-Drop Swarm enables non-technical users to create and deploy agent workflows with a simple drag-and-drop interface. It\u2019s perfect for:
"},{"location":"swarms/products/#key-features_2","title":"Key Features:","text":"Visual Workflow Builder: Design agent interactions without writing code.
Pre-Built Templates: Start quickly with ready-made workflows for common tasks.
Intuitive Interface: Drag, drop, and connect agents to create robust automation pipelines.
Website: swarms.world
The Swarms API provides developers with the ability to:
"},{"location":"swarms/products/#key-features_3","title":"Key Features:","text":"Agent Management: Programmatically create, update, and delete agents.
Task Orchestration: Dynamically assign tasks to agents and monitor their progress.
Custom Integration: Seamlessly integrate Swarms functionality into existing applications and workflows.
The Wallet API enables secure and efficient transactions within the Swarms ecosystem, allowing users to:
"},{"location":"swarms/products/#key-features_4","title":"Key Features:","text":"Seamless Transactions: Manage payments for prompts, agents, and tools.
Secure Wallets: Store and transfer funds safely within the Swarms platform.
Transaction History: Access detailed logs of all wallet activity.
The Swarm Exchange will revolutionize the way agents and tools are traded in the Swarms ecosystem. It will feature:
"},{"location":"swarms/products/#key-features_5","title":"Key Features:","text":"Decentralized Marketplace: Trade agents and tools securely.
Dynamic Pricing: Leverage demand-based pricing for assets.
Global Access: Participate in the exchange from anywhere.
Stay tuned for updates on the Swarm Exchange launch.
"},{"location":"swarms/products/#additional-resources","title":"Additional Resources","text":"GitHub Repository: Swarms Framework
Documentation: Swarms Documentation
Support: Contact us via our Discord Community.
Experience the future of multi-agent collaboration with Swarms. Start building your agentic workflows today!
"},{"location":"swarms/support/","title":"Technical Support","text":"Getting Help with the Swarms Multi-Agent Framework
"},{"location":"swarms/support/#getting-started-with-support","title":"Getting Started with Support","text":"The Swarms team is committed to providing exceptional technical support to help you build production-grade multi-agent systems. Whether you're experiencing bugs, need implementation guidance, or want to request new features, we have multiple channels to ensure you get the help you need quickly and efficiently.
"},{"location":"swarms/support/#support-channels-overview","title":"Support Channels Overview","text":"Support Type Best For Response Time Channel Bug Reports Code issues, errors, unexpected behavior < 24 hours GitHub Issues Feature Requests New capabilities, enhancements < 48 hours Email kye@swarms.world Private Issues Security concerns, enterprise consulting < 4 hours Book Support Call Real-time Help Quick questions, community discussions Immediate Discord Community Documentation Usage guides, examples, tutorials Self-service docs.swarms.world"},{"location":"swarms/support/#reporting-bugs-technical-issues","title":"Reporting Bugs & Technical Issues","text":""},{"location":"swarms/support/#when-to-use-github-issues","title":"When to Use GitHub Issues","text":"Use GitHub Issues for:
Code bugs and errors
Installation problems
Documentation issues
Performance problems
API inconsistencies
Public technical discussions
Visit our Issues page: https://github.com/kyegomez/swarms/issues
Search existing issues to avoid duplicates
Click \"New Issue\" and select the appropriate template
Include the following information:
A clear description of what the bug is.
"},{"location":"swarms/support/#environment","title":"Environment","text":"Swarms version: [e.g., 5.9.2]
Python version: [e.g., 3.9.0]
Operating System: [e.g., Ubuntu 20.04, macOS 14, Windows 11]
Model provider: [e.g., OpenAI, Anthropic, Groq]
What you expected to happen.
"},{"location":"swarms/support/#actual-behavior","title":"Actual Behavior","text":"What actually happened.
"},{"location":"swarms/support/#code-sample","title":"Code Sample","text":"# Minimal code that reproduces the issue\nfrom swarms import Agent\n\nagent = Agent(model_name=\"gpt-4o-mini\")\nresult = agent.run(\"Your task here\")\n
"},{"location":"swarms/support/#error-messages","title":"Error Messages","text":"Paste any error messages or stack traces here
"},{"location":"swarms/support/#additional-context","title":"Additional Context","text":"Any other context, screenshots, or logs that might help.
"},{"location":"swarms/support/#issue-templates-available","title":"Issue Templates Available","text":"Template Use Case Bug Report Standard bug reporting template Documentation Issues with docs, guides, examples Feature Request Suggesting new functionality Question General questions about usage Enterprise Enterprise-specific issues"},{"location":"swarms/support/#private-enterprise-support","title":"Private & Enterprise Support","text":""},{"location":"swarms/support/#when-to-book-a-private-support-call","title":"When to Book a Private Support Call","text":"Book a private consultation for:
Security vulnerabilities or concerns
Enterprise deployment guidance
Custom implementation consulting
Architecture review sessions
Performance optimization
Integration troubleshooting
Strategic technical planning
Visit our booking page: https://cal.com/swarms/swarms-technical-support?overlayCalendar=true
Select an available time that works for your timezone
Provide details about your issue or requirements
Prepare for the call:
Have your code/environment ready
Prepare specific questions
Include relevant error messages or logs
Share your use case and goals
Direct access to Swarms core team members
Screen sharing for live debugging
Custom solutions tailored to your needs
Follow-up resources and documentation
Priority support for implementation
Get instant help from our active community of developers and core team members.
Discord Benefits:
24/7 availability - Someone is always online
Instant responses - Get help in real-time
Community wisdom - Learn from other developers
Specialized channels - Find the right help quickly
Latest updates - Stay informed about new releases
Join here: https://discord.gg/jM3Z6M9uMq
Read the rules and introduce yourself in #general
Use the right channel for your question type
Provide context when asking questions:
Python version: 3.9\nSwarms version: 5.9.2\nOS: macOS 14\nQuestion: How do I implement custom tools with MCP?\nWhat I tried: [paste your code]\nError: [paste error message]\n
Be patient and respectful - our community loves helping!
Contact us directly for:
Major new framework capabilities
Architecture enhancements
New model provider integrations
Enterprise-specific features
Analytics and monitoring tools
UI/UX improvements
Email: kye@swarms.world
Subject Format: [FEATURE REQUEST] Brief description
Include in your email:
## Feature Description\nClear description of the proposed feature\n\n## Use Case\nWhy this feature is needed and how it would be used\n\n## Business Impact\nHow this would benefit the Swarms ecosystem\n\n## Technical Requirements\nAny specific technical considerations\n\n## Priority Level\n- Low: Nice to have\n\n- Medium: Would significantly improve workflow\n\n- High: Critical for adoption/production use\n\n\n## Alternatives Considered\nOther solutions you've explored\n\n## Implementation Ideas\nAny thoughts on how this could be implemented\n
"},{"location":"swarms/support/#feature-request-process","title":"Feature Request Process","text":"Before reaching out for support, check these resources:
"},{"location":"swarms/support/#documentation","title":"Documentation","text":"Complete Documentation - Comprehensive guides and API reference
Installation Guide - Setup and configuration
Quick Start - Get up and running fast
Examples Gallery - Real-world use cases
pip install -U swarms
Memory issues: Review the Performance Guide. Agent not working: Check the Basic Agent Example"},{"location":"swarms/support/#video-tutorials","title":"Video Tutorials","text":"YouTube Channel - Step-by-step tutorials
Live Coding Sessions - Real-world implementations
Before requesting support, please:
Check the documentation for existing solutions
Search GitHub issues for similar problems
Update to latest version: pip install -U swarms
Verify environment setup and API keys
Test with minimal code to isolate the issue
Gather error messages and relevant logs
Note your environment (OS, Python version, Swarms version)
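The environment details in the checklist above can be gathered with a few lines of standard-library Python; a convenience sketch, where the version lookup falls back gracefully when Swarms isn't installed:

```python
import platform
from importlib.metadata import PackageNotFoundError, version

# Gather the environment details a support request asks for
try:
    swarms_version = version("swarms")
except PackageNotFoundError:
    swarms_version = "not installed"

report = {
    "Python version": platform.python_version(),
    "Operating System": f"{platform.system()} {platform.release()}",
    "Swarms version": swarms_version,
}
summary = "\n".join(f"{key}: {value}" for key, value in report.items())
print(summary)
```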
Help improve support for everyone:
Answer questions in Discord or GitHub
Improve documentation with your learnings
Share examples of successful implementations
Report bugs you discover
Suggest improvements to this support process
Your contributions make Swarms better for everyone.
"},{"location":"swarms/support/#support-channel-summary","title":"Support Channel Summary","text":"Urgency Best Channel Emergency Book Immediate Call Urgent Discord #technical-support Standard GitHub Issues Feature Ideas Email kye@swarms.worldWe're here to help you succeed with Swarms.
"},{"location":"swarms/agents/","title":"Agents Introduction","text":"The Agent class is the core component of the Swarms framework, designed to create intelligent, autonomous AI agents capable of handling complex tasks through multi-modal processing, tool integration, and structured outputs. This comprehensive guide covers all aspects of the Agent class, from basic setup to advanced features.
"},{"location":"swarms/agents/#table-of-contents","title":"Table of Contents","text":"Python 3.7+
OpenAI API key (for GPT models)
Anthropic API key (for Claude models)
pip3 install -U swarms\n
"},{"location":"swarms/agents/#environment-setup","title":"Environment Setup","text":"Create a .env
file with your API keys:
OPENAI_API_KEY=\"your-openai-api-key\"\nANTHROPIC_API_KEY=\"your-anthropic-api-key\"\nWORKSPACE_DIR=\"agent_workspace\"\n
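At runtime these variables can be read with `os.getenv`; a minimal sketch (the fallback mirrors the `agent_workspace` default shown above, and Swarms itself may load `.env` files through other means such as python-dotenv):

```python
import os

# Read the keys configured in .env; warn rather than crash when one is missing
openai_key = os.getenv("OPENAI_API_KEY")
workspace_dir = os.getenv("WORKSPACE_DIR", "agent_workspace")

if openai_key is None:
    print("Warning: OPENAI_API_KEY is not set; OpenAI-backed agents will fail.")
```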
"},{"location":"swarms/agents/#basic-agent-configuration","title":"Basic Agent Configuration","text":""},{"location":"swarms/agents/#core-agent-structure","title":"Core Agent Structure","text":"The Agent class provides a comprehensive set of parameters for customization:
from swarms import Agent\n\n# Basic agent initialization\nagent = Agent(\n agent_name=\"MyAgent\",\n agent_description=\"A specialized AI agent for specific tasks\",\n system_prompt=\"You are a helpful assistant...\",\n model_name=\"gpt-4o-mini\",\n max_loops=1,\n max_tokens=4096,\n temperature=0.7,\n output_type=\"str\",\n safety_prompt_on=True\n)\n
"},{"location":"swarms/agents/#key-configuration-parameters","title":"Key Configuration Parameters","text":"Parameter Type Description Default agent_name
str Unique identifier for the agent Required agent_description
str Detailed description of capabilities Required system_prompt
str Core instructions defining behavior Required model_name
str AI model to use \"gpt-4o-mini\" max_loops
int Maximum execution loops 1 max_tokens
int Maximum response tokens 4096 temperature
float Response creativity (0-1) 0.7 output_type
str Response format type \"str\" multi_modal
bool Enable image processing False safety_prompt_on
bool Enable safety checks True"},{"location":"swarms/agents/#simple-example","title":"Simple Example","text":"from swarms import Agent\n\n# Create a basic financial advisor agent\nfinancial_agent = Agent(\n agent_name=\"Financial-Advisor\",\n agent_description=\"Personal finance and investment advisor\",\n system_prompt=\"\"\"You are an expert financial advisor with deep knowledge of:\n - Investment strategies and portfolio management\n - Risk assessment and mitigation\n - Market analysis and trends\n - Financial planning and budgeting\n\n Provide clear, actionable advice while considering risk tolerance.\"\"\",\n model_name=\"gpt-4o-mini\",\n max_loops=1,\n temperature=0.3,\n output_type=\"str\"\n)\n\n# Run the agent\nresponse = financial_agent.run(\"What are the best investment strategies for a 30-year-old?\")\nprint(response)\n
"},{"location":"swarms/agents/#multi-modal-capabilities","title":"Multi-Modal Capabilities","text":""},{"location":"swarms/agents/#image-processing","title":"Image Processing","text":"The Agent class supports comprehensive image analysis through vision-enabled models:
from swarms import Agent\n\n# Create a vision-enabled agent\nvision_agent = Agent(\n agent_name=\"Vision-Analyst\",\n agent_description=\"Advanced image analysis and quality control agent\",\n system_prompt=\"\"\"You are an expert image analyst capable of:\n - Detailed visual inspection and quality assessment\n - Object detection and classification\n - Scene understanding and context analysis\n - Defect identification and reporting\n\n Provide comprehensive analysis with specific observations.\"\"\",\n model_name=\"gpt-4o-mini\", # Vision-enabled model\n multi_modal=True, # Enable multi-modal processing\n max_loops=1,\n output_type=\"str\"\n)\n\n# Analyze a single image\nresponse = vision_agent.run(\n task=\"Analyze this image for quality control purposes\",\n img=\"path/to/image.jpg\"\n)\n\n# Process multiple images\nresponse = vision_agent.run(\n task=\"Compare these images and identify differences\",\n imgs=[\"image1.jpg\", \"image2.jpg\", \"image3.jpg\"],\n summarize_multiple_images=True\n)\n
"},{"location":"swarms/agents/#supported-image-formats","title":"Supported Image Formats","text":"Format Description Max Size JPEG/JPG Standard compressed format 20MB PNG Lossless with transparency 20MB GIF Animated (first frame only) 20MB WebP Modern efficient format 20MB"},{"location":"swarms/agents/#quality-control-example","title":"Quality Control Example","text":"from swarms import Agent\nfrom swarms.prompts.logistics import Quality_Control_Agent_Prompt\n\ndef security_analysis(danger_level: str) -> str:\n \"\"\"Analyze security danger level and return appropriate response.\"\"\"\n danger_responses = {\n \"low\": \"No immediate danger detected\",\n \"medium\": \"Moderate security concern identified\",\n \"high\": \"Critical security threat detected\",\n None: \"No danger level assessment available\"\n }\n return danger_responses.get(danger_level, \"Unknown danger level\")\n\n# Quality control agent with tool integration\nquality_agent = Agent(\n agent_name=\"Quality-Control-Agent\",\n agent_description=\"Advanced quality control and security analysis agent\",\n system_prompt=f\"\"\"\n {Quality_Control_Agent_Prompt}\n\n You have access to security analysis tools. When analyzing images:\n 1. Identify potential safety hazards\n 2. Assess quality standards compliance\n 3. Determine appropriate danger levels (low, medium, high)\n 4. Use the security_analysis function for threat assessment\n \"\"\",\n model_name=\"gpt-4o-mini\",\n multi_modal=True,\n max_loops=1,\n tools=[security_analysis]\n)\n\n# Analyze factory image\nresponse = quality_agent.run(\n task=\"Analyze this factory image for safety and quality issues\",\n img=\"factory_floor.jpg\"\n)\n
"},{"location":"swarms/agents/#tool-integration","title":"Tool Integration","text":""},{"location":"swarms/agents/#creating-custom-tools","title":"Creating Custom Tools","text":"Tools are Python functions that extend your agent's capabilities:
import json\nimport requests\nfrom typing import Optional, Dict, Any\n\ndef get_weather_data(city: str, country: Optional[str] = None) -> str:\n \"\"\"\n Get current weather data for a specified city.\n\n Args:\n city (str): The city name\n country (Optional[str]): Country code (e.g., 'US', 'UK')\n\n Returns:\n str: JSON formatted weather data\n\n Example:\n >>> weather = get_weather_data(\"San Francisco\", \"US\")\n >>> print(weather)\n {\"temperature\": 18, \"condition\": \"partly cloudy\", ...}\n \"\"\"\n try:\n # API call logic here\n weather_data = {\n \"city\": city,\n \"country\": country,\n \"temperature\": 18,\n \"condition\": \"partly cloudy\",\n \"humidity\": 65,\n \"wind_speed\": 12\n }\n return json.dumps(weather_data, indent=2)\n\n except Exception as e:\n return json.dumps({\"error\": f\"Weather API error: {str(e)}\"})\n\ndef calculate_portfolio_metrics(prices: list, weights: list) -> str:\n \"\"\"\n Calculate portfolio performance metrics.\n\n Args:\n prices (list): List of asset prices\n weights (list): List of portfolio weights\n\n Returns:\n str: JSON formatted portfolio metrics\n \"\"\"\n try:\n # Portfolio calculation logic\n portfolio_value = sum(p * w for p, w in zip(prices, weights))\n metrics = {\n \"total_value\": portfolio_value,\n \"weighted_average\": portfolio_value / sum(weights),\n \"asset_count\": len(prices)\n }\n return json.dumps(metrics, indent=2)\n\n except Exception as e:\n return json.dumps({\"error\": f\"Calculation error: {str(e)}\"})\n
"},{"location":"swarms/agents/#tool-integration-example","title":"Tool Integration Example","text":"from swarms import Agent\n\n# Create agent with custom tools\nmulti_tool_agent = Agent(\n agent_name=\"Multi-Tool-Assistant\",\n agent_description=\"Versatile assistant with weather and financial tools\",\n system_prompt=\"\"\"You are a versatile assistant with access to:\n - Weather data retrieval for any city\n - Portfolio analysis and financial calculations\n\n Use these tools to provide comprehensive assistance.\"\"\",\n model_name=\"gpt-4o-mini\",\n max_loops=1,\n tools=[get_weather_data, calculate_portfolio_metrics]\n)\n\n# Use the agent with tools\nresponse = multi_tool_agent.run(\n \"What's the weather in New York and calculate metrics for a portfolio with prices [100, 150, 200] and weights [0.3, 0.4, 0.3]?\"\n)\n
"},{"location":"swarms/agents/#api-integration-tools","title":"API Integration Tools","text":"import requests\nimport json\nfrom typing import List\n\ndef get_cryptocurrency_price(coin_id: str, vs_currency: str = \"usd\") -> str:\n \"\"\"Get current cryptocurrency price from CoinGecko API.\"\"\"\n try:\n url = \"https://api.coingecko.com/api/v3/simple/price\"\n params = {\n \"ids\": coin_id,\n \"vs_currencies\": vs_currency,\n \"include_market_cap\": True,\n \"include_24hr_vol\": True,\n \"include_24hr_change\": True\n }\n\n response = requests.get(url, params=params, timeout=10)\n response.raise_for_status()\n return json.dumps(response.json(), indent=2)\n\n except Exception as e:\n return json.dumps({\"error\": f\"API error: {str(e)}\"})\n\ndef get_top_cryptocurrencies(limit: int = 10) -> str:\n \"\"\"Get top cryptocurrencies by market cap.\"\"\"\n try:\n url = \"https://api.coingecko.com/api/v3/coins/markets\"\n params = {\n \"vs_currency\": \"usd\",\n \"order\": \"market_cap_desc\",\n \"per_page\": limit,\n \"page\": 1\n }\n\n response = requests.get(url, params=params, timeout=10)\n response.raise_for_status()\n return json.dumps(response.json(), indent=2)\n\n except Exception as e:\n return json.dumps({\"error\": f\"API error: {str(e)}\"})\n\n# Crypto analysis agent\ncrypto_agent = Agent(\n agent_name=\"Crypto-Analysis-Agent\",\n agent_description=\"Cryptocurrency market analysis and price tracking agent\",\n system_prompt=\"\"\"You are a cryptocurrency analysis expert with access to:\n - Real-time price data for any cryptocurrency\n - Market capitalization rankings\n - Trading volume and price change data\n\n Provide insightful market analysis and investment guidance.\"\"\",\n model_name=\"gpt-4o-mini\",\n max_loops=1,\n tools=[get_cryptocurrency_price, get_top_cryptocurrencies]\n)\n\n# Analyze crypto market\nresponse = crypto_agent.run(\"Analyze the current Bitcoin price and show me the top 5 cryptocurrencies\")\n
"},{"location":"swarms/agents/#structured-outputs","title":"Structured Outputs","text":""},{"location":"swarms/agents/#function-schema-definition","title":"Function Schema Definition","text":"Define structured outputs using OpenAI's function calling format:
from swarms import Agent\n\n# Define function schemas for structured outputs\nstock_analysis_schema = {\n \"type\": \"function\",\n \"function\": {\n \"name\": \"analyze_stock_performance\",\n \"description\": \"Analyze stock performance with detailed metrics\",\n \"parameters\": {\n \"type\": \"object\",\n \"properties\": {\n \"ticker\": {\n \"type\": \"string\",\n \"description\": \"Stock ticker symbol (e.g., AAPL, GOOGL)\"\n },\n \"analysis_type\": {\n \"type\": \"string\",\n \"enum\": [\"technical\", \"fundamental\", \"comprehensive\"],\n \"description\": \"Type of analysis to perform\"\n },\n \"time_period\": {\n \"type\": \"string\",\n \"enum\": [\"1d\", \"1w\", \"1m\", \"3m\", \"1y\"],\n \"description\": \"Time period for analysis\"\n },\n \"metrics\": {\n \"type\": \"array\",\n \"items\": {\n \"type\": \"string\",\n \"enum\": [\"price\", \"volume\", \"pe_ratio\", \"market_cap\", \"volatility\"]\n },\n \"description\": \"Metrics to include in analysis\"\n }\n },\n \"required\": [\"ticker\", \"analysis_type\"]\n }\n }\n}\n\nportfolio_optimization_schema = {\n \"type\": \"function\",\n \"function\": {\n \"name\": \"optimize_portfolio\",\n \"description\": \"Optimize portfolio allocation based on risk and return\",\n \"parameters\": {\n \"type\": \"object\",\n \"properties\": {\n \"assets\": {\n \"type\": \"array\",\n \"items\": {\n \"type\": \"object\",\n \"properties\": {\n \"symbol\": {\"type\": \"string\"},\n \"current_weight\": {\"type\": \"number\"},\n \"expected_return\": {\"type\": \"number\"},\n \"risk_level\": {\"type\": \"string\", \"enum\": [\"low\", \"medium\", \"high\"]}\n },\n \"required\": [\"symbol\", \"current_weight\"]\n }\n },\n \"risk_tolerance\": {\n \"type\": \"string\",\n \"enum\": [\"conservative\", \"moderate\", \"aggressive\"]\n },\n \"investment_horizon\": {\n \"type\": \"integer\",\n \"minimum\": 1,\n \"maximum\": 30,\n \"description\": \"Investment time horizon in years\"\n }\n },\n \"required\": [\"assets\", \"risk_tolerance\"]\n }\n }\n}\n\n# Create agent with structured outputs\nstructured_agent = Agent(\n agent_name=\"Structured-Financial-Agent\",\n agent_description=\"Financial analysis agent with structured output capabilities\",\n system_prompt=\"\"\"You are a financial analysis expert that provides structured outputs.\n Use the provided function schemas to format your responses consistently.\"\"\",\n model_name=\"gpt-4o-mini\",\n max_loops=1,\n tools_list_dictionary=[stock_analysis_schema, portfolio_optimization_schema]\n)\n\n# Generate structured analysis\nresponse = structured_agent.run(\n \"Analyze Apple stock (AAPL) performance with comprehensive analysis for the last 3 months\"\n)\n
"},{"location":"swarms/agents/#advanced-features","title":"Advanced Features","text":""},{"location":"swarms/agents/#dynamic-temperature-control","title":"Dynamic Temperature Control","text":"from swarms import Agent\n\n# Agent with dynamic temperature adjustment\nadaptive_agent = Agent(\n agent_name=\"Adaptive-Response-Agent\",\n agent_description=\"Agent that adjusts response creativity based on context\",\n system_prompt=\"You are an adaptive AI that adjusts your response style based on the task complexity.\",\n model_name=\"gpt-4o-mini\",\n dynamic_temperature_enabled=True, # Enable adaptive temperature\n max_loops=1,\n output_type=\"str\"\n)\n
"},{"location":"swarms/agents/#output-type-configurations","title":"Output Type Configurations","text":"# Different output type examples\njson_agent = Agent(\n agent_name=\"JSON-Agent\",\n system_prompt=\"Always respond in valid JSON format\",\n output_type=\"json\"\n)\n\nstreaming_agent = Agent(\n agent_name=\"Streaming-Agent\", \n system_prompt=\"Provide detailed streaming responses\",\n output_type=\"str-all-except-first\"\n)\n\nfinal_only_agent = Agent(\n agent_name=\"Final-Only-Agent\",\n system_prompt=\"Provide only the final result\",\n output_type=\"final\"\n)\n
"},{"location":"swarms/agents/#safety-and-content-filtering","title":"Safety and Content Filtering","text":"from swarms import Agent\n\n# Agent with enhanced safety features\nsafe_agent = Agent(\n agent_name=\"Safe-Agent\",\n agent_description=\"Agent with comprehensive safety measures\",\n system_prompt=\"You are a helpful, harmless, and honest AI assistant.\",\n model_name=\"gpt-4o-mini\",\n safety_prompt_on=True, # Enable safety prompts\n max_loops=1,\n temperature=0.3 # Lower temperature for more consistent, safe responses\n)\n
"},{"location":"swarms/agents/#best-practices","title":"Best Practices","text":""},{"location":"swarms/agents/#error-handling-and-robustness","title":"Error Handling and Robustness","text":"import logging\nfrom swarms import Agent\n\n# Configure logging\nlogging.basicConfig(level=logging.INFO)\nlogger = logging.getLogger(__name__)\n\ndef robust_agent_execution(agent, task, max_retries=3):\n \"\"\"Execute agent with retry logic and error handling.\"\"\"\n for attempt in range(max_retries):\n try:\n response = agent.run(task)\n logger.info(f\"Agent execution successful on attempt {attempt + 1}\")\n return response\n except Exception as e:\n logger.error(f\"Attempt {attempt + 1} failed: {str(e)}\")\n if attempt == max_retries - 1:\n raise\n time.sleep(2 ** attempt) # Exponential backoff\n\n return None\n\n# Example usage\ntry:\n result = robust_agent_execution(agent, \"Analyze market trends\")\n print(result)\nexcept Exception as e:\n print(f\"Agent execution failed: {e}\")\n
"},{"location":"swarms/agents/#performance-optimization","title":"Performance Optimization","text":"from swarms import Agent\nimport time\n\n# Optimized agent configuration\noptimized_agent = Agent(\n agent_name=\"Optimized-Agent\",\n agent_description=\"Performance-optimized agent configuration\",\n system_prompt=\"You are an efficient AI assistant optimized for performance.\",\n model_name=\"gpt-4o-mini\", # Faster model\n max_loops=1, # Minimize loops\n max_tokens=2048, # Reasonable token limit\n temperature=0.5, # Balanced creativity\n output_type=\"str\"\n)\n\n# Batch processing example\ndef process_tasks_batch(agent, tasks, batch_size=5):\n \"\"\"Process multiple tasks efficiently.\"\"\"\n results = []\n for i in range(0, len(tasks), batch_size):\n batch = tasks[i:i + batch_size]\n batch_results = []\n\n for task in batch:\n start_time = time.time()\n result = agent.run(task)\n execution_time = time.time() - start_time\n\n batch_results.append({\n \"task\": task,\n \"result\": result,\n \"execution_time\": execution_time\n })\n\n results.extend(batch_results)\n time.sleep(1) # Rate limiting\n\n return results\n
"},{"location":"swarms/agents/#complete-examples","title":"Complete Examples","text":""},{"location":"swarms/agents/#multi-modal-quality-control-system","title":"Multi-Modal Quality Control System","text":"from swarms import Agent\nfrom swarms.prompts.logistics import Quality_Control_Agent_Prompt\n\ndef security_analysis(danger_level: str) -> str:\n \"\"\"Analyze security danger level and return appropriate response.\"\"\"\n responses = {\n \"low\": \"\u2705 No immediate danger detected - Safe to proceed\",\n \"medium\": \"\u26a0\ufe0f Moderate security concern - Requires attention\",\n \"high\": \"\ud83d\udea8 Critical security threat - Immediate action required\",\n None: \"\u2753 No danger level assessment available\"\n }\n return responses.get(danger_level, \"Unknown danger level\")\n\ndef quality_assessment(quality_score: int) -> str:\n \"\"\"Assess quality based on numerical score (1-10).\"\"\"\n if quality_score >= 8:\n return \"\u2705 Excellent quality - Meets all standards\"\n elif quality_score >= 6:\n return \"\u26a0\ufe0f Good quality - Minor improvements needed\"\n elif quality_score >= 4:\n return \"\u274c Poor quality - Significant issues identified\"\n else:\n return \"\ud83d\udea8 Critical quality failure - Immediate attention required\"\n\n# Advanced quality control agent\nquality_control_system = Agent(\n agent_name=\"Advanced-Quality-Control-System\",\n agent_description=\"Comprehensive quality control and security analysis system\",\n system_prompt=f\"\"\"\n {Quality_Control_Agent_Prompt}\n\n You are an advanced quality control system with the following capabilities:\n\n 1. Visual Inspection: Analyze images for defects, compliance, and safety\n 2. Security Assessment: Identify potential security threats and hazards\n 3. Quality Scoring: Provide numerical quality ratings (1-10 scale)\n 4. 
Detailed Reporting: Generate comprehensive analysis reports\n\n When analyzing images:\n - Identify specific defects or issues\n - Assess compliance with safety standards\n - Determine appropriate danger levels (low, medium, high)\n - Provide quality scores and recommendations\n - Use available tools for detailed analysis\n\n Always provide specific, actionable feedback.\n \"\"\",\n model_name=\"gpt-4o-mini\",\n multi_modal=True,\n max_loops=1,\n tools=[security_analysis, quality_assessment],\n output_type=\"str\"\n)\n\n# Process factory images\nfactory_images = [\"factory_floor.jpg\", \"assembly_line.jpg\", \"safety_equipment.jpg\"]\n\nfor image in factory_images:\n print(f\"\\n--- Analyzing {image} ---\")\n response = quality_control_system.run(\n task=\"Perform comprehensive quality control analysis of this image. Assess safety, quality, and provide specific recommendations.\",\n img=image\n )\n print(response)\n
"},{"location":"swarms/agents/#advanced-financial-analysis-agent","title":"Advanced Financial Analysis Agent","text":"from swarms import Agent\nimport json\nimport requests\n\ndef get_market_data(symbol: str, period: str = \"1y\") -> str:\n \"\"\"Get comprehensive market data for a symbol.\"\"\"\n # Simulated market data (replace with real API)\n market_data = {\n \"symbol\": symbol,\n \"current_price\": 150.25,\n \"change_percent\": 2.5,\n \"volume\": 1000000,\n \"market_cap\": 2500000000,\n \"pe_ratio\": 25.5,\n \"dividend_yield\": 1.8,\n \"52_week_high\": 180.50,\n \"52_week_low\": 120.30\n }\n return json.dumps(market_data, indent=2)\n\ndef calculate_risk_metrics(prices: list, benchmark_prices: list) -> str:\n \"\"\"Calculate risk metrics for a portfolio.\"\"\"\n import numpy as np\n\n try:\n returns = np.diff(prices) / prices[:-1]\n benchmark_returns = np.diff(benchmark_prices) / benchmark_prices[:-1]\n\n volatility = np.std(returns) * np.sqrt(252) # Annualized\n sharpe_ratio = (np.mean(returns) / np.std(returns)) * np.sqrt(252)\n max_drawdown = np.max(np.maximum.accumulate(prices) - prices) / np.max(prices)\n\n beta = np.cov(returns, benchmark_returns)[0, 1] / np.var(benchmark_returns)\n\n risk_metrics = {\n \"volatility\": float(volatility),\n \"sharpe_ratio\": float(sharpe_ratio),\n \"max_drawdown\": float(max_drawdown),\n \"beta\": float(beta)\n }\n\n return json.dumps(risk_metrics, indent=2)\n\n except Exception as e:\n return json.dumps({\"error\": f\"Risk calculation error: {str(e)}\"})\n\n# Financial analysis schemas\nfinancial_analysis_schema = {\n \"type\": \"function\",\n \"function\": {\n \"name\": \"comprehensive_financial_analysis\",\n \"description\": \"Perform comprehensive financial analysis with structured output\",\n \"parameters\": {\n \"type\": \"object\",\n \"properties\": {\n \"analysis_summary\": {\n \"type\": \"object\",\n \"properties\": {\n \"overall_rating\": {\"type\": \"string\", \"enum\": [\"buy\", \"hold\", \"sell\"]},\n 
\"confidence_level\": {\"type\": \"number\", \"minimum\": 0, \"maximum\": 100},\n \"key_strengths\": {\"type\": \"array\", \"items\": {\"type\": \"string\"}},\n \"key_concerns\": {\"type\": \"array\", \"items\": {\"type\": \"string\"}},\n \"price_target\": {\"type\": \"number\"},\n \"risk_level\": {\"type\": \"string\", \"enum\": [\"low\", \"medium\", \"high\"]}\n }\n },\n \"technical_analysis\": {\n \"type\": \"object\",\n \"properties\": {\n \"trend_direction\": {\"type\": \"string\", \"enum\": [\"bullish\", \"bearish\", \"neutral\"]},\n \"support_levels\": {\"type\": \"array\", \"items\": {\"type\": \"number\"}},\n \"resistance_levels\": {\"type\": \"array\", \"items\": {\"type\": \"number\"}},\n \"momentum_indicators\": {\"type\": \"array\", \"items\": {\"type\": \"string\"}}\n }\n }\n },\n \"required\": [\"analysis_summary\", \"technical_analysis\"]\n }\n }\n}\n\n# Advanced financial agent\nfinancial_analyst = Agent(\n agent_name=\"Advanced-Financial-Analyst\",\n agent_description=\"Comprehensive financial analysis and investment advisory agent\",\n system_prompt=\"\"\"You are an expert financial analyst with advanced capabilities in:\n\n - Fundamental analysis and valuation\n - Technical analysis and chart patterns\n - Risk assessment and portfolio optimization\n - Market sentiment analysis\n - Economic indicator interpretation\n\n Your analysis should be:\n - Data-driven and objective\n - Risk-aware and practical\n - Clearly structured and actionable\n - Compliant with financial regulations\n\n Use available tools to gather market data and calculate risk metrics.\n Provide structured outputs using the defined schemas.\"\"\",\n model_name=\"gpt-4o-mini\",\n max_loops=1,\n tools=[get_market_data, calculate_risk_metrics],\n tools_list_dictionary=[financial_analysis_schema],\n output_type=\"json\"\n)\n\n# Comprehensive financial analysis\nanalysis_response = financial_analyst.run(\n \"Perform a comprehensive analysis of Apple Inc. 
(AAPL) including technical and fundamental analysis with structured recommendations\"\n)\n\nprint(json.dumps(json.loads(analysis_response), indent=2))\n
"},{"location":"swarms/agents/#multi-agent-collaboration-system","title":"Multi-Agent Collaboration System","text":"from swarms import Agent\nimport json\n\n# Specialized agents for different tasks\nresearch_agent = Agent(\n agent_name=\"Research-Specialist\",\n agent_description=\"Market research and data analysis specialist\",\n system_prompt=\"You are a market research expert specializing in data collection and analysis.\",\n model_name=\"gpt-4o-mini\",\n max_loops=1,\n temperature=0.3\n)\n\nstrategy_agent = Agent(\n agent_name=\"Strategy-Advisor\", \n agent_description=\"Strategic planning and recommendation specialist\",\n system_prompt=\"You are a strategic advisor providing high-level recommendations based on research.\",\n model_name=\"gpt-4o-mini\",\n max_loops=1,\n temperature=0.5\n)\n\nexecution_agent = Agent(\n agent_name=\"Execution-Planner\",\n agent_description=\"Implementation and execution planning specialist\", \n system_prompt=\"You are an execution expert creating detailed implementation plans.\",\n model_name=\"gpt-4o-mini\",\n max_loops=1,\n temperature=0.4\n)\n\ndef collaborative_analysis(topic: str):\n \"\"\"Perform collaborative analysis using multiple specialized agents.\"\"\"\n\n # Step 1: Research Phase\n research_task = f\"Conduct comprehensive research on {topic}. 
Provide key findings, market data, and trends.\"\n research_results = research_agent.run(research_task)\n\n # Step 2: Strategy Phase\n strategy_task = f\"Based on this research: {research_results}\\n\\nDevelop strategic recommendations for {topic}.\"\n strategy_results = strategy_agent.run(strategy_task)\n\n # Step 3: Execution Phase\n execution_task = f\"Create a detailed implementation plan based on:\\nResearch: {research_results}\\nStrategy: {strategy_results}\"\n execution_results = execution_agent.run(execution_task)\n\n return {\n \"research\": research_results,\n \"strategy\": strategy_results,\n \"execution\": execution_results\n }\n\n# Example: Collaborative investment analysis\ninvestment_analysis = collaborative_analysis(\"renewable energy sector investment opportunities\")\n\nfor phase, results in investment_analysis.items():\n print(f\"\\n=== {phase.upper()} PHASE ===\")\n print(results)\n
"},{"location":"swarms/agents/#support-and-resources","title":"Support and Resources","text":"Join our community of agent engineers and researchers for technical support, cutting-edge updates, and exclusive access to world-class agent engineering insights!
Platform Description Link \ud83d\udcda Documentation Official documentation and guides docs.swarms.world \ud83d\udcdd Blog Latest updates and technical articles Medium \ud83d\udcac Discord Live chat and community support Join Discord \ud83d\udc26 Twitter Latest news and announcements @kyegomez \ud83d\udc65 LinkedIn Professional network and updates The Swarm Corporation \ud83d\udcfa YouTube Tutorials and demos Swarms Channel \ud83c\udfab Events Join our community events Sign up here \ud83d\ude80 Onboarding Session Get onboarded with Kye Gomez, creator and lead maintainer of Swarms Book Session"},{"location":"swarms/agents/#getting-help","title":"Getting Help","text":"If you encounter issues or need assistance:
We welcome contributions! Here's how to get involved:
Report Bugs: Help us improve by reporting issues
Suggest Features: Share your ideas for new capabilities
Submit Code: Contribute improvements and new features
Improve Documentation: Help make our docs better
Share Examples: Show how you're using Swarms in your projects
This guide covers the essential aspects of the Swarms Agent class. For the most up-to-date information and advanced features, please refer to the official documentation and community resources.
"},{"location":"swarms/agents/abstractagent/","title":"swarms.agents","text":""},{"location":"swarms/agents/abstractagent/#1-introduction","title":"1. Introduction","text":"AbstractAgent
is an abstract class that serves as a foundation for implementing AI agents. An agent is an entity that can communicate with other agents and perform actions. The AbstractAgent
class allows for customization in the implementation of the receive
method, enabling different agents to define unique actions for receiving and processing messages.
AbstractAgent
provides capabilities for managing tools and accessing memory, and has methods for running, chatting, and stepping through communication with other agents.
from typing import Dict, List\n\n\nclass AbstractAgent:\n \"\"\"An abstract class for AI agents.\n\n An agent can communicate with other agents and perform actions.\n Different agents can differ in what actions they perform in the `receive` method.\n\n Agents are composed of:\n\n Agents = llm + tools + memory\n \"\"\"\n\n def __init__(self, name: str):\n \"\"\"\n Args:\n name (str): name of the agent.\n \"\"\"\n self._name = name\n\n @property\n def name(self):\n \"\"\"Get the name of the agent.\"\"\"\n return self._name\n\n def tools(self, tools):\n \"\"\"init tools\"\"\"\n\n def memory(self, memory_store):\n \"\"\"init memory\"\"\"\n\n def reset(self):\n \"\"\"(Abstract method) Reset the agent.\"\"\"\n\n def run(self, task: str):\n \"\"\"Run the agent once\"\"\"\n\n def _arun(self, task: str):\n \"\"\"Run the agent asynchronously\"\"\"\n\n def chat(self, messages: List[Dict]):\n \"\"\"Chat with the agent\"\"\"\n\n def _achat(self, messages: List[Dict]):\n \"\"\"Asynchronous Chat\"\"\"\n\n def step(self, message: str):\n \"\"\"Step through the agent\"\"\"\n\n def _astep(self, message: str):\n \"\"\"Asynchronous step\"\"\"\n
"},{"location":"swarms/agents/abstractagent/#3-functionality-and-usage","title":"3. Functionality and Usage","text":"The AbstractAgent
class represents a generic AI agent and provides a set of methods to interact with it.
To create an instance of an agent, the name
of the agent should be specified.
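As a minimal illustrative sketch (the EchoAgent subclass and its receive behavior are hypothetical, and a pared-down stand-in for AbstractAgent is defined inline so the snippet runs standalone):

```python
from typing import Dict, List


class AbstractAgent:
    """Pared-down stand-in mirroring the class listing above."""

    def __init__(self, name: str):
        self._name = name

    @property
    def name(self):
        """Get the name of the agent."""
        return self._name


class EchoAgent(AbstractAgent):
    """Hypothetical subclass that records received messages."""

    def __init__(self, name: str):
        super().__init__(name)
        self.inbox: List[Dict] = []

    def receive(self, message: Dict):
        # Override point: define this agent's reaction to an incoming message
        self.inbox.append(message)


agent = EchoAgent(name="echo-agent")
agent.receive({"id": 1, "text": "Hello, agent!"})
print(agent.name)  # echo-agent
```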
reset
","text":"The reset
method allows the agent to be reset to its initial state.
agent.reset()\n
"},{"location":"swarms/agents/abstractagent/#2-run","title":"2. run
","text":"The run
method allows the agent to perform a specific task.
agent.run(\"some_task\")\n
"},{"location":"swarms/agents/abstractagent/#3-chat","title":"3. chat
","text":"The chat
method enables communication with the agent through a series of messages.
messages = [{\"id\": 1, \"text\": \"Hello, agent!\"}, {\"id\": 2, \"text\": \"How are you?\"}]\nagent.chat(messages)\n
"},{"location":"swarms/agents/abstractagent/#4-step","title":"4. step
","text":"The step
method allows the agent to process a single message.
agent.step(\"Hello, agent!\")\n
"},{"location":"swarms/agents/abstractagent/#asynchronous-methods","title":"Asynchronous Methods","text":"The class also provides asynchronous variants of the core methods.
"},{"location":"swarms/agents/abstractagent/#additional-functionality","title":"Additional Functionality","text":"Additional functionalities for agent initialization and management of tools and memory are also provided.
agent.tools(some_tools)\nagent.memory(some_memory_store)\n
"},{"location":"swarms/agents/abstractagent/#4-additional-information-and-tips","title":"4. Additional Information and Tips","text":"When implementing a new agent using the AbstractAgent
class, ensure that the receive
method is overridden to define the specific behavior of the agent upon receiving messages.
For further exploration and understanding of AI agents and agent communication, refer to the relevant literature and research on this topic.
"},{"location":"swarms/agents/agent_judge/","title":"AgentJudge","text":"A specialized agent for evaluating and judging outputs from other agents or systems. Acts as a quality control mechanism providing objective assessments and feedback.
Based on the research paper: \"Agent-as-a-Judge: Evaluate Agents with Agents\" - arXiv:2410.10934
"},{"location":"swarms/agents/agent_judge/#overview","title":"Overview","text":"The AgentJudge is designed to evaluate and critique outputs from other AI agents, providing structured feedback on quality, accuracy, and areas for improvement. It supports both single-shot evaluations and iterative refinement through multiple evaluation loops with context building.
Key capabilities:
Quality Assessment: Evaluates correctness, clarity, and completeness of agent outputs
Structured Feedback: Provides detailed critiques with strengths, weaknesses, and suggestions
Multimodal Support: Can evaluate text outputs alongside images
Context Building: Maintains evaluation context across multiple iterations
Batch Processing: Efficiently processes multiple evaluations
graph TD\n A[Input Task] --> B[AgentJudge]\n B --> C{Evaluation Mode}\n\n C -->|step()| D[Single Eval]\n C -->|run()| E[Iterative Eval]\n C -->|run_batched()| F[Batch Eval]\n\n D --> G[Agent Core]\n E --> G\n F --> G\n\n G --> H[LLM Model]\n H --> I[Quality Analysis]\n I --> J[Feedback & Output]\n\n subgraph \"Feedback Details\"\n N[Strengths]\n O[Weaknesses]\n P[Improvements]\n Q[Accuracy Check]\n end\n\n J --> N\n J --> O\n J --> P\n J --> Q\n
"},{"location":"swarms/agents/agent_judge/#class-reference","title":"Class Reference","text":""},{"location":"swarms/agents/agent_judge/#constructor","title":"Constructor","text":"AgentJudge(\n id: str = str(uuid.uuid4()),\n agent_name: str = \"Agent Judge\",\n description: str = \"You're an expert AI agent judge...\",\n system_prompt: str = AGENT_JUDGE_PROMPT,\n model_name: str = \"openai/o1\",\n max_loops: int = 1,\n verbose: bool = False,\n *args,\n **kwargs\n)\n
"},{"location":"swarms/agents/agent_judge/#parameters","title":"Parameters","text":"Parameter Type Default Description id
str
str(uuid.uuid4())
Unique identifier for the judge instance agent_name
str
\"Agent Judge\"
Name of the agent judge description
str
\"You're an expert AI agent judge...\"
Description of the agent's role system_prompt
str
AGENT_JUDGE_PROMPT
System instructions for evaluation model_name
str
\"openai/o1\"
LLM model for evaluation max_loops
int
1
Maximum evaluation iterations verbose
bool
False
Enable verbose logging"},{"location":"swarms/agents/agent_judge/#methods","title":"Methods","text":""},{"location":"swarms/agents/agent_judge/#step","title":"step()","text":"step(\n task: str = None,\n tasks: Optional[List[str]] = None,\n img: Optional[str] = None\n) -> str\n
Processes a single task or list of tasks and returns evaluation.
Parameter Type Default Descriptiontask
str
None
Single task/output to evaluate tasks
List[str]
None
List of tasks/outputs to evaluate img
str
None
Path to image for multimodal evaluation Returns: str
- Detailed evaluation response
run(\n task: str = None,\n tasks: Optional[List[str]] = None,\n img: Optional[str] = None\n) -> List[str]\n
Executes evaluation in multiple iterations with context building.
Parameter Type Default Descriptiontask
str
None
Single task/output to evaluate tasks
List[str]
None
List of tasks/outputs to evaluate img
str
None
Path to image for multimodal evaluation Returns: List[str]
- List of evaluation responses from each iteration
run_batched(\n tasks: Optional[List[str]] = None,\n imgs: Optional[List[str]] = None\n) -> List[List[str]]\n
Executes batch evaluation of multiple tasks with corresponding images.
Parameter Type Default Descriptiontasks
List[str]
None
List of tasks/outputs to evaluate imgs
List[str]
None
List of image paths (same length as tasks) Returns: List[List[str]]
- Evaluation responses for each task
from swarms import AgentJudge\n\n# Initialize with default settings\njudge = AgentJudge()\n\n# Single task evaluation\nresult = judge.step(task=\"The capital of France is Paris.\")\nprint(result)\n
"},{"location":"swarms/agents/agent_judge/#custom-configuration","title":"Custom Configuration","text":"from swarms import AgentJudge\n\n# Custom judge configuration\njudge = AgentJudge(\n agent_name=\"content-evaluator\",\n model_name=\"gpt-4\",\n max_loops=3,\n verbose=True\n)\n\n# Evaluate multiple outputs\noutputs = [\n \"Agent CalculusMaster: The integral of x^2 + 3x + 2 is (1/3)x^3 + (3/2)x^2 + 2x + C\",\n \"Agent DerivativeDynamo: The derivative of sin(x) is cos(x)\",\n \"Agent LimitWizard: The limit of sin(x)/x as x approaches 0 is 1\"\n]\n\nevaluation = judge.step(tasks=outputs)\nprint(evaluation)\n
"},{"location":"swarms/agents/agent_judge/#iterative-evaluation-with-context","title":"Iterative Evaluation with Context","text":"from swarms import AgentJudge\n\n# Multiple iterations with context building\njudge = AgentJudge(max_loops=3)\n\n# Each iteration builds on previous context\nevaluations = judge.run(task=\"Agent output: 2+2=5\")\nfor i, eval_result in enumerate(evaluations):\n print(f\"Iteration {i+1}: {eval_result}\\n\")\n
"},{"location":"swarms/agents/agent_judge/#multimodal-evaluation","title":"Multimodal Evaluation","text":"from swarms import AgentJudge\n\njudge = AgentJudge()\n\n# Evaluate with image\nevaluation = judge.step(\n task=\"Describe what you see in this image\",\n img=\"path/to/image.jpg\"\n)\nprint(evaluation)\n
"},{"location":"swarms/agents/agent_judge/#batch-processing","title":"Batch Processing","text":"from swarms import AgentJudge\n\njudge = AgentJudge()\n\n# Batch evaluation with images\ntasks = [\n \"Describe this chart\",\n \"What's the main trend?\",\n \"Any anomalies?\"\n]\nimages = [\n \"chart1.png\",\n \"chart2.png\", \n \"chart3.png\"\n]\n\n# Each task evaluated independently\nevaluations = judge.run_batched(tasks=tasks, imgs=images)\nfor i, task_evals in enumerate(evaluations):\n print(f\"Task {i+1} evaluations: {task_evals}\")\n
"},{"location":"swarms/agents/agent_judge/#reference","title":"Reference","text":"@misc{zhuge2024agentasajudgeevaluateagentsagents,\n title={Agent-as-a-Judge: Evaluate Agents with Agents}, \n author={Mingchen Zhuge and Changsheng Zhao and Dylan Ashley and Wenyi Wang and Dmitrii Khizbullin and Yunyang Xiong and Zechun Liu and Ernie Chang and Raghuraman Krishnamoorthi and Yuandong Tian and Yangyang Shi and Vikas Chandra and J\u00fcrgen Schmidhuber},\n year={2024},\n eprint={2410.10934},\n archivePrefix={arXiv},\n primaryClass={cs.AI},\n url={https://arxiv.org/abs/2410.10934}\n}\n
"},{"location":"swarms/agents/consistency_agent/","title":"Consistency Agent Documentation","text":"The SelfConsistencyAgent
is a specialized agent designed for generating multiple independent responses to a given task and aggregating them into a single, consistent final answer. It leverages concurrent processing to enhance efficiency and employs a majority voting mechanism to ensure the reliability of the aggregated response.
The primary objective of the SelfConsistencyAgent
is to provide a robust mechanism for decision-making and problem-solving by generating diverse responses and synthesizing them into a coherent final answer. This approach is particularly useful in scenarios where consistency and reliability are critical.
SelfConsistencyAgent
","text":""},{"location":"swarms/agents/consistency_agent/#initialization","title":"Initialization","text":"__init__
: Initializes the SelfConsistencyAgent
with specified parameters.name
str
\"Self-Consistency-Agent\"
Name of the agent. description
str
\"An agent that uses self consistency to generate a final answer.\"
Description of the agent's purpose. system_prompt
str
CONSISTENCY_SYSTEM_PROMPT
System prompt for the reasoning agent. model_name
str
Required The underlying language model to use. num_samples
int
5
Number of independent responses to generate. max_loops
int
1
Maximum number of reasoning loops per sample. majority_voting_prompt
Optional[str]
majority_voting_prompt
Custom prompt for majority voting aggregation. eval
bool
False
Enable evaluation mode for answer validation. output_type
OutputType
\"dict\"
Format of the output. random_models_on
bool
False
Enable random model selection for diversity."},{"location":"swarms/agents/consistency_agent/#methods","title":"Methods","text":"run
: Generates multiple responses for the given task and aggregates them.task
(str
): The input prompt.img
(Optional[str]
, optional): Image input for vision tasks.answer
(Optional[str]
, optional): Expected answer for validation (if eval=True).Returns: Union[str, Dict[str, Any]]
- The aggregated final answer.
aggregation_agent
: Aggregates a list of responses into a single final answer using majority voting.
responses
(List[str]
): The list of responses.prompt
(str
, optional): Custom prompt for the aggregation agent.model_name
(str
, optional): Model to use for aggregation.Returns: str
- The aggregated answer.
check_responses_for_answer
: Checks if a specified answer is present in any of the provided responses.
responses
(List[str]
): A list of responses to check.answer
(str
): The answer to look for in the responses.Returns: bool
- True
if the answer is found, False
otherwise.
batched_run
: Run the agent on multiple tasks in batch.
tasks
(List[str]
): List of tasks to be processed.List[Union[str, Dict[str, Any]]]
- List of results for each task.from swarms.agents.consistency_agent import SelfConsistencyAgent\n\n# Initialize the agent\nagent = SelfConsistencyAgent(\n name=\"Math-Reasoning-Agent\",\n model_name=\"gpt-4o-mini\",\n max_loops=1,\n num_samples=5\n)\n\n# Define a task\ntask = \"What is the 40th prime number?\"\n\n# Run the agent\nfinal_answer = agent.run(task)\n\n# Print the final aggregated answer\nprint(\"Final aggregated answer:\", final_answer)\n
"},{"location":"swarms/agents/consistency_agent/#example-2-using-custom-majority-voting-prompt","title":"Example 2: Using Custom Majority Voting Prompt","text":"from swarms.agents.consistency_agent import SelfConsistencyAgent\n\n# Initialize the agent with a custom majority voting prompt\nagent = SelfConsistencyAgent(\n name=\"Reasoning-Agent\",\n model_name=\"gpt-4o-mini\",\n max_loops=1,\n num_samples=5,\n majority_voting_prompt=\"Please provide the most common response.\"\n)\n\n# Define a task\ntask = \"Explain the theory of relativity in simple terms.\"\n\n# Run the agent\nfinal_answer = agent.run(task)\n\n# Print the final aggregated answer\nprint(\"Final aggregated answer:\", final_answer)\n
"},{"location":"swarms/agents/consistency_agent/#example-3-evaluation-mode","title":"Example 3: Evaluation Mode","text":"from swarms.agents.consistency_agent import SelfConsistencyAgent\n\n# Initialize the agent with evaluation mode\nagent = SelfConsistencyAgent(\n name=\"Validation-Agent\",\n model_name=\"gpt-4o-mini\",\n num_samples=3,\n eval=True\n)\n\n# Run with expected answer for validation\nresult = agent.run(\"What is 2 + 2?\", answer=\"4\", eval=True)\nif result is not None:\n print(\"Validation passed:\", result)\nelse:\n print(\"Validation failed - expected answer not found\")\n
"},{"location":"swarms/agents/consistency_agent/#example-4-random-models-for-diversity","title":"Example 4: Random Models for Diversity","text":"from swarms.agents.consistency_agent import SelfConsistencyAgent\n\n# Initialize the agent with random model selection\nagent = SelfConsistencyAgent(\n name=\"Diverse-Reasoning-Agent\",\n model_name=\"gpt-4o-mini\",\n num_samples=5,\n random_models_on=True\n)\n\n# Run the agent\nresult = agent.run(\"What are the benefits of renewable energy?\")\nprint(\"Diverse reasoning result:\", result)\n
"},{"location":"swarms/agents/consistency_agent/#example-5-batch-processing","title":"Example 5: Batch Processing","text":"from swarms.agents.consistency_agent import SelfConsistencyAgent\n\n# Initialize the agent\nagent = SelfConsistencyAgent(\n name=\"Batch-Processing-Agent\",\n model_name=\"gpt-4o-mini\",\n num_samples=3\n)\n\n# Define multiple tasks\ntasks = [\n \"What is the capital of France?\",\n \"What is 15 * 23?\",\n \"Explain photosynthesis in simple terms.\"\n]\n\n# Process all tasks\nresults = agent.batched_run(tasks)\n\n# Print results\nfor i, result in enumerate(results):\n print(f\"Task {i+1} result: {result}\")\n
"},{"location":"swarms/agents/consistency_agent/#key-features","title":"Key Features","text":""},{"location":"swarms/agents/consistency_agent/#self-consistency-technique","title":"Self-Consistency Technique","text":"The agent implements the self-consistency approach based on the research paper \"Self-Consistency Improves Chain of Thought Reasoning in Language Models\" by Wang et al. (2022). This technique:
The agent uses ThreadPoolExecutor
to generate multiple responses concurrently, improving performance while maintaining independence between reasoning paths.
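The concurrent sampling pattern can be sketched as follows (generate_response here is a hypothetical stand-in for one independent model call, not the shipped implementation):

```python
from concurrent.futures import ThreadPoolExecutor


def generate_response(task: str, sample_id: int) -> str:
    """Hypothetical stand-in for one independent model call."""
    return f"response {sample_id} to: {task}"


def generate_samples(task: str, num_samples: int = 5) -> list:
    # Each sample is submitted independently; no shared reasoning state
    with ThreadPoolExecutor(max_workers=num_samples) as executor:
        futures = [
            executor.submit(generate_response, task, i)
            for i in range(num_samples)
        ]
        # Results collected in submission order
        return [f.result() for f in futures]


responses = generate_samples("What is the 40th prime number?", num_samples=3)
print(len(responses))  # 3
```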
The aggregation uses an AI-powered agent that: - Identifies dominant responses - Analyzes disparities and disagreements - Evaluates consensus strength - Synthesizes minority insights - Provides comprehensive recommendations
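The aggregation itself is LLM-driven, but its core majority-voting step reduces to choosing the most frequent answer; a plain-Python illustration of that idea (a simplification, not the shipped aggregation agent):

```python
from collections import Counter


def majority_vote(responses: list) -> str:
    """Return the most common response (ties broken by first occurrence)."""
    counts = Counter(responses)
    answer, _ = counts.most_common(1)[0]
    return answer


# Five hypothetical samples for "What is the 40th prime number?"
votes = ["179", "179", "173", "179", "181"]
print(majority_vote(votes))  # 179
```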
"},{"location":"swarms/agents/consistency_agent/#output-formats","title":"Output Formats","text":"The agent supports various output types: - \"dict\"
: Dictionary format with conversation history - \"str\"
: Simple string output - \"list\"
: List format - \"json\"
: JSON formatted output
num_samples
increases processing time and costbatched_run
for multiple related tasksThe create_agents_from_yaml
function is designed to dynamically create agents and orchestrate swarms based on configurations defined in a YAML file. It is particularly suited for enterprise use-cases, offering scalability and reliability for agent-based workflows.
*args
and **kwargs
) for fine-tuning agent behavior.model
A callable representing the model (LLM or other) that agents will use. Callable None OpenAIChat(model_name=\"gpt-4\")
yaml_file
Path to the YAML file containing agent configurations. String \"agents.yaml\" \"config/agents.yaml\"
return_type
Determines the type of return object. Options: \"auto\"
, \"swarm\"
, \"agents\"
, \"both\"
, \"tasks\"
, \"run_swarm\"
. String \"auto\" \"both\"
*args
Additional positional arguments for further customization (e.g., agent behavior). List N/A N/A **kwargs
Additional keyword arguments for customization (e.g., specific parameters passed to the agents or swarm). Dict N/A N/A"},{"location":"swarms/agents/create_agents_yaml/#return-types","title":"Return Types","text":"Return Type Description SwarmRouter
Returns a SwarmRouter
object, orchestrating the created agents, only if swarm architecture is defined in YAML. Agent
Returns a single agent if only one is defined. List[Agent]
Returns a list of agents if multiple are defined. Tuple
If both agents and a swarm are present, returns both as a tuple (SwarmRouter, List[Agent]
). List[Dict]
Returns a list of task results if tasks were executed. None
Returns nothing if an invalid return type is provided or an error occurs."},{"location":"swarms/agents/create_agents_yaml/#detailed-return-types","title":"Detailed Return Types","text":"Return Type Condition Example Return Value \"auto\"
Automatically determines the return based on YAML content. SwarmRouter
if swarm architecture is defined, otherwise Agent
or List[Agent]
. \"swarm\"
Returns SwarmRouter
if present; otherwise returns agents. <SwarmRouter>
\"agents\"
Returns a list of agents (or a single agent if only one is defined). [<Agent>, <Agent>]
or <Agent>
\"both\"
Returns both SwarmRouter
and agents in a tuple. (<SwarmRouter>, [<Agent>, <Agent>])
\"tasks\"
Returns the task results, if tasks were executed by agents. [{'task': 'task_output'}, {'task2': 'output'}]
\"run_swarm\"
Executes the swarm (if defined) and returns the result. 'Swarm task output here'
"},{"location":"swarms/agents/create_agents_yaml/#example-use-cases","title":"Example Use Cases","text":"agents:\n - agent_name: \"Financial-Analysis-Agent\"\n system_prompt: \"Analyze the best investment strategy for 2024.\"\n max_loops: 1\n autosave: true\n verbose: false\n context_length: 100000\n output_type: \"str\"\n task: \"Analyze stock options for long-term gains.\"\n\n - agent_name: \"Risk-Analysis-Agent\"\n system_prompt: \"Evaluate the risk of tech stocks in 2024.\"\n max_loops: 2\n autosave: false\n verbose: true\n context_length: 50000\n output_type: \"json\"\n task: \"What are the riskiest stocks in the tech sector?\"\n
from swarms.agents.create_agents_from_yaml import create_agents_from_yaml\n\n# Model representing your LLM\ndef model(prompt):\n return f\"Processed: {prompt}\"\n\n# Create agents and return them as a list\nagents = create_agents_from_yaml(model=model, yaml_file=\"agents.yaml\", return_type=\"agents\")\nprint(agents)\n
agents:\n - agent_name: \"Legal-Agent\"\n system_prompt: \"Provide legal advice on corporate structuring.\"\n task: \"How to incorporate a business as an LLC?\"\n\nswarm_architecture:\n name: \"Corporate-Swarm\"\n description: \"A swarm for helping businesses with legal and tax advice.\"\n swarm_type: \"ConcurrentWorkflow\"\n task: \"How can we optimize a business structure for maximum tax efficiency?\"\n max_loops: 3\n
import os\n\nfrom dotenv import load_dotenv\nfrom loguru import logger\nfrom swarm_models import OpenAIChat\n\nfrom swarms.agents.create_agents_from_yaml import (\n create_agents_from_yaml,\n)\n\n# Load environment variables\nload_dotenv()\n\n# Path to your YAML file\nyaml_file = \"agents_multi_agent.yaml\"\n\n\n# Get the Groq API key from the environment variable\napi_key = os.getenv(\"GROQ_API_KEY\")\n\n# Model\nmodel = OpenAIChat(\n openai_api_base=\"https://api.groq.com/openai/v1\",\n openai_api_key=api_key,\n model_name=\"llama-3.1-70b-versatile\",\n temperature=0.1,\n)\n\ntry:\n # Create agents and execute the swarm ('run_swarm' returns the swarm's task output)\n task_results = create_agents_from_yaml(\n model=model, yaml_file=yaml_file, return_type=\"run_swarm\"\n )\n\n logger.info(f\"Results from agents: {task_results}\")\nexcept Exception as e:\n logger.error(f\"An error occurred: {e}\")\n
agents:\n - agent_name: \"Market-Research-Agent\"\n system_prompt: \"What are the latest trends in AI?\"\n task: \"Provide a market analysis for AI technologies in 2024.\"\n
from swarms.agents.create_agents_from_yaml import create_agents_from_yaml\n\n# Model representing your LLM\ndef model(prompt):\n return f\"Processed: {prompt}\"\n\n# Create agents and return both the swarm router and the agents\nswarm, agents = create_agents_from_yaml(model=model, yaml_file=\"agents.yaml\", return_type=\"both\")\nprint(swarm, agents)\n
"},{"location":"swarms/agents/create_agents_yaml/#yaml-schema-overview","title":"YAML Schema Overview:","text":"Below is a breakdown of the attributes expected in the YAML configuration file, which governs how agents and swarms are created.
"},{"location":"swarms/agents/create_agents_yaml/#yaml-attributes-table","title":"YAML Attributes Table:","text":"Attribute Name Description Type Required Default/Example Valueagents
List of agents to be created. Each agent must have specific configurations. List of dicts Yes agent_name
The name of the agent. String Yes \"Stock-Analysis-Agent\"
system_prompt
The system prompt that the agent will use. String Yes \"Your full system prompt here\"
max_loops
Maximum number of iterations or loops for the agent. Integer No 1 autosave
Whether the agent should automatically save its state. Boolean No true
dashboard
Whether to enable a dashboard for the agent. Boolean No false
verbose
Whether to run the agent in verbose mode (for debugging). Boolean No false
dynamic_temperature_enabled
Enable dynamic temperature adjustments during agent execution. Boolean No false
saved_state_path
Path where the agent's state is saved for recovery. String No \"path_to_save_state.json\"
user_name
Name of the user interacting with the agent. String No \"default_user\"
retry_attempts
Number of times to retry an operation in case of failure. Integer No 1 context_length
Maximum context length for agent interactions. Integer No 100000 return_step_meta
Whether to return metadata for each step of the task. Boolean No false
output_type
The type of output the agent will return (e.g., str
, json
). String No \"str\"
task
Task to be executed by the agent (optional). String No \"What is the best strategy for long-term stock investment?\"
"},{"location":"swarms/agents/create_agents_yaml/#swarm-architecture-optional","title":"Swarm Architecture (Optional):","text":"Attribute Name Description Type Required Default/Example Value swarm_architecture
Defines the swarm configuration. For more information on what can be added to the swarm architecture, please refer to the Swarm Router documentation. Dict No name
The name of the swarm. String Yes \"MySwarm\"
description
Description of the swarm and its purpose. String No \"A swarm for collaborative task solving\"
max_loops
Maximum number of loops for the swarm. Integer No 5 swarm_type
The type of swarm (e.g., ConcurrentWorkflow
, SequentialWorkflow
). String Yes \"ConcurrentWorkflow\"
task
The primary task assigned to the swarm. String No \"How can we trademark concepts as a delaware C CORP for free?\"
"},{"location":"swarms/agents/create_agents_yaml/#yaml-schema-example","title":"YAML Schema Example:","text":"Below is an updated YAML schema that conforms to the function's expectations:
agents:\n - agent_name: \"Financial-Analysis-Agent\"\n system_prompt: \"Your full system prompt here\"\n max_loops: 1\n autosave: true\n dashboard: false\n verbose: true\n dynamic_temperature_enabled: true\n saved_state_path: \"finance_agent.json\"\n user_name: \"swarms_corp\"\n retry_attempts: 1\n context_length: 200000\n return_step_meta: false\n output_type: \"str\"\n # task: \"How can I establish a ROTH IRA to buy stocks and get a tax break?\" # Turn off if using swarm\n\n - agent_name: \"Stock-Analysis-Agent\"\n system_prompt: \"Your full system prompt here\"\n max_loops: 2\n autosave: true\n dashboard: false\n verbose: true\n dynamic_temperature_enabled: false\n saved_state_path: \"stock_agent.json\"\n user_name: \"stock_user\"\n retry_attempts: 3\n context_length: 150000\n return_step_meta: true\n output_type: \"json\"\n # task: \"What is the best strategy for long-term stock investment?\"\n\n# Optional Swarm Configuration\nswarm_architecture:\n name: \"MySwarm\"\n description: \"A swarm for collaborative task solving\"\n max_loops: 5\n swarm_type: \"ConcurrentWorkflow\"\n task: \"How can we trademark concepts as a delaware C CORP for free?\" # Main task \n
"},{"location":"swarms/agents/create_agents_yaml/#diagram","title":"Diagram","text":"graph TD;\n A[Task] -->|Send to| B[Financial-Analysis-Agent]\n A -->|Send to| C[Stock-Analysis-Agent]
"},{"location":"swarms/agents/create_agents_yaml/#how-to-use-create_agents_from_yaml-function-with-yaml","title":"How to Use create_agents_from_yaml
Function with YAML:","text":"import os\n\nfrom dotenv import load_dotenv\nfrom loguru import logger\nfrom swarm_models import OpenAIChat\n\nfrom swarms.agents.create_agents_from_yaml import (\n create_agents_from_yaml,\n)\n\n# Load environment variables\nload_dotenv()\n\n# Path to your YAML file\nyaml_file = \"agents.yaml\"\n\n\n# Get the Groq API key from the environment variable\napi_key = os.getenv(\"GROQ_API_KEY\")\n\n# Model\nmodel = OpenAIChat(\n openai_api_base=\"https://api.groq.com/openai/v1\",\n openai_api_key=api_key,\n model_name=\"llama-3.1-70b-versatile\",\n temperature=0.1,\n)\n\ntry:\n # Run the swarm defined in the YAML file and return the task results\n task_results = create_agents_from_yaml(\n model=model, yaml_file=yaml_file, return_type=\"run_swarm\"\n )\n\n logger.info(f\"Results from agents: {task_results}\")\nexcept Exception as e:\n logger.error(f\"An error occurred: {e}\")\n
"},{"location":"swarms/agents/create_agents_yaml/#error-handling","title":"Error Handling:","text":"If the YAML file is missing required fields or contains invalid values, the function raises a ValueError
. The create_agents_from_yaml
function provides a flexible and powerful way to dynamically configure and execute agents, supporting a wide range of tasks and configurations for enterprise-level use cases. By following the YAML schema and function signature, users can easily define and manage their agents and swarms.
Integrating external agents from other frameworks like Langchain, Griptape, and more is straightforward using Swarms. Below are step-by-step guides on how to bring these agents into Swarms by creating a new class, implementing the required methods, and ensuring compatibility.
"},{"location":"swarms/agents/external_party_agents/#quick-overview","title":"Quick Overview","text":"To integrate an external agent, create a subclass of the Agent
class from Swarms and implement a .run(task: str) -> str
method that executes the agent and returns a string response. The primary structure you'll need to integrate any external agent is the Agent
class from Swarms. Here's a template for how your new agent class should be structured:
import json\n\nfrom swarms import Agent\n\nclass ExternalAgent(Agent):\n def run(self, task: str) -> str:\n # Implement logic to run the external agent\n pass\n\n def save_to_json(self, output: str, filepath: str):\n # Optionally save the result to a JSON file\n with open(filepath, \"w\") as file:\n json.dump({\"response\": output}, file)\n
"},{"location":"swarms/agents/external_party_agents/#griptape-agent-integration-example","title":"Griptape Agent Integration Example","text":"In this example, we will create a Griptape agent by inheriting from the Swarms Agent
class and implementing the run
method.
Create a new class that inherits from the SwarmsAgent
class, then override the run()
method: implement logic to process a task string and execute the Griptape agent.from swarms import (\n Agent as SwarmsAgent,\n) # Import the base Agent class from Swarms\nfrom griptape.structures import Agent as GriptapeAgent\nfrom griptape.tools import (\n WebScraperTool,\n FileManagerTool,\n PromptSummaryTool,\n)\n\n# Create a custom agent class that inherits from SwarmsAgent\nclass GriptapeSwarmsAgent(SwarmsAgent):\n def __init__(self, *args, **kwargs):\n # Initialize the Griptape agent with its tools\n self.agent = GriptapeAgent(\n input=\"Load {{ args[0] }}, summarize it, and store it in a file called {{ args[1] }}.\",\n tools=[\n WebScraperTool(off_prompt=True),\n PromptSummaryTool(off_prompt=True),\n FileManagerTool(),\n ],\n *args,\n **kwargs,\n )\n\n # Override the run method to take a task and execute it using the Griptape agent\n def run(self, task: str) -> str:\n # Extract URL and filename from task\n url, filename = task.split(\",\") # Example task string: \"https://example.com, output.txt\"\n # Execute the Griptape agent\n result = self.agent.run(url.strip(), filename.strip())\n # Return the final result as a string\n return str(result)\n\n\n# Example usage:\ngriptape_swarms_agent = GriptapeSwarmsAgent()\noutput = griptape_swarms_agent.run(\"https://griptape.ai, griptape.txt\")\nprint(output)\n
"},{"location":"swarms/agents/external_party_agents/#explanation","title":"Explanation:","text":"You can enhance your external agents with additional features such as:
Saving outputs to JSON, databases, or logs.
Handling errors and retry mechanisms for robustness.
Custom logging with tools like Loguru for extensive debugging.
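These enhancement ideas can be combined in a thin wrapper around any external agent. The sketch below is illustrative only: `RobustExternalAgent` and `flaky_external_run` are hypothetical names, the stdlib `logging` module stands in for Loguru (which exposes a similar API), and in a real integration the class would inherit from swarms' `Agent`.

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("external-agent")


class RobustExternalAgent:
    """Wraps an external agent's run callable with retries, logging, and JSON output."""

    def __init__(self, inner_run, retry_attempts: int = 3, backoff_seconds: float = 0.0):
        self.inner_run = inner_run  # the external framework's run callable
        self.retry_attempts = retry_attempts
        self.backoff_seconds = backoff_seconds

    def run(self, task: str) -> str:
        last_error = None
        for attempt in range(1, self.retry_attempts + 1):
            try:
                result = str(self.inner_run(task))
                logger.info("attempt %d succeeded", attempt)
                return result
            except Exception as exc:  # retry any failure from the external framework
                last_error = exc
                logger.warning("attempt %d failed: %s", attempt, exc)
                time.sleep(self.backoff_seconds)
        raise RuntimeError(f"all {self.retry_attempts} attempts failed") from last_error

    def save_to_json(self, output: str, filepath: str) -> None:
        # Persist the final response for auditing or downstream processing
        with open(filepath, "w") as file:
            json.dump({"response": output}, file)


# Example usage with a flaky stand-in for an external agent
calls = {"n": 0}

def flaky_external_run(task: str) -> str:
    calls["n"] += 1
    if calls["n"] < 2:
        raise TimeoutError("transient failure")
    return f"answer to: {task}"

agent = RobustExternalAgent(flaky_external_run)
print(agent.run("What are the latest trends in AI?"))  # succeeds on the second attempt
```

Because the wrapper only depends on a callable, the same retry-and-log shell can front a Griptape, Langchain, or plain-API agent without modification.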
Next, we demonstrate how to integrate a Langchain agent with Swarms by following similar steps.
"},{"location":"swarms/agents/external_party_agents/#langchain-integration-steps","title":"Langchain Integration Steps:","text":"Create a new class that inherits from the SwarmsAgent
class, then override the run()
method: pass tasks to the Langchain agent and return the response.from swarms import Agent as SwarmsAgent\nfrom langchain import LLMChain\nfrom langchain.llms import OpenAI\nfrom langchain.prompts import PromptTemplate\n\n# Create a custom agent class that inherits from SwarmsAgent\nclass LangchainSwarmsAgent(SwarmsAgent):\n def __init__(self, *args, **kwargs):\n # Initialize the Langchain agent with LLM and prompt\n prompt_template = PromptTemplate(template=\"Answer the question: {question}\")\n llm = OpenAI(model=\"gpt-3.5-turbo\")\n self.chain = LLMChain(llm=llm, prompt=prompt_template)\n super().__init__(*args, **kwargs)\n\n # Override the run method to take a task and execute it using the Langchain agent\n def run(self, task: str) -> str:\n # Pass the task to the Langchain agent\n result = self.chain.run({\"question\": task})\n # Return the final result as a string\n return result\n\n# Example usage:\nlangchain_swarms_agent = LangchainSwarmsAgent()\noutput = langchain_swarms_agent.run(\"What is the capital of France?\")\nprint(output)\n
"},{"location":"swarms/agents/external_party_agents/#explanation_1","title":"Explanation:","text":"## Example Integration:
from swarms import Agent as SwarmsAgent\nfrom openai import OpenAI\n\n# Custom OpenAI Chat Completions Agent\nclass OpenAIFunctionAgent(SwarmsAgent):\n def __init__(self, *args, **kwargs):\n # Initialize the OpenAI client (reads OPENAI_API_KEY from the environment by default)\n self.client = OpenAI()\n super().__init__(*args, **kwargs)\n\n def run(self, task: str) -> str:\n # Example task: \"summarize, Provide a short summary of this text...\"\n command, input_text = task.split(\", \", 1)\n response = self.client.chat.completions.create(\n model=\"gpt-4\",\n messages=[{\"role\": \"user\", \"content\": f\"{command}: {input_text}\"}],\n temperature=0.5,\n max_tokens=100,\n )\n return response.choices[0].message.content.strip()\n\n# Example usage:\nopenai_agent = OpenAIFunctionAgent()\noutput = openai_agent.run(\"summarize, Provide a short summary of this text...\")\nprint(output)\n
"},{"location":"swarms/agents/external_party_agents/#2-rasa-agents","title":"2. Rasa Agents","text":"## Example Integration:
from swarms import Agent as SwarmsAgent\nfrom rasa.core.agent import Agent as RasaAgent\n\n# Custom Rasa Swarms Agent\nclass RasaSwarmsAgent(SwarmsAgent):\n def __init__(self, model_path: str, *args, **kwargs):\n # Initialize the Rasa agent with a pre-trained model\n self.agent = RasaAgent.load(model_path)\n super().__init__(*args, **kwargs)\n\n def run(self, task: str) -> str:\n # Pass user input to the Rasa agent\n result = self.agent.handle_text(task)\n # Return the final response from the agent\n return result[0][\"text\"] if result else \"No response.\"\n\n# Example usage:\nrasa_swarms_agent = RasaSwarmsAgent(\"path/to/rasa_model\")\noutput = rasa_swarms_agent.run(\"Hello, how can I get a refund?\")\nprint(output)\n
"},{"location":"swarms/agents/external_party_agents/#3-hugging-face-transformers","title":"3. Hugging Face Transformers","text":"## Example Integration:
from swarms import Agent as SwarmsAgent\nfrom transformers import pipeline\n\n# Custom Hugging Face Agent\nclass HuggingFaceSwarmsAgent(SwarmsAgent):\n def __init__(self, model_name: str, *args, **kwargs):\n # Initialize a pre-trained pipeline from Hugging Face\n self.pipeline = pipeline(\"text-generation\", model=model_name)\n super().__init__(*args, **kwargs)\n\n def run(self, task: str) -> str:\n # Generate text based on the task input\n result = self.pipeline(task, max_length=50)\n return result[0][\"generated_text\"]\n\n# Example usage:\nhf_swarms_agent = HuggingFaceSwarmsAgent(\"gpt2\")\noutput = hf_swarms_agent.run(\"Once upon a time in a land far, far away...\")\nprint(output)\n
"},{"location":"swarms/agents/external_party_agents/#4-autogpt-or-babyagi","title":"4. AutoGPT or BabyAGI","text":"## Example Integration:
from swarms import Agent as SwarmsAgent\nfrom autogpt import AutoGPT\n\n# Custom AutoGPT Agent\nclass AutoGPTSwarmsAgent(SwarmsAgent):\n def __init__(self, config, *args, **kwargs):\n # Initialize AutoGPT with configuration\n self.agent = AutoGPT(config)\n super().__init__(*args, **kwargs)\n\n def run(self, task: str) -> str:\n # Execute task recursively using AutoGPT\n result = self.agent.run(task)\n return result\n\n# Example usage:\nautogpt_swarms_agent = AutoGPTSwarmsAgent({\"goal\": \"Solve world hunger\"})\noutput = autogpt_swarms_agent.run(\"Develop a plan to solve world hunger.\")\nprint(output)\n
"},{"location":"swarms/agents/external_party_agents/#5-dialogflow-agents","title":"5. DialogFlow Agents","text":"## Example Integration:
from swarms import Agent as SwarmsAgent\nfrom google.cloud import dialogflow\n\n# Custom DialogFlow Agent\nclass DialogFlowSwarmsAgent(SwarmsAgent):\n def __init__(self, project_id: str, session_id: str, *args, **kwargs):\n # Initialize DialogFlow session client\n self.session_client = dialogflow.SessionsClient()\n self.project_id = project_id\n self.session_id = session_id\n super().__init__(*args, **kwargs)\n\n def run(self, task: str) -> str:\n session = self.session_client.session_path(self.project_id, self.session_id)\n text_input = dialogflow.TextInput(text=task, language_code=\"en-US\")\n query_input = dialogflow.QueryInput(text=text_input)\n response = self.session_client.detect_intent(\n request={\"session\": session, \"query_input\": query_input}\n )\n return response.query_result.fulfillment_text\n\n# Example usage:\ndialogflow_swarms_agent = DialogFlowSwarmsAgent(\"your_project_id\", \"your_session_id\")\noutput = dialogflow_swarms_agent.run(\"Book me a flight to Paris.\")\nprint(output)\n
"},{"location":"swarms/agents/external_party_agents/#6-chatterbot-agents","title":"6. ChatterBot Agents","text":"## Example Integration:
from swarms import Agent as SwarmsAgent\nfrom chatterbot import ChatBot\n\n# Custom ChatterBot Agent\nclass ChatterBotSwarmsAgent(SwarmsAgent):\n def __init__(self, name: str, *args, **kwargs):\n # Initialize ChatterBot\n self.agent = ChatBot(name)\n super().__init__(*args, **kwargs)\n\n def run(self, task: str) -> str:\n # Get a response from ChatterBot based on user input\n response = self.agent.get_response(task)\n return str(response)\n\n# Example usage:\nchatterbot_swarms_agent = ChatterBotSwarmsAgent(\"Assistant\")\noutput = chatterbot_swarms_agent.run(\"What is the capital of Italy?\")\nprint(output)\n
"},{"location":"swarms/agents/external_party_agents/#7-custom-apis-as-agents","title":"7. Custom APIs as Agents","text":"## Example Integration:
from swarms import Agent as SwarmsAgent\nimport requests\n\n# Custom API Agent\nclass APIAgent(SwarmsAgent):\n def run(self, task: str) -> str:\n # Parse task for API endpoint and parameters\n endpoint, params = task.split(\", \")\n response = requests.get(endpoint, params={\"q\": params})\n return response.text\n\n# Example usage:\napi_swarms_agent = APIAgent()\noutput = api_swarms_agent.run(\"https://api.example.com/search, python\")\nprint(output)\n
"},{"location":"swarms/agents/external_party_agents/#summary-of-integrations","title":"Summary of Integrations:","text":"Griptape: Integrate with tools for web scraping, summarization, etc.
Langchain: Use powerful language model orchestration.
OpenAI Function Calling: Directly run OpenAI API-based agents.
Rasa: Build and integrate conversational agents.
Hugging Face: Leverage transformer models.
AutoGPT/BabyAGI: Recursive, autonomous task execution.
DialogFlow: Integrate conversational flows for voice/chat-based systems.
ChatterBot: Machine-learning conversational agents.
Custom APIs: Leverage external APIs as agents for custom workflows.
By following the steps outlined above, you can seamlessly integrate external agent frameworks like Griptape and Langchain into Swarms. This makes Swarms a highly versatile platform for orchestrating various agentic workflows and leveraging the unique capabilities of different frameworks.
For more examples and use cases, please refer to the official Swarms documentation site.
"},{"location":"swarms/agents/gkp_agent/","title":"Generated Knowledge Prompting (GKP) Agent","text":"The GKP Agent is a sophisticated reasoning system that enhances its capabilities by generating relevant knowledge before answering queries. This approach, inspired by Liu et al. 2022, is particularly effective for tasks requiring commonsense reasoning and factual information.
"},{"location":"swarms/agents/gkp_agent/#overview","title":"Overview","text":"The GKP Agent consists of three main components: 1. Knowledge Generator - Creates relevant factual information 2. Reasoner - Uses generated knowledge to form answers 3. Coordinator - Synthesizes multiple reasoning paths into a final answer
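The three components above can be sketched as plain functions around any LLM callable. This is an illustrative reduction of the pipeline, not the GKPAgent implementation; the function names and the stub LLM are assumptions made for the sketch.

```python
def generate_knowledge(query: str, llm, num_items: int = 3) -> list:
    # Knowledge Generator: ask the LLM for facts relevant to the query
    return [llm(f"State a fact relevant to: {query}") for _ in range(num_items)]

def reason_and_answer(query: str, knowledge: str, llm) -> str:
    # Reasoner: answer the query conditioned on one knowledge item
    return llm(f"Knowledge: {knowledge}\nQuestion: {query}\nAnswer:")

def coordinate(answers: list) -> str:
    # Coordinator: majority vote across the independent reasoning paths
    return max(set(answers), key=answers.count)

def gkp(query: str, llm, num_items: int = 3) -> str:
    knowledge = generate_knowledge(query, llm, num_items)
    paths = [reason_and_answer(query, k, llm) for k in knowledge]
    return coordinate(paths)

# Stub LLM so the sketch runs without an API key
def stub_llm(prompt: str) -> str:
    return "Paris" if "capital of France" in prompt else "a generic fact"

print(gkp("What is the capital of France?", stub_llm))  # -> Paris
```

Each knowledge item seeds an independent reasoning path, which is why the coordination step can recover a reliable answer even when individual paths disagree.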
"},{"location":"swarms/agents/gkp_agent/#architecture","title":"Architecture","text":"graph TD\n A[Input Query] --> B[Knowledge Generator]\n B --> C[Generate Knowledge Items]\n C --> D[Reasoner]\n D --> E[Multiple Reasoning Paths]\n E --> F[Coordinator]\n F --> G[Final Answer]\n\n subgraph \"Knowledge Generation\"\n B\n C\n end\n\n subgraph \"Reasoning\"\n D\n E\n end\n\n subgraph \"Coordination\"\n F\n G\n end
"},{"location":"swarms/agents/gkp_agent/#use-cases","title":"Use Cases","text":"graph LR\n A[GKP Agent] --> B[Commonsense Reasoning]\n A --> C[Factual Question Answering]\n A --> D[Complex Problem Solving]\n A --> E[Multi-step Reasoning]\n\n B --> B1[Everyday Logic]\n B --> B2[Social Situations]\n\n C --> C1[Historical Facts]\n C --> C2[Scientific Information]\n\n D --> D1[Technical Analysis]\n D --> D2[Decision Making]\n\n E --> E1[Chain of Thought]\n E --> E2[Multi-perspective Analysis]
"},{"location":"swarms/agents/gkp_agent/#api-reference","title":"API Reference","text":""},{"location":"swarms/agents/gkp_agent/#gkpagent","title":"GKPAgent","text":"The main agent class that orchestrates the knowledge generation and reasoning process.
"},{"location":"swarms/agents/gkp_agent/#initialization-parameters","title":"Initialization Parameters","text":"Parameter Type Default Description agent_name str \"gkp-agent\" Name identifier for the agent model_name str \"openai/o1\" LLM model to use for all components num_knowledge_items int 6 Number of knowledge snippets to generate per query"},{"location":"swarms/agents/gkp_agent/#methods","title":"Methods","text":"Method Description Parameters Returns process(query: str) Process a single query through the GKP pipeline query: str Dict[str, Any] containing full processing results run(queries: List[str], detailed_output: bool = False) Process multiple queries queries: List[str], detailed_output: bool Union[List[str], List[Dict[str, Any]]]"},{"location":"swarms/agents/gkp_agent/#knowledgegenerator","title":"KnowledgeGenerator","text":"Component responsible for generating relevant knowledge for queries.
"},{"location":"swarms/agents/gkp_agent/#initialization-parameters_1","title":"Initialization Parameters","text":"Parameter Type Default Description agent_name str \"knowledge-generator\" Name identifier for the knowledge generator agent model_name str \"openai/o1\" Model to use for knowledge generation num_knowledge_items int 2 Number of knowledge items to generate per query"},{"location":"swarms/agents/gkp_agent/#methods_1","title":"Methods","text":"Method Description Parameters Returns generate_knowledge(query: str) Generate relevant knowledge for a query query: str List[str] of generated knowledge statements"},{"location":"swarms/agents/gkp_agent/#reasoner","title":"Reasoner","text":"Component that uses generated knowledge to reason about and answer queries.
"},{"location":"swarms/agents/gkp_agent/#initialization-parameters_2","title":"Initialization Parameters","text":"Parameter Type Default Description agent_name str \"knowledge-reasoner\" Name identifier for the reasoner agent model_name str \"openai/o1\" Model to use for reasoning"},{"location":"swarms/agents/gkp_agent/#methods_2","title":"Methods","text":"Method Description Parameters Returns reason_and_answer(query: str, knowledge: str) Reason about a query using provided knowledge query: str, knowledge: str Dict[str, str] containing explanation, confidence, and answer"},{"location":"swarms/agents/gkp_agent/#example-usage","title":"Example Usage","text":"from swarms.agents.gkp_agent import GKPAgent\n\n# Initialize the GKP Agent\nagent = GKPAgent(\n agent_name=\"gkp-agent\",\n model_name=\"gpt-4\", # Using OpenAI's model\n num_knowledge_items=6, # Generate 6 knowledge items per query\n)\n\n# Example queries\nqueries = [\n \"What are the implications of quantum entanglement on information theory?\",\n]\n\n# Run the agent\nresults = agent.run(queries)\n\n# Print results\nfor i, result in enumerate(results):\n print(f\"\\nQuery {i+1}: {queries[i]}\")\n print(f\"Answer: {result}\")\n
"},{"location":"swarms/agents/gkp_agent/#best-practices","title":"Best Practices","text":"Knowledge Generation: adjust model parameters for optimal performance. Reasoning Process: consider multiple perspectives across the generated knowledge. Coordination: synthesize the individual reasoning paths into a single final answer.
The agent includes robust error handling for: - Invalid queries - Failed knowledge generation - Reasoning errors - Coordination failures
"},{"location":"swarms/agents/iterative_agent/","title":"Iterative Reflective Expansion (IRE) Algorithm Documentation","text":"The Iterative Reflective Expansion (IRE) Algorithm is a sophisticated reasoning framework that employs iterative hypothesis generation, simulation, and refinement to solve complex problems. It leverages a multi-step approach where an AI agent generates initial solution paths, evaluates their effectiveness through simulation, reflects on errors, and dynamically revises reasoning strategies. Through continuous cycles of hypothesis testing and meta-cognitive reflection, the algorithm progressively converges on optimal solutions by learning from both successful and unsuccessful reasoning attempts.
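The cycle described above (generate hypotheses, simulate, reflect, revise) can be sketched as a small driver loop. This is an illustrative skeleton, not the IterativeReflectiveExpansion implementation; the callback names and the toy number-search instantiation are assumptions made for the sketch.

```python
def ire_solve(problem, generate, simulate, reflect, revise, max_iterations: int = 5):
    """Sketch of the IRE cycle. The four callables stand in for the
    LLM-driven steps: hypothesis generation, simulation scoring,
    meta-cognitive reflection, and path revision."""
    hypotheses = generate(problem)
    memory = []  # reasoning memory accumulated across iterations
    best = None
    for _ in range(max_iterations):
        outcomes = [(h, simulate(h)) for h in hypotheses]   # simulate reasoning paths
        best = max(outcomes, key=lambda pair: pair[1])      # evaluate outcomes
        if best[1] >= 1.0:                                  # satisfactory solution found
            return best[0]
        feedback = reflect(outcomes, memory)                # meta-cognitive reflection
        memory.append(feedback)
        hypotheses = revise(hypotheses, feedback)           # revise and expand paths
    return best[0]

# Toy instantiation: search for the value 3 starting from 0
result = ire_solve(
    "find 3",
    generate=lambda problem: [0],
    simulate=lambda h: 1.0 if h == 3 else 0.0,
    reflect=lambda outcomes, memory: "increase",
    revise=lambda hs, feedback: [h + 1 for h in hs],
)
print(result)  # -> 3
```

The reasoning memory lets later iterations condition on earlier reflections, which is what distinguishes IRE from a simple retry loop.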
"},{"location":"swarms/agents/iterative_agent/#architecture","title":"Architecture","text":"graph TD\n Problem_Input[\"\ud83e\udde9 Problem Input\"] --> Generate_Hypotheses\n Generate_Hypotheses[\"Generate Initial Hypotheses\"] --> Simulate\n subgraph Iterative Reflective Expansion Loop\n Simulate[\"Simulate Reasoning Paths\"] --> Evaluate\n Evaluate[\"Evaluate Outcomes\"] --> Reflect{Is solution satisfactory?}\n Reflect -->|No, issues found| Meta_Reflect\n Reflect -->|Yes| Promising\n Meta_Reflect[\"Meta-Cognitive Reflection\"] --> Memory[(Reasoning Memory)]\n Meta_Reflect --> Revise_Paths\n Revise_Paths[\"Revise Paths Based on Feedback\"] --> Expand_Paths\n Expand_Paths[\"Iterative Expansion & Pruning\"] --> Simulate\n end\n Promising[\"Promising Paths Selected\"] --> Memory\n Memory --> Synthesize\n Synthesize[\"Synthesize Final Solution\"] --> Final[\"Final Solution \u2705\"]\n
"},{"location":"swarms/agents/iterative_agent/#workflow","title":"Workflow","text":"from swarms import IterativeReflectiveExpansion\n\nagent = IterativeReflectiveExpansion(\n max_iterations=3,\n)\n\nagent.run(\"What is the 40th prime number?\")\n
"},{"location":"swarms/agents/iterative_agent/#conclusion","title":"Conclusion","text":"The Iterative Reflective Expansion (IRE) Algorithm is a powerful tool for solving complex problems through iterative reasoning and reflection. By leveraging the capabilities of a Swarms agent, it can dynamically adapt and refine its approach to converge on optimal solutions.
"},{"location":"swarms/agents/message/","title":"The Module/Class Name: Message","text":"In the swarms.agents framework, the class Message
is used to represent a message with timestamp and optional metadata.
The Message
class is a fundamental component that enables the representation of messages within an agent system. Messages contain essential information such as the sender, content, timestamp, and optional metadata.
__init__
","text":"The constructor of the Message
class takes three parameters:
sender
(str): The sender of the message.content
(str): The content of the message.metadata
(dict or None): Optional metadata associated with the message.__repr__(self)
: Returns a string representation of the Message
object, including the timestamp, sender, and content.import datetime\n\n\nclass Message:\n \"\"\"\n Represents a message with timestamp and optional metadata.\n\n Usage\n --------------\n mes = Message(\n sender = \"Kye\",\n content = \"message\"\n )\n\n print(mes)\n \"\"\"\n\n def __init__(self, sender, content, metadata=None):\n self.timestamp = datetime.datetime.now()\n self.sender = sender\n self.content = content\n self.metadata = metadata or {}\n\n def __repr__(self):\n \"\"\"\n __repr__ represents the string representation of the Message object.\n\n Returns:\n (str) A string containing the timestamp, sender, and content of the message.\n \"\"\"\n return f\"{self.timestamp} - {self.sender}: {self.content}\"\n
"},{"location":"swarms/agents/message/#functionality-and-usage","title":"Functionality and Usage","text":"The Message
class represents a message in the agent system. Upon initialization, the timestamp
is set to the current date and time, and the metadata
is set to an empty dictionary if no metadata is provided.
Creating a Message
object and displaying its string representation.
mes = Message(sender=\"Kye\", content=\"Hello! How are you?\")\n\nprint(mes)\n
Output:
2023-09-20 13:45:00 - Kye: Hello! How are you?\n
"},{"location":"swarms/agents/message/#usage-example-2","title":"Usage Example 2","text":"Creating a Message
object with metadata.
metadata = {\"priority\": \"high\", \"category\": \"urgent\"}\nmes_with_metadata = Message(\n sender=\"Alice\", content=\"Important update\", metadata=metadata\n)\n\nprint(mes_with_metadata)\n
Output:
2023-09-20 13:46:00 - Alice: Important update\n
"},{"location":"swarms/agents/message/#usage-example-3","title":"Usage Example 3","text":"Creating a Message
object without providing metadata.
mes_no_metadata = Message(sender=\"Bob\", content=\"Reminder: Meeting at 2PM\")\n\nprint(mes_no_metadata)\n
Output:
2023-09-20 13:47:00 - Bob: Reminder: Meeting at 2PM\n
"},{"location":"swarms/agents/message/#additional-information-and-tips","title":"Additional Information and Tips","text":"When creating a new Message
object, ensure that the required parameters sender
and content
are provided. The timestamp
will automatically be assigned the current date and time. Optional metadata
can be included to provide additional context or information associated with the message.
For further information on the Message
class and its usage, refer to the official swarms.agents documentation and relevant tutorials related to message handling and communication within the agent system.
This guide will walk you through the steps to build high-quality agents by extending the Agent
class. It emphasizes best practices, the use of type annotations, comprehensive documentation, and modular design to ensure maintainability and scalability. Additionally, you will learn how to incorporate a callable llm
parameter or specify a model_name
attribute to enhance flexibility and functionality. These principles ensure that agents are not only functional but also robust and adaptable to future requirements.
A good agent is a modular and reusable component designed to perform specific tasks efficiently. By inheriting from the base Agent
class, developers can extend its functionality while adhering to standardized principles. Each custom agent should:
Agent
class to maintain compatibility with swarms.run(task: str, img: str)
method to execute tasks effectively.name
, system_prompt
, and description
to enhance clarity.llm
parameter (callable) or a model_name
to enable seamless integration with language models.By following these guidelines, you can create agents that integrate well with broader systems and exhibit high reliability in real-world applications.
"},{"location":"swarms/agents/new_agent/#creating-a-custom-agent","title":"Creating a Custom Agent","text":"Here is a detailed template for creating a custom agent by inheriting the Agent
class. This template demonstrates how to structure an agent with extendable and reusable features:
from typing import Callable, Any\nfrom swarms import Agent\n\nclass MyNewAgent(Agent):\n \"\"\"\n A custom agent class for specialized tasks.\n\n Attributes:\n name (str): The name of the agent.\n system_prompt (str): The prompt guiding the agent's behavior.\n description (str): A brief description of the agent's purpose.\n llm (Callable, optional): A callable representing the language model to use.\n \"\"\"\n\n def __init__(self, name: str, system_prompt: str, description: str, model_name: str = None, llm: Callable = None):\n \"\"\"\n Initialize the custom agent.\n\n Args:\n name (str): The name of the agent.\n system_prompt (str): The prompt guiding the agent.\n description (str): A description of the agent's purpose.\n model_name (str, optional): The name of a model usable via litellm (e.g., openai/gpt-4o).\n llm (Callable, optional): A callable representing the language model to use.\n \"\"\"\n super().__init__(agent_name=name, system_prompt=system_prompt, model_name=model_name)\n self.agent_name = name\n self.system_prompt = system_prompt\n self.description = description\n self.model_name = model_name\n self.llm = llm\n\n def run(self, task: str, img: str, *args: Any, **kwargs: Any) -> Any:\n \"\"\"\n Execute the task assigned to the agent.\n\n Args:\n task (str): The task description.\n img (str): The image input for processing.\n *args: Additional positional arguments.\n **kwargs: Additional keyword arguments.\n\n Returns:\n Any: The result of the task execution.\n \"\"\"\n # Your custom logic\n ...\n
This design ensures a seamless extension of functionality while maintaining clear and maintainable code.
"},{"location":"swarms/agents/new_agent/#key-considerations","title":"Key Considerations","text":""},{"location":"swarms/agents/new_agent/#1-type-annotations","title":"1. Type Annotations","text":"Always use type hints for method parameters and return values. This improves code readability, supports static analysis tools, and reduces bugs, ensuring long-term reliability.
"},{"location":"swarms/agents/new_agent/#2-comprehensive-documentation","title":"2. Comprehensive Documentation","text":"Provide detailed docstrings for all classes, methods, and attributes. Clear documentation ensures that your agent's functionality is understandable to both current and future collaborators.
"},{"location":"swarms/agents/new_agent/#3-modular-design","title":"3. Modular Design","text":"Keep the agent logic modular and reusable. Modularity simplifies debugging, testing, and extending functionalities, making the code more adaptable to diverse scenarios.
"},{"location":"swarms/agents/new_agent/#4-flexible-model-integration","title":"4. Flexible Model Integration","text":"Use either an llm
callable or model_name
attribute for integrating language models. This flexibility ensures your agent can adapt to various tasks, environments, and system requirements.
Incorporate robust error handling to manage unexpected inputs or issues during execution. This not only ensures reliability but also builds user trust in your system.
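As a minimal sketch of this principle (the SafeAgentMixin and execute names are ours, not part of swarms), a run() method can validate its inputs up front and convert low-level failures into actionable errors:

```python
from typing import Any

class SafeAgentMixin:
    """Illustrative mixin (hypothetical name) showing defensive checks in run()."""

    def run(self, task: str, img: str = "", *args: Any, **kwargs: Any) -> str:
        # Validate inputs before doing any expensive work
        if not isinstance(task, str) or not task.strip():
            raise ValueError("task must be a non-empty string")
        try:
            return self.execute(task, img)
        except Exception as exc:
            # Surface a clear, actionable message instead of a raw traceback
            raise RuntimeError(f"agent failed on task {task!r}: {exc}") from exc

    def execute(self, task: str, img: str) -> str:
        raise NotImplementedError

class EchoAgent(SafeAgentMixin):
    def execute(self, task: str, img: str) -> str:
        return f"done: {task}"

print(EchoAgent().run("Analyze content"))  # -> done: Analyze content
```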
"},{"location":"swarms/agents/new_agent/#6-scalability-considerations","title":"6. Scalability Considerations","text":"Ensure your agent design can scale to accommodate increased complexity or a larger number of tasks without compromising performance.
"},{"location":"swarms/agents/new_agent/#example-usage","title":"Example Usage","text":"Here is an example of how to use your custom agent effectively:
from typing import Any\n\n# Example LLM callable\nclass MockLLM:\n \"\"\"\n A mock language model class for simulating LLM behavior.\n\n Methods:\n run(task: str, img: str, *args: Any, **kwargs: Any) -> str:\n Processes the task and image input to return a simulated response.\n \"\"\"\n\n def run(self, task: str, img: str, *args: Any, **kwargs: Any) -> str:\n return f\"Processed task '{task}' with image '{img}'\"\n\n# Create an instance of MyNewAgent\nagent = MyNewAgent(\n name=\"ImageProcessor\",\n system_prompt=\"Process images and extract relevant details.\",\n description=\"An agent specialized in processing images and extracting insights.\",\n llm=MockLLM().run\n)\n\n# Run a task\nresult = agent.run(task=\"Analyze content\", img=\"path/to/image.jpg\")\nprint(result)\n
This example showcases the practical application of the MyNewAgent
class and highlights its extensibility.
In this example, we will create a Griptape agent by inheriting from the Swarms Agent
class and implementing the run
method.
Inherit from the SwarmsAgent
class, then override its run()
method: implement logic to process a task string and execute the Griptape agent.from swarms import (\n Agent as SwarmsAgent,\n) # Import the base Agent class from Swarms\nfrom griptape.structures import Agent as GriptapeAgent\nfrom griptape.tools import (\n WebScraperTool,\n FileManagerTool,\n PromptSummaryTool,\n)\n\n# Create a custom agent class that inherits from SwarmsAgent\nclass GriptapeSwarmsAgent(SwarmsAgent):\n def __init__(self, name: str = \"GriptapeSwarmsAgent\", system_prompt: str = \"You summarize web pages and store the results in files.\", *args, **kwargs): # defaults let the no-argument example below run\n super().__init__(agent_name=name, system_prompt=system_prompt)\n # Initialize the Griptape agent with its tools\n self.agent = GriptapeAgent(\n input=\"Load {{ args[0] }}, summarize it, and store it in a file called {{ args[1] }}.\",\n tools=[\n WebScraperTool(off_prompt=True),\n PromptSummaryTool(off_prompt=True),\n FileManagerTool(),\n ],\n *args,\n **kwargs,\n )\n\n # Override the run method to take a task and execute it using the Griptape agent\n def run(self, task: str) -> str:\n # Extract the URL and filename from the task string\n url, filename = task.split(\",\") # Example task string: \"https://example.com, output.txt\"\n # Execute the Griptape agent\n result = self.agent.run(url.strip(), filename.strip())\n # Return the final result as a string\n return str(result)\n\n\n# Example usage:\ngriptape_swarms_agent = GriptapeSwarmsAgent()\noutput = griptape_swarms_agent.run(\"https://griptape.ai, griptape.txt\")\nprint(output)\n
"},{"location":"swarms/agents/new_agent/#best-practices","title":"Best Practices","text":"Test Extensively: Validate your agent with various task inputs to ensure it performs as expected under different conditions.
Follow the Single Responsibility Principle: Design each agent to focus on a specific task or role, ensuring clarity and modularity in implementation.
Log Actions: Include detailed logging within the run
method to capture key actions, inputs, and results for debugging and monitoring.
Use Open-Source Contributions: Contribute your custom agents to the Swarms repository at https://github.com/kyegomez/swarms. Sharing your innovations helps advance the ecosystem and encourages collaboration.
Iterate and Refactor: Continuously improve your agents based on feedback, performance evaluations, and new requirements to maintain relevance and functionality.
By following these guidelines, you can create powerful and flexible agents tailored to specific tasks. Leveraging inheritance from the Agent
class ensures compatibility and standardization across swarms. Emphasize modularity, thorough testing, and clear documentation to build agents that are robust, scalable, and easy to integrate. Collaborate with the community by submitting your innovative agents to the Swarms repository, contributing to a growing ecosystem of intelligent solutions. With a well-designed agent, you are equipped to tackle diverse challenges efficiently and effectively.
The OpenAIAssistant class provides a wrapper around OpenAI's Assistants API, integrating it with the Swarms framework.
"},{"location":"swarms/agents/openai_assistant/#overview","title":"Overview","text":"The OpenAIAssistant
class allows you to create and interact with OpenAI Assistants, providing a simple interface for:
pip install swarms\n
"},{"location":"swarms/agents/openai_assistant/#basic-usage","title":"Basic Usage","text":"from swarms import OpenAIAssistant\n\n#Create an assistant\nassistant = OpenAIAssistant(\n name=\"Math Tutor\",\n instructions=\"You are a helpful math tutor.\",\n model=\"gpt-4o\",\n tools=[{\"type\": \"code_interpreter\"}]\n)\n\n#Run a Task\nresponse = assistant.run(\"Solve the equation: 3x + 11 = 14\")\nprint(response)\n\n# Continue the conversation in the same thread\nfollow_up = assistant.run(\"Now explain how you solved it\")\nprint(follow_up)\n
"},{"location":"swarms/agents/openai_assistant/#function-calling","title":"Function Calling","text":"The assistant supports custom function integration:
def get_weather(location: str, unit: str = \"celsius\") -> str:\n # Mock weather function\n return f\"The weather in {location} is 22 degrees {unit}\"\n\n# Add function to assistant\nassistant.add_function(\n description=\"Get the current weather in a location\",\n parameters={\n \"type\": \"object\",\n \"properties\": {\n \"location\": {\n \"type\": \"string\",\n \"description\": \"City name\"\n },\n \"unit\": {\n \"type\": \"string\",\n \"enum\": [\"celsius\", \"fahrenheit\"],\n \"default\": \"celsius\"\n }\n },\n \"required\": [\"location\"]\n }\n)\n
"},{"location":"swarms/agents/openai_assistant/#api-reference","title":"API Reference","text":""},{"location":"swarms/agents/openai_assistant/#constructor","title":"Constructor","text":"OpenAIAssistant(\n name: str,\n instructions: Optional[str] = None,\n model: str = \"gpt-4o\",\n tools: Optional[List[Dict[str, Any]]] = None,\n file_ids: Optional[List[str]] = None,\n metadata: Optional[Dict[str, Any]] = None,\n functions: Optional[List[Dict[str, Any]]] = None,\n)\n
"},{"location":"swarms/agents/openai_assistant/#methods","title":"Methods","text":""},{"location":"swarms/agents/openai_assistant/#runtask-str-str","title":"run(task: str) -> str","text":"Sends a task to the assistant and returns its response. The conversation thread is maintained between calls.
"},{"location":"swarms/agents/openai_assistant/#add_functionfunc-callable-description-str-parameters-dictstr-any-none","title":"add_function(func: Callable, description: str, parameters: Dict[str, Any]) -> None","text":"Adds a callable function that the assistant can use during conversations.
"},{"location":"swarms/agents/openai_assistant/#add_messagecontent-str-file_ids-optionalliststr-none-none","title":"add_message(content: str, file_ids: Optional[List[str]] = None) -> None","text":"Adds a message to the current conversation thread.
"},{"location":"swarms/agents/openai_assistant/#error-handling","title":"Error Handling","text":"The assistant implements robust error handling: - Retries on rate limits - Graceful handling of API errors - Clear error messages for debugging - Status monitoring for runs and completions
"},{"location":"swarms/agents/openai_assistant/#best-practices","title":"Best Practices","text":"Monitor thread status during long-running operations
Function Integration
Test functions independently before integration
Performance
Overview
The ReasoningAgentRouter is a sophisticated agent routing system that enables dynamic selection and execution of different reasoning strategies based on the task requirements. It provides a flexible interface to work with multiple reasoning approaches including Reasoning Duo, Self-Consistency, IRE (Iterative Reflective Expansion), Reflexion, GKP (Generated Knowledge Prompting), and Agent Judge.
"},{"location":"swarms/agents/reasoning_agent_router/#architecture","title":"Architecture","text":"graph TD\n Task[Task Input] --> Router[ReasoningAgentRouter]\n Router --> SelectSwarm{Select Swarm Type}\n SelectSwarm -->|Reasoning Duo| RD[ReasoningDuo]\n SelectSwarm -->|Self Consistency| SC[SelfConsistencyAgent]\n SelectSwarm -->|IRE| IRE[IterativeReflectiveExpansion]\n SelectSwarm -->|Reflexion| RF[ReflexionAgent]\n SelectSwarm -->|GKP| GKP[GKPAgent]\n SelectSwarm -->|Agent Judge| AJ[AgentJudge]\n RD --> Output[Task Output]\n SC --> Output\n IRE --> Output\n RF --> Output\n GKP --> Output\n AJ --> Output
"},{"location":"swarms/agents/reasoning_agent_router/#configuration","title":"Configuration","text":""},{"location":"swarms/agents/reasoning_agent_router/#arguments","title":"Arguments","text":"Constructor Parameters
Argument Type Default Descriptionagent_name
str \"reasoning_agent\" Name identifier for the agent description
str \"A reasoning agent...\" Description of the agent's capabilities model_name
str \"gpt-4o-mini\" The underlying language model to use system_prompt
str \"You are a helpful...\" System prompt for the agent max_loops
int 1 Maximum number of reasoning loops swarm_type
agent_types \"reasoning_duo\" Type of reasoning swarm to use num_samples
int 1 Number of samples for self-consistency output_type
OutputType \"dict-all-except-first\" Format of the output num_knowledge_items
int 6 Number of knowledge items for GKP agent memory_capacity
int 6 Memory capacity for agents that support it eval
bool False Enable evaluation mode for self-consistency random_models_on
bool False Enable random model selection for diversity majority_voting_prompt
Optional[str] None Custom prompt for majority voting reasoning_model_name
Optional[str] \"claude-3-5-sonnet-20240620\" Model to use for reasoning in ReasoningDuo"},{"location":"swarms/agents/reasoning_agent_router/#available-agent-types","title":"Available Agent Types","text":"Supported Types
The following agent types are supported through the swarm_type
parameter:
\"reasoning-duo\"
or \"reasoning-agent\"
\"self-consistency\"
or \"consistency-agent\"
\"ire\"
or \"ire-agent\"
\"ReflexionAgent\"
\"GKPAgent\"
\"AgentJudge\"
Available Methods
Method Descriptionselect_swarm()
Selects and initializes the appropriate reasoning swarm based on specified type run(task: str, img: Optional[str] = None, **kwargs)
Executes the selected swarm's reasoning process on the given task batched_run(tasks: List[str], imgs: Optional[List[str]] = None, **kwargs)
Executes the reasoning process on a batch of tasks"},{"location":"swarms/agents/reasoning_agent_router/#image-support","title":"Image Support","text":"Multi-modal Capabilities
The ReasoningAgentRouter supports image inputs for compatible agent types:
Supported Parameters:
img
(str, optional): Path or URL to a single image file for single task executionimgs
(List[str], optional): List of image paths/URLs for batch task executionCompatible Agent Types:
reasoning-duo
/ reasoning-agent
: Full image support for both reasoning and execution phasesUsage Example:
# Single image with task\nrouter = ReasoningAgentRouter(swarm_type=\"reasoning-duo\")\nresult = router.run(\n task=\"Describe what you see in this image\",\n img=\"path/to/image.jpg\"\n)\n\n# Batch processing with images\nresults = router.batched_run(\n tasks=[\"Analyze this chart\", \"Describe this photo\"],\n imgs=[\"chart.png\", \"photo.jpg\"]\n)\n
"},{"location":"swarms/agents/reasoning_agent_router/#code-examples","title":"Code Examples","text":"Basic UsageSelf-Consistency ExamplesReflexionAgentGKPAgentReasoningDuo ExamplesAgentJudge from swarms.agents.reasoning_agents import ReasoningAgentRouter\n\n# Initialize the router\nrouter = ReasoningAgentRouter(\n agent_name=\"reasoning-agent\",\n description=\"A reasoning agent that can answer questions and help with tasks.\",\n model_name=\"gpt-4o-mini\",\n system_prompt=\"You are a helpful assistant that can answer questions and help with tasks.\",\n max_loops=1,\n swarm_type=\"self-consistency\",\n num_samples=3,\n eval=False,\n random_models_on=False,\n majority_voting_prompt=None\n)\n\n# Run a single task\nresult = router.run(\"What is the best approach to solve this problem?\")\n\n# Run with image input\nresult_with_image = router.run(\n \"Analyze this image and provide insights\",\n img=\"path/to/image.jpg\"\n)\n
# Basic self-consistency\nrouter = ReasoningAgentRouter(\n swarm_type=\"self-consistency\",\n num_samples=3,\n model_name=\"gpt-4o-mini\"\n)\n\n# Self-consistency with evaluation mode\nrouter = ReasoningAgentRouter(\n swarm_type=\"self-consistency\",\n num_samples=5,\n model_name=\"gpt-4o-mini\",\n eval=True,\n random_models_on=True\n)\n\n# Self-consistency with custom majority voting\nrouter = ReasoningAgentRouter(\n swarm_type=\"self-consistency\",\n num_samples=3,\n model_name=\"gpt-4o-mini\",\n majority_voting_prompt=\"Analyze the responses and provide the most accurate answer.\"\n)\n
router = ReasoningAgentRouter(\n swarm_type=\"ReflexionAgent\",\n max_loops=3,\n model_name=\"gpt-4o-mini\"\n)\n
router = ReasoningAgentRouter(\n swarm_type=\"GKPAgent\",\n model_name=\"gpt-4o-mini\",\n num_knowledge_items=6\n)\n
# Basic ReasoningDuo\nrouter = ReasoningAgentRouter(\n swarm_type=\"reasoning-duo\",\n model_name=\"gpt-4o-mini\",\n reasoning_model_name=\"claude-3-5-sonnet-20240620\"\n)\n\n# ReasoningDuo with image support\nrouter = ReasoningAgentRouter(\n swarm_type=\"reasoning-duo\",\n model_name=\"gpt-4o-mini\",\n reasoning_model_name=\"gpt-4-vision-preview\",\n max_loops=2\n)\n\nresult = router.run(\n \"Analyze this image and explain the patterns you see\",\n img=\"data_visualization.png\"\n)\n
router = ReasoningAgentRouter(\n swarm_type=\"AgentJudge\",\n model_name=\"gpt-4o-mini\",\n max_loops=2\n)\n
"},{"location":"swarms/agents/reasoning_agent_router/#best-practices","title":"Best Practices","text":"Optimization Tips
Swarm Type Selection
Use SelfConsistency for tasks requiring high reliability
Use IRE for complex problem-solving requiring iterative refinement
Performance Optimization
Adjust max_loops based on task complexity
Increase num_samples for higher reliability (3-7 for most tasks)
Choose appropriate model_name based on task requirements
Enable random_models_on for diverse reasoning approaches
Use eval mode for validation tasks with known answers
Output Handling
Use appropriate output_type for your needs
Process batched results appropriately
Handle errors gracefully
Self-Consistency Specific
Use 3-5 samples for most tasks, 7+ for critical decisions
Enable eval mode when you have expected answers for validation
Customize majority_voting_prompt for domain-specific aggregation
Consider random_models_on for diverse model perspectives
Multi-modal and Reasoning Configuration
Use vision-capable models when processing images (e.g., \"gpt-4-vision-preview\")
For ReasoningDuo, set different models for reasoning vs execution via reasoning_model_name
Ensure image paths are accessible and in supported formats (JPG, PNG, etc.)
Consider using reasoning_model_name with specialized reasoning models for complex tasks
Known Limitations
Processing time increases with:
Larger max_loops
More complex tasks
Model-specific limitations based on:
Token limits
Model capabilities
API rate limits
Development Guidelines
When extending the ReasoningAgentRouter:
Reasoning agents are sophisticated agents that employ advanced cognitive strategies to improve problem-solving performance beyond standard language model capabilities. Unlike traditional prompt-based approaches, reasoning agents implement structured methodologies that enable them to think more systematically, self-reflect, collaborate, and iteratively refine their responses.
These agents are inspired by cognitive science and human reasoning processes, incorporating techniques such as:
Multi-step reasoning: Breaking down complex problems into manageable components
Self-reflection: Evaluating and critiquing their own outputs
Iterative refinement: Progressively improving solutions through multiple iterations
Collaborative thinking: Using multiple reasoning pathways or agent perspectives
Memory integration: Learning from past experiences and building knowledge over time
Meta-cognitive awareness: Understanding their own thinking processes and limitations
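The iterative-refinement technique above can be sketched as a generic draft-critique-revise loop (the generate, critique, and revise callables are placeholders for model calls, not Swarms APIs):

```python
def refine(generate, critique, revise, max_loops=3):
    """Draft -> critique -> revise loop; stops early when the critique passes."""
    draft = generate()
    for _ in range(max_loops):
        feedback = critique(draft)
        if feedback is None:  # nothing left to fix
            break
        draft = revise(draft, feedback)
    return draft

# Toy example: nudge a numeric answer toward a target until it is close enough
target = 10
answer = refine(
    generate=lambda: 4,
    critique=lambda a: None if abs(a - target) <= 1 else target - a,
    revise=lambda a, delta: a + max(1, delta // 2) * (1 if delta > 0 else -1),
)
```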
SelfConsistencyAgent
Guide Reasoning Duo Collaborative Novel dual-agent architecture \u2022 Separate reasoning and execution agents\u2022 Collaborative problem solving\u2022 Task decomposition\u2022 Cross-validation \u2022 Complex analysis tasks\u2022 Multi-step problem solving\u2022 Tasks requiring verification\u2022 Research and planning ReasoningDuo
Guide IRE Agent Iterative Iterative Reflective Expansion framework \u2022 Hypothesis generation\u2022 Path simulation\u2022 Error reflection\u2022 Dynamic revision \u2022 Complex reasoning tasks\u2022 Research problems\u2022 Learning scenarios\u2022 Strategy development IterativeReflectiveExpansion
Guide Reflexion Agent Self-reflective Reflexion: Language Agents with Verbal Reinforcement Learning (Shinn et al., 2023) \u2022 Self-evaluation\u2022 Experience memory\u2022 Adaptive improvement\u2022 Learning from failures \u2022 Continuous improvement tasks\u2022 Long-term projects\u2022 Learning scenarios\u2022 Quality refinement ReflexionAgent
Guide GKP Agent Knowledge-based Generated Knowledge Prompting (Liu et al., 2022) \u2022 Knowledge generation\u2022 Multi-perspective reasoning\u2022 Information synthesis\u2022 Fact integration \u2022 Knowledge-intensive tasks\u2022 Research questions\u2022 Fact-based reasoning\u2022 Information synthesis GKPAgent
Guide Agent Judge Evaluation Agent-as-a-Judge: Evaluate Agents with Agents \u2022 Quality assessment\u2022 Structured evaluation\u2022 Performance metrics\u2022 Feedback generation \u2022 Quality control\u2022 Output evaluation\u2022 Performance assessment\u2022 Model comparison AgentJudge
Guide REACT Agent Action-based ReAct: Synergizing Reasoning and Acting (Yao et al., 2022) \u2022 Reason-Act-Observe cycle\u2022 Memory integration\u2022 Action planning\u2022 Experience building \u2022 Interactive tasks\u2022 Tool usage scenarios\u2022 Planning problems\u2022 Learning environments ReactAgent
Guide"},{"location":"swarms/agents/reasoning_agents_overview/#agent-architectures","title":"Agent Architectures","text":""},{"location":"swarms/agents/reasoning_agents_overview/#self-consistency-agent","title":"Self-Consistency Agent","text":"Description: Implements multiple independent reasoning paths with consensus-building to improve response reliability and accuracy through majority voting mechanisms.
Key Features:
Concurrent execution of multiple reasoning instances
AI-powered aggregation and consensus analysis
Validation mode for answer verification
Configurable sample sizes and output formats
Architecture Diagram:
graph TD\n A[Task Input] --> B[Agent Pool]\n B --> C[Response 1]\n B --> D[Response 2]\n B --> E[Response 3]\n B --> F[Response N]\n C --> G[Aggregation Agent]\n D --> G\n E --> G\n F --> G\n G --> H[Majority Voting Analysis]\n H --> I[Consensus Evaluation]\n I --> J[Final Answer]\n\n style A fill:#e1f5fe\n style J fill:#c8e6c9\n style G fill:#fff3e0
Use Cases: Mathematical problem solving, high-stakes decision making, answer validation, quality assurance processes
Implementation: SelfConsistencyAgent
Documentation: Self-Consistency Agent Guide
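The majority-voting step can be sketched with `collections.Counter` — run N independent samples and keep the most common answer (a simplification of the AI-powered aggregation the agent actually performs):

```python
from collections import Counter

def majority_vote(samples):
    """Return the most common answer among independent samples, plus its agreement ratio."""
    answer, votes = Counter(samples).most_common(1)[0]
    return answer, votes / len(samples)

# Five simulated reasoning paths; three of them agree
answer, agreement = majority_vote(["42", "41", "42", "42", "40"])
```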
"},{"location":"swarms/agents/reasoning_agents_overview/#reasoning-duo","title":"Reasoning Duo","text":"Description: Dual-agent collaborative system that separates reasoning and execution phases, enabling specialized analysis and task completion through coordinated agent interaction.
Key Features:
Separate reasoning and execution agents
Collaborative problem decomposition
Cross-validation between agents
Configurable model selection for each agent
Architecture Diagram:
graph TD\n A[Task Input] --> B[Reasoning Agent]\n B --> C[Deep Analysis]\n C --> D[Strategy Planning]\n D --> E[Reasoning Output]\n E --> F[Main Agent]\n F --> G[Task Execution]\n G --> H[Response Generation]\n H --> I[Final Output]\n\n style A fill:#e1f5fe\n style B fill:#f3e5f5\n style F fill:#e8f5e8\n style I fill:#c8e6c9
Use Cases: Complex analysis tasks, multi-step problem solving, research and planning, verification workflows
Implementation: ReasoningDuo
Documentation: Reasoning Duo Guide
"},{"location":"swarms/agents/reasoning_agents_overview/#ire-agent-iterative-reflective-expansion","title":"IRE Agent (Iterative Reflective Expansion)","text":"Description: Sophisticated reasoning framework employing iterative hypothesis generation, simulation, and refinement through continuous cycles of testing and meta-cognitive reflection.
Key Features:
Hypothesis generation and testing
Path simulation and evaluation
Meta-cognitive reflection capabilities
Dynamic strategy revision based on feedback
Architecture Diagram:
graph TD\n A[Problem Input] --> B[Hypothesis Generation]\n B --> C[Path Simulation]\n C --> D[Outcome Evaluation]\n D --> E{Satisfactory?}\n E -->|No| F[Meta-Cognitive Reflection]\n F --> G[Path Revision]\n G --> H[Knowledge Integration]\n H --> C\n E -->|Yes| I[Solution Synthesis]\n I --> J[Final Answer]\n\n style A fill:#e1f5fe\n style F fill:#fff3e0\n style J fill:#c8e6c9
Use Cases: Complex reasoning tasks, research problems, strategy development, iterative learning scenarios
Implementation: IterativeReflectiveExpansion
Documentation: IRE Agent Guide
"},{"location":"swarms/agents/reasoning_agents_overview/#reflexion-agent","title":"Reflexion Agent","text":"Description: Advanced self-reflective system implementing actor-evaluator-reflector architecture for continuous improvement through experience-based learning and memory integration.
Key Features:
Actor-evaluator-reflector sub-agent architecture
Self-evaluation and quality assessment
Experience memory and learning capabilities
Adaptive improvement through reflection
Architecture Diagram:
graph TD\n A[Task Input] --> B[Actor Agent]\n B --> C[Initial Response]\n C --> D[Evaluator Agent]\n D --> E[Quality Assessment]\n E --> F[Performance Score]\n F --> G[Reflector Agent]\n G --> H[Self-Reflection]\n H --> I[Experience Memory]\n I --> J{Max Iterations?}\n J -->|No| K[Refined Response]\n K --> D\n J -->|Yes| L[Final Response]\n\n style A fill:#e1f5fe\n style B fill:#e8f5e8\n style D fill:#fff3e0\n style G fill:#f3e5f5\n style L fill:#c8e6c9
Use Cases: Continuous improvement tasks, long-term projects, adaptive learning, quality refinement processes
Implementation: ReflexionAgent
Documentation: Reflexion Agent Guide
"},{"location":"swarms/agents/reasoning_agents_overview/#gkp-agent-generated-knowledge-prompting","title":"GKP Agent (Generated Knowledge Prompting)","text":"Description: Knowledge-driven reasoning system that generates relevant information before answering queries, implementing multi-perspective analysis through coordinated knowledge synthesis.
Key Features:
Dynamic knowledge generation
Multi-perspective reasoning coordination
Information synthesis and integration
Configurable knowledge item generation
Architecture Diagram:
graph TD\n A[Query Input] --> B[Knowledge Generator]\n B --> C[Generate Knowledge Item 1]\n B --> D[Generate Knowledge Item 2]\n B --> E[Generate Knowledge Item N]\n C --> F[Reasoner Agent]\n D --> F\n E --> F\n F --> G[Knowledge Integration]\n G --> H[Reasoning Process]\n H --> I[Response Generation]\n I --> J[Coordinator]\n J --> K[Final Answer]\n\n style A fill:#e1f5fe\n style B fill:#fff3e0\n style F fill:#e8f5e8\n style J fill:#f3e5f5\n style K fill:#c8e6c9
Use Cases: Knowledge-intensive tasks, research questions, fact-based reasoning, information synthesis
Implementation: GKPAgent
Documentation: GKP Agent Guide
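The generate-then-reason flow can be sketched as two stages — produce K knowledge items for the query, then answer conditioned on them (the callables here are toy stand-ins for model calls, not Swarms APIs):

```python
def gkp_answer(query, generate_knowledge, reason, num_items=3):
    """Generated Knowledge Prompting: produce knowledge items, then reason over them."""
    knowledge = [generate_knowledge(query, i) for i in range(num_items)]
    return reason(query, knowledge)

# Toy stand-ins for the knowledge generator and the reasoner
facts = [
    "water boils at 100 C at sea level",
    "boiling point drops with altitude",
    "Denver sits at roughly 1600 m elevation",
]
result = gkp_answer(
    "Does water boil below 100 C in Denver?",
    generate_knowledge=lambda query, i: facts[i],
    reason=lambda query, items: "yes" if any("altitude" in k for k in items) else "unknown",
)
```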
"},{"location":"swarms/agents/reasoning_agents_overview/#agent-judge","title":"Agent Judge","text":"Description: Specialized evaluation system for assessing agent outputs and system performance, providing structured feedback and quality metrics through comprehensive assessment frameworks.
Key Features:
Structured evaluation methodology
Quality assessment and scoring
Performance metrics generation
Configurable evaluation criteria
Architecture Diagram:
graph TD\n A[Output to Evaluate] --> B[Evaluation Criteria]\n A --> C[Judge Agent]\n B --> C\n C --> D[Quality Analysis]\n D --> E[Criteria Assessment]\n E --> F[Scoring Framework]\n F --> G[Feedback Generation]\n G --> H[Evaluation Report]\n\n style A fill:#e1f5fe\n style C fill:#fff3e0\n style H fill:#c8e6c9
Use Cases: Quality control, output evaluation, performance assessment, model comparison
Implementation: AgentJudge
Documentation: Agent Judge Guide
"},{"location":"swarms/agents/reasoning_agents_overview/#react-agent-reason-act-observe","title":"REACT Agent (Reason-Act-Observe)","text":"Description: Action-oriented reasoning system implementing iterative reason-act-observe cycles with memory integration for interactive task completion and environmental adaptation.
Key Features:
Reason-Act-Observe cycle implementation
Memory integration and experience building
Action planning and execution
Environmental state observation
Architecture Diagram:
graph TD\n A[Task Input] --> B[Memory Review]\n B --> C[Current State Observation]\n C --> D[Reasoning Process]\n D --> E[Action Planning]\n E --> F[Action Execution]\n F --> G[Outcome Observation]\n G --> H[Experience Storage]\n H --> I{Task Complete?}\n I -->|No| C\n I -->|Yes| J[Final Response]\n\n style A fill:#e1f5fe\n style B fill:#f3e5f5\n style D fill:#fff3e0\n style J fill:#c8e6c9
Use Cases: Interactive tasks, tool usage scenarios, planning problems, learning environments
Implementation: ReactAgent
Documentation: REACT Agent Guide
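The Reason-Act-Observe cycle can be sketched as a loop that plans an action, executes it against an environment, records the observation, and stops once the goal is met (the planner and environment are toy stand-ins):

```python
def react_loop(plan, act, done, max_steps=5):
    """Iterate reason -> act -> observe, accumulating a memory of observations."""
    memory = []
    for _ in range(max_steps):
        action = plan(memory)        # reason: choose the next action from memory
        observation = act(action)    # act: execute it against the environment
        memory.append((action, observation))
        if done(observation):        # observe: stop once the goal state is reached
            break
    return memory

# Toy environment: a counter that the agent increments until it reaches 3
state = {"count": 0}
def increment(action):
    state["count"] += 1
    return state["count"]

trace = react_loop(
    plan=lambda memory: "increment",
    act=increment,
    done=lambda obs: obs >= 3,
)
```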
"},{"location":"swarms/agents/reasoning_agents_overview/#implementation-guide","title":"Implementation Guide","text":""},{"location":"swarms/agents/reasoning_agents_overview/#unified-interface-via-reasoning-agent-router","title":"Unified Interface via Reasoning Agent Router","text":"The ReasoningAgentRouter
provides a centralized interface for accessing all reasoning agent implementations:
from swarms.agents import ReasoningAgentRouter\n\n# Initialize router with specific reasoning strategy\nrouter = ReasoningAgentRouter(\n swarm_type=\"self-consistency\", # Select reasoning methodology\n model_name=\"gpt-4o-mini\",\n num_samples=5, # Configuration for consensus-based methods\n max_loops=3 # Configuration for iterative methods\n)\n\n# Execute reasoning process\nresult = router.run(\"Analyze the optimal solution for this complex business problem\")\nprint(result)\n
"},{"location":"swarms/agents/reasoning_agents_overview/#direct-agent-implementation","title":"Direct Agent Implementation","text":"from swarms.agents import SelfConsistencyAgent, ReasoningDuo, ReflexionAgent\n\n# Self-Consistency Agent for high-accuracy requirements\nconsistency_agent = SelfConsistencyAgent(\n model_name=\"gpt-4o-mini\",\n num_samples=5\n)\n\n# Reasoning Duo for collaborative analysis workflows\nduo_agent = ReasoningDuo(\n model_names=[\"gpt-4o-mini\", \"gpt-4o\"]\n)\n\n# Reflexion Agent for adaptive learning scenarios\nreflexion_agent = ReflexionAgent(\n model_name=\"gpt-4o-mini\",\n max_loops=3,\n memory_capacity=100\n)\n
"},{"location":"swarms/agents/reasoning_agents_overview/#choosing-the-right-reasoning-agent","title":"Choosing the Right Reasoning Agent","text":"Scenario Recommended Agent Why? High-stakes decisions Self-Consistency Multiple validation paths ensure reliability Complex research tasks Reasoning Duo + GKP Collaboration + knowledge synthesis Learning & improvement Reflexion Built-in self-improvement mechanisms Mathematical problems Self-Consistency Proven effectiveness on logical reasoning Quality assessment Agent Judge Specialized evaluation capabilities Interactive planning REACT Action-oriented reasoning cycle Iterative refinement IRE Designed for progressive improvement"},{"location":"swarms/agents/reasoning_agents_overview/#technical-documentation","title":"Technical Documentation","text":"For comprehensive technical documentation on each reasoning agent implementation:
Self-Consistency Agent
Reasoning Duo
IRE Agent
Reflexion Agent
GKP Agent
Agent Judge
Reasoning Agent Router
Reasoning agents represent a significant advancement in enterprise agent capabilities, implementing sophisticated cognitive architectures that deliver enhanced reliability, consistency, and performance compared to traditional language model implementations.
"},{"location":"swarms/agents/reasoning_duo/","title":"ReasoningDuo","text":"The ReasoningDuo class implements a dual-agent reasoning system that combines a reasoning agent and a main agent to provide well-thought-out responses to complex tasks. This architecture enables more robust and reliable outputs by separating the reasoning process from the final response generation.
"},{"location":"swarms/agents/reasoning_duo/#class-overview","title":"Class Overview","text":""},{"location":"swarms/agents/reasoning_duo/#constructor-parameters","title":"Constructor Parameters","text":"Parameter Type Default Description model_name str \"reasoning-agent-01\" Name identifier for the reasoning agent description str \"A highly intelligent...\" Description of the reasoning agent's capabilities model_names list[str] [\"gpt-4o-mini\", \"gpt-4o\"] Model names for reasoning and main agents system_prompt str \"You are a helpful...\" System prompt for the main agent"},{"location":"swarms/agents/reasoning_duo/#methods","title":"Methods","text":"Method Parameters Returns Description run task: str str Processes a single task through both agents batched_run tasks: List[str] List[str] Processes multiple tasks sequentially"},{"location":"swarms/agents/reasoning_duo/#quick-start","title":"Quick Start","text":"from swarms.agents.reasoning_duo import ReasoningDuo\n\n# Initialize the ReasoningDuo\nduo = ReasoningDuo(\n model_name=\"reasoning-agent-01\",\n model_names=[\"gpt-4o-mini\", \"gpt-4o\"]\n)\n\n# Run a single task\nresult = duo.run(\"Explain the concept of gravitational waves\")\n\n# Run multiple tasks\ntasks = [\n \"Calculate compound interest for $1000 over 5 years\",\n \"Explain quantum entanglement\"\n]\nresults = duo.batched_run(tasks)\n
"},{"location":"swarms/agents/reasoning_duo/#examples","title":"Examples","text":""},{"location":"swarms/agents/reasoning_duo/#1-mathematical-analysis","title":"1. Mathematical Analysis","text":"duo = ReasoningDuo()\n\n# Complex mathematical problem\nmath_task = \"\"\"\nSolve the following differential equation:\ndy/dx + 2y = x^2, y(0) = 1\n\"\"\"\n\nsolution = duo.run(math_task)\n
"},{"location":"swarms/agents/reasoning_duo/#2-physics-problem","title":"2. Physics Problem","text":"# Quantum mechanics problem\nphysics_task = \"\"\"\nCalculate the wavelength of an electron with kinetic energy of 50 eV \nusing the de Broglie relationship.\n\"\"\"\n\nresult = duo.run(physics_task)\n
"},{"location":"swarms/agents/reasoning_duo/#3-financial-analysis","title":"3. Financial Analysis","text":"# Complex financial analysis\nfinance_task = \"\"\"\nCalculate the Net Present Value (NPV) of a project with:\n- Initial investment: $100,000\n- Annual cash flows: $25,000 for 5 years\n- Discount rate: 8%\n\"\"\"\n\nanalysis = duo.run(finance_task)\n
"},{"location":"swarms/agents/reasoning_duo/#advanced-usage","title":"Advanced Usage","text":""},{"location":"swarms/agents/reasoning_duo/#customizing-agent-behavior","title":"Customizing Agent Behavior","text":"You can customize both agents by modifying their initialization parameters:
duo = ReasoningDuo(\n model_name=\"custom-reasoning-agent\",\n description=\"Specialized financial analysis agent\",\n model_names=[\"gpt-4o-mini\", \"gpt-4o\"],\n system_prompt=\"You are a financial expert AI assistant...\"\n)\n
"},{"location":"swarms/agents/reasoning_duo/#batch-processing-with-progress-tracking","title":"Batch Processing with Progress Tracking","text":"tasks = [\n \"Analyze market trends for tech stocks\",\n \"Calculate risk metrics for a portfolio\",\n \"Forecast revenue growth\"\n]\n\n# Process multiple tasks with logging\nresults = duo.batched_run(tasks)\n
"},{"location":"swarms/agents/reasoning_duo/#implementation-details","title":"Implementation Details","text":"The ReasoningDuo uses a two-stage process:
Task Input \u2192 Reasoning Agent \u2192 Structured Analysis \u2192 Main Agent \u2192 Final Output\n
"},{"location":"swarms/agents/reasoning_duo/#best-practices","title":"Best Practices","text":"Break complex problems into smaller subtasks
Performance Optimization
The ReasoningDuo includes built-in logging using the loguru
library:
from loguru import logger\n\n# Logs are automatically generated for each task\nlogger.info(\"Task processing started\")\n
"},{"location":"swarms/agents/reasoning_duo/#limitations","title":"Limitations","text":"For a runnable demonstration, see the reasoning_duo_batched.py example.
"},{"location":"swarms/agents/reflexion_agent/","title":"ReflexionAgent","text":"The ReflexionAgent is an advanced AI agent that implements the Reflexion framework to improve through self-reflection. It follows a process of acting on tasks, evaluating its performance, generating self-reflections, and using these reflections to improve future responses.
"},{"location":"swarms/agents/reflexion_agent/#overview","title":"Overview","text":"The ReflexionAgent consists of three specialized sub-agents: - Actor: Generates initial responses to tasks - Evaluator: Critically assesses responses against quality criteria - Reflector: Generates self-reflections to improve future responses
"},{"location":"swarms/agents/reflexion_agent/#initialization","title":"Initialization","text":"from swarms.agents import ReflexionAgent\n\nagent = ReflexionAgent(\n agent_name=\"reflexion-agent\",\n system_prompt=\"...\", # Optional custom system prompt\n model_name=\"openai/o1\",\n max_loops=3,\n memory_capacity=100\n)\n
"},{"location":"swarms/agents/reflexion_agent/#parameters","title":"Parameters","text":"Parameter Type Default Description agent_name
str
\"reflexion-agent\"
Name of the agent system_prompt
str
REFLEXION_PROMPT
System prompt for the agent model_name
str
\"openai/o1\"
Model name for generating responses max_loops
int
3
Maximum number of reflection iterations per task memory_capacity
int
100
Maximum capacity of long-term memory"},{"location":"swarms/agents/reflexion_agent/#methods","title":"Methods","text":""},{"location":"swarms/agents/reflexion_agent/#act","title":"act","text":"Generates a response to the given task using the actor agent.
response = agent.act(task: str, relevant_memories: List[Dict[str, Any]] = None) -> str\n
Parameter Type Description task
str
The task to respond to relevant_memories
List[Dict[str, Any]]
Optional relevant past memories to consider"},{"location":"swarms/agents/reflexion_agent/#evaluate","title":"evaluate","text":"Evaluates the quality of a response to a task.
evaluation, score = agent.evaluate(task: str, response: str) -> Tuple[str, float]\n
Parameter Type Description task
str
The original task response
str
The response to evaluate Returns: - evaluation
: Detailed feedback on the response - score
: Numerical score between 0 and 1
"},{"location":"swarms/agents/reflexion_agent/#reflect","title":"reflect","text":"Generates a self-reflection based on the task, response, and evaluation.
reflection = agent.reflect(task: str, response: str, evaluation: str) -> str\n
Parameter Type Description task
str
The original task response
str
The generated response evaluation
str
The evaluation feedback"},{"location":"swarms/agents/reflexion_agent/#refine","title":"refine","text":"Refines the original response based on evaluation and reflection.
refined_response = agent.refine(\n task: str,\n original_response: str,\n evaluation: str,\n reflection: str\n) -> str\n
Parameter Type Description task
str
The original task original_response
str
The original response evaluation
str
The evaluation feedback reflection
str
The self-reflection"},{"location":"swarms/agents/reflexion_agent/#step","title":"step","text":"Processes a single task through one iteration of the Reflexion process.
result = agent.step(\n task: str,\n iteration: int = 0,\n previous_response: str = None\n) -> Dict[str, Any]\n
Parameter Type Description task
str
The task to process iteration
int
Current iteration number previous_response
str
Response from previous iteration Returns a dictionary containing: - task
: The original task - response
: The generated response - evaluation
: The evaluation feedback - reflection
: The self-reflection - score
: Numerical score - iteration
: Current iteration number
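The act → evaluate → reflect → refine cycle that `step` performs can be sketched in plain Python. The four functions below are hypothetical placeholders for the sub-agents, with a toy scoring rule so the loop terminates; this is illustrative only, not the swarms implementation:

```python
# Minimal sketch of the Reflexion iteration loop (no LLM calls;
# act/evaluate/reflect/refine stand in for the sub-agents).

def act(task, memories=None):
    return f"draft answer to {task}"

def evaluate(task, response):
    # Returns (feedback, score in [0, 1]); the toy score grows with
    # each refinement marker so the loop eventually stops.
    score = min(1.0, 0.4 + 0.3 * response.count("refined"))
    return "feedback", score

def reflect(task, response, evaluation):
    return f"reflection on '{response}'"

def refine(task, response, evaluation, reflection):
    return response + " (refined)"

def run_reflexion(task, max_loops=3, threshold=0.9):
    response = act(task)
    for _ in range(max_loops):
        evaluation, score = evaluate(task, response)
        if score >= threshold:
            break
        reflection = reflect(task, response, evaluation)
        response = refine(task, response, evaluation, reflection)
    return response

print(run_reflexion("Explain quantum computing"))
```

Note the stopping conditions: the loop ends either when the score clears the threshold or when `max_loops` iterations have run, mirroring the `max_loops` parameter described above.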
"},{"location":"swarms/agents/reflexion_agent/#run","title":"run","text":"Executes the Reflexion process for a list of tasks.
results = agent.run(\n tasks: List[str],\n include_intermediates: bool = False\n) -> List[Any]\n
Parameter Type Description tasks
List[str]
List of tasks to process include_intermediates
bool
Whether to include intermediate iterations in results Returns: - If include_intermediates=False
: List of final responses - If include_intermediates=True
: List of complete iteration histories
from swarms.agents import ReflexionAgent\n\n# Initialize the Reflexion Agent\nagent = ReflexionAgent(\n agent_name=\"reflexion-agent\",\n model_name=\"openai/o1\",\n max_loops=3\n)\n\n# Example tasks\ntasks = [\n \"Explain quantum computing to a beginner.\",\n \"Write a Python function to sort a list of dictionaries by a specific key.\"\n]\n\n# Run the agent\nresults = agent.run(tasks)\n\n# Print results\nfor i, result in enumerate(results):\n print(f\"\\nTask {i+1}: {tasks[i]}\")\n print(f\"Response: {result}\")\n
"},{"location":"swarms/agents/reflexion_agent/#memory-system","title":"Memory System","text":"The ReflexionAgent includes a sophisticated memory system (ReflexionMemory
) that maintains both short-term and long-term memories of past experiences, reflections, and feedback. This system helps the agent learn from past interactions and improve its responses over time.
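The actual ReflexionMemory internals are not documented here; as a rough sketch of the idea, a capacity-bounded long-term store with a short-term window over the most recent entries might look like this (illustrative only, not the swarms implementation):

```python
from collections import deque

# Hypothetical sketch of a short-/long-term memory store with a
# capacity bound, illustrating the idea behind ReflexionMemory.

class SimpleReflexionMemory:
    def __init__(self, capacity: int = 100, short_term_size: int = 5):
        self.long_term = deque(maxlen=capacity)  # oldest entries evicted first
        self.short_term_size = short_term_size

    def add(self, entry: dict) -> None:
        self.long_term.append(entry)

    def recent(self) -> list:
        """Short-term view: the most recent entries."""
        return list(self.long_term)[-self.short_term_size:]

mem = SimpleReflexionMemory(capacity=3)
for i in range(5):
    mem.add({"task": f"t{i}", "score": i / 10})
print(len(mem.long_term))  # bounded by capacity
```

A bounded `deque` gives the same behavior as the `memory_capacity` parameter: once the limit is reached, the oldest memories are discarded as new ones arrive.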
Adjust max_loops
based on task complexity (more complex tasks may benefit from more iterations) and tune memory_capacity
as needed.
"},{"location":"swarms/agents/structured_outputs/#overview","title":"Overview","text":"
Structured outputs help ensure that your agents return data in a consistent, predictable format that can be easily parsed and processed by your application. This is particularly useful when building complex applications that require standardized data handling.
"},{"location":"swarms/agents/structured_outputs/#schema-definition","title":"Schema Definition","text":"Structured outputs are defined using JSON Schema format. Here's the basic structure:
Basic SchemaAdvanced Schema Basic Tool Schematools = [\n {\n \"type\": \"function\",\n \"function\": {\n \"name\": \"function_name\",\n \"description\": \"Description of what the function does\",\n \"parameters\": {\n \"type\": \"object\",\n \"properties\": {\n # Define your parameters here\n },\n \"required\": [\n # List required parameters\n ]\n }\n }\n }\n]\n
Advanced Tool Schema with Multiple Parameterstools = [\n {\n \"type\": \"function\",\n \"function\": {\n \"name\": \"advanced_function\",\n \"description\": \"Advanced function with multiple parameter types\",\n \"parameters\": {\n \"type\": \"object\",\n \"properties\": {\n \"text_param\": {\n \"type\": \"string\",\n \"description\": \"A text parameter\"\n },\n \"number_param\": {\n \"type\": \"number\",\n \"description\": \"A numeric parameter\"\n },\n \"boolean_param\": {\n \"type\": \"boolean\",\n \"description\": \"A boolean parameter\"\n },\n \"array_param\": {\n \"type\": \"array\",\n \"items\": {\"type\": \"string\"},\n \"description\": \"An array of strings\"\n }\n },\n \"required\": [\"text_param\", \"number_param\"]\n }\n }\n }\n]\n
"},{"location":"swarms/agents/structured_outputs/#parameter-types","title":"Parameter Types","text":"The following parameter types are supported:
Type Description Examplestring
Text values \"Hello World\"
number
Numeric values 42
, 3.14
boolean
True/False values true
, false
object
Nested objects {\"key\": \"value\"}
array
Lists or arrays [1, 2, 3]
null
Null values null
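As a quick reference, here is an illustrative parameter block that exercises each supported type (the field names are invented for the example, and the round-trip through the `json` module simply confirms the fragment is valid JSON):

```python
import json

# Illustrative parameters fragment covering each supported type.
# Field names are made up for the example.
schema = {
    "type": "object",
    "properties": {
        "name":   {"type": "string"},
        "price":  {"type": "number"},
        "active": {"type": "boolean"},
        "meta":   {"type": "object"},
        "tags":   {"type": "array", "items": {"type": "string"}},
        "note":   {"type": "null"},
    },
    "required": ["name", "price"],
}

# Round-trip through JSON to confirm the fragment serializes cleanly.
print(json.loads(json.dumps(schema))["required"])
```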
"},{"location":"swarms/agents/structured_outputs/#implementation-steps","title":"Implementation Steps","text":"Quick Start Guide
Follow these steps to implement structured outputs in your agent:
"},{"location":"swarms/agents/structured_outputs/#step-1-define-your-schema","title":"Step 1: Define Your Schema","text":"tools = [\n {\n \"type\": \"function\",\n \"function\": {\n \"name\": \"get_stock_price\",\n \"description\": \"Retrieve stock price information\",\n \"parameters\": {\n \"type\": \"object\",\n \"properties\": {\n \"ticker\": {\n \"type\": \"string\",\n \"description\": \"Stock ticker symbol\"\n },\n \"include_volume\": {\n \"type\": \"boolean\",\n \"description\": \"Include trading volume data\"\n }\n },\n \"required\": [\"ticker\"]\n }\n }\n }\n]\n
"},{"location":"swarms/agents/structured_outputs/#step-2-initialize-the-agent","title":"Step 2: Initialize the Agent","text":"from swarms import Agent\n\nagent = Agent(\n agent_name=\"Your-Agent-Name\",\n agent_description=\"Agent description\",\n system_prompt=\"Your system prompt\",\n tools_list_dictionary=tools\n)\n
"},{"location":"swarms/agents/structured_outputs/#step-3-run-the-agent","title":"Step 3: Run the Agent","text":"response = agent.run(\"Your query here\")\n
"},{"location":"swarms/agents/structured_outputs/#step-4-parse-the-output","title":"Step 4: Parse the Output","text":"from swarms.utils.str_to_dict import str_to_dict\n\nparsed_output = str_to_dict(response)\n
"},{"location":"swarms/agents/structured_outputs/#example-usage","title":"Example Usage","text":"Complete Financial Agent Example
Here's a comprehensive example using a financial analysis agent:
Python ImplementationExpected Outputfrom dotenv import load_dotenv\nfrom swarms import Agent\nfrom swarms.utils.str_to_dict import str_to_dict\n\n# Load environment variables\nload_dotenv()\n\n# Define tools with structured output schema\ntools = [\n {\n \"type\": \"function\",\n \"function\": {\n \"name\": \"get_stock_price\",\n \"description\": \"Retrieve the current stock price and related information\",\n \"parameters\": {\n \"type\": \"object\",\n \"properties\": {\n \"ticker\": {\n \"type\": \"string\",\n \"description\": \"Stock ticker symbol (e.g., AAPL, GOOGL)\"\n },\n \"include_history\": {\n \"type\": \"boolean\",\n \"description\": \"Include historical data in the response\"\n },\n \"time\": {\n \"type\": \"string\",\n \"format\": \"date-time\",\n \"description\": \"Specific time for stock data (ISO format)\"\n }\n },\n \"required\": [\"ticker\", \"include_history\", \"time\"]\n }\n }\n }\n]\n\n# Initialize agent\nagent = Agent(\n agent_name=\"Financial-Analysis-Agent\",\n agent_description=\"Personal finance advisor agent\",\n system_prompt=\"You are a helpful financial analysis assistant.\",\n max_loops=1,\n tools_list_dictionary=tools\n)\n\n# Run agent\nresponse = agent.run(\"What is the current stock price for AAPL?\")\n\n# Parse structured output\nparsed_data = str_to_dict(response)\nprint(f\"Parsed response: {parsed_data}\")\n
{\n \"function_calls\": [\n {\n \"name\": \"get_stock_price\",\n \"arguments\": {\n \"ticker\": \"AAPL\",\n \"include_history\": true,\n \"time\": \"2024-01-15T10:30:00Z\"\n }\n }\n ]\n}\n
"},{"location":"swarms/agents/structured_outputs/#best-practices","title":"Best Practices","text":"Schema Design
Keep it simple: Design schemas that are as simple as possible while meeting your needs
Clear naming: Use descriptive parameter names that clearly indicate their purpose
Detailed descriptions: Include comprehensive descriptions for each parameter
Required fields: Explicitly specify all required parameters
Error Handling
Validate output: Always validate the output format before processing
Exception handling: Implement proper error handling for parsing failures
Safety first: Use try-except blocks when converting strings to dictionaries
Performance Tips
Minimize requirements: Keep the number of required parameters to a minimum
Appropriate types: Use the most appropriate data types for each parameter
Caching: Consider caching parsed results if they're used frequently
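The caching tip can be applied with the standard library alone. This sketch uses functools.lru_cache with json.loads standing in for the parsing step; since the raw response string is hashable, it serves directly as the cache key:

```python
import json
from functools import lru_cache

# Sketch of caching parsed responses. json.loads stands in for the
# parsing step; identical response strings hit the cache.

@lru_cache(maxsize=128)
def parse_cached(response: str) -> tuple:
    data = json.loads(response)
    # Return an immutable view so cached values cannot be mutated.
    return tuple(sorted(data.items()))

raw = '{"ticker": "AAPL", "price": 190.5}'
first = parse_cached(raw)
second = parse_cached(raw)             # served from the cache
print(parse_cached.cache_info().hits)  # 1
```

Returning an immutable tuple matters: if the cached value were a dict, one caller mutating it would corrupt the result every later caller receives.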
Common Issues
"},{"location":"swarms/agents/structured_outputs/#invalid-output-format","title":"Invalid Output Format","text":"Problem
The agent returns data in an unexpected format
Solution
Ensure your schema matches the expected output structure
Verify all required fields are present in the response
Check for proper JSON formatting in the output
"},{"location":"swarms/agents/structured_outputs/#parsing-errors","title":"Parsing Errors","text":"Problem
Errors occur when trying to parse the agent's response
Solution
from swarms.utils.str_to_dict import str_to_dict\n\ntry:\n parsed_data = str_to_dict(response)\nexcept Exception as e:\n print(f\"Parsing error: {e}\")\n # Handle the error appropriately\n
"},{"location":"swarms/agents/structured_outputs/#missing-fields","title":"Missing Fields","text":"Problem
Required fields are missing from the output
Solution
Pro Tips
Nested ObjectsConditional Fields nested_schema.py\"properties\": {\n \"user_info\": {\n \"type\": \"object\",\n \"properties\": {\n \"name\": {\"type\": \"string\"},\n \"age\": {\"type\": \"number\"},\n \"preferences\": {\n \"type\": \"array\",\n \"items\": {\"type\": \"string\"}\n }\n }\n }\n}\n
conditional_schema.py\"properties\": {\n \"data_type\": {\n \"type\": \"string\",\n \"enum\": [\"stock\", \"crypto\", \"forex\"]\n },\n \"symbol\": {\"type\": \"string\"},\n \"exchange\": {\n \"type\": \"string\",\n \"description\": \"Required for crypto and forex\"\n }\n}\n
"},{"location":"swarms/agents/third_party/","title":"Swarms Framework: Integrating and Customizing Agent Libraries","text":"Agent-based systems have emerged as a powerful paradigm for solving complex problems and automating tasks.
The swarms framework offers a flexible and extensible approach to working with various agent libraries, allowing developers to create custom agents and integrate them seamlessly into their projects.
In this comprehensive guide, we'll explore the swarms framework, discuss agent handling, and demonstrate how to build custom agents using swarms. We'll also cover the integration of popular agent libraries such as Langchain, Griptape, CrewAI, and Autogen.
"},{"location":"swarms/agents/third_party/#table-of-contents","title":"Table of Contents","text":"The swarms framework is a powerful and flexible system designed to facilitate the creation, management, and coordination of multiple AI agents. It provides a standardized interface for working with various agent types, allowing developers to leverage the strengths of different agent libraries while maintaining a consistent programming model.
At its core, the swarms framework is built around the concept of a parent Agent
class, which serves as a foundation for creating custom agents and integrating third-party agent libraries. This approach offers several benefits:
As the field of AI and agent-based systems continues to grow, numerous libraries and frameworks have emerged, each with its own strengths and specialized features. While this diversity offers developers a wide range of tools to choose from, it also presents challenges in terms of integration and interoperability.
This is where the concept of wrappers becomes crucial. By creating wrappers around different agent libraries, we can:
In the context of the swarms framework, wrappers take the form of custom classes that inherit from the parent Agent
class. These wrapper classes encapsulate the functionality of specific agent libraries while exposing a consistent interface that aligns with the swarms framework.
To illustrate the process of building custom agents using the swarms framework, let's start with a basic example of creating a custom agent class:
from swarms import Agent\n\nclass MyCustomAgent(Agent):\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n # Custom initialization logic\n\n def custom_method(self, *args, **kwargs):\n # Implement custom logic here\n pass\n\n def run(self, task, *args, **kwargs):\n # Customize the run method\n response = super().run(task, *args, **kwargs)\n # Additional custom logic\n return response\n
This example demonstrates the fundamental structure of a custom agent class within the swarms framework. Let's break down the key components:
Inheritance: The class inherits from the Agent
parent class, ensuring it adheres to the swarms framework's interface.
Initialization: The __init__
method calls the parent class's initializer and can include additional custom initialization logic.
Custom methods: You can add any number of custom methods to extend the agent's functionality.
Run method: The run
method is a key component of the agent interface. By overriding this method, you can customize how the agent processes tasks while still leveraging the parent class's functionality.
To create more sophisticated custom agents, you can expand on this basic structure by adding features such as:
By leveraging these custom agent classes, developers can create highly specialized and adaptive agents tailored to their specific use cases while still benefiting from the standardized interface provided by the swarms framework.
"},{"location":"swarms/agents/third_party/#4-integrating-third-party-agent-libraries","title":"4. Integrating Third-Party Agent Libraries","text":"One of the key strengths of the swarms framework is its ability to integrate with various third-party agent libraries. In this section, we'll explore how to create wrappers for popular agent libraries, including Griptape, Langchain, CrewAI, and Autogen.
"},{"location":"swarms/agents/third_party/#griptape-integration","title":"Griptape Integration","text":"Griptape is a powerful library for building AI agents with a focus on composability and tool use. Let's create a wrapper for a Griptape agent:
from typing import List, Optional\n\nfrom griptape.structures import Agent as GriptapeAgent\nfrom griptape.tools import FileManager, TaskMemoryClient, WebScraper\n\nfrom swarms import Agent\n\n\nclass GriptapeAgentWrapper(Agent):\n \"\"\"\n A wrapper class for the GriptapeAgent from the griptape library.\n \"\"\"\n\n def __init__(self, name: str, tools: Optional[List] = None, *args, **kwargs):\n \"\"\"\n Initialize the GriptapeAgentWrapper.\n\n Parameters:\n - name: The name of the agent.\n - tools: A list of tools to be used by the agent. If not provided, default tools will be used.\n - *args, **kwargs: Additional arguments to be passed to the parent class constructor.\n \"\"\"\n super().__init__(*args, **kwargs)\n self.name = name\n self.tools = tools or [\n WebScraper(off_prompt=True),\n TaskMemoryClient(off_prompt=True),\n FileManager()\n ]\n self.griptape_agent = GriptapeAgent(\n input=f\"I am {name}, an AI assistant. How can I help you?\",\n tools=self.tools\n )\n\n def run(self, task: str, *args, **kwargs) -> str:\n \"\"\"\n Run a task using the GriptapeAgent.\n\n Parameters:\n - task: The task to be performed by the agent.\n\n Returns:\n - The response from the GriptapeAgent as a string.\n \"\"\"\n response = self.griptape_agent.run(task, *args, **kwargs)\n return str(response)\n\n def add_tool(self, tool) -> None:\n \"\"\"\n Add a tool to the agent.\n\n Parameters:\n - tool: The tool to be added.\n \"\"\"\n self.tools.append(tool)\n self.griptape_agent = GriptapeAgent(\n input=f\"I am {self.name}, an AI assistant. How can I help you?\",\n tools=self.tools\n )\n\n# Usage example\ngriptape_wrapper = GriptapeAgentWrapper(\"GriptapeAssistant\")\nresult = griptape_wrapper.run(\"Load https://example.com, summarize it, and store it in a file called example_summary.txt.\")\nprint(result)\n
This wrapper encapsulates the functionality of a Griptape agent while exposing it through the swarms framework's interface. It allows for easy customization of tools and provides a simple way to execute tasks using the Griptape agent.
"},{"location":"swarms/agents/third_party/#langchain-integration","title":"Langchain Integration","text":"Langchain is a popular framework for developing applications powered by language models. Here's an example of how we can create a wrapper for a Langchain agent:
from typing import List, Optional\n\nfrom langchain.agents import AgentExecutor, LLMSingleActionAgent, Tool\nfrom langchain.chains import LLMChain\nfrom langchain_community.llms import OpenAI\nfrom langchain.prompts import PromptTemplate\nfrom langchain.tools import DuckDuckGoSearchRun\n\nfrom swarms import Agent\n\n\nclass LangchainAgentWrapper(Agent):\n \"\"\"\n Initialize the LangchainAgentWrapper.\n\n Args:\n name (str): The name of the agent.\n tools (List[Tool]): The list of tools available to the agent.\n llm (Optional[OpenAI], optional): The OpenAI language model to use. Defaults to None.\n \"\"\"\n def __init__(\n self,\n name: str,\n tools: List[Tool],\n llm: Optional[OpenAI] = None,\n *args,\n **kwargs,\n ):\n super().__init__(*args, **kwargs)\n self.name = name\n self.tools = tools\n self.llm = llm or OpenAI(temperature=0)\n\n prompt = PromptTemplate.from_template(\n \"You are {name}, an AI assistant. Answer the following question: {question}\"\n )\n\n llm_chain = LLMChain(llm=self.llm, prompt=prompt)\n tool_names = [tool.name for tool in self.tools]\n\n self.agent = LLMSingleActionAgent(\n llm_chain=llm_chain,\n output_parser=None,\n stop=[\"\\nObservation:\"],\n allowed_tools=tool_names,\n )\n\n self.agent_executor = AgentExecutor.from_agent_and_tools(\n agent=self.agent, tools=self.tools, verbose=True\n )\n\n def run(self, task: str, *args, **kwargs):\n \"\"\"\n Run the agent with the given task.\n\n Args:\n task (str): The task to be performed by the agent.\n\n Returns:\n Any: The result of the agent's execution.\n \"\"\"\n try:\n return self.agent_executor.run(task)\n except Exception as e:\n print(f\"An error occurred: {e}\")\n\n\n# Usage example\n\nsearch_tool = DuckDuckGoSearchRun()\ntools = [\n Tool(\n name=\"Search\",\n func=search_tool.run,\n description=\"Useful for searching the internet\",\n )\n]\n\nlangchain_wrapper = LangchainAgentWrapper(\"LangchainAssistant\", tools)\nresult = langchain_wrapper.run(\"What is the capital of France?\")\nprint(result)\n
This wrapper integrates a Langchain agent into the swarms framework, allowing for easy use of Langchain's powerful features such as tool use and multi-step reasoning.
"},{"location":"swarms/agents/third_party/#crewai-integration","title":"CrewAI Integration","text":"CrewAI is a library focused on creating and managing teams of AI agents. Let's create a wrapper for a CrewAI agent:
from swarms import Agent\nfrom crewai import Agent as CrewAIAgent\nfrom crewai import Task, Crew, Process\n\nclass CrewAIAgentWrapper(Agent):\n def __init__(self, name, role, goal, backstory, tools=None, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.name = name\n self.crewai_agent = CrewAIAgent(\n role=role,\n goal=goal,\n backstory=backstory,\n verbose=True,\n allow_delegation=False,\n tools=tools or []\n )\n\n def run(self, task, *args, **kwargs):\n crew_task = Task(\n description=task,\n agent=self.crewai_agent\n )\n crew = Crew(\n agents=[self.crewai_agent],\n tasks=[crew_task],\n process=Process.sequential\n )\n result = crew.kickoff()\n return result\n\n# Usage example\nfrom crewai_tools import SerperDevTool\n\nsearch_tool = SerperDevTool()\n\ncrewai_wrapper = CrewAIAgentWrapper(\n \"ResearchAnalyst\",\n role='Senior Research Analyst',\n goal='Uncover cutting-edge developments in AI and data science',\n backstory=\"\"\"You work at a leading tech think tank.\n Your expertise lies in identifying emerging trends.\n You have a knack for dissecting complex data and presenting actionable insights.\"\"\",\n tools=[search_tool]\n)\n\nresult = crewai_wrapper.run(\"Analyze the latest trends in quantum computing and summarize the key findings.\")\nprint(result)\n
This wrapper allows us to use CrewAI agents within the swarms framework, leveraging CrewAI's focus on role-based agents and collaborative task execution.
"},{"location":"swarms/agents/third_party/#autogen-integration","title":"Autogen Integration","text":"Autogen is a framework for building conversational AI agents. Here's how we can create a wrapper for an Autogen agent:
from swarms import Agent\nfrom autogen import ConversableAgent\n\nclass AutogenAgentWrapper(Agent):\n def __init__(self, name, llm_config, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.name = name\n self.autogen_agent = ConversableAgent(\n name=name,\n llm_config=llm_config,\n code_execution_config=False,\n function_map=None,\n human_input_mode=\"NEVER\"\n )\n\n def run(self, task, *args, **kwargs):\n messages = [{\"content\": task, \"role\": \"user\"}]\n response = self.autogen_agent.generate_reply(messages)\n return response\n\n# Usage example\nimport os\n\nllm_config = {\n \"config_list\": [{\"model\": \"gpt-4\", \"api_key\": os.environ.get(\"OPENAI_API_KEY\")}]\n}\n\nautogen_wrapper = AutogenAgentWrapper(\"AutogenAssistant\", llm_config)\nresult = autogen_wrapper.run(\"Tell me a joke about programming.\")\nprint(result)\n
This wrapper integrates Autogen's ConversableAgent into the swarms framework, allowing for easy use of Autogen's conversational AI capabilities.
By creating these wrappers, we can seamlessly integrate agents from various libraries into the swarms framework, allowing for a unified approach to agent management and task execution.
"},{"location":"swarms/agents/third_party/#5-advanced-agent-handling-techniques","title":"5. Advanced Agent Handling Techniques","text":"As you build more complex systems using the swarms framework and integrated agent libraries, you'll need to employ advanced techniques for agent handling. Here are some strategies to consider:
"},{"location":"swarms/agents/third_party/#1-dynamic-agent-creation","title":"1. Dynamic Agent Creation","text":"Implement a factory pattern to create agents dynamically based on task requirements:
class AgentFactory:\n @staticmethod\n def create_agent(agent_type, *args, **kwargs):\n if agent_type == \"griptape\":\n return GriptapeAgentWrapper(*args, **kwargs)\n elif agent_type == \"langchain\":\n return LangchainAgentWrapper(*args, **kwargs)\n elif agent_type == \"crewai\":\n return CrewAIAgentWrapper(*args, **kwargs)\n elif agent_type == \"autogen\":\n return AutogenAgentWrapper(*args, **kwargs)\n else:\n raise ValueError(f\"Unknown agent type: {agent_type}\")\n\n# Usage\nagent = AgentFactory.create_agent(\"griptape\", \"DynamicGriptapeAgent\")\n
"},{"location":"swarms/agents/third_party/#2-agent-pooling","title":"2. Agent Pooling","text":"Implement an agent pool to manage and reuse agents efficiently:
from queue import Queue\n\nclass AgentPool:\n def __init__(self, pool_size=5):\n self.pool = Queue(maxsize=pool_size)\n self.pool_size = pool_size\n\n def get_agent(self, agent_type, *args, **kwargs):\n if not self.pool.empty():\n return self.pool.get()\n else:\n return AgentFactory.create_agent(agent_type, *args, **kwargs)\n\n def release_agent(self, agent):\n if self.pool.qsize() < self.pool_size:\n self.pool.put(agent)\n\n# Usage\npool = AgentPool()\nagent = pool.get_agent(\"langchain\", \"PooledLangchainAgent\")\nresult = agent.run(\"Perform a task\")\npool.release_agent(agent)\n
"},{"location":"swarms/agents/third_party/#3-agent-composition","title":"3. Agent Composition","text":"Create composite agents that combine the capabilities of multiple agent types:
class CompositeAgent(Agent):\n def __init__(self, name, agents):\n super().__init__()\n self.name = name\n self.agents = agents\n\n def run(self, task):\n results = []\n for agent in self.agents:\n results.append(agent.run(task))\n return self.aggregate_results(results)\n\n def aggregate_results(self, results):\n # Implement your own logic to combine results\n return \"\\n\".join(results)\n\n# Usage\ngriptape_agent = GriptapeAgentWrapper(\"GriptapeComponent\")\nlangchain_agent = LangchainAgentWrapper(\"LangchainComponent\", [])\ncomposite_agent = CompositeAgent(\"CompositeAssistant\", [griptape_agent, langchain_agent])\nresult = composite_agent.run(\"Analyze the pros and cons of quantum computing\")\n
"},{"location":"swarms/agents/third_party/#4-agent-specialization","title":"4. Agent Specialization","text":"Create specialized agents for specific domains or tasks:
class DataAnalysisAgent(Agent):\n def __init__(self, name, analysis_tools):\n super().__init__()\n self.name = name\n self.analysis_tools = analysis_tools\n\n def run(self, data):\n results = {}\n for tool in self.analysis_tools:\n results[tool.name] = tool.analyze(data)\n return results\n\n# Usage\nimport pandas as pd\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.decomposition import PCA\n\nclass AnalysisTool:\n def __init__(self, name, func):\n self.name = name\n self.func = func\n\n def analyze(self, data):\n return self.func(data)\n\ntools = [\n AnalysisTool(\"Descriptive Stats\", lambda data: data.describe()),\n AnalysisTool(\"Correlation\", lambda data: data.corr()),\n AnalysisTool(\"PCA\", lambda data: PCA().fit_transform(StandardScaler().fit_transform(data)))\n]\n\ndata_agent = DataAnalysisAgent(\"DataAnalyst\", tools)\ndf = pd.read_csv(\"sample_data.csv\")\nanalysis_results = data_agent.run(df)\n
"},{"location":"swarms/agents/third_party/#5-agent-monitoring-and-logging","title":"5. Agent Monitoring and Logging","text":"Implement a monitoring system to track agent performance and log their activities:
import logging\nfrom functools import wraps\n\ndef log_agent_activity(func):\n @wraps(func)\n def wrapper(self, *args, **kwargs):\n logging.info(f\"Agent {self.name} started task: {args[0]}\")\n result = func(self, *args, **kwargs)\n logging.info(f\"Agent {self.name} completed task. Result length: {len(str(result))}\")\n return result\n return wrapper\n\nclass MonitoredAgent(Agent):\n def __init__(self, name, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.name = name\n\n @log_agent_activity\n def run(self, task, *args, **kwargs):\n return super().run(task, *args, **kwargs)\n\n# Usage\nlogging.basicConfig(level=logging.INFO)\nmonitored_agent = MonitoredAgent(\"MonitoredGriptapeAgent\")\nresult = monitored_agent.run(\"Summarize the latest AI research papers\")\n
Additionally, the Agent class now includes built-in logging functionality and the ability to switch between JSON and string output: - Use output_type=\"str\"
for string output (default) - Use output_type=\"json\"
for JSON output
The output_type
parameter determines the format of the final result returned by the run
method. When set to \"str\", it returns a string representation of the agent's response. When set to \"json\", it returns a JSON object containing detailed information about the agent's run, including all steps and metadata.
When developing custom agents using the swarms framework, consider the following best practices:
Modular Design: Design your agents with modularity in mind. Break down complex functionality into smaller, reusable components.
Consistent Interfaces: Maintain consistent interfaces across your custom agents to ensure interoperability within the swarms framework.
Error Handling: Implement robust error handling and graceful degradation in your agents to ensure system stability.
Performance Optimization: Optimize your agents for performance, especially when dealing with resource-intensive tasks or large-scale deployments.
Testing and Validation: Develop comprehensive test suites for your custom agents to ensure their reliability and correctness.
Documentation: Provide clear and detailed documentation for your custom agents, including their capabilities, limitations, and usage examples.
Versioning: Implement proper versioning for your custom agents to manage updates and maintain backwards compatibility.
Security Considerations: Implement security best practices, especially when dealing with sensitive data or integrating with external services.
Here's an example that incorporates some of these best practices:
import logging\nfrom typing import Dict, Any\nfrom swarms import Agent\n\nclass SecureCustomAgent(Agent):\n def __init__(self, name: str, api_key: str, version: str = \"1.0.0\", *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.name = name\n self._api_key = api_key # Store sensitive data securely\n self.version = version\n self.logger = logging.getLogger(f\"{self.__class__.__name__}.{self.name}\")\n\n def run(self, task: str, *args, **kwargs) -> Dict[str, Any]:\n try:\n self.logger.info(f\"Agent {self.name} (v{self.version}) starting task: {task}\")\n result = self._process_task(task)\n self.logger.info(f\"Agent {self.name} completed task successfully\")\n return {\"status\": \"success\", \"result\": result}\n except Exception as e:\n self.logger.error(f\"Error in agent {self.name}: {str(e)}\")\n return {\"status\": \"error\", \"message\": str(e)}\n\n def _process_task(self, task: str) -> str:\n # Implement the core logic of your agent here\n # This is a placeholder implementation\n return f\"Processed task: {task}\"\n\n @property\n def api_key(self) -> str:\n # Provide a secure way to access the API key\n return self._api_key\n\n def __repr__(self) -> str:\n return f\"<{self.__class__.__name__} name='{self.name}' version='{self.version}'>\"\n\n# Usage\nlogging.basicConfig(level=logging.INFO)\nsecure_agent = SecureCustomAgent(\"SecureAgent\", api_key=\"your-api-key-here\")\nresult = secure_agent.run(\"Perform a secure operation\")\nprint(result)\n
This example demonstrates several best practices: - Modular design with separate methods for initialization and task processing - Consistent interface adhering to the swarms framework - Error handling and logging - Secure storage of sensitive data (API key) - Version tracking - Type hinting for improved code readability and maintainability - Informative string representation of the agent
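The "Testing and Validation" practice above can be exercised without the swarms dependency. The sketch below uses a hypothetical `EnvelopeAgent` stand-in (not part of the framework) that mirrors the success/error envelope returned by `SecureCustomAgent.run`, so plain assertions can pin down the contract that `run` never raises and always returns a status dictionary.

```python
import logging
from typing import Any, Dict


class EnvelopeAgent:
    """Minimal stand-in for SecureCustomAgent: run() never raises,
    it always returns a status envelope instead."""

    def __init__(self, name: str) -> None:
        self.name = name
        self.logger = logging.getLogger(name)

    def run(self, task: str) -> Dict[str, Any]:
        try:
            if not task:
                raise ValueError("task must be a non-empty string")
            return {"status": "success", "result": f"Processed task: {task}"}
        except Exception as e:
            self.logger.error("Error in agent %s: %s", self.name, e)
            return {"status": "error", "message": str(e)}


# A tiny test suite: both paths return a dict, never raise.
agent = EnvelopeAgent("test-agent")
ok = agent.run("summarize the report")
bad = agent.run("")
assert ok["status"] == "success" and "summarize" in ok["result"]
assert bad["status"] == "error"
print("all envelope checks passed")
```

The same assertions transfer directly to a real `SecureCustomAgent` in a pytest suite, since both expose the same `run(task)` surface.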
"},{"location":"swarms/agents/third_party/#7-future-directions-and-challenges","title":"7. Future Directions and Challenges","text":"As the field of AI and agent-based systems continues to evolve, the swarms framework and its ecosystem of integrated agent libraries will face new opportunities and challenges. Some potential future directions and areas of focus include:
Enhanced Interoperability: Developing more sophisticated protocols for agent communication and collaboration across different libraries and frameworks.
Scalability: Improving the framework's ability to handle large-scale swarms of agents, potentially leveraging distributed computing techniques.
Adaptive Learning: Incorporating more advanced machine learning techniques to allow agents to adapt and improve their performance over time.
Ethical AI: Integrating ethical considerations and safeguards into the agent development process to ensure responsible AI deployment.
Human-AI Collaboration: Exploring new paradigms for human-AI interaction and collaboration within the swarms framework.
Domain-Specific Optimizations: Developing specialized agent types and tools for specific industries or problem domains.
Explainability and Transparency: Improving the ability to understand and explain agent decision-making processes.
Security and Privacy: Enhancing the framework's security features to protect against potential vulnerabilities and ensure data privacy.
As these areas develop, developers working with the swarms framework will need to stay informed about new advancements and be prepared to adapt their agent implementations accordingly.
"},{"location":"swarms/agents/third_party/#8-conclusion","title":"8. Conclusion","text":"The swarms framework provides a powerful and flexible foundation for building custom agents and integrating various agent libraries. By leveraging the techniques and best practices discussed in this guide, developers can create sophisticated, efficient, and scalable agent-based systems.
The ability to seamlessly integrate agents from libraries like Griptape, Langchain, CrewAI, and Autogen opens up a world of possibilities for creating diverse and specialized AI applications. Whether you're building a complex multi-agent system for data analysis, a conversational AI platform, or a collaborative problem-solving environment, the swarms framework offers the tools and flexibility to bring your vision to life.
As you embark on your journey with the swarms framework, remember that the field of AI and agent-based systems is rapidly evolving. Stay curious, keep experimenting, and don't hesitate to push the boundaries of what's possible with custom agents and integrated libraries.
By embracing the power of the swarms framework and the ecosystem of agent libraries it supports, you're well-positioned to create the next generation of intelligent, adaptive, and collaborative AI systems. Happy agent building!
"},{"location":"swarms/agents/tool_agent/","title":"ToolAgent Documentation","text":"The ToolAgent
class is a specialized agent that facilitates the execution of specific tasks using a model and tokenizer. It is part of the swarms
module and inherits from the Agent
class. This agent is designed to generate functions based on a given JSON schema and task, making it highly adaptable for various use cases, including natural language processing and data generation.
The ToolAgent
class plays a crucial role in leveraging pre-trained models and tokenizers to automate tasks that require the interpretation and generation of structured data. By providing a flexible interface and robust error handling, it ensures smooth integration and efficient task execution.
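To make the schema-driven idea concrete without loading a model, here is a small conformance-check sketch. It is an illustration of what "output that matches a JSON schema" means, not the `ToolAgent` internals (which rely on the model and tokenizer); the `conforms` helper is hypothetical and covers only the scalar and array types used in the person schema that appears in the examples below.

```python
import json

# The person schema used in the examples on this page.
schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "number"},
        "is_student": {"type": "boolean"},
        "courses": {"type": "array", "items": {"type": "string"}},
    },
}

# Map JSON-schema scalar types onto Python types.
TYPES = {"string": str, "number": (int, float), "boolean": bool}


def conforms(data: dict, schema: dict) -> bool:
    """Return True when every declared property is present with the declared type."""
    for key, spec in schema["properties"].items():
        if key not in data:
            return False
        value = data[key]
        if spec["type"] == "array":
            item_t = TYPES[spec["items"]["type"]]
            if not (isinstance(value, list) and all(isinstance(v, item_t) for v in value)):
                return False
        elif not isinstance(value, TYPES[spec["type"]]):
            return False
    return True


candidate = json.loads('{"name": "Ada", "age": 36, "is_student": false, "courses": ["math"]}')
assert conforms(candidate, schema)
assert not conforms({"name": "Ada"}, schema)
print("schema check passed")
```

A ToolAgent's job is to make the model emit only outputs for which such a check holds, rather than validating free-form text after the fact.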
name
str
The name of the tool agent. Default is \"Function Calling Agent\". description
str
A description of the tool agent. Default is \"Generates a function based on the input json schema and the task\". model
Any
The model used by the tool agent. tokenizer
Any
The tokenizer used by the tool agent. json_schema
Any
The JSON schema used by the tool agent. max_number_tokens
int
The maximum number of tokens for generation. Default is 500. parsing_function
Optional[Callable]
An optional parsing function to process the output of the tool agent. llm
Any
An optional large language model to be used by the tool agent. *args
Variable length argument list Additional positional arguments. **kwargs
Arbitrary keyword arguments Additional keyword arguments."},{"location":"swarms/agents/tool_agent/#attributes","title":"Attributes","text":"Attribute Type Description name
str
The name of the tool agent. description
str
A description of the tool agent. model
Any
The model used by the tool agent. tokenizer
Any
The tokenizer used by the tool agent. json_schema
Any
The JSON schema used by the tool agent."},{"location":"swarms/agents/tool_agent/#methods","title":"Methods","text":""},{"location":"swarms/agents/tool_agent/#run","title":"run
","text":"def run(self, task: str, *args, **kwargs) -> Any:\n
Parameters:
Parameter Type Descriptiontask
str
The task to be performed by the tool agent. *args
Variable length argument list Additional positional arguments. **kwargs
Arbitrary keyword arguments Additional keyword arguments. Returns:
Raises:
Exception
: If an error occurs during the execution of the tool agent.The ToolAgent
class provides a structured way to perform tasks using a model and tokenizer. It initializes with essential parameters and attributes, and the run
method facilitates the execution of the specified task.
The initialization of a ToolAgent
involves specifying its name, description, model, tokenizer, JSON schema, maximum number of tokens, optional parsing function, and optional large language model.
agent = ToolAgent(\n name=\"My Tool Agent\",\n description=\"A tool agent for specific tasks\",\n model=model,\n tokenizer=tokenizer,\n json_schema=json_schema,\n max_number_tokens=1000,\n parsing_function=my_parsing_function,\n llm=my_llm\n)\n
"},{"location":"swarms/agents/tool_agent/#running-a-task","title":"Running a Task","text":"To execute a task using the ToolAgent
, the run
method is called with the task description and any additional arguments or keyword arguments.
result = agent.run(\"Generate a person's information based on the given schema.\")\nprint(result)\n
"},{"location":"swarms/agents/tool_agent/#detailed-examples","title":"Detailed Examples","text":""},{"location":"swarms/agents/tool_agent/#example-1-basic-usage","title":"Example 1: Basic Usage","text":"from transformers import AutoModelForCausalLM, AutoTokenizer\nfrom swarms import ToolAgent\n\nmodel = AutoModelForCausalLM.from_pretrained(\"databricks/dolly-v2-12b\")\ntokenizer = AutoTokenizer.from_pretrained(\"databricks/dolly-v2-12b\")\n\njson_schema = {\n \"type\": \"object\",\n \"properties\": {\n \"name\": {\"type\": \"string\"},\n \"age\": {\"type\": \"number\"},\n \"is_student\": {\"type\": \"boolean\"},\n \"courses\": {\n \"type\": \"array\",\n \"items\": {\"type\": \"string\"}\n }\n }\n}\n\ntask = \"Generate a person's information based on the following schema:\"\nagent = ToolAgent(model=model, tokenizer=tokenizer, json_schema=json_schema)\ngenerated_data = agent.run(task)\n\nprint(generated_data)\n
"},{"location":"swarms/agents/tool_agent/#example-2-using-a-parsing-function","title":"Example 2: Using a Parsing Function","text":"def parse_output(output):\n # Custom parsing logic\n return output\n\nagent = ToolAgent(\n name=\"Parsed Tool Agent\",\n description=\"A tool agent with a parsing function\",\n model=model,\n tokenizer=tokenizer,\n json_schema=json_schema,\n parsing_function=parse_output\n)\n\ntask = \"Generate a person's information with custom parsing:\"\nparsed_data = agent.run(task)\n\nprint(parsed_data)\n
"},{"location":"swarms/agents/tool_agent/#example-3-specifying-maximum-number-of-tokens","title":"Example 3: Specifying Maximum Number of Tokens","text":"agent = ToolAgent(\n name=\"Token Limited Tool Agent\",\n description=\"A tool agent with a token limit\",\n model=model,\n tokenizer=tokenizer,\n json_schema=json_schema,\n max_number_tokens=200\n)\n\ntask = \"Generate a concise person's information:\"\nlimited_data = agent.run(task)\n\nprint(limited_data)\n
"},{"location":"swarms/agents/tool_agent/#full-usage","title":"Full Usage","text":"from pydantic import BaseModel, Field\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\n\nfrom swarms import ToolAgent\nfrom swarms.tools.json_utils import base_model_to_json\n\n# Model name\nmodel_name = \"CohereForAI/c4ai-command-r-v01-4bit\"\n\n# Load the pre-trained model and tokenizer\nmodel = AutoModelForCausalLM.from_pretrained(\n model_name,\n device_map=\"auto\",\n)\n\n# Load the pre-trained model and tokenizer\ntokenizer = AutoTokenizer.from_pretrained(model_name)\n\n\n# Initialize the schema for the person's information\nclass APIExampleRequestSchema(BaseModel):\n endpoint: str = Field(\n ..., description=\"The API endpoint for the example request\"\n )\n method: str = Field(\n ..., description=\"The HTTP method for the example request\"\n )\n headers: dict = Field(\n ..., description=\"The headers for the example request\"\n )\n body: dict = Field(..., description=\"The body of the example request\")\n response: dict = Field(\n ...,\n description=\"The expected response of the example request\",\n )\n\n\n# Convert the schema to a JSON string\napi_example_schema = base_model_to_json(APIExampleRequestSchema)\n# Convert the schema to a JSON string\n\n# Define the task to generate a person's information\ntask = \"Generate an example API request using this code:\\n\"\n\n# Create an instance of the ToolAgent class\nagent = ToolAgent(\n name=\"Command R Tool Agent\",\n description=(\n \"An agent that generates an API request using the Command R\"\n \" model.\"\n ),\n model=model,\n tokenizer=tokenizer,\n json_schema=api_example_schema,\n)\n\n# Run the agent to generate the person's information\ngenerated_data = agent.run(task)\n\n# Print the generated data\nprint(f\"Generated data: {generated_data}\")\n
"},{"location":"swarms/agents/tool_agent/#jamba-toolagent","title":"Jamba ++ ToolAgent","text":"from pydantic import BaseModel, Field\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\n\nfrom swarms import ToolAgent\nfrom swarms.tools.json_utils import base_model_to_json\n\n# Model name\nmodel_name = \"ai21labs/Jamba-v0.1\"\n\n# Load the pre-trained model and tokenizer\nmodel = AutoModelForCausalLM.from_pretrained(\n model_name,\n device_map=\"auto\",\n)\n\n# Load the pre-trained model and tokenizer\ntokenizer = AutoTokenizer.from_pretrained(model_name)\n\n\n# Initialize the schema for the person's information\nclass APIExampleRequestSchema(BaseModel):\n endpoint: str = Field(\n ..., description=\"The API endpoint for the example request\"\n )\n method: str = Field(\n ..., description=\"The HTTP method for the example request\"\n )\n headers: dict = Field(\n ..., description=\"The headers for the example request\"\n )\n body: dict = Field(..., description=\"The body of the example request\")\n response: dict = Field(\n ...,\n description=\"The expected response of the example request\",\n )\n\n\n# Convert the schema to a JSON string\napi_example_schema = base_model_to_json(APIExampleRequestSchema)\n# Convert the schema to a JSON string\n\n# Define the task to generate a person's information\ntask = \"Generate an example API request using this code:\\n\"\n\n# Create an instance of the ToolAgent class\nagent = ToolAgent(\n name=\"Command R Tool Agent\",\n description=(\n \"An agent that generates an API request using the Command R\"\n \" model.\"\n ),\n model=model,\n tokenizer=tokenizer,\n json_schema=api_example_schema,\n)\n\n# Run the agent to generate the person's information\ngenerated_data = agent(task)\n\n# Print the generated data\nprint(f\"Generated data: {generated_data}\")\n
"},{"location":"swarms/agents/tool_agent/#additional-information-and-tips","title":"Additional Information and Tips","text":"model
or llm
parameter is provided during initialization. If neither is provided, the ToolAgent
will raise an exception.parsing_function
parameter is optional but can be very useful for post-processing the output of the tool agent.max_number_tokens
parameter to control the length of the generated output, depending on the requirements of the task.This documentation provides a comprehensive guide to the ToolAgent
class, including its initialization, usage, and practical examples. By following the detailed instructions and examples, developers can effectively utilize the ToolAgent
for various tasks involving model and tokenizer-based operations.
Artifact
","text":"The Artifact
class represents a file artifact, encapsulating the file's path, type, contents, versions, and edit count. This class provides a comprehensive way to manage file versions, edit contents, and handle various file-related operations such as saving, loading, and exporting to JSON.
The Artifact
class is particularly useful in contexts where file version control and content management are essential. By keeping track of the number of edits and maintaining a version history, it allows for robust file handling and auditability.
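The version-history idea can be modeled in a few lines of plain Python. This is a conceptual sketch only, not the swarms implementation: each edit appends a `FileVersion`-style snapshot and bumps the edit counter, which is what makes the audit trail possible.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List


@dataclass
class FileVersion:
    version_number: int
    content: str
    timestamp: datetime


@dataclass
class MiniArtifact:
    """Conceptual model of an artifact with version tracking."""
    file_path: str
    contents: str = ""
    versions: List[FileVersion] = field(default_factory=list)
    edit_count: int = 0

    def edit(self, new_content: str) -> None:
        # Every edit records a snapshot and bumps the counter.
        self.edit_count += 1
        self.contents = new_content
        self.versions.append(
            FileVersion(self.edit_count, new_content, datetime.now())
        )

    def get_version(self, version_number: int) -> FileVersion:
        return self.versions[version_number - 1]


art = MiniArtifact("example.txt")
art.edit("first draft")
art.edit("second draft")
assert art.edit_count == 2
assert art.get_version(1).content == "first draft"
print(art.contents)
```

The real class adds validation, persistence, and JSON import/export on top of this core, as the method reference below shows.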
file_path
str
N/A The path to the file. file_type
str
N/A The type of the file. contents
str
\"\"
The contents of the file. versions
List[FileVersion]
[]
The list of file versions. edit_count
int
0
The number of times the file has been edited."},{"location":"swarms/artifacts/artifact/#parameters-and-validation","title":"Parameters and Validation","text":"file_path
: A string representing the file path.file_type
: A string representing the file type. This attribute is validated to ensure it matches supported file types based on the file extension if not provided.contents
: A string representing the contents of the file. Defaults to an empty string.versions
: A list of FileVersion
instances representing the version history of the file. Defaults to an empty list.edit_count
: An integer representing the number of edits made to the file. Defaults to 0.The Artifact
class includes various methods for creating, editing, saving, loading, and exporting file artifacts.
create
","text":"Parameter Type Description initial_content
str
The initial content of the file. Usage Example:
artifact = Artifact(file_path=\"example.txt\", file_type=\"txt\")\nartifact.create(initial_content=\"Initial file content\")\n
The file type parameter supports the following file types: .txt
, .md
, .py
, .pdf
."},{"location":"swarms/artifacts/artifact/#edit","title":"edit
","text":"Parameter Type Description new_content
str
The new content of the file. Usage Example:
artifact.edit(new_content=\"Updated file content\")\n
"},{"location":"swarms/artifacts/artifact/#save","title":"save
","text":"Usage Example:
artifact.save()\n
"},{"location":"swarms/artifacts/artifact/#load","title":"load
","text":"Usage Example:
artifact.load()\n
"},{"location":"swarms/artifacts/artifact/#get_version","title":"get_version
","text":"Parameter Type Description version_number
int
The version number to retrieve. Usage Example:
version = artifact.get_version(version_number=1)\n
"},{"location":"swarms/artifacts/artifact/#get_contents","title":"get_contents
","text":"Usage Example:
current_contents = artifact.get_contents()\n
"},{"location":"swarms/artifacts/artifact/#get_version_history","title":"get_version_history
","text":"Usage Example:
version_history = artifact.get_version_history()\n
"},{"location":"swarms/artifacts/artifact/#export_to_json","title":"export_to_json
","text":"Parameter Type Description file_path
str
The path to the JSON file to save the artifact. Usage Example:
artifact.export_to_json(file_path=\"artifact.json\")\n
"},{"location":"swarms/artifacts/artifact/#import_from_json","title":"import_from_json
","text":"Parameter Type Description file_path
str
The path to the JSON file to import the artifact from. Usage Example:
imported_artifact = Artifact.import_from_json(file_path=\"artifact.json\")\n
"},{"location":"swarms/artifacts/artifact/#get_metrics","title":"get_metrics
","text":"Usage Example:
metrics = artifact.get_metrics()\n
"},{"location":"swarms/artifacts/artifact/#to_dict","title":"to_dict
","text":"Usage Example:
artifact_dict = artifact.to_dict()\n
"},{"location":"swarms/artifacts/artifact/#from_dict","title":"from_dict
","text":"Parameter Type Description data
Dict[str, Any]
The dictionary representation of the artifact. Usage Example:
artifact_data = {\n \"file_path\": \"example.txt\",\n \"file_type\": \"txt\",\n \"contents\": \"File content\",\n \"versions\": [],\n \"edit_count\": 0\n}\nartifact = Artifact.from_dict(artifact_data)\n
"},{"location":"swarms/artifacts/artifact/#additional-information-and-tips","title":"Additional Information and Tips","text":"Artifact
class uses the pydantic
library to handle data validation and serialization.file_path
is set correctly to avoid file operation errors.get_version
and get_version_history
methods to maintain a clear audit trail of changes to the file.export_to_json
and import_from_json
methods are useful for backing up and restoring the state of an artifact.from datetime import datetime\nfrom pydantic import BaseModel, Field, validator\nfrom typing import List, Dict, Any, Union\nimport os\nimport json\n\n# Define FileVersion class\nclass FileVersion(BaseModel):\n version_number: int\n content: str\n timestamp: datetime\n\n# Artifact class definition goes here\n\n# Create an artifact\nartifact = Artifact(file_path=\"example.txt\", file_type=\"txt\")\nartifact.create(initial_content=\"Initial file content\")\n\n# Edit the artifact\nartifact.edit(new_content=\"Updated file content\")\n\n# Save the artifact to a file\nartifact.save()\n\n# Load the artifact from the file\nartifact.load()\n\n# Print the current contents of the artifact\nprint(artifact.get_contents())\n\n# Print the version history\nprint(artifact.get_version_history())\n
"},{"location":"swarms/artifacts/artifact/#example-2-exporting-and-importing-an-artifact","title":"Example 2: Exporting and Importing an Artifact","text":"# Export the artifact to a JSON file\nartifact.export_to_json(file_path=\"artifact.json\")\n\n# Import\n\n the artifact from a JSON file\nimported_artifact = Artifact.import_from_json(file_path=\"artifact.json\")\n\n# Print the metrics of the imported artifact\nprint(imported_artifact.get_metrics())\n
"},{"location":"swarms/artifacts/artifact/#example-3-converting-an-artifact-to-and-from-a-dictionary","title":"Example 3: Converting an Artifact to and from a Dictionary","text":"# Convert the artifact to a dictionary\nartifact_dict = artifact.to_dict()\n\n# Create a new artifact from the dictionary\nnew_artifact = Artifact.from_dict(artifact_dict)\n\n# Print the metrics of the new artifact\nprint(new_artifact.get_metrics())\n
"},{"location":"swarms/changelog/5_6_8/","title":"Swarms ChangeLog 5.6.8 -","text":"The biggest update in Swarms history! We've introduced major fixes, updates, and new features to enhance your agent workflows and performance. To get the latest updates run the following:
"},{"location":"swarms/changelog/5_6_8/#installation","title":"Installation","text":"$ pip3 install -U swarms\n
"},{"location":"swarms/changelog/5_6_8/#log","title":"Log","text":"Here\u2019s the breakdown of the latest changes:
"},{"location":"swarms/changelog/5_6_8/#fixes","title":"\ud83d\udc1e Fixes:","text":"swarms.models
module into its own package: swarm_models
for improved code organization.agents
class for streamlined and efficient operations.AgentRearrange
, SpreadsheetSwarm
, and other swarms, improving data handling.Agent
class with JSON metadata output, supporting OpenAI-like API responses with output_type=\"json\"
and return_step_meta=True
.ForestSwarm
, a new architecture that clusters agents into trees, enabling precise task execution.AgentRegistry
, allowing you to store multiple agents for future use.Ready to dive in? Get started now: https://buff.ly/444kDjA
"},{"location":"swarms/changelog/5_8_1/","title":"Swarms 5.8.1 Feature Documentation","text":""},{"location":"swarms/changelog/5_8_1/#1-enhanced-command-line-interface-cli","title":"1. Enhanced Command Line Interface (CLI)","text":""},{"location":"swarms/changelog/5_8_1/#11-integrated-onboarding-process","title":"1.1 Integrated Onboarding Process","text":"$ swarms onboarding\n
"},{"location":"swarms/changelog/5_8_1/#12-run-agents-command","title":"1.2 Run Agents Command","text":"$ swarms run-agents --yaml-file agents.yaml\n
This command allows users to execute multiple agents defined in a YAML file. Here's the process:
agents.yaml
in this case).max_loops
, autosave
, and verbose
.The YAML file structure allows users to define multiple agents with different configurations, making it easy to run complex, multi-agent tasks from the command line.
"},{"location":"swarms/changelog/5_8_1/#13-generate-prompt-feature","title":"1.3 Generate Prompt Feature","text":"$ swarms generate-prompt --prompt \"Create a marketing strategy for a new product launch\"\n
This feature leverages Swarms' language model to generate expanded or refined prompts:
This feature can help users create more effective prompts for their agents or other AI tasks.
"},{"location":"swarms/changelog/5_8_1/#2-new-prompt-management-system","title":"2. New Prompt Management System","text":""},{"location":"swarms/changelog/5_8_1/#21-prompt-class","title":"2.1 Prompt Class","text":"The new Prompt
class provides a robust system for managing and versioning prompts:
from swarms import Prompt\n\nmarketing_prompt = Prompt(content=\"Initial marketing strategy draft\", autosave=True)\n\nprint(marketing_prompt.get_prompt())\n
Key features of the Prompt
class:
Initialization: The class is initialized with initial content and an autosave
option.
Editing:
marketing_prompt.edit_prompt(\"Updated marketing strategy with social media focus\")\n
This method updates the prompt content and, if autosave
is True, automatically saves the new version. Retrieval:
current_content = marketing_prompt.get_prompt()\n
This method returns the current content of the prompt. Version History:
print(f\"Edit history: {marketing_prompt.edit_history}\")\n
The class maintains a history of edits, allowing users to track changes over time. Rollback:
marketing_prompt.rollback(1)\n
This feature allows users to revert to a previous version of the prompt. Duplicate Prevention: The class includes logic to prevent duplicate edits, raising a ValueError
if an attempt is made to save the same content twice in a row.
This system provides a powerful way to manage prompts, especially for complex projects where prompt engineering and iteration are crucial.
"},{"location":"swarms/changelog/5_8_1/#3-upcoming-features-preview","title":"3. Upcoming Features Preview","text":""},{"location":"swarms/changelog/5_8_1/#31-enhanced-agent-execution-capabilities","title":"3.1 Enhanced Agent Execution Capabilities","text":"The preview code demonstrates planned enhancements for agent execution:
from swarms import Agent, ExecutionEnvironment\n\nmy_agent = Agent(name=\"data_processor\")\n\ncpu_env = ExecutionEnvironment(type=\"cpu\", cores=4)\nmy_agent.run(environment=cpu_env)\n\ngpu_env = ExecutionEnvironment(type=\"gpu\", device_id=0)\nmy_agent.run(environment=gpu_env)\n\nfractional_env = ExecutionEnvironment(type=\"fractional\", cpu_fraction=0.5, gpu_fraction=0.3)\nmy_agent.run(environment=fractional_env)\n
This upcoming feature will allow for more fine-grained control over the execution environment:
These features will provide users with greater flexibility in resource allocation, potentially improving performance and allowing for more efficient use of available hardware.
"},{"location":"swarms/changelog/6_0_0%202/","title":"Swarms 6.0.0 - Performance & Reliability Update \ud83d\ude80","text":"We're excited to announce the release of Swarms 6.0.0, bringing significant improvements to performance, reliability, and developer experience. This release focuses on streamlining core functionalities while enhancing the overall stability of the framework.
"},{"location":"swarms/changelog/6_0_0%202/#installation","title":"\ud83d\udce6 Installation","text":"pip3 install -U swarms\n
"},{"location":"swarms/changelog/6_0_0%202/#highlights","title":"\ud83c\udf1f Highlights","text":""},{"location":"swarms/changelog/6_0_0%202/#agent-enhancements","title":"Agent Enhancements","text":"load()
function__init__
Join our growing team! We're currently looking for: - Agent Engineers - Developer Relations - Infrastructure Engineers - And more!
"},{"location":"swarms/changelog/6_0_0%202/#get-involved","title":"Get Involved","text":"Have ideas for features, bug fixes, or improvements? We'd love to hear from you! Reach out through our GitHub issues or email us directly.
Thank you to all our contributors and users who make Swarms better every day. Together, we're building the future of swarm intelligence.
"},{"location":"swarms/changelog/6_0_0%202/#swarmai-opensource-ai-machinelearning","title":"SwarmAI #OpenSource #AI #MachineLearning","text":""},{"location":"swarms/changelog/6_0_0/","title":"Swarms 6.0.0 - Performance & Reliability Update \ud83d\ude80","text":"We're excited to announce the release of Swarms 6.0.0, bringing significant improvements to performance, reliability, and developer experience. This release focuses on streamlining core functionalities while enhancing the overall stability of the framework.
"},{"location":"swarms/changelog/6_0_0/#installation","title":"\ud83d\udce6 Installation","text":"pip3 install -U swarms\n
"},{"location":"swarms/changelog/6_0_0/#highlights","title":"\ud83c\udf1f Highlights","text":""},{"location":"swarms/changelog/6_0_0/#agent-enhancements","title":"Agent Enhancements","text":"load()
function__init__
Join our growing team! We're currently looking for: - Agent Engineers - Developer Relations - Infrastructure Engineers - And more!
"},{"location":"swarms/changelog/6_0_0/#get-involved","title":"Get Involved","text":"Have ideas for features, bug fixes, or improvements? We'd love to hear from you! Reach out through our GitHub issues or email us directly.
Thank you to all our contributors and users who make Swarms better every day. Together, we're building the future of swarm intelligence.
"},{"location":"swarms/changelog/6_0_0/#swarmai-opensource-ai-machinelearning","title":"SwarmAI #OpenSource #AI #MachineLearning","text":""},{"location":"swarms/changelog/changelog_new/","title":"\ud83d\ude80 Swarms 5.9.2 Release Notes","text":""},{"location":"swarms/changelog/changelog_new/#major-features","title":"\ud83c\udfaf Major Features","text":""},{"location":"swarms/changelog/changelog_new/#concurrent-agent-execution-suite","title":"Concurrent Agent Execution Suite","text":"We're excited to introduce a comprehensive suite of agent execution methods to supercharge your multi-agent workflows:
run_agents_concurrently
: Execute multiple agents in parallel with optimal resource utilizationrun_agents_concurrently_async
: Asynchronous execution for improved performancerun_single_agent
: Streamlined single agent executionrun_agents_concurrently_multiprocess
: Multi-process execution for CPU-intensive tasksrun_agents_sequentially
: Sequential execution with controlled flowrun_agents_with_different_tasks
: Assign different tasks to different agentsrun_agent_with_timeout
: Time-bounded agent executionrun_agents_with_resource_monitoring
: Monitor and manage resource usageagent_workspace
from swarms import Agent, run_agents_concurrently, run_agents_with_timeout, run_agents_with_different_tasks\n\n# Initialize multiple agents\nagents = [\n Agent(\n agent_name=f\"Analysis-Agent-{i}\",\n system_prompt=\"You are a financial analysis expert\",\n llm=model,\n max_loops=1\n )\n for i in range(5)\n]\n\n# Run agents concurrently\ntask = \"Analyze the impact of rising interest rates on tech stocks\"\noutputs = run_agents_concurrently(agents, task)\n\n# Example with timeout\noutputs_with_timeout = run_agents_with_timeout(\n agents=agents,\n task=task,\n timeout=30.0,\n batch_size=2\n)\n\n# Run different tasks\ntask_pairs = [\n (agents[0], \"Analyze tech stocks\"),\n (agents[1], \"Analyze energy stocks\"),\n (agents[2], \"Analyze retail stocks\")\n]\ndifferent_outputs = run_agents_with_different_tasks(task_pairs)\n
"},{"location":"swarms/changelog/changelog_new/#installation","title":"Installation","text":"pip3 install -U swarms\n
"},{"location":"swarms/changelog/changelog_new/#coming-soon","title":"Coming Soon","text":"We believe in the power of community-driven development. Help us make Swarms better!
For detailed documentation and examples, visit our GitHub repository.
Let's build the future of multi-agent systems together! \ud83d\ude80
"},{"location":"swarms/cli/cli_guide/","title":"The Ultimate Technical Guide to the Swarms CLI: A Step-by-Step Developer\u2019s Guide","text":"Welcome to the definitive technical guide for using the Swarms Command Line Interface (CLI). The Swarms CLI enables developers, engineers, and business professionals to seamlessly manage and run Swarms of agents from the command line. This guide will walk you through the complete process of installing, configuring, and using the Swarms CLI to orchestrate intelligent agents for your needs.
By following this guide, you will not only understand how to install and use the Swarms CLI but also learn about real-world use cases, including how the CLI is used to automate tasks across various industries, from finance to marketing, operations, and beyond.
Explore the official Swarms GitHub repository, dive into the comprehensive documentation at Swarms Docs, and explore the vast marketplace of agents on swarms.ai to kickstart your journey with Swarms!
"},{"location":"swarms/cli/cli_guide/#1-installing-the-swarms-cli","title":"1. Installing the Swarms CLI","text":"Before we explore the Swarms CLI commands, let\u2019s get it installed and running on your machine.
"},{"location":"swarms/cli/cli_guide/#11-installation-using-pip","title":"1.1. Installation Usingpip
","text":"For most users, the simplest way to install the Swarms CLI is through pip
:
pip3 install -U swarms\n
This command installs the latest version of the Swarms CLI package, ensuring that you have the newest features and fixes.
"},{"location":"swarms/cli/cli_guide/#12-installation-using-poetry","title":"1.2. Installation UsingPoetry
","text":"Alternatively, if you are using Poetry
as your Python package manager, you can add the Swarms package like this:
poetry add swarms\n
Once installed, you can run the Swarms CLI directly using:
poetry run swarms help\n
This command shows all the available options and commands, as we will explore in-depth below.
"},{"location":"swarms/cli/cli_guide/#2-understanding-swarms-cli-commands","title":"2. Understanding Swarms CLI Commands","text":"With the Swarms CLI installed, the next step is to explore its key functionalities. Here are the most essential commands:
"},{"location":"swarms/cli/cli_guide/#21-onboarding-setup-your-environment","title":"2.1.onboarding
: Setup Your Environment","text":"The onboarding
command guides you through setting up your environment and configuring the agents for your Swarms.
swarms onboarding\n
This is the first step when you begin working with the Swarms platform. It helps to:
help
: Learn Available Commands","text":"Running help
displays the various commands you can use:
swarms help\n
This command will output a helpful list like the one shown below, including detailed descriptions of each command.
Swarms CLI - Help\n\nCommands:\nonboarding : Starts the onboarding process\nhelp : Shows this help message\nget-api-key : Retrieves your API key from the platform\ncheck-login : Checks if you're logged in and starts the cache\nread-docs : Redirects you to swarms cloud documentation\nrun-agents : Run your Agents from your agents.yaml\n
"},{"location":"swarms/cli/cli_guide/#23-get-api-key-access-api-integration","title":"2.3. get-api-key
: Access API Integration","text":"One of the key functionalities of the Swarms platform is integrating your agents with the Swarms API. To retrieve your unique API key for communication, use this command:
swarms get-api-key\n
Your API key is essential to enable agent workflows and access various services through the Swarms platform.
"},{"location":"swarms/cli/cli_guide/#24-check-login-verify-authentication","title":"2.4.check-login
: Verify Authentication","text":"Use the check-login
command to verify if you're logged in and ensure that your credentials are cached:
swarms check-login\n
This ensures seamless operation, allowing agents to execute tasks securely on the Swarms platform without needing to log in repeatedly.
"},{"location":"swarms/cli/cli_guide/#25-read-docs-explore-official-documentation","title":"2.5.read-docs
: Explore Official Documentation","text":"Easily access the official documentation with this command:
swarms read-docs\n
You\u2019ll be redirected to the Swarms documentation site, Swarms Docs, where you'll find in-depth explanations, advanced use-cases, and more.
"},{"location":"swarms/cli/cli_guide/#26-run-agents-orchestrate-agents","title":"2.6.run-agents
: Orchestrate Agents","text":"Perhaps the most important command in the CLI is run-agents
, which allows you to execute your agents as defined in your agents.yaml
configuration file.
swarms run-agents --yaml-file agents.yaml\n
If you want to specify a custom configuration file, just pass in the YAML file using the --yaml-file
flag.
"},{"location":"swarms/cli/cli_guide/#3-setting-up-your-agentsyaml-configuration-file","title":"3. Setting Up Your agents.yaml
Configuration File","text":"The agents.yaml
file is at the heart of your Swarms setup. This file allows you to define the structure and behavior of each agent you want to run. Below is an example YAML configuration for two agents.
"},{"location":"swarms/cli/cli_guide/#31-example-agentsyaml-configuration","title":"3.1. Example agents.yaml
Configuration:","text":"agents:\n - agent_name: \"Financial-Advisor-Agent\"\n model:\n model_name: \"gpt-4o-mini\"\n temperature: 0.3\n max_tokens: 2500\n system_prompt: |\n You are a highly knowledgeable financial advisor with expertise in tax strategies, investment management, and retirement planning. \n Provide concise and actionable advice based on the user's financial goals and situation.\n max_loops: 1\n autosave: true\n dashboard: false\n verbose: true\n dynamic_temperature_enabled: true\n saved_state_path: \"financial_advisor_state.json\"\n user_name: \"finance_user\"\n retry_attempts: 2\n context_length: 200000\n return_step_meta: false\n output_type: \"str\"\n task: \"I am 35 years old with a moderate risk tolerance. How should I diversify my portfolio for retirement in 20 years?\"\n\n - agent_name: \"Stock-Market-Analysis-Agent\"\n model:\n model_name: \"gpt-4o-mini\"\n temperature: 0.25\n max_tokens: 1800\n system_prompt: |\n You are an expert stock market analyst with a deep understanding of technical analysis, market trends, and long-term investment strategies. \n Provide well-reasoned investment advice, taking current market conditions into account.\n max_loops: 2\n autosave: true\n dashboard: false\n verbose: true\n dynamic_temperature_enabled: false\n saved_state_path: \"stock_market_analysis_state.json\"\n user_name: \"market_analyst\"\n retry_attempts: 3\n context_length: 150000\n return_step_meta: true\n output_type: \"json\"\n task: \"Analyze the current market trends for tech stocks and suggest the best long-term investment options.\"\n\n - agent_name: \"Marketing-Strategy-Agent\"\n model:\n model_name: \"gpt-4o-mini\"\n temperature: 0.4\n max_tokens: 2200\n system_prompt: |\n You are a marketing strategist with expertise in digital campaigns, customer engagement, and branding. 
\n Provide a comprehensive marketing strategy to increase brand awareness and drive customer acquisition for an e-commerce business.\n max_loops: 1\n autosave: true\n dashboard: false\n verbose: true\n dynamic_temperature_enabled: true\n saved_state_path: \"marketing_strategy_state.json\"\n user_name: \"marketing_user\"\n retry_attempts: 2\n context_length: 200000\n return_step_meta: false\n output_type: \"str\"\n task: \"Create a 6-month digital marketing strategy for a new eco-friendly e-commerce brand targeting millennial consumers.\"\n\n - agent_name: \"Operations-Optimizer-Agent\"\n model:\n model_name: \"gpt-4o-mini\"\n temperature: 0.2\n max_tokens: 2000\n system_prompt: |\n You are an operations expert with extensive experience in optimizing workflows, reducing costs, and improving efficiency in supply chains. \n Provide actionable recommendations to streamline business operations.\n max_loops: 1\n autosave: true\n dashboard: false\n verbose: true\n dynamic_temperature_enabled: true\n saved_state_path: \"operations_optimizer_state.json\"\n user_name: \"operations_user\"\n retry_attempts: 1\n context_length: 200000\n return_step_meta: false\n output_type: \"str\"\n task: \"Identify ways to improve the efficiency of a small manufacturing company\u2019s supply chain to reduce costs by 15% within one year.\"\n
"},{"location":"swarms/cli/cli_guide/#32-explanation-of-key-fields","title":"3.2. Explanation of Key Fields","text":"gpt-4o-mini
is used.true
or false
depending on whether you want to enable the agent\u2019s dashboard.agents.yaml
","text":"After configuring the agents, you can execute them directly from the CLI:
swarms run-agents --yaml-file agents_config.yaml\n
This command will run the specified agents, allowing them to perform their tasks and return results according to your configuration.
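Before invoking run-agents, it can help to sanity-check the configuration programmatically. A minimal sketch of such a pre-flight check, assuming the field names shown in the example YAML above (validate_agent_config is a hypothetical helper, not part of the Swarms CLI):

```python
# Hypothetical pre-flight check for an agents.yaml-style config entry.
# Field names mirror the example configuration above; not part of the Swarms CLI.
REQUIRED_FIELDS = {"agent_name", "model", "system_prompt", "task"}

def validate_agent_config(agent: dict) -> list:
    """Return a sorted list of required fields missing from one agent entry."""
    return sorted(REQUIRED_FIELDS - agent.keys())

agent = {
    "agent_name": "Financial-Advisor-Agent",
    "model": {"model_name": "gpt-4o-mini", "temperature": 0.3},
    "task": "Diversify a retirement portfolio.",
}
print(validate_agent_config(agent))  # ['system_prompt'] -- the prompt is missing
```

Running such a check before the CLI call surfaces configuration mistakes early instead of mid-run.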
"},{"location":"swarms/cli/cli_guide/#4-use-cases-for-the-swarms-cli","title":"4. Use Cases for the Swarms CLI","text":"Now that you have a solid understanding of the basic commands and the agents.yaml
configuration, let's explore how the Swarms CLI can be applied in real-world scenarios.
For financial firms or hedge funds, agents like the \"Financial-Advisor-Agent\" can be set up to automate complex financial analyses. You could have agents analyze market trends, recommend portfolio adjustments, or perform tax optimizations.
Example Task: Automating long-term investment analysis using historical stock data.
swarms run-agents --yaml-file finance_analysis.yaml\n
"},{"location":"swarms/cli/cli_guide/#42-marketing-automation","title":"4.2. Marketing Automation","text":"Marketing departments can utilize Swarms agents to optimize campaigns, generate compelling ad copy, or provide detailed marketing insights. You can create a Marketing-Agent
to process customer feedback, perform sentiment analysis, and suggest marketing strategies.
Example Task: Running multiple agents to analyze customer sentiment from recent surveys.
swarms run-agents --yaml-file marketing_agents.yaml\n
"},{"location":"swarms/cli/cli_guide/#43-operations-and-task-management","title":"4.3. Operations and Task Management","text":"Companies can create agents for automating internal task management. For example, you might have a set of agents responsible for managing deadlines, employee tasks, and progress tracking.
Example Task: Automating a task management system using Swarms agents.
swarms run-agents --yaml-file operations_agents.yaml\n
"},{"location":"swarms/cli/cli_guide/#5-advanced-usage-customizing-and-scaling-agents","title":"5. Advanced Usage: Customizing and Scaling Agents","text":"The Swarms CLI is flexible and scalable. As your needs grow, you can start running agents across multiple machines, scale workloads dynamically, and even run multiple swarms in parallel.
"},{"location":"swarms/cli/cli_guide/#51-running-agents-in-parallel","title":"5.1. Running Agents in Parallel","text":"To run multiple agents concurrently, you can utilize different YAML configurations for each agent or group of agents. This allows for extensive scaling, especially when dealing with large datasets or complex workflows.
swarms run-agents --yaml-file agents_batch_1.yaml &\nswarms run-agents --yaml-file agents_batch_2.yaml &\n
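The same fan-out can be driven from Python. A sketch that builds one CLI invocation per batch file (the batch file names are illustrative; actually launching the processes is left to subprocess, assuming swarms is on your PATH):

```python
import shlex

def build_run_command(yaml_file: str) -> list:
    """Build the argv for one `swarms run-agents` invocation."""
    return ["swarms", "run-agents", "--yaml-file", yaml_file]

# Illustrative batch files; substitute your own YAML configurations.
batches = ["agents_batch_1.yaml", "agents_batch_2.yaml"]
commands = [build_run_command(b) for b in batches]
for cmd in commands:
    # e.g. pass cmd to subprocess.Popen(cmd) to launch each batch concurrently
    print(shlex.join(cmd))
```

Building the argv list separately from launching it keeps the command easy to test and log before anything is executed.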
"},{"location":"swarms/cli/cli_guide/#52-integration-with-other-tools","title":"5.2. Integration with Other Tools","text":"The Swarms CLI integrates with many tools and platforms via APIs. You can connect Swarms with external platforms such as AWS, Azure, or your custom cloud setup for enterprise-level automation.
"},{"location":"swarms/cli/cli_guide/#6-conclusion-and-next-steps","title":"6. Conclusion and Next Steps","text":"The Swarms CLI is a powerful tool for automating agent workflows in various industries, including finance, marketing, and operations. By following this guide, you should now have a thorough understanding of how to install and use the CLI, configure agents, and apply it to real-world use cases.
To further explore Swarms, be sure to check out the official Swarms GitHub repository, where you can contribute to the framework or build your own custom agents. Dive deeper into the documentation at Swarms Docs, and browse the extensive agent marketplace at swarms.ai.
With the Swarms CLI, the future of automation is within reach.
"},{"location":"swarms/cli/main/","title":"Swarms CLI Documentation","text":"The Swarms Command Line Interface (CLI) allows you to easily manage and run your Swarms of agents from the command line. This page will guide you through the installation process and provide a breakdown of the available commands.
"},{"location":"swarms/cli/main/#installation","title":"Installation","text":"You can install the swarms
package using pip
or poetry
.
pip3 install -U swarms\n
"},{"location":"swarms/cli/main/#using-poetry","title":"Using poetry","text":"poetry add swarms\n
Once installed, you can run the Swarms CLI with the following command:
poetry run swarms help\n
"},{"location":"swarms/cli/main/#swarms-cli-help","title":"Swarms CLI - Help","text":"When running swarms help
, you'll see the following output:
_________ \n / _____/_ _ _______ _______ _____ ______\n \\_____ \\ \\/ \\/ /\\__ \\_ __ \\/ \\ / ___/\n / \\ / / __ \\| | \\/ Y Y \\___ \\ \n/_______ / \\/\\_/ (____ /__| |__|_| /____ >\n \\/ \\/ \\/ \\/ \n\n\n\n Swarms CLI - Help\n\n Commands:\n onboarding : Starts the onboarding process\n help : Shows this help message\n get-api-key : Retrieves your API key from the platform\n check-login : Checks if you're logged in and starts the cache\n read-docs : Redirects you to swarms cloud documentation!\n run-agents : Run your Agents from your agents.yaml\n\n For more details, visit: https://docs.swarms.world\n
"},{"location":"swarms/cli/main/#cli-commands","title":"CLI Commands","text":"Below is a detailed explanation of the available commands:
Usage:
swarms onboarding\n
Usage:
swarms help\n
Usage:
swarms get-api-key\n
Usage:
swarms check-login\n
Usage:
swarms read-docs\n
agents.yaml
 configuration file, which defines the structure and behavior of your agents. Refer to the YAML agent creation documentation for how to leverage YAML files for fast, reliable, and simple agent orchestration. You can customize which YAML file to run with the --yaml-file
Usage:
swarms run-agents --yaml-file agents.yaml\n
"},{"location":"swarms/concept/framework_architecture/","title":"Swarms Framework Architecture","text":"The Swarms package is designed to orchestrate and manage swarms of agents, enabling collaboration between multiple Large Language Models (LLMs) or other agent types to solve complex tasks. The architecture is modular and scalable, facilitating seamless integration of various agents, models, prompts, and tools. Below is an overview of the architectural components, along with instructions on where to find the corresponding documentation.
swarms/\n\u251c\u2500\u2500 agents/\n\u251c\u2500\u2500 artifacts/\n\u251c\u2500\u2500 cli/\n\u251c\u2500\u2500 memory/\n\u251c\u2500\u2500 models/ ---> Moved to swarm_models\n\u251c\u2500\u2500 prompts/\n\u251c\u2500\u2500 schemas/\n\u251c\u2500\u2500 structs/\n\u251c\u2500\u2500 telemetry/\n\u251c\u2500\u2500 tools/\n\u251c\u2500\u2500 utils/\n\u2514\u2500\u2500 __init__.py\n
"},{"location":"swarms/concept/framework_architecture/#role-of-folders-in-the-swarms-framework","title":"Role of Folders in the Swarms Framework","text":"The Swarms framework is composed of several key folders, each serving a specific role in building, orchestrating, and managing swarms of agents. Below is an in-depth explanation of the role of each folder in the framework's architecture, focusing on how they contribute to the overall system for handling complex multi-agent workflows.
"},{"location":"swarms/concept/framework_architecture/#1-agents-folder-agents","title":"1. Agents Folder (agents/
)","text":"artifacts/
)","text":"cli/
)","text":"memory/
) Deprecated!!","text":"models/
) Moved to swarm_models
","text":"prompts/
)","text":"schemas/
)","text":"structs/
)","text":"telemetry/
)","text":"tools/
)","text":"utils/
)","text":"__init__.py
)","text":"__init__.py
file is the entry point of the Swarms package, ensuring that all necessary modules, agents, and tools are loaded when the Swarms framework is imported. It allows for the modular loading of different components, making it easier for users to work with only the parts of the framework they need.Here, users can find detailed guides, tutorials, and API references on how to use each of the folders mentioned above. The documentation covers setup, agent orchestration, and practical examples of how to leverage swarms for real-world tasks.
The full source code is available in the Swarms GitHub repository.
By understanding the purpose and role of each folder in the Swarms framework, users can more effectively build, orchestrate, and manage agents to handle complex tasks and workflows at scale.
"},{"location":"swarms/concept/framework_architecture/#support","title":"Support:","text":"Post your issue whether it's an issue or a feature request
Community Support
Overview: A Federated Swarm architecture involves multiple independent swarms collaborating to complete a task. Each swarm operates autonomously but can share information and results with other swarms.
Use-Cases: - Distributed learning systems where data is processed across multiple nodes.
graph TD\n A[Central Coordinator]\n subgraph Swarm1\n B1[Agent 1.1] --> B2[Agent 1.2]\n B2 --> B3[Agent 1.3]\n end\n subgraph Swarm2\n C1[Agent 2.1] --> C2[Agent 2.2]\n C2 --> C3[Agent 2.3]\n end\n subgraph Swarm3\n D1[Agent 3.1] --> D2[Agent 3.2]\n D2 --> D3[Agent 3.3]\n end\n B1 --> A\n C1 --> A\n D1 --> A
"},{"location":"swarms/concept/future_swarm_architectures/#star-swarm","title":"Star Swarm","text":"Overview: A Star Swarm architecture features a central agent that coordinates the activities of several peripheral agents. The central agent assigns tasks to the peripheral agents and aggregates their results.
Use-Cases: - Centralized decision-making processes.
graph TD\n A[Central Agent] --> B1[Peripheral Agent 1]\n A --> B2[Peripheral Agent 2]\n A --> B3[Peripheral Agent 3]\n A --> B4[Peripheral Agent 4]
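In code, the star topology reduces to a central function that fans a task out to peripheral callables and aggregates their results. A minimal stand-alone sketch, where plain functions stand in for Swarms agents:

```python
def star_swarm(task: str, peripherals: list) -> dict:
    """Central agent: assign the task to each peripheral agent and aggregate results."""
    return {f"agent_{i + 1}": agent(task) for i, agent in enumerate(peripherals)}

# Illustrative peripheral agents: each just labels the task it received.
peripherals = [lambda t, i=i: f"peripheral {i} handled: {t}" for i in range(3)]
print(star_swarm("summarize report", peripherals))
```

The central agent is the single point of coordination, which is what makes this topology simple to reason about and also its main bottleneck.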
"},{"location":"swarms/concept/future_swarm_architectures/#mesh-swarm","title":"Mesh Swarm","text":"Overview: A Mesh Swarm architecture allows for a fully connected network of agents where each agent can communicate with any other agent. This setup provides high flexibility and redundancy.
Use-Cases: - Complex systems requiring high fault tolerance and redundancy.
graph TD\n A1[Agent 1] --> A2[Agent 2]\n A1 --> A3[Agent 3]\n A1 --> A4[Agent 4]\n A2 --> A3\n A2 --> A4\n A3 --> A4
"},{"location":"swarms/concept/future_swarm_architectures/#cascade-swarm","title":"Cascade Swarm","text":"Overview: A Cascade Swarm architecture involves a chain of agents where each agent triggers the next one in a cascade effect. This is useful for scenarios where tasks need to be processed in stages, and each stage initiates the next.
Use-Cases: - Multi-stage processing tasks such as data transformation pipelines.
graph TD\n A[Trigger Agent] --> B[Agent 1]\n B --> C[Agent 2]\n C --> D[Agent 3]\n D --> E[Agent 4]
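A cascade is essentially function composition over a list of stages: each agent's output becomes the next agent's input. A minimal sketch with plain string transformations standing in for agents:

```python
from functools import reduce

def cascade(task: str, stages: list) -> str:
    """Feed the trigger agent's task through each stage in order."""
    return reduce(lambda data, stage: stage(data), stages, task)

# Illustrative stages of a data transformation pipeline.
stages = [str.strip, str.lower, lambda s: s.replace(" ", "_")]
print(cascade("  Data Transformation Pipeline  ", stages))  # data_transformation_pipeline
```

Because each stage only sees its predecessor's output, stages can be added, removed, or reordered without touching the others.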
"},{"location":"swarms/concept/future_swarm_architectures/#hybrid-swarm","title":"Hybrid Swarm","text":"Overview: A Hybrid Swarm architecture combines elements of various architectures to suit specific needs. It might integrate hierarchical and parallel components, or mix sequential and round robin patterns.
Use-Cases: - Complex workflows requiring a mix of different processing strategies.
graph TD\n A[Root Agent] --> B1[Sub-Agent 1]\n A --> B2[Sub-Agent 2]\n B1 --> C1[Parallel Agent 1]\n B1 --> C2[Parallel Agent 2]\n B2 --> C3[Sequential Agent 1]\n C3 --> C4[Sequential Agent 2]\n C3 --> C5[Sequential Agent 3]
These swarm architectures provide different models for organizing and orchestrating large language models (LLMs) to perform various tasks efficiently. Depending on the specific requirements of your project, you can choose the appropriate architecture or even combine elements from multiple architectures to create a hybrid solution.
"},{"location":"swarms/concept/how_to_choose_swarms/","title":"Choosing the Right Swarm for Your Business Problem","text":"Depending on the complexity and nature of your problem, different swarm configurations can be more effective in achieving optimal performance. This guide provides a detailed explanation of when to use each swarm type, including their strengths and potential drawbacks.
"},{"location":"swarms/concept/how_to_choose_swarms/#swarm-types-overview","title":"Swarm Types Overview","text":"MajorityVoting is ideal for scenarios where accuracy is paramount, and the decision must be determined from multiple perspectives. For instance, choosing the best marketing strategy where various marketing agents vote on the highest predicted performance.
"},{"location":"swarms/concept/how_to_choose_swarms/#advantages","title":"Advantages","text":"Warning
Majority voting can be slow if too many agents are involved. Ensure that your swarm size is manageable for real-time decision-making.
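The decision rule itself is straightforward: tally the agents' answers and take the most common one. A minimal sketch of that tally, independent of any Swarms class:

```python
from collections import Counter

def majority_vote(answers: list) -> str:
    """Return the answer chosen by the most agents (ties broken by first seen)."""
    return Counter(answers).most_common(1)[0][0]

# Illustrative: five marketing agents voting on a strategy.
votes = ["strategy_a", "strategy_b", "strategy_a", "strategy_c", "strategy_a"]
print(majority_vote(votes))  # strategy_a
```

The cost that the warning above refers to is in producing the votes, not counting them: each additional agent adds a full model run before the tally can happen.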
"},{"location":"swarms/concept/how_to_choose_swarms/#agentrearrange-sequential-and-parallel","title":"AgentRearrange (Sequential and Parallel)","text":""},{"location":"swarms/concept/how_to_choose_swarms/#sequential-swarm-use-case","title":"Sequential Swarm Use-Case","text":"For linear workflows where each task depends on the outcome of the previous task, such as processing legal documents step by step through a series of checks and validations.
"},{"location":"swarms/concept/how_to_choose_swarms/#parallel-swarm-use-case","title":"Parallel Swarm Use-Case","text":"For tasks that can be executed concurrently, such as batch processing customer data in marketing campaigns. Parallel swarms can significantly reduce processing time by dividing tasks across multiple agents.
"},{"location":"swarms/concept/how_to_choose_swarms/#notes","title":"Notes","text":"Note
Sequential swarms are slower but ensure strict task dependencies are respected. Parallel swarms are faster but require careful management of task interdependencies.
"},{"location":"swarms/concept/how_to_choose_swarms/#roundrobin-swarm","title":"RoundRobin Swarm","text":""},{"location":"swarms/concept/how_to_choose_swarms/#use-case_1","title":"Use-Case","text":"For balanced task distribution where agents need to handle tasks evenly. An example would be assigning customer support tickets to agents in a cyclic manner, ensuring no single agent is overloaded.
"},{"location":"swarms/concept/how_to_choose_swarms/#advantages_1","title":"Advantages","text":"Warning
Round-robin may not be the best choice when some agents are more competent than others, as it can assign tasks equally regardless of agent performance.
"},{"location":"swarms/concept/how_to_choose_swarms/#mixture-of-agents","title":"Mixture of Agents","text":""},{"location":"swarms/concept/how_to_choose_swarms/#use-case_2","title":"Use-Case","text":"Ideal for complex problems that require diverse skills. For example, a financial forecasting problem where some agents specialize in stock data, while others handle economic factors.
"},{"location":"swarms/concept/how_to_choose_swarms/#notes_1","title":"Notes","text":"Note
A mixture of agents is highly flexible and can adapt to various problem domains. However, be mindful of coordination overhead.
"},{"location":"swarms/concept/how_to_choose_swarms/#graphworkflow-swarm","title":"GraphWorkflow Swarm","text":""},{"location":"swarms/concept/how_to_choose_swarms/#use-case_3","title":"Use-Case","text":"This swarm structure is suited for tasks that can be broken down into a series of dependencies but are not strictly linear, such as an AI-driven software development pipeline where one agent handles front-end development while another handles back-end concurrently.
"},{"location":"swarms/concept/how_to_choose_swarms/#advantages_2","title":"Advantages","text":"Warning
GraphWorkflow requires clear definition of task dependencies, or it can lead to execution issues and delays.
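One way to make those dependencies explicit is a topological sort over the task graph; Python's stdlib graphlib can sketch the idea (the task names are illustrative, and the real GraphWorkflow API may differ):

```python
from graphlib import TopologicalSorter

# Each task maps to the set of tasks it depends on,
# mirroring the software-pipeline example above.
dependencies = {
    "integration_tests": {"frontend", "backend"},
    "frontend": {"design"},
    "backend": {"design"},
    "design": set(),
}
order = list(TopologicalSorter(dependencies).static_order())
print(order)  # design first, integration_tests last
```

Declaring the graph up front is what lets independent tasks (here, frontend and backend) run concurrently while dependent ones wait.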
"},{"location":"swarms/concept/how_to_choose_swarms/#groupchat-swarm","title":"GroupChat Swarm","text":""},{"location":"swarms/concept/how_to_choose_swarms/#use-case_4","title":"Use-Case","text":"For real-time collaborative decision-making. For instance, agents could participate in group chat for negotiating contracts, each contributing their expertise and adjusting responses based on the collective discussion.
"},{"location":"swarms/concept/how_to_choose_swarms/#advantages_3","title":"Advantages","text":"Warning
High communication overhead between agents may slow down decision-making in large swarms.
"},{"location":"swarms/concept/how_to_choose_swarms/#agentregistry-swarm","title":"AgentRegistry Swarm","text":""},{"location":"swarms/concept/how_to_choose_swarms/#use-case_5","title":"Use-Case","text":"For dynamically managing agents based on the problem domain. An AgentRegistry is useful when new agents can be added or removed as needed, such as adding new machine learning models for an evolving recommendation engine.
"},{"location":"swarms/concept/how_to_choose_swarms/#notes_2","title":"Notes","text":"Note
AgentRegistry is a flexible solution but introduces additional complexity when agents need to be discovered and registered on the fly.
"},{"location":"swarms/concept/how_to_choose_swarms/#spreadsheetswarm","title":"SpreadsheetSwarm","text":""},{"location":"swarms/concept/how_to_choose_swarms/#use-case_6","title":"Use-Case","text":"When dealing with massive-scale data or agent outputs that need to be stored and managed in a tabular format. SpreadsheetSwarm is ideal for businesses handling thousands of agent outputs, such as large-scale marketing analytics or financial audits.
"},{"location":"swarms/concept/how_to_choose_swarms/#advantages_4","title":"Advantages","text":"Warning
Ensure the correct configuration of agents in SpreadsheetSwarm to avoid data mismatches and inconsistencies when scaling up to thousands of agents.
"},{"location":"swarms/concept/how_to_choose_swarms/#final-thoughts","title":"Final Thoughts","text":"The choice of swarm depends on:
Nature of the task: Whether it's sequential or parallel.
Problem complexity: Simple problems might benefit from RoundRobin, while complex ones may need GraphWorkflow or Mixture of Agents.
Scale of execution: For large-scale tasks, Swarms like SpreadsheetSwarm or MajorityVoting provide scalability with structured outputs.
When integrating agents in a business workflow, it's crucial to balance task complexity, agent capabilities, and scalability to ensure the optimal swarm architecture.
"},{"location":"swarms/concept/philosophy/","title":"Our Philosophy: Simplifying Multi-Agent Collaboration Through Readable Code and Performance Optimization","text":"Our mission is to streamline multi-agent collaboration by emphasizing simplicity, readability, and performance in our codebase. This document outlines our core tactics:
By adhering to these principles, we aim to make our systems more maintainable, scalable, and efficient, facilitating easier integration and collaboration among developers and agents alike.
"},{"location":"swarms/concept/philosophy/#1-emphasizing-readable-code","title":"1. Emphasizing Readable Code","text":"Readable code is the cornerstone of maintainable and scalable systems. It ensures that developers can easily understand, modify, and extend the codebase.
"},{"location":"swarms/concept/philosophy/#11-use-of-type-annotations","title":"1.1 Use of Type Annotations","text":"Type annotations enhance code readability and catch errors early in the development process.
def process_data(data: List[str]) -> Dict[str, int]:\n result = {}\n for item in data:\n result[item] = len(item)\n return result\n
"},{"location":"swarms/concept/philosophy/#12-code-style-guidelines","title":"1.2 Code Style Guidelines","text":"Adhering to consistent code style guidelines, such as PEP 8 for Python, ensures uniformity across the codebase.
snake_case
for variables and functions.PascalCase
for class names.Comprehensive documentation helps new developers understand the purpose and functionality of code modules.
def fetch_user_profile(user_id: str) -> UserProfile:\n \"\"\"\n Fetches the user profile from the database.\n\n Args:\n user_id (str): The unique identifier of the user.\n\n Returns:\n UserProfile: An object containing user profile data.\n \"\"\"\n # Function implementation\n
"},{"location":"swarms/concept/philosophy/#14-consistent-naming-conventions","title":"1.4 Consistent Naming Conventions","text":"Consistent naming reduces confusion and makes the code self-explanatory.
Use descriptive verb phrases for function names (e.g., calculate_total
), descriptive nouns for variables (e.g., total_amount
), and uppercase with underscores for constants (e.g., MAX_RETRIES
).
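The naming conventions above, shown together in one small snippet:

```python
# Constants: uppercase with underscores.
MAX_RETRIES = 3

# Classes: PascalCase.
class RetryPolicy:
    def __init__(self, max_retries: int = MAX_RETRIES) -> None:
        self.max_retries = max_retries

# Functions and variables: snake_case with descriptive names.
def calculate_total(amounts: list) -> float:
    total_amount = sum(amounts)
    return total_amount

print(calculate_total([1.5, 2.5]))  # 4.0
```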
"},{"location":"swarms/concept/philosophy/#21-why-logging-is-important","title":"2.1 Why Logging is Important","text":"import logging\n\nlogging.basicConfig(level=logging.INFO, format='%(asctime)s %(levelname)s:%(message)s')\n\ndef connect_to_service(url: str) -> bool:\n logging.debug(f\"Attempting to connect to {url}\")\n try:\n # Connection logic\n logging.info(f\"Successfully connected to {url}\")\n return True\n except ConnectionError as e:\n logging.error(f\"Connection failed to {url}: {e}\")\n return False\n
"},{"location":"swarms/concept/philosophy/#3-achieving-bleeding-edge-performance","title":"3. Achieving Bleeding-Edge Performance","text":"Performance is critical, especially when dealing with multiple agents and large datasets.
"},{"location":"swarms/concept/philosophy/#31-concurrency-and-parallelism","title":"3.1 Concurrency and Parallelism","text":"Utilizing concurrency and parallelism can significantly improve performance.
Asynchronous programming allows for non-blocking operations, leading to better resource utilization.
import asyncio\nimport aiohttp # third-party HTTP client: pip install aiohttp\n\nasync def fetch_data(endpoint: str) -> dict:\n async with aiohttp.ClientSession() as session:\n async with session.get(endpoint) as response:\n return await response.json()\n\nasync def main():\n endpoints = ['https://api.example.com/data1', 'https://api.example.com/data2']\n tasks = [fetch_data(url) for url in endpoints]\n results = await asyncio.gather(*tasks)\n print(results)\n\nasyncio.run(main())\n
"},{"location":"swarms/concept/philosophy/#33-utilizing-modern-hardware-capabilities","title":"3.3 Utilizing Modern Hardware Capabilities","text":"Leverage multi-core processors and GPUs for computationally intensive tasks.
from concurrent.futures import ThreadPoolExecutor\n\ndef process_item(item):\n # Processing logic (illustrative): square the item\n return item * item\n\nitems = [1, 2, 3, 4, 5]\nwith ThreadPoolExecutor(max_workers=5) as executor:\n results = list(executor.map(process_item, items))\n
"},{"location":"swarms/concept/philosophy/#4-simplifying-multi-agent-collaboration","title":"4. Simplifying Multi-Agent Collaboration","text":"Simplifying the abstraction of multi-agent collaboration makes it accessible and manageable.
"},{"location":"swarms/concept/philosophy/#41-importance-of-simple-abstractions","title":"4.1 Importance of Simple Abstractions","text":"Every agent should adhere to a standard interface for consistency.
"},{"location":"swarms/concept/philosophy/#421-agent-base-class","title":"4.2.1 Agent Base Class","text":"from abc import ABC, abstractmethod\n\nclass BaseAgent(ABC):\n @abstractmethod\n def run(self, task: str) -> Any:\n pass\n\n def __call__(self, task: str) -> Any:\n return self.run(task)\n\n @abstractmethod\n async def arun(self, task: str) -> Any:\n pass\n
"},{"location":"swarms/concept/philosophy/#422-example-agent-implementation","title":"4.2.2 Example Agent Implementation","text":"class DataProcessingAgent(BaseAgent):\n def run(self, task: str) -> str:\n # Synchronous processing logic\n return f\"Processed {task}\"\n\n async def arun(self, task: str) -> str:\n # Asynchronous processing logic\n return f\"Processed {task} asynchronously\"\n
"},{"location":"swarms/concept/philosophy/#423-usage-example","title":"4.2.3 Usage Example","text":"agent = DataProcessingAgent()\n\n# Synchronous call\nresult = agent.run(\"data_task\")\nprint(result) # Output: Processed data_task\n\n# Asynchronous call\nasync def main():\n result = await agent.arun(\"data_task\")\n print(result) # Output: Processed data_task asynchronously\n\nasyncio.run(main())\n
"},{"location":"swarms/concept/philosophy/#43-mermaid-diagram-agent-interaction","title":"4.3 Mermaid Diagram: Agent Interaction","text":"sequenceDiagram\n participant User\n participant AgentA\n participant AgentB\n participant AgentC\n\n User->>AgentA: run(task)\n AgentA-->>AgentB: arun(sub_task)\n AgentB-->>AgentC: run(sub_sub_task)\n AgentC-->>AgentB: result_sub_sub_task\n AgentB-->>AgentA: result_sub_task\n AgentA-->>User: final_result
Agents collaborating to fulfill a user's task.
"},{"location":"swarms/concept/philosophy/#44-simplified-collaboration-workflow","title":"4.4 Simplified Collaboration Workflow","text":"flowchart TD\n UserRequest[\"User Request\"] --> Agent1[\"Agent 1\"]\n Agent1 -->|\"run(task)\"| Agent2[\"Agent 2\"]\n Agent2 -->|\"arun(task)\"| Agent3[\"Agent 3\"]\n Agent3 -->|\"result\"| Agent2\n Agent2 -->|\"result\"| Agent1\n Agent1 -->|\"result\"| UserResponse[\"User Response\"]
Workflow demonstrating how agents process a task collaboratively.
"},{"location":"swarms/concept/philosophy/#5-bringing-it-all-together","title":"5. Bringing It All Together","text":"By integrating these principles, we create a cohesive system where agents can efficiently collaborate while maintaining code quality and performance.
"},{"location":"swarms/concept/philosophy/#51-example-multi-agent-system","title":"5.1 Example: Multi-Agent System","text":""},{"location":"swarms/concept/philosophy/#511-agent-definitions","title":"5.1.1 Agent Definitions","text":"class AgentA(BaseAgent):\n def run(self, task: str) -> str:\n # Agent A processing\n return f\"AgentA processed {task}\"\n\n async def arun(self, task: str) -> str:\n # Agent A asynchronous processing\n return f\"AgentA processed {task} asynchronously\"\n\nclass AgentB(BaseAgent):\n def run(self, task: str) -> str:\n # Agent B processing\n return f\"AgentB processed {task}\"\n\n async def arun(self, task: str) -> str:\n # Agent B asynchronous processing\n return f\"AgentB processed {task} asynchronously\"\n
"},{"location":"swarms/concept/philosophy/#512-orchestrator-agent","title":"5.1.2 Orchestrator Agent","text":"class OrchestratorAgent(BaseAgent):\n def __init__(self):\n self.agent_a = AgentA()\n self.agent_b = AgentB()\n\n def run(self, task: str) -> str:\n result_a = self.agent_a.run(task)\n result_b = self.agent_b.run(task)\n return f\"Orchestrated results: {result_a} & {result_b}\"\n\n async def arun(self, task: str) -> str:\n result_a = await self.agent_a.arun(task)\n result_b = await self.agent_b.arun(task)\n return f\"Orchestrated results: {result_a} & {result_b}\"\n
"},{"location":"swarms/concept/philosophy/#513-execution","title":"5.1.3 Execution","text":"orchestrator = OrchestratorAgent()\n\n# Synchronous execution\nresult = orchestrator.run(\"task1\")\nprint(result)\n# Output: Orchestrated results: AgentA processed task1 & AgentB processed task1\n\n# Asynchronous execution\nasync def main():\n result = await orchestrator.arun(\"task1\")\n print(result)\n # Output: Orchestrated results: AgentA processed task1 asynchronously & AgentB processed task1 asynchronously\n\nasyncio.run(main())\n
"},{"location":"swarms/concept/philosophy/#52-mermaid-diagram-orchestrator-workflow","title":"5.2 Mermaid Diagram: Orchestrator Workflow","text":"sequenceDiagram\n participant User\n participant Orchestrator\n participant AgentA\n participant AgentB\n\n User->>Orchestrator: run(task)\n Orchestrator->>AgentA: run(task)\n Orchestrator->>AgentB: run(task)\n AgentA-->>Orchestrator: result_a\n AgentB-->>Orchestrator: result_b\n Orchestrator-->>User: Orchestrated results
Orchestrator coordinating between Agent A and Agent B.
"},{"location":"swarms/concept/philosophy/#6-conclusion","title":"6. Conclusion","text":"Our philosophy centers on making multi-agent collaboration as simple and efficient as possible, for example by standardizing agent interfaces around the run, __call__, and arun methods. By adhering to these principles, we create a robust foundation for scalable and maintainable systems that can adapt to evolving technological landscapes.
"},{"location":"swarms/concept/swarm_architectures/","title":"Multi-Agent Architectures","text":""},{"location":"swarms/concept/swarm_architectures/#what-is-a-multi-agent-architecture","title":"What is a Multi-Agent Architecture?","text":"A multi-agent architecture refers to a system of two or more agents working collaboratively to achieve a common goal. These agents can be software entities, such as LLM-powered agents, that interact with each other to perform complex tasks. The concept of multi-agent architectures is inspired by how humans communicate and work together in teams, organizations, and communities, where individual contributions combine to create sophisticated collaborative problem-solving capabilities.
"},{"location":"swarms/concept/swarm_architectures/#how-multi-agent-architectures-facilitate-communication","title":"How Multi-Agent Architectures Facilitate Communication","text":"Multi-agent architectures are designed to establish and manage communication between agents within a system. These architectures define how agents interact, share information, and coordinate their actions to achieve the desired outcomes. Here are some key aspects of multi-agent architectures:
Hierarchical Communication: In hierarchical architectures, communication flows from higher-level agents to lower-level agents. Higher-level agents act as coordinators, distributing tasks and aggregating results. This structure is efficient for tasks that require top-down control and decision-making.
Concurrent Communication: In concurrent architectures, agents operate independently and simultaneously on different tasks. This architecture is suitable for tasks that can be processed concurrently without dependencies, allowing for faster execution and scalability.
Sequential Communication: Sequential architectures process tasks in a linear order, where each agent's output becomes the input for the next agent. This ensures that tasks with dependencies are handled in the correct sequence, maintaining the integrity of the workflow.
Mesh Communication: In mesh architectures, agents are fully connected, allowing any agent to communicate with any other agent. This setup provides high flexibility and redundancy, making it ideal for complex systems requiring dynamic interactions.
Federated Communication: Federated architectures involve multiple independent systems that collaborate by sharing information and results. Each system operates autonomously but can contribute to a larger task, enabling distributed problem-solving across different nodes.
Multi-agent architectures leverage these communication patterns to ensure that agents work together efficiently, adapting to the specific requirements of the task at hand. By defining clear communication protocols and interaction models, multi-agent architectures enable the seamless orchestration of multiple agents, leading to enhanced performance and problem-solving capabilities.
"},{"location":"swarms/concept/swarm_architectures/#core-multi-agent-architectures","title":"Core Multi-Agent Architectures","text":"Name Description Documentation Use Cases Hierarchical Architecture A system where agents are organized in a hierarchy, with higher-level agents coordinating lower-level agents to achieve complex tasks. Learn More Manufacturing process optimization, multi-level sales management, healthcare resource coordination Agent Rearrange A setup where agents rearrange themselves dynamically based on the task requirements and environmental conditions. Learn More Adaptive manufacturing lines, dynamic sales territory realignment, flexible healthcare staffing Concurrent Workflows Agents perform different tasks simultaneously, coordinating to complete a larger goal. Learn More Concurrent production lines, parallel sales operations, simultaneous patient care processes Sequential Coordination Agents perform tasks in a specific sequence, where the completion of one task triggers the start of the next. Learn More Step-by-step assembly lines, sequential sales processes, stepwise patient treatment workflows Mixture of Agents A heterogeneous architecture where agents with different capabilities are combined to solve complex problems. Learn More Financial forecasting, complex problem-solving requiring diverse skills Graph Workflow Agents collaborate in a directed acyclic graph (DAG) format to manage dependencies and parallel tasks. Learn More AI-driven software development pipelines, complex project management Group Chat Agents engage in a chat-like interaction to reach decisions collaboratively. Learn More Real-time collaborative decision-making, contract negotiations Interactive Group Chat Enhanced group chat with dynamic speaker selection and interaction patterns. Learn More Advanced collaborative decision-making, dynamic team coordination Agent Registry A centralized registry where agents are stored, retrieved, and invoked dynamically. 
Learn More Dynamic agent management, evolving recommendation engines SpreadSheet Manages tasks at scale, tracking agent outputs in a structured format like CSV files. Learn More Large-scale marketing analytics, financial audits Router Routes and chooses the architecture based on the task requirements and available agents. Learn More Dynamic task routing, adaptive architecture selection, optimized agent allocation Heavy High-performance architecture for handling intensive computational tasks with multiple agents. Learn More Large-scale data processing, intensive computational workflows Deep Research Specialized architecture for conducting in-depth research tasks across multiple domains. Learn More Academic research, market analysis, comprehensive data investigation De-Hallucination Architecture designed to reduce and eliminate hallucinations in AI outputs through consensus. Learn More Fact-checking, content verification, reliable information generation Council as Judge Multiple agents act as a council to evaluate and judge outputs or decisions. Learn More Quality assessment, decision validation, peer review processes MALT Specialized architecture for complex language processing tasks across multiple agents. Learn More Natural language processing, translation, content generation Majority Voting Agents vote on decisions with the majority determining the final outcome. Learn More Democratic decision-making, consensus building, error reduction Round Robin Tasks are distributed cyclically among agents in a rotating order. Learn More Load balancing, fair task distribution, resource optimization Auto-Builder Automatically constructs and configures multi-agent systems based on requirements. Learn More Dynamic system creation, adaptive architectures, rapid prototyping Hybrid Hierarchical Cluster Combines hierarchical and peer-to-peer communication patterns for complex workflows. 
Learn More Complex enterprise workflows, multi-department coordination Election Agents participate in democratic voting processes to select leaders or make collective decisions. Learn More Democratic governance, consensus building, leadership selection Dynamic Conversational Adaptive conversation management with dynamic agent selection and interaction patterns. Learn More Adaptive chatbots, dynamic customer service, contextual conversations Tree Hierarchical tree structure for organizing agents in parent-child relationships. Learn More Organizational hierarchies, decision trees, taxonomic classification"},{"location":"swarms/concept/swarm_architectures/#architectural-patterns","title":"Architectural Patterns","text":""},{"location":"swarms/concept/swarm_architectures/#hierarchical-architecture","title":"Hierarchical Architecture","text":"Overview: Organizes agents in a tree-like structure. Higher-level agents delegate tasks to lower-level agents, which can further divide tasks among themselves. This structure allows for efficient task distribution and scalability.
Use Cases:
Complex decision-making processes where tasks can be broken down into subtasks
Multi-stage workflows such as data processing pipelines or hierarchical reinforcement learning
Learn More
graph TD\n A[Root Agent] --> B1[Sub-Agent 1]\n A --> B2[Sub-Agent 2]\n B1 --> C1[Sub-Agent 1.1]\n B1 --> C2[Sub-Agent 1.2]\n B2 --> C3[Sub-Agent 2.1]\n B2 --> C4[Sub-Agent 2.2]
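The delegation pattern in the diagram above can be sketched in a few lines of plain Python. This is a toy illustration, not the Swarms API: the "agents" are ordinary callables, and the naive task split is a stand-in for real task decomposition.

```python
# Toy hierarchical delegation: a root coordinator splits a task,
# delegates the pieces to sub-agents, and aggregates their results.
def make_worker(name):
    def worker(subtask):
        return f"{name}:{subtask}"
    return worker

def root_agent(task, workers):
    # Split the task into one subtask per worker (naive split).
    subtasks = [f"{task}/part{i}" for i in range(len(workers))]
    results = [w(s) for w, s in zip(workers, subtasks)]
    # Aggregate the sub-results back at the root.
    return " | ".join(results)

workers = [make_worker("sub1"), make_worker("sub2")]
print(root_agent("report", workers))
```

In a real hierarchy the sub-agents could themselves be coordinators, recursing one level down, exactly as Sub-Agent 1.1 and 1.2 do in the diagram.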
"},{"location":"swarms/concept/swarm_architectures/#agent-rearrange","title":"Agent Rearrange","text":"Overview: A dynamic architecture where agents rearrange themselves based on task requirements and environmental conditions. Agents can adapt their roles, positions, and relationships to optimize performance for different scenarios.
Use Cases: - Adaptive manufacturing lines that reconfigure based on product requirements
Dynamic sales territory realignment based on market conditions
Flexible healthcare staffing that adjusts to patient needs
Learn More
graph TD\n A[Task Requirements] --> B[Configuration Analyzer]\n B --> C[Optimization Engine]\n\n C --> D[Agent Pool]\n D --> E[Agent 1]\n D --> F[Agent 2]\n D --> G[Agent 3]\n D --> H[Agent N]\n\n C --> I[Rearrangement Logic]\n I --> J[New Configuration]\n J --> K[Role Assignment]\n K --> L[Execution Phase]\n\n L --> M[Performance Monitor]\n M --> N{Optimization Needed?}\n N -->|Yes| C\n N -->|No| O[Continue Execution]
"},{"location":"swarms/concept/swarm_architectures/#concurrent-architecture","title":"Concurrent Architecture","text":"Overview: Multiple agents operate independently and simultaneously on different tasks. Each agent works on its own task without dependencies on the others.
Use Cases: - Tasks that can be processed independently, such as parallel data analysis
Learn More
graph LR\n A[Task Input] --> B1[Agent 1]\n A --> B2[Agent 2]\n A --> B3[Agent 3]\n A --> B4[Agent 4]\n B1 --> C1[Output 1]\n B2 --> C2[Output 2]\n B3 --> C3[Output 3]\n B4 --> C4[Output 4]
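A minimal sketch of this fan-out pattern using `asyncio.gather` (toy async agents, not the Swarms API): because the agents have no dependencies on each other, they can all run in the same event loop simultaneously.

```python
import asyncio

# Toy concurrent fan-out: independent agents run simultaneously on
# the same input; no agent depends on another's output.
async def agent(name, task):
    await asyncio.sleep(0)          # stand-in for real async work
    return f"{name} processed {task}"

async def run_concurrently(task, names):
    # gather preserves input order regardless of completion order
    return await asyncio.gather(*(agent(n, task) for n in names))

outputs = asyncio.run(run_concurrently("dataset", ["A", "B", "C"]))
print(outputs)
```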
"},{"location":"swarms/concept/swarm_architectures/#sequential-architecture","title":"Sequential Architecture","text":"Overview: Processes tasks in a linear sequence. Each agent completes its task before passing the result to the next agent in the chain. This ensures orderly processing and is useful when tasks have dependencies.
Use Cases:
Workflows where each step depends on the previous one, such as assembly lines or sequential data processing
Scenarios requiring strict order of operations
Learn More
graph TD\n A[Input] --> B[Agent 1]\n B --> C[Agent 2]\n C --> D[Agent 3]\n D --> E[Agent 4]\n E --> F[Final Output]
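The chain above reduces to simple function composition. A toy sketch (plain functions standing in for agents, not the Swarms API):

```python
# Toy sequential chain: each stage's output is the next stage's input.
def clean(text):
    return text.strip().lower()

def tokenize(text):
    return text.split()

def count(tokens):
    return len(tokens)

def run_pipeline(task, stages):
    result = task
    for stage in stages:
        result = stage(result)   # output of one stage feeds the next
    return result

print(run_pipeline("  Hello Swarms World  ", [clean, tokenize, count]))  # 3
```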
"},{"location":"swarms/concept/swarm_architectures/#round-robin-architecture","title":"Round Robin Architecture","text":"Overview: Tasks are distributed cyclically among a set of agents. Each agent takes turns handling tasks in a rotating order, ensuring even distribution of workload.
Use Cases:
Load balancing in distributed systems
Scenarios requiring fair distribution of tasks to avoid overloading any single agent
Learn More
graph TD\n A[Task Distributor] --> B1[Agent 1]\n A --> B2[Agent 2]\n A --> B3[Agent 3]\n A --> B4[Agent 4]\n B1 --> C[Task Queue]\n B2 --> C\n B3 --> C\n B4 --> C\n C --> A
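The rotation can be expressed with `itertools.cycle` from the standard library. A minimal sketch (toy distributor, not the Swarms API):

```python
from itertools import cycle

# Toy round-robin distributor: tasks are assigned to agents in
# rotating order so the load stays evenly spread.
def distribute_round_robin(tasks, agents):
    assignments = {}
    rotation = cycle(agents)     # endless agent1, agent2, ..., agent1, ...
    for task in tasks:
        assignments[task] = next(rotation)
    return assignments

tasks = [f"task{i}" for i in range(5)]
print(distribute_round_robin(tasks, ["agent1", "agent2", "agent3"]))
```

With five tasks and three agents, agent1 receives two tasks and the others handle the rest, which is as even as the split can get.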
"},{"location":"swarms/concept/swarm_architectures/#spreadsheet-architecture","title":"SpreadSheet Architecture","text":"Overview: Makes it easy to manage thousands of agents from one place: a CSV file. Initialize any number of agents, run them in loops over tasks, and track every output in a single spreadsheet.
Use Cases: - Multi-threaded execution: Execute agents on multiple threads
Save agent outputs into CSV file
One place to analyze agent outputs
Learn More
graph TD\n A[Initialize SpreadSheet System] --> B[Initialize Agents]\n B --> C[Load Task Queue]\n C --> D[Distribute Tasks]\n\n subgraph Agent_Pool[Agent Pool]\n D --> E1[Agent 1]\n D --> E2[Agent 2]\n D --> E3[Agent 3]\n D --> E4[Agent N]\n end\n\n E1 --> F1[Process Task]\n E2 --> F2[Process Task]\n E3 --> F3[Process Task]\n E4 --> F4[Process Task]\n\n F1 --> G[Collect Results]\n F2 --> G\n F3 --> G\n F4 --> G\n\n G --> H[Save to CSV]\n H --> I[Generate Analytics]
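The collect-and-save step can be sketched with the standard-library `csv` module. This is a toy illustration of the idea (a dict of callables standing in for an agent pool, not the Swarms API):

```python
import csv
import io

# Toy spreadsheet-style run: execute a pool of agents over a task list
# and record every output as a CSV row for later analysis.
def run_to_csv(agents, tasks):
    buffer = io.StringIO()
    writer = csv.writer(buffer)
    writer.writerow(["agent", "task", "output"])
    for name, agent in agents.items():
        for task in tasks:
            writer.writerow([name, task, agent(task)])
    return buffer.getvalue()

agents = {"upper": str.upper, "length": lambda t: len(t)}
csv_text = run_to_csv(agents, ["alpha", "beta"])
print(csv_text)
```

In practice the buffer would be written to disk so the outputs of thousands of agents end up in one analyzable file.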
"},{"location":"swarms/concept/swarm_architectures/#mixture-of-agents","title":"Mixture of Agents","text":"Overview: Combines multiple agents with different capabilities and expertise to solve complex problems that require diverse skill sets.
Use Cases: - Financial forecasting requiring different analytical approaches
Complex problem-solving needing diverse expertise
Multi-domain analysis tasks
Learn More
graph TD\n A[Task Input] --> B[Layer 1: Reference Agents]\n B --> C[Specialist Agent 1]\n B --> D[Specialist Agent 2]\n B --> E[Specialist Agent N]\n\n C --> F[Response 1]\n D --> G[Response 2]\n E --> H[Response N]\n\n F --> I[Layer 2: Aggregator Agent]\n G --> I\n H --> I\n I --> J[Synthesized Output]
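A toy sketch of the two layers in the diagram (plain callables standing in for specialist and aggregator agents, not the Swarms API): layer 1 fans the task out to specialists, layer 2 synthesizes their responses.

```python
# Toy two-layer mixture: specialist agents respond independently
# (layer 1), then an aggregator synthesizes their answers (layer 2).
def specialist(style):
    return lambda task: f"[{style}] view of {task}"

def aggregator(responses):
    return "Synthesis: " + "; ".join(responses)

def mixture_of_agents(task, specialists):
    layer1 = [s(task) for s in specialists]   # reference responses
    return aggregator(layer1)                 # aggregation layer

result = mixture_of_agents("Q3 forecast",
                           [specialist("macro"), specialist("technical")])
print(result)
```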
"},{"location":"swarms/concept/swarm_architectures/#graph-workflow","title":"Graph Workflow","text":"Overview: Organizes agents in a directed acyclic graph (DAG) format, enabling complex dependencies and parallel execution paths.
Use Cases: - AI-driven software development pipelines
Complex project management with dependencies
Multi-step data processing workflows
Learn More
graph TD\n A[Start Node] --> B[Agent 1]\n A --> C[Agent 2]\n B --> D[Agent 3]\n C --> D\n B --> E[Agent 4]\n D --> F[Agent 5]\n E --> F\n F --> G[End Node]
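Executing such a DAG amounts to running nodes in topological order so every agent sees its upstream results first. A minimal sketch using the standard-library `graphlib` (Python 3.9+); the node names mirror the diagram and the string-building "agents" are toy stand-ins, not the Swarms API:

```python
from graphlib import TopologicalSorter

# Toy DAG workflow: each key is a node, each value the set of nodes it
# depends on; TopologicalSorter yields a valid execution order.
dag = {
    "agent1": {"start"},
    "agent2": {"start"},
    "agent3": {"agent1", "agent2"},   # needs both upstream results
    "end": {"agent3"},
}

order = list(TopologicalSorter(dag).static_order())

results = {}
for node in order:
    # Each "agent" folds its sorted upstream outputs into its own result.
    upstream = sorted(results[d] for d in dag.get(node, ()))
    results[node] = f"{node}({','.join(upstream)})"
print(results["end"])
```

A real implementation could additionally run same-level nodes (agent1 and agent2 here) concurrently, since the sorter exposes which nodes are ready at each step.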
"},{"location":"swarms/concept/swarm_architectures/#group-chat","title":"Group Chat","text":"Overview: Enables agents to engage in chat-like interactions to reach decisions collaboratively through discussion and consensus building.
Use Cases: - Real-time collaborative decision-making
Contract negotiations
Brainstorming sessions
Learn More
graph TD\n A[Discussion Topic] --> B[Chat Environment]\n B --> C[Agent 1]\n B --> D[Agent 2]\n B --> E[Agent 3]\n B --> F[Agent N]\n\n C --> G[Message Exchange]\n D --> G\n E --> G\n F --> G\n\n G --> H[Consensus Building]\n H --> I[Final Decision]
"},{"location":"swarms/concept/swarm_architectures/#interactive-group-chat","title":"Interactive Group Chat","text":"Overview: Enhanced version of Group Chat with dynamic speaker selection, priority-based communication, and advanced interaction patterns.
Use Cases: - Advanced collaborative decision-making
Dynamic team coordination
Adaptive conversation management
Learn More
graph TD\n A[Conversation Manager] --> B[Speaker Selection Logic]\n B --> C[Priority Speaker]\n B --> D[Random Speaker]\n B --> E[Round Robin Speaker]\n\n C --> F[Active Discussion]\n D --> F\n E --> F\n\n F --> G[Agent Pool]\n G --> H[Agent 1]\n G --> I[Agent 2]\n G --> J[Agent N]\n\n H --> K[Dynamic Response]\n I --> K\n J --> K\n K --> A
"},{"location":"swarms/concept/swarm_architectures/#agent-registry","title":"Agent Registry","text":"Overview: A centralized registry system where agents are stored, retrieved, and invoked dynamically. The registry maintains metadata about agent capabilities, availability, and performance metrics, enabling intelligent agent selection and management.
Use Cases: - Dynamic agent management in large-scale systems
Evolving recommendation engines that adapt agent selection
Service discovery in distributed agent systems
Learn More
graph TD\n A[Agent Registration] --> B[Registry Database]\n B --> C[Agent Metadata]\n C --> D[Capabilities]\n C --> E[Performance Metrics]\n C --> F[Availability Status]\n\n G[Task Request] --> H[Registry Query Engine]\n H --> I[Agent Discovery]\n I --> J[Capability Matching]\n J --> K[Agent Selection]\n\n K --> L[Agent Invocation]\n L --> M[Task Execution]\n M --> N[Performance Tracking]\n N --> O[Registry Update]\n O --> B
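The register/query/invoke cycle can be sketched with a plain dictionary. This is a toy in-memory illustration (the registry structure and capability tags are assumptions for the example, not the Swarms API):

```python
# Toy in-memory registry: agents register with capability tags and are
# discovered by matching a task's required capability.
registry = {}

def register(name, capabilities, handler):
    registry[name] = {"capabilities": set(capabilities), "handler": handler}

def dispatch(task, required_capability):
    for name, entry in registry.items():
        if required_capability in entry["capabilities"]:
            return entry["handler"](task)      # first capable agent wins
    raise LookupError(f"no agent provides {required_capability!r}")

register("summarizer", ["summarize"], lambda t: f"summary of {t}")
register("translator", ["translate"], lambda t: f"translation of {t}")

print(dispatch("annual report", "summarize"))
```

A production registry would also track the availability and performance metrics mentioned above and use them to break ties between capable agents.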
"},{"location":"swarms/concept/swarm_architectures/#router-architecture","title":"Router Architecture","text":"Overview: Intelligently routes tasks to the most appropriate agents or architectures based on task requirements and agent capabilities.
Use Cases: - Dynamic task routing
Adaptive architecture selection
Optimized agent allocation
Learn More
graph TD\n A[Incoming Task] --> B[Router Analysis]\n B --> C[Task Classification]\n C --> D[Agent Capability Matching]\n\n D --> E[Route to Sequential]\n D --> F[Route to Concurrent]\n D --> G[Route to Hierarchical]\n D --> H[Route to Specialist Agent]\n\n E --> I[Execute Architecture]\n F --> I\n G --> I\n H --> I\n\n I --> J[Collect Results]\n J --> K[Return Output]
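At its simplest, the classification step is a rule table mapping task features to architectures. A toy keyword-based sketch (a real router would classify with an LLM; the rules here are illustrative assumptions, not the Swarms API):

```python
# Toy router: classify an incoming task by keyword and route it to the
# matching execution strategy.
def route(task):
    rules = [
        ("step by step", "sequential"),
        ("in parallel", "concurrent"),
        ("delegate", "hierarchical"),
    ]
    for keyword, architecture in rules:
        if keyword in task.lower():
            return architecture
    return "specialist"            # fallback: hand to a single agent

print(route("Process these files in parallel"))   # concurrent
print(route("Summarize this memo"))               # specialist
```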
"},{"location":"swarms/concept/swarm_architectures/#heavy-architecture","title":"Heavy Architecture","text":"Overview: High-performance architecture designed for handling intensive computational tasks with multiple agents working on resource-heavy operations.
Use Cases: - Large-scale data processing
Intensive computational workflows
High-throughput task execution
Learn More
graph TD\n A[Resource Manager] --> B[Load Balancer]\n B --> C[Heavy Agent Pool]\n\n C --> D[Compute Agent 1]\n C --> E[Compute Agent 2]\n C --> F[Compute Agent N]\n\n D --> G[Resource Monitor]\n E --> G\n F --> G\n\n G --> H[Performance Optimizer]\n H --> I[Result Aggregator]\n I --> J[Final Output]
"},{"location":"swarms/concept/swarm_architectures/#deep-research-architecture","title":"Deep Research Architecture","text":"Overview: Specialized architecture for conducting comprehensive research tasks across multiple domains with iterative refinement and cross-validation.
Use Cases: - Academic research projects
Market analysis and intelligence
Comprehensive data investigation
Learn More
graph TD\n A[Research Query] --> B[Research Planner]\n B --> C[Domain Analysis]\n C --> D[Research Agent 1]\n C --> E[Research Agent 2]\n C --> F[Research Agent N]\n\n D --> G[Initial Findings]\n E --> G\n F --> G\n\n G --> H[Cross-Validation]\n H --> I[Refinement Loop]\n I --> J[Synthesis Agent]\n J --> K[Comprehensive Report]
"},{"location":"swarms/concept/swarm_architectures/#de-hallucination-architecture","title":"De-Hallucination Architecture","text":"Overview: Architecture specifically designed to reduce and eliminate hallucinations in AI outputs through consensus mechanisms and fact-checking protocols.
Use Cases: - Fact-checking and verification
Content validation
Reliable information generation
graph TD\n A[Input Query] --> B[Primary Agent]\n B --> C[Initial Response]\n C --> D[Validation Layer]\n\n D --> E[Fact-Check Agent 1]\n D --> F[Fact-Check Agent 2]\n D --> G[Fact-Check Agent 3]\n\n E --> H[Consensus Engine]\n F --> H\n G --> H\n\n H --> I[Confidence Score]\n I --> J{Score > Threshold?}\n J -->|Yes| K[Validated Output]\n J -->|No| L[Request Refinement]\n L --> B
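The consensus-and-threshold gate in the diagram can be sketched as follows (toy boolean fact-checkers and a made-up 0.66 threshold for illustration; real checkers would be agents querying trusted sources, and this is not the Swarms API):

```python
from collections import Counter

# Toy consensus check: fact-checkers vote on a candidate answer; it is
# accepted only if agreement clears a confidence threshold.
def validate(candidate, checkers, threshold=0.66):
    votes = [checker(candidate) for checker in checkers]   # True/False votes
    confidence = Counter(votes)[True] / len(votes)
    if confidence >= threshold:
        return ("validated", confidence)
    return ("needs refinement", confidence)    # loop back to primary agent

checkers = [lambda c: "2 + 2 = 4" in c,
            lambda c: "4" in c,
            lambda c: len(c) > 0]
print(validate("2 + 2 = 4", checkers))
```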
"},{"location":"swarms/concept/swarm_architectures/#council-as-judge","title":"Council as Judge","text":"Overview: Multiple agents act as a council to evaluate, judge, and validate outputs or decisions through collaborative assessment.
Use Cases: - Quality assessment and validation
Decision validation processes
Peer review systems
Learn More
graph TD\n A[Submission] --> B[Council Formation]\n B --> C[Judge Agent 1]\n B --> D[Judge Agent 2]\n B --> E[Judge Agent 3]\n B --> F[Judge Agent N]\n\n C --> G[Individual Assessment]\n D --> G\n E --> G\n F --> G\n\n G --> H[Scoring System]\n H --> I[Weighted Voting]\n I --> J[Final Judgment]\n J --> K[Feedback & Recommendations]
"},{"location":"swarms/concept/swarm_architectures/#malt-architecture","title":"MALT Architecture","text":"Overview: Specialized architecture for complex language processing tasks that require coordination between multiple language-focused agents.
Use Cases: - Natural language processing pipelines
Translation and localization
Content generation and editing
Learn More
graph TD\n A[Language Task] --> B[Task Analyzer]\n B --> C[Language Router]\n\n C --> D[Grammar Agent]\n C --> E[Semantics Agent]\n C --> F[Style Agent]\n C --> G[Context Agent]\n\n D --> H[Language Processor]\n E --> H\n F --> H\n G --> H\n\n H --> I[Quality Controller]\n I --> J[Output Formatter]\n J --> K[Final Language Output]
"},{"location":"swarms/concept/swarm_architectures/#majority-voting","title":"Majority Voting","text":"Overview: Agents vote on decisions with the majority determining the final outcome, providing democratic decision-making and error reduction through consensus.
Use Cases: - Democratic decision-making processes
Consensus building
Error reduction through voting
Learn More
graph TD\n A[Decision Request] --> B[Voting Coordinator]\n B --> C[Voting Pool]\n\n C --> D[Voter Agent 1]\n C --> E[Voter Agent 2]\n C --> F[Voter Agent 3]\n C --> G[Voter Agent N]\n\n D --> H[Vote Collection]\n E --> H\n F --> H\n G --> H\n\n H --> I[Vote Counter]\n I --> J[Majority Calculator]\n J --> K[Final Decision]\n K --> L[Decision Rationale]
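The tally step reduces to a `Counter` over the ballots. A minimal sketch (toy voter lambdas standing in for agents, not the Swarms API); returning the tally alongside the winner preserves the decision rationale shown in the diagram:

```python
from collections import Counter

# Toy majority vote: each voter agent returns an answer; the most
# common answer wins, and the full tally is kept as the rationale.
def majority_vote(task, voters):
    ballots = [voter(task) for voter in voters]
    tally = Counter(ballots)
    winner, _count = tally.most_common(1)[0]
    return winner, dict(tally)

voters = [lambda t: "approve", lambda t: "approve", lambda t: "reject"]
decision, tally = majority_vote("merge the proposal", voters)
print(decision, tally)
```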
"},{"location":"swarms/concept/swarm_architectures/#auto-builder","title":"Auto-Builder","text":"Overview: Automatically constructs and configures multi-agent systems based on requirements, enabling dynamic system creation and adaptation.
Use Cases: - Dynamic system creation
Adaptive architectures
Rapid prototyping of multi-agent systems
Learn More
graph TD\n A[Requirements Input] --> B[System Analyzer]\n B --> C[Architecture Selector]\n C --> D[Agent Configuration]\n\n D --> E[Agent Builder 1]\n D --> F[Agent Builder 2]\n D --> G[Agent Builder N]\n\n E --> H[System Assembler]\n F --> H\n G --> H\n\n H --> I[Configuration Validator]\n I --> J[System Deployment]\n J --> K[Performance Monitor]\n K --> L[Adaptive Optimizer]
"},{"location":"swarms/concept/swarm_architectures/#hybrid-hierarchical-cluster","title":"Hybrid Hierarchical Cluster","text":"Overview: Combines hierarchical and peer-to-peer communication patterns for complex workflows that require both centralized coordination and distributed collaboration.
Use Cases: - Complex enterprise workflows
Multi-department coordination
Hybrid organizational structures
Learn More
graph TD\n A[Central Coordinator] --> B[Cluster 1 Leader]\n A --> C[Cluster 2 Leader]\n A --> D[Cluster 3 Leader]\n\n B --> E[Peer Agent 1.1]\n B --> F[Peer Agent 1.2]\n E <--> F\n\n C --> G[Peer Agent 2.1]\n C --> H[Peer Agent 2.2]\n G <--> H\n\n D --> I[Peer Agent 3.1]\n D --> J[Peer Agent 3.2]\n I <--> J\n\n E --> K[Inter-Cluster Communication]\n G --> K\n I --> K\n K --> A
"},{"location":"swarms/concept/swarm_architectures/#election-architecture","title":"Election Architecture","text":"Overview: Agents participate in democratic voting processes to select leaders or make collective decisions.
Use Cases: - Democratic governance
Consensus building
Leadership selection
Learn More
graph TD\n A[Voting Process] --> B[Candidate Agents]\n B --> C[Voting Mechanism]\n\n C --> D[Voter Agent 1]\n C --> E[Voter Agent 2]\n C --> F[Voter Agent N]\n\n D --> G[Vote Collection]\n E --> G\n F --> G\n\n G --> H[Vote Counting]\n H --> I[Majority Check]\n I --> J{Majority?}\n J -->|Yes| K[Leader Selected]\n J -->|No| L[Continue Voting]\n L --> B
"},{"location":"swarms/concept/swarm_architectures/#dynamic-conversational-architecture","title":"Dynamic Conversational Architecture","text":"Overview: Adaptive conversation management with dynamic agent selection and interaction patterns.
Use Cases: - Adaptive chatbots
Dynamic customer service
Contextual conversations
Learn More
graph TD\n A[Conversation Manager] --> B[Speaker Selection Logic]\n B --> C[Priority Speaker]\n B --> D[Random Speaker]\n B --> E[Round Robin Speaker]\n\n C --> F[Active Discussion]\n D --> F\n E --> F\n\n F --> G[Agent Pool]\n G --> H[Agent 1]\n G --> I[Agent 2]\n G --> J[Agent N]\n\n H --> K[Dynamic Response]\n I --> K\n J --> K\n K --> A
"},{"location":"swarms/concept/swarm_architectures/#tree-architecture","title":"Tree Architecture","text":"Overview: Hierarchical tree structure for organizing agents in parent-child relationships.
Use Cases: - Organizational hierarchies
Decision trees
Taxonomic classification
Learn More
graph TD\n A[Root] --> B[Child 1]\n A --> C[Child 2]\n B --> D[Grandchild 1]\n B --> E[Grandchild 2]\n C --> F[Grandchild 3]\n C --> G[Grandchild 4]
"},{"location":"swarms/concept/swarm_ecosystem/","title":"Understanding the Swarms Ecosystem","text":"The Swarms Ecosystem is a powerful suite of tools and frameworks designed to help developers build, deploy, and manage swarms of autonomous agents. This ecosystem covers various domains, from Large Language Models (LLMs) to IoT data integration, providing a comprehensive platform for automation and scalability. Below, we\u2019ll explore the key components and how they contribute to this groundbreaking ecosystem.
"},{"location":"swarms/concept/swarm_ecosystem/#1-swarms-framework","title":"1. Swarms Framework","text":"The Swarms Framework is a Python-based toolkit that simplifies the creation, orchestration, and scaling of swarms of agents. Whether you are dealing with marketing, accounting, or data analysis, the Swarms Framework allows developers to automate complex workflows efficiently.
graph TD;\n SF[Swarms Framework] --> Core[Swarms Core]\n SF --> JS[Swarms JS]\n SF --> Memory[Swarms Memory]\n SF --> Evals[Swarms Evals]\n SF --> Zero[Swarms Zero]
"},{"location":"swarms/concept/swarm_ecosystem/#2-swarms-cloud","title":"2. Swarms-Cloud","text":"Swarms-Cloud is a cloud-based solution that enables you to deploy your agents with enterprise-level guarantees. It provides 99% uptime, infinite scalability, and self-healing capabilities, making it ideal for mission-critical operations.
graph TD;\n SC[Swarms-Cloud] --> Uptime[99% Uptime]\n SC --> Scale[Infinite Scalability]\n SC --> Healing[Self-Healing]
"},{"location":"swarms/concept/swarm_ecosystem/#3-swarms-models","title":"3. Swarms-Models","text":"Swarms-Models offers a seamless interface to leading LLM providers like OpenAI, Anthropic, and Ollama. It allows developers to tap into cutting-edge natural language understanding for their agents.
graph TD;\n SM[Swarms-Models] --> OpenAI[OpenAI API]\n SM --> Anthropic[Anthropic API]\n SM --> Ollama[Ollama API]
"},{"location":"swarms/concept/swarm_ecosystem/#4-agentparse","title":"4. AgentParse","text":"AgentParse is a high-performance library for mapping structured data like JSON, YAML, CSV, and Pydantic models into formats understandable by agents. This ensures fast, seamless data ingestion.
graph TD;\n AP[AgentParse] --> JSON[JSON Parsing]\n AP --> YAML[YAML Parsing]\n AP --> CSV[CSV Parsing]\n AP --> Pydantic[Pydantic Model Parsing]
"},{"location":"swarms/concept/swarm_ecosystem/#5-swarms-platform","title":"5. Swarms-Platform","text":"The Swarms-Platform is a marketplace where developers can find, buy, and sell autonomous agents. It enables the rapid scaling of agent ecosystems by leveraging ready-made solutions.
graph TD;\n SP[Swarms-Platform] --> Discover[Discover Agents]\n SP --> Buy[Buy Agents]\n SP --> Sell[Sell Agents]
"},{"location":"swarms/concept/swarm_ecosystem/#extending-the-ecosystem-swarms-core-js-and-more","title":"Extending the Ecosystem: Swarms Core, JS, and More","text":"In addition to the core components, the Swarms Ecosystem offers several other powerful packages:
graph TD;\n SC[Swarms Core] --> Rust[Rust for Performance]\n JS[Swarms JS] --> MultiAgent[Multi-Agent Orchestration]\n Memory[Swarms Memory] --> RAG[Retrieval Augmented Generation]\n Evals[Swarms Evals] --> Evaluation[Agent Evaluations]\n Zero[Swarms Zero] --> Automation[Enterprise Automation]
"},{"location":"swarms/concept/swarm_ecosystem/#conclusion","title":"Conclusion","text":"The Swarms Ecosystem is a comprehensive, flexible, and scalable platform for managing and deploying autonomous agents. Whether you\u2019re working with LLMs, IoT data, or building new models, the ecosystem provides the tools necessary to simplify automation at scale.
Start exploring the possibilities by checking out the Swarms Ecosystem GitHub repository and join our growing community of developers and innovators.
"},{"location":"swarms/concept/vision/","title":"Swarms \u2013 The Ultimate Multi-Agent LLM Framework for Developers","text":"Swarms aims to be the definitive and most reliable multi-agent LLM framework, offering developers the tools to automate business operations effortlessly. It provides a vast array of swarm architectures, seamless third-party integration, and unparalleled ease of use. With Swarms, developers can orchestrate intelligent, scalable agent ecosystems that can automate complex business processes.
"},{"location":"swarms/concept/vision/#key-features-for-developers","title":"Key Features for Developers:","text":"This example demonstrates a simple financial agent setup that responds to financial questions, such as establishing a Roth IRA, using OpenAI's GPT-based model.
from swarms.structs.agent import Agent\nfrom swarms.prompts.finance_agent_sys_prompt import FINANCIAL_AGENT_SYS_PROMPT\n\n# Initialize the Financial Analysis Agent with GPT-4o-mini model\nagent = Agent(\n agent_name=\"Financial-Analysis-Agent\",\n system_prompt=FINANCIAL_AGENT_SYS_PROMPT,\n model_name=\"gpt-4o-mini\",\n max_loops=1,\n autosave=True,\n dashboard=False,\n verbose=True,\n dynamic_temperature_enabled=True,\n saved_state_path=\"finance_agent.json\",\n user_name=\"swarms_corp\",\n retry_attempts=1,\n context_length=200000,\n return_step_meta=False,\n)\n\n# Example task for the agent\nout = agent.run(\n \"How can I establish a ROTH IRA to buy stocks and get a tax break? What are the criteria?\"\n)\n\n# Output the result\nprint(out)\n
"},{"location":"swarms/concept/vision/#2-agent-orchestration-with-agentrearrange","title":"2. Agent Orchestration with AgentRearrange:","text":"The following example showcases how to use the AgentRearrange
class to manage a multi-agent system. It sets up a director agent to orchestrate two workers\u2014one to generate a transcript and another to summarize it.
from swarms.structs.agent import Agent\nfrom swarms.structs.rearrange import AgentRearrange \n\n# Initialize the Director agent using Anthropic model via model_name\ndirector = Agent(\n agent_name=\"Director\",\n system_prompt=\"You are a Director agent. Your role is to coordinate and direct tasks for worker agents. Break down complex tasks into clear, actionable steps.\",\n model_name=\"claude-3-sonnet-20240229\",\n max_loops=1,\n dashboard=False,\n streaming_on=False, \n verbose=True,\n stopping_token=\"<DONE>\",\n state_save_file_type=\"json\",\n saved_state_path=\"director.json\",\n)\n\n# Worker 1: transcript generation\nworker1 = Agent(\n agent_name=\"Worker1\",\n system_prompt=\"You are a content creator agent. Your role is to generate detailed, engaging transcripts for YouTube videos about technical topics. Focus on clarity and educational value.\",\n model_name=\"claude-3-sonnet-20240229\",\n max_loops=1,\n dashboard=False,\n streaming_on=False, \n verbose=True,\n stopping_token=\"<DONE>\",\n state_save_file_type=\"json\",\n saved_state_path=\"worker1.json\",\n)\n\n# Worker 2: summarization\nworker2 = Agent(\n agent_name=\"Worker2\",\n system_prompt=\"You are a summarization agent. Your role is to create concise, clear summaries of technical content while maintaining key information and insights.\",\n model_name=\"claude-3-sonnet-20240229\",\n max_loops=1,\n dashboard=False,\n streaming_on=False, \n verbose=True,\n stopping_token=\"<DONE>\",\n state_save_file_type=\"json\",\n saved_state_path=\"worker2.json\",\n)\n\n# Orchestrate the agents in sequence\nagents = [director, worker1, worker2]\nflow = \"Director -> Worker1 -> Worker2\"\nagent_system = AgentRearrange(agents=agents, flow=flow)\n\n# Run the workflow\noutput = agent_system.run(\n \"Create a format to express and communicate swarms of LLMs in a structured manner for YouTube\"\n)\nprint(output)\n
"},{"location":"swarms/concept/vision/#1-basic-agent-flow","title":"1. Basic Agent Flow:","text":"Here\u2019s a visual representation of the basic workflow using Mermaid to display the sequential flow between agents.
flowchart TD\n A[Director] --> B[Worker 1: Generate Transcript]\n B --> C[Worker 2: Summarize Transcript]
In this diagram:
The Director agent assigns tasks.
Worker 1 generates a transcript for a YouTube video.
Worker 2 summarizes the transcript.
"},{"location":"swarms/concept/vision/#2-sequential-agent-flow","title":"2. Sequential Agent Flow:","text":"This diagram showcases a sequential agent setup where one agent completes its task before the next agent starts its task.
flowchart TD\n A[Director] --> B[Worker 1: Generate Transcript]\n B --> C[Worker 2: Summarize Transcript]\n C --> D[Worker 3: Finalize]
In this setup:
The Director agent assigns tasks to Worker 1, which generates a transcript for a YouTube video.
Worker 1 completes its task before Worker 2 starts summarizing the transcript.
Worker 2 completes its task before Worker 3 finalizes the process.
Swarms is designed with flexibility at its core. Developers can create custom architectures and workflows, giving them fine-grained control over how agents interact with each other. Whether it’s a linear process or a complex mesh of agent communications, Swarms handles it efficiently.
With extensive support for third-party integration, Swarms makes it easy for developers to plug into external systems, such as APIs or internal databases. This allows agents to act on live data, process external inputs, and execute actions in real time, making it a powerful tool for real-world applications.
Swarms abstracts the complexity of managing multiple agents with orchestration tools like AgentRearrange. Developers can define workflows that execute tasks concurrently or sequentially, depending on the problem at hand. This makes it easy to build and maintain large-scale automation systems.
Swarms is not just another multi-agent framework; it's built specifically for developers who need powerful tools to automate complex, large-scale business operations. With flexible architecture, deep integration capabilities, and developer-friendly APIs, Swarms is the ultimate solution for businesses looking to streamline operations and future-proof their workflows.
"},{"location":"swarms/concept/why/","title":"Benefits","text":"Maximizing Enterprise Automation: Overcoming the Limitations of Individual AI Agents Through Multi-Agent Collaboration
In today's rapidly evolving business landscape, enterprises are constantly seeking innovative solutions to enhance efficiency, reduce operational costs, and maintain a competitive edge. Automation has emerged as a critical strategy for achieving these objectives, with artificial intelligence (AI) playing a pivotal role. AI agents, particularly those powered by advanced machine learning models, have shown immense potential in automating a variety of tasks. However, individual AI agents come with inherent limitations that hinder their ability to fully automate complex enterprise operations at scale.
This essay dives into the specific limitations of individual AI agents\u2014context window limits, hallucination, single-task execution, lack of collaboration, lack of accuracy, and slow processing speed\u2014and explores how multi-agent collaboration can overcome these challenges. By tailoring our discussion to the needs of enterprises aiming to automate operations at scale, we highlight practical strategies and frameworks that can be adopted to unlock the full potential of AI-driven automation.
"},{"location":"swarms/concept/why/#part-1-the-limitations-of-individual-ai-agents","title":"Part 1: The Limitations of Individual AI Agents","text":"Despite significant advancements, individual AI agents face several obstacles that limit their effectiveness in enterprise automation. Understanding these limitations is crucial for organizations aiming to implement AI solutions that are both efficient and scalable.
"},{"location":"swarms/concept/why/#1-context-window-limits","title":"1. Context Window Limits","text":"Explanation
AI agents, especially those based on language models like GPT-3 or GPT-4, operate within a fixed context window. This means they can only process and consider a limited amount of information (tokens) at a time. In practical terms, this restricts the agent's ability to handle large documents, long conversations, or complex datasets that exceed their context window.
Impact on Enterprises
For enterprises, this limitation poses significant challenges. Business operations often involve processing extensive documents such as legal contracts, technical manuals, or large datasets. An AI agent with a limited context window may miss crucial information located outside its immediate context, leading to incomplete analyses or erroneous conclusions.
graph LR\n subgraph \"Context Window Limit\"\n Input[Large Document]\n Agent[AI Agent]\n Output[Partial Understanding]\n Input -- Truncated Data --> Agent\n Agent -- Generates --> Output\n end
An AI agent processes only a portion of a large document due to context window limits, resulting in partial understanding.
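One common mitigation is to split oversized inputs before any single agent sees them. Below is a minimal, framework-agnostic sketch; the word-based token approximation and the `chunk_text` helper are illustrative assumptions, not a Swarms API:

```python
def chunk_text(text, max_tokens=100, overlap=20):
    """Split text into overlapping chunks that each fit a context window.

    Tokens are approximated by whitespace-separated words here; a real
    system would use the model's own tokenizer.
    """
    words = text.split()
    chunks = []
    step = max_tokens - overlap
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_tokens]))
        if start + max_tokens >= len(words):
            break
    return chunks

# A 250-word "document" becomes three window-sized, overlapping chunks.
doc = " ".join(f"word{i}" for i in range(250))
chunks = chunk_text(doc, max_tokens=100, overlap=20)
```

The overlap preserves local context across chunk boundaries, so no sentence is understood in isolation from its neighbors.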
"},{"location":"swarms/concept/why/#2-hallucination","title":"2. Hallucination","text":"Explanation
Hallucination refers to the tendency of AI agents to produce outputs that are not grounded in the input data or reality. They may generate plausible-sounding but incorrect or nonsensical information, especially when uncertain or when the input data is ambiguous.
Impact on Enterprises
In enterprise settings, hallucinations can lead to misinformation, poor decision-making, and a lack of trust in AI systems. For instance, if an AI agent generates incorrect financial forecasts or misinterprets regulatory requirements, the consequences could be financially damaging and legally problematic.
graph TD\n Input[Ambiguous Data]\n Agent[AI Agent]\n Output[Incorrect Information]\n Input --> Agent\n Agent --> Output
An AI agent generates incorrect information (hallucination) when processing ambiguous data.
"},{"location":"swarms/concept/why/#3-single-task-execution","title":"3. Single Task Execution","text":"Explanation
Many AI agents are designed to excel at a specific task or a narrow set of functions. They lack the flexibility to perform multiple tasks simultaneously or adapt to new tasks without significant reconfiguration or retraining.
Impact on Enterprises
Enterprises require systems that can handle a variety of tasks, often concurrently. Relying on single-task agents necessitates deploying multiple separate agents, which can lead to integration challenges, increased complexity, and higher maintenance costs.
graph LR\n TaskA[Task A] --> AgentA[Agent A]\n TaskB[Task B] --> AgentB[Agent B]\n AgentA --> OutputA[Result A]\n AgentB --> OutputB[Result B]
Separate agents handle different tasks independently, lacking integration.
"},{"location":"swarms/concept/why/#4-lack-of-collaboration","title":"4. Lack of Collaboration","text":"Explanation
Individual AI agents typically operate in isolation, without the ability to communicate or collaborate with other agents. This siloed operation prevents them from sharing insights, learning from each other, or coordinating actions to achieve a common goal.
Impact on Enterprises
Complex enterprise operations often require coordinated efforts across different functions and departments. The inability of AI agents to collaborate limits their effectiveness in such environments, leading to disjointed processes and suboptimal outcomes.
graph LR\n Agent1[Agent 1]\n Agent2[Agent 2]\n Agent3[Agent 3]\n Agent1 -->|No Communication| Agent2\n Agent2 -->|No Communication| Agent3
Agents operate without collaboration, resulting in isolated efforts.
"},{"location":"swarms/concept/why/#5-lack-of-accuracy","title":"5. Lack of Accuracy","text":"Explanation
AI agents may produce inaccurate results due to limitations in their training data, algorithms, or inability to fully understand complex inputs. Factors such as data bias, overfitting, or lack of domain-specific knowledge contribute to this inaccuracy.
Impact on Enterprises
Inaccurate outputs can have serious ramifications for businesses, including flawed strategic decisions, customer dissatisfaction, and compliance risks. High accuracy is essential for tasks like financial analysis, customer service, and regulatory compliance.
graph TD\n Input[Complex Data]\n Agent[AI Agent]\n Output[Inaccurate Result]\n Input --> Agent\n Agent --> Output
An AI agent produces an inaccurate result when handling complex data.
"},{"location":"swarms/concept/why/#6-slow-processing-speed","title":"6. Slow Processing Speed","text":"Explanation
Some AI agents require significant computational resources and time to process data and generate outputs. Factors like model complexity, inefficient algorithms, or hardware limitations can contribute to slow processing speeds.
Impact on Enterprises
Slow processing impedes real-time decision-making and responsiveness. In fast-paced business environments, delays can lead to missed opportunities, reduced productivity, and competitive disadvantages.
graph TD\n Input[Data]\n Agent[AI Agent]\n Delay[Processing Delay]\n Output[Delayed Response]\n Input --> Agent\n Agent --> Delay\n Delay --> Output
An AI agent's slow processing leads to delayed responses.
"},{"location":"swarms/concept/why/#part-2-overcoming-limitations-through-multi-agent-collaboration","title":"Part 2: Overcoming Limitations Through Multi-Agent Collaboration","text":"To address the challenges posed by individual AI agents, enterprises can adopt a multi-agent collaboration approach. By orchestrating multiple agents with complementary skills and functionalities, organizations can enhance performance, accuracy, and scalability in their automation efforts.
"},{"location":"swarms/concept/why/#1-extending-context-window-through-distributed-processing","title":"1. Extending Context Window Through Distributed Processing","text":"Solution
By dividing large inputs into smaller segments, multiple agents can process different parts simultaneously. A coordinating agent can then aggregate the results to form a comprehensive understanding.
Implementation in Enterprises
graph LR\n Input[Large Document]\n Splitter[Splitter Agent]\n A1[Agent 1]\n A2[Agent 2]\n A3[Agent 3]\n Aggregator[Aggregator Agent]\n Output[Comprehensive Analysis]\n Input --> Splitter\n Splitter --> A1\n Splitter --> A2\n Splitter --> A3\n A1 --> Aggregator\n A2 --> Aggregator\n A3 --> Aggregator\n Aggregator --> Output
Multiple agents process segments of a large document, and results are aggregated.
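The splitter/worker/aggregator pattern above can be sketched in plain Python. The worker and aggregator below are stand-in functions (in practice each would be an LLM agent call), so treat the names as assumptions rather than framework API:

```python
from concurrent.futures import ThreadPoolExecutor

def analyze_segment(segment):
    # Stand-in for a worker agent summarizing one segment.
    return f"summary({segment})"

def aggregate(partials):
    # Stand-in for the aggregator agent merging partial results.
    return " | ".join(partials)

def analyze_document(segments):
    # The splitter has already produced `segments`; workers run in
    # parallel and the aggregator merges their outputs in order.
    with ThreadPoolExecutor(max_workers=3) as pool:
        partials = list(pool.map(analyze_segment, segments))
    return aggregate(partials)

result = analyze_document(["intro", "body", "conclusion"])
```

`pool.map` preserves input order, so the aggregated analysis reads in the same order as the original document.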
"},{"location":"swarms/concept/why/#2-reducing-hallucination-through-cross-verification","title":"2. Reducing Hallucination Through Cross-Verification","text":"Solution
Agents can verify each other's outputs by cross-referencing information and flagging inconsistencies. Implementing consensus mechanisms ensures that only accurate information is accepted.
Implementation in Enterprises
graph TD\n A[Agent's Output]\n V1[Verifier Agent 1]\n V2[Verifier Agent 2]\n Consensus[Consensus Mechanism]\n Output[Validated Output]\n A --> V1\n A --> V2\n V1 & V2 --> Consensus\n Consensus --> Output
Agents verify outputs through cross-verification and consensus.
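A majority-vote consensus gate can be expressed in a few lines. The verifier functions below are hypothetical placeholders for independent fact-checking agents:

```python
def consensus(candidate, verifiers):
    """Accept `candidate` only if a strict majority of verifiers agree."""
    votes = [verify(candidate) for verify in verifiers]
    return candidate if sum(votes) > len(votes) / 2 else None

# Hypothetical verifiers: check for a citation marker, minimum detail, etc.
verifiers = [
    lambda c: "cited" in c,
    lambda c: len(c) > 5,
    lambda c: True,
]

accepted = consensus("cited claim about Q3 revenue", verifiers)  # 3/3 votes
rejected = consensus("bare", verifiers)                          # 1/3 votes
```

Rejected outputs can be routed back to the originating agent for regeneration rather than silently discarded.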
"},{"location":"swarms/concept/why/#3-enhancing-multi-tasking-through-specialized-agents","title":"3. Enhancing Multi-Tasking Through Specialized Agents","text":"Solution
Deploy specialized agents for different tasks and enable them to work concurrently. An orchestrator agent manages task allocation and workflow integration.
Implementation in Enterprises
graph LR\n Task[Complex Task]\n Orchestrator[Orchestrator Agent]\n AgentA[Specialist Agent A]\n AgentB[Specialist Agent B]\n AgentC[Specialist Agent C]\n Output[Integrated Solution]\n Task --> Orchestrator\n Orchestrator --> AgentA\n Orchestrator --> AgentB\n Orchestrator --> AgentC\n AgentA & AgentB & AgentC --> Orchestrator\n Orchestrator --> Output
Specialized agents handle different tasks under the management of an orchestrator agent.
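A minimal routing sketch, with specialists as plain callables keyed by capability; the capability names and the `orchestrate` helper are illustrative assumptions, not part of the Swarms API:

```python
# Hypothetical specialist agents keyed by capability.
specialists = {
    "extract": lambda payload: f"fields({payload})",
    "classify": lambda payload: f"label({payload})",
    "summarize": lambda payload: f"brief({payload})",
}

def orchestrate(subtasks):
    # The orchestrator routes each subtask to the matching specialist
    # and integrates the results into one structure.
    return {cap: specialists[cap](payload) for cap, payload in subtasks}

out = orchestrate([("extract", "invoice"), ("summarize", "report")])
```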
"},{"location":"swarms/concept/why/#4-facilitating-collaboration-through-communication-protocols","title":"4. Facilitating Collaboration Through Communication Protocols","text":"Solution
Implement communication protocols that allow agents to share information, request assistance, and coordinate actions. This fosters a collaborative environment where agents complement each other's capabilities.
Implementation in Enterprises
graph LR\n Agent1[Agent 1]\n Agent2[Agent 2]\n Agent3[Agent 3]\n Agent1 <--> Agent2\n Agent2 <--> Agent3\n Agent3 <--> Agent1\n Output[Collaborative Outcome]
Agents communicate and collaborate to achieve a common goal.
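At its simplest, a communication protocol is a shared bus carrying typed messages. The sketch below uses a plain queue and hypothetical agent roles to show one agent building on another's output:

```python
from queue import Queue

bus = Queue()  # shared message bus between agents

def researcher():
    # Publishes a typed message instead of working in isolation.
    bus.put({"from": "researcher", "type": "finding", "body": "rates rose"})

def writer():
    # Consumes the finding and builds on it.
    msg = bus.get()
    return f"Report citing {msg['from']}: {msg['body']}"

researcher()
report = writer()
```

A real protocol would add message schemas, routing, and acknowledgements, but the principle is the same: agents exchange structured messages rather than operating in silos.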
"},{"location":"swarms/concept/why/#5-improving-accuracy-through-ensemble-learning","title":"5. Improving Accuracy Through Ensemble Learning","text":"Solution
Use ensemble methods where multiple agents provide predictions or analyses, and a meta-agent combines these to produce a more accurate result.
Implementation in Enterprises
graph TD\n AgentA[Agent A Output]\n AgentB[Agent B Output]\n AgentC[Agent C Output]\n MetaAgent[Meta-Agent]\n Output[Enhanced Accuracy]\n AgentA --> MetaAgent\n AgentB --> MetaAgent\n AgentC --> MetaAgent\n MetaAgent --> Output
Meta-agent combines outputs from multiple agents to improve accuracy.
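A meta-agent that combines predictions by majority vote is a one-liner with `collections.Counter`; real systems might instead weight votes by each agent's confidence. The scenario below is illustrative:

```python
from collections import Counter

def meta_agent(predictions):
    # Majority vote; ties fall to the first-seen answer.
    return Counter(predictions).most_common(1)[0][0]

# Three hypothetical agents classify the same loan application.
answer = meta_agent(["approve", "approve", "reject"])
```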
"},{"location":"swarms/concept/why/#6-increasing-processing-speed-through-parallelization","title":"6. Increasing Processing Speed Through Parallelization","text":"Solution
By distributing workloads among multiple agents operating in parallel, processing times are significantly reduced, enabling real-time responses.
Implementation in Enterprises
graph LR\n Data[Large Dataset]\n Agent1[Agent 1]\n Agent2[Agent 2]\n Agent3[Agent 3]\n Output[Processed Data]\n Data --> Agent1\n Data --> Agent2\n Data --> Agent3\n Agent1 & Agent2 & Agent3 --> Output
Parallel processing by agents leads to faster completion times.
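The speedup is easy to demonstrate with a simulated per-record latency; each `agent` call below stands in for a model invocation:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def agent(record):
    time.sleep(0.01)  # simulated model latency per record
    return record * 2

records = list(range(30))  # serially this would take ~0.3 s

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=3) as pool:
    processed = list(pool.map(agent, records))
elapsed = time.perf_counter() - start  # roughly a third of serial time
```

Because LLM calls are I/O-bound, thread-based parallelism is usually sufficient; CPU-bound agent workloads would call for processes instead.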
"},{"location":"swarms/concept/why/#part-3-tailoring-multi-agent-systems-for-enterprise-automation-at-scale","title":"Part 3: Tailoring Multi-Agent Systems for Enterprise Automation at Scale","text":"Implementing multi-agent systems in an enterprise context requires careful planning and consideration of organizational needs, technical infrastructure, and strategic goals. Below are key considerations and steps for enterprises aiming to adopt multi-agent collaboration for automation at scale.
"},{"location":"swarms/concept/why/#1-identifying-automation-opportunities","title":"1. Identifying Automation Opportunities","text":"Enterprises should start by identifying processes and tasks that are suitable for automation through multi-agent systems. Prioritize areas where:
Develop a robust architecture that defines how agents will interact, communicate, and collaborate. Key components include:
Data security is paramount when agents handle sensitive enterprise information. Implement measures such as:
Establish monitoring tools to track agent performance, system health, and outcomes. Key metrics may include:
Develop strategies for scaling the system as enterprise needs grow, including:
Implement feedback loops for ongoing enhancement of the multi-agent system:
To illustrate the practical benefits of multi-agent collaboration in enterprise automation, let's explore several real-world examples.
"},{"location":"swarms/concept/why/#case-study-1-financial-services-automation","title":"Case Study 1: Financial Services Automation","text":"Challenge
A financial institution needs to process large volumes of loan applications, requiring data verification, risk assessment, compliance checks, and decision-making.
Solution
Decision Agent: Aggregates inputs and makes approval decisions.
Collaboration:
Outcome
Challenge
A manufacturing company wants to optimize its supply chain to reduce costs and improve delivery times.
Solution
Supplier Evaluation Agent: Assesses supplier performance and reliability.
Collaboration:
Outcome
Challenge
A hospital aims to improve patient care coordination, managing appointments, medical records, billing, and treatment plans.
Solution
Treatment Planning Agent: Assists in developing patient care plans.
Collaboration:
Outcome
For enterprises embarking on the journey of multi-agent automation, adhering to best practices ensures successful implementation.
"},{"location":"swarms/concept/why/#1-start-small-and-scale-gradually","title":"1. Start Small and Scale Gradually","text":"Enterprises seeking to automate operations at scale face the limitations inherent in individual AI agents. Context window limits, hallucinations, single-task execution, lack of collaboration, lack of accuracy, and slow processing speed hinder the full potential of automation efforts. Multi-agent collaboration emerges as a robust solution to these challenges, offering a pathway to enhanced efficiency, accuracy, scalability, and adaptability.
By adopting multi-agent systems, enterprises can:
Implementing multi-agent systems requires thoughtful planning, adherence to best practices, and a commitment to ongoing management and optimization. Enterprises that successfully navigate this journey will position themselves at the forefront of automation, unlocking new levels of productivity and competitive advantage in an increasingly digital world.
"},{"location":"swarms/concept/purpose/limits_of_individual_agents/","title":"The Limits of Individual Agents","text":"Individual agents have pushed the boundaries of what machines can learn and accomplish. However, despite their impressive capabilities, these agents face inherent limitations that can hinder their effectiveness in complex, real-world applications. This blog explores the critical constraints of individual agents, such as context window limits, hallucination, single-task threading, and lack of collaboration, and illustrates how multi-agent collaboration can address these limitations. In short, a group of collaborating agents can compensate for the weaknesses of any single agent.
One of the most significant constraints of individual agents, particularly in the domain of language models, is the context window limit. This limitation refers to the maximum amount of information an agent can consider at any given time. For instance, many language models can only process a fixed number of tokens (words or characters) in a single inference, restricting their ability to understand and generate responses based on longer texts. This limitation can lead to a lack of coherence in longer compositions and an inability to maintain context in extended conversations or documents.
"},{"location":"swarms/concept/purpose/limits_of_individual_agents/#hallucination","title":"Hallucination","text":"Hallucination in AI refers to the phenomenon where an agent generates information that is not grounded in the input data or real-world facts. This can manifest as making up facts, entities, or events that do not exist or are incorrect. Hallucinations pose a significant challenge in ensuring the reliability and trustworthiness of AI-generated content, particularly in critical applications such as news generation, academic research, and legal advice.
"},{"location":"swarms/concept/purpose/limits_of_individual_agents/#single-task-threading","title":"Single Task Threading","text":"Individual agents are often designed to excel at specific tasks, leveraging their architecture and training data to optimize performance in a narrowly defined domain. However, this specialization can also be a drawback, as it limits the agent's ability to multitask or adapt to tasks that fall outside its primary domain. Single-task threading means an agent may excel in language translation but struggle with image recognition or vice versa, necessitating the deployment of multiple specialized agents for comprehensive AI solutions.
"},{"location":"swarms/concept/purpose/limits_of_individual_agents/#lack-of-collaboration","title":"Lack of Collaboration","text":"Traditional AI agents operate in isolation, processing inputs and generating outputs independently. This isolation limits their ability to leverage diverse perspectives, share knowledge, or build upon the insights of other agents. In complex problem-solving scenarios, where multiple facets of a problem need to be addressed simultaneously, this lack of collaboration can lead to suboptimal solutions or an inability to tackle multifaceted challenges effectively.
"},{"location":"swarms/concept/purpose/limits_of_individual_agents/#the-elegant-yet-simple-solution","title":"The Elegant yet Simple Solution","text":"Recognizing the limitations of individual agents, researchers and practitioners have explored the potential of multi-agent collaboration as a means to transcend these constraints. Multi-agent systems comprise several agents that can interact, communicate, and collaborate to achieve common goals or solve complex problems. This collaborative approach offers several advantages:
"},{"location":"swarms/concept/purpose/limits_of_individual_agents/#multi-agent-collaboration","title":"Multi-Agent Collaboration","text":""},{"location":"swarms/concept/purpose/limits_of_individual_agents/#overcoming-context-window-limits","title":"Overcoming Context Window Limits","text":"By dividing a large task among multiple agents, each focusing on different segments of the problem, multi-agent systems can effectively overcome the context window limits of individual agents. For instance, in processing a long document, different agents could be responsible for understanding and analyzing different sections, pooling their insights to generate a coherent understanding of the entire text.
"},{"location":"swarms/concept/purpose/limits_of_individual_agents/#mitigating-hallucination","title":"Mitigating Hallucination","text":"Through collaboration, agents can cross-verify facts and information, reducing the likelihood of hallucinations. If one agent generates a piece of information, other agents can provide checks and balances, verifying the accuracy against known data or through consensus mechanisms.
"},{"location":"swarms/concept/purpose/limits_of_individual_agents/#enhancing-multitasking-capabilities","title":"Enhancing Multitasking Capabilities","text":"Multi-agent systems can tackle tasks that require a diverse set of skills by leveraging the specialization of individual agents. For example, in a complex project that involves both natural language processing and image analysis, one agent specialized in text can collaborate with another specialized in visual data, enabling a comprehensive approach to the task.
"},{"location":"swarms/concept/purpose/limits_of_individual_agents/#facilitating-collaboration-and-knowledge-sharing","title":"Facilitating Collaboration and Knowledge Sharing","text":"Multi-agent collaboration inherently encourages the sharing of knowledge and insights, allowing agents to learn from each other and improve their collective performance. This can be particularly powerful in scenarios where iterative learning and adaptation are crucial, such as dynamic environments or tasks that evolve over time.
"},{"location":"swarms/concept/purpose/limits_of_individual_agents/#conclusion","title":"Conclusion","text":"While individual AI agents have made remarkable strides in various domains, their inherent limitations necessitate innovative approaches to unlock the full potential of artificial intelligence. Multi-agent collaboration emerges as a compelling solution, offering a pathway to transcend individual constraints through collective intelligence. By harnessing the power of collaborative AI, we can address more complex, multifaceted problems, paving the way for more versatile, efficient, and effective AI systems in the future.
"},{"location":"swarms/concept/purpose/why/","title":"The Swarms Framework: Orchestrating Agents for Enterprise Automation","text":"In the rapidly evolving landscape of artificial intelligence (AI) and automation, a new paradigm is emerging: the orchestration of multiple agents working in collaboration to tackle complex tasks. This approach, embodied by the Swarms Framework, aims to address the fundamental limitations of individual agents and unlocks the true potential of AI-driven automation in enterprise operations.
Individual agents are plagued by the same issues: short-term memory constraints, hallucinations, single-task limitations, lack of collaboration, and cost inefficiencies.
Learn more here from a compiled list of agent papers
"},{"location":"swarms/concept/purpose/why/#the-purpose-of-swarms-overcoming-agent-limitations","title":"The Purpose of Swarms: Overcoming Agent Limitations","text":"Individual agents, while remarkable in their own right, face several inherent challenges that hinder their ability to effectively automate enterprise operations at scale. These limitations include:
By orchestrating multiple agents to work in concert, the Swarms Framework directly tackles these limitations, paving the way for more efficient, reliable, and cost-effective enterprise automation.
"},{"location":"swarms/concept/purpose/why/#limitation-1-short-term-memory-constraints","title":"Limitation 1: Short-Term Memory Constraints","text":"Many AI agents, particularly those based on large language models, suffer from short-term memory constraints. These agents can effectively process and respond to prompts, but their ability to retain and reason over information across multiple interactions or tasks is limited. This limitation can be problematic in enterprise environments, where complex workflows often involve retaining and referencing contextual information over extended periods.
The Swarms Framework addresses this limitation by leveraging the collective memory of multiple agents working in tandem. While individual agents may have limited short-term memory, their combined memory pool becomes significantly larger, enabling the retention and retrieval of contextual information over extended periods. This collective memory is facilitated by agents specializing in information storage and retrieval, such as those based on systems like Llama Index or Pinecone.
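A dedicated memory agent can be sketched as a shared store with keyword recall. This is a naive illustration, not a Swarms class; a production system would back it with a vector database such as Pinecone rather than the word-overlap match used here:

```python
class MemoryAgent:
    """Naive shared memory: stores facts from any agent and recalls
    them later by keyword overlap. Illustrative only."""

    def __init__(self):
        self.facts = []

    def store(self, source, fact):
        self.facts.append((source, fact))

    def recall(self, query):
        terms = set(query.lower().split())
        return [fact for _, fact in self.facts
                if terms & set(fact.lower().split())]

memory = MemoryAgent()
memory.store("worker1", "Q3 revenue grew 12 percent")
memory.store("worker2", "headcount flat in Q3")
hits = memory.recall("revenue growth")
```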
"},{"location":"swarms/concept/purpose/why/#limitation-2-hallucination-and-factual-inconsistencies","title":"Limitation 2: Hallucination and Factual Inconsistencies","text":"Another challenge faced by many AI agents is the tendency to generate responses that may contain factual inconsistencies or hallucinations -- information that is not grounded in reality or the provided context. This issue can undermine the reliability and trustworthiness of automated systems, particularly in domains where accuracy and consistency are paramount.
The Swarms Framework mitigates this limitation by employing multiple agents with diverse knowledge bases and capabilities. By leveraging the collective intelligence of these agents, the framework can cross-reference and validate information, reducing the likelihood of hallucinations and factual inconsistencies. Additionally, specialized agents can be tasked with fact-checking and verification, further enhancing the overall reliability of the system.
"},{"location":"swarms/concept/purpose/why/#limitation-3-single-task-limitations","title":"Limitation 3: Single-Task Limitations","text":"Most individual AI agents are designed and optimized for specific tasks or domains, limiting their ability to handle complex, multi-faceted workflows that often characterize enterprise operations. While an agent may excel at a particular task, such as natural language processing or data analysis, it may struggle with other aspects of a larger workflow, such as task coordination or decision-making.
The Swarms Framework overcomes this limitation by orchestrating a diverse ensemble of agents, each specializing in different tasks or capabilities. By intelligently combining and coordinating these agents, the framework can tackle complex, multi-threaded workflows that span various domains and task types. This modular approach allows for the seamless integration of new agents as they become available, enabling the continuous expansion and enhancement of the system's capabilities.
"},{"location":"swarms/concept/purpose/why/#limitation-4-lack-of-collaborative-capabilities","title":"Limitation 4: Lack of Collaborative Capabilities","text":"Most AI agents are designed to operate independently, lacking the ability to effectively collaborate with other agents or coordinate their actions towards a common goal. This limitation can hinder the scalability and efficiency of automated systems, particularly in enterprise environments where tasks often require the coordination of multiple agents or systems.
The Swarms Framework addresses this limitation by introducing a layer of coordination and collaboration among agents. Through specialized coordination agents and communication protocols, the framework enables agents to share information, divide tasks, and synchronize their actions. This collaborative approach not only increases efficiency but also enables the emergence of collective intelligence, where the combined capabilities of multiple agents surpass the sum of their individual abilities.
"},{"location":"swarms/concept/purpose/why/#limitation-5-cost-inefficiencies","title":"Limitation 5: Cost Inefficiencies","text":"Running large AI models or orchestrating multiple agents can be computationally expensive, particularly in enterprise environments where scalability and cost-effectiveness are critical considerations. Inefficient resource utilization or redundant computations can quickly escalate costs, making widespread adoption of AI-driven automation financially prohibitive.
The Swarms Framework tackles this limitation by optimizing resource allocation and workload distribution among agents. By intelligently assigning tasks to the most appropriate agents and leveraging agent specialization, the framework minimizes redundant computations and improves overall resource utilization. Additionally, the framework can dynamically scale agent instances based on demand, ensuring that computational resources are allocated efficiently and costs are minimized.
"},{"location":"swarms/concept/purpose/why/#the-swarms-framework-a-holistic-approach-to-enterprise-automation","title":"The Swarms Framework: A Holistic Approach to Enterprise Automation","text":"The Swarms Framework is a comprehensive solution that addresses the limitations of individual agents by orchestrating their collective capabilities. By integrating agents from various frameworks, including LangChain, AutoGPT, Llama Index, and others, the framework leverages the strengths of each agent while mitigating their individual weaknesses.
At its core, the Swarms Framework operates on the principle of multi-agent collaboration. By introducing specialized coordination agents and communication protocols, the framework enables agents to share information, divide tasks, and synchronize their actions towards a common goal. This collaborative approach not only increases efficiency but also enables the emergence of collective intelligence, where the combined capabilities of multiple agents surpass the sum of their individual abilities.
The framework's architecture is modular and extensible, allowing for the seamless integration of new agents as they become available. This flexibility ensures that the system's capabilities can continuously expand and adapt to evolving enterprise needs and technological advancements.
"},{"location":"swarms/concept/purpose/why/#benefits-of-the-swarms-framework","title":"Benefits of the Swarms Framework","text":"The adoption of the Swarms Framework in enterprise environments offers numerous benefits:
By orchestrating the collective capabilities of multiple agents, the Swarms Framework enables the efficient execution of complex, multi-threaded workflows. Tasks can be parallelized and distributed across specialized agents, reducing bottlenecks and increasing overall throughput. Additionally, the framework's modular design and ability to dynamically scale agent instances based on demand ensure that the system can adapt to changing workloads and scale seamlessly as enterprise needs evolve.
"},{"location":"swarms/concept/purpose/why/#improved-reliability-and-accuracy","title":"Improved Reliability and Accuracy","text":"The collaborative nature of the Swarms Framework reduces the risk of hallucinations and factual inconsistencies that can arise from individual agents. By leveraging the collective knowledge and diverse perspectives of multiple agents, the framework can cross-reference and validate information, enhancing the overall reliability and accuracy of its outputs.
Additionally, the framework's ability to incorporate specialized fact-checking and verification agents further strengthens the trustworthiness of the system's outcomes, ensuring that critical decisions and actions are based on accurate and reliable information.
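A hedged sketch of the cross-referencing idea: collect candidate answers from several agents and keep the majority answer. The plain functions and lists below are illustrative stand-ins, not the framework's actual validation mechanism.

```python
from collections import Counter

def majority_answer(answers):
    # Return the most common answer and the fraction of agents backing it.
    counts = Counter(answers)
    answer, votes = counts.most_common(1)[0]
    return answer, votes / len(answers)

# Three simulated agent outputs; one is a hallucination.
best, support = majority_answer(['Paris', 'Paris', 'Lyon'])
print(best, round(support, 2))  # Paris 0.67
```

A low support fraction is itself a useful signal: it can trigger a dedicated fact-checking agent before the answer is accepted.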
"},{"location":"swarms/concept/purpose/why/#adaptability-and-continuous-improvement","title":"Adaptability and Continuous Improvement","text":"The modular architecture of the Swarms Framework allows for the seamless integration of new agents as they become available, enabling the continuous expansion and enhancement of the system's capabilities. As new AI models, algorithms, or data sources emerge, the framework can readily incorporate them, ensuring that enterprise operations remain at the forefront of technological advancements.
Furthermore, the framework's monitoring and analytics capabilities provide valuable insights into system performance, enabling the identification of areas for improvement and the optimization of agent selection, task assignments, and resource allocation strategies over time.
"},{"location":"swarms/concept/purpose/why/#cost-optimization","title":"Cost Optimization","text":"By intelligently orchestrating the collaboration of multiple agents, the Swarms Framework optimizes resource utilization and minimizes redundant computations. This efficient use of computational resources translates into cost savings, making the widespread adoption of AI-driven automation more financially viable for enterprises.
The framework's ability to dynamically scale agent instances based on demand further contributes to cost optimization, ensuring that resources are allocated only when needed and minimizing idle or underutilized instances.
"},{"location":"swarms/concept/purpose/why/#enhanced-security-and-compliance","title":"Enhanced Security and Compliance","text":"In enterprise environments, ensuring the security and compliance of automated systems is paramount. The Swarms Framework addresses these concerns by incorporating robust security measures and compliance controls.
The framework's centralized Memory Manager component enables the implementation of access control mechanisms and data encryption, protecting sensitive information from unauthorized access or breaches. Additionally, the framework's modular design allows for the integration of specialized agents focused on compliance monitoring and auditing, ensuring that enterprise operations adhere to relevant regulations and industry standards.
"},{"location":"swarms/concept/purpose/why/#real-world-applications-and-use-cases","title":"Real-World Applications and Use Cases","text":"The Swarms Framework finds applications across a wide range of enterprise domains, enabling organizations to automate complex operations and streamline their workflows. Here are some examples of real-world use cases:
In the realm of business process automation, the Swarms Framework can orchestrate agents to automate and optimize complex workflows spanning multiple domains and task types. By combining agents specialized in areas such as natural language processing, data extraction, decision-making, and task coordination, the framework can streamline and automate processes that traditionally required manual intervention or coordination across multiple systems.
"},{"location":"swarms/concept/purpose/why/#customer-service-and-support","title":"Customer Service and Support","text":"The framework's ability to integrate agents with diverse capabilities, such as natural language processing, knowledge retrieval, and decision-making, makes it well-suited for automating customer service and support operations. Agents can collaborate to understand customer inquiries, retrieve relevant information from knowledge bases, and provide accurate and personalized responses, improving customer satisfaction and reducing operational costs.
"},{"location":"swarms/concept/purpose/why/#fraud-detection-and-risk-management","title":"Fraud Detection and Risk Management","text":"In the financial and cybersecurity domains, the Swarms Framework can orchestrate agents specialized in data analysis, pattern recognition, and risk assessment to detect and mitigate fraudulent activities or security threats. By combining the collective intelligence of these agents, the framework can identify complex patterns and anomalies that may be difficult for individual agents to detect, enhancing the overall effectiveness of fraud detection and risk management strategies.
"},{"location":"swarms/concept/purpose/why/#supply-chain-optimization","title":"Supply Chain Optimization","text":"The complexity of modern supply chains often requires the coordination of multiple systems and stakeholders. The Swarms Framework can integrate agents specialized in areas such as demand forecasting, inventory management, logistics optimization, and supplier coordination to streamline and optimize supply chain operations. By orchestrating the collective capabilities of these agents, the framework can identify bottlenecks, optimize resource allocation, and facilitate seamless collaboration among supply chain partners.
"},{"location":"swarms/concept/purpose/why/#research-and-development","title":"Research and Development","text":"In research and development environments, the Swarms Framework can accelerate innovation by enabling the collaboration of agents specialized in areas such as literature review, data analysis, hypothesis generation, and experiment design. By orchestrating these agents, the framework can facilitate the exploration of new ideas, identify promising research directions, and streamline the iterative process of scientific inquiry.
"},{"location":"swarms/concept/purpose/why/#conclusion","title":"Conclusion","text":"The Swarms Framework represents a paradigm shift in the field of enterprise automation, addressing the limitations of individual agents by orchestrating their collective capabilities. By integrating agents from various frameworks and enabling multi-agent collaboration, the Swarms Framework overcomes challenges such as short-term memory constraints, hallucinations, single-task limitations, lack of collaboration, and cost inefficiencies.
Through its modular architecture, centralized coordination, and advanced monitoring and analytics capabilities, the Swarms Framework empowers enterprises to automate complex operations with increased efficiency, reliability, and adaptability. It unlocks the true potential of AI-driven automation, enabling organizations to stay ahead of the curve and thrive in an ever-evolving technological landscape.
As the field of artificial intelligence continues to advance, the Swarms Framework stands as a robust and flexible solution, ready to embrace new developments and seamlessly integrate emerging agents and capabilities. By harnessing the power of collective intelligence, the framework paves the way for a future where enterprises can leverage the full potential of AI to drive innovation, optimize operations, and gain a competitive edge in their respective industries.
"},{"location":"swarms/concept/purpose/why_swarms/","title":"Why Swarms?","text":"The need for multiple agents to work together in artificial intelligence (AI) and particularly in the context of Large Language Models (LLMs) stems from several inherent limitations and challenges in handling complex, dynamic, and multifaceted tasks with single-agent systems. Collaborating with multiple agents offers a pathway to enhance reliability, computational efficiency, cognitive diversity, and problem-solving capabilities. This section delves into the rationale behind employing multi-agent systems and strategizes on overcoming the associated expenses, such as API bills and hosting costs.
"},{"location":"swarms/concept/purpose/why_swarms/#why-multiple-agents-are-necessary","title":"Why Multiple Agents Are Necessary","text":""},{"location":"swarms/concept/purpose/why_swarms/#1-cognitive-diversity","title":"1. Cognitive Diversity","text":"Different agents can bring varied perspectives, knowledge bases, and problem-solving approaches to a task. This diversity is crucial in complex problem-solving scenarios where a single approach might not be sufficient. Cognitive diversity enhances creativity, leading to innovative solutions and the ability to tackle a broader range of problems.
"},{"location":"swarms/concept/purpose/why_swarms/#2-specialization-and-expertise","title":"2. Specialization and Expertise","text":"In many cases, tasks are too complex for a single agent to handle efficiently. By dividing the task among multiple specialized agents, each can focus on a segment where it excels, thereby increasing the overall efficiency and effectiveness of the solution. This approach leverages the expertise of individual agents to achieve superior performance in tasks that require multifaceted knowledge and skills.
"},{"location":"swarms/concept/purpose/why_swarms/#3-scalability-and-flexibility","title":"3. Scalability and Flexibility","text":"Multi-agent systems can more easily scale to handle large-scale or evolving tasks. Adding more agents to the system can increase its capacity or capabilities, allowing it to adapt to larger workloads or new types of tasks. This scalability is essential in dynamic environments where the demand and nature of tasks can change rapidly.
"},{"location":"swarms/concept/purpose/why_swarms/#4-robustness-and-redundancy","title":"4. Robustness and Redundancy","text":"Collaboration among multiple agents enhances the system's robustness by introducing redundancy. If one agent fails or encounters an error, others can compensate, ensuring the system remains operational. This redundancy is critical in mission-critical applications where failure is not an option.
"},{"location":"swarms/concept/purpose/why_swarms/#overcoming-expenses-with-api-bills-and-hosting","title":"Overcoming Expenses with API Bills and Hosting","text":"Deploying multiple agents, especially when relying on cloud-based services or APIs, can incur significant costs. Here are strategies to manage and reduce these expenses:
"},{"location":"swarms/concept/purpose/why_swarms/#1-optimize-agent-efficiency","title":"1. Optimize Agent Efficiency","text":"Before scaling up the number of agents, ensure each agent operates as efficiently as possible. This can involve refining algorithms, reducing unnecessary API calls, and optimizing data processing to minimize computational requirements and, consequently, the associated costs.
"},{"location":"swarms/concept/purpose/why_swarms/#2-use-open-source-and-self-hosted-solutions","title":"2. Use Open Source and Self-Hosted Solutions","text":"Where possible, leverage open-source models and technologies that can be self-hosted. While there is an initial investment in setting up the infrastructure, over time, self-hosting can significantly reduce costs related to API calls and reliance on third-party services.
"},{"location":"swarms/concept/purpose/why_swarms/#3-implement-intelligent-caching","title":"3. Implement Intelligent Caching","text":"Caching results for frequently asked questions or common tasks can drastically reduce the need for repeated computations or API calls. Intelligent caching systems can determine what information to store and for how long, optimizing the balance between fresh data and computational savings.
"},{"location":"swarms/concept/purpose/why_swarms/#4-dynamic-scaling-and-load-balancing","title":"4. Dynamic Scaling and Load Balancing","text":"Use cloud services that offer dynamic scaling and load balancing to adjust the resources allocated based on the current demand. This ensures you're not paying for idle resources during low-usage periods while still being able to handle high demand when necessary.
"},{"location":"swarms/concept/purpose/why_swarms/#5-collaborative-cost-sharing-models","title":"5. Collaborative Cost-Sharing Models","text":"In scenarios where multiple stakeholders benefit from the multi-agent system, consider implementing a cost-sharing model. This approach distributes the financial burden among the users or beneficiaries, making it more sustainable.
"},{"location":"swarms/concept/purpose/why_swarms/#6-monitor-and-analyze-costs","title":"6. Monitor and Analyze Costs","text":"Regularly monitor and analyze your usage and associated costs to identify potential savings. Many cloud providers offer tools to track and forecast expenses, helping you to adjust your usage patterns and configurations to minimize costs without sacrificing performance.
"},{"location":"swarms/concept/purpose/why_swarms/#conclusion","title":"Conclusion","text":"The collaboration of multiple agents in AI systems presents a robust solution to the complexity, specialization, scalability, and robustness challenges inherent in single-agent approaches. While the associated costs can be significant, strategic optimization, leveraging open-source technologies, intelligent caching, dynamic resource management, collaborative cost-sharing, and diligent monitoring can mitigate these expenses. By adopting these strategies, organizations can harness the power of multi-agent systems to tackle complex problems more effectively and efficiently, ensuring the sustainable deployment of these advanced technologies.
"},{"location":"swarms/config/board_config/","title":"Board of Directors Configuration","text":"The Board of Directors feature in Swarms provides a sophisticated configuration system that allows you to enable, customize, and manage the collective decision-making capabilities of the framework.
"},{"location":"swarms/config/board_config/#overview","title":"Overview","text":"The Board of Directors configuration system provides:
BoardConfig
Class","text":"The BoardConfig
class manages all configuration for the Board of Directors feature:
from swarms.config.board_config import BoardConfig\n\n# Create configuration with custom settings\nconfig = BoardConfig(\n config_file_path=\"board_config.json\",\n config_data={\n \"board_feature_enabled\": True,\n \"default_board_size\": 5,\n \"decision_threshold\": 0.7\n }\n)\n
"},{"location":"swarms/config/board_config/#configuration-sources","title":"Configuration Sources","text":"The configuration system loads settings from multiple sources in priority order:
You can configure the Board of Directors feature using environment variables:
# Enable the Board of Directors feature\nexport SWARMS_BOARD_FEATURE_ENABLED=true\n\n# Set default board size\nexport SWARMS_DEFAULT_BOARD_SIZE=5\n\n# Configure decision threshold\nexport SWARMS_DECISION_THRESHOLD=0.7\n\n# Enable voting mechanisms\nexport SWARMS_ENABLE_VOTING=true\n\n# Enable consensus building\nexport SWARMS_ENABLE_CONSENSUS=true\n\n# Set default board model\nexport SWARMS_DEFAULT_BOARD_MODEL=gpt-4o\n\n# Enable verbose logging\nexport SWARMS_VERBOSE_LOGGING=true\n\n# Set maximum board meeting duration\nexport SWARMS_MAX_BOARD_MEETING_DURATION=300\n\n# Enable auto fallback to Director mode\nexport SWARMS_AUTO_FALLBACK_TO_DIRECTOR=true\n
"},{"location":"swarms/config/board_config/#configuration-file","title":"Configuration File","text":"Create a JSON configuration file for persistent settings:
{\n \"board_feature_enabled\": true,\n \"default_board_size\": 5,\n \"decision_threshold\": 0.7,\n \"enable_voting\": true,\n \"enable_consensus\": true,\n \"default_board_model\": \"gpt-4o\",\n \"verbose_logging\": true,\n \"max_board_meeting_duration\": 300,\n \"auto_fallback_to_director\": true,\n \"custom_board_templates\": {\n \"financial\": {\n \"roles\": [\n {\"name\": \"CFO\", \"weight\": 1.5, \"expertise\": [\"finance\", \"risk_management\"]},\n {\"name\": \"Investment_Advisor\", \"weight\": 1.3, \"expertise\": [\"investments\", \"analysis\"]}\n ]\n }\n }\n}\n
"},{"location":"swarms/config/board_config/#configuration-functions","title":"Configuration Functions","text":""},{"location":"swarms/config/board_config/#feature-control","title":"Feature Control","text":"from swarms.config.board_config import (\n enable_board_feature,\n disable_board_feature,\n is_board_feature_enabled\n)\n\n# Check if feature is enabled\nif not is_board_feature_enabled():\n # Enable the feature\n enable_board_feature()\n print(\"Board of Directors feature enabled\")\n\n# Disable the feature\ndisable_board_feature()\n
"},{"location":"swarms/config/board_config/#board-composition","title":"Board Composition","text":"from swarms.config.board_config import (\n set_board_size,\n get_board_size\n)\n\n# Set default board size\nset_board_size(7)\n\n# Get current board size\ncurrent_size = get_board_size()\nprint(f\"Default board size: {current_size}\")\n
"},{"location":"swarms/config/board_config/#decision-settings","title":"Decision Settings","text":"from swarms.config.board_config import (\n set_decision_threshold,\n get_decision_threshold,\n enable_voting,\n disable_voting,\n enable_consensus,\n disable_consensus\n)\n\n# Set decision threshold (0.0 to 1.0)\nset_decision_threshold(0.75) # 75% majority required\n\n# Get current threshold\nthreshold = get_decision_threshold()\nprint(f\"Decision threshold: {threshold}\")\n\n# Enable/disable voting mechanisms\nenable_voting()\ndisable_voting()\n\n# Enable/disable consensus building\nenable_consensus()\ndisable_consensus()\n
"},{"location":"swarms/config/board_config/#model-configuration","title":"Model Configuration","text":"from swarms.config.board_config import (\n set_board_model,\n get_board_model\n)\n\n# Set default model for board members\nset_board_model(\"gpt-4o\")\n\n# Get current model\nmodel = get_board_model()\nprint(f\"Default board model: {model}\")\n
"},{"location":"swarms/config/board_config/#logging-configuration","title":"Logging Configuration","text":"from swarms.config.board_config import (\n enable_verbose_logging,\n disable_verbose_logging,\n is_verbose_logging_enabled\n)\n\n# Enable verbose logging\nenable_verbose_logging()\n\n# Check logging status\nif is_verbose_logging_enabled():\n print(\"Verbose logging is enabled\")\n\n# Disable verbose logging\ndisable_verbose_logging()\n
"},{"location":"swarms/config/board_config/#meeting-duration","title":"Meeting Duration","text":"from swarms.config.board_config import (\n set_max_board_meeting_duration,\n get_max_board_meeting_duration\n)\n\n# Set maximum meeting duration in seconds\nset_max_board_meeting_duration(600) # 10 minutes\n\n# Get current duration\nduration = get_max_board_meeting_duration()\nprint(f\"Max meeting duration: {duration} seconds\")\n
"},{"location":"swarms/config/board_config/#fallback-configuration","title":"Fallback Configuration","text":"from swarms.config.board_config import (\n enable_auto_fallback_to_director,\n disable_auto_fallback_to_director,\n is_auto_fallback_enabled\n)\n\n# Enable automatic fallback to Director mode\nenable_auto_fallback_to_director()\n\n# Check fallback status\nif is_auto_fallback_enabled():\n print(\"Auto fallback to Director mode is enabled\")\n\n# Disable fallback\ndisable_auto_fallback_to_director()\n
"},{"location":"swarms/config/board_config/#board-templates","title":"Board Templates","text":""},{"location":"swarms/config/board_config/#default-templates","title":"Default Templates","text":"The configuration system provides predefined board templates for common use cases:
from swarms.config.board_config import get_default_board_template\n\n# Get standard board template\nstandard_template = get_default_board_template(\"standard\")\nprint(\"Standard template roles:\", standard_template[\"roles\"])\n\n# Get executive board template\nexecutive_template = get_default_board_template(\"executive\")\nprint(\"Executive template roles:\", executive_template[\"roles\"])\n\n# Get advisory board template\nadvisory_template = get_default_board_template(\"advisory\")\nprint(\"Advisory template roles:\", advisory_template[\"roles\"])\n
"},{"location":"swarms/config/board_config/#template-structure","title":"Template Structure","text":"Each template defines the board composition:
# Standard template structure\nstandard_template = {\n \"roles\": [\n {\n \"name\": \"Chairman\",\n \"weight\": 1.5,\n \"expertise\": [\"leadership\", \"strategy\"]\n },\n {\n \"name\": \"Vice-Chairman\", \n \"weight\": 1.2,\n \"expertise\": [\"operations\", \"coordination\"]\n },\n {\n \"name\": \"Secretary\",\n \"weight\": 1.0,\n \"expertise\": [\"documentation\", \"communication\"]\n }\n ]\n}\n
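To make the weights concrete, here is a small weighted-majority sketch showing how voting weights and a decision threshold could interact. The arithmetic is illustrative only; the actual BoardOfDirectorsSwarm voting logic may differ.

```python
def passes(votes, threshold):
    # votes: list of (weight, approve) pairs; a proposal passes when the
    # approving share of total voting weight meets the threshold.
    total = sum(weight for weight, _ in votes)
    approving = sum(weight for weight, approve in votes if approve)
    return approving / total >= threshold

# Chairman (1.5) and Vice-Chairman (1.2) approve, Secretary (1.0) dissents.
board_votes = [(1.5, True), (1.2, True), (1.0, False)]
print(passes(board_votes, 0.7))  # 2.7 / 3.7 is about 0.73, so True
```

Under this reading, raising the role weights of domain experts shifts decisions toward them without silencing other members outright.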
"},{"location":"swarms/config/board_config/#custom-templates","title":"Custom Templates","text":"Create custom board templates for specific use cases:
from swarms.config.board_config import (\n add_custom_board_template,\n get_custom_board_template,\n list_custom_templates\n)\n\n# Define a custom financial analysis board\nfinancial_template = {\n \"roles\": [\n {\n \"name\": \"CFO\",\n \"weight\": 1.5,\n \"expertise\": [\"finance\", \"risk_management\", \"budgeting\"]\n },\n {\n \"name\": \"Investment_Advisor\",\n \"weight\": 1.3,\n \"expertise\": [\"investments\", \"market_analysis\", \"portfolio_management\"]\n },\n {\n \"name\": \"Compliance_Officer\",\n \"weight\": 1.2,\n \"expertise\": [\"compliance\", \"regulations\", \"legal\"]\n }\n ]\n}\n\n# Add custom template\nadd_custom_board_template(\"financial_analysis\", financial_template)\n\n# Get custom template\ntemplate = get_custom_board_template(\"financial_analysis\")\n\n# List all custom templates\ntemplates = list_custom_templates()\nprint(\"Available custom templates:\", templates)\n
"},{"location":"swarms/config/board_config/#configuration-validation","title":"Configuration Validation","text":"The configuration system includes comprehensive validation:
from swarms.config.board_config import validate_configuration\n\n# Validate current configuration\ntry:\n validation_result = validate_configuration()\n print(\"Configuration is valid:\", validation_result.is_valid)\n if not validation_result.is_valid:\n print(\"Validation errors:\", validation_result.errors)\nexcept Exception as e:\n print(f\"Configuration validation failed: {e}\")\n
"},{"location":"swarms/config/board_config/#configuration-persistence","title":"Configuration Persistence","text":""},{"location":"swarms/config/board_config/#save-configuration","title":"Save Configuration","text":"from swarms.config.board_config import save_configuration\n\n# Save current configuration to file\nsave_configuration(\"my_board_config.json\")\n
"},{"location":"swarms/config/board_config/#load-configuration","title":"Load Configuration","text":"from swarms.config.board_config import load_configuration\n\n# Load configuration from file\nconfig = load_configuration(\"my_board_config.json\")\n
"},{"location":"swarms/config/board_config/#reset-to-defaults","title":"Reset to Defaults","text":"from swarms.config.board_config import reset_to_defaults\n\n# Reset all configuration to default values\nreset_to_defaults()\n
"},{"location":"swarms/config/board_config/#integration-with-boardofdirectorsswarm","title":"Integration with BoardOfDirectorsSwarm","text":"The configuration system integrates seamlessly with the BoardOfDirectorsSwarm:
from swarms import Agent\nfrom swarms.structs.board_of_directors_swarm import (\n    BoardOfDirectorsSwarm,\n    BoardMember,\n    BoardMemberRole,\n)\nfrom swarms.config.board_config import (\n    enable_board_feature,\n    set_decision_threshold,\n    get_default_board_template\n)\n\n# Enable the feature globally\nenable_board_feature()\n\n# Set global decision threshold\nset_decision_threshold(0.7)\n\n# Get a board template\ntemplate = get_default_board_template(\"executive\")\n\n# Create board members from template\nboard_members = []\nfor role_config in template[\"roles\"]:\n    agent = Agent(\n        agent_name=role_config[\"name\"],\n        agent_description=f\"Board member with expertise in {', '.join(role_config['expertise'])}\",\n        model_name=\"gpt-4o-mini\"\n    )\n    board_member = BoardMember(\n        agent=agent,\n        role=BoardMemberRole.EXECUTIVE_DIRECTOR,\n        voting_weight=role_config[\"weight\"],\n        expertise_areas=role_config[\"expertise\"]\n    )\n    board_members.append(board_member)\n\n# Create the swarm with configured settings\n# worker_agents: your task-execution Agent instances, defined elsewhere\nboard_swarm = BoardOfDirectorsSwarm(\n    board_members=board_members,\n    agents=worker_agents,\n    decision_threshold=0.7, # Uses global setting\n    enable_voting=True,\n    enable_consensus=True\n)\n
"},{"location":"swarms/config/board_config/#best-practices","title":"Best Practices","text":"The configuration system includes comprehensive error handling:
from swarms.config.board_config import BoardConfig\n\ntry:\n config = BoardConfig(\n config_file_path=\"invalid_config.json\"\n )\nexcept Exception as e:\n print(f\"Configuration loading failed: {e}\")\n # Handle error appropriately\n
"},{"location":"swarms/config/board_config/#performance-considerations","title":"Performance Considerations","text":"For more information on using the Board of Directors feature, see the BoardOfDirectorsSwarm Documentation.
"},{"location":"swarms/examples/agent_output_types/","title":"Agent Output Types Examples with Vision Capabilities","text":"This example demonstrates how to use different output types when working with Swarms agents, including vision-enabled agents that can analyze images. Each output type formats the agent's response in a specific way, making it easier to integrate with different parts of your application.
"},{"location":"swarms/examples/agent_output_types/#prerequisites","title":"Prerequisites","text":"pip3 install -U swarms\n
"},{"location":"swarms/examples/agent_output_types/#environment-variables","title":"Environment Variables","text":"WORKSPACE_DIR=\"agent_workspace\"\nOPENAI_API_KEY=\"\" # Required for GPT-4V vision capabilities\nANTHROPIC_API_KEY=\"\" # Optional, for Claude models\n
"},{"location":"swarms/examples/agent_output_types/#examples","title":"Examples","text":""},{"location":"swarms/examples/agent_output_types/#vision-enabled-quality-control-agent","title":"Vision-Enabled Quality Control Agent","text":"from swarms.structs import Agent\nfrom swarms.prompts.logistics import (\n Quality_Control_Agent_Prompt,\n)\n\n# Image for analysis\nfactory_image = \"image.jpg\"\n\n\n# Quality control agent\nquality_control_agent = Agent(\n agent_name=\"Quality Control Agent\",\n agent_description=\"A quality control agent that analyzes images and provides a detailed report on the quality of the product in the image.\",\n model_name=\"gpt-4.1-mini\",\n system_prompt=Quality_Control_Agent_Prompt,\n multi_modal=True,\n max_loops=2,\n output_type=\"str-all-except-first\",\n)\n\n\nresponse = quality_control_agent.run(\n task=\"what is in the image?\",\n img=factory_image,\n)\n\nprint(response)\n
"},{"location":"swarms/examples/agent_output_types/#supported-image-formats","title":"Supported Image Formats","text":"The vision-enabled agents support various image formats including:
Format Description JPEG/JPG Standard image format with lossy compression PNG Lossless format supporting transparency GIF Animated format (only first frame used) WebP Modern format with both lossy and lossless compression"},{"location":"swarms/examples/agent_output_types/#best-practices-for-vision-tasks","title":"Best Practices for Vision Tasks","text":"Best Practice Description Image Quality Ensure images are clear and well-lit for optimal analysis Image Size Keep images under 20MB and in supported formats Task Specificity Provide clear, specific instructions for image analysis Model Selection Use vision-capable models (e.g., GPT-4V) for image tasks"},{"location":"swarms/examples/agent_structured_outputs/","title":"Agent Structured Outputs","text":"This example demonstrates how to use structured outputs with Swarms agents following OpenAI's function calling schema. By defining function schemas, you can specify exactly how agents should structure their responses, making it easier to parse and use the outputs in your applications.
"},{"location":"swarms/examples/agent_structured_outputs/#prerequisites","title":"Prerequisites","text":"pip3 install -U swarms\n
"},{"location":"swarms/examples/agent_structured_outputs/#environment-variables","title":"Environment Variables","text":"WORKSPACE_DIR=\"agent_workspace\"\nOPENAI_API_KEY=\"\"\nANTHROPIC_API_KEY=\"\"\n
"},{"location":"swarms/examples/agent_structured_outputs/#understanding-function-schemas","title":"Understanding Function Schemas","text":"Function schemas in Swarms follow OpenAI's function calling format. Each function schema is defined as a dictionary with the following structure:
{\n \"type\": \"function\",\n \"function\": {\n \"name\": \"function_name\",\n \"description\": \"A clear description of what the function does\",\n \"parameters\": {\n \"type\": \"object\",\n \"properties\": {\n # Define the parameters your function accepts\n },\n \"required\": [\"list\", \"of\", \"required\", \"parameters\"]\n }\n }\n}\n
"},{"location":"swarms/examples/agent_structured_outputs/#code-example","title":"Code Example","text":"Here's an example showing how to use multiple function schemas with a Swarms agent:
from swarms import Agent\nfrom swarms.prompts.finance_agent_sys_prompt import FINANCIAL_AGENT_SYS_PROMPT\n\n# Define multiple function schemas\ntools = [\n {\n \"type\": \"function\",\n \"function\": {\n \"name\": \"get_stock_price\",\n \"description\": \"Retrieve the current stock price and related information for a specified company.\",\n \"parameters\": {\n \"type\": \"object\",\n \"properties\": {\n \"ticker\": {\n \"type\": \"string\",\n \"description\": \"The stock ticker symbol of the company, e.g. AAPL for Apple Inc.\",\n },\n \"include_history\": {\n \"type\": \"boolean\",\n \"description\": \"Whether to include historical price data.\",\n },\n \"time\": {\n \"type\": \"string\",\n \"format\": \"date-time\",\n \"description\": \"Optional time for stock data, in ISO 8601 format.\",\n },\n },\n \"required\": [\"ticker\", \"include_history\"]\n },\n },\n },\n # Can pass in multiple function schemas as well\n {\n \"type\": \"function\",\n \"function\": {\n \"name\": \"analyze_company_financials\",\n \"description\": \"Analyze key financial metrics and ratios for a company.\",\n \"parameters\": {\n \"type\": \"object\",\n \"properties\": {\n \"ticker\": {\n \"type\": \"string\",\n \"description\": \"The stock ticker symbol of the company\",\n },\n \"metrics\": {\n \"type\": \"array\",\n \"items\": {\n \"type\": \"string\",\n \"enum\": [\"PE_ratio\", \"market_cap\", \"revenue\", \"profit_margin\"]\n },\n \"description\": \"List of financial metrics to analyze\"\n },\n \"timeframe\": {\n \"type\": \"string\",\n \"enum\": [\"quarterly\", \"annual\", \"ttm\"],\n \"description\": \"Timeframe for the analysis\"\n }\n },\n \"required\": [\"ticker\", \"metrics\"]\n }\n }\n }\n]\n\n# Initialize the agent with multiple function schemas\nagent = Agent(\n agent_name=\"Financial-Analysis-Agent\",\n agent_description=\"Personal finance advisor agent that can fetch stock prices and analyze financials\",\n system_prompt=FINANCIAL_AGENT_SYS_PROMPT,\n max_loops=1,\n 
tools_list_dictionary=tools, # Pass in the list of function schemas\n output_type=\"final\"\n)\n\n# Example usage with stock price query\nstock_response = agent.run(\n \"What is the current stock price for Apple Inc. (AAPL)? Include historical data.\"\n)\nprint(\"Stock Price Response:\", stock_response)\n\n# Example usage with financial analysis query\nanalysis_response = agent.run(\n \"Analyze Apple's PE ratio and market cap using quarterly data.\"\n)\nprint(\"Financial Analysis Response:\", analysis_response)\n
"},{"location":"swarms/examples/agent_structured_outputs/#schema-types-and-properties","title":"Schema Types and Properties","text":"The function schema supports various parameter types and properties:
Schema Type Description Basic Types string, number, integer, boolean, array, object Format Specifications date-time, date, email, etc. Enums Restrict values to a predefined set Required vs. Optional Parameters Specify which parameters must be provided Nested Objects and Arrays Support for complex data structures Example of a more complex schema:
{\n \"type\": \"function\",\n \"function\": {\n \"name\": \"generate_investment_report\",\n \"description\": \"Generate a comprehensive investment report\",\n \"parameters\": {\n \"type\": \"object\",\n \"properties\": {\n \"portfolio\": {\n \"type\": \"object\",\n \"properties\": {\n \"stocks\": {\n \"type\": \"array\",\n \"items\": {\n \"type\": \"object\",\n \"properties\": {\n \"ticker\": {\"type\": \"string\"},\n \"shares\": {\"type\": \"number\"},\n \"entry_price\": {\"type\": \"number\"}\n }\n }\n },\n \"risk_tolerance\": {\n \"type\": \"string\",\n \"enum\": [\"low\", \"medium\", \"high\"]\n },\n \"time_horizon\": {\n \"type\": \"integer\",\n \"minimum\": 1,\n \"maximum\": 30,\n \"description\": \"Investment time horizon in years\"\n }\n },\n \"required\": [\"stocks\", \"risk_tolerance\"]\n },\n \"report_type\": {\n \"type\": \"string\",\n \"enum\": [\"summary\", \"detailed\", \"risk_analysis\"]\n }\n },\n \"required\": [\"portfolio\"]\n }\n }\n}\n
This example shows how to structure complex nested objects, arrays, and various parameter types while following OpenAI's function calling schema.
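Before handing a schema like this to an agent, it can help to sanity-check tool arguments against it. The helper below is a minimal, framework-independent sketch (not part of the Swarms API) that enforces only the `required` and `enum` rules from a function schema's `parameters` block; a full validator such as the `jsonschema` package covers far more:

```python
from typing import Any, Dict

def check_args(parameters: Dict[str, Any], args: Dict[str, Any]) -> None:
    """Minimal check of tool arguments against a JSON-schema-style block.

    Only validates top-level required keys and enum membership.
    """
    for key in parameters.get("required", []):
        if key not in args:
            raise ValueError(f"missing required parameter: {key}")
    for key, spec in parameters.get("properties", {}).items():
        if key in args and "enum" in spec and args[key] not in spec["enum"]:
            raise ValueError(f"{key} must be one of {spec['enum']}")

# Top-level parameters from the report schema above, abbreviated
params = {
    "type": "object",
    "properties": {
        "portfolio": {"type": "object"},
        "report_type": {"type": "string", "enum": ["summary", "detailed", "risk_analysis"]},
    },
    "required": ["portfolio"],
}
check_args(params, {"portfolio": {}, "report_type": "summary"})  # passes silently
```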
"},{"location":"swarms/examples/agent_with_tools/","title":"Basic Agent Example","text":"This tutorial demonstrates how to create and use tools (callables) with the Swarms framework. Tools are Python functions that your agent can call to perform specific tasks, interact with external services, or process data. We'll show you how to build well-structured tools and integrate them with your agent.
"},{"location":"swarms/examples/agent_with_tools/#prerequisites","title":"Prerequisites","text":"Python 3.7+
OpenAI API key
Swarms library
Tools are functions that your agent can use to interact with external services, process data, or perform specific tasks. Here's a guide on how to build effective tools for your agent:
"},{"location":"swarms/examples/agent_with_tools/#tool-structure-best-practices","title":"Tool Structure Best Practices","text":"Type Hints: Always use type hints to specify input and output types
Docstrings: Include comprehensive docstrings with description, args, returns, and examples
Error Handling: Implement proper error handling and return consistent JSON responses
Rate Limiting: Include rate limiting when dealing with APIs
Input Validation: Validate input parameters before processing
Here's a template for creating a well-structured tool:
from typing import Optional, Dict, Any\nimport json\n\ndef example_tool(param1: str, param2: Optional[int] = None) -> str:\n \"\"\"\n Brief description of what the tool does.\n\n Args:\n param1 (str): Description of first parameter\n param2 (Optional[int]): Description of optional parameter\n\n Returns:\n str: JSON formatted string containing the result\n\n Raises:\n ValueError: Description of when this error occurs\n RequestException: Description of when this error occurs\n\n Example:\n >>> result = example_tool(\"test\", 123)\n >>> print(result)\n {\"status\": \"success\", \"data\": {\"key\": \"value\"}}\n \"\"\"\n try:\n # Input validation\n if not isinstance(param1, str):\n raise ValueError(\"param1 must be a string\")\n\n # Main logic\n result: Dict[str, Any] = {\n \"status\": \"success\",\n \"data\": {\n \"param1\": param1,\n \"param2\": param2\n }\n }\n\n # Return JSON string\n return json.dumps(result, indent=2)\n\n except ValueError as e:\n return json.dumps({\"error\": f\"Validation error: {str(e)}\"})\n except Exception as e:\n return json.dumps({\"error\": f\"Unexpected error: {str(e)}\"})\n
"},{"location":"swarms/examples/agent_with_tools/#building-api-integration-tools","title":"Building API Integration Tools","text":"When building tools that interact with external APIs:
import json\nfrom typing import Any, Dict\n\nimport requests\n\ndef get_api_data(endpoint: str, params: Dict[str, Any]) -> str:\n \"\"\"\n Generic API data fetcher with proper error handling.\n\n Args:\n endpoint (str): API endpoint to call\n params (Dict[str, Any]): Query parameters\n\n Returns:\n str: JSON formatted response\n \"\"\"\n try:\n response = requests.get(\n endpoint,\n params=params,\n timeout=10\n )\n response.raise_for_status()\n return json.dumps(response.json(), indent=2)\n except requests.RequestException as e:\n return json.dumps({\"error\": f\"API error: {str(e)}\"})\n
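The agent configuration shown later on this page also lists a `rate_limited_api_call` tool, whose definition is not reproduced here. The version below is a hedged sketch of the rate-limiting practice described above, using a simple minimum-interval guard (the name, the one-second interval, and the offline return value are illustrative; a real tool would issue the `requests.get` call where noted):

```python
import json
import time
from typing import Any, Dict

_LAST_CALL = {"t": 0.0}
_MIN_INTERVAL = 1.0  # seconds between calls; illustrative value

def rate_limited_api_call(endpoint: str, params: Dict[str, Any]) -> str:
    """Sketch of an API tool that waits so calls are at least _MIN_INTERVAL apart."""
    wait = _MIN_INTERVAL - (time.monotonic() - _LAST_CALL["t"])
    if wait > 0:
        time.sleep(wait)
    _LAST_CALL["t"] = time.monotonic()
    # A real tool would call requests.get(endpoint, params=params, timeout=10) here;
    # returning the inputs keeps the wrapper testable offline.
    return json.dumps({"endpoint": endpoint, "params": params})
```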
"},{"location":"swarms/examples/agent_with_tools/#data-processing-tools","title":"Data Processing Tools","text":"Example of a tool that processes data:
import json\nfrom typing import List, Dict\n\nimport pandas as pd\n\ndef process_market_data(prices: List[float], window: int = 14) -> str:\n \"\"\"\n Calculate technical indicators from price data.\n\n Args:\n prices (List[float]): List of historical prices\n window (int): Rolling window size for calculations\n\n Returns:\n str: JSON formatted string with calculated indicators\n\n Example:\n >>> prices = [100, 101, 99, 102, 98, 103]\n >>> result = process_market_data(prices, window=3)\n >>> print(result)\n {\"sma\": 101.0, \"volatility\": 2.65}\n \"\"\"\n try:\n df = pd.DataFrame({\"price\": prices})\n\n results: Dict[str, float] = {\n \"sma\": df[\"price\"].rolling(window).mean().iloc[-1],\n \"volatility\": df[\"price\"].rolling(window).std().iloc[-1]\n }\n\n return json.dumps(results, indent=2)\n\n except Exception as e:\n return json.dumps({\"error\": f\"Processing error: {str(e)}\"})\n
"},{"location":"swarms/examples/agent_with_tools/#adding-tools-to-your-agent","title":"Adding Tools to Your Agent","text":"Once you've created your tools, add them to your agent like this:
agent = Agent(\n agent_name=\"Your-Agent\",\n agent_description=\"Description of your agent\",\n system_prompt=\"System prompt for your agent\",\n tools=[\n example_tool,\n get_api_data,\n rate_limited_api_call,\n process_market_data\n ]\n)\n
"},{"location":"swarms/examples/agent_with_tools/#tutorial-steps","title":"Tutorial Steps","text":"pip3 install -U swarms\n
.env file: OPENAI_API_KEY=\"your-api-key-here\"\nWORKSPACE_DIR=\"agent_workspace\"\n
agent_name: A unique identifier for your agent
agent_description: A detailed description of your agent's capabilities
system_prompt: The core instructions that define your agent's behavior
model_name: The GPT model to use
Additional configuration options for temperature and output format
Run the example code below:
import json\nimport requests\nfrom swarms import Agent\nfrom typing import List\nimport time\n\n\ndef get_coin_price(coin_id: str, vs_currency: str = \"usd\") -> str:\n \"\"\"\n Get the current price of a specific cryptocurrency.\n\n Args:\n coin_id (str): The CoinGecko ID of the cryptocurrency (e.g., 'bitcoin', 'ethereum')\n vs_currency (str, optional): The target currency. Defaults to \"usd\".\n\n Returns:\n str: JSON formatted string containing the coin's current price and market data\n\n Raises:\n requests.RequestException: If the API request fails\n\n Example:\n >>> result = get_coin_price(\"bitcoin\")\n >>> print(result)\n {\"bitcoin\": {\"usd\": 45000, \"usd_market_cap\": 850000000000, ...}}\n \"\"\"\n try:\n url = \"https://api.coingecko.com/api/v3/simple/price\"\n params = {\n \"ids\": coin_id,\n \"vs_currencies\": vs_currency,\n \"include_market_cap\": True,\n \"include_24hr_vol\": True,\n \"include_24hr_change\": True,\n \"include_last_updated_at\": True,\n }\n\n response = requests.get(url, params=params, timeout=10)\n response.raise_for_status()\n\n data = response.json()\n return json.dumps(data, indent=2)\n\n except requests.RequestException as e:\n return json.dumps(\n {\n \"error\": f\"Failed to fetch price for {coin_id}: {str(e)}\"\n }\n )\n except Exception as e:\n return json.dumps({\"error\": f\"Unexpected error: {str(e)}\"})\n\n\ndef get_top_cryptocurrencies(limit: int = 10, vs_currency: str = \"usd\") -> str:\n \"\"\"\n Fetch the top cryptocurrencies by market capitalization.\n\n Args:\n limit (int, optional): Number of coins to retrieve (1-250). Defaults to 10.\n vs_currency (str, optional): The target currency. 
Defaults to \"usd\".\n\n Returns:\n str: JSON formatted string containing top cryptocurrencies with detailed market data\n\n Raises:\n requests.RequestException: If the API request fails\n ValueError: If limit is not between 1 and 250\n\n Example:\n >>> result = get_top_cryptocurrencies(5)\n >>> print(result)\n [{\"id\": \"bitcoin\", \"name\": \"Bitcoin\", \"current_price\": 45000, ...}]\n \"\"\"\n try:\n if not 1 <= limit <= 250:\n raise ValueError(\"Limit must be between 1 and 250\")\n\n url = \"https://api.coingecko.com/api/v3/coins/markets\"\n params = {\n \"vs_currency\": vs_currency,\n \"order\": \"market_cap_desc\",\n \"per_page\": limit,\n \"page\": 1,\n \"sparkline\": False,\n \"price_change_percentage\": \"24h,7d\",\n }\n\n response = requests.get(url, params=params, timeout=10)\n response.raise_for_status()\n\n data = response.json()\n\n # Simplify the data structure for better readability\n simplified_data = []\n for coin in data:\n simplified_data.append(\n {\n \"id\": coin.get(\"id\"),\n \"symbol\": coin.get(\"symbol\"),\n \"name\": coin.get(\"name\"),\n \"current_price\": coin.get(\"current_price\"),\n \"market_cap\": coin.get(\"market_cap\"),\n \"market_cap_rank\": coin.get(\"market_cap_rank\"),\n \"total_volume\": coin.get(\"total_volume\"),\n \"price_change_24h\": coin.get(\n \"price_change_percentage_24h\"\n ),\n \"price_change_7d\": coin.get(\n \"price_change_percentage_7d_in_currency\"\n ),\n \"last_updated\": coin.get(\"last_updated\"),\n }\n )\n\n return json.dumps(simplified_data, indent=2)\n\n except (requests.RequestException, ValueError) as e:\n return json.dumps(\n {\n \"error\": f\"Failed to fetch top cryptocurrencies: {str(e)}\"\n }\n )\n except Exception as e:\n return json.dumps({\"error\": f\"Unexpected error: {str(e)}\"})\n\n\ndef search_cryptocurrencies(query: str) -> str:\n \"\"\"\n Search for cryptocurrencies by name or symbol.\n\n Args:\n query (str): The search term (coin name or symbol)\n\n Returns:\n str: JSON formatted 
string containing search results with coin details\n\n Raises:\n requests.RequestException: If the API request fails\n\n Example:\n >>> result = search_cryptocurrencies(\"ethereum\")\n >>> print(result)\n {\"coins\": [{\"id\": \"ethereum\", \"name\": \"Ethereum\", \"symbol\": \"eth\", ...}]}\n \"\"\"\n try:\n url = \"https://api.coingecko.com/api/v3/search\"\n params = {\"query\": query}\n\n response = requests.get(url, params=params, timeout=10)\n response.raise_for_status()\n\n data = response.json()\n\n # Extract and format the results\n result = {\n \"coins\": data.get(\"coins\", [])[\n :10\n ], # Limit to top 10 results\n \"query\": query,\n \"total_results\": len(data.get(\"coins\", [])),\n }\n\n return json.dumps(result, indent=2)\n\n except requests.RequestException as e:\n return json.dumps(\n {\"error\": f'Failed to search for \"{query}\": {str(e)}'}\n )\n except Exception as e:\n return json.dumps({\"error\": f\"Unexpected error: {str(e)}\"})\n\n\ndef get_jupiter_quote(\n input_mint: str,\n output_mint: str,\n amount: float,\n slippage: float = 0.5,\n) -> str:\n \"\"\"\n Get a quote for token swaps using Jupiter Protocol on Solana.\n\n Args:\n input_mint (str): Input token mint address\n output_mint (str): Output token mint address\n amount (float): Amount of input tokens to swap\n slippage (float, optional): Slippage tolerance percentage. 
Defaults to 0.5.\n\n Returns:\n str: JSON formatted string containing the swap quote details\n\n Example:\n >>> result = get_jupiter_quote(\"SOL_MINT_ADDRESS\", \"USDC_MINT_ADDRESS\", 1.0)\n >>> print(result)\n {\"inputAmount\": \"1000000000\", \"outputAmount\": \"22.5\", \"route\": [...]}\n \"\"\"\n try:\n url = \"https://lite-api.jup.ag/swap/v1/quote\"\n params = {\n \"inputMint\": input_mint,\n \"outputMint\": output_mint,\n \"amount\": str(int(amount * 1e9)), # Convert to lamports\n \"slippageBps\": int(slippage * 100),\n }\n\n response = requests.get(url, params=params, timeout=10)\n response.raise_for_status()\n return json.dumps(response.json(), indent=2)\n\n except requests.RequestException as e:\n return json.dumps(\n {\"error\": f\"Failed to get Jupiter quote: {str(e)}\"}\n )\n except Exception as e:\n return json.dumps({\"error\": f\"Unexpected error: {str(e)}\"})\n\n\ndef get_htx_market_data(symbol: str) -> str:\n \"\"\"\n Get market data for a trading pair from HTX exchange.\n\n Args:\n symbol (str): Trading pair symbol (e.g., 'btcusdt', 'ethusdt')\n\n Returns:\n str: JSON formatted string containing market data\n\n Example:\n >>> result = get_htx_market_data(\"btcusdt\")\n >>> print(result)\n {\"symbol\": \"btcusdt\", \"price\": \"45000\", \"volume\": \"1000000\", ...}\n \"\"\"\n try:\n url = \"https://api.htx.com/market/detail/merged\"\n params = {\"symbol\": symbol.lower()}\n\n response = requests.get(url, params=params, timeout=10)\n response.raise_for_status()\n return json.dumps(response.json(), indent=2)\n\n except requests.RequestException as e:\n return json.dumps(\n {\"error\": f\"Failed to fetch HTX market data: {str(e)}\"}\n )\n except Exception as e:\n return json.dumps({\"error\": f\"Unexpected error: {str(e)}\"})\n\n\ndef get_token_historical_data(\n token_id: str, days: int = 30, vs_currency: str = \"usd\"\n) -> str:\n \"\"\"\n Get historical price and market data for a cryptocurrency.\n\n Args:\n token_id (str): The CoinGecko ID of the 
cryptocurrency\n days (int, optional): Number of days of historical data. Defaults to 30.\n vs_currency (str, optional): The target currency. Defaults to \"usd\".\n\n Returns:\n str: JSON formatted string containing historical price and market data\n\n Example:\n >>> result = get_token_historical_data(\"bitcoin\", 7)\n >>> print(result)\n {\"prices\": [[timestamp, price], ...], \"market_caps\": [...], \"volumes\": [...]}\n \"\"\"\n try:\n url = f\"https://api.coingecko.com/api/v3/coins/{token_id}/market_chart\"\n params = {\n \"vs_currency\": vs_currency,\n \"days\": days,\n \"interval\": \"daily\",\n }\n\n response = requests.get(url, params=params, timeout=10)\n response.raise_for_status()\n return json.dumps(response.json(), indent=2)\n\n except requests.RequestException as e:\n return json.dumps(\n {\"error\": f\"Failed to fetch historical data: {str(e)}\"}\n )\n except Exception as e:\n return json.dumps({\"error\": f\"Unexpected error: {str(e)}\"})\n\n\ndef get_defi_stats() -> str:\n \"\"\"\n Get global DeFi statistics including TVL, trading volumes, and dominance.\n\n Returns:\n str: JSON formatted string containing global DeFi statistics\n\n Example:\n >>> result = get_defi_stats()\n >>> print(result)\n {\"total_value_locked\": 50000000000, \"defi_dominance\": 15.5, ...}\n \"\"\"\n try:\n url = \"https://api.coingecko.com/api/v3/global/decentralized_finance_defi\"\n response = requests.get(url, timeout=10)\n response.raise_for_status()\n return json.dumps(response.json(), indent=2)\n\n except requests.RequestException as e:\n return json.dumps(\n {\"error\": f\"Failed to fetch DeFi stats: {str(e)}\"}\n )\n except Exception as e:\n return json.dumps({\"error\": f\"Unexpected error: {str(e)}\"})\n\n\ndef get_jupiter_tokens() -> str:\n \"\"\"\n Get list of tokens supported by Jupiter Protocol on Solana.\n\n Returns:\n str: JSON formatted string containing supported tokens\n\n Example:\n >>> result = get_jupiter_tokens()\n >>> print(result)\n {\"tokens\": 
[{\"symbol\": \"SOL\", \"mint\": \"...\", \"decimals\": 9}, ...]}\n \"\"\"\n try:\n url = \"https://lite-api.jup.ag/tokens/v1/mints/tradable\"\n response = requests.get(url, timeout=10)\n response.raise_for_status()\n return json.dumps(response.json(), indent=2)\n\n except requests.RequestException as e:\n return json.dumps(\n {\"error\": f\"Failed to fetch Jupiter tokens: {str(e)}\"}\n )\n except Exception as e:\n return json.dumps({\"error\": f\"Unexpected error: {str(e)}\"})\n\n\ndef get_htx_trading_pairs() -> str:\n \"\"\"\n Get list of all trading pairs available on HTX exchange.\n\n Returns:\n str: JSON formatted string containing trading pairs information\n\n Example:\n >>> result = get_htx_trading_pairs()\n >>> print(result)\n {\"symbols\": [{\"symbol\": \"btcusdt\", \"state\": \"online\", \"type\": \"spot\"}, ...]}\n \"\"\"\n try:\n url = \"https://api.htx.com/v1/common/symbols\"\n response = requests.get(url, timeout=10)\n response.raise_for_status()\n return json.dumps(response.json(), indent=2)\n\n except requests.RequestException as e:\n return json.dumps(\n {\"error\": f\"Failed to fetch HTX trading pairs: {str(e)}\"}\n )\n except Exception as e:\n return json.dumps({\"error\": f\"Unexpected error: {str(e)}\"})\n\n\ndef get_market_sentiment(coin_ids: List[str]) -> str:\n \"\"\"\n Get market sentiment data including social metrics and developer activity.\n\n Args:\n coin_ids (List[str]): List of CoinGecko coin IDs\n\n Returns:\n str: JSON formatted string containing market sentiment data\n\n Example:\n >>> result = get_market_sentiment([\"bitcoin\", \"ethereum\"])\n >>> print(result)\n {\"bitcoin\": {\"sentiment_score\": 75, \"social_volume\": 15000, ...}, ...}\n \"\"\"\n try:\n sentiment_data = {}\n for coin_id in coin_ids:\n url = f\"https://api.coingecko.com/api/v3/coins/{coin_id}\"\n params = {\n \"localization\": False,\n \"tickers\": False,\n \"market_data\": False,\n \"community_data\": True,\n \"developer_data\": True,\n }\n\n response = 
requests.get(url, params=params, timeout=10)\n response.raise_for_status()\n data = response.json()\n\n sentiment_data[coin_id] = {\n \"community_score\": data.get(\"community_score\"),\n \"developer_score\": data.get(\"developer_score\"),\n \"public_interest_score\": data.get(\n \"public_interest_score\"\n ),\n \"community_data\": data.get(\"community_data\"),\n \"developer_data\": data.get(\"developer_data\"),\n }\n\n # Rate limiting to avoid API restrictions\n time.sleep(0.6)\n\n return json.dumps(sentiment_data, indent=2)\n\n except requests.RequestException as e:\n return json.dumps(\n {\"error\": f\"Failed to fetch market sentiment: {str(e)}\"}\n )\n except Exception as e:\n return json.dumps({\"error\": f\"Unexpected error: {str(e)}\"})\n\n\n# Initialize the agent with expanded tools\nagent = Agent(\n agent_name=\"Financial-Analysis-Agent\",\n agent_description=\"Advanced financial advisor agent with comprehensive cryptocurrency market analysis capabilities across multiple platforms including Jupiter Protocol and HTX\",\n system_prompt=\"You are an advanced financial advisor agent with access to real-time cryptocurrency data from multiple sources including CoinGecko, Jupiter Protocol, and HTX. You can help users analyze market trends, check prices, find trading opportunities, perform swaps, and get detailed market insights. 
Always provide accurate, up-to-date information and explain market data in an easy-to-understand way.\",\n max_loops=1,\n max_tokens=4096,\n model_name=\"gpt-4o-mini\",\n dynamic_temperature_enabled=True,\n output_type=\"all\",\n tools=[\n get_coin_price,\n get_top_cryptocurrencies,\n search_cryptocurrencies,\n get_jupiter_quote,\n get_htx_market_data,\n get_token_historical_data,\n get_defi_stats,\n get_jupiter_tokens,\n get_htx_trading_pairs,\n get_market_sentiment,\n ],\n # Upload your tools to the tools parameter here!\n)\n\n# agent.run(\"Use defi stats to find the best defi project to invest in\")\nagent.run(\"Get the market sentiment for bitcoin\")\n# Automatically executes any number and combination of tools you have uploaded to the tools parameter!\n
"},{"location":"swarms/examples/agents_as_tools/","title":"Agents as Tools Tutorial","text":"This tutorial demonstrates how to create a powerful multi-agent system where agents can delegate tasks to specialized sub-agents. This pattern is particularly useful for complex tasks that require different types of expertise or capabilities.
"},{"location":"swarms/examples/agents_as_tools/#overview","title":"Overview","text":"The Agents as Tools pattern allows you to:
Create specialized agents with specific capabilities
Have agents delegate tasks to other agents
Chain multiple agents together for complex workflows
Maintain separation of concerns between different agent roles
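Stripped of the framework, the pattern is just a coordinator that routes work to specialist callables. A minimal plain-Python sketch (all names here are illustrative, and where this sketch routes by keyword, a real director agent lets the LLM choose the specialist via tool calls):

```python
from typing import Callable, Dict

# Hypothetical specialists: each stands in for a specialized Agent's .run method.
def research_specialist(task: str) -> str:
    return f"[research] findings for: {task}"

def coding_specialist(task: str) -> str:
    return f"[code] draft for: {task}"

def director(task: str, specialists: Dict[str, Callable[[str], str]]) -> str:
    # Naive keyword routing; in Swarms the director agent decides dynamically.
    name = "coding" if "code" in task.lower() else "research"
    return specialists[name](task)

specialists = {"research": research_specialist, "coding": coding_specialist}
print(director("Write code for a moving-average backtest", specialists))
```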
Python 3.8 or higher
Basic understanding of Python programming
Familiarity with async/await concepts (optional)
Install the swarms package using pip:
pip install -U swarms\n
"},{"location":"swarms/examples/agents_as_tools/#basic-setup","title":"Basic Setup","text":"WORKSPACE_DIR=\"agent_workspace\"\nANTHROPIC_API_KEY=\"\"\n
"},{"location":"swarms/examples/agents_as_tools/#step-by-step-guide","title":"Step-by-Step Guide","text":"Define Your Tools
Create functions that will serve as tools for your agents
Add proper type hints and detailed docstrings
Include error handling and logging
Example:
def my_tool(param: str) -> str:\n \"\"\"Detailed description of what the tool does.\n\n Args:\n param: Description of the parameter\n\n Returns:\n Description of the return value\n \"\"\"\n # Tool implementation\n return result\n
Create Specialized Agents
Define agents with specific roles and capabilities
Configure each agent with appropriate settings
Assign relevant tools to each agent
specialized_agent = Agent(\n agent_name=\"Specialist\",\n agent_description=\"Expert in specific domain\",\n system_prompt=\"Detailed instructions for the agent\",\n tools=[tool1, tool2]\n)\n
Set Up the Director Agent
Create a high-level agent that coordinates other agents
Give it access to specialized agents as tools
Define clear delegation rules
director = Agent(\n agent_name=\"Director\",\n agent_description=\"Coordinates other agents\",\n tools=[specialized_agent.run]\n)\n
Execute Multi-Agent Workflows
Start with the director agent
Let it delegate tasks as needed
Handle responses and chain results
result = director.run(\"Your high-level task description\")\n
"},{"location":"swarms/examples/agents_as_tools/#code","title":"Code","text":"import json\nimport requests\nfrom swarms import Agent\n\ndef create_python_file(code: str, filename: str) -> str:\n \"\"\"Create a Python file with the given code and execute it using Python 3.12.\n\n This function takes a string containing Python code, writes it to a file, and executes it\n using Python 3.12 via subprocess. The file will be created in the current working directory.\n If a file with the same name already exists, it will be overwritten.\n\n Args:\n code (str): The Python code to write to the file. This should be valid Python 3.12 code.\n filename (str): The name of the file to create and execute.\n\n Returns:\n str: A detailed message indicating the file was created and the execution result.\n\n Raises:\n IOError: If there are any issues writing to the file.\n subprocess.SubprocessError: If there are any issues executing the file.\n\n Example:\n >>> code = \"print('Hello, World!')\"\n >>> result = create_python_file(code, \"test.py\")\n >>> print(result)\n 'Python file created successfully. 
Execution result: Hello, World!'\n \"\"\"\n import subprocess\n import os\n import datetime\n import time\n\n # Get current timestamp for logging\n timestamp = datetime.datetime.now().strftime(\"%Y-%m-%d %H:%M:%S\")\n\n # Write the code to file\n with open(filename, \"w\") as f:\n f.write(code)\n\n # Get file size and permissions\n file_stats = os.stat(filename)\n file_size = file_stats.st_size\n file_permissions = oct(file_stats.st_mode)[-3:]\n\n # Execute the file using Python 3.12 and capture output\n try:\n start_time = time.perf_counter()\n result = subprocess.run(\n [\"python3.12\", filename],\n capture_output=True,\n text=True,\n check=True\n )\n execution_time = time.perf_counter() - start_time\n\n # Create detailed response\n response = f\"\"\"\nFile Creation Details:\n----------------------\nTimestamp: {timestamp}\nFilename: {filename}\nFile Size: {file_size} bytes\nFile Permissions: {file_permissions}\nLocation: {os.path.abspath(filename)}\n\nExecution Details:\n-----------------\nExit Code: {result.returncode}\nExecution Time: {execution_time:.2f} seconds\n\nOutput:\n-------\n{result.stdout}\n\nError Output (if any):\n--------------------\n{result.stderr}\n\"\"\"\n return response\n except subprocess.CalledProcessError as e:\n error_response = f\"\"\"\nFile Creation Details:\n----------------------\nTimestamp: {timestamp}\nFilename: {filename}\nFile Size: {file_size} bytes\nFile Permissions: {file_permissions}\nLocation: {os.path.abspath(filename)}\n\nExecution Error:\n---------------\nExit Code: {e.returncode}\nError Message: {e.stderr}\n\nCommand Output:\n-------------\n{e.stdout}\n\"\"\"\n return error_response\n\n\ndef update_python_file(code: str, filename: str) -> str:\n \"\"\"Update an existing Python file with new code and execute it using Python 3.12.\n\n This function takes a string containing Python code and updates an existing Python file.\n If the file doesn't exist, it will be created. The file will be executed using Python 3.12.\n\n Args:\n code (str): The Python code to write to the file. 
This should be valid Python 3.12 code.\n filename (str): The name of the file to update and execute.\n\n Returns:\n str: A detailed message indicating the file was updated and the execution result.\n\n Raises:\n IOError: If there are any issues writing to the file.\n subprocess.SubprocessError: If there are any issues executing the file.\n\n Example:\n >>> code = \"print('Updated code!')\"\n >>> result = update_python_file(code, \"my_script.py\")\n >>> print(result)\n 'Python file updated successfully. Execution result: Updated code!'\n \"\"\"\n import subprocess\n import os\n import datetime\n import time\n\n # Get current timestamp for logging\n timestamp = datetime.datetime.now().strftime(\"%Y-%m-%d %H:%M:%S\")\n\n # Check if file exists and get its stats\n file_exists = os.path.exists(filename)\n if file_exists:\n old_stats = os.stat(filename)\n old_size = old_stats.st_size\n old_permissions = oct(old_stats.st_mode)[-3:]\n\n # Write the code to file\n with open(filename, \"w\") as f:\n f.write(code)\n\n # Get new file stats\n new_stats = os.stat(filename)\n new_size = new_stats.st_size\n new_permissions = oct(new_stats.st_mode)[-3:]\n\n # Execute the file using Python 3.12 and capture output\n try:\n start_time = time.perf_counter()\n result = subprocess.run(\n [\"python3.12\", filename],\n capture_output=True,\n text=True,\n check=True\n )\n execution_time = time.perf_counter() - start_time\n\n # Create detailed response\n response = f\"\"\"\nFile Update Details:\n-------------------\nTimestamp: {timestamp}\nFilename: {filename}\nPrevious Status: {'Existed' if file_exists else 'Did not exist'}\nPrevious Size: {old_size if file_exists else 'N/A'} bytes\nPrevious Permissions: {old_permissions if file_exists else 'N/A'}\nNew Size: {new_size} bytes\nNew Permissions: {new_permissions}\nLocation: {os.path.abspath(filename)}\n\nExecution Details:\n-----------------\nExit Code: {result.returncode}\nExecution Time: {execution_time:.2f} seconds\n\nOutput:\n-------\n{result.stdout}\n\nError Output (if any):\n--------------------\n{result.stderr}\n\"\"\"\n return 
response\n except subprocess.CalledProcessError as e:\n error_response = f\"\"\"\n File Update Details:\n -------------------\n Timestamp: {timestamp}\n Filename: {filename}\n Previous Status: {'Existed' if file_exists else 'Did not exist'}\n Previous Size: {old_size if file_exists else 'N/A'} bytes\n Previous Permissions: {old_permissions if file_exists else 'N/A'}\n New Size: {new_size} bytes\n New Permissions: {new_permissions}\n Location: {os.path.abspath(filename)}\n\n Execution Error:\n ---------------\n Exit Code: {e.returncode}\n Error Message: {e.stderr}\n\n Command Output:\n -------------\n {e.stdout}\n \"\"\"\n return error_response\n\n\ndef run_quant_trading_agent(task: str) -> str:\n \"\"\"Run a quantitative trading agent to analyze and execute trading strategies.\n\n This function initializes and runs a specialized quantitative trading agent that can:\n - Develop and backtest trading strategies\n - Analyze market data for alpha opportunities\n - Implement risk management frameworks\n - Optimize portfolio allocations\n - Conduct quantitative research\n - Monitor market microstructure\n - Evaluate trading system performance\n\n Args:\n task (str): The specific trading task or analysis to perform\n\n Returns:\n str: The agent's response or analysis results\n\n Example:\n >>> result = run_quant_trading_agent(\"Analyze SPY ETF for mean reversion opportunities\")\n >>> print(result)\n \"\"\"\n # Initialize the agent\n agent = Agent(\n agent_name=\"Quantitative-Trading-Agent\",\n agent_description=\"Advanced quantitative trading and algorithmic analysis agent\",\n system_prompt=\"\"\"You are an expert quantitative trading agent with deep expertise in:\n - Algorithmic trading strategies and implementation\n - Statistical arbitrage and market making\n - Risk management and portfolio optimization\n - High-frequency trading systems\n - Market microstructure analysis\n - Quantitative research methodologies\n - Financial mathematics and stochastic processes\n - 
Machine learning applications in trading\n\n Your core responsibilities include:\n 1. Developing and backtesting trading strategies\n 2. Analyzing market data and identifying alpha opportunities\n 3. Implementing risk management frameworks\n 4. Optimizing portfolio allocations\n 5. Conducting quantitative research\n 6. Monitoring market microstructure\n 7. Evaluating trading system performance\n\n You maintain strict adherence to:\n - Mathematical rigor in all analyses\n - Statistical significance in strategy development\n - Risk-adjusted return optimization\n - Market impact minimization\n - Regulatory compliance\n - Transaction cost analysis\n - Performance attribution\n\n You communicate in precise, technical terms while maintaining clarity for stakeholders.\"\"\",\n max_loops=2,\n model_name=\"claude-3-5-sonnet-20240620\",\n tools=[create_python_file, update_python_file, backtest_summary],\n )\n\n out = agent.run(task)\n return out\n\n\ndef backtest_summary(report: str) -> str:\n \"\"\"Generate a summary of a backtest report, but only if the backtest was profitable.\n\n This function should only be used when the backtest results show a positive return.\n Using this function for unprofitable backtests may lead to misleading conclusions.\n\n Args:\n report (str): The backtest report containing performance metrics\n\n Returns:\n str: A formatted summary of the backtest report\n\n Example:\n >>> result = backtest_summary(\"Total Return: +15.2%, Sharpe: 1.8\")\n >>> print(result)\n 'The backtest report is: Total Return: +15.2%, Sharpe: 1.8'\n \"\"\"\n return f\"The backtest report is: {report}\"\n\ndef get_coin_price(coin_id: str, vs_currency: str = \"usd\") -> str:\n \"\"\"\n Get the current price of a specific cryptocurrency.\n\n Args:\n coin_id (str): The CoinGecko ID of the cryptocurrency (e.g., 'bitcoin', 'ethereum')\n vs_currency (str, optional): The target currency. 
Defaults to \"usd\".\n\n Returns:\n str: JSON formatted string containing the coin's current price and market data\n\n Raises:\n requests.RequestException: If the API request fails\n\n Example:\n >>> result = get_coin_price(\"bitcoin\")\n >>> print(result)\n {\"bitcoin\": {\"usd\": 45000, \"usd_market_cap\": 850000000000, ...}}\n \"\"\"\n try:\n url = \"https://api.coingecko.com/api/v3/simple/price\"\n params = {\n \"ids\": coin_id,\n \"vs_currencies\": vs_currency,\n \"include_market_cap\": True,\n \"include_24hr_vol\": True,\n \"include_24hr_change\": True,\n \"include_last_updated_at\": True,\n }\n\n response = requests.get(url, params=params, timeout=10)\n response.raise_for_status()\n\n data = response.json()\n return json.dumps(data, indent=2)\n\n except requests.RequestException as e:\n return json.dumps(\n {\n \"error\": f\"Failed to fetch price for {coin_id}: {str(e)}\"\n }\n )\n except Exception as e:\n return json.dumps({\"error\": f\"Unexpected error: {str(e)}\"})\n\n\n\ndef run_crypto_quant_agent(task: str) -> str:\n \"\"\"\n Run a crypto quantitative trading agent with specialized tools for cryptocurrency market analysis.\n\n This function initializes and runs a quantitative trading agent specifically designed for\n cryptocurrency markets. 
The agent is equipped with tools for price fetching and can perform\n various quantitative analyses including algorithmic trading strategy development, risk management,\n and market microstructure analysis.\n\n Args:\n task (str): The task or query to be processed by the crypto quant agent.\n\n Returns:\n str: The agent's response to the given task.\n\n Example:\n >>> response = run_crypto_quant_agent(\"Analyze the current market conditions for Bitcoin\")\n >>> print(response)\n \"Based on current market analysis...\"\n \"\"\"\n # Initialize the agent with expanded tools\n quant_agent = Agent(\n agent_name=\"Crypto-Quant-Agent\",\n agent_description=\"Advanced quantitative trading agent specializing in cryptocurrency markets with algorithmic analysis capabilities\",\n system_prompt=\"\"\"You are an expert quantitative trading agent specializing in cryptocurrency markets. Your capabilities include:\n - Algorithmic trading strategy development and backtesting\n - Statistical arbitrage and market making for crypto assets\n - Risk management and portfolio optimization for digital assets\n - High-frequency trading system design for crypto markets\n - Market microstructure analysis of crypto exchanges\n - Quantitative research methodologies for crypto assets\n - Financial mathematics and stochastic processes\n - Machine learning applications in crypto trading\n\n You maintain strict adherence to:\n - Mathematical rigor in all analyses\n - Statistical significance in strategy development\n - Risk-adjusted return optimization\n - Market impact minimization\n - Regulatory compliance\n - Transaction cost analysis\n - Performance attribution\n\n You communicate in precise, technical terms while maintaining clarity for stakeholders.\"\"\",\n max_loops=1,\n max_tokens=4096,\n model_name=\"gpt-4.1-mini\",\n dynamic_temperature_enabled=True,\n output_type=\"final\",\n tools=[\n get_coin_price,\n ],\n )\n\n return quant_agent.run(task)\n\n# Initialize the agent\nagent = Agent(\n 
agent_name=\"Director-Agent\",\n agent_description=\"Strategic director and project management agent\",\n system_prompt=\"\"\"You are an expert Director Agent with comprehensive capabilities in:\n - Strategic planning and decision making\n - Project management and coordination\n - Resource allocation and optimization\n - Team leadership and delegation\n - Risk assessment and mitigation\n - Stakeholder management\n - Process optimization\n - Quality assurance\n\n Your core responsibilities include:\n 1. Developing and executing strategic initiatives\n 2. Coordinating cross-functional projects\n 3. Managing resource allocation\n 4. Setting and tracking KPIs\n 5. Ensuring project deliverables\n 6. Risk management and mitigation\n 7. Stakeholder communication\n\n You maintain strict adherence to:\n - Best practices in project management\n - Data-driven decision making\n - Clear communication protocols\n - Quality standards\n - Timeline management\n - Budget constraints\n - Regulatory compliance\n\n You communicate with clarity and authority while maintaining professionalism and ensuring all stakeholders are aligned.\"\"\",\n max_loops=1,\n model_name=\"gpt-4o-mini\",\n output_type=\"final\",\n interactive=False,\n tools=[run_quant_trading_agent],\n)\n\nout = agent.run(\"\"\"\n Please call the quantitative trading agent to generate Python code for a Bitcoin backtest using the CoinGecko API.\n Provide a comprehensive description of the backtest methodology and trading strategy.\n Consider the API limitations of CoinGecko and utilize only free, open-source libraries that don't require API keys. Use the requests library to fetch the data. Create a specialized strategy for the backtest focused on the orderbook and other data for price action.\n The goal is to create a backtest that can predict the price action of the coin based on the orderbook and other data.\n Maximize the profit of the backtest. Please use the OKX price API for the orderbook and other data. 
Be very explicit in your implementation.\n Be very precise with the instructions you give to the agent and tell it to write 400 lines of good code.\n\"\"\")\nprint(out)\n
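The pattern in the example above reduces to a simple recipe: wrap a sub-agent in a plain function whose docstring describes the tool, then pass that function in the director's tools list. Here is a minimal, hypothetical sketch of that pattern (the names are illustrative, and the stubbed return value lets the sketch run without API keys):

```python
# Minimal sketch of the agents-as-tools pattern; names are illustrative.
def run_research_agent(task: str) -> str:
    """Run a research sub-agent on the given task.

    The docstring is what the director agent reads to decide when to call this tool.
    """
    # In a real setup this would create and run a sub-agent, e.g.:
    #   sub_agent = Agent(agent_name="Research-Agent", model_name="gpt-4o-mini", max_loops=1)
    #   return sub_agent.run(task)
    return f"[research result for: {task}]"  # stub so the sketch runs offline

# The director then receives the function object itself:
# director = Agent(agent_name="Director", tools=[run_research_agent], ...)
```

Because the director relies on the function name and docstring to route calls, keeping each tool single-purpose and well documented matters as much as the implementation.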
"},{"location":"swarms/examples/agents_as_tools/#best-practices","title":"Best Practices","text":"Category Best Practice Description Tool Design Single Purpose Keep tools focused and single-purpose Clear Naming Use clear, descriptive names Error Handling Include comprehensive error handling Documentation Add detailed documentation Agent Configuration Clear Role Give each agent a clear, specific role System Prompts Provide detailed system prompts Model Parameters Configure appropriate model and parameters Resource Limits Set reasonable limits on iterations and tokens Error Handling Multi-level Implement proper error handling at each level Logging Include logging for debugging API Management Handle API rate limits and timeouts Fallbacks Provide fallback options when possible Performance Optimization Async Operations Use async operations where appropriate Caching Implement caching when possible Token Usage Monitor and optimize token usage Batch Processing Consider batch operations for efficiency"},{"location":"swarms/examples/aggregate/","title":"Aggregate Multi-Agent Responses","text":"The aggregate
function allows you to run multiple agents concurrently on the same task and then synthesize their responses using an intelligent aggregator agent. This is useful for getting diverse perspectives on a problem and then combining them into a comprehensive analysis.
You can get started by first installing swarms with the following command, or click here for more detailed installation instructions:
pip3 install -U swarms\n
"},{"location":"swarms/examples/aggregate/#environment-variables","title":"Environment Variables","text":"WORKSPACE_DIR=\"\"\nOPENAI_API_KEY=\"\"\nANTHROPIC_API_KEY=\"\"\n
"},{"location":"swarms/examples/aggregate/#how-it-works","title":"How It Works","text":"Agents in the workers
list run the same task simultaneously.\n\nfrom swarms.structs.agent import Agent\nfrom swarms.structs.ma_blocks import aggregate\n\n\n# Create specialized agents for different perspectives\nagents = [\n Agent(\n agent_name=\"Sector-Financial-Analyst\",\n agent_description=\"Senior financial analyst at BlackRock.\",\n system_prompt=\"You are a financial analyst tasked with optimizing asset allocations for a $50B portfolio. Provide clear, quantitative recommendations for each sector.\",\n max_loops=1,\n model_name=\"gpt-4o-mini\",\n max_tokens=3000,\n ),\n Agent(\n agent_name=\"Sector-Risk-Analyst\",\n agent_description=\"Expert risk management analyst.\",\n system_prompt=\"You are a risk analyst responsible for advising on risk allocation within a $50B portfolio. Provide detailed insights on risk exposures for each sector.\",\n max_loops=1,\n model_name=\"gpt-4o-mini\",\n max_tokens=3000,\n ),\n Agent(\n agent_name=\"Tech-Sector-Analyst\",\n agent_description=\"Technology sector analyst.\",\n system_prompt=\"You are a tech sector analyst focused on capital and risk allocations. Provide data-backed insights for the tech sector.\",\n max_loops=1,\n model_name=\"gpt-4o-mini\",\n max_tokens=3000,\n ),\n]\n\n# Run the aggregate function\nresult = aggregate(\n workers=agents,\n task=\"What is the best sector to invest in?\",\n type=\"all\", # Get complete conversation history\n aggregator_model_name=\"anthropic/claude-3-sonnet-20240229\"\n)\n\nprint(result)\n
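Conceptually, aggregate fans the task out to every worker concurrently and then hands all of the responses to a synthesizer. A dependency-free sketch of that control flow (this is an illustration of the idea, not the library's actual implementation):

```python
from concurrent.futures import ThreadPoolExecutor

def fan_out_and_synthesize(workers, task, synthesize):
    """Run `task` on every worker concurrently, then combine the outputs."""
    with ThreadPoolExecutor() as pool:
        outputs = list(pool.map(lambda w: w(task), workers))  # one result per worker
    return synthesize(outputs)

# Toy usage with plain functions standing in for agents:
result = fan_out_and_synthesize(
    [lambda t: f"A:{t}", lambda t: f"B:{t}"],
    "task",
    synthesize=lambda outs: " | ".join(sorted(outs)),
)
print(result)  # A:task | B:task
```

In the real aggregate call, the synthesizer is itself an agent (selected via aggregator_model_name) rather than a plain function.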
"},{"location":"swarms/examples/basic_agent/","title":"Basic Agent Example","text":"This example demonstrates how to create and configure a sophisticated AI agent using the Swarms framework. In this tutorial, we'll build a Quantitative Trading Agent that can analyze financial markets and provide investment insights. The agent is powered by GPT models and can be customized for various financial analysis tasks.
"},{"location":"swarms/examples/basic_agent/#prerequisites","title":"Prerequisites","text":"Python 3.7+
OpenAI API key
Swarms library
pip3 install -U swarms\n
.env
file:\n\nOPENAI_API_KEY=\"your-api-key-here\"\nWORKSPACE_DIR=\"agent_workspace\"\n
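Before running the agent, you can sanity-check that the variable was actually loaded into your environment. A minimal stdlib sketch (the helper name is ours, not part of swarms):

```python
import os

def env_var_set(name: str) -> bool:
    """Return True if the environment variable exists and is non-empty."""
    return bool(os.getenv(name))

# Example: verify the key your agent will rely on
# print(env_var_set("OPENAI_API_KEY"))
```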
agent_name
: A unique identifier for your agent
agent_description
: A detailed description of your agent's capabilities
system_prompt
: The core instructions that define your agent's behavior
model_name
: The GPT model to use
Additional configuration options for temperature and output format
Run the example code below:
import time\nfrom swarms import Agent\n\n# Initialize the agent\nagent = Agent(\n agent_name=\"Quantitative-Trading-Agent\",\n agent_description=\"Advanced quantitative trading and algorithmic analysis agent\",\n system_prompt=\"\"\"You are an expert quantitative trading agent with deep expertise in:\n - Algorithmic trading strategies and implementation\n - Statistical arbitrage and market making\n - Risk management and portfolio optimization\n - High-frequency trading systems\n - Market microstructure analysis\n - Quantitative research methodologies\n - Financial mathematics and stochastic processes\n - Machine learning applications in trading\n\n Your core responsibilities include:\n 1. Developing and backtesting trading strategies\n 2. Analyzing market data and identifying alpha opportunities\n 3. Implementing risk management frameworks\n 4. Optimizing portfolio allocations\n 5. Conducting quantitative research\n 6. Monitoring market microstructure\n 7. Evaluating trading system performance\n\n You maintain strict adherence to:\n - Mathematical rigor in all analyses\n - Statistical significance in strategy development\n - Risk-adjusted return optimization\n - Market impact minimization\n - Regulatory compliance\n - Transaction cost analysis\n - Performance attribution\n\n You communicate in precise, technical terms while maintaining clarity for stakeholders.\"\"\",\n max_loops=1,\n model_name=\"gpt-4o-mini\",\n dynamic_temperature_enabled=True,\n output_type=\"json\",\n safety_prompt_on=True,\n)\n\nout = agent.run(\"What are the best top 3 etfs for gold coverage?\")\n\ntime.sleep(10)\nprint(out)\n
"},{"location":"swarms/examples/basic_agent/#example-output","title":"Example Output","text":"The agent will return a JSON response containing recommendations for gold ETFs based on the query.
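Because the agent is configured with output_type="json", the returned string can be parsed with the standard json module. A sketch with a hypothetical payload (the actual schema of the agent's response is not fixed by this example):

```python
import json

# Hypothetical payload shaped like a gold-ETF recommendation response.
out = '{"recommendations": [{"ticker": "GLD"}, {"ticker": "IAU"}, {"ticker": "SGOL"}]}'

data = json.loads(out)
tickers = [rec["ticker"] for rec in data["recommendations"]]
print(tickers)  # a plain Python list you can feed into downstream code
```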
"},{"location":"swarms/examples/basic_agent/#customization","title":"Customization","text":"You can modify the system prompt and agent parameters to create specialized agents for different use cases:
Use Case Description Market Analysis Analyze market trends, patterns, and indicators to identify trading opportunities Portfolio Management Optimize asset allocation and rebalancing strategies Risk Assessment Evaluate and mitigate potential risks in trading strategies Trading Strategy Development Design and implement algorithmic trading strategies"},{"location":"swarms/examples/claude/","title":"Agent with Anthropic/Claude","text":"Get your Anthropic API key and put it in the .env
Select your model_name like claude-3-sonnet-20240229
(model names follow LiteLLM conventions)
from swarms import Agent\nfrom dotenv import load_dotenv\n\nload_dotenv()\n\n# Initialize the agent\nagent = Agent(\n agent_name=\"Financial-Analysis-Agent\",\n model_name=\"claude-3-sonnet-20240229\",\n system_prompt=\"Agent system prompt here\",\n agent_description=\"Agent performs financial analysis.\",\n)\n\n# Run a query\nagent.run(\"What are the components of a startup's stock incentive equity plan?\")\n
"},{"location":"swarms/examples/cohere/","title":"Agent with Cohere","text":"Add your COHERE_API_KEY
in the .env
file
Select your model_name like command-r
(model names follow LiteLLM conventions)
from swarms import Agent\nfrom dotenv import load_dotenv\n\nload_dotenv()\n\n# Initialize the agent\nagent = Agent(\n agent_name=\"Financial-Analysis-Agent\",\n model_name=\"command-r\",\n system_prompt=\"Agent system prompt here\",\n agent_description=\"Agent performs financial analysis.\",\n)\n\n# Run a query\nagent.run(\"What are the components of a startup's stock incentive equity plan?\")\n
"},{"location":"swarms/examples/concurrent_workflow/","title":"ConcurrentWorkflow Examples","text":"The ConcurrentWorkflow architecture enables parallel execution of multiple agents, allowing them to work simultaneously on different aspects of a task. This is particularly useful for complex tasks that can be broken down into independent subtasks.
"},{"location":"swarms/examples/concurrent_workflow/#prerequisites","title":"Prerequisites","text":"pip3 install -U swarms\n
"},{"location":"swarms/examples/concurrent_workflow/#environment-variables","title":"Environment Variables","text":"WORKSPACE_DIR=\"agent_workspace\"\nOPENAI_API_KEY=\"\"\nANTHROPIC_API_KEY=\"\"\nGROQ_API_KEY=\"\"\n
"},{"location":"swarms/examples/concurrent_workflow/#basic-usage","title":"Basic Usage","text":""},{"location":"swarms/examples/concurrent_workflow/#1-initialize-specialized-agents","title":"1. Initialize Specialized Agents","text":"from swarms import Agent\nfrom swarms.structs.concurrent_workflow import ConcurrentWorkflow\n\n# Initialize market research agent\nmarket_researcher = Agent(\n agent_name=\"Market-Researcher\",\n system_prompt=\"\"\"You are a market research specialist. Your tasks include:\n 1. Analyzing market trends and patterns\n 2. Identifying market opportunities and threats\n 3. Evaluating competitor strategies\n 4. Assessing customer needs and preferences\n 5. Providing actionable market insights\"\"\",\n model_name=\"claude-3-sonnet-20240229\",\n max_loops=1,\n temperature=0.7,\n)\n\n# Initialize financial analyst agent\nfinancial_analyst = Agent(\n agent_name=\"Financial-Analyst\",\n system_prompt=\"\"\"You are a financial analysis expert. Your responsibilities include:\n 1. Analyzing financial statements\n 2. Evaluating investment opportunities\n 3. Assessing risk factors\n 4. Providing financial forecasts\n 5. Recommending financial strategies\"\"\",\n model_name=\"claude-3-sonnet-20240229\",\n max_loops=1,\n temperature=0.7,\n)\n\n# Initialize technical analyst agent\ntechnical_analyst = Agent(\n agent_name=\"Technical-Analyst\",\n system_prompt=\"\"\"You are a technical analysis specialist. Your focus areas include:\n 1. Analyzing price patterns and trends\n 2. Evaluating technical indicators\n 3. Identifying support and resistance levels\n 4. Assessing market momentum\n 5. 
Providing trading recommendations\"\"\",\n model_name=\"claude-3-sonnet-20240229\",\n max_loops=1,\n temperature=0.7,\n)\n\n# Create list of agents\nagents = [market_researcher, financial_analyst, technical_analyst]\n\n# Initialize the concurrent workflow with dashboard\nrouter = ConcurrentWorkflow(\n name=\"market-analysis-router\",\n agents=agents,\n max_loops=1,\n show_dashboard=True, # Enable the real-time dashboard\n)\n\n# Run the workflow\nresult = router.run(\n \"Analyze Tesla (TSLA) stock from market, financial, and technical perspectives\"\n)\n
"},{"location":"swarms/examples/concurrent_workflow/#features","title":"Features","text":""},{"location":"swarms/examples/concurrent_workflow/#real-time-dashboard","title":"Real-time Dashboard","text":"The ConcurrentWorkflow now includes a real-time dashboard feature that can be enabled by setting show_dashboard=True
. This provides:
Ensure tasks can be processed concurrently
Agent Configuration:
Set meaningful system prompts
Resource Management:
Manage memory usage
Error Handling:
Here's a complete example showing how to use ConcurrentWorkflow for a comprehensive market analysis:
from swarms import Agent\nfrom swarms.structs.concurrent_workflow import ConcurrentWorkflow\n\n# Initialize specialized agents\nmarket_analyst = Agent(\n agent_name=\"Market-Analyst\",\n system_prompt=\"\"\"You are a market analysis specialist focusing on:\n 1. Market trends and patterns\n 2. Competitive analysis\n 3. Market opportunities\n 4. Industry dynamics\n 5. Growth potential\"\"\",\n model_name=\"claude-3-sonnet-20240229\",\n max_loops=1,\n temperature=0.7,\n)\n\nfinancial_analyst = Agent(\n agent_name=\"Financial-Analyst\",\n system_prompt=\"\"\"You are a financial analysis expert specializing in:\n 1. Financial statements analysis\n 2. Ratio analysis\n 3. Cash flow analysis\n 4. Valuation metrics\n 5. Risk assessment\"\"\",\n model_name=\"claude-3-sonnet-20240229\",\n max_loops=1,\n temperature=0.7,\n)\n\nrisk_analyst = Agent(\n agent_name=\"Risk-Analyst\",\n system_prompt=\"\"\"You are a risk assessment specialist focusing on:\n 1. Market risks\n 2. Operational risks\n 3. Financial risks\n 4. Regulatory risks\n 5. Strategic risks\"\"\",\n model_name=\"claude-3-sonnet-20240229\",\n max_loops=1,\n temperature=0.7,\n)\n\n# Create the concurrent workflow with dashboard\nworkflow = ConcurrentWorkflow(\n name=\"comprehensive-analysis-workflow\",\n agents=[market_analyst, financial_analyst, risk_analyst],\n max_loops=1,\n show_dashboard=True, # Enable real-time monitoring\n)\n\ntry:\n result = workflow.run(\n \"\"\"Provide a comprehensive analysis of Apple Inc. (AAPL) including:\n 1. Market position and competitive analysis\n 2. Financial performance and health\n 3. Risk assessment and mitigation strategies\"\"\"\n )\n\n # Process and display results\n print(\"\\nAnalysis Results:\")\n print(\"=\" * 50)\n for agent_output in result:\n print(f\"\\nAnalysis from {agent_output['agent']}:\")\n print(\"-\" * 40)\n print(agent_output['output'])\n\nexcept Exception as e:\n print(f\"Error during analysis: {str(e)}\")\n
This guide demonstrates how to effectively use the ConcurrentWorkflow architecture with its new dashboard feature for parallel processing of complex tasks using multiple specialized agents.
"},{"location":"swarms/examples/deepseek/","title":"Agent with DeepSeek","text":"Add your DEEPSEEK_API_KEY
in the .env
file
Select your model_name like deepseek/deepseek-chat
(model names follow LiteLLM conventions)
Execute your agent!
from swarms import Agent\nfrom dotenv import load_dotenv\n\nload_dotenv()\n\n# Initialize the agent\nagent = Agent(\n agent_name=\"Financial-Analysis-Agent\",\n model_name=\"deepseek/deepseek-chat\",\n system_prompt=\"Agent system prompt here\",\n agent_description=\"Agent performs financial analysis.\",\n)\n\n# Run a query\nagent.run(\"What are the components of a startup's stock incentive equity plan?\")\n
"},{"location":"swarms/examples/deepseek/#r1","title":"R1","text":"This is a simple example of how to use the DeepSeek Reasoner model, otherwise known as R1.
from swarms import Agent\nfrom dotenv import load_dotenv\n\nload_dotenv()\n\n# Initialize the agent\nagent = Agent(\n agent_name=\"Financial-Analysis-Agent\",\n model_name=\"deepseek/deepseek-reasoner\",\n system_prompt=\"Agent system prompt here\",\n agent_description=\"Agent performs financial analysis.\",\n)\n\n# Run a query\nagent.run(\"What are the components of a startup's stock incentive equity plan?\")\n
"},{"location":"swarms/examples/groq/","title":"Agent with Groq","text":"Add your GROQ_API_KEY
Initialize your agent
Run your agent
from swarms import Agent\n\ncompany = \"NVDA\"\n\n\n# Initialize the Managing Director agent\nmanaging_director = Agent(\n agent_name=\"Managing-Director\",\n system_prompt=f\"\"\"\n As the Managing Director at Blackstone, your role is to oversee the entire investment analysis process for potential acquisitions. \n Your responsibilities include:\n 1. Setting the overall strategy and direction for the analysis\n 2. Coordinating the efforts of the various team members and ensuring a comprehensive evaluation\n 3. Reviewing the findings and recommendations from each team member\n 4. Making the final decision on whether to proceed with the acquisition\n\n For the current potential acquisition of {company}, direct the tasks for the team to thoroughly analyze all aspects of the company, including its financials, industry position, technology, market potential, and regulatory compliance. Provide guidance and feedback as needed to ensure a rigorous and unbiased assessment.\n \"\"\",\n model_name=\"groq/deepseek-r1-distill-qwen-32b\",\n max_loops=1,\n dashboard=False,\n streaming_on=True,\n verbose=True,\n stopping_token=\"<DONE>\",\n state_save_file_type=\"json\",\n saved_state_path=\"managing-director.json\",\n)\n
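The snippet above initializes the Managing Director agent but stops short of the "Run your agent" step. A minimal, hypothetical continuation might look like this (the task wording is ours, not from the library):

```python
# Hypothetical continuation of the example above; the task wording is ours.
company = "NVDA"  # matches the variable defined in the example

task = (
    f"Direct the team's analysis of the potential acquisition of {company}: "
    "cover financials, industry position, technology, market potential, and regulatory compliance."
)

# Requires GROQ_API_KEY to be set before running:
# out = managing_director.run(task)
# print(out)
```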
"},{"location":"swarms/examples/groupchat_example/","title":"GroupChat Example","text":"Overview
Learn how to create and configure a group chat with multiple AI agents using the Swarms framework. This example demonstrates how to set up agents for expense analysis and budget advising.
"},{"location":"swarms/examples/groupchat_example/#prerequisites","title":"Prerequisites","text":"Before You Begin
Make sure you have: - Python 3.7+ installed - A valid API key for your model provider - The Swarms package installed
"},{"location":"swarms/examples/groupchat_example/#installation","title":"Installation","text":"pip install swarms\n
"},{"location":"swarms/examples/groupchat_example/#environment-setup","title":"Environment Setup","text":"API Key Configuration
Set your API key in the .env
file:
OPENAI_API_KEY=\"your-api-key-here\"\n
"},{"location":"swarms/examples/groupchat_example/#code-implementation","title":"Code Implementation","text":""},{"location":"swarms/examples/groupchat_example/#import-required-modules","title":"Import Required Modules","text":"from dotenv import load_dotenv\nimport os\nfrom swarms import Agent, GroupChat\n
"},{"location":"swarms/examples/groupchat_example/#configure-agents","title":"Configure Agents","text":"Agent Configuration
Here's how to set up your agents with specific roles:
# Expense Analysis Agent\nagent1 = Agent(\n agent_name=\"Expense-Analysis-Agent\",\n description=\"You are an accounting agent specializing in analyzing potential expenses.\",\n model_name=\"gpt-4o-mini\",\n max_loops=1,\n autosave=False,\n dashboard=False,\n verbose=True,\n dynamic_temperature_enabled=True,\n user_name=\"swarms_corp\",\n retry_attempts=1,\n context_length=200000,\n output_type=\"string\",\n streaming_on=False,\n max_tokens=15000,\n)\n\n# Budget Adviser Agent\nagent2 = Agent(\n agent_name=\"Budget-Adviser-Agent\",\n description=\"You are a budget adviser who provides insights on managing and optimizing expenses.\",\n model_name=\"gpt-4o-mini\",\n max_loops=1,\n autosave=False,\n dashboard=False,\n verbose=True,\n dynamic_temperature_enabled=True,\n user_name=\"swarms_corp\",\n retry_attempts=1,\n context_length=200000,\n output_type=\"string\",\n streaming_on=False,\n max_tokens=15000,\n)\n
"},{"location":"swarms/examples/groupchat_example/#initialize-groupchat","title":"Initialize GroupChat","text":"GroupChat Setup
Configure the GroupChat with your agents:
agents = [agent1, agent2]\n\nchat = GroupChat(\n name=\"Expense Advisory\",\n description=\"Accounting group focused on discussing potential expenses\",\n agents=agents,\n max_loops=1,\n output_type=\"all\",\n)\n
"},{"location":"swarms/examples/groupchat_example/#run-the-chat","title":"Run the Chat","text":"Execute the Chat
Start the conversation between agents:
history = chat.run(\n \"What potential expenses should we consider for the upcoming quarter? Please collaborate to outline a comprehensive list.\"\n)\n
"},{"location":"swarms/examples/groupchat_example/#complete-example","title":"Complete Example","text":"Full Implementation
Here's the complete code combined:
from dotenv import load_dotenv\nimport os\nfrom swarms import Agent, GroupChat\n\nif __name__ == \"__main__\":\n # Load environment variables\n load_dotenv()\n api_key = os.getenv(\"OPENAI_API_KEY\")\n\n # Configure agents\n agent1 = Agent(\n agent_name=\"Expense-Analysis-Agent\",\n description=\"You are an accounting agent specializing in analyzing potential expenses.\",\n model_name=\"gpt-4o-mini\",\n max_loops=1,\n autosave=False,\n dashboard=False,\n verbose=True,\n dynamic_temperature_enabled=True,\n user_name=\"swarms_corp\",\n retry_attempts=1,\n context_length=200000,\n output_type=\"string\",\n streaming_on=False,\n max_tokens=15000,\n )\n\n agent2 = Agent(\n agent_name=\"Budget-Adviser-Agent\",\n description=\"You are a budget adviser who provides insights on managing and optimizing expenses.\",\n model_name=\"gpt-4o-mini\",\n max_loops=1,\n autosave=False,\n dashboard=False,\n verbose=True,\n dynamic_temperature_enabled=True,\n user_name=\"swarms_corp\",\n retry_attempts=1,\n context_length=200000,\n output_type=\"string\",\n streaming_on=False,\n max_tokens=15000,\n )\n\n # Initialize GroupChat\n agents = [agent1, agent2]\n chat = GroupChat(\n name=\"Expense Advisory\",\n description=\"Accounting group focused on discussing potential expenses\",\n agents=agents,\n max_loops=1,\n output_type=\"all\",\n )\n\n # Run the chat\n history = chat.run(\n \"What potential expenses should we consider for the upcoming quarter? Please collaborate to outline a comprehensive list.\"\n )\n
"},{"location":"swarms/examples/groupchat_example/#configuration-options","title":"Configuration Options","text":"Key Parameters
Parameter Description Defaultmax_loops
Maximum number of conversation loops 1 autosave
Enable automatic saving of chat history False dashboard
Enable dashboard visualization False verbose
Enable detailed logging True dynamic_temperature_enabled
Enable dynamic temperature adjustment True retry_attempts
Number of retry attempts for failed operations 1 context_length
Maximum context length for the model 200000 max_tokens
Maximum tokens for model output 15000"},{"location":"swarms/examples/groupchat_example/#next-steps","title":"Next Steps","text":"What to Try Next
max_loops
parameter to allow for longer conversations\n\nCommon Issues
.env
file; enable verbose
output for detailed error messages\n\n.env
file in the root directory and add your API key: GROQ_API_KEY
from swarms import Agent, SwarmRouter, HybridHierarchicalClusterSwarm\n\n\n# Core Legal Agent Definitions with short, simple prompts\nlitigation_agent = Agent(\n agent_name=\"Litigator\",\n system_prompt=\"You handle lawsuits. Analyze facts, build arguments, and develop case strategy.\",\n model_name=\"groq/deepseek-r1-distill-qwen-32b\",\n max_loops=1,\n)\n\ncorporate_agent = Agent(\n agent_name=\"Corporate-Attorney\",\n system_prompt=\"You handle business law. Advise on corporate structure, governance, and transactions.\",\n model_name=\"groq/deepseek-r1-distill-qwen-32b\",\n max_loops=1,\n)\n\nip_agent = Agent(\n agent_name=\"IP-Attorney\",\n system_prompt=\"You protect intellectual property. Handle patents, trademarks, copyrights, and trade secrets.\",\n model_name=\"groq/deepseek-r1-distill-qwen-32b\",\n max_loops=1,\n)\n\nemployment_agent = Agent(\n agent_name=\"Employment-Attorney\",\n system_prompt=\"You handle workplace matters. Address hiring, termination, discrimination, and labor issues.\",\n model_name=\"groq/deepseek-r1-distill-qwen-32b\",\n max_loops=1,\n)\n\nparalegal_agent = Agent(\n agent_name=\"Paralegal\",\n system_prompt=\"You assist attorneys. Conduct research, draft documents, and organize case files.\",\n model_name=\"groq/deepseek-r1-distill-qwen-32b\",\n max_loops=1,\n)\n\ndoc_review_agent = Agent(\n agent_name=\"Document-Reviewer\",\n system_prompt=\"You examine documents. 
Extract key information and identify relevant content.\",\n model_name=\"groq/deepseek-r1-distill-qwen-32b\",\n max_loops=1,\n)\n\n# Practice Area Swarm Routers\nlitigation_swarm = SwarmRouter(\n name=\"litigation-practice\",\n description=\"Handle all aspects of litigation\",\n agents=[litigation_agent, paralegal_agent, doc_review_agent],\n swarm_type=\"SequentialWorkflow\",\n)\n\ncorporate_swarm = SwarmRouter(\n name=\"corporate-practice\",\n description=\"Handle business and corporate legal matters\",\n agents=[corporate_agent, paralegal_agent],\n swarm_type=\"SequentialWorkflow\",\n)\n\nip_swarm = SwarmRouter(\n name=\"ip-practice\",\n description=\"Handle intellectual property matters\",\n agents=[ip_agent, paralegal_agent],\n swarm_type=\"SequentialWorkflow\",\n)\n\nemployment_swarm = SwarmRouter(\n name=\"employment-practice\",\n description=\"Handle employment and labor law matters\",\n agents=[employment_agent, paralegal_agent],\n swarm_type=\"SequentialWorkflow\",\n)\n\n# Cross-functional Swarm Router\nm_and_a_swarm = SwarmRouter(\n name=\"mergers-acquisitions\",\n description=\"Handle mergers and acquisitions\",\n agents=[\n corporate_agent,\n ip_agent,\n employment_agent,\n doc_review_agent,\n ],\n swarm_type=\"ConcurrentWorkflow\",\n)\n\ndispute_swarm = SwarmRouter(\n name=\"dispute-resolution\",\n description=\"Handle complex disputes requiring multiple specialties\",\n agents=[litigation_agent, corporate_agent, doc_review_agent],\n swarm_type=\"ConcurrentWorkflow\",\n)\n\n\nhybrid_hiearchical_swarm = HybridHierarchicalClusterSwarm(\n name=\"hybrid-hiearchical-swarm\",\n description=\"A hybrid hiearchical swarm that uses a hybrid hiearchical peer model to solve complex tasks.\",\n swarms=[\n litigation_swarm,\n corporate_swarm,\n ip_swarm,\n employment_swarm,\n m_and_a_swarm,\n dispute_swarm,\n ],\n max_loops=1,\n router_agent_model_name=\"gpt-4o-mini\",\n)\n\n\nif __name__ == \"__main__\":\n hybrid_hiearchical_swarm.run(\n \"What is the best way to 
file a patent for AI technology?\"\n )\n
"},{"location":"swarms/examples/hierarchical_swarm_example/","title":"Hierarchical Swarm Examples","text":"This page provides simple, practical examples of how to use the HierarchicalSwarm
for various real-world scenarios.
from swarms import Agent\nfrom swarms.structs.hiearchical_swarm import HierarchicalSwarm\n\n# Create specialized financial analysis agents\nmarket_research_agent = Agent(\n agent_name=\"Market-Research-Specialist\",\n agent_description=\"Expert in market research, trend analysis, and competitive intelligence\",\n system_prompt=\"\"\"You are a senior market research specialist with expertise in:\n - Market trend analysis and forecasting\n - Competitive landscape assessment\n - Consumer behavior analysis\n - Industry report generation\n - Market opportunity identification\n - Risk assessment and mitigation strategies\"\"\",\n model_name=\"gpt-4o\",\n)\n\nfinancial_analyst_agent = Agent(\n agent_name=\"Financial-Analysis-Expert\",\n agent_description=\"Specialist in financial statement analysis, valuation, and investment research\",\n system_prompt=\"\"\"You are a senior financial analyst with deep expertise in:\n - Financial statement analysis (income statement, balance sheet, cash flow)\n - Valuation methodologies (DCF, comparable company analysis, precedent transactions)\n - Investment research and due diligence\n - Financial modeling and forecasting\n - Risk assessment and portfolio analysis\n - ESG (Environmental, Social, Governance) analysis\"\"\",\n model_name=\"gpt-4o\",\n)\n\n# Initialize the hierarchical swarm\nfinancial_analysis_swarm = HierarchicalSwarm(\n name=\"Financial-Analysis-Hierarchical-Swarm\",\n description=\"A hierarchical swarm for comprehensive financial analysis with specialized agents\",\n agents=[market_research_agent, financial_analyst_agent],\n max_loops=2,\n verbose=True,\n)\n\n# Execute financial analysis\ntask = \"Conduct a comprehensive analysis of Tesla (TSLA) stock including market position, financial health, and investment potential\"\nresult = financial_analysis_swarm.run(task=task)\nprint(result)\n
"},{"location":"swarms/examples/hierarchical_swarm_example/#development-team-example","title":"Development Team Example","text":"from swarms import Agent\nfrom swarms.structs.hiearchical_swarm import HierarchicalSwarm\n\n# Create specialized development agents\nfrontend_developer_agent = Agent(\n agent_name=\"Frontend-Developer\",\n agent_description=\"Senior frontend developer expert in modern web technologies and user experience\",\n system_prompt=\"\"\"You are a senior frontend developer with expertise in:\n - Modern JavaScript frameworks (React, Vue, Angular)\n - TypeScript and modern ES6+ features\n - CSS frameworks and responsive design\n - State management (Redux, Zustand, Context API)\n - Web performance optimization\n - Accessibility (WCAG) and SEO best practices\"\"\",\n model_name=\"gpt-4o\",\n)\n\nbackend_developer_agent = Agent(\n agent_name=\"Backend-Developer\",\n agent_description=\"Senior backend developer specializing in server-side development and API design\",\n system_prompt=\"\"\"You are a senior backend developer with expertise in:\n - Server-side programming languages (Python, Node.js, Java, Go)\n - Web frameworks (Django, Flask, Express, Spring Boot)\n - Database design and optimization (SQL, NoSQL)\n - API design and REST/GraphQL implementation\n - Authentication and authorization systems\n - Microservices architecture and containerization\"\"\",\n model_name=\"gpt-4o\",\n)\n\n# Initialize the development swarm\ndevelopment_department_swarm = HierarchicalSwarm(\n name=\"Autonomous-Development-Department\",\n description=\"A fully autonomous development department with specialized agents\",\n agents=[frontend_developer_agent, backend_developer_agent],\n max_loops=3,\n verbose=True,\n)\n\n# Execute development project\ntask = \"Create a simple web app that allows users to upload a file and then download it. The app should be built with React and Node.js.\"\nresult = development_department_swarm.run(task=task)\nprint(result)\n
"},{"location":"swarms/examples/hierarchical_swarm_example/#single-step-execution","title":"Single Step Execution","text":"from swarms import Agent\nfrom swarms.structs.hiearchical_swarm import HierarchicalSwarm\n\n# Create analysis agents\nmarket_agent = Agent(\n agent_name=\"Market-Analyst\",\n agent_description=\"Expert in market analysis and trends\",\n model_name=\"gpt-4o\",\n)\n\ntechnical_agent = Agent(\n agent_name=\"Technical-Analyst\",\n agent_description=\"Specialist in technical analysis and patterns\",\n model_name=\"gpt-4o\",\n)\n\n# Initialize the swarm\nswarm = HierarchicalSwarm(\n name=\"Analysis-Swarm\",\n description=\"A hierarchical swarm for comprehensive analysis\",\n agents=[market_agent, technical_agent],\n max_loops=1,\n verbose=True,\n)\n\n# Execute a single step\ntask = \"Analyze the current market trends for electric vehicles\"\nfeedback = swarm.step(task=task)\nprint(\"Director Feedback:\", feedback)\n
"},{"location":"swarms/examples/hierarchical_swarm_example/#batch-processing","title":"Batch Processing","text":"from swarms import Agent\nfrom swarms.structs.hiearchical_swarm import HierarchicalSwarm\n\n# Create analysis agents\nmarket_agent = Agent(\n agent_name=\"Market-Analyst\",\n agent_description=\"Expert in market analysis and trends\",\n model_name=\"gpt-4o\",\n)\n\ntechnical_agent = Agent(\n agent_name=\"Technical-Analyst\",\n agent_description=\"Specialist in technical analysis and patterns\",\n model_name=\"gpt-4o\",\n)\n\n# Initialize the swarm\nswarm = HierarchicalSwarm(\n name=\"Analysis-Swarm\",\n description=\"A hierarchical swarm for comprehensive analysis\",\n agents=[market_agent, technical_agent],\n max_loops=2,\n verbose=True,\n)\n\n# Execute multiple tasks\ntasks = [\n \"Analyze Apple (AAPL) stock performance\",\n \"Evaluate Microsoft (MSFT) market position\",\n \"Assess Google (GOOGL) competitive landscape\"\n]\n\nresults = swarm.batched_run(tasks=tasks)\nfor i, result in enumerate(results):\n print(f\"Task {i+1} Result:\", result)\n
"},{"location":"swarms/examples/hierarchical_swarm_example/#research-team-example","title":"Research Team Example","text":"from swarms import Agent\nfrom swarms.structs.hiearchical_swarm import HierarchicalSwarm\n\n# Create specialized research agents\nresearch_manager = Agent(\n agent_name=\"Research-Manager\",\n agent_description=\"Manages research operations and coordinates research tasks\",\n system_prompt=\"You are a research manager responsible for overseeing research projects and coordinating research efforts.\",\n model_name=\"gpt-4o\",\n)\n\ndata_analyst = Agent(\n agent_name=\"Data-Analyst\",\n agent_description=\"Analyzes data and generates insights\",\n system_prompt=\"You are a data analyst specializing in processing and analyzing data to extract meaningful insights.\",\n model_name=\"gpt-4o\",\n)\n\nresearch_assistant = Agent(\n agent_name=\"Research-Assistant\",\n agent_description=\"Assists with research tasks and data collection\",\n system_prompt=\"You are a research assistant who helps gather information and support research activities.\",\n model_name=\"gpt-4o\",\n)\n\n# Initialize the research swarm\nresearch_swarm = HierarchicalSwarm(\n name=\"Research-Team-Swarm\",\n description=\"A hierarchical swarm for comprehensive research projects\",\n agents=[research_manager, data_analyst, research_assistant],\n max_loops=2,\n verbose=True,\n)\n\n# Execute research project\ntask = \"Conduct a comprehensive market analysis for a new AI-powered productivity tool\"\nresult = research_swarm.run(task=task)\nprint(result)\n
"},{"location":"swarms/examples/hierarchical_swarm_example/#key-takeaways","title":"Key Takeaways","text":"max_loops
based on task complexity (1-3 for most tasks). For more detailed information about the HierarchicalSwarm
API and advanced usage patterns, see the main documentation.
The Interactive GroupChat is a powerful multi-agent architecture that enables dynamic collaboration between multiple AI agents. This architecture allows agents to communicate with each other, respond to mentions using @agent_name
syntax, and work together to solve complex tasks through structured conversation flows.
The Interactive GroupChat implements a collaborative swarm architecture where multiple specialized agents work together in a coordinated manner. Key features include:
@agent_name
syntax. For comprehensive documentation on Interactive GroupChat, visit: Interactive GroupChat Documentation
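As a rough conceptual illustration (this is not the library's actual routing code, and the function name is invented for this sketch), @-style mentions can be thought of as a regex pass that extracts which known agents a message is directed at:

```python
import re

def extract_mentions(message, agent_names):
    """Return the agents a message directs via @agent_name mentions.

    Conceptual sketch only; InteractiveGroupChat's real routing
    logic may differ.
    """
    mentioned = re.findall(r"@([\w-]+)", message)
    # Keep only mentions that match a known agent, preserving order
    return [name for name in mentioned if name in agent_names]

agents = ["analyst", "researcher", "writer"]
task = "Let's plan a launch. @analyst @writer please contribute."
print(extract_mentions(task, agents))  # ['analyst', 'writer']
```

In the full examples below, this routing happens inside the group chat itself; you only write the @mentions in the task string.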
"},{"location":"swarms/examples/igc_example/#step-by-step-showcase","title":"Step-by-Step Showcase","text":"@agent_name
mentions to direct specific agents. Install the swarms package using pip:
pip install -U swarms\n
"},{"location":"swarms/examples/igc_example/#basic-setup","title":"Basic Setup","text":"WORKSPACE_DIR=\"agent_workspace\"\nOPENAI_API_KEY=\"\"\n
"},{"location":"swarms/examples/igc_example/#code","title":"Code","text":"\"\"\"\nInteractiveGroupChat Speaker Function Examples\n\nThis example demonstrates how to use different speaker functions in the InteractiveGroupChat:\n- Round Robin: Agents speak in a fixed order, cycling through the list\n- Random: Agents speak in random order\n- Priority: Agents speak based on priority weights\n- Custom: User-defined speaker functions\n\nThe example also shows how agents can mention each other using @agent_name syntax.\n\"\"\"\n\nfrom swarms import Agent\nfrom swarms.structs.interactive_groupchat import (\n InteractiveGroupChat,\n random_speaker,\n)\n\n\ndef create_example_agents():\n \"\"\"Create example agents for demonstration.\"\"\"\n\n # Create agents with different expertise\n analyst = Agent(\n agent_name=\"analyst\",\n system_prompt=\"You are a data analyst. You excel at analyzing data, creating charts, and providing insights.\",\n model_name=\"gpt-4.1\",\n streaming_on=True,\n print_on=True,\n )\n\n researcher = Agent(\n agent_name=\"researcher\",\n system_prompt=\"You are a research specialist. You are great at gathering information, fact-checking, and providing detailed research.\",\n model_name=\"gpt-4.1\",\n streaming_on=True,\n print_on=True,\n )\n\n writer = Agent(\n agent_name=\"writer\",\n system_prompt=\"You are a content writer. You excel at writing clear, engaging content and summarizing information.\",\n model_name=\"gpt-4.1\",\n streaming_on=True,\n print_on=True,\n )\n\n return [analyst, researcher, writer]\n\n\ndef example_random():\n agents = create_example_agents()\n\n # Create group chat with random speaker function\n group_chat = InteractiveGroupChat(\n name=\"Random Team\",\n description=\"A team that speaks in random order\",\n agents=agents,\n speaker_function=random_speaker,\n interactive=False,\n )\n\n # Test the random behavior\n task = \"Let's create a marketing strategy. 
@analyst @researcher @writer please contribute.\"\n\n response = group_chat.run(task)\n print(f\"Response:\\n{response}\\n\")\n\n\nif __name__ == \"__main__\":\n # example_round_robin()\n example_random()\n
"},{"location":"swarms/examples/igc_example/#connect-with-us","title":"Connect With Us","text":"Join our community of agent engineers and researchers for technical support, cutting-edge updates, and exclusive access to world-class agent engineering insights!
Platform Description Link \ud83d\udcda Documentation Official documentation and guides docs.swarms.world \ud83d\udcdd Blog Latest updates and technical articles Medium \ud83d\udcac Discord Live chat and community support Join Discord \ud83d\udc26 Twitter Latest news and announcements @kyegomez \ud83d\udc65 LinkedIn Professional network and updates The Swarm Corporation \ud83d\udcfa YouTube Tutorials and demos Swarms Channel \ud83c\udfab Events Join our community events Sign up here \ud83d\ude80 Onboarding Session Get onboarded with Kye Gomez, creator and lead maintainer of Swarms Book Session"},{"location":"swarms/examples/interactive_groupchat_example/","title":"Interactive GroupChat Example","text":"This is an example of the InteractiveGroupChat module in swarms. Click here for full documentation
"},{"location":"swarms/examples/interactive_groupchat_example/#installation","title":"Installation","text":"You can get started by first installing swarms with the following command, or click here for more detailed installation instructions:
pip3 install -U swarms\n
"},{"location":"swarms/examples/interactive_groupchat_example/#environment-variables","title":"Environment Variables","text":"OPENAI_API_KEY=\"\"\nANTHROPIC_API_KEY=\"\"\nGROQ_API_KEY=\"\"\n
"},{"location":"swarms/examples/interactive_groupchat_example/#code","title":"Code","text":""},{"location":"swarms/examples/interactive_groupchat_example/#interactive-session-in-terminal","title":"Interactive Session in Terminal","text":"from swarms import Agent\nfrom swarms.structs.interactive_groupchat import InteractiveGroupChat\n\n\nif __name__ == \"__main__\":\n # Initialize agents\n financial_advisor = Agent(\n agent_name=\"FinancialAdvisor\",\n system_prompt=\"You are a financial advisor specializing in investment strategies and portfolio management.\",\n random_models_on=True,\n output_type=\"final\",\n )\n\n tax_expert = Agent(\n agent_name=\"TaxExpert\",\n system_prompt=\"You are a tax expert who provides guidance on tax optimization and compliance.\",\n random_models_on=True,\n output_type=\"final\",\n )\n\n investment_analyst = Agent(\n agent_name=\"InvestmentAnalyst\",\n system_prompt=\"You are an investment analyst focusing on market trends and investment opportunities.\",\n random_models_on=True,\n output_type=\"final\",\n )\n\n # Create a list of agents including both Agent instances and callables\n agents = [\n financial_advisor,\n tax_expert,\n investment_analyst,\n ]\n\n # Initialize another chat instance in interactive mode\n interactive_chat = InteractiveGroupChat(\n name=\"Interactive Financial Advisory Team\",\n description=\"An interactive team of financial experts providing comprehensive financial advice\",\n agents=agents,\n max_loops=1,\n output_type=\"all\",\n interactive=True,\n )\n\n try:\n # Start the interactive session\n print(\"\\nStarting interactive session...\")\n # interactive_chat.run(\"What is the best methodology to accumulate gold and silver commodities, and what is the best long-term strategy to accumulate them?\")\n interactive_chat.start_interactive_session()\n except Exception as e:\n print(f\"An error occurred in interactive mode: {e}\")\n
"},{"location":"swarms/examples/interactive_groupchat_example/#run-method-manual-method","title":"Run Method // Manual Method","text":"from swarms import Agent\nfrom swarms.structs.interactive_groupchat import InteractiveGroupChat\n\n\nif __name__ == \"__main__\":\n # Initialize agents\n financial_advisor = Agent(\n agent_name=\"FinancialAdvisor\",\n system_prompt=\"You are a financial advisor specializing in investment strategies and portfolio management.\",\n random_models_on=True,\n output_type=\"final\",\n )\n\n tax_expert = Agent(\n agent_name=\"TaxExpert\",\n system_prompt=\"You are a tax expert who provides guidance on tax optimization and compliance.\",\n random_models_on=True,\n output_type=\"final\",\n )\n\n investment_analyst = Agent(\n agent_name=\"InvestmentAnalyst\",\n system_prompt=\"You are an investment analyst focusing on market trends and investment opportunities.\",\n random_models_on=True,\n output_type=\"final\",\n )\n\n # Create a list of agents including both Agent instances and callables\n agents = [\n financial_advisor,\n tax_expert,\n investment_analyst,\n ]\n\n # Initialize another chat instance in interactive mode\n interactive_chat = InteractiveGroupChat(\n name=\"Interactive Financial Advisory Team\",\n description=\"An interactive team of financial experts providing comprehensive financial advice\",\n agents=agents,\n max_loops=1,\n output_type=\"all\",\n interactive=False,\n )\n\n try:\n # Start the interactive session\n print(\"\\nStarting interactive session...\")\n # interactive_chat.run(\"What is the best methodology to accumulate gold and silver commodities, and what is the best long-term strategy to accumulate them?\")\n interactive_chat.run('@TaxExpert how can I understand tax tactics for crypto payroll in solana?')\n except Exception as e:\n print(f\"An error occurred in interactive mode: {e}\")\n
"},{"location":"swarms/examples/llama4/","title":"Llama4 Model Integration","text":"Prerequisites
swarms
library installed. Here's a simple example of integrating the Llama4 model for crypto risk analysis:
from dotenv import load_dotenv\nfrom swarms import Agent\nfrom swarms.utils.vllm_wrapper import VLLM\n\nload_dotenv()\nmodel = VLLM(model_name=\"meta-llama/Llama-4-Maverick-17B-128E\")\n
"},{"location":"swarms/examples/llama4/#available-models","title":"Available Models","text":"Model Name Description Type meta-llama/Llama-4-Maverick-17B-128E Base model with 128 experts Base meta-llama/Llama-4-Maverick-17B-128E-Instruct Instruction-tuned version with 128 experts Instruct meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8 FP8 quantized instruction model Instruct (Optimized) meta-llama/Llama-4-Scout-17B-16E Base model with 16 experts Base meta-llama/Llama-4-Scout-17B-16E-Instruct Instruction-tuned version with 16 experts Instruct Model Selection
CRYPTO_RISK_ANALYSIS_PROMPT = \"\"\"\nYou are a cryptocurrency risk analysis expert. Your role is to:\n\n1. Analyze market risks:\n - Volatility assessment\n - Market sentiment analysis\n - Trading volume patterns\n - Price trend evaluation\n\n2. Evaluate technical risks:\n - Network security\n - Protocol vulnerabilities\n - Smart contract risks\n - Technical scalability\n\n3. Consider regulatory risks:\n - Current regulations\n - Potential regulatory changes\n - Compliance requirements\n - Geographic restrictions\n\n4. Assess fundamental risks:\n - Team background\n - Project development status\n - Competition analysis\n - Use case viability\n\nProvide detailed, balanced analysis with both risks and potential mitigations.\nBase your analysis on established crypto market principles and current market conditions.\n\"\"\"\n
"},{"location":"swarms/examples/llama4/#2-initialize-agent","title":"2. Initialize Agent","text":"agent = Agent(\n agent_name=\"Crypto-Risk-Analysis-Agent\",\n agent_description=\"Agent for analyzing risks in cryptocurrency investments\",\n system_prompt=CRYPTO_RISK_ANALYSIS_PROMPT,\n max_loops=1,\n llm=model,\n)\n
"},{"location":"swarms/examples/llama4/#full-code","title":"Full Code","text":"from dotenv import load_dotenv\n\nfrom swarms import Agent\nfrom swarms.utils.vllm_wrapper import VLLM\n\nload_dotenv()\n\n# Define custom system prompt for crypto risk analysis\nCRYPTO_RISK_ANALYSIS_PROMPT = \"\"\"\nYou are a cryptocurrency risk analysis expert. Your role is to:\n\n1. Analyze market risks:\n - Volatility assessment\n - Market sentiment analysis\n - Trading volume patterns\n - Price trend evaluation\n\n2. Evaluate technical risks:\n - Network security\n - Protocol vulnerabilities\n - Smart contract risks\n - Technical scalability\n\n3. Consider regulatory risks:\n - Current regulations\n - Potential regulatory changes\n - Compliance requirements\n - Geographic restrictions\n\n4. Assess fundamental risks:\n - Team background\n - Project development status\n - Competition analysis\n - Use case viability\n\nProvide detailed, balanced analysis with both risks and potential mitigations.\nBase your analysis on established crypto market principles and current market conditions.\n\"\"\"\n\nmodel = VLLM(model_name=\"meta-llama/Llama-4-Maverick-17B-128E\")\n\n# Initialize the agent with custom prompt\nagent = Agent(\n agent_name=\"Crypto-Risk-Analysis-Agent\",\n agent_description=\"Agent for analyzing risks in cryptocurrency investments\",\n system_prompt=CRYPTO_RISK_ANALYSIS_PROMPT,\n max_loops=1,\n llm=model,\n)\n\nprint(\n agent.run(\n \"Conduct a risk analysis of the top cryptocurrencies. Think for 2 loops internally\"\n )\n)\n
Resource Usage
The Llama4 model requires significant computational resources. Ensure your system meets the minimum requirements.
"},{"location":"swarms/examples/llama4/#faq","title":"FAQ","text":"What is the purpose of the max_loops parameter? The max_loops
parameter determines how many times the agent will iterate through its thinking process. In this example, it's set to 1 for a single pass analysis.
Yes, you can replace the VLLM wrapper with other compatible models. Just ensure you update the model initialization accordingly.
How do I customize the system prompt?You can modify the CRYPTO_RISK_ANALYSIS_PROMPT
string to match your specific use case while maintaining the structured format.
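For instance, one way to adapt the prompt while keeping its numbered structure is plain string composition; the extra section below is purely illustrative, not part of the original prompt:

```python
# Abbreviated stand-in for the full prompt defined earlier on this page
CRYPTO_RISK_ANALYSIS_PROMPT = """
You are a cryptocurrency risk analysis expert. Your role is to:

1. Analyze market risks:
   - Volatility assessment
"""

# Append a use-case-specific section, preserving the numbered format
DEFI_SECTION = """
5. Assess DeFi-specific risks:
   - Smart contract audit status
   - Total value locked (TVL) trends
"""

custom_prompt = CRYPTO_RISK_ANALYSIS_PROMPT.rstrip() + "\n\n" + DEFI_SECTION.strip()
print("5. Assess DeFi-specific risks" in custom_prompt)  # True
```

The resulting string is then passed as system_prompt when initializing the agent, exactly as in the examples above.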
Best Practices
Sample Usage
response = agent.run(\n \"Conduct a risk analysis of the top cryptocurrencies. Think for 2 loops internally\"\n)\nprint(response)\n
"},{"location":"swarms/examples/lumo/","title":"Lumo Example","text":"Introducing Lumo-70B-Instruct - the largest and most advanced AI model ever created for the Solana ecosystem. Built on Meta's groundbreaking LLaMa 3.3 70B Instruct foundation, this revolutionary model represents a quantum leap in blockchain-specific artificial intelligence. With an unprecedented 70 billion parameters and trained on the most comprehensive Solana documentation dataset ever assembled, Lumo-70B-Instruct sets a new standard for developer assistance in the blockchain space.
from swarms import Agent\nfrom transformers import LlamaForCausalLM, AutoTokenizer\nimport torch\nfrom transformers import BitsAndBytesConfig\n\nclass Lumo:\n \"\"\"\n A class for generating text using the Lumo model with 4-bit quantization.\n \"\"\"\n def __init__(self):\n \"\"\"\n Initializes the Lumo model with 4-bit quantization and a tokenizer.\n \"\"\"\n # Configure 4-bit quantization\n bnb_config = BitsAndBytesConfig(\n load_in_4bit=True,\n bnb_4bit_quant_type=\"nf4\",\n bnb_4bit_compute_dtype=torch.float16,\n llm_int8_enable_fp32_cpu_offload=True\n )\n\n self.model = LlamaForCausalLM.from_pretrained(\n \"lumolabs-ai/Lumo-70B-Instruct\",\n device_map=\"auto\",\n quantization_config=bnb_config,\n use_cache=False,\n attn_implementation=\"sdpa\"\n )\n self.tokenizer = AutoTokenizer.from_pretrained(\"lumolabs-ai/Lumo-70B-Instruct\")\n\n def run(self, task: str) -> str:\n \"\"\"\n Generates text based on the given prompt using the Lumo model.\n\n Args:\n prompt (str): The input prompt for the model.\n\n Returns:\n str: The generated text.\n \"\"\"\n inputs = self.tokenizer(task, return_tensors=\"pt\").to(self.model.device)\n outputs = self.model.generate(**inputs, max_new_tokens=100)\n return self.tokenizer.decode(outputs[0], skip_special_tokens=True)\n\n\n\n\nAgent(\n agent_name=\"Solana-Analysis-Agent\",\n llm=Lumo(),\n max_loops=\"auto\",\n interactive=True,\n streaming_on=True,\n).run(\"How do i create a smart contract in solana?\")\n
"},{"location":"swarms/examples/mixture_of_agents/","title":"MixtureOfAgents Examples","text":"The MixtureOfAgents architecture combines multiple specialized agents with an aggregator agent to process complex tasks. This architecture is particularly effective for tasks requiring diverse expertise and consensus-building among different specialists.
"},{"location":"swarms/examples/mixture_of_agents/#prerequisites","title":"Prerequisites","text":"pip3 install -U swarms\n
"},{"location":"swarms/examples/mixture_of_agents/#environment-variables","title":"Environment Variables","text":"WORKSPACE_DIR=\"agent_workspace\"\nOPENAI_API_KEY=\"\"\nANTHROPIC_API_KEY=\"\"\nGROQ_API_KEY=\"\"\n
"},{"location":"swarms/examples/mixture_of_agents/#basic-usage","title":"Basic Usage","text":""},{"location":"swarms/examples/mixture_of_agents/#1-initialize-specialized-agents","title":"1. Initialize Specialized Agents","text":"from swarms import Agent, MixtureOfAgents\n\n# Initialize specialized agents\nlegal_expert = Agent(\n agent_name=\"Legal-Expert\",\n system_prompt=\"\"\"You are a legal expert specializing in contract law. Your responsibilities include:\n 1. Analyzing legal documents and contracts\n 2. Identifying potential legal risks\n 3. Ensuring regulatory compliance\n 4. Providing legal recommendations\n 5. Drafting and reviewing legal documents\"\"\",\n model_name=\"gpt-4o\",\n max_loops=1,\n)\n\nfinancial_expert = Agent(\n agent_name=\"Financial-Expert\",\n system_prompt=\"\"\"You are a financial expert specializing in business finance. Your tasks include:\n 1. Analyzing financial implications\n 2. Evaluating costs and benefits\n 3. Assessing financial risks\n 4. Providing financial projections\n 5. Recommending financial strategies\"\"\",\n model_name=\"gpt-4o\",\n max_loops=1,\n)\n\nbusiness_expert = Agent(\n agent_name=\"Business-Expert\",\n system_prompt=\"\"\"You are a business strategy expert. Your focus areas include:\n 1. Analyzing business models\n 2. Evaluating market opportunities\n 3. Assessing competitive advantages\n 4. Providing strategic recommendations\n 5. Planning business development\"\"\",\n model_name=\"gpt-4o\",\n max_loops=1,\n)\n\n# Initialize aggregator agent\naggregator = Agent(\n agent_name=\"Decision-Aggregator\",\n system_prompt=\"\"\"You are a decision aggregator responsible for:\n 1. Synthesizing input from multiple experts\n 2. Resolving conflicting viewpoints\n 3. Prioritizing recommendations\n 4. Providing coherent final decisions\n 5. Ensuring comprehensive coverage of all aspects\"\"\",\n model_name=\"gpt-4o\",\n max_loops=1,\n)\n
"},{"location":"swarms/examples/mixture_of_agents/#2-create-and-run-mixtureofagents","title":"2. Create and Run MixtureOfAgents","text":"# Create list of specialist agents\nspecialists = [legal_expert, financial_expert, business_expert]\n\n# Initialize the mixture of agents\nmoa = MixtureOfAgents(\n agents=specialists,\n aggregator_agent=aggregator,\n layers=3,\n)\n\n# Run the analysis\nresult = moa.run(\n \"Analyze the proposed merger between Company A and Company B, considering legal, financial, and business aspects.\"\n)\n
"},{"location":"swarms/examples/mixture_of_agents/#advanced-usage","title":"Advanced Usage","text":""},{"location":"swarms/examples/mixture_of_agents/#1-custom-configuration-with-system-prompts","title":"1. Custom Configuration with System Prompts","text":"# Initialize MixtureOfAgents with custom aggregator prompt\nmoa = MixtureOfAgents(\n agents=specialists,\n aggregator_agent=aggregator,\n aggregator_system_prompt=\"\"\"As the decision aggregator, synthesize the analyses from all specialists into a coherent recommendation:\n 1. Summarize key points from each specialist\n 2. Identify areas of agreement and disagreement\n 3. Weigh different perspectives\n 4. Provide a balanced final recommendation\n 5. Highlight key risks and opportunities\"\"\",\n layers=3,\n)\n\nresult = moa.run(\"Evaluate the potential acquisition of StartupX\")\n
"},{"location":"swarms/examples/mixture_of_agents/#2-error-handling-and-validation","title":"2. Error Handling and Validation","text":"try:\n moa = MixtureOfAgents(\n agents=specialists,\n aggregator_agent=aggregator,\n layers=3,\n verbose=True,\n )\n\n result = moa.run(\"Complex analysis task\")\n\n # Validate and process results\n if result:\n print(\"Analysis complete:\")\n print(result)\n else:\n print(\"Analysis failed to produce results\")\n\nexcept Exception as e:\n print(f\"Error in analysis: {str(e)}\")\n
"},{"location":"swarms/examples/mixture_of_agents/#best-practices","title":"Best Practices","text":"Set suitable model parameters
Aggregator Configuration:
Configure conflict resolution strategies
Layer Management:
Adjust based on task complexity
Quality Control:
Here's a complete example showing how to use MixtureOfAgents for a comprehensive business analysis:
import os\nfrom swarms import Agent, MixtureOfAgents\n\n# Initialize specialist agents\nmarket_analyst = Agent(\n agent_name=\"Market-Analyst\",\n system_prompt=\"\"\"You are a market analysis specialist focusing on:\n 1. Market size and growth\n 2. Competitive landscape\n 3. Customer segments\n 4. Market trends\n 5. Entry barriers\"\"\",\n model_name=\"gpt-4o\",\n max_loops=1,\n)\n\nfinancial_analyst = Agent(\n agent_name=\"Financial-Analyst\",\n system_prompt=\"\"\"You are a financial analysis expert specializing in:\n 1. Financial performance\n 2. Valuation metrics\n 3. Cash flow analysis\n 4. Investment requirements\n 5. ROI projections\"\"\",\n model_name=\"gpt-4o\",\n max_loops=1,\n)\n\nrisk_analyst = Agent(\n agent_name=\"Risk-Analyst\",\n system_prompt=\"\"\"You are a risk assessment specialist focusing on:\n 1. Market risks\n 2. Operational risks\n 3. Financial risks\n 4. Regulatory risks\n 5. Strategic risks\"\"\",\n model_name=\"gpt-4o\",\n max_loops=1,\n)\n\n# Initialize aggregator\naggregator = Agent(\n agent_name=\"Strategic-Aggregator\",\n system_prompt=\"\"\"You are a strategic decision aggregator responsible for:\n 1. Synthesizing specialist analyses\n 2. Identifying key insights\n 3. Evaluating trade-offs\n 4. Making recommendations\n 5. Providing action plans\"\"\",\n model_name=\"gpt-4o\",\n max_loops=1,\n)\n\n# Create and configure MixtureOfAgents\ntry:\n moa = MixtureOfAgents(\n agents=[market_analyst, financial_analyst, risk_analyst],\n aggregator_agent=aggregator,\n aggregator_system_prompt=\"\"\"Synthesize the analyses from all specialists to provide:\n 1. Comprehensive situation analysis\n 2. Key opportunities and risks\n 3. Strategic recommendations\n 4. Implementation considerations\n 5. Success metrics\"\"\",\n layers=3,\n verbose=True,\n )\n\n # Run the analysis\n result = moa.run(\n \"\"\"Evaluate the business opportunity for expanding into the electric vehicle market:\n 1. Market potential and competition\n 2. 
Financial requirements and projections\n 3. Risk assessment and mitigation strategies\"\"\"\n )\n\n # Process and display results\n print(\"\\nComprehensive Analysis Results:\")\n print(\"=\" * 50)\n print(result)\n print(\"=\" * 50)\n\nexcept Exception as e:\n print(f\"Error during analysis: {str(e)}\")\n
This comprehensive guide demonstrates how to effectively use the MixtureOfAgents architecture for complex analysis tasks requiring multiple expert perspectives and consensus-building.
"},{"location":"swarms/examples/moa_example/","title":"Mixture of Agents Example","text":"The Mixture of Agents (MoA) is a sophisticated multi-agent architecture that implements parallel processing with iterative refinement. This approach processes multiple specialized agents simultaneously, concatenates their outputs, and then performs multiple parallel runs to achieve consensus or enhanced results.
"},{"location":"swarms/examples/moa_example/#how-it-works","title":"How It Works","text":"n
layers/iterations to improve quality. This architecture is particularly effective for complex tasks that benefit from diverse perspectives and iterative improvement, such as financial analysis, risk assessment, and multi-faceted problem solving.
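Conceptually, the parallel-then-aggregate flow can be sketched with plain Python callables standing in for agents. This is a simplification of what MixtureOfAgents does internally, with invented stub names; the real class handles LLM calls, prompting, and error handling:

```python
from concurrent.futures import ThreadPoolExecutor

def run_layer(agents, task):
    """Run every agent on the task concurrently and concatenate outputs."""
    with ThreadPoolExecutor() as pool:
        outputs = list(pool.map(lambda agent: agent(task), agents))
    return "\n".join(outputs)

def mixture_of_agents(agents, aggregator, task, layers=2):
    """Each layer feeds the concatenated outputs forward as added context."""
    context = task
    for _ in range(layers):
        context = task + "\n\nPrior analyses:\n" + run_layer(agents, context)
    return aggregator(context)

# Stub "agents" for demonstration only
agents = [
    lambda t: f"metrics view of: {t[:20]}",
    lambda t: f"risk view of: {t[:20]}",
]
aggregator = lambda ctx: f"Synthesis based on {ctx.count('view of')} analyses"
print(mixture_of_agents(agents, aggregator, "Assess portfolio risk", layers=2))
```

The real MixtureOfAgents examples below replace the stubs with Agent instances and the closure with a dedicated aggregator agent.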
"},{"location":"swarms/examples/moa_example/#installation","title":"Installation","text":"Install the swarms package using pip:
pip install -U swarms\n
"},{"location":"swarms/examples/moa_example/#basic-setup","title":"Basic Setup","text":"WORKSPACE_DIR=\"agent_workspace\"\nANTHROPIC_API_KEY=\"\"\n
"},{"location":"swarms/examples/moa_example/#code","title":"Code","text":"from swarms import Agent, MixtureOfAgents\n\n# Agent 1: Risk Metrics Calculator\nrisk_metrics_agent = Agent(\n agent_name=\"Risk-Metrics-Calculator\",\n agent_description=\"Calculates key risk metrics like VaR, Sharpe ratio, and volatility\",\n system_prompt=\"\"\"You are a risk metrics specialist. Calculate and explain:\n - Value at Risk (VaR)\n - Sharpe ratio\n - Volatility\n - Maximum drawdown\n - Beta coefficient\n\n Provide clear, numerical results with brief explanations.\"\"\",\n max_loops=1,\n # model_name=\"gpt-4o-mini\",\n random_model_enabled=True,\n dynamic_temperature_enabled=True,\n output_type=\"str-all-except-first\",\n max_tokens=4096,\n)\n\n# Agent 2: Portfolio Risk Analyzer\nportfolio_risk_agent = Agent(\n agent_name=\"Portfolio-Risk-Analyzer\",\n agent_description=\"Analyzes portfolio diversification and concentration risk\",\n system_prompt=\"\"\"You are a portfolio risk analyst. Focus on:\n - Portfolio diversification analysis\n - Concentration risk assessment\n - Correlation analysis\n - Sector/asset allocation risk\n - Liquidity risk evaluation\n\n Provide actionable insights for risk reduction.\"\"\",\n max_loops=1,\n # model_name=\"gpt-4o-mini\",\n random_model_enabled=True,\n dynamic_temperature_enabled=True,\n output_type=\"str-all-except-first\",\n max_tokens=4096,\n)\n\n# Agent 3: Market Risk Monitor\nmarket_risk_agent = Agent(\n agent_name=\"Market-Risk-Monitor\",\n agent_description=\"Monitors market conditions and identifies risk factors\",\n system_prompt=\"\"\"You are a market risk monitor. 
Identify and assess:\n - Market volatility trends\n - Economic risk factors\n - Geopolitical risks\n - Interest rate risks\n - Currency risks\n\n Provide current risk alerts and trends.\"\"\",\n max_loops=1,\n # model_name=\"gpt-4o-mini\",\n random_model_enabled=True,\n dynamic_temperature_enabled=True,\n output_type=\"str-all-except-first\",\n max_tokens=4096,\n)\n\n\nswarm = MixtureOfAgents(\n agents=[\n risk_metrics_agent,\n portfolio_risk_agent,\n market_risk_agent,\n ],\n layers=1,\n max_loops=1,\n output_type=\"final\",\n)\n\n\nout = swarm.run(\n \"Calculate VaR and Sharpe ratio for a portfolio with 15% annual return and 20% volatility\"\n)\n\nprint(out)\n
"},{"location":"swarms/examples/moa_example/#support-and-community","title":"Support and Community","text":"If you're facing issues or want to learn more, check out the following resources to join our Discord, stay updated on Twitter, and watch tutorials on YouTube!
Platform Link Description \ud83d\udcda Documentation docs.swarms.world Official documentation and guides \ud83d\udcdd Blog Medium Latest updates and technical articles \ud83d\udcac Discord Join Discord Live chat and community support \ud83d\udc26 Twitter @kyegomez Latest news and announcements \ud83d\udc65 LinkedIn The Swarm Corporation Professional network and updates \ud83d\udcfa YouTube Swarms Channel Tutorials and demos \ud83c\udfab Events Sign up here Join our community events"},{"location":"swarms/examples/model_providers/","title":"Model Providers Overview","text":"Swarms supports a vast array of model providers, giving you the flexibility to choose the best model for your specific use case. Whether you need high-performance inference, cost-effective solutions, or specialized capabilities, Swarms has you covered.
"},{"location":"swarms/examples/model_providers/#supported-model-providers","title":"Supported Model Providers","text":"Provider Description Documentation OpenAI Industry-leading language models including GPT-4, GPT-4o, and GPT-4o-mini. Perfect for general-purpose tasks, creative writing, and complex reasoning. OpenAI Integration Anthropic/Claude Advanced AI models known for their safety, helpfulness, and reasoning capabilities. Claude models excel at analysis, coding, and creative tasks. Claude Integration Groq Ultra-fast inference platform offering real-time AI responses. Ideal for applications requiring low latency and high throughput. Groq Integration Cohere Enterprise-grade language models with strong performance on business applications, text generation, and semantic search. Cohere Integration DeepSeek Advanced reasoning models including the DeepSeek Reasoner (R1). Excellent for complex problem-solving and analytical tasks. DeepSeek Integration Ollama Local model deployment platform allowing you to run open-source models on your own infrastructure. No API keys required. Ollama Integration OpenRouter Unified API gateway providing access to hundreds of models from various providers through a single interface. OpenRouter Integration XAI xAI's Grok models offering unique capabilities for research, analysis, and creative tasks with advanced reasoning abilities. XAI Integration vLLM High-performance inference library for serving large language models with optimized memory usage and throughput. vLLM Integration Llama4 Meta's latest open-source language models including Llama-4-Maverick and Llama-4-Scout variants with expert routing capabilities. Llama4 Integration"},{"location":"swarms/examples/model_providers/#quick-start","title":"Quick Start","text":"All model providers follow a consistent pattern in Swarms. Here's the basic template:
from swarms import Agent\nimport os\nfrom dotenv import load_dotenv\n\nload_dotenv()\n\n# Initialize agent with your chosen model\nagent = Agent(\n agent_name=\"Your-Agent-Name\",\n model_name=\"gpt-4o-mini\", # Varies by provider\n system_prompt=\"Your system prompt here\",\n agent_description=\"Description of what your agent does.\",\n)\n\n# Run your agent\nresponse = agent.run(\"Your query here\")\n
"},{"location":"swarms/examples/model_providers/#model-selection-guide","title":"Model Selection Guide","text":""},{"location":"swarms/examples/model_providers/#for-high-performance-applications","title":"For High-Performance Applications","text":"OpenAI GPT-4o: Best overall performance and reasoning
Anthropic Claude: Excellent safety and analysis capabilities
DeepSeek R1: Advanced reasoning and problem-solving
OpenAI GPT-4o-mini: Great performance at lower cost
Ollama: Free local deployment
OpenRouter: Access to cost-effective models
Groq: Ultra-fast inference
vLLM: Optimized for high throughput
Llama4: Expert routing for complex workflows
XAI Grok: Advanced research capabilities
Cohere: Strong business applications
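Because every provider in Swarms is selected through the same model_name parameter, acting on this guide is a one-line change per agent. A minimal sketch (plain Python, no network calls; the pick_model helper and its priority labels are illustrative, not part of the Swarms API) using model identifiers that appear elsewhere on this page:

```python
# Illustrative lookup built from the selection guide above; not a Swarms API.
MODEL_GUIDE = {
    "performance": "gpt-4o",                 # best overall performance and reasoning
    "analysis": "claude-3-sonnet-20240229",  # safety and analysis capabilities
    "cost": "gpt-4o-mini",                   # great performance at lower cost
    "local": "ollama/llama2",                # free local deployment, no API key
}

def pick_model(priority: str) -> str:
    """Return a model_name for the given priority, defaulting to gpt-4o-mini."""
    return MODEL_GUIDE.get(priority, "gpt-4o-mini")

print(pick_model("local"))  # ollama/llama2
```

The returned string can be passed directly as model_name in the Quick Start template above.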
Most providers require API keys. Add them to your .env
file:
# OpenAI\nOPENAI_API_KEY=your_openai_key\n\n# Anthropic\nANTHROPIC_API_KEY=your_anthropic_key\n\n# Groq\nGROQ_API_KEY=your_groq_key\n\n# Cohere\nCOHERE_API_KEY=your_cohere_key\n\n# DeepSeek\nDEEPSEEK_API_KEY=your_deepseek_key\n\n# OpenRouter\nOPENROUTER_API_KEY=your_openrouter_key\n\n# XAI\nXAI_API_KEY=your_xai_key\n
No API Key Required
Ollama and vLLM can be run locally without API keys, making them perfect for development and testing.
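If a configuration mixes hosted and local models, a small guard can skip the key check for local ones. A sketch (illustrative helper, not part of Swarms; it recognizes only the LiteLLM-style ollama/ prefix used in the examples on this page):

```python
def requires_api_key(model_name: str) -> bool:
    """Illustrative check: LiteLLM-style local model names need no hosted API key."""
    # Locally served vLLM models are reached via their own endpoint instead of a prefix.
    local_prefixes = ("ollama/",)
    return not model_name.startswith(local_prefixes)

print(requires_api_key("ollama/llama2"))  # False: runs locally, no key needed
print(requires_api_key("gpt-4o-mini"))    # True: set OPENAI_API_KEY first
```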
"},{"location":"swarms/examples/model_providers/#advanced-features","title":"Advanced Features","text":""},{"location":"swarms/examples/model_providers/#multi-model-workflows","title":"Multi-Model Workflows","text":"Swarms allows you to create workflows that use different models for different tasks:
from swarms import Agent, ConcurrentWorkflow\n\n# Research agent using Claude for analysis\nresearch_agent = Agent(\n agent_name=\"Research-Agent\",\n model_name=\"claude-3-sonnet-20240229\",\n system_prompt=\"You are a research expert.\"\n)\n\n# Creative agent using GPT-4o for content generation\ncreative_agent = Agent(\n agent_name=\"Creative-Agent\", \n model_name=\"gpt-4o\",\n system_prompt=\"You are a creative content expert.\"\n)\n\n# Workflow combining both agents\nworkflow = ConcurrentWorkflow(\n name=\"Research-Creative-Workflow\",\n agents=[research_agent, creative_agent]\n)\n
"},{"location":"swarms/examples/model_providers/#model-routing","title":"Model Routing","text":"Automatically route tasks to the most appropriate model:
from swarms import Agent, ModelRouter\n\n# Define model preferences for different task types\nmodel_router = ModelRouter(\n models={\n \"analysis\": \"claude-3-sonnet-20240229\",\n \"creative\": \"gpt-4o\", \n \"fast\": \"gpt-4o-mini\",\n \"local\": \"ollama/llama2\"\n }\n)\n\n# Agent will automatically choose the best model\nagent = Agent(\n agent_name=\"Smart-Agent\",\n llm=model_router,\n system_prompt=\"You are a versatile assistant.\"\n)\n
"},{"location":"swarms/examples/model_providers/#getting-help","title":"Getting Help","text":"Documentation: Each provider has detailed documentation with examples
Community: Join the Swarms community for support and best practices
Issues: Report bugs and request features on GitHub
Discussions: Share your use cases and learn from others
Ready to Get Started?
Choose a model provider from the table above and follow the detailed integration guide. Each provider offers unique capabilities that can enhance your Swarms applications.
"},{"location":"swarms/examples/multi_agent_router_minimal/","title":"MultiAgentRouter Minimal Example","text":"This example shows how to route a task to the most suitable agent using SwarmRouter
with swarm_type=\"MultiAgentRouter\"
.
from swarms import Agent\nfrom swarms.structs.swarm_router import SwarmRouter\n\nagents = [\n Agent(\n agent_name=\"Researcher\",\n system_prompt=\"Answer questions briefly.\",\n model_name=\"gpt-4o-mini\",\n ),\n Agent(\n agent_name=\"Coder\",\n system_prompt=\"Write small Python functions.\",\n model_name=\"gpt-4o-mini\",\n ),\n]\n\nrouter = SwarmRouter(\n name=\"multi-agent-router-demo\",\n description=\"Routes tasks to the most suitable agent\",\n agents=agents,\n swarm_type=\"MultiAgentRouter\"\n)\n\nresult = router.run(\"Write a function that adds two numbers\")\nprint(result)\n
View the source on GitHub.
"},{"location":"swarms/examples/multiple_images/","title":"Processing Multiple Images","text":"This tutorial shows how to process multiple images with a single agent using Swarms' multi-modal capabilities. You'll learn to configure an agent for batch image analysis, enabling efficient processing for quality control, object detection, or image comparison tasks.
"},{"location":"swarms/examples/multiple_images/#installation","title":"Installation","text":"Install the swarms package using pip:
pip install -U swarms\n
"},{"location":"swarms/examples/multiple_images/#basic-setup","title":"Basic Setup","text":"WORKSPACE_DIR=\"agent_workspace\"\nANTHROPIC_API_KEY=\"\"\n
"},{"location":"swarms/examples/multiple_images/#code","title":"Code","text":"Create a list of images by their file paths
Pass it into the Agent.run(imgs=[str])
parameter
Activate summarize_multiple_images=True
if you want the agent to output a summary of the image analyses
from swarms import Agent\nfrom swarms.prompts.logistics import (\n Quality_Control_Agent_Prompt,\n)\n\n\n# Image for analysis\nfactory_image = \"image.jpg\"\n\n# Quality control agent\nquality_control_agent = Agent(\n agent_name=\"Quality Control Agent\",\n agent_description=\"A quality control agent that analyzes images and provides a detailed report on the quality of the product in the image.\",\n model_name=\"claude-3-5-sonnet-20240620\",\n system_prompt=Quality_Control_Agent_Prompt,\n multi_modal=True,\n max_loops=1,\n output_type=\"str-all-except-first\",\n summarize_multiple_images=True,\n)\n\n\nresponse = quality_control_agent.run(\n task=\"what is in the image?\",\n imgs=[factory_image, factory_image],\n)\n\nprint(response)\n
"},{"location":"swarms/examples/multiple_images/#support-and-community","title":"Support and Community","text":"If you're facing issues or want to learn more, check out the following resources to join our Discord, stay updated on Twitter, and watch tutorials on YouTube!
Platform Link Description \ud83d\udcda Documentation docs.swarms.world Official documentation and guides \ud83d\udcdd Blog Medium Latest updates and technical articles \ud83d\udcac Discord Join Discord Live chat and community support \ud83d\udc26 Twitter @kyegomez Latest news and announcements \ud83d\udc65 LinkedIn The Swarm Corporation Professional network and updates \ud83d\udcfa YouTube Swarms Channel Tutorials and demos \ud83c\udfab Events Sign up here Join our community events"},{"location":"swarms/examples/ollama/","title":"Agent with Ollama","text":"ollama/llama2
follows LiteLLM conventions
from swarms import Agent\nimport os\nfrom dotenv import load_dotenv\n\nload_dotenv()\n\n# Initialize the agent\nagent = Agent(\n agent_name=\"Financial-Analysis-Agent\",\n model_name=\"ollama/llama2\",\n system_prompt=\"Agent system prompt here\",\n agent_description=\"Agent performs financial analysis.\",\n)\n\n# Run a query\nagent.run(\"What are the components of a startup's stock incentive equity plan?\")\n
"},{"location":"swarms/examples/openai_example/","title":"Agent with GPT-4o-Mini","text":"Add OPENAI_API_KEY=\"your_key\"
to your .env
file, then set model_name to gpt-4o-mini
or gpt-4o
from swarms import Agent\n\nAgent(\n agent_name=\"Stock-Analysis-Agent\",\n model_name=\"gpt-4o-mini\",\n max_loops=\"auto\",\n interactive=True,\n streaming_on=True,\n).run(\"What are 5 hft algorithms\")\n
"},{"location":"swarms/examples/openrouter/","title":"Agent with OpenRouter","text":"Add your OPENROUTER_API_KEY
in the .env
file
Select your model_name like openrouter/google/palm-2-chat-bison
follows LiteLLM conventions
Execute your agent!
from swarms import Agent\nimport os\nfrom dotenv import load_dotenv\n\nload_dotenv()\n\n# Initialize the agent\nagent = Agent(\n agent_name=\"Financial-Analysis-Agent\",\n model_name=\"openrouter/google/palm-2-chat-bison\",\n system_prompt=\"Agent system prompt here\",\n agent_description=\"Agent performs financial analysis.\",\n)\n\n# Run a query\nagent.run(\"What are the components of a startup's stock incentive equity plan?\")\n
"},{"location":"swarms/examples/quant_crypto_agent/","title":"Quant Crypto Agent","text":"Agent
class from the swarms
library.fetch_htx_data
and coin_gecko_coin_api
tools to fetch data from the htx
and CoinGecko
APIs.Agent
class to create an agent that can analyze the current state of a crypto asset.swarms
library.swarms_tools
library..env
file with the OPENAI_API_KEY
environment variable.
pip install swarms swarms-tools python-dotenv\n
"},{"location":"swarms/examples/quant_crypto_agent/#code","title":"Code:","text":"from swarms import Agent\nfrom dotenv import load_dotenv\nfrom swarms_tools import fetch_htx_data, coin_gecko_coin_api\n\nload_dotenv()\n\nCRYPTO_ANALYST_SYSTEM_PROMPT = \"\"\"\nYou are an expert cryptocurrency financial analyst with deep expertise in:\n1. Technical Analysis\n - Chart patterns and indicators (RSI, MACD, Bollinger Bands)\n - Volume analysis and market momentum\n - Support and resistance levels\n - Trend analysis and price action\n\n2. Fundamental Analysis\n - Tokenomics evaluation\n - Network metrics (TVL, daily active users, transaction volume)\n - Protocol revenue and growth metrics\n - Market capitalization analysis\n - Token utility and use cases\n\n3. Market Analysis\n - Market sentiment analysis\n - Correlation with broader crypto market\n - Impact of macro events\n - Institutional adoption metrics\n - DeFi and NFT market analysis\n\n4. Risk Assessment\n - Volatility metrics\n - Liquidity analysis\n - Smart contract risks\n - Regulatory considerations\n - Exchange exposure risks\n\n5. Data Analysis Methods\n - On-chain metrics analysis\n - Whale wallet tracking\n - Exchange inflow/outflow\n - Mining/Staking statistics\n - Network health indicators\n\nWhen analyzing crypto assets, always:\n1. Start with a comprehensive market overview\n2. Examine both on-chain and off-chain metrics\n3. Consider multiple timeframes (short, medium, long-term)\n4. Evaluate risk-reward ratios\n5. Assess market sentiment and momentum\n6. Consider regulatory and security factors\n7. Analyze correlations with BTC, ETH, and traditional markets\n8. Examine liquidity and volume profiles\n9. Review recent protocol developments and updates\n10. 
Consider macroeconomic factors\n\nFormat your analysis with:\n- Clear section headings\n- Relevant metrics and data points\n- Risk warnings and disclaimers\n- Price action analysis\n- Market sentiment summary\n- Technical indicators\n- Fundamental factors\n- Clear recommendations with rationale\n\nRemember to:\n- Always provide data-driven insights\n- Include both bullish and bearish scenarios\n- Highlight key risk factors\n- Consider market cycles and seasonality\n- Maintain objectivity in analysis\n- Cite sources for data and claims\n- Update analysis based on new market conditions\n\"\"\"\n\n# Initialize the crypto analysis agent\nagent = Agent(\n agent_name=\"Crypto-Analysis-Expert\",\n agent_description=\"Expert cryptocurrency financial analyst and market researcher\",\n system_prompt=CRYPTO_ANALYST_SYSTEM_PROMPT,\n max_loops=\"auto\",\n model_name=\"gpt-4o\",\n dynamic_temperature_enabled=True,\n user_name=\"crypto_analyst\",\n output_type=\"str\",\n interactive=True,\n)\n\nprint(fetch_htx_data(\"sol\"))\nprint(coin_gecko_coin_api(\"solana\"))\n\n# Example usage\nagent.run(\n f\"\"\"\n Analyze the current state of Solana (SOL), including:\n 1. Technical analysis of price action\n 2. On-chain metrics and network health\n 3. Recent protocol developments\n 4. Market sentiment\n 5. Risk factors\n Please provide a comprehensive analysis with data-driven insights.\n\n # Solana CoinGecko Data\n Real-time data from Solana CoinGecko: \\n {coin_gecko_coin_api(\"solana\")}\n\n \"\"\"\n)\n
"},{"location":"swarms/examples/sequential_example/","title":"Sequential Workflow Example","text":"Overview
Learn how to create a sequential workflow with multiple specialized AI agents using the Swarms framework. This example demonstrates how to set up a legal practice workflow with different types of legal agents working in sequence.
"},{"location":"swarms/examples/sequential_example/#prerequisites","title":"Prerequisites","text":"Before You Begin
Make sure you have:
Python 3.7+ installed
A valid API key for your model provider
The Swarms package installed
pip3 install -U swarms\n
"},{"location":"swarms/examples/sequential_example/#environment-setup","title":"Environment Setup","text":"API Key Configuration
Set your API key in the .env
file:
OPENAI_API_KEY=\"your-api-key-here\"\n
"},{"location":"swarms/examples/sequential_example/#code-implementation","title":"Code Implementation","text":""},{"location":"swarms/examples/sequential_example/#import-required-modules","title":"Import Required Modules","text":"from swarms import Agent, SequentialWorkflow\n
"},{"location":"swarms/examples/sequential_example/#configure-agents","title":"Configure Agents","text":"Legal Agent Configuration
Here's how to set up your specialized legal agents:
# Litigation Agent\nlitigation_agent = Agent(\n agent_name=\"Alex Johnson\",\n system_prompt=\"As a Litigator, you specialize in navigating the complexities of lawsuits. Your role involves analyzing intricate facts, constructing compelling arguments, and devising effective case strategies to achieve favorable outcomes for your clients.\",\n model_name=\"gpt-4o-mini\",\n max_loops=1,\n)\n\n# Corporate Attorney Agent\ncorporate_agent = Agent(\n agent_name=\"Emily Carter\",\n system_prompt=\"As a Corporate Attorney, you provide expert legal advice on business law matters. You guide clients on corporate structure, governance, compliance, and transactions, ensuring their business operations align with legal requirements.\",\n model_name=\"gpt-4o-mini\",\n max_loops=1,\n)\n\n# IP Attorney Agent\nip_agent = Agent(\n agent_name=\"Michael Smith\",\n system_prompt=\"As an IP Attorney, your expertise lies in protecting intellectual property rights. You handle various aspects of IP law, including patents, trademarks, copyrights, and trade secrets, helping clients safeguard their innovations.\",\n model_name=\"gpt-4o-mini\",\n max_loops=1,\n)\n
"},{"location":"swarms/examples/sequential_example/#initialize-sequential-workflow","title":"Initialize Sequential Workflow","text":"Workflow Setup
Configure the SequentialWorkflow with your agents:
swarm = SequentialWorkflow(\n agents=[litigation_agent, corporate_agent, ip_agent],\n name=\"litigation-practice\",\n description=\"Handle all aspects of litigation with a focus on thorough legal analysis and effective case management.\",\n)\n
"},{"location":"swarms/examples/sequential_example/#run-the-workflow","title":"Run the Workflow","text":"Execute the Workflow
Start the sequential workflow:
swarm.run(\"Create a report on how to patent an all-new AI invention and what platforms to use and more.\")\n
"},{"location":"swarms/examples/sequential_example/#complete-example","title":"Complete Example","text":"Full Implementation
Here's the complete code combined:
from swarms import Agent, SequentialWorkflow\n\n# Core Legal Agent Definitions with enhanced system prompts\nlitigation_agent = Agent(\n agent_name=\"Alex Johnson\",\n system_prompt=\"As a Litigator, you specialize in navigating the complexities of lawsuits. Your role involves analyzing intricate facts, constructing compelling arguments, and devising effective case strategies to achieve favorable outcomes for your clients.\",\n model_name=\"gpt-4o-mini\",\n max_loops=1,\n)\n\ncorporate_agent = Agent(\n agent_name=\"Emily Carter\",\n system_prompt=\"As a Corporate Attorney, you provide expert legal advice on business law matters. You guide clients on corporate structure, governance, compliance, and transactions, ensuring their business operations align with legal requirements.\",\n model_name=\"gpt-4o-mini\",\n max_loops=1,\n)\n\nip_agent = Agent(\n agent_name=\"Michael Smith\",\n system_prompt=\"As an IP Attorney, your expertise lies in protecting intellectual property rights. You handle various aspects of IP law, including patents, trademarks, copyrights, and trade secrets, helping clients safeguard their innovations.\",\n model_name=\"gpt-4o-mini\",\n max_loops=1,\n)\n\n# Initialize and run the workflow\nswarm = SequentialWorkflow(\n agents=[litigation_agent, corporate_agent, ip_agent],\n name=\"litigation-practice\",\n description=\"Handle all aspects of litigation with a focus on thorough legal analysis and effective case management.\",\n)\n\nswarm.run(\"Create a report on how to patent an all-new AI invention and what platforms to use and more.\")\n
"},{"location":"swarms/examples/sequential_example/#agent-roles","title":"Agent Roles","text":"Specialized Legal Agents
Agent Role Expertise Alex Johnson Litigator Lawsuit navigation, case strategy Emily Carter Corporate Attorney Business law, compliance Michael Smith IP Attorney Patents, trademarks, copyrights"},{"location":"swarms/examples/sequential_example/#configuration-options","title":"Configuration Options","text":"Key Parameters
Parameter Description Defaultagent_name
Human-readable name for the agent Required system_prompt
Detailed role description and expertise Required model_name
LLM model to use \"gpt-4o-mini\" max_loops
Maximum number of processing loops 1"},{"location":"swarms/examples/sequential_example/#next-steps","title":"Next Steps","text":"What to Try Next
Common Issues
Ensure your API key is correctly set in the .env
file
Check that all required dependencies are installed
Verify that your model provider's API is accessible
Monitor agent responses for quality and relevance
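The first checklist item can be automated with a short startup check before constructing the workflow. A minimal sketch (standard library only; the required-variable list mirrors the .env example above and should be adjusted per provider):

```python
import os

REQUIRED_ENV_VARS = ["OPENAI_API_KEY"]  # extend per provider, e.g. "ANTHROPIC_API_KEY"

def missing_env_vars(required=REQUIRED_ENV_VARS) -> list:
    """Return names of required environment variables that are unset or empty."""
    return [name for name in required if not os.getenv(name)]

missing = missing_env_vars()
print("Missing keys:", missing)  # an empty list means the environment is ready
```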
The SwarmRouter is a flexible routing system designed to manage different types of swarms for task execution. It provides a unified interface to interact with various swarm types, including AgentRearrange
, MixtureOfAgents
, SpreadSheetSwarm
, SequentialWorkflow
, and ConcurrentWorkflow
.
pip3 install -U swarms\n
"},{"location":"swarms/examples/swarm_router/#environment-variables","title":"Environment Variables","text":"WORKSPACE_DIR=\"agent_workspace\"\nOPENAI_API_KEY=\"\"\nANTHROPIC_API_KEY=\"\"\nGROQ_API_KEY=\"\"\n
"},{"location":"swarms/examples/swarm_router/#basic-usage","title":"Basic Usage","text":""},{"location":"swarms/examples/swarm_router/#1-initialize-specialized-agents","title":"1. Initialize Specialized Agents","text":"from swarms import Agent\nfrom swarms.structs.swarm_router import SwarmRouter, SwarmType\n\n# Initialize specialized agents\ndata_extractor_agent = Agent(\n agent_name=\"Data-Extractor\",\n system_prompt=\"You are a data extraction specialist...\",\n model_name=\"gpt-4o\",\n max_loops=1,\n)\n\nsummarizer_agent = Agent(\n agent_name=\"Document-Summarizer\",\n system_prompt=\"You are a document summarization expert...\",\n model_name=\"gpt-4o\",\n max_loops=1,\n)\n\nfinancial_analyst_agent = Agent(\n agent_name=\"Financial-Analyst\",\n system_prompt=\"You are a financial analysis specialist...\",\n model_name=\"gpt-4o\",\n max_loops=1,\n)\n
"},{"location":"swarms/examples/swarm_router/#2-create-swarmrouter-with-sequential-workflow","title":"2. Create SwarmRouter with Sequential Workflow","text":"sequential_router = SwarmRouter(\n name=\"SequentialRouter\",\n description=\"Process tasks in sequence\",\n agents=[data_extractor_agent, summarizer_agent, financial_analyst_agent],\n swarm_type=SwarmType.SequentialWorkflow,\n max_loops=1\n)\n\n# Run a task\nresult = sequential_router.run(\"Analyze and summarize the quarterly financial report\")\n
"},{"location":"swarms/examples/swarm_router/#3-create-swarmrouter-with-concurrent-workflow","title":"3. Create SwarmRouter with Concurrent Workflow","text":"concurrent_router = SwarmRouter(\n name=\"ConcurrentRouter\",\n description=\"Process tasks concurrently\",\n agents=[data_extractor_agent, summarizer_agent, financial_analyst_agent],\n swarm_type=SwarmType.ConcurrentWorkflow,\n max_loops=1\n)\n\n# Run a task\nresult = concurrent_router.run(\"Evaluate multiple aspects of the company simultaneously\")\n
"},{"location":"swarms/examples/swarm_router/#4-create-swarmrouter-with-agentrearrange","title":"4. Create SwarmRouter with AgentRearrange","text":"rearrange_router = SwarmRouter(\n name=\"RearrangeRouter\",\n description=\"Dynamically rearrange agents for optimal task processing\",\n agents=[data_extractor_agent, summarizer_agent, financial_analyst_agent],\n swarm_type=SwarmType.AgentRearrange,\n flow=f\"{data_extractor_agent.agent_name} -> {summarizer_agent.agent_name} -> {financial_analyst_agent.agent_name}\",\n max_loops=1\n)\n\n# Run a task\nresult = rearrange_router.run(\"Process and analyze company documents\")\n
"},{"location":"swarms/examples/swarm_router/#5-create-swarmrouter-with-mixtureofagents","title":"5. Create SwarmRouter with MixtureOfAgents","text":"mixture_router = SwarmRouter(\n name=\"MixtureRouter\",\n description=\"Combine multiple expert agents\",\n agents=[data_extractor_agent, summarizer_agent, financial_analyst_agent],\n swarm_type=SwarmType.MixtureOfAgents,\n max_loops=1\n)\n\n# Run a task\nresult = mixture_router.run(\"Provide comprehensive analysis of company performance\")\n
"},{"location":"swarms/examples/swarm_router/#advanced-features","title":"Advanced Features","text":""},{"location":"swarms/examples/swarm_router/#1-error-handling-and-logging","title":"1. Error Handling and Logging","text":"try:\n result = router.run(\"Complex analysis task\")\n\n # Retrieve and print logs\n for log in router.get_logs():\n print(f\"{log.timestamp} - {log.level}: {log.message}\")\nexcept Exception as e:\n print(f\"Error occurred: {str(e)}\")\n
"},{"location":"swarms/examples/swarm_router/#2-custom-configuration","title":"2. Custom Configuration","text":"router = SwarmRouter(\n name=\"CustomRouter\",\n description=\"Custom router configuration\",\n agents=[data_extractor_agent, summarizer_agent, financial_analyst_agent],\n swarm_type=SwarmType.SequentialWorkflow,\n max_loops=3,\n autosave=True,\n verbose=True,\n output_type=\"json\"\n)\n
"},{"location":"swarms/examples/swarm_router/#best-practices","title":"Best Practices","text":""},{"location":"swarms/examples/swarm_router/#choose-the-appropriate-swarm-type-based-on-your-task-requirements","title":"Choose the appropriate swarm type based on your task requirements:","text":"Swarm Type Use Case SequentialWorkflow
Tasks that need to be processed in order ConcurrentWorkflow
Independent tasks that can be processed simultaneously AgentRearrange
Tasks requiring dynamic agent organization MixtureOfAgents
Complex tasks needing multiple expert perspectives"},{"location":"swarms/examples/swarm_router/#configure-agents-appropriately","title":"Configure agents appropriately:","text":"Configuration Aspect Description Agent Names & Descriptions Set meaningful and descriptive names that reflect the agent's role and purpose System Prompts Define clear, specific prompts that outline the agent's responsibilities and constraints Model Parameters Configure appropriate parameters like temperature, max_tokens, and other model-specific settings"},{"location":"swarms/examples/swarm_router/#implement-proper-error-handling","title":"Implement proper error handling:","text":"Error Handling Practice Description Try-Except Blocks Implement proper exception handling with try-except blocks Log Monitoring Regularly monitor and analyze system logs for potential issues Edge Case Handling Implement specific handling for edge cases and unexpected scenarios"},{"location":"swarms/examples/swarm_router/#optimize-performance","title":"Optimize performance:","text":"Performance Optimization Description Concurrent Processing Utilize parallel processing capabilities when tasks can be executed simultaneously Max Loops Configuration Set appropriate iteration limits based on task complexity and requirements Resource Management Continuously monitor and optimize system resource utilization"},{"location":"swarms/examples/swarm_router/#example-implementation","title":"Example Implementation","text":"Here's a complete example showing how to use SwarmRouter in a real-world scenario:
import os\nfrom swarms import Agent\nfrom swarms.structs.swarm_router import SwarmRouter, SwarmType\n\n# Initialize specialized agents\nresearch_agent = Agent(\n agent_name=\"ResearchAgent\",\n system_prompt=\"You are a research specialist...\",\n model_name=\"gpt-4o\",\n max_loops=1\n)\n\nanalysis_agent = Agent(\n agent_name=\"AnalysisAgent\",\n system_prompt=\"You are an analysis expert...\",\n model_name=\"gpt-4o\",\n max_loops=1\n)\n\nsummary_agent = Agent(\n agent_name=\"SummaryAgent\",\n system_prompt=\"You are a summarization specialist...\",\n model_name=\"gpt-4o\",\n max_loops=1\n)\n\n# Create router with sequential workflow\nrouter = SwarmRouter(\n name=\"ResearchAnalysisRouter\",\n description=\"Process research and analysis tasks\",\n agents=[research_agent, analysis_agent, summary_agent],\n swarm_type=SwarmType.SequentialWorkflow,\n max_loops=1,\n verbose=True\n)\n\n# Run complex task\ntry:\n result = router.run(\n \"Research and analyze the impact of AI on healthcare, \"\n \"providing a comprehensive summary of findings.\"\n )\n print(\"Task Result:\", result)\n\n # Print logs\n for log in router.get_logs():\n print(f\"{log.timestamp} - {log.level}: {log.message}\")\n\nexcept Exception as e:\n print(f\"Error processing task: {str(e)}\")\n
This comprehensive guide demonstrates how to effectively use the SwarmRouter in various scenarios, making it easier to manage and orchestrate multiple agents for complex tasks.
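For teams standardizing on SwarmRouter, the swarm-type table in Best Practices can double as a runtime lookup. A sketch (plain Python; describe_swarm_type is an illustrative helper, not part of the Swarms API):

```python
# Use cases copied from the Best Practices table above; purely illustrative.
SWARM_TYPE_GUIDE = {
    "SequentialWorkflow": "Tasks that need to be processed in order",
    "ConcurrentWorkflow": "Independent tasks that can be processed simultaneously",
    "AgentRearrange": "Tasks requiring dynamic agent organization",
    "MixtureOfAgents": "Complex tasks needing multiple expert perspectives",
}

def describe_swarm_type(swarm_type: str) -> str:
    """Return the documented use case for a swarm type, if known."""
    return SWARM_TYPE_GUIDE.get(swarm_type, "Unknown swarm type")

print(describe_swarm_type("ConcurrentWorkflow"))
```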
"},{"location":"swarms/examples/swarms_api_finance/","title":"Finance Swarm Example","text":".env
file in the root directory and add your API key:SWARMS_API_KEY=<your-api-key>\n
import os\nimport requests\nfrom dotenv import load_dotenv\nimport json\n\nload_dotenv()\n\n# Retrieve API key securely from .env\nAPI_KEY = os.getenv(\"SWARMS_API_KEY\")\nBASE_URL = \"https://api.swarms.world\"\n\n# Headers for secure API communication\nheaders = {\"x-api-key\": API_KEY, \"Content-Type\": \"application/json\"}\n\ndef create_financial_swarm(equity_data: str):\n \"\"\"\n Constructs and triggers a full-stack financial swarm consisting of three agents:\n Equity Analyst, Risk Assessor, and Market Advisor.\n Each agent is provided with a comprehensive, detailed system prompt to ensure high reliability.\n \"\"\"\n\n payload = {\n \"swarm_name\": \"Enhanced Financial Analysis Swarm\",\n \"description\": \"A swarm of agents specialized in performing comprehensive financial analysis, risk assessment, and market recommendations.\",\n \"agents\": [\n {\n \"agent_name\": \"Equity Analyst\",\n \"description\": \"Agent specialized in analyzing equities data to provide insights on stock performance and valuation.\",\n \"system_prompt\": (\n \"You are an experienced equity analyst with expertise in financial markets and stock valuation. \"\n \"Your role is to analyze the provided equities data, including historical performance, financial statements, and market trends. \"\n \"Provide a detailed analysis of the stock's potential, including valuation metrics and growth prospects. \"\n \"Consider macroeconomic factors, industry trends, and company-specific news. Your analysis should be clear, actionable, and well-supported by data.\"\n ),\n \"model_name\": \"openai/gpt-4o\",\n \"role\": \"worker\",\n \"max_loops\": 1,\n \"max_tokens\": 4000,\n \"temperature\": 0.3,\n \"auto_generate_prompt\": False\n },\n {\n \"agent_name\": \"Risk Assessor\",\n \"description\": \"Agent responsible for evaluating the risks associated with equity investments.\",\n \"system_prompt\": (\n \"You are a certified risk management professional with expertise in financial risk assessment. 
\"\n \"Your task is to evaluate the risks associated with the provided equities data, including market risk, credit risk, and operational risk. \"\n \"Provide a comprehensive risk analysis, including potential scenarios and their impact on investment performance. \"\n \"Your output should be detailed, reliable, and compliant with current risk management standards.\"\n ),\n \"model_name\": \"openai/gpt-4o\",\n \"role\": \"worker\",\n \"max_loops\": 1,\n \"max_tokens\": 3000,\n \"temperature\": 0.2,\n \"auto_generate_prompt\": False\n },\n {\n \"agent_name\": \"Market Advisor\",\n \"description\": \"Agent dedicated to suggesting investment strategies based on market conditions and equity analysis.\",\n \"system_prompt\": (\n \"You are a knowledgeable market advisor with expertise in investment strategies and portfolio management. \"\n \"Based on the analysis provided by the Equity Analyst and the risk assessment, your task is to recommend a comprehensive investment strategy. \"\n \"Your suggestions should include asset allocation, diversification strategies, and considerations for market conditions. \"\n \"Explain the rationale behind each recommendation and reference relevant market data where applicable. 
\"\n \"Your recommendations should be reliable, detailed, and clearly prioritized based on risk and return.\"\n ),\n \"model_name\": \"openai/gpt-4o\",\n \"role\": \"worker\",\n \"max_loops\": 1,\n \"max_tokens\": 5000,\n \"temperature\": 0.3,\n \"auto_generate_prompt\": False\n }\n ],\n \"max_loops\": 1,\n \"swarm_type\": \"SequentialWorkflow\",\n \"task\": equity_data,\n }\n\n # Payload includes the equity data as the task to be processed by the swarm\n\n response = requests.post(\n f\"{BASE_URL}/v1/swarm/completions\",\n headers=headers,\n json=payload,\n )\n\n if response.status_code == 200:\n print(\"Swarm successfully executed!\")\n return json.dumps(response.json(), indent=4)\n else:\n print(f\"Error {response.status_code}: {response.text}\")\n return None\n\n\n# Example Equity Data for the Swarm to analyze\nif __name__ == \"__main__\":\n equity_data = (\n \"Analyze the equity data for Company XYZ, which has shown a 15% increase in revenue over the last quarter, \"\n \"with a P/E ratio of 20 and a market cap of $1 billion. Consider the current market conditions and potential risks.\"\n )\n\n financial_output = create_financial_swarm(equity_data)\n print(financial_output)\n
python financial_swarm.py\n
"},{"location":"swarms/examples/swarms_api_medical/","title":"Medical Swarm Example","text":".env
file in the root directory and add your API key:SWARMS_API_KEY=<your-api-key>\n
import os\nimport requests\nfrom dotenv import load_dotenv\nimport json\n\nload_dotenv()\n\n# Retrieve API key securely from .env\nAPI_KEY = os.getenv(\"SWARMS_API_KEY\")\nBASE_URL = \"https://api.swarms.world\"\n\n# Headers for secure API communication\nheaders = {\"x-api-key\": API_KEY, \"Content-Type\": \"application/json\"}\n\ndef create_medical_swarm(patient_case: str):\n \"\"\"\n Constructs and triggers a full-stack medical swarm consisting of three agents:\n Diagnostic Specialist, Medical Coder, and Treatment Advisor.\n Each agent is provided with a comprehensive, detailed system prompt to ensure high reliability.\n \"\"\"\n\n payload = {\n \"swarm_name\": \"Enhanced Medical Diagnostic Swarm\",\n \"description\": \"A swarm of agents specialized in performing comprehensive medical diagnostics, analysis, and coding.\",\n \"agents\": [\n {\n \"agent_name\": \"Diagnostic Specialist\",\n \"description\": \"Agent specialized in analyzing patient history, symptoms, lab results, and imaging data to produce accurate diagnoses.\",\n \"system_prompt\": (\n \"You are an experienced, board-certified medical diagnostician with over 20 years of clinical practice. \"\n \"Your role is to analyze all available patient information\u2014including history, symptoms, lab tests, and imaging results\u2014\"\n \"with extreme attention to detail and clinical nuance. Provide a comprehensive differential diagnosis considering \"\n \"common, uncommon, and rare conditions. Always cross-reference clinical guidelines and evidence-based medicine. \"\n \"Explain your reasoning step by step and provide a final prioritized list of potential diagnoses along with their likelihood. \"\n \"Consider patient demographics, comorbidities, and risk factors. 
Your diagnosis should be reliable, clear, and actionable.\"\n ),\n \"model_name\": \"openai/gpt-4o\",\n \"role\": \"worker\",\n \"max_loops\": 1,\n \"max_tokens\": 4000,\n \"temperature\": 0.3,\n \"auto_generate_prompt\": False\n },\n {\n \"agent_name\": \"Medical Coder\",\n \"description\": \"Agent responsible for translating medical diagnoses and procedures into accurate standardized medical codes (ICD-10, CPT, etc.).\",\n \"system_prompt\": (\n \"You are a certified and experienced medical coder, well-versed in ICD-10, CPT, and other coding systems. \"\n \"Your task is to convert detailed medical diagnoses and treatment procedures into precise, standardized codes. \"\n \"Consider all aspects of the clinical documentation including severity, complications, and comorbidities. \"\n \"Provide clear explanations for the codes chosen, referencing the latest coding guidelines and payer policies where relevant. \"\n \"Your output should be comprehensive, reliable, and fully compliant with current medical coding standards.\"\n ),\n \"model_name\": \"openai/gpt-4o\",\n \"role\": \"worker\",\n \"max_loops\": 1,\n \"max_tokens\": 3000,\n \"temperature\": 0.2,\n \"auto_generate_prompt\": False\n },\n {\n \"agent_name\": \"Treatment Advisor\",\n \"description\": \"Agent dedicated to suggesting evidence-based treatment options, including pharmaceutical and non-pharmaceutical interventions.\",\n \"system_prompt\": (\n \"You are a highly knowledgeable medical treatment specialist with expertise in the latest clinical guidelines and research. \"\n \"Based on the diagnostic conclusions provided, your task is to recommend a comprehensive treatment plan. \"\n \"Your suggestions should include first-line therapies, potential alternative treatments, and considerations for patient-specific factors \"\n \"such as allergies, contraindications, and comorbidities. Explain the rationale behind each treatment option and reference clinical guidelines where applicable. 
\"\n \"Your recommendations should be reliable, detailed, and clearly prioritized based on efficacy and safety.\"\n ),\n \"model_name\": \"openai/gpt-4o\",\n \"role\": \"worker\",\n \"max_loops\": 1,\n \"max_tokens\": 5000,\n \"temperature\": 0.3,\n \"auto_generate_prompt\": False\n }\n ],\n \"max_loops\": 1,\n \"swarm_type\": \"SequentialWorkflow\",\n \"task\": patient_case,\n }\n\n # Payload includes the patient case as the task to be processed by the swar\n\n response = requests.post(\n f\"{BASE_URL}/v1/swarm/completions\",\n headers=headers,\n json=payload,\n )\n\n if response.status_code == 200:\n print(\"Swarm successfully executed!\")\n return json.dumps(response.json(), indent=4)\n else:\n print(f\"Error {response.status_code}: {response.text}\")\n return None\n\n\n# Example Patient Task for the Swarm to diagnose and analyze\nif __name__ == \"__main__\":\n patient_case = (\n \"Patient is a 55-year-old male presenting with severe chest pain, shortness of breath, elevated blood pressure, \"\n \"nausea, and a family history of cardiovascular disease. Blood tests show elevated troponin levels, and EKG indicates ST-segment elevations. \"\n \"The patient is currently unstable. Provide a detailed diagnosis, coding, and treatment plan.\"\n )\n\n diagnostic_output = create_medical_swarm(patient_case)\n print(diagnostic_output)\n
python medical_swarm.py\n
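The create_medical_swarm function above returns the raw response as an indented JSON string, or None on failure. Because the exact response schema is defined by the API, a safe way to inspect a result is to walk only the top-level keys it actually contains. A minimal sketch (the sample payload at the bottom is a stand-in for illustration, not a real API response):

```python
import json

def summarize_response(raw):
    """Print the top-level keys of a swarm response string.

    `raw` is the JSON string returned by create_medical_swarm, or None
    if the request failed. No field names are assumed -- we only walk
    whatever keys the API actually returned.
    """
    if raw is None:
        print("Swarm call failed; see the error printed above.")
        return
    data = json.loads(raw)
    for key, value in data.items():
        # Truncate long values so the summary stays readable.
        print(f"{key}: {str(value)[:60]}")

# Stand-in payload for illustration; a real run would pass the output
# of create_medical_swarm(patient_case).
summarize_response(json.dumps({"status": "ok", "output": "diagnosis text"}))
```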
"},{"location":"swarms/examples/swarms_api_ml_model/","title":"ML Model Code Generation Swarm Example","text":".env
file in the root directory and add your API key:SWARMS_API_KEY=<your-api-key>\n
import os\nimport requests\nfrom dotenv import load_dotenv\nimport json\n\nload_dotenv()\n\n# Retrieve API key securely from .env\nAPI_KEY = os.getenv(\"SWARMS_API_KEY\")\nBASE_URL = \"https://api.swarms.world\"\n\n# Headers for secure API communication\nheaders = {\"x-api-key\": API_KEY, \"Content-Type\": \"application/json\"}\n\ndef create_ml_code_swarm(task_description: str):\n \"\"\"\n Constructs and triggers a swarm of agents for generating a complete machine learning project using PyTorch.\n The swarm includes:\n - Model Code Generator: Generates the PyTorch model architecture code.\n - Training Script Generator: Creates a comprehensive training, validation, and testing script using PyTorch.\n - Unit Test Creator: Produces extensive unit tests and helper code, ensuring correctness of the model and training scripts.\n Each agent's prompt is highly detailed to output only Python code, with exclusive use of PyTorch.\n \"\"\"\n payload = {\n \"swarm_name\": \"Comprehensive PyTorch Code Generation Swarm\",\n \"description\": (\n \"A production-grade swarm of agents tasked with generating a complete machine learning project exclusively using PyTorch. \"\n \"The swarm is divided into distinct roles: one agent generates the core model architecture code; \"\n \"another creates the training and evaluation scripts including data handling; and a third produces \"\n \"extensive unit tests and helper functions. Each agent's instructions are highly detailed to ensure that the \"\n \"output is strictly Python code with PyTorch as the only deep learning framework.\"\n ),\n \"agents\": [\n {\n \"agent_name\": \"Model Code Generator\",\n \"description\": \"Generates the complete machine learning model architecture code using PyTorch.\",\n \"system_prompt\": (\n \"You are an expert machine learning engineer with a deep understanding of PyTorch. 
\"\n \"Your task is to generate production-ready Python code that defines a complete deep learning model architecture exclusively using PyTorch. \"\n \"The code must include all necessary imports, class or function definitions, and should be structured in a modular and scalable manner. \"\n \"Follow PEP8 standards and output only code\u2014no comments, explanations, or extraneous text. \"\n \"Your model definition should include proper layer initialization, activation functions, dropout, and any custom components as required. \"\n \"Ensure that the entire output is strictly Python code based on PyTorch.\"\n ),\n \"model_name\": \"openai/gpt-4o\",\n \"role\": \"worker\",\n \"max_loops\": 2,\n \"max_tokens\": 4000,\n \"temperature\": 0.3,\n \"auto_generate_prompt\": False\n },\n {\n \"agent_name\": \"Training Script Generator\",\n \"description\": \"Creates a comprehensive training, validation, and testing script using PyTorch.\",\n \"system_prompt\": (\n \"You are a highly skilled software engineer specializing in machine learning pipeline development with PyTorch. \"\n \"Your task is to generate Python code that builds a complete training pipeline using PyTorch. \"\n \"The script must include robust data loading, preprocessing, augmentation, and a complete training loop, along with validation and testing procedures. \"\n \"All necessary imports should be included and the code should assume that the model code from the previous agent is available via proper module imports. \"\n \"Follow best practices for reproducibility and modularity, and output only code without any commentary or non-code text. 
\"\n \"The entire output must be strictly Python code that uses PyTorch for all deep learning operations.\"\n ),\n \"model_name\": \"openai/gpt-4o\",\n \"role\": \"worker\",\n \"max_loops\": 1,\n \"max_tokens\": 3000,\n \"temperature\": 0.3,\n \"auto_generate_prompt\": False\n },\n {\n \"agent_name\": \"Unit Test Creator\",\n \"description\": \"Develops a suite of unit tests and helper functions for verifying the PyTorch model and training pipeline.\",\n \"system_prompt\": (\n \"You are an experienced software testing expert with extensive experience in writing unit tests for machine learning projects in PyTorch. \"\n \"Your task is to generate Python code that consists solely of unit tests and any helper functions required to validate both the PyTorch model and the training pipeline. \"\n \"Utilize testing frameworks such as pytest or unittest. The tests should cover key functionalities such as model instantiation, forward pass correctness, \"\n \"training loop execution, data preprocessing verification, and error handling. \"\n \"Ensure that your output is only Python code, without any additional text or commentary, and that it is ready to be integrated into a CI/CD pipeline. 
\"\n \"The entire output must exclusively use PyTorch as the deep learning framework.\"\n ),\n \"model_name\": \"openai/gpt-4o\",\n \"role\": \"worker\",\n \"max_loops\": 1,\n \"max_tokens\": 3000,\n \"temperature\": 0.3,\n \"auto_generate_prompt\": False\n }\n ],\n \"max_loops\": 3,\n \"swarm_type\": \"SequentialWorkflow\" # Sequential workflow: later agents can assume outputs from earlier ones\n }\n\n # The task description provides the high-level business requirement for the swarm.\n payload = {\n \"task\": task_description,\n \"swarm\": payload\n }\n\n response = requests.post(\n f\"{BASE_URL}/swarm/completion\",\n headers=headers,\n json=payload,\n )\n\n if response.status_code == 200:\n print(\"PyTorch Code Generation Swarm successfully executed!\")\n return json.dumps(response.json(), indent=4)\n else:\n print(f\"Error {response.status_code}: {response.text}\")\n return None\n\n# Example business task for the swarm: generating a full-stack machine learning pipeline for image classification using PyTorch.\nif __name__ == \"__main__\":\n task_description = (\n \"Develop a full-stack machine learning pipeline for image classification using PyTorch. \"\n \"The project must include a deep learning model using a CNN architecture for image recognition, \"\n \"a comprehensive training script for data preprocessing, augmentation, training, validation, and testing, \"\n \"and an extensive suite of unit tests to validate every component. \"\n \"Each component's output must be strictly Python code with no additional text or commentary, using PyTorch exclusively.\"\n )\n\n output = create_ml_code_swarm(task_description)\n print(output)\n
"},{"location":"swarms/examples/swarms_dao/","title":"Swarms DAO Example","text":"This example demonstrates how to create a swarm of agents to collaborate on a task. The agents are designed to work together to create a comprehensive strategy for a DAO focused on decentralized governance for climate action.
You can customize the agents and their system prompts to fit your specific needs.
This example uses the deepseek-reasoner
model, a large language model optimized for reasoning tasks.
import random\nfrom swarms import Agent\n\n# System prompts for each agent\nMARKETING_AGENT_SYS_PROMPT = \"\"\"\nYou are the Marketing Strategist Agent for a DAO. Your role is to develop, implement, and optimize all marketing and branding strategies to align with the DAO's mission and vision. The DAO is focused on decentralized governance for climate action, funding projects aimed at reducing carbon emissions, and incentivizing community participation through its native token.\n\n### Objectives:\n1. **Brand Awareness**: Build a globally recognized and trusted brand for the DAO.\n2. **Community Growth**: Expand the DAO's community by onboarding individuals passionate about climate action and blockchain technology.\n3. **Campaign Execution**: Launch high-impact marketing campaigns on platforms like Twitter, Discord, and YouTube to engage and retain community members.\n4. **Partnerships**: Identify and build partnerships with like-minded organizations, NGOs, and influencers.\n5. **Content Strategy**: Design educational and engaging content, including infographics, blog posts, videos, and AMAs.\n\n### Instructions:\n- Thoroughly analyze the product description and DAO mission.\n- Collaborate with the Growth, Product, Treasury, and Operations agents to align marketing strategies with overall goals.\n- Create actionable steps for social media growth, community engagement, and brand storytelling.\n- Leverage analytics to refine marketing strategies, focusing on measurable KPIs like engagement, conversion rates, and member retention.\n- Suggest innovative methods to make the DAO's mission resonate with a broader audience (e.g., gamified incentives, contests, or viral campaigns).\n- Ensure every strategy emphasizes transparency, sustainability, and long-term impact.\n\"\"\"\n\nPRODUCT_AGENT_SYS_PROMPT = \"\"\"\nYou are the Product Manager Agent for a DAO focused on decentralized governance for climate action. 
Your role is to design, manage, and optimize the DAO's product roadmap. This includes defining key features, prioritizing user needs, and ensuring product alignment with the DAO\u2019s mission of reducing carbon emissions and incentivizing community participation.\n\n### Objectives:\n1. **User-Centric Design**: Identify the DAO community\u2019s needs and design features to enhance their experience.\n2. **Roadmap Prioritization**: Develop a prioritized product roadmap based on community feedback and alignment with climate action goals.\n3. **Integration**: Suggest technical solutions and tools for seamless integration with other platforms and blockchains.\n4. **Continuous Improvement**: Regularly evaluate product features and recommend optimizations to improve usability, engagement, and adoption.\n\n### Instructions:\n- Collaborate with the Marketing and Growth agents to understand user feedback and market trends.\n- Engage the Treasury Agent to ensure product development aligns with budget constraints and revenue goals.\n- Suggest mechanisms for incentivizing user engagement, such as staking rewards or gamified participation.\n- Design systems that emphasize decentralization, transparency, and scalability.\n- Provide detailed feature proposals, technical specifications, and timelines for implementation.\n- Ensure all features are optimized for both experienced blockchain users and newcomers to Web3.\n\"\"\"\n\nGROWTH_AGENT_SYS_PROMPT = \"\"\"\nYou are the Growth Strategist Agent for a DAO focused on decentralized governance for climate action. Your primary role is to identify and implement growth strategies to increase the DAO\u2019s user base and engagement.\n\n### Objectives:\n1. **User Acquisition**: Identify effective strategies to onboard more users to the DAO.\n2. **Retention**: Suggest ways to improve community engagement and retain active members.\n3. 
**Data-Driven Insights**: Leverage data analytics to identify growth opportunities and areas of improvement.\n4. **Collaborative Growth**: Work with other agents to align growth efforts with marketing, product development, and treasury goals.\n\n### Instructions:\n- Collaborate with the Marketing Agent to optimize campaigns for user acquisition.\n- Analyze user behavior and suggest actionable insights to improve retention.\n- Recommend partnerships with influential figures or organizations to enhance the DAO's visibility.\n- Propose growth experiments (A/B testing, new incentives, etc.) and analyze their effectiveness.\n- Suggest tools for data collection and analysis, ensuring privacy and transparency.\n- Ensure growth strategies align with the DAO's mission of sustainability and climate action.\n\"\"\"\n\nTREASURY_AGENT_SYS_PROMPT = \"\"\"\nYou are the Treasury Management Agent for a DAO focused on decentralized governance for climate action. Your role is to oversee the DAO's financial operations, including budgeting, funding allocation, and financial reporting.\n\n### Objectives:\n1. **Financial Transparency**: Maintain clear and detailed reports of the DAO's financial status.\n2. **Budget Management**: Allocate funds strategically to align with the DAO's goals and priorities.\n3. **Fundraising**: Identify and recommend strategies for fundraising to ensure the DAO's financial sustainability.\n4. 
**Cost Optimization**: Suggest ways to reduce operational costs without sacrificing quality.\n\n### Instructions:\n- Collaborate with all other agents to align funding with the DAO's mission and strategic goals.\n- Propose innovative fundraising campaigns (e.g., NFT drops, token sales) to generate revenue.\n- Analyze financial risks and suggest mitigation strategies.\n- Ensure all recommendations prioritize the DAO's mission of reducing carbon emissions and driving global climate action.\n- Provide periodic financial updates and propose budget reallocations based on current needs.\n\"\"\"\n\nOPERATIONS_AGENT_SYS_PROMPT = \"\"\"\nYou are the Operations Coordinator Agent for a DAO focused on decentralized governance for climate action. Your role is to ensure smooth day-to-day operations, coordinate workflows, and manage governance processes.\n\n### Objectives:\n1. **Workflow Optimization**: Streamline operational processes to maximize efficiency and effectiveness.\n2. **Task Coordination**: Manage and delegate tasks to ensure timely delivery of goals.\n3. **Governance**: Oversee governance processes, including proposal management and voting mechanisms.\n4. 
**Communication**: Ensure seamless communication between all agents and community members.\n\n### Instructions:\n- Collaborate with other agents to align operations with DAO objectives.\n- Facilitate communication and task coordination between Marketing, Product, Growth, and Treasury agents.\n- Create efficient workflows to handle DAO proposals and governance activities.\n- Suggest tools or platforms to improve operational efficiency.\n- Provide regular updates on task progress and flag any blockers or risks.\n\"\"\"\n\n# Initialize agents\nmarketing_agent = Agent(\n agent_name=\"Marketing-Agent\",\n system_prompt=MARKETING_AGENT_SYS_PROMPT,\n model_name=\"deepseek/deepseek-reasoner\",\n autosave=True,\n dashboard=False,\n verbose=True,\n)\n\nproduct_agent = Agent(\n agent_name=\"Product-Agent\",\n system_prompt=PRODUCT_AGENT_SYS_PROMPT,\n model_name=\"deepseek/deepseek-reasoner\",\n autosave=True,\n dashboard=False,\n verbose=True,\n)\n\ngrowth_agent = Agent(\n agent_name=\"Growth-Agent\",\n system_prompt=GROWTH_AGENT_SYS_PROMPT,\n model_name=\"deepseek/deepseek-reasoner\",\n autosave=True,\n dashboard=False,\n verbose=True,\n)\n\ntreasury_agent = Agent(\n agent_name=\"Treasury-Agent\",\n system_prompt=TREASURY_AGENT_SYS_PROMPT,\n model_name=\"deepseek/deepseek-reasoner\",\n autosave=True,\n dashboard=False,\n verbose=True,\n)\n\noperations_agent = Agent(\n agent_name=\"Operations-Agent\",\n system_prompt=OPERATIONS_AGENT_SYS_PROMPT,\n model_name=\"deepseek/deepseek-reasoner\",\n autosave=True,\n dashboard=False,\n verbose=True,\n)\n\nagents = [marketing_agent, product_agent, growth_agent, treasury_agent, operations_agent]\n\n\nclass DAOSwarmRunner:\n \"\"\"\n A class to manage and run a swarm of agents in a discussion.\n \"\"\"\n\n def __init__(self, agents: list, max_loops: int = 5, shared_context: str = \"\") -> None:\n \"\"\"\n Initializes the DAO Swarm Runner.\n\n Args:\n agents (list): A list of agents in the swarm.\n max_loops (int, optional): The maximum 
number of discussion loops between agents. Defaults to 5.\n shared_context (str, optional): The shared context for all agents to base their discussion on. Defaults to an empty string.\n \"\"\"\n self.agents = agents\n self.max_loops = max_loops\n self.shared_context = shared_context\n self.discussion_history = []\n\n def run(self, task: str) -> str:\n \"\"\"\n Runs the swarm in a random discussion.\n\n Args:\n task (str): The task or context that agents will discuss.\n\n Returns:\n str: The final discussion output after all loops.\n \"\"\"\n print(f\"Task: {task}\")\n print(\"Initializing Random Discussion...\")\n\n # Initialize the discussion with the shared context\n current_message = f\"Task: {task}\\nContext: {self.shared_context}\"\n self.discussion_history.append(current_message)\n\n # Run the agents in a randomized discussion\n for loop in range(self.max_loops):\n print(f\"\\n--- Loop {loop + 1}/{self.max_loops} ---\")\n # Choose a random agent\n agent = random.choice(self.agents)\n print(f\"Agent {agent.agent_name} is responding...\")\n\n # Run the agent and get a response\n response = agent.run(current_message)\n print(f\"Agent {agent.agent_name} says:\\n{response}\\n\")\n\n # Append the response to the discussion history\n self.discussion_history.append(f\"{agent.agent_name}: {response}\")\n\n # Update the current message for the next agent\n current_message = response\n\n print(\"\\n--- Discussion Complete ---\")\n return \"\\n\".join(self.discussion_history)\n\n\nswarm = DAOSwarmRunner(agents=agents, max_loops=1, shared_context=\"\")\n\n# User input for product description\nproduct_description = \"\"\"\nThe DAO is focused on decentralized governance for climate action. 
\nIt funds projects aimed at reducing carbon emissions and incentivizes community participation with a native token.\n\"\"\"\n\n# Assign a shared context for all agents\nswarm.shared_context = product_description\n\n# Run the swarm\ntask = \"\"\"\nAnalyze the product description and create a collaborative strategy for marketing, product, growth, treasury, and operations. Ensure all recommendations align with the DAO's mission of reducing carbon emissions.\n\"\"\"\noutput = swarm.run(task)\n\n# Print the swarm output\nprint(\"Collaborative Strategy Output:\\n\", output)\n
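The runner above picks a speaker at random on each loop. If you want every agent to speak in a fixed order instead, the selection step can be swapped for deterministic cycling. A minimal sketch of just that change (the agent names below are placeholders):

```python
from itertools import cycle

def round_robin_order(agents, max_loops):
    """Return the speaking order for a deterministic variant of
    DAOSwarmRunner.run: agents take turns in list order instead of
    being drawn with random.choice."""
    order = cycle(agents)
    return [next(order) for _ in range(max_loops)]

# With three agents and five loops, the order wraps around:
print(round_robin_order(["marketing", "product", "growth"], 5))
# ['marketing', 'product', 'growth', 'marketing', 'product']
```

Inside DAOSwarmRunner, the equivalent change is replacing `random.choice(self.agents)` with `next(self._order)` after initializing `self._order = cycle(self.agents)` in `__init__`.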
"},{"location":"swarms/examples/swarms_of_browser_agents/","title":"Swarms x Browser Use","text":"Import required modules
Configure your agent first by making a new class
Set your API keys for your model provider in the .env
file such as OPENAI_API_KEY=\"sk-\"
Configure your ConcurrentWorkflow
pip install swarms browser-use langchain-openai\n
","text":""},{"location":"swarms/examples/swarms_of_browser_agents/#main","title":"Main","text":"import asyncio\n\nfrom browser_use import Agent\nfrom dotenv import load_dotenv\nfrom langchain_openai import ChatOpenAI\n\nfrom swarms import ConcurrentWorkflow\n\nload_dotenv()\n\n\nclass BrowserAgent:\n def __init__(self, agent_name: str = \"BrowserAgent\"):\n self.agent_name = agent_name\n\n async def browser_agent_test(self, task: str):\n agent = Agent(\n task=task,\n llm=ChatOpenAI(model=\"gpt-4o\"),\n )\n result = await agent.run()\n return result\n\n def run(self, task: str):\n return asyncio.run(self.browser_agent_test(task))\n\n\nswarm = ConcurrentWorkflow(\n agents=[BrowserAgent() for _ in range(3)],\n)\n\nswarm.run(\n \"\"\"\n Go to pump.fun.\n\n 2. Make an account: use email: \"test@test.com\" and password: \"test1234\"\n\n 3. Make a coin called and give it a cool description and etc. Fill in the form\n\n 4. Sit back and watch the coin grow in value.\n\n \"\"\"\n)\n
"},{"location":"swarms/examples/swarms_tools_htx/","title":"Swarms Tools Example with HTX + CoinGecko","text":"pip3 install swarms swarms-tools
OPENAI_API_KEY
to your .env
filefrom swarms import Agent\nfrom swarms.prompts.finance_agent_sys_prompt import (\n FINANCIAL_AGENT_SYS_PROMPT,\n)\nfrom swarms_tools import (\n coin_gecko_coin_api,\n fetch_htx_data,\n)\n\n\n# Initialize the agent\nagent = Agent(\n agent_name=\"Financial-Analysis-Agent\",\n agent_description=\"Personal finance advisor agent\",\n system_prompt=FINANCIAL_AGENT_SYS_PROMPT,\n max_loops=1,\n model_name=\"gpt-4o\",\n dynamic_temperature_enabled=True,\n user_name=\"swarms_corp\",\n return_step_meta=False,\n output_type=\"str\", # supported output types: \"str\", \"json\", \"dict\", \"csv\", \"yaml\"\n auto_generate_prompt=False, # Auto-generate the prompt from the agent's name, description, and system prompt\n max_tokens=4000, # max output tokens\n saved_state_path=\"agent_00.json\",\n interactive=False,\n)\n\nagent.run(\n f\"Analyze the $swarms token on HTX with data: {fetch_htx_data('swarms')}. Additionally, consider the following CoinGecko data: {coin_gecko_coin_api('swarms')}\"\n)\n
"},{"location":"swarms/examples/swarms_tools_htx_gecko/","title":"Swarms Tools Example with HTX + CoinGecko","text":"pip3 install swarms swarms-tools
OPENAI_API_KEY
to your .env
fileswarms_tools_htx_gecko.py
from swarms import Agent\nfrom swarms.prompts.finance_agent_sys_prompt import (\n FINANCIAL_AGENT_SYS_PROMPT,\n)\nfrom swarms_tools import (\n fetch_stock_news,\n coin_gecko_coin_api,\n fetch_htx_data,\n)\n\n# Initialize the agent\nagent = Agent(\n agent_name=\"Financial-Analysis-Agent\",\n agent_description=\"Personal finance advisor agent\",\n system_prompt=FINANCIAL_AGENT_SYS_PROMPT,\n max_loops=1,\n model_name=\"gpt-4o\",\n dynamic_temperature_enabled=True,\n user_name=\"swarms_corp\",\n retry_attempts=3,\n context_length=8192,\n return_step_meta=False,\n output_type=\"str\", # supported output types: \"str\", \"json\", \"dict\", \"csv\", \"yaml\"\n auto_generate_prompt=False, # Auto-generate the prompt from the agent's name, description, and system prompt\n max_tokens=4000, # max output tokens\n saved_state_path=\"agent_00.json\",\n interactive=False,\n tools=[fetch_stock_news, coin_gecko_coin_api, fetch_htx_data],\n)\n\nagent.run(\"Analyze the $swarms token on HTX\")\n
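The tools parameter above takes plain Python callables such as fetch_stock_news. A custom tool follows the same shape: a typed function with a docstring the agent can read. A minimal sketch using made-up placeholder data rather than a real exchange call (the function name and prices are illustrative, not part of swarms-tools):

```python
def fetch_token_price(symbol: str) -> str:
    """Return a price quote for `symbol`.

    Placeholder implementation: a real tool would query an exchange
    API; this stand-in only shows the callable shape that an Agent's
    `tools` list expects (typed signature plus docstring).
    """
    prices = {"swarms": "0.042"}  # made-up example data
    return f"{symbol}: ${prices.get(symbol, 'unknown')}"

# The function can then be appended to the agent's tools, e.g.
# tools=[fetch_stock_news, coin_gecko_coin_api, fetch_token_price]
print(fetch_token_price("swarms"))  # swarms: $0.042
```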
"},{"location":"swarms/examples/templates_index/","title":"The Swarms Index","text":"The Swarms Index is a comprehensive catalog of repositories under The Swarm Corporation, showcasing a wide array of tools, frameworks, and templates designed for building, deploying, and managing autonomous AI agents and multi-agent systems. These repositories focus on enterprise-grade solutions, spanning industries like healthcare, finance, marketing, and more, with an emphasis on scalability, security, and performance. Many repositories include templates to help developers quickly set up production-ready applications.
Name Description Link Phala-Deployment-Template A guide and template for running Swarms Agents in a Trusted Execution Environment (TEE) using Phala Cloud, ensuring secure and isolated execution. https://github.com/The-Swarm-Corporation/Phala-Deployment-Template Swarms-API-Status-Page A status page for monitoring the health and performance of the Swarms API. https://github.com/The-Swarm-Corporation/Swarms-API-Status-Page Swarms-API-Phala-Template A deployment solution template for running Swarms API on Phala Cloud, optimized for secure and scalable agent orchestration. https://github.com/The-Swarm-Corporation/Swarms-API-Phala-Template DevSwarm Develop production-grade applications effortlessly with a single prompt, powered by a swarm of v0-driven autonomous agents operating 24/7 for fully autonomous software development. https://github.com/The-Swarm-Corporation/DevSwarm Enterprise-Grade-Agents-Course A comprehensive course teaching students to build, deploy, and manage autonomous agents for enterprise workflows using the Swarms library, focusing on scalability and integration. https://github.com/The-Swarm-Corporation/Enterprise-Grade-Agents-Course agentverse A collection of agents from top frameworks like Langchain, Griptape, and CrewAI, integrated into the Swarms ecosystem. https://github.com/The-Swarm-Corporation/agentverse InsuranceSwarm A swarm of agents to automate document processing and fraud detection in insurance claims. https://github.com/The-Swarm-Corporation/InsuranceSwarm swarms-examples A vast array of examples for enterprise-grade and production-ready applications using the Swarms framework. https://github.com/The-Swarm-Corporation/swarms-examples auto-ai-research-team Automates AI research at an OpenAI level to accelerate innovation using swarms of agents. 
https://github.com/The-Swarm-Corporation/auto-ai-research-team Agents-Beginner-Guide A definitive beginner's guide to AI agents and multi-agent systems, explaining fundamentals and industry applications. https://github.com/The-Swarm-Corporation/Agents-Beginner-Guide Solana-Ecosystem-MCP A collection of Solana tools wrapped in MCP servers for blockchain development. https://github.com/The-Swarm-Corporation/Solana-Ecosystem-MCP automated-crypto-fund A fully automated crypto fund leveraging swarms of LLM agents for real-money trading. https://github.com/The-Swarm-Corporation/automated-crypto-fund Mryaid The first multi-agent social media platform powered by Swarms. https://github.com/The-Swarm-Corporation/Mryaid pharma-swarm A swarm of autonomous agents for chemical analysis in the pharmaceutical industry. https://github.com/The-Swarm-Corporation/pharma-swarm Automated-Prompt-Engineering-Hub A hub for tools and resources focused on automated prompt engineering for generative AI. https://github.com/The-Swarm-Corporation/Automated-Prompt-Engineering-Hub Multi-Agent-Template-App A simple, reliable, and high-performance template for building multi-agent applications. https://github.com/The-Swarm-Corporation/Multi-Agent-Template-App Cookbook Examples and guides for using the Swarms Framework effectively. https://github.com/The-Swarm-Corporation/Cookbook SwarmDB A production-grade message queue system for agent communication and LLM backend load balancing. https://github.com/The-Swarm-Corporation/SwarmDB CryptoTaxSwarm A personal advisory tax swarm for cryptocurrency transactions. https://github.com/The-Swarm-Corporation/CryptoTaxSwarm Multi-Agent-Marketing-Course A course on automating marketing operations with enterprise-grade multi-agent collaboration. https://github.com/The-Swarm-Corporation/Multi-Agent-Marketing-Course Swarms-BrandBook Branding guidelines and assets for Swarms.ai, embodying innovation and collaboration. 
https://github.com/The-Swarm-Corporation/Swarms-BrandBook AgentAPI A definitive API for managing and interacting with AI agents. https://github.com/The-Swarm-Corporation/AgentAPI Research-Paper-Writer-Swarm Automates the creation of high-quality research papers in LaTeX using Swarms agents. https://github.com/The-Swarm-Corporation/Research-Paper-Writer-Swarm swarms-sdk A Python client for the Swarms API, providing a simple interface for managing AI swarms. https://github.com/The-Swarm-Corporation/swarms-sdk FluidAPI A framework for interacting with APIs using natural language, simplifying complex requests. https://github.com/The-Swarm-Corporation/FluidAPI MedicalCoderSwarm A multi-agent system for comprehensive medical diagnosis and coding using specialized AI agents. https://github.com/The-Swarm-Corporation/MedicalCoderSwarm BackTesterAgent An AI-powered backtesting framework for automated trading strategy validation and optimization. https://github.com/The-Swarm-Corporation/BackTesterAgent .ai The first natural language programming language powered by Swarms. https://github.com/The-Swarm-Corporation/.ai AutoHedge An autonomous hedge fund leveraging swarm intelligence for market analysis and trade execution. https://github.com/The-Swarm-Corporation/AutoHedge radiology-swarm A multi-agent system for advanced radiological analysis, diagnosis, and treatment planning. https://github.com/The-Swarm-Corporation/radiology-swarm MedGuard A Python library ensuring HIPAA compliance for LLM agents in healthcare applications. https://github.com/The-Swarm-Corporation/MedGuard doc-master A lightweight Python library for automated file reading and content extraction. https://github.com/The-Swarm-Corporation/doc-master Open-Aladdin An open-source risk-management tool for stock and security risk analysis. https://github.com/The-Swarm-Corporation/Open-Aladdin TickrAgent A scalable Python library for building financial agents for comprehensive stock analysis. 
https://github.com/The-Swarm-Corporation/TickrAgent NewsAgent An enterprise-grade news aggregation agent for fetching, querying, and summarizing news. https://github.com/The-Swarm-Corporation/NewsAgent Research-Paper-Hive A platform for discovering and engaging with relevant research papers efficiently. https://github.com/The-Swarm-Corporation/Research-Paper-Hive MedInsight-Pro Revolutionizes medical research summarization for healthcare innovators. https://github.com/The-Swarm-Corporation/MedInsight-Pro swarms-memory Pre-built wrappers for RAG systems like ChromaDB, Weaviate, and Pinecone. https://github.com/The-Swarm-Corporation/swarms-memory CryptoAgent An enterprise-grade solution for fetching, analyzing, and summarizing cryptocurrency data. https://github.com/The-Swarm-Corporation/CryptoAgent AgentParse A high-performance parsing library for mapping structured data into agent-understandable blocks. https://github.com/The-Swarm-Corporation/AgentParse CodeGuardian An intelligent agent for automating the generation of production-grade unit tests for Python code. https://github.com/The-Swarm-Corporation/CodeGuardian Marketing-Swarm-Template A framework for creating multi-platform marketing content using Swarms AI agents. https://github.com/The-Swarm-Corporation/Marketing-Swarm-Template HTX-Swarm A multi-agent system for real-time market analysis of HTX exchange data. https://github.com/The-Swarm-Corporation/HTX-Swarm MultiModelOptimizer A hierarchical parameter synchronization approach for joint training of transformer models. https://github.com/The-Swarm-Corporation/MultiModelOptimizer MortgageUnderwritingSwarm A multi-agent pipeline for automating mortgage underwriting processes. https://github.com/The-Swarm-Corporation/MortgageUnderwritingSwarm DermaSwarm A multi-agent system for dermatologists to diagnose and treat skin conditions collaboratively. 
https://github.com/The-Swarm-Corporation/DermaSwarm IoTAgents Integrates IoT data with AI agents for seamless parsing and processing of data streams. https://github.com/The-Swarm-Corporation/IoTAgents eth-agent An autonomous agent for analyzing on-chain Ethereum data. https://github.com/The-Swarm-Corporation/eth-agent Medical-Swarm-One-Click A template for building safe, reliable, and production-grade medical multi-agent systems. https://github.com/The-Swarm-Corporation/Medical-Swarm-One-Click Swarms-Example-1-Click-Template A one-click template for building Swarms applications quickly. https://github.com/The-Swarm-Corporation/Swarms-Example-1-Click-Template Custom-Swarms-Spec-Template An official specification template for custom swarm development using the Swarms Framework. https://github.com/The-Swarm-Corporation/Custom-Swarms-Spec-Template Swarms-LlamaIndex-RAG-Template A template for integrating Llama Index into Swarms applications for RAG capabilities. https://github.com/The-Swarm-Corporation/Swarms-LlamaIndex-RAG-Template ForexTreeSwarm A forex market analysis system using a swarm of AI agents organized in a forest structure. https://github.com/The-Swarm-Corporation/ForexTreeSwarm Generalist-Mathematician-Swarm A swarm of agents for solving complex mathematical problems collaboratively. https://github.com/The-Swarm-Corporation/Generalist-Mathematician-Swarm Multi-Modal-XRAY-Diagnosis-Medical-Swarm-Template A template for analyzing X-rays, MRIs, and more using a swarm of agents. https://github.com/The-Swarm-Corporation/Multi-Modal-XRAY-Diagnosis-Medical-Swarm-Template AgentRAGProtocol A protocol for integrating Retrieval-Augmented Generation (RAG) into AI agents. https://github.com/The-Swarm-Corporation/AgentRAGProtocol Multi-Agent-RAG-Template A template for creating collaborative AI agent teams for document processing and analysis. 
https://github.com/The-Swarm-Corporation/Multi-Agent-RAG-Template REACT-Yaml-Agent An implementation of a REACT agent using YAML instead of JSON. https://github.com/The-Swarm-Corporation/REACT-Yaml-Agent SwarmsXGCP A template for deploying Swarms agents on Google Cloud Run. https://github.com/The-Swarm-Corporation/SwarmsXGCP Legal-Swarm-Template A one-click template for building legal-focused Swarms applications. https://github.com/The-Swarm-Corporation/Legal-Swarm-Template swarms_sim A simulation of a swarm of agents in a professional workplace environment. https://github.com/The-Swarm-Corporation/swarms_sim medical-problems A repository for medical problems to create Swarms applications for. https://github.com/The-Swarm-Corporation/medical-problems swarm-ecosystem An overview of the Swarm Ecosystem and its components. https://github.com/The-Swarm-Corporation/swarm-ecosystem swarms_ecosystem_md MDX documentation for the Swarm Ecosystem. https://github.com/The-Swarm-Corporation/swarms_ecosystem_md Hierarchical Swarm Examples Simple, practical examples of HierarchicalSwarm usage for various real-world scenarios. Documentation"},{"location":"swarms/examples/unique_swarms/","title":"Unique Swarms","text":"In this section, we present a diverse collection of unique swarms, each with its own distinct characteristics and applications. These examples are designed to illustrate the versatility and potential of swarm intelligence in various domains. By exploring these examples, you can gain a deeper understanding of how swarms can be leveraged to solve complex problems and improve decision-making processes.
"},{"location":"swarms/examples/unique_swarms/#documentation","title":"Documentation","text":""},{"location":"swarms/examples/unique_swarms/#table-of-contents","title":"Table of Contents","text":"All swarm architectures accept these base parameters:
agents: AgentListType - list of Agent objects that participate in the swarm
tasks: List[str] - list of tasks to be processed by the agents
return_full_history: bool (optional) - if True, returns the full conversation history; defaults to True

Return types are generally Union[dict, List[str]]:
- If return_full_history=True: returns a dictionary containing the full conversation history
- If return_full_history=False: returns a list of agent responses
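This shared contract can be sketched in plain Python (a toy stub with ordinary callables standing in for Agent objects; not the actual Swarms implementation):

```python
from typing import Callable, Dict, List, Union

def run_swarm(
    agents: List[Callable[[str], str]],   # toy agents: plain callables
    tasks: List[str],
    return_full_history: bool = True,
) -> Union[Dict, List[str]]:
    """Illustrates the shared return contract of the swarm functions."""
    history = []
    for task in tasks:
        for agent in agents:
            history.append({
                "agent_name": getattr(agent, "__name__", "agent"),
                "task": task,
                "response": agent(task),
            })
    if return_full_history:
        return {"history": history}                # full conversation log
    return [log["response"] for log in history]    # bare responses only

full = run_swarm([str.upper, str.title], ["hello"])
flat = run_swarm([str.upper, str.title], ["hello"], return_full_history=False)
```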
def circular_swarm(agents: AgentListType, tasks: List[str], return_full_history: bool = True)\n
Information Flow:
flowchart LR\n subgraph Circular Flow\n A1((Agent 1)) --> A2((Agent 2))\n A2 --> A3((Agent 3))\n A3 --> A4((Agent 4))\n A4 --> A1\n end\n Task1[Task 1] --> A1\n Task2[Task 2] --> A2\n Task3[Task 3] --> A3
Best Used When:
You need continuous processing of tasks
Tasks need to be processed by every agent in sequence
You want predictable, ordered task distribution
Key Features:
Tasks move in a circular pattern through all agents
Each agent processes each task once
Maintains strict ordering of task processing
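The ring routing described above can be sketched in pure Python (a toy model in which plain callables stand in for Agent objects; not the Swarms implementation):

```python
from typing import Callable, List

def circular_route(agents: List[Callable[[str], str]],
                   tasks: List[str]) -> List[dict]:
    """Toy circular pattern: task i enters the ring at agent i % n and
    then visits every agent exactly once, wrapping around."""
    n = len(agents)
    history = []
    for i, task in enumerate(tasks):
        for step in range(n):
            idx = (i + step) % n
            history.append({"agent": idx, "task": task,
                            "response": agents[idx](task)})
    return history

history = circular_route([str.upper, str.lower, str.title], ["One", "Two"])
```

Each task produces one history entry per agent, and the entry points rotate around the ring, which is what gives the pattern its predictable, ordered distribution.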
def linear_swarm(agents: AgentListType, tasks: List[str], return_full_history: bool = True)\n
Information Flow:
flowchart LR\n Input[Task Input] --> A1\n subgraph Sequential Processing\n A1((Agent 1)) --> A2((Agent 2))\n A2 --> A3((Agent 3))\n A3 --> A4((Agent 4))\n A4 --> A5((Agent 5))\n end\n A5 --> Output[Final Result]
Best Used When:
Tasks need sequential, pipeline-style processing
Each agent performs a specific transformation step
Order of processing is critical
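A minimal pipeline sketch (toy callables instead of real agents; a hypothetical illustration, not the library's code):

```python
from typing import Callable, List

def linear_pipeline(agents: List[Callable[[str], str]], task: str) -> str:
    """Toy linear pattern: each agent transforms the previous agent's
    output, like a Unix pipe."""
    result = task
    for agent in agents:
        result = agent(result)
    return result

# Each stage performs one transformation step
out = linear_pipeline([str.strip, str.lower, lambda s: s + "!"], "  HELLO  ")
```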
def star_swarm(agents: AgentListType, tasks: List[str], return_full_history: bool = True)\n
Information Flow:
flowchart TD\n subgraph Star Pattern\n A1((Central Agent))\n A2((Agent 2))\n A3((Agent 3))\n A4((Agent 4))\n A5((Agent 5))\n A1 --> A2\n A1 --> A3\n A1 --> A4\n A1 --> A5\n end\n Task[Initial Task] --> A1\n A2 --> Result2[Result 2]\n A3 --> Result3[Result 3]\n A4 --> Result4[Result 4]\n A5 --> Result5[Result 5]
Best Used When:
You need centralized control
Tasks require coordination or oversight
You want to maintain a single point of task distribution
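The hub-and-spoke flow can be sketched as follows (a toy model, not the Swarms implementation):

```python
from typing import Callable, List

def star_route(central: Callable[[str], str],
               workers: List[Callable[[str], str]],
               task: str) -> List[str]:
    """Toy star pattern: the central agent prepares the task once, then
    the result fans out to every worker independently."""
    prepared = central(task)                 # single point of distribution
    return [worker(prepared) for worker in workers]

results = star_route(str.strip, [str.upper, str.lower], " Report ")
```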
def mesh_swarm(agents: AgentListType, tasks: List[str], return_full_history: bool = True)\n
Information Flow:
flowchart TD\n subgraph Mesh Network\n A1((Agent 1)) <--> A2((Agent 2))\n A2 <--> A3((Agent 3))\n A1 <--> A4((Agent 4))\n A2 <--> A5((Agent 5))\n A3 <--> A6((Agent 6))\n A4 <--> A5\n A5 <--> A6\n end\n Tasks[Task Pool] --> A1\n Tasks --> A2\n Tasks --> A3\n Tasks --> A4\n Tasks --> A5\n Tasks --> A6
Best Used When:
You need maximum flexibility
Task processing order isn't critical
You want fault tolerance
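One way to picture the flexible, orderless routing is a shared task pool that agents drain (a toy sketch; the real mesh implementation may differ):

```python
from collections import deque
from typing import Callable, List

def mesh_route(agents: List[Callable[[str], str]],
               tasks: List[str]) -> List[str]:
    """Toy mesh pattern: agents drain a shared pool, so no fixed
    task-to-agent ordering is imposed."""
    pool = deque(tasks)
    responses = []
    turn = 0
    while pool:
        agent = agents[turn % len(agents)]   # any free agent could take it
        responses.append(agent(pool.popleft()))
        turn += 1
    return responses

out = mesh_route([str.upper, str.lower], ["a", "b", "c"])
```

Because assignment depends only on who is free, a failed agent can simply be skipped, which is the source of the pattern's fault tolerance.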
def fibonacci_swarm(agents: AgentListType, tasks: List[str])\n
Information Flow:
flowchart TD\n subgraph Fibonacci Pattern\n L1[Level 1: 1 Agent] --> L2[Level 2: 1 Agent]\n L2 --> L3[Level 3: 2 Agents]\n L3 --> L4[Level 4: 3 Agents]\n L4 --> L5[Level 5: 5 Agents]\n end\n Task[Initial Task] --> L1\n L5 --> Results[Processed Results]
Best Used When:
You need natural scaling patterns
Tasks have increasing complexity
You want organic growth in processing capacity
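The level sizing can be sketched as follows (a toy model; level sizes follow the Fibonacci sequence, as the diagram suggests):

```python
from typing import Callable, List

def fibonacci_levels(agents: List[Callable[[str], str]],
                     task: str) -> List[List[str]]:
    """Toy Fibonacci pattern: agents are consumed level by level in
    Fibonacci-sized groups (1, 1, 2, 3, 5, ...)."""
    a, b = 1, 1
    remaining = list(agents)
    levels = []
    while remaining:
        group, remaining = remaining[:a], remaining[a:]
        levels.append([agent(task) for agent in group])
        a, b = b, a + b
    return levels

levels = fibonacci_levels([str.upper] * 7, "hi")
```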
def pyramid_swarm(agents: AgentListType, tasks: List[str], return_full_history: bool = True)\n
Information Flow:
flowchart TD\n subgraph Pyramid Structure\n A1((Leader Agent))\n A2((Manager 1))\n A3((Manager 2))\n A4((Worker 1))\n A5((Worker 2))\n A6((Worker 3))\n A7((Worker 4))\n A1 --> A2\n A1 --> A3\n A2 --> A4\n A2 --> A5\n A3 --> A6\n A3 --> A7\n end\n Task[Complex Task] --> A1\n A4 --> Result1[Output 1]\n A5 --> Result2[Output 2]\n A6 --> Result3[Output 3]\n A7 --> Result4[Output 4]
Best Used When:
You need hierarchical task processing
Tasks require multiple levels of oversight
You want organized task delegation
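The delegation flow can be sketched as follows (a toy model with one leader, two managers, and one worker per manager; not the Swarms implementation):

```python
from typing import Callable, List

def pyramid_route(leader: Callable[[str], str],
                  managers: List[Callable[[str], str]],
                  teams: List[List[Callable[[str], str]]],
                  task: str) -> List[str]:
    """Toy pyramid pattern: the leader frames the task, each manager
    refines it, and each manager's workers produce the final outputs."""
    framed = leader(task)
    outputs = []
    for manager, team in zip(managers, teams):
        sub_task = manager(framed)
        outputs.extend(worker(sub_task) for worker in team)
    return outputs

outs = pyramid_route(str.strip,
                     [str.upper, str.lower],
                     [[lambda s: s + "!"], [lambda s: s + "?"]],
                     " plan ")
```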
def grid_swarm(agents: AgentListType, tasks: List[str])\n
Information Flow:
flowchart TD\n subgraph Grid Layout\n A1((1)) <--> A2((2)) <--> A3((3))\n A4((4)) <--> A5((5)) <--> A6((6))\n A7((7)) <--> A8((8)) <--> A9((9))\n A1 <--> A4 <--> A7\n A2 <--> A5 <--> A8\n A3 <--> A6 <--> A9\n end\n Tasks[Task Queue] --> A1\n Tasks --> A5\n Tasks --> A9
Best Used When:
Tasks have spatial relationships
You need neighbor-based processing
You want structured parallel processing
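Neighbour lookup on an n x n grid, the building block of this pattern, can be sketched as follows (a hypothetical helper, not part of the Swarms API):

```python
from typing import List

def grid_neighbors(n: int, idx: int) -> List[int]:
    """Indices adjacent (up/down/left/right) to agent `idx` on an
    n x n grid, 0-indexed in row-major order."""
    r, c = divmod(idx, n)
    neighbors = []
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < n and 0 <= nc < n:
            neighbors.append(nr * n + nc)
    return neighbors

centre = grid_neighbors(3, 4)   # centre of a 3x3 grid
corner = grid_neighbors(3, 0)   # top-left corner has only two neighbours
```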
def one_to_one(sender: Agent, receiver: Agent, task: str, max_loops: int = 1) -> str\n
Information Flow:
flowchart LR\n Task[Task] --> S((Sender))\n S --> R((Receiver))\n R --> Result[Result]
Best Used When:
Direct agent communication is needed
Tasks require back-and-forth interaction
You need controlled message exchange
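The back-and-forth loop can be sketched as follows (a toy model; `max_loops` mirrors the parameter in the signature above):

```python
from typing import Callable

def one_to_one_exchange(sender: Callable[[str], str],
                        receiver: Callable[[str], str],
                        task: str, max_loops: int = 1) -> str:
    """Toy one-to-one pattern: sender and receiver alternate on the
    message for max_loops rounds; the final reply is returned."""
    message = task
    for _ in range(max_loops):
        message = sender(message)
        message = receiver(message)
    return message

out = one_to_one_exchange(lambda m: m + " ping",
                          lambda m: m + " pong",
                          "start", max_loops=2)
```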
async def broadcast(sender: Agent, agents: AgentListType, task: str) -> None\n
Information Flow:
flowchart TD\n T[Task] --> S((Sender))\n S --> A1((Agent 1))\n S --> A2((Agent 2))\n S --> A3((Agent 3))\n S --> A4((Agent 4))
Best Used When:
Information needs to reach all agents
Tasks require global coordination
You need system-wide updates
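The concurrent fan-out can be sketched with asyncio.gather (a toy model mirroring the async signature above; not the library's code):

```python
import asyncio
from typing import Awaitable, Callable, List

async def toy_broadcast(message: str,
                        agents: List[Callable[[str], Awaitable[str]]]) -> List[str]:
    """Toy broadcast pattern: fan one message out to every agent
    concurrently and collect the replies in order."""
    return await asyncio.gather(*(agent(message) for agent in agents))

async def ack(msg: str) -> str:
    return f"ack: {msg}"

replies = asyncio.run(toy_broadcast("alert", [ack, ack, ack]))
```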
Consider fault tolerance needs.
Performance Considerations:
- Match the pattern to available resources
Error Handling:
- Monitor agent performance and task completion
Scaling:
- Distributed Computing: Grid Swarm
- Hierarchical Systems: Star Swarm
- Dynamic Workloads: Fibonacci Swarm
- Conflict-Free Processing: Circular Swarm
import asyncio\nfrom typing import List\n\nfrom swarms.structs.agent import Agent\nfrom swarms.structs.swarming_architectures import (\n broadcast,\n circular_swarm,\n exponential_swarm,\n fibonacci_swarm,\n grid_swarm,\n linear_swarm,\n mesh_swarm,\n one_to_three,\n prime_swarm,\n sigmoid_swarm,\n sinusoidal_swarm,\n staircase_swarm,\n star_swarm,\n)\n\n\ndef create_finance_agents() -> List[Agent]:\n \"\"\"Create specialized finance agents\"\"\"\n return [\n Agent(\n agent_name=\"MarketAnalyst\",\n system_prompt=\"You are a market analysis expert. Analyze market trends and provide insights.\",\n model_name=\"gpt-4o-mini\"\n ),\n Agent(\n agent_name=\"RiskManager\",\n system_prompt=\"You are a risk management specialist. Evaluate risks and provide mitigation strategies.\",\n model_name=\"gpt-4o-mini\"\n ),\n Agent(\n agent_name=\"PortfolioManager\",\n system_prompt=\"You are a portfolio management expert. Optimize investment portfolios and asset allocation.\",\n model_name=\"gpt-4o-mini\"\n ),\n Agent(\n agent_name=\"ComplianceOfficer\",\n system_prompt=\"You are a financial compliance expert. Ensure regulatory compliance and identify issues.\",\n model_name=\"gpt-4o-mini\"\n )\n ]\n\ndef create_healthcare_agents() -> List[Agent]:\n \"\"\"Create specialized healthcare agents\"\"\"\n return [\n Agent(\n agent_name=\"Diagnostician\",\n system_prompt=\"You are a medical diagnostician. Analyze symptoms and suggest potential diagnoses.\",\n model_name=\"gpt-4o-mini\"\n ),\n Agent(\n agent_name=\"Treatment_Planner\",\n system_prompt=\"You are a treatment planning specialist. Develop comprehensive treatment plans.\",\n model_name=\"gpt-4o-mini\"\n ),\n Agent(\n agent_name=\"MedicalResearcher\",\n system_prompt=\"You are a medical researcher. Analyze latest research and provide evidence-based recommendations.\",\n model_name=\"gpt-4o-mini\"\n ),\n Agent(\n agent_name=\"PatientCareCoordinator\",\n system_prompt=\"You are a patient care coordinator. 
Manage patient care workflow and coordination.\",\n model_name=\"gpt-4o-mini\"\n )\n ]\n\ndef print_separator():\n print(\"\\n\" + \"=\"*50 + \"\\n\")\n\ndef run_finance_circular_swarm():\n \"\"\"Investment analysis workflow using circular swarm\"\"\"\n print_separator()\n print(\"FINANCE - INVESTMENT ANALYSIS (Circular Swarm)\")\n\n agents = create_finance_agents()\n tasks = [\n \"Analyze Tesla stock performance for Q4 2024\",\n \"Assess market risks and potential hedging strategies\",\n \"Recommend portfolio adjustments based on analysis\"\n ]\n\n print(\"\\nTasks:\")\n for i, task in enumerate(tasks, 1):\n print(f\"{i}. {task}\")\n\n result = circular_swarm(agents, tasks)\n print(\"\\nResults:\")\n for log in result['history']:\n print(f\"\\n{log['agent_name']}:\")\n print(f\"Task: {log['task']}\")\n print(f\"Response: {log['response']}\")\n\ndef run_healthcare_grid_swarm():\n \"\"\"Patient diagnosis and treatment planning using grid swarm\"\"\"\n print_separator()\n print(\"HEALTHCARE - PATIENT DIAGNOSIS (Grid Swarm)\")\n\n agents = create_healthcare_agents()\n tasks = [\n \"Review patient symptoms: fever, fatigue, joint pain\",\n \"Research latest treatment protocols\",\n \"Develop preliminary treatment plan\",\n \"Coordinate with specialists\"\n ]\n\n print(\"\\nTasks:\")\n for i, task in enumerate(tasks, 1):\n print(f\"{i}. {task}\")\n\n result = grid_swarm(agents, tasks)\n print(\"\\nGrid swarm processing completed\")\n print(result)\n\ndef run_finance_linear_swarm():\n \"\"\"Loan approval process using linear swarm\"\"\"\n print_separator()\n print(\"FINANCE - LOAN APPROVAL PROCESS (Linear Swarm)\")\n\n agents = create_finance_agents()[:3]\n tasks = [\n \"Review loan application and credit history\",\n \"Assess risk factors and compliance requirements\",\n \"Generate final loan recommendation\"\n ]\n\n print(\"\\nTasks:\")\n for i, task in enumerate(tasks, 1):\n print(f\"{i}. 
{task}\")\n\n result = linear_swarm(agents, tasks)\n print(\"\\nResults:\")\n for log in result['history']:\n print(f\"\\n{log['agent_name']}:\")\n print(f\"Task: {log['task']}\")\n print(f\"Response: {log['response']}\")\n\ndef run_healthcare_star_swarm():\n \"\"\"Complex medical case management using star swarm\"\"\"\n print_separator()\n print(\"HEALTHCARE - COMPLEX CASE MANAGEMENT (Star Swarm)\")\n\n agents = create_healthcare_agents()\n tasks = [\n \"Complex case: Patient with multiple chronic conditions\",\n \"Develop integrated care plan\"\n ]\n\n print(\"\\nTasks:\")\n for i, task in enumerate(tasks, 1):\n print(f\"{i}. {task}\")\n\n result = star_swarm(agents, tasks)\n print(\"\\nResults:\")\n for log in result['history']:\n print(f\"\\n{log['agent_name']}:\")\n print(f\"Task: {log['task']}\")\n print(f\"Response: {log['response']}\")\n\ndef run_finance_mesh_swarm():\n \"\"\"Market risk assessment using mesh swarm\"\"\"\n print_separator()\n print(\"FINANCE - MARKET RISK ASSESSMENT (Mesh Swarm)\")\n\n agents = create_finance_agents()\n tasks = [\n \"Analyze global market conditions\",\n \"Assess currency exchange risks\",\n \"Evaluate sector-specific risks\",\n \"Review portfolio exposure\"\n ]\n\n print(\"\\nTasks:\")\n for i, task in enumerate(tasks, 1):\n print(f\"{i}. {task}\")\n\n result = mesh_swarm(agents, tasks)\n print(\"\\nResults:\")\n for log in result['history']:\n print(f\"\\n{log['agent_name']}:\")\n print(f\"Task: {log['task']}\")\n print(f\"Response: {log['response']}\")\n\ndef run_mathematical_finance_swarms():\n \"\"\"Complex financial analysis using mathematical swarms\"\"\"\n print_separator()\n print(\"FINANCE - MARKET PATTERN ANALYSIS\")\n\n agents = create_finance_agents()\n tasks = [\n \"Analyze historical market patterns\",\n \"Predict market trends using technical analysis\",\n \"Identify potential arbitrage opportunities\"\n ]\n\n print(\"\\nTasks:\")\n for i, task in enumerate(tasks, 1):\n print(f\"{i}. 
{task}\")\n\n print(\"\\nFibonacci Swarm Results:\")\n result = fibonacci_swarm(agents, tasks.copy())\n print(result)\n\n print(\"\\nPrime Swarm Results:\")\n result = prime_swarm(agents, tasks.copy())\n print(result)\n\n print(\"\\nExponential Swarm Results:\")\n result = exponential_swarm(agents, tasks.copy())\n print(result)\n\ndef run_healthcare_pattern_swarms():\n \"\"\"Patient monitoring using pattern swarms\"\"\"\n print_separator()\n print(\"HEALTHCARE - PATIENT MONITORING PATTERNS\")\n\n agents = create_healthcare_agents()\n task = \"Monitor and analyze patient vital signs: BP, heart rate, temperature, O2 saturation\"\n\n print(f\"\\nTask: {task}\")\n\n print(\"\\nStaircase Pattern Analysis:\")\n result = staircase_swarm(agents, task)\n print(result)\n\n print(\"\\nSigmoid Pattern Analysis:\")\n result = sigmoid_swarm(agents, task)\n print(result)\n\n print(\"\\nSinusoidal Pattern Analysis:\")\n result = sinusoidal_swarm(agents, task)\n print(result)\n\nasync def run_communication_examples():\n \"\"\"Communication patterns for emergency scenarios\"\"\"\n print_separator()\n print(\"EMERGENCY COMMUNICATION PATTERNS\")\n\n # Finance market alert\n finance_sender = create_finance_agents()[0]\n finance_receivers = create_finance_agents()[1:]\n market_alert = \"URGENT: Major market volatility detected - immediate risk assessment required\"\n\n print(\"\\nFinance Market Alert:\")\n print(f\"Alert: {market_alert}\")\n result = await broadcast(finance_sender, finance_receivers, market_alert)\n print(\"\\nBroadcast Results:\")\n for log in result['history']:\n print(f\"\\n{log['agent_name']}:\")\n print(f\"Response: {log['response']}\")\n\n # Healthcare emergency\n health_sender = create_healthcare_agents()[0]\n health_receivers = create_healthcare_agents()[1:4]\n emergency_case = \"EMERGENCY: Trauma patient with multiple injuries - immediate consultation required\"\n\n print(\"\\nHealthcare Emergency:\")\n print(f\"Case: {emergency_case}\")\n result = await 
one_to_three(health_sender, health_receivers, emergency_case)\n print(\"\\nConsultation Results:\")\n for log in result['history']:\n print(f\"\\n{log['agent_name']}:\")\n print(f\"Response: {log['response']}\")\n\nasync def run_all_examples():\n \"\"\"Execute all swarm examples\"\"\"\n print(\"\\n=== SWARM ARCHITECTURE EXAMPLES ===\\n\")\n\n # Finance examples\n run_finance_circular_swarm()\n run_finance_linear_swarm()\n run_finance_mesh_swarm()\n run_mathematical_finance_swarms()\n\n # Healthcare examples\n run_healthcare_grid_swarm()\n run_healthcare_star_swarm()\n run_healthcare_pattern_swarms()\n\n # Communication examples\n await run_communication_examples()\n\n print(\"\\n=== ALL EXAMPLES COMPLETED ===\")\n\nif __name__ == \"__main__\":\n asyncio.run(run_all_examples())\n
"},{"location":"swarms/examples/vision_processing/","title":"Vision Processing Examples","text":"This example demonstrates how to use vision-enabled agents in Swarms to analyze images and process visual information. You'll learn how to work with both OpenAI and Anthropic vision models for various use cases.
"},{"location":"swarms/examples/vision_processing/#prerequisites","title":"Prerequisites","text":"Python 3.7+
OpenAI API key (for GPT-4V)
Anthropic API key (for Claude 3)
Swarms library
pip3 install -U swarms\n
"},{"location":"swarms/examples/vision_processing/#environment-variables","title":"Environment Variables","text":"WORKSPACE_DIR=\"agent_workspace\"\nOPENAI_API_KEY=\"\" # Required for GPT-4V\nANTHROPIC_API_KEY=\"\" # Required for Claude 3\n
"},{"location":"swarms/examples/vision_processing/#working-with-images","title":"Working with Images","text":""},{"location":"swarms/examples/vision_processing/#supported-image-formats","title":"Supported Image Formats","text":"Vision-enabled agents support various image formats:
Format Description JPEG/JPG Standard image format with lossy compression PNG Lossless format supporting transparency GIF Animated format (only first frame used) WebP Modern format with both lossy and lossless compression"},{"location":"swarms/examples/vision_processing/#image-guidelines","title":"Image Guidelines","text":"from swarms.structs import Agent\nfrom swarms.prompts.logistics import Quality_Control_Agent_Prompt\n\n# Load your image\nfactory_image = \"path/to/your/image.jpg\" # Local file path\n# Or use a URL\n# factory_image = \"https://example.com/image.jpg\"\n\n# Initialize quality control agent with GPT-4V\nquality_control_agent = Agent(\n agent_name=\"Quality Control Agent\",\n agent_description=\"A quality control agent that analyzes images and provides detailed quality reports.\",\n model_name=\"gpt-4.1-mini\",\n system_prompt=Quality_Control_Agent_Prompt,\n multi_modal=True,\n max_loops=1\n)\n\n# Run the analysis\nresponse = quality_control_agent.run(\n task=\"Analyze this image and provide a detailed quality control report\",\n img=factory_image\n)\n\nprint(response)\n
"},{"location":"swarms/examples/vision_processing/#2-visual-analysis-with-claude-3","title":"2. Visual Analysis with Claude 3","text":"from swarms.structs import Agent\nfrom swarms.prompts.logistics import Visual_Analysis_Prompt\n\n# Load your image\nproduct_image = \"path/to/your/product.jpg\"\n\n# Initialize visual analysis agent with Claude 3\nvisual_analyst = Agent(\n agent_name=\"Visual Analyst\",\n agent_description=\"An agent that performs detailed visual analysis of products and scenes.\",\n model_name=\"anthropic/claude-3-opus-20240229\",\n system_prompt=Visual_Analysis_Prompt,\n multi_modal=True,\n max_loops=1\n)\n\n# Run the analysis\nresponse = visual_analyst.run(\n task=\"Provide a comprehensive analysis of this product image\",\n img=product_image\n)\n\nprint(response)\n
"},{"location":"swarms/examples/vision_processing/#3-image-batch-processing","title":"3. Image Batch Processing","text":"from swarms.structs import Agent\nimport os\n\ndef process_image_batch(image_folder, agent):\n \"\"\"Process multiple images in a folder\"\"\"\n results = []\n for image_file in os.listdir(image_folder):\n if image_file.lower().endswith(('.png', '.jpg', '.jpeg', '.webp')):\n image_path = os.path.join(image_folder, image_file)\n response = agent.run(\n task=\"Analyze this image\",\n img=image_path\n )\n results.append((image_file, response))\n return results\n\n# Example usage\nimage_folder = \"path/to/image/folder\"\nbatch_results = process_image_batch(image_folder, visual_analyst)\n
"},{"location":"swarms/examples/vision_processing/#best-practices","title":"Best Practices","text":"Category Best Practice Description Image Preparation Format Support Ensure images are in supported formats (JPEG, PNG, GIF, WebP) Size & Quality Optimize image size and quality for better processing Image Quality Use clear, well-lit images for accurate analysis Model Selection GPT-4V Usage Use for general vision tasks and detailed analysis Claude 3 Usage Use for complex reasoning and longer outputs Batch Processing Consider batch processing for multiple images Error Handling Path Validation Always validate image paths before processing API Error Handling Implement proper error handling for API calls Rate Monitoring Monitor API rate limits and token usage Performance Optimization Result Caching Cache results when processing the same images Batch Processing Use batch processing for multiple images Parallel Processing Implement parallel processing for large datasets"},{"location":"swarms/examples/vision_tools/","title":"Agents with Vision and Tool Usage","text":"This tutorial demonstrates how to create intelligent agents that can analyze images and use custom tools to perform specific actions based on their visual observations. You'll learn to build a quality control agent that can process images, identify potential security concerns, and automatically trigger appropriate responses using function calling capabilities.
"},{"location":"swarms/examples/vision_tools/#what-youll-learn","title":"What You'll Learn","text":"This approach is perfect for:
Quality Control Systems: Automated inspection of manufacturing processes
Security Monitoring: Real-time threat detection and response
Object Detection: Identifying and categorizing items in images
Compliance Checking: Ensuring standards are met in various environments
Automated Reporting: Generating detailed analysis reports from visual data
Install the swarms package using pip:
pip install -U swarms\n
"},{"location":"swarms/examples/vision_tools/#basic-setup","title":"Basic Setup","text":"WORKSPACE_DIR=\"agent_workspace\"\nOPENAI_API_KEY=\"\"\n
"},{"location":"swarms/examples/vision_tools/#code","title":"Code","text":"Create tools for your agent as a function with types and documentation
Pass tools to your agent Agent(tools=[list_of_callables])
Add your image path to the run method like: Agent().run(task=task, img=img)
from swarms.structs import Agent\nfrom swarms.prompts.logistics import (\n Quality_Control_Agent_Prompt,\n)\n\n\n# Image for analysis\nfactory_image = \"image.jpg\"\n\n\ndef security_analysis(danger_level: str) -> str:\n \"\"\"\n Analyzes the security danger level and returns an appropriate response.\n\n Args:\n danger_level (str, optional): The level of danger to analyze.\n Can be \"low\", \"medium\", \"high\", or None. Defaults to None.\n\n Returns:\n str: A string describing the danger level assessment.\n - \"No danger level provided\" if danger_level is None\n - \"No danger\" if danger_level is \"low\"\n - \"Medium danger\" if danger_level is \"medium\"\n - \"High danger\" if danger_level is \"high\"\n - \"Unknown danger level\" for any other value\n \"\"\"\n if danger_level is None:\n return \"No danger level provided\"\n\n if danger_level == \"low\":\n return \"No danger\"\n\n if danger_level == \"medium\":\n return \"Medium danger\"\n\n if danger_level == \"high\":\n return \"High danger\"\n\n return \"Unknown danger level\"\n\n\ncustom_system_prompt = f\"\"\"\n{Quality_Control_Agent_Prompt}\n\nYou have access to tools that can help you with your analysis. When you need to perform a security analysis, you MUST use the security_analysis function with an appropriate danger level (low, medium, or high) based on your observations.\n\nAlways use the available tools when they are relevant to the task. 
If you determine there is any level of danger or security concern, call the security_analysis function with the appropriate danger level.\n\"\"\"\n\n# Quality control agent\nquality_control_agent = Agent(\n agent_name=\"Quality Control Agent\",\n agent_description=\"A quality control agent that analyzes images and provides a detailed report on the quality of the product in the image.\",\n # model_name=\"anthropic/claude-3-opus-20240229\",\n model_name=\"gpt-4o-mini\",\n system_prompt=custom_system_prompt,\n multi_modal=True,\n max_loops=1,\n output_type=\"str-all-except-first\",\n # tools_list_dictionary=[schema],\n tools=[security_analysis],\n)\n\n\nresponse = quality_control_agent.run(\n task=\"Analyze the image and then perform a security analysis. Based on what you see in the image, determine if there is a low, medium, or high danger level and call the security_analysis function with that danger level\",\n img=factory_image,\n)\n
"},{"location":"swarms/examples/vision_tools/#support-and-community","title":"Support and Community","text":"If you're facing issues or want to learn more, check out the following resources to join our Discord, stay updated on Twitter, and watch tutorials on YouTube!
Platform Link Description \ud83d\udcda Documentation docs.swarms.world Official documentation and guides \ud83d\udcdd Blog Medium Latest updates and technical articles \ud83d\udcac Discord Join Discord Live chat and community support \ud83d\udc26 Twitter @kyegomez Latest news and announcements \ud83d\udc65 LinkedIn The Swarm Corporation Professional network and updates \ud83d\udcfa YouTube Swarms Channel Tutorials and demos \ud83c\udfab Events Sign up here Join our community events"},{"location":"swarms/examples/vllm/","title":"VLLM Swarm Agents","text":"Quick Summary
This guide demonstrates how to create a sophisticated multi-agent system using VLLM and Swarms for comprehensive stock market analysis. You'll learn how to configure and orchestrate multiple AI agents working together to provide deep market insights.
"},{"location":"swarms/examples/vllm/#overview","title":"Overview","text":"The example showcases how to build a stock analysis system with 5 specialized agents:
Each agent has specific expertise and works collaboratively through a concurrent workflow.
"},{"location":"swarms/examples/vllm/#prerequisites","title":"Prerequisites","text":"Requirements
Before starting, ensure you have:
Setup Steps
Install the Swarms package:
pip install swarms\n
Install VLLM dependencies (if not already installed):
pip install vllm\n
Here's a complete example of setting up the stock analysis swarm:
from swarms import Agent, ConcurrentWorkflow\nfrom swarms.utils.vllm_wrapper import VLLMWrapper\n\n# Initialize the VLLM wrapper\nvllm = VLLMWrapper(\n model_name=\"meta-llama/Llama-2-7b-chat-hf\",\n system_prompt=\"You are a helpful assistant.\",\n)\n
Model Selection
The example uses Llama-2-7b-chat, but you can use any VLLM-compatible model. Make sure you have the necessary permissions and resources to run your chosen model.
"},{"location":"swarms/examples/vllm/#agent-configuration","title":"Agent Configuration","text":""},{"location":"swarms/examples/vllm/#technical-analysis-agent","title":"Technical Analysis Agent","text":"technical_analyst = Agent(\n agent_name=\"Technical-Analysis-Agent\",\n agent_description=\"Expert in technical analysis and chart patterns\",\n system_prompt=\"\"\"You are an expert Technical Analysis Agent specializing in market technicals and chart patterns. Your responsibilities include:\n\n1. PRICE ACTION ANALYSIS\n- Identify key support and resistance levels\n- Analyze price trends and momentum\n- Detect chart patterns (e.g., head & shoulders, triangles, flags)\n- Evaluate volume patterns and their implications\n\n2. TECHNICAL INDICATORS\n- Calculate and interpret moving averages (SMA, EMA)\n- Analyze momentum indicators (RSI, MACD, Stochastic)\n- Evaluate volume indicators (OBV, Volume Profile)\n- Monitor volatility indicators (Bollinger Bands, ATR)\n\n3. TRADING SIGNALS\n- Generate clear buy/sell signals based on technical criteria\n- Identify potential entry and exit points\n- Set appropriate stop-loss and take-profit levels\n- Calculate position sizing recommendations\n\n4. RISK MANAGEMENT\n- Assess market volatility and trend strength\n- Identify potential reversal points\n- Calculate risk/reward ratios for trades\n- Suggest position sizing based on risk parameters\n\nYour analysis should be data-driven, precise, and actionable. Always include specific price levels, time frames, and risk parameters in your recommendations.\"\"\",\n max_loops=1,\n llm=vllm,\n)\n
Agent Customization
Each agent can be customized with different:
System prompts
Temperature settings
Max token limits
Response formats
To execute the swarm analysis:
swarm = ConcurrentWorkflow(\n name=\"Stock-Analysis-Swarm\",\n description=\"A swarm of agents that analyze stocks and provide comprehensive analysis.\",\n agents=stock_analysis_agents,\n)\n\n# Run the analysis\nresponse = swarm.run(\"Analyze the best etfs for gold and other similar commodities in volatile markets\")\n
"},{"location":"swarms/examples/vllm/#full-code-example","title":"Full Code Example","text":"from swarms import Agent, ConcurrentWorkflow\nfrom swarms.utils.vllm_wrapper import VLLMWrapper\n\n# Initialize the VLLM wrapper\nvllm = VLLMWrapper(\n model_name=\"meta-llama/Llama-2-7b-chat-hf\",\n system_prompt=\"You are a helpful assistant.\",\n)\n\n# Technical Analysis Agent\ntechnical_analyst = Agent(\n agent_name=\"Technical-Analysis-Agent\",\n agent_description=\"Expert in technical analysis and chart patterns\",\n system_prompt=\"\"\"You are an expert Technical Analysis Agent specializing in market technicals and chart patterns. Your responsibilities include:\n\n1. PRICE ACTION ANALYSIS\n- Identify key support and resistance levels\n- Analyze price trends and momentum\n- Detect chart patterns (e.g., head & shoulders, triangles, flags)\n- Evaluate volume patterns and their implications\n\n2. TECHNICAL INDICATORS\n- Calculate and interpret moving averages (SMA, EMA)\n- Analyze momentum indicators (RSI, MACD, Stochastic)\n- Evaluate volume indicators (OBV, Volume Profile)\n- Monitor volatility indicators (Bollinger Bands, ATR)\n\n3. TRADING SIGNALS\n- Generate clear buy/sell signals based on technical criteria\n- Identify potential entry and exit points\n- Set appropriate stop-loss and take-profit levels\n- Calculate position sizing recommendations\n\n4. RISK MANAGEMENT\n- Assess market volatility and trend strength\n- Identify potential reversal points\n- Calculate risk/reward ratios for trades\n- Suggest position sizing based on risk parameters\n\nYour analysis should be data-driven, precise, and actionable. 
Always include specific price levels, time frames, and risk parameters in your recommendations.\"\"\",\n max_loops=1,\n llm=vllm,\n)\n\n# Fundamental Analysis Agent\nfundamental_analyst = Agent(\n agent_name=\"Fundamental-Analysis-Agent\",\n agent_description=\"Expert in company fundamentals and valuation\",\n system_prompt=\"\"\"You are an expert Fundamental Analysis Agent specializing in company valuation and financial metrics. Your core responsibilities include:\n\n1. FINANCIAL STATEMENT ANALYSIS\n- Analyze income statements, balance sheets, and cash flow statements\n- Calculate and interpret key financial ratios\n- Evaluate revenue growth and profit margins\n- Assess company's debt levels and cash position\n\n2. VALUATION METRICS\n- Calculate fair value using multiple valuation methods:\n * Discounted Cash Flow (DCF)\n * Price-to-Earnings (P/E)\n * Price-to-Book (P/B)\n * Enterprise Value/EBITDA\n- Compare valuations against industry peers\n\n3. BUSINESS MODEL ASSESSMENT\n- Evaluate competitive advantages and market position\n- Analyze industry dynamics and market share\n- Assess management quality and corporate governance\n- Identify potential risks and growth opportunities\n\n4. ECONOMIC CONTEXT\n- Consider macroeconomic factors affecting the company\n- Analyze industry cycles and trends\n- Evaluate regulatory environment and compliance\n- Assess global market conditions\n\nYour analysis should be comprehensive, focusing on both quantitative metrics and qualitative factors that impact long-term value.\"\"\",\n max_loops=1,\n llm=vllm,\n)\n\n# Market Sentiment Agent\nsentiment_analyst = Agent(\n agent_name=\"Market-Sentiment-Agent\",\n agent_description=\"Expert in market psychology and sentiment analysis\",\n system_prompt=\"\"\"You are an expert Market Sentiment Agent specializing in analyzing market psychology and investor behavior. Your key responsibilities include:\n\n1. 
SENTIMENT INDICATORS\n- Monitor and interpret market sentiment indicators:\n * VIX (Fear Index)\n * Put/Call Ratio\n * Market Breadth\n * Investor Surveys\n- Track institutional vs retail investor behavior\n\n2. NEWS AND SOCIAL MEDIA ANALYSIS\n- Analyze news flow and media sentiment\n- Monitor social media trends and discussions\n- Track analyst recommendations and changes\n- Evaluate corporate insider trading patterns\n\n3. MARKET POSITIONING\n- Assess hedge fund positioning and exposure\n- Monitor short interest and short squeeze potential\n- Track fund flows and asset allocation trends\n- Analyze options market sentiment\n\n4. CONTRARIAN SIGNALS\n- Identify extreme sentiment readings\n- Detect potential market turning points\n- Analyze historical sentiment patterns\n- Provide contrarian trading opportunities\n\nYour analysis should combine quantitative sentiment metrics with qualitative assessment of market psychology and crowd behavior.\"\"\",\n max_loops=1,\n llm=vllm,\n)\n\n# Quantitative Strategy Agent\nquant_analyst = Agent(\n agent_name=\"Quantitative-Strategy-Agent\",\n agent_description=\"Expert in quantitative analysis and algorithmic strategies\",\n system_prompt=\"\"\"You are an expert Quantitative Strategy Agent specializing in data-driven investment strategies. Your primary responsibilities include:\n\n1. FACTOR ANALYSIS\n- Analyze and monitor factor performance:\n * Value\n * Momentum\n * Quality\n * Size\n * Low Volatility\n- Calculate factor exposures and correlations\n\n2. STATISTICAL ANALYSIS\n- Perform statistical arbitrage analysis\n- Calculate and monitor pair trading opportunities\n- Analyze market anomalies and inefficiencies\n- Develop mean reversion strategies\n\n3. RISK MODELING\n- Build and maintain risk models\n- Calculate portfolio optimization metrics\n- Monitor correlation matrices\n- Analyze tail risk and stress scenarios\n\n4. 
ALGORITHMIC STRATEGIES\n- Develop systematic trading strategies\n- Backtest and validate trading algorithms\n- Monitor strategy performance metrics\n- Optimize execution algorithms\n\nYour analysis should be purely quantitative, based on statistical evidence and mathematical models rather than subjective opinions.\"\"\",\n max_loops=1,\n llm=vllm,\n)\n\n# Portfolio Strategy Agent\nportfolio_strategist = Agent(\n agent_name=\"Portfolio-Strategy-Agent\",\n agent_description=\"Expert in portfolio management and asset allocation\",\n system_prompt=\"\"\"You are an expert Portfolio Strategy Agent specializing in portfolio construction and management. Your core responsibilities include:\n\n1. ASSET ALLOCATION\n- Develop strategic asset allocation frameworks\n- Recommend tactical asset allocation shifts\n- Optimize portfolio weightings\n- Balance risk and return objectives\n\n2. PORTFOLIO ANALYSIS\n- Calculate portfolio risk metrics\n- Monitor sector and factor exposures\n- Analyze portfolio correlation matrix\n- Track performance attribution\n\n3. RISK MANAGEMENT\n- Implement portfolio hedging strategies\n- Monitor and adjust position sizing\n- Set stop-loss and rebalancing rules\n- Develop drawdown protection strategies\n\n4. 
PORTFOLIO OPTIMIZATION\n- Perform efficient frontier analysis\n- Optimize for various objectives:\n * Maximum Sharpe Ratio\n * Minimum Volatility\n * Maximum Diversification\n- Consider transaction costs and taxes\n\nYour recommendations should focus on portfolio-level decisions that optimize risk-adjusted returns while meeting specific investment objectives.\"\"\",\n max_loops=1,\n llm=vllm,\n)\n\n# Create a list of all agents\nstock_analysis_agents = [\n technical_analyst,\n fundamental_analyst,\n sentiment_analyst,\n quant_analyst,\n portfolio_strategist\n]\n\nswarm = ConcurrentWorkflow(\n name=\"Stock-Analysis-Swarm\",\n description=\"A swarm of agents that analyze stocks and provide a comprehensive analysis of the current trends and opportunities.\",\n agents=stock_analysis_agents,\n)\n\nswarm.run(\"Analyze the best ETFs for gold and other similar commodities in volatile markets\")\n
"},{"location":"swarms/examples/vllm/#best-practices","title":"Best Practices","text":"Optimization Tips
Agent Design
Use clear role definitions
Include error handling guidelines
Resource Management
Monitor memory usage with large models
Implement proper cleanup procedures
Use batching for multiple queries
Output Handling
Implement proper logging
Format outputs consistently
Include error checking
Troubleshooting
Common issues you might encounter:
Memory Issues
Problem: VLLM consuming too much memory
Solution: Adjust batch sizes and model parameters
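One way to keep peak memory bounded is to cap how many prompts are in flight at once. The helper below is a minimal, framework-free sketch of that idea; the `run_batch` callable and `fake_batch` stand-in are hypothetical placeholders for something like a batched vLLM call, not a required Swarms signature:

```python
from typing import Callable, List

def run_in_chunks(
    tasks: List[str],
    run_batch: Callable[[List[str]], List[str]],
    batch_size: int = 2,
) -> List[str]:
    """Process tasks a few at a time so only batch_size prompts are in flight."""
    results: List[str] = []
    for start in range(0, len(tasks), batch_size):
        # Each slice is at most batch_size items, capping peak memory use
        results.extend(run_batch(tasks[start:start + batch_size]))
    return results

# Hypothetical stand-in for a batched model call
def fake_batch(batch: List[str]) -> List[str]:
    return [f"answer: {prompt}" for prompt in batch]

print(run_in_chunks(["q1", "q2", "q3"], fake_batch, batch_size=2))
```

Lowering `batch_size` trades throughput for a smaller memory footprint; tune it alongside model parameters such as maximum token counts.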
Agent Coordination
Problem: Agents providing conflicting information
Solution: Implement consensus mechanisms or priority rules
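A consensus mechanism can be as simple as a majority vote with a priority rule to break ties. The sketch below is a framework-free illustration over plain dictionaries of agent outputs; the agent names and signal labels are assumptions for the example, not Swarms APIs:

```python
from collections import Counter
from typing import Dict, List

# Hypothetical priority order: earlier agents win ties
PRIORITY: List[str] = [
    "Fundamental-Analysis-Agent",
    "Technical-Analysis-Agent",
    "Market-Sentiment-Agent",
]

def resolve(signals: Dict[str, str]) -> str:
    """Pick a final signal: majority vote, highest-priority agent breaks ties."""
    counts = Counter(signals.values())
    best = max(counts.values())
    tied = {signal for signal, count in counts.items() if count == best}
    if len(tied) == 1:
        return tied.pop()
    for agent in PRIORITY:  # fall back to the priority rule on a tie
        if signals.get(agent) in tied:
            return signals[agent]
    return sorted(tied)[0]  # deterministic last resort

print(resolve({
    "Technical-Analysis-Agent": "buy",
    "Market-Sentiment-Agent": "hold",
    "Fundamental-Analysis-Agent": "buy",
}))  # majority says "buy"
```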
Performance
Problem: Slow response times
Solution: Use proper batching and optimize model loading
Can I use different models for each agent? Yes, you can initialize multiple VLLM wrappers with different models for each agent. However, be mindful of memory usage.
How many agents can run concurrently? The number depends on your hardware resources. Start with 3-5 agents and scale based on performance.
Can I customize agent communication patterns? Yes, you can modify the ConcurrentWorkflow class or create custom workflows for specific communication patterns.
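As an illustration of a custom communication pattern, the class below chains agents sequentially instead of running them concurrently. It is a framework-free sketch: `SequentialPipeline` and the `Echo` stubs are hypothetical, standing in for vLLM-backed Swarms agents that expose a `run(task)` method:

```python
from typing import List

class SequentialPipeline:
    """Minimal sketch of a custom pattern: each agent receives the
    previous agent's output appended to the task."""

    def __init__(self, agents: List[object]) -> None:
        self.agents = agents

    def run(self, task: str) -> str:
        context = task
        for agent in self.agents:  # chain outputs instead of running concurrently
            context = agent.run(context)
        return context

# Stub agents standing in for real, model-backed agents
class Echo:
    def __init__(self, tag: str) -> None:
        self.tag = tag

    def run(self, task: str) -> str:
        return f"{task} -> {self.tag}"

print(SequentialPipeline([Echo("research"), Echo("analysis")]).run("topic"))
```

The same shape extends to fan-out/fan-in or voting patterns by changing how `run` routes context between agents.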
"},{"location":"swarms/examples/vllm/#advanced-configuration","title":"Advanced Configuration","text":"Extended Settings
vllm = VLLMWrapper(\n model_name=\"meta-llama/Llama-2-7b-chat-hf\",\n system_prompt=\"You are a helpful assistant.\",\n temperature=0.7,\n max_tokens=2048,\n top_p=0.95,\n)\n
"},{"location":"swarms/examples/vllm/#contributing","title":"Contributing","text":"Get Involved
We welcome contributions! Here's how you can help:
Additional Reading
Overview
vLLM is a high-performance and easy-to-use library for LLM inference and serving. This guide explains how to integrate vLLM with Swarms for efficient, production-grade language model deployment.
"},{"location":"swarms/examples/vllm_integration/#installation","title":"Installation","text":"Prerequisites
Before you begin, make sure you have Python 3.8+ installed on your system.
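A quick way to verify this prerequisite before installing, using only the standard library:

```python
import sys

# Fail fast if the interpreter is older than the documented minimum
if sys.version_info < (3, 8):
    raise SystemExit(f"Python 3.8+ required, found {sys.version.split()[0]}")
print("Python version OK")
```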
pip install -U vllm swarms\n
poetry add vllm swarms\n
"},{"location":"swarms/examples/vllm_integration/#basic-usage","title":"Basic Usage","text":"Here's a simple example of how to use vLLM with Swarms:
basic_usage.py\nfrom swarms.utils.vllm_wrapper import VLLMWrapper\n\n# Initialize the vLLM wrapper\nvllm = VLLMWrapper(\n model_name=\"meta-llama/Llama-2-7b-chat-hf\",\n system_prompt=\"You are a helpful assistant.\",\n temperature=0.7,\n max_tokens=4000\n)\n\n# Run inference\nresponse = vllm.run(\"What is the capital of France?\")\nprint(response)\n
"},{"location":"swarms/examples/vllm_integration/#vllmwrapper-class","title":"VLLMWrapper Class","text":"Class Overview
The VLLMWrapper
class provides a convenient interface for working with vLLM models.
Parameter | Type | Description | Default
model_name | str | Name of the model to use | \"meta-llama/Llama-2-7b-chat-hf\"
system_prompt | str | System prompt to use | None
stream | bool | Whether to stream the output | False
temperature | float | Sampling temperature | 0.5
max_tokens | int | Maximum number of tokens to generate | 4000"},{"location":"swarms/examples/vllm_integration/#example-with-custom-parameters","title":"Example with Custom Parameters","text":"custom_parameters.py\nvllm = VLLMWrapper(\n model_name=\"meta-llama/Llama-2-13b-chat-hf\",\n system_prompt=\"You are an expert in artificial intelligence.\",\n temperature=0.8,\n max_tokens=2000\n)\n
"},{"location":"swarms/examples/vllm_integration/#integration-with-agents","title":"Integration with Agents","text":"You can easily integrate vLLM with Swarms agents for more complex workflows:
agent_integration.py\nfrom swarms import Agent\nfrom swarms.utils.vllm_wrapper import VLLMWrapper\n\n# Initialize vLLM\nvllm = VLLMWrapper(\n model_name=\"meta-llama/Llama-2-7b-chat-hf\",\n system_prompt=\"You are a helpful assistant.\"\n)\n\n# Create an agent with vLLM\nagent = Agent(\n agent_name=\"Research-Agent\",\n agent_description=\"Expert in conducting research and analysis\",\n system_prompt=\"\"\"You are an expert research agent. Your tasks include:\n 1. Analyzing complex topics\n 2. Providing detailed summaries\n 3. Making data-driven recommendations\"\"\",\n llm=vllm,\n max_loops=1\n)\n\n# Run the agent\nresponse = agent.run(\"Research the impact of AI on healthcare\")\n
"},{"location":"swarms/examples/vllm_integration/#advanced-features","title":"Advanced Features","text":""},{"location":"swarms/examples/vllm_integration/#batch-processing","title":"Batch Processing","text":"Performance Optimization
Use batch processing for efficient handling of multiple tasks simultaneously.
batch_processing.py\ntasks = [\n \"What is machine learning?\",\n \"Explain neural networks\",\n \"Describe deep learning\"\n]\n\nresults = vllm.batched_run(tasks, batch_size=3)\n
"},{"location":"swarms/examples/vllm_integration/#error-handling","title":"Error Handling","text":"Error Management
Always implement proper error handling in production environments.
error_handling.py\nfrom loguru import logger\n\ntry:\n response = vllm.run(\"Complex task\")\nexcept Exception as error:\n logger.error(f\"Error occurred: {error}\")\n
"},{"location":"swarms/examples/vllm_integration/#best-practices","title":"Best Practices","text":"Recommended Practices
Model Selection, System Resources, Prompt Engineering, Error Handling, Performance. Here's an example of creating a multi-agent system using vLLM:
multi_agent_system.pyfrom swarms import Agent, ConcurrentWorkflow\nfrom swarms.utils.vllm_wrapper import VLLMWrapper\n\n# Initialize vLLM\nvllm = VLLMWrapper(\n model_name=\"meta-llama/Llama-2-7b-chat-hf\",\n system_prompt=\"You are a helpful assistant.\"\n)\n\n# Create specialized agents\nresearch_agent = Agent(\n agent_name=\"Research-Agent\",\n agent_description=\"Expert in research\",\n system_prompt=\"You are a research expert.\",\n llm=vllm\n)\n\nanalysis_agent = Agent(\n agent_name=\"Analysis-Agent\",\n agent_description=\"Expert in analysis\",\n system_prompt=\"You are an analysis expert.\",\n llm=vllm\n)\n\n# Create a workflow\nagents = [research_agent, analysis_agent]\nworkflow = ConcurrentWorkflow(\n name=\"Research-Analysis-Workflow\",\n description=\"Comprehensive research and analysis workflow\",\n agents=agents\n)\n\n# Run the workflow\nresult = workflow.run(\"Analyze the impact of renewable energy\")\n
"},{"location":"swarms/examples/xai/","title":"Agent with XAI","text":"1. Add your XAI_API_KEY to the .env file.
2. Select your model_name, e.g. xai/grok-beta (follows LiteLLM conventions).
3. Execute your agent!
from swarms import Agent\nimport os\nfrom dotenv import load_dotenv\n\nload_dotenv()\n\n# Initialize the agent with ChromaDB memory\nagent = Agent(\n agent_name=\"Financial-Analysis-Agent\",\n model_name=\"xai/grok-beta\",\n system_prompt=\"Agent system prompt here\",\n agent_description=\"Agent performs financial analysis.\",\n)\n\n# Run a query\nagent.run(\"What are the components of a startup's stock incentive equity plan?\")\n
"},{"location":"swarms/examples/yahoo_finance/","title":"Swarms Tools Example with Yahoo Finance","text":"pip3 install swarms swarms-tools
Add your OPENAI_API_KEY to your .env file.
yahoo_finance_agent.py
from swarms import Agent\nfrom swarms.prompts.finance_agent_sys_prompt import (\n FINANCIAL_AGENT_SYS_PROMPT,\n)\nfrom swarms_tools import (\n yahoo_finance_api,\n)\n\n# Initialize the agent\nagent = Agent(\n agent_name=\"Financial-Analysis-Agent\",\n agent_description=\"Personal finance advisor agent\",\n system_prompt=FINANCIAL_AGENT_SYS_PROMPT,\n max_loops=1,\n model_name=\"gpt-4o\",\n dynamic_temperature_enabled=True,\n user_name=\"swarms_corp\",\n retry_attempts=3,\n context_length=8192,\n return_step_meta=False,\n output_type=\"str\", # one of \"str\", \"json\", \"dict\", \"csv\", or \"yaml\"\n auto_generate_prompt=False, # Auto-generate the prompt from the agent's name, description, and system prompt\n max_tokens=4000, # max output tokens\n saved_state_path=\"agent_00.json\",\n interactive=False,\n tools=[yahoo_finance_api],\n)\n\nagent.run(\"Analyze the latest metrics for Nvidia\")\n# Less than 30 lines of code.\n
"},{"location":"swarms/framework/","title":"Index","text":""},{"location":"swarms/framework/#swarms-framework-conceptual-breakdown","title":"Swarms Framework Conceptual Breakdown","text":"The swarms
framework is a sophisticated structure designed to orchestrate the collaborative work of multiple agents in a hierarchical manner. This breakdown provides a conceptual and visual representation of the framework, highlighting the interactions between models, tools, memory, agents, and swarms.
The framework can be visualized as a multi-layered hierarchy:
Below are visual graphs illustrating the hierarchical and tree structure of the swarms
framework.
graph TD;\n Agents --> Swarm\n subgraph Agents_Collection\n Agent1\n Agent2\n Agent3\n end\n subgraph Individual_Agents\n Agent1 --> Models\n Agent1 --> Tools\n Agent1 --> Memory\n Agent2 --> Models\n Agent2 --> Tools\n Agent2 --> Memory\n Agent3 --> Models\n Agent3 --> Tools\n Agent3 --> Memory\n end
"},{"location":"swarms/framework/#3-multiple-agents-form-a-swarm","title":"3. Multiple Agents Form a Swarm","text":"graph TD;\n Swarm1 --> Struct\n Swarm2 --> Struct\n Swarm3 --> Struct\n subgraph Swarms_Collection\n Swarm1\n Swarm2\n Swarm3\n end\n subgraph Individual_Swarms\n Swarm1 --> Agent1\n Swarm1 --> Agent2\n Swarm1 --> Agent3\n Swarm2 --> Agent4\n Swarm2 --> Agent5\n Swarm2 --> Agent6\n Swarm3 --> Agent7\n Swarm3 --> Agent8\n Swarm3 --> Agent9\n end
"},{"location":"swarms/framework/#4-structs-organizing-multiple-swarms","title":"4. Structs Organizing Multiple Swarms","text":"graph TD;\n Struct --> Swarms_Collection\n subgraph High_Level_Structs\n Struct1\n Struct2\n Struct3\n end\n subgraph Struct1\n Swarm1\n Swarm2\n end\n subgraph Struct2\n Swarm3\n end\n subgraph Struct3\n Swarm4\n Swarm5\n end
"},{"location":"swarms/framework/#directory-breakdown","title":"Directory Breakdown","text":"The directory structure of the swarms
framework is organized to support its hierarchical architecture:
swarms/\n\u251c\u2500\u2500 agents/\n\u251c\u2500\u2500 artifacts/\n\u251c\u2500\u2500 marketplace/\n\u251c\u2500\u2500 memory/\n\u251c\u2500\u2500 models/\n\u251c\u2500\u2500 prompts/\n\u251c\u2500\u2500 schemas/\n\u251c\u2500\u2500 structs/\n\u251c\u2500\u2500 telemetry/\n\u251c\u2500\u2500 tools/\n\u251c\u2500\u2500 utils/\n\u2514\u2500\u2500 __init__.py\n
"},{"location":"swarms/framework/#summary","title":"Summary","text":"The swarms
framework is designed to facilitate complex multi-agent interactions through a structured and layered approach. By leveraging foundational components like models, tools, and memory, individual agents are empowered to perform specialized tasks. These agents are then coordinated within swarms to achieve collective goals, and swarms are managed within high-level structs to orchestrate sophisticated workflows.
This hierarchical design ensures scalability, flexibility, and robustness, making the swarms
framework a powerful tool for various applications in AI, data analysis, optimization, and beyond.
In the Swarms framework, agents are designed to perform tasks autonomously by leveraging large language models (LLMs), various tools, and long-term memory systems. This guide provides an extensive conceptual walkthrough of how an agent operates, detailing the sequence of actions it takes to complete a task and how it utilizes its internal components.
"},{"location":"swarms/framework/agents_explained/#agent-components-overview","title":"Agent Components Overview","text":"The workflow of an agent can be divided into several stages: task initiation, initial LLM processing, tool usage, memory interaction, and final LLM processing.
"},{"location":"swarms/framework/agents_explained/#stage-1-task-initiation","title":"Stage 1: Task Initiation","text":"graph TD\n A[Task Initiation] -->|Receives Task| B[Initial LLM Processing]\n B -->|Interprets Task| C[Tool Usage]\n C -->|Calls Tools| D[Function 1]\n C -->|Calls Tools| E[Function 2]\n D -->|Returns Data| C\n E -->|Returns Data| C\n C -->|Provides Data| F[Memory Interaction]\n F -->|Stores and Retrieves Data| G[RAG System]\n G -->|ChromaDB/Pinecone| H[Enhanced Data]\n F -->|Provides Enhanced Data| I[Final LLM Processing]\n I -->|Generates Final Response| J[Output]
"},{"location":"swarms/framework/agents_explained/#explanation-of-each-stage","title":"Explanation of Each Stage","text":""},{"location":"swarms/framework/agents_explained/#stage-1-task-initiation_1","title":"Stage 1: Task Initiation","text":"The Swarms framework's agents are powerful units that combine LLMs, tools, and long-term memory systems to perform complex tasks efficiently. By leveraging function calling for tools and RAG systems like ChromaDB and Pinecone, agents can enhance their capabilities and deliver highly relevant and accurate results. This conceptual guide and walkthrough provide a detailed understanding of how agents operate within the Swarms framework, enabling the development of sophisticated and collaborative AI systems.
"},{"location":"swarms/framework/code_cleanliness/","title":"Code Cleanliness in Python: A Comprehensive Guide","text":"Code cleanliness is an essential aspect of software development that ensures code is easy to read, understand, and maintain. Clean code leads to fewer bugs, easier debugging, and more efficient collaboration among developers. This blog article delves into the principles of writing clean Python code, emphasizing the use of type annotations, docstrings, and the Loguru logging library. We'll explore the importance of each component and provide practical examples to illustrate best practices.
"},{"location":"swarms/framework/code_cleanliness/#table-of-contents","title":"Table of Contents","text":"Code cleanliness refers to the practice of writing code that is easy to read, understand, and maintain. Clean code follows consistent conventions and is organized logically, making it easier for developers to collaborate and for new team members to get up to speed quickly.
"},{"location":"swarms/framework/code_cleanliness/#why-clean-code-matters","title":"Why Clean Code Matters","text":"Type annotations in Python provide a way to specify the types of variables, function arguments, and return values. They enhance code readability and help catch type-related errors early in the development process.
"},{"location":"swarms/framework/code_cleanliness/#benefits-of-type-annotations","title":"Benefits of Type Annotations","text":"from typing import List\n\ndef calculate_average(numbers: List[float]) -> float:\n \"\"\"\n Calculates the average of a list of numbers.\n\n Args:\n numbers (List[float]): A list of numbers.\n\n Returns:\n float: The average of the numbers.\n \"\"\"\n return sum(numbers) / len(numbers)\n
In this example, the calculate_average
function takes a list of floats as input and returns a float. The type annotations make it clear what types are expected and returned, enhancing readability and maintainability.
Docstrings are an essential part of writing clean code in Python. They provide inline documentation for modules, classes, methods, and functions. Effective docstrings improve code readability and make it easier for other developers to understand and use your code.
"},{"location":"swarms/framework/code_cleanliness/#benefits-of-docstrings","title":"Benefits of Docstrings","text":"def calculate_factorial(n: int) -> int:\n \"\"\"\n Calculates the factorial of a given non-negative integer.\n\n Args:\n n (int): The non-negative integer to calculate the factorial of.\n\n Returns:\n int: The factorial of the given number.\n\n Raises:\n ValueError: If the input is a negative integer.\n \"\"\"\n if n < 0:\n raise ValueError(\"Input must be a non-negative integer.\")\n factorial = 1\n for i in range(1, n + 1):\n factorial *= i\n return factorial\n
In this example, the docstring clearly explains the purpose of the calculate_factorial
function, its arguments, return value, and the exception it may raise.
Proper code structure is crucial for code cleanliness. A well-structured codebase is easier to navigate, understand, and maintain. Here are some best practices for structuring your Python code:
"},{"location":"swarms/framework/code_cleanliness/#organizing-code-into-modules-and-packages","title":"Organizing Code into Modules and Packages","text":"Organize your code into modules and packages to group related functionality together. This makes it easier to find and manage code.
# project/\n# \u251c\u2500\u2500 main.py\n# \u251c\u2500\u2500 utils/\n# \u2502 \u251c\u2500\u2500 __init__.py\n# \u2502 \u251c\u2500\u2500 file_utils.py\n# \u2502 \u2514\u2500\u2500 math_utils.py\n# \u2514\u2500\u2500 models/\n# \u251c\u2500\u2500 __init__.py\n# \u251c\u2500\u2500 user.py\n# \u2514\u2500\u2500 product.py\n
"},{"location":"swarms/framework/code_cleanliness/#using-functions-and-classes","title":"Using Functions and Classes","text":"Break down your code into small, reusable functions and classes. This makes your code more modular and easier to test.
class User:\n def __init__(self, name: str, age: int):\n \"\"\"\n Initializes a new user.\n\n Args:\n name (str): The name of the user.\n age (int): The age of the user.\n \"\"\"\n self.name = name\n self.age = age\n\n def greet(self) -> str:\n \"\"\"\n Greets the user.\n\n Returns:\n str: A greeting message.\n \"\"\"\n return f\"Hello, {self.name}!\"\n
"},{"location":"swarms/framework/code_cleanliness/#keeping-functions-small","title":"Keeping Functions Small","text":"Functions should do one thing and do it well. Keep functions small and focused on a single task.
def save_user(user: User, filename: str) -> None:\n \"\"\"\n Saves user data to a file.\n\n Args:\n user (User): The user object to save.\n filename (str): The name of the file to save the user data to.\n \"\"\"\n with open(filename, 'w') as file:\n file.write(f\"{user.name},{user.age}\")\n
"},{"location":"swarms/framework/code_cleanliness/#5-error-handling-and-logging-with-loguru","title":"5. Error Handling and Logging with Loguru","text":"Effective error handling and logging are critical components of clean code. They help you manage and diagnose issues that arise during the execution of your code.
"},{"location":"swarms/framework/code_cleanliness/#error-handling-best-practices","title":"Error Handling Best Practices","text":"Catch specific exceptions rather than a bare except
clause. Use finally
blocks or context managers to ensure that resources are properly cleaned up.
def divide_numbers(numerator: float, denominator: float) -> float:\n \"\"\"\n Divides the numerator by the denominator.\n\n Args:\n numerator (float): The number to be divided.\n denominator (float): The number to divide by.\n\n Returns:\n float: The result of the division.\n\n Raises:\n ValueError: If the denominator is zero.\n \"\"\"\n if denominator == 0:\n raise ValueError(\"The denominator cannot be zero.\")\n return numerator / denominator\n
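Both a finally clause and a context manager give the same cleanup guarantee. A minimal standard-library sketch (no Swarms APIs involved) showing that cleanup runs even when the body raises:

```python
from contextlib import contextmanager

@contextmanager
def managed_resource(name: str):
    """Yields an event log; guarantees cleanup even when the body raises."""
    log = [f"open {name}"]
    try:
        yield log
    finally:
        log.append(f"close {name}")  # runs on success and on error alike

events: list = []
try:
    with managed_resource("conn") as log:
        events = log
        raise RuntimeError("boom")
except RuntimeError:
    pass

print(events)  # ['open conn', 'close conn'] -- cleanup ran despite the error
```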
"},{"location":"swarms/framework/code_cleanliness/#logging-with-loguru","title":"Logging with Loguru","text":"Loguru is a powerful logging library for Python that makes logging simple and enjoyable. It provides a clean and easy-to-use API for logging messages with different severity levels.
"},{"location":"swarms/framework/code_cleanliness/#installing-loguru","title":"Installing Loguru","text":"pip install loguru\n
"},{"location":"swarms/framework/code_cleanliness/#basic-usage-of-loguru","title":"Basic Usage of Loguru","text":"from loguru import logger\n\nlogger.debug(\"This is a debug message\")\nlogger.info(\"This is an info message\")\nlogger.warning(\"This is a warning message\")\nlogger.error(\"This is an error message\")\nlogger.critical(\"This is a critical message\")\n
"},{"location":"swarms/framework/code_cleanliness/#example-of-logging-in-a-function","title":"Example of Logging in a Function","text":"from loguru import logger\n\ndef fetch_data(url: str) -> str:\n \"\"\"\n Fetches data from a given URL and returns it as a string.\n\n Args:\n url (str): The URL to fetch data from.\n\n Returns:\n str: The data fetched from the URL.\n\n Raises:\n requests.exceptions.RequestException: If there is an error with the request.\n \"\"\"\n try:\n logger.info(f\"Fetching data from {url}\")\n response = requests.get(url)\n response.raise_for_status()\n logger.info(\"Data fetched successfully\")\n return response.text\n except requests.exceptions.RequestException as e:\n logger.error(f\"Error fetching data: {e}\")\n raise\n
In this example, Loguru is used to log messages at different severity levels. The fetch_data
function logs informational messages when fetching data and logs an error message if an exception is raised.
Refactoring is the process of restructuring existing code without changing its external behavior. It is an essential practice for maintaining clean code. Refactoring helps improve code readability, reduce complexity, and eliminate redundancy.
"},{"location":"swarms/framework/code_cleanliness/#identifying-code-smells","title":"Identifying Code Smells","text":"Code smells are indicators of potential issues in the code that may require refactoring. Common code smells include: 1. Long Methods: Methods that are too long and do too many things. 2. Duplicated Code: Code that is duplicated in multiple places. 3. Large Classes: Classes that have too many responsibilities. 4. Poor Naming: Variables, functions, or classes with unclear or misleading names.
"},{"location":"swarms/framework/code_cleanliness/#refactoring-techniques","title":"Refactoring Techniques","text":"better readability.
"},{"location":"swarms/framework/code_cleanliness/#example-of-refactoring","title":"Example of Refactoring","text":"Before refactoring:
def process_data(data: List[int]) -> int:\n total = 0\n for value in data:\n if value > 0:\n total += value\n return total\n
After refactoring:
def filter_positive_values(data: List[int]) -> List[int]:\n \"\"\"\n Filters the positive values from the input data.\n\n Args:\n data (List[int]): The input data.\n\n Returns:\n List[int]: A list of positive values.\n \"\"\"\n return [value for value in data if value > 0]\n\ndef sum_values(values: List[int]) -> int:\n \"\"\"\n Sums the values in the input list.\n\n Args:\n values (List[int]): A list of values to sum.\n\n Returns:\n int: The sum of the values.\n \"\"\"\n return sum(values)\n\ndef process_data(data: List[int]) -> int:\n \"\"\"\n Processes the data by filtering positive values and summing them.\n\n Args:\n data (List[int]): The input data.\n\n Returns:\n int: The sum of the positive values.\n \"\"\"\n positive_values = filter_positive_values(data)\n return sum_values(positive_values)\n
In this example, the process_data
function is refactored into smaller, more focused functions. This improves readability and maintainability.
def read_file(file_path: str) -> str:\n \"\"\"\n Reads the content of a file and returns it as a string.\n\n Args:\n file_path (str): The path to the file to read.\n\n Returns:\n str: The content of the file.\n\n Raises:\n FileNotFoundError: If the file does not exist.\n IOError: If there is an error reading the file.\n \"\"\"\n try:\n with open(file_path, 'r') as file:\n return file.read()\n except FileNotFoundError as e:\n logger.error(f\"File not found: {file_path}\")\n raise\n except IOError as e:\n logger.error(f\"Error reading file: {file_path}\")\n raise\n
"},{"location":"swarms/framework/code_cleanliness/#example-2-fetching-data-from-a-url","title":"Example 2: Fetching Data from a URL","text":"import requests\nfrom loguru import logger\n\ndef fetch_data(url: str) -> str:\n \"\"\"\n Fetches data from a given URL and returns it as a string.\n\n Args:\n url (str): The URL to fetch data from.\n\n Returns:\n str: The data fetched from the URL.\n\n Raises:\n requests.exceptions.RequestException: If there is an error with the request.\n \"\"\"\n try:\n logger.info(f\"Fetching data from {url}\")\n response = requests.get(url)\n response.raise_for_status()\n logger.info(\"Data fetched successfully\")\n return response.text\n except requests.exceptions.RequestException as e:\n logger.error(f\"Error fetching data: {e}\")\n raise\n
"},{"location":"swarms/framework/code_cleanliness/#example-3-calculating-factorial","title":"Example 3: Calculating Factorial","text":"def calculate_factorial(n: int) -> int:\n \"\"\"\n Calculates the factorial of a given non-negative integer.\n\n Args:\n n (int): The non-negative integer to calculate the factorial of.\n\n Returns:\n int: The factorial of the given number.\n\n Raises:\n ValueError: If the input is a negative integer.\n \"\"\"\n if n < 0:\n raise ValueError(\"Input must be a non-negative integer.\")\n factorial = 1\n for i in range(1, n + 1):\n factorial *= i\n return factorial\n
"},{"location":"swarms/framework/code_cleanliness/#8-conclusion","title":"8. Conclusion","text":"Writing clean code in Python is crucial for developing maintainable, readable, and error-free software. By using type annotations, writing effective docstrings, structuring your code properly, and leveraging logging with Loguru, you can significantly improve the quality of your codebase.
Remember to refactor your code regularly to eliminate code smells and improve readability. Clean code not only makes your life as a developer easier but also enhances collaboration and reduces the likelihood of bugs.
By following the principles and best practices outlined in this article, you'll be well on your way to writing clean, maintainable Python code.
"},{"location":"swarms/framework/concept/","title":"Concept","text":"To create a comprehensive overview of the Swarms framework, we can break it down into key concepts such as models, agents, tools, Retrieval-Augmented Generation (RAG) systems, and swarm systems. Below are conceptual explanations of these components along with mermaid diagrams to illustrate their interactions.
"},{"location":"swarms/framework/concept/#swarms-framework-overview","title":"Swarms Framework Overview","text":""},{"location":"swarms/framework/concept/#1-models","title":"1. Models","text":"Models are the core component of the Swarms framework, representing the neural networks and machine learning models used to perform various tasks. These can be Large Language Models (LLMs), vision models, or any other AI models.
"},{"location":"swarms/framework/concept/#2-agents","title":"2. Agents","text":"Agents are autonomous units that use models to perform specific tasks. In the Swarms framework, agents can leverage tools and interact with RAG systems.
Swarm systems involve multiple agents working collaboratively to achieve complex tasks. These systems coordinate and communicate among agents to ensure efficient and effective task execution.
"},{"location":"swarms/framework/concept/#mermaid-diagrams","title":"Mermaid Diagrams","text":""},{"location":"swarms/framework/concept/#models","title":"Models","text":"graph TD\n A[Model] -->|Uses| B[Data]\n A -->|Trains| C[Algorithm]\n A -->|Outputs| D[Predictions]
"},{"location":"swarms/framework/concept/#agents-llms-with-tools-and-rag-systems","title":"Agents: LLMs with Tools and RAG Systems","text":"graph TD\n A[Agent] -->|Uses| B[LLM]\n A -->|Interacts with| C[Tool]\n C -->|Provides Data to| B\n A -->|Queries| D[RAG System]\n D -->|Retrieves Information from| E[Database]\n D -->|Generates Responses with| F[Generative Model]
"},{"location":"swarms/framework/concept/#swarm-systems","title":"Swarm Systems","text":"graph TD\n A[Swarm System]\n A -->|Coordinates| B[Agent 1]\n A -->|Coordinates| C[Agent 2]\n A -->|Coordinates| D[Agent 3]\n B -->|Communicates with| C\n C -->|Communicates with| D\n D -->|Communicates with| B\n B -->|Performs Task| E[Task 1]\n C -->|Performs Task| F[Task 2]\n D -->|Performs Task| G[Task 3]\n E -->|Reports to| A\n F -->|Reports to| A\n G -->|Reports to| A
"},{"location":"swarms/framework/concept/#conceptualization","title":"Conceptualization","text":"The Swarms framework leverages models, agents, tools, RAG systems, and swarm systems to create a robust, collaborative environment for executing complex AI tasks. By coordinating multiple agents and enhancing their capabilities with tools and retrieval-augmented generation, Swarms can handle sophisticated and multi-faceted applications effectively.
"},{"location":"swarms/framework/reference/","title":"API Reference Documentation","text":""},{"location":"swarms/framework/reference/#swarms__init__","title":"swarms.__init__
","text":"Description: This module initializes the Swarms package by concurrently executing the bootup process and activating Sentry for telemetry. It imports various components from other modules within the Swarms package.
Imports: - concurrent.futures
: A module that provides a high-level interface for asynchronously executing callables.
swarms.telemetry.bootup
: Contains the bootup
function for initializing telemetry.
swarms.telemetry.sentry_active
: Contains the activate_sentry
function to enable Sentry for error tracking.
Other modules from the Swarms package are imported for use, including agents, artifacts, prompts, structs, telemetry, tools, utils, and schemas.
Concurrent Execution: The module uses ThreadPoolExecutor
to run the bootup
and activate_sentry
functions concurrently.
import concurrent.futures\nfrom swarms.telemetry.bootup import bootup # noqa: E402, F403\nfrom swarms.telemetry.sentry_active import activate_sentry\n\n# Use ThreadPoolExecutor to run bootup and activate_sentry concurrently\nwith concurrent.futures.ThreadPoolExecutor(max_workers=2) as executor:\n executor.submit(bootup)\n executor.submit(activate_sentry)\n\nfrom swarms.agents import * # noqa: E402, F403\nfrom swarms.artifacts import * # noqa: E402, F403\nfrom swarms.prompts import * # noqa: E402, F403\nfrom swarms.structs import * # noqa: E402, F403\nfrom swarms.telemetry import * # noqa: E402, F403\nfrom swarms.tools import * # noqa: E402, F403\nfrom swarms.utils import * # noqa: E402, F403\nfrom swarms.schemas import * # noqa: E402, F403\n
Note: There are no documentable functions or classes within this module itself, as it primarily serves to execute initial setup tasks and import other modules.
"},{"location":"swarms/framework/reference/#swarmsartifactsbase_artifact","title":"swarms.artifacts.base_artifact
","text":"Description: This module defines the BaseArtifact
abstract base class for representing artifacts in the system. It provides methods to convert artifact values to various formats and enforces the implementation of an addition method for subclasses.
Imports: - json
: A module for parsing JSON data.
uuid
: A module for generating unique identifiers.
ABC
, abstractmethod
: Tools from the abc
module to define abstract base classes.
dataclass
: A decorator for creating data classes.
Any
: A type hint for any data type.
BaseArtifact
","text":"Description: An abstract base class for artifacts that includes common attributes and methods for handling artifact values.
Attributes: - id
(str
): A unique identifier for the artifact, generated if not provided.
name
(str
): The name of the artifact. If not provided, it defaults to the artifact's ID.
value
(Any
): The value associated with the artifact.
Methods:
__post_init__(self) -> None
Description: Initializes the artifact, setting the id
and name
attributes if they are not provided.
Parameters: None.
Return: None.
value_to_bytes(cls, value: Any) -> bytes
Description: Converts the given value to bytes.
Parameters:
value
(Any
): The value to convert.
Return:
(bytes
): The value converted to bytes.
value_to_dict(cls, value: Any) -> dict
Description: Converts the given value to a dictionary.
Parameters:
value
(Any
): The value to convert.
Return:
(dict
): The value converted to a dictionary.
to_text(self) -> str
Description: Converts the artifact's value to a text representation.
Parameters: None.
Return:
(str
): The string representation of the artifact's value.
__str__(self) -> str
Description: Returns a string representation of the artifact.
Parameters: None.
Return:
(str
): The string representation of the artifact.
__bool__(self) -> bool
Description: Returns the boolean value of the artifact based on its value.
Parameters: None.
Return:
(bool
): The boolean value of the artifact.
__len__(self) -> int
Description: Returns the length of the artifact's value.
Parameters: None.
Return:
(int
): The length of the artifact's value.
__add__(self, other: BaseArtifact) -> BaseArtifact
Description: Abstract method for adding two artifacts together. Must be implemented by subclasses.
Parameters:
other
(BaseArtifact
): The other artifact to add.
Return:
(BaseArtifact
): The result of adding the two artifacts.
Example:
from swarms.artifacts.base_artifact import BaseArtifact\n\nclass MyArtifact(BaseArtifact):\n def __add__(self, other: BaseArtifact) -> BaseArtifact:\n return MyArtifact(id=self.id, name=self.name, value=self.value + other.value)\n\nartifact1 = MyArtifact(id=\"123\", name=\"Artifact1\", value=10)\nartifact2 = MyArtifact(id=\"456\", name=\"Artifact2\", value=20)\nresult = artifact1 + artifact2\nprint(result) # Output: MyArtifact with the combined value 30\n
"},{"location":"swarms/framework/reference/#swarmsartifactstext_artifact","title":"swarms.artifacts.text_artifact
","text":"Description: This module defines the TextArtifact
class, which represents a text-based artifact. It extends the BaseArtifact
class and includes attributes and methods specific to handling text values, including encoding options, embedding generation, and token counting.
Imports: - dataclass
, field
: Decorators and functions from the dataclasses
module for creating data classes.
Callable
: A type hint indicating a callable object from the typing
module.
BaseArtifact
: The abstract base class for artifacts, imported from swarms.artifacts.base_artifact
.
TextArtifact
","text":"Description: Represents a text artifact with additional functionality for handling text values, encoding, and embeddings.
Attributes: - value
(str
): The text value of the artifact.
encoding
(str
, optional): The encoding of the text (default is \"utf-8\").
encoding_error_handler
(str
, optional): The error handler for encoding errors (default is \"strict\").
tokenizer
(Callable
, optional): A callable for tokenizing the text value.
_embedding
(list[float]
): The embedding of the text artifact (default is an empty list).
Properties: - embedding
(Optional[list[float]]
): Returns the embedding of the text artifact if available; otherwise, returns None
.
Methods:
__add__(self, other: BaseArtifact) -> TextArtifact
Description: Concatenates the text value of this artifact with the text value of another artifact.
Parameters:
other
(BaseArtifact
): The other artifact to concatenate with.
Return:
(TextArtifact
): A new TextArtifact
instance with the concatenated value.
__bool__(self) -> bool
Description: Checks if the text value of the artifact is non-empty.
Parameters: None.
Return:
(bool
): True
if the text value is non-empty; otherwise, False
.
generate_embedding(self, model) -> list[float] | None
Description: Generates the embedding of the text artifact using a given embedding model.
Parameters:
model
: An embedding model that provides the embed_string
method.
Return:
(list[float] | None
): The generated embedding as a list of floats, or None
if the embedding could not be generated.
token_count(self) -> int
Description: Counts the number of tokens in the text artifact using a specified tokenizer.
Parameters: None.
Return:
(int
): The number of tokens in the text value.
to_bytes(self) -> bytes
Description: Converts the text value of the artifact to bytes using the specified encoding and error handler.
Parameters: None.
Return:
(bytes
): The text value encoded as bytes.
Example:
from swarms.artifacts.text_artifact import TextArtifact\n\n# Create a TextArtifact instance\ntext_artifact = TextArtifact(value=\"Hello, World!\")\n\n# Generate embedding (assuming an appropriate model is provided)\n# embedding = text_artifact.generate_embedding(model)\n\n# Count tokens in the text artifact (requires the tokenizer attribute to be set)\n# token_count = text_artifact.token_count()\n\n# Convert to bytes\nbytes_value = text_artifact.to_bytes()\n\nprint(text_artifact) # Output: Hello, World!\nprint(bytes_value) # Output: b'Hello, World!'\n
"},{"location":"swarms/framework/reference/#swarmsartifactsmain_artifact","title":"swarms.artifacts.main_artifact
","text":"Description: This module defines the Artifact
class, which represents a file artifact with versioning capabilities. It allows for the creation, editing, saving, loading, and exporting of file artifacts, as well as managing their version history. The module also includes a FileVersion
class to encapsulate the details of each version of the artifact.
Imports: - time
: A module for time-related functions.
logger
: A logging utility from swarms.utils.loguru_logger
.
os
: A module providing a way of using operating system-dependent functionality.
json
: A module for parsing JSON data.
List
, Union
, Dict
, Any
: Type hints from the typing
module.
BaseModel
, Field
, validator
: Tools from the pydantic
module for data validation and settings management.
datetime
: A module for manipulating dates and times.
FileVersion
","text":"Description: Represents a version of a file with its content and timestamp.
Attributes: - version_number
(int
): The version number of the file.
content
(str
): The content of the file version.
timestamp
(str
): The timestamp of the file version, formatted as \"YYYY-MM-DD HH:MM:SS\".
Methods:
__str__(self) -> str
Description: Returns a string representation of the file version.
Parameters: None.
Return:
(str
): A formatted string containing the version number, timestamp, and content.
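Example: The attributes and __str__ behavior described above can be sketched as follows. The library defines FileVersion as a Pydantic model; a stdlib dataclass is used here for brevity, and the exact formatting string is an assumption.

```python
from dataclasses import dataclass

@dataclass
class FileVersion:
    # Attributes as listed above
    version_number: int
    content: str
    timestamp: str  # "YYYY-MM-DD HH:MM:SS"

    def __str__(self) -> str:
        # Hypothetical formatting; the library's exact string may differ
        return f"Version {self.version_number} ({self.timestamp}): {self.content}"

v = FileVersion(version_number=1, content="Initial content", timestamp="2024-01-01 12:00:00")
print(str(v))
```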
Artifact
","text":"Description: Represents a file artifact with attributes to manage its content and version history.
Attributes: - file_path
(str
): The path to the file.
file_type
(str
): The type of the file (e.g., \".txt\").
contents
(str
): The contents of the file.
versions
(List[FileVersion]
): The list of file versions.
edit_count
(int
): The number of times the file has been edited.
Methods:
validate_file_type(cls, v, values) -> str
Description: Validates the file type based on the file extension.
Parameters:
v
(str
): The file type to validate.
values
(dict
): A dictionary of other field values.
Return:
(str
): The validated file type.
create(self, initial_content: str) -> None
Description: Creates a new file artifact with the initial content.
Parameters:
initial_content
(str
): The initial content to set for the artifact.
Return: None.
edit(self, new_content: str) -> None
Description: Edits the artifact's content, tracking the change in the version history.
Parameters:
new_content
(str
): The new content to set for the artifact.
Return: None.
save(self) -> None
Description: Saves the current artifact's contents to the specified file path.
Parameters: None.
Return: None.
load(self) -> None
Description: Loads the file contents from the specified file path into the artifact.
Parameters: None.
Return: None.
get_version(self, version_number: int) -> Union[FileVersion, None]
Description: Retrieves a specific version of the artifact by its version number.
Parameters:
version_number
(int
): The version number to retrieve.
Return:
(FileVersion | None
): The requested version if found; otherwise, None
.
get_contents(self) -> str
Description: Returns the current contents of the artifact as a string.
Parameters: None.
Return:
(str
): The current contents of the artifact.
get_version_history(self) -> str
Description: Returns the version history of the artifact as a formatted string.
Parameters: None.
Return:
(str
): A formatted string containing the version history.
export_to_json(self, file_path: str) -> None
Description: Exports the artifact to a JSON file.
Parameters:
file_path
(str
): The path to the JSON file where the artifact will be saved.
Return: None.
import_from_json(cls, file_path: str) -> \"Artifact\"
Description: Imports an artifact from a JSON file.
Parameters:
file_path
(str
): The path to the JSON file to import the artifact from.
Return:
(Artifact
): The imported artifact instance.
get_metrics(self) -> str
Description: Returns all metrics of the artifact as a formatted string.
Parameters: None.
Return:
(str
): A string containing all metrics of the artifact.
to_dict(self) -> Dict[str, Any]
Description: Converts the artifact instance to a dictionary representation.
Parameters: None.
Return:
(Dict[str, Any]
): The dictionary representation of the artifact.
from_dict(cls, data: Dict[str, Any]) -> \"Artifact\"
Description: Creates an artifact instance from a dictionary representation.
Parameters:
data
(Dict[str, Any]
): The dictionary to create the artifact from.
Return:
(Artifact
): The created artifact instance.
Example:
from swarms.artifacts.main_artifact import Artifact\n\n# Create an Artifact instance\nartifact = Artifact(file_path=\"example.txt\", file_type=\".txt\")\nartifact.create(\"Initial content\")\nartifact.edit(\"First edit\")\nartifact.edit(\"Second edit\")\nartifact.save()\n\n# Export to JSON\nartifact.export_to_json(\"artifact.json\")\n\n# Import from JSON\nimported_artifact = Artifact.import_from_json(\"artifact.json\")\n\n# Get metrics\nprint(artifact.get_metrics())\n
"},{"location":"swarms/framework/reference/#swarmsartifacts__init__","title":"swarms.artifacts.__init__
","text":"Description: This module serves as the initialization point for the artifacts subpackage within the Swarms framework. It imports and exposes the key classes related to artifacts, including BaseArtifact
, TextArtifact
, and Artifact
, making them available for use in other parts of the application.
Imports: - BaseArtifact
: The abstract base class for artifacts, imported from swarms.artifacts.base_artifact
.
TextArtifact
: A class representing text-based artifacts, imported from swarms.artifacts.text_artifact
.
Artifact
: A class representing file artifacts with versioning capabilities, imported from swarms.artifacts.main_artifact
.
Exported Classes: - BaseArtifact
: The base class for all artifacts.
TextArtifact
: A specialized artifact class for handling text values.
Artifact
: A class for managing file artifacts, including their content and version history.
Example:
from swarms.artifacts import *\n\n# BaseArtifact is abstract and cannot be instantiated directly:\n# base_artifact = BaseArtifact(id=\"1\", name=\"Base Artifact\", value=\"Some value\")\ntext_artifact = TextArtifact(value=\"Sample text\")\nfile_artifact = Artifact(file_path=\"example.txt\", file_type=\".txt\")\n\n# Use the classes as needed\nprint(text_artifact) # Output: Sample text\n
Note: Since BaseArtifact
is an abstract class, it cannot be instantiated directly.
swarms.agents.__init__
","text":"Description: This module serves as the initialization point for the agents subpackage within the Swarms framework. It imports and exposes key classes and functions related to agent operations, including stopping conditions and the ToolAgent
class, making them available for use in other parts of the application.
Imports: - check_cancelled
: A function to check if the operation has been cancelled.
check_complete
: A function to check if the operation is complete.
check_done
: A function to check if the operation is done.
check_end
: A function to check if the operation has ended.
check_error
: A function to check if there was an error during the operation.
check_exit
: A function to check if the operation has exited.
check_failure
: A function to check if the operation has failed.
check_finished
: A function to check if the operation has finished.
check_stopped
: A function to check if the operation has been stopped.
check_success
: A function to check if the operation was successful.
ToolAgent
: A class representing an agent that utilizes tools.
Exported Classes and Functions: - ToolAgent
: The class for managing tool-based agents.
check_done
: Checks if the operation is done.
check_finished
: Checks if the operation has finished.
check_complete
: Checks if the operation is complete.
check_success
: Checks if the operation was successful.
check_failure
: Checks if the operation has failed.
check_error
: Checks if there was an error during the operation.
check_stopped
: Checks if the operation has been stopped.
check_cancelled
: Checks if the operation has been cancelled.
check_exit
: Checks if the operation has exited.
check_end
: Checks if the operation has ended.
Example:
from swarms.agents import *\n\n# Create an instance of ToolAgent\ntool_agent = ToolAgent()\n\n# Check the status of an operation's output string\nstatus = \"The operation is <DONE>!\"\nif check_done(status):\n print(\"The operation is done.\")\n
Note: The specific implementations of the stopping condition functions and the ToolAgent
class are not detailed in this module, as they are imported from other modules within the swarms.agents
package.
swarms.agents.tool_agent
","text":"Description: This module defines the ToolAgent
class, which represents a specialized agent capable of performing tasks using a specified model and tokenizer. It is designed to run operations that require input validation against a JSON schema, generating outputs based on defined tasks.
Imports: - Any
, Optional
, Callable
: Type hints from the typing
module for flexible parameter types.
Agent
: The base class for agents, imported from swarms.structs.agent
.
Jsonformer
: A class responsible for transforming JSON data, imported from swarms.tools.json_former
.
logger
: A logging utility from swarms.utils.loguru_logger
.
ToolAgent
","text":"Description: Represents a tool agent that performs a specific task using a model and tokenizer. It facilitates the execution of tasks by calling the appropriate model or using the defined JSON schema for structured output.
Attributes: - name
(str
): The name of the tool agent.
description
(str
): A description of what the tool agent does.
model
(Any
): The model used by the tool agent for processing.
tokenizer
(Any
): The tokenizer used by the tool agent to prepare input data.
json_schema
(Any
): The JSON schema that defines the structure of the expected output.
max_number_tokens
(int
): The maximum number of tokens to generate (default is 500).
parsing_function
(Optional[Callable]
): A function for parsing the output, if provided.
llm
(Any
): A language model, if utilized instead of a custom model.
Methods:
__init__(self, name: str, description: str, model: Any, tokenizer: Any, json_schema: Any, max_number_tokens: int, parsing_function: Optional[Callable], llm: Any, *args, **kwargs) -> None
Description: Initializes a new instance of the ToolAgent class.
Parameters:
name
(str
): The name of the tool agent.
description
(str
): A description of the tool agent.
model
(Any
): The model to use (if applicable).
tokenizer
(Any
): The tokenizer to use (if applicable).
json_schema
(Any
): The JSON schema that outlines the expected output format.
max_number_tokens
(int
): Maximum token output size.
parsing_function
(Optional[Callable]
): Optional function to parse the output.
llm
(Any
): The language model to use as an alternative to a custom model.
*args
and **kwargs
: Additional arguments and keyword arguments for flexibility.
Return: None.
run(self, task: str, *args, **kwargs) -> Any
Description: Executes the tool agent for the specified task, utilizing either a model or a language model based on provided parameters.
Parameters:
task
(str
): The task or prompt to be processed by the tool agent.
*args
: Additional positional arguments for flexibility.
**kwargs
: Additional keyword arguments for flexibility.
Return:
(Any
): The output generated by the tool agent based on the input task.
Raises:
Exception
: If neither model
nor llm
is provided or if an error occurs during task execution.
Example:
from transformers import AutoModelForCausalLM, AutoTokenizer\nfrom swarms.agents.tool_agent import ToolAgent\n\n# Load model and tokenizer\nmodel = AutoModelForCausalLM.from_pretrained(\"databricks/dolly-v2-12b\")\ntokenizer = AutoTokenizer.from_pretrained(\"databricks/dolly-v2-12b\")\n\n# Define a JSON schema\njson_schema = {\n \"type\": \"object\",\n \"properties\": {\n \"name\": {\"type\": \"string\"},\n \"age\": {\"type\": \"number\"},\n \"is_student\": {\"type\": \"boolean\"},\n \"courses\": {\n \"type\": \"array\",\n \"items\": {\"type\": \"string\"}\n }\n }\n}\n\n# Create and run a ToolAgent\ntask = \"Generate a person's information based on the following schema:\"\nagent = ToolAgent(model=model, tokenizer=tokenizer, json_schema=json_schema)\ngenerated_data = agent.run(task)\n\nprint(generated_data)\n
"},{"location":"swarms/framework/reference/#swarmsagentsstopping_conditions","title":"swarms.agents.stopping_conditions
","text":"Description: This module contains a set of functions that check specific stopping conditions based on strings. These functions return boolean values indicating the presence of certain keywords, which can be used to determine the status of an operation or process.
"},{"location":"swarms/framework/reference/#functions","title":"Functions:","text":"check_done(s: str) -> bool
Description: Checks if the string contains the keyword \"<DONE>\".
Parameters:
s
(str
): The input string to check.
Return:
(bool
): True
if \"\" is found in the string; otherwise, False
.
check_finished(s: str) -> bool
Description: Checks if the string contains the keyword \"finished\".
Parameters:
s
(str
): The input string to check.
Return:
(bool
): True
if \"finished\" is found in the string; otherwise, False
.
check_complete(s: str) -> bool
Description: Checks if the string contains the keyword \"complete\".
Parameters:
s
(str
): The input string to check.
Return:
(bool
): True
if \"complete\" is found in the string; otherwise, False
.
check_success(s: str) -> bool
Description: Checks if the string contains the keyword \"success\".
Parameters:
s
(str
): The input string to check.
Return:
(bool
): True
if \"success\" is found in the string; otherwise, False
.
check_failure(s: str) -> bool
Description: Checks if the string contains the keyword \"failure\".
Parameters:
s
(str
): The input string to check.
Return:
(bool
): True
if \"failure\" is found in the string; otherwise, False
.
check_error(s: str) -> bool
Description: Checks if the string contains the keyword \"error\".
Parameters:
s
(str
): The input string to check.
Return:
(bool
): True
if \"error\" is found in the string; otherwise, False
.
check_stopped(s: str) -> bool
Description: Checks if the string contains the keyword \"stopped\".
Parameters:
s
(str
): The input string to check.
Return:
(bool
): True
if \"stopped\" is found in the string; otherwise, False
.
check_cancelled(s: str) -> bool
Description: Checks if the string contains the keyword \"cancelled\".
Parameters:
s
(str
): The input string to check.
Return:
(bool
): True
if \"cancelled\" is found in the string; otherwise, False
.
check_exit(s: str) -> bool
Description: Checks if the string contains the keyword \"exit\".
Parameters:
s
(str
): The input string to check.
Return:
(bool
): True
if \"exit\" is found in the string; otherwise, False
.
check_end(s: str) -> bool
Description: Checks if the string contains the keyword \"end\".
Parameters:
s
(str
): The input string to check.
Return:
(bool
): True
if \"end\" is found in the string; otherwise, False
.
Example:
from swarms.agents.stopping_conditions import check_done, check_error\n\nstatus_message = \"The process has finished and <DONE>!\"\n\nif check_done(status_message):\n print(\"The operation is done!\")\n\nif check_error(status_message):\n print(\"An error has occurred!\")\n
Note: Each of these functions provides a simple way to check for specific keywords in a given string, which can be helpful in managing and monitoring tasks or operations.
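As the descriptions above indicate, each checker reduces to a simple substring test. A minimal sketch of one of them (an assumption about the implementation, not the library's verbatim source):

```python
def check_finished(s: str) -> bool:
    # True if the keyword "finished" appears anywhere in the string
    return "finished" in s

print(check_finished("The task has finished."))  # True
print(check_finished("Still running..."))        # False
```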
"},{"location":"swarms/framework/reference/#schemas","title":"Schemas","text":""},{"location":"swarms/framework/reference/#swarmsschemasbase_schemas","title":"swarms.schemas.base_schemas
","text":"Description: This module defines various Pydantic models that represent schemas used in machine learning applications. These models facilitate data validation and serialization for different types of content, such as model cards, chat messages, and responses.
Imports: - uuid
: A module for generating unique identifiers.
time
: A module for time-related functions.
List
, Literal
, Optional
, Union
: Type hints from the typing
module for flexible parameter types.
BaseModel
, Field
: Tools from the pydantic
module for data validation and settings management.
ModelCard
","text":"Description: A Pydantic model that represents a model card, which provides metadata about a machine learning model.
Attributes: - id
(str
): The unique identifier for the model.
object
(str
): A fixed string indicating the type of object (\"model\").
created
(int
): The timestamp of model creation, defaults to the current time.
owned_by
(str
): The owner of the model.
root
(Optional[str]
): The root model identifier if applicable.
parent
(Optional[str]
): The parent model identifier if applicable.
permission
(Optional[list]
): A list of permissions associated with the model.
ModelList
","text":"Description: A Pydantic model that represents a list of model cards.
Attributes: - object
(str
): A fixed string indicating the type of object (\"list\").
data
(List[ModelCard]
): A list containing instances of ModelCard
.ImageUrl
","text":"Description: A Pydantic model representing an image URL.
Attributes: - url
(str
): The URL of the image.
TextContent
","text":"Description: A Pydantic model representing text content.
Attributes: - type
(Literal[\"text\"]
): A fixed string indicating the type of content (text).
text
(str
): The actual text content.ImageUrlContent
","text":"Description: A Pydantic model representing image content via URL.
Attributes: - type
(Literal[\"image_url\"]
): A fixed string indicating the type of content (image URL).
image_url
(ImageUrl
): An instance of ImageUrl
containing the URL of the image.ContentItem
","text":"Description: A type alias for a union of TextContent
and ImageUrlContent
, representing any content type that can be processed.
ChatMessageInput
","text":"Description: A Pydantic model representing an input message for chat applications.
Attributes: - role
(str
): The role of the sender (e.g., \"user\", \"assistant\", or \"system\").
content
(Union[str, List[ContentItem]]
): The content of the message, which can be a string or a list of content items.ChatMessageResponse
","text":"Description: A Pydantic model representing a response message in chat applications.
Attributes: - role
(str
): The role of the sender (e.g., \"user\", \"assistant\", or \"system\").
content
(str
, optional): The content of the response message.DeltaMessage
","text":"Description: A Pydantic model representing a delta update for messages in chat applications.
Attributes: - role
(Optional[Literal[\"user\", \"assistant\", \"system\"]]
): The role of the sender, if specified.
content
(Optional[str]
): The content of the delta message, if provided.ChatCompletionRequest
","text":"Description: A Pydantic model representing a request for chat completion.
Attributes: - model
(str
): The model to use for completing the chat (default is \"gpt-4o\").
messages
(List[ChatMessageInput]
): A list of input messages for the chat.
temperature
(Optional[float]
): Controls the randomness of the output (default is 0.8).
top_p
(Optional[float]
): An alternative to sampling with temperature (default is 0.8).
max_tokens
(Optional[int]
): The maximum number of tokens to generate (default is 4000).
stream
(Optional[bool]
): If true, the response will be streamed (default is False).
repetition_penalty
(Optional[float]
): A penalty for repeated tokens (default is 1.0).
echo
(Optional[bool]
): If true, the input will be echoed in the output (default is False).
ChatCompletionResponseChoice
","text":"Description: A Pydantic model representing a choice in a chat completion response.
Attributes: - index
(int
): The index of the choice.
input
(str
): The input message.
message
(ChatMessageResponse
): The output message.
ChatCompletionResponseStreamChoice
","text":"Description: A Pydantic model representing a choice in a streamed chat completion response.
Attributes: - index
(int
): The index of the choice.
delta
(DeltaMessage
): The delta update for the message.UsageInfo
","text":"Description: A Pydantic model representing usage information for a chat completion request.
Attributes: - prompt_tokens
(int
): The number of tokens used in the prompt (default is 0).
total_tokens
(int
): The total number of tokens used (default is 0).
completion_tokens
(Optional[int]
): The number of tokens used in the completion (default is 0).
ChatCompletionResponse
","text":"Description: A Pydantic model representing a response from a chat completion request.
Attributes: - model
(str
): The model used for the completion.
object
(Literal[\"chat.completion\", \"chat.completion.chunk\"]
): The type of response object.
choices
(List[Union[ChatCompletionResponseChoice, ChatCompletionResponseStreamChoice]]
): A list of choices from the completion.
created
(Optional[int]
): The timestamp of when the response was created.
AgentChatCompletionResponse
","text":"Description: A Pydantic model representing a completion response from an agent.
Attributes: - id
(Optional[str]
): The ID of the agent that generated the completion response (default is a new UUID).
agent_name
(Optional[str]
): The name of the agent that generated the response.
object
(Optional[Literal[\"chat.completion\", \"chat.completion.chunk\"]]
): The type of response object.
choices
(Optional[ChatCompletionResponseChoice]
): The choice from the completion response.
created
(Optional[int]
): The timestamp of when the response was created.
Example:
from swarms.schemas.base_schemas import ChatCompletionRequest, ChatMessageInput\n\n# Create a chat completion request\nrequest = ChatCompletionRequest(\n model=\"gpt-4\",\n messages=[\n ChatMessageInput(role=\"user\", content=\"Hello! How can I help you?\")\n ]\n)\n
Note: The Pydantic models in this module provide a structured way to handle data related to machine learning models and chat interactions, ensuring that the data adheres to defined schemas.
"},{"location":"swarms/framework/reference/#swarmsschemasplan","title":"swarms.schemas.plan
","text":"Description: This module defines the Plan
class, which represents a sequence of steps in a structured format. It utilizes Pydantic for data validation and configuration, ensuring that each plan consists of a list of defined steps.
Imports: - List
: A type hint from the typing
module for working with lists.
BaseModel
: The Pydantic base class for data models, providing validation and serialization features.
Step
: A model representing individual steps in the plan, imported from swarms.schemas.agent_step_schemas
.
Plan
","text":"Description: Represents a sequence of steps that comprise a plan. This class ensures that the data structure adheres to the expected model for steps.
Attributes: - steps
(List[Step]
): A list of steps, where each step is an instance of the Step
model.
Config: - orm_mode
(bool): Enables compatibility with ORM models to facilitate data loading from database objects.
Example:
from swarms.schemas.plan import Plan\nfrom swarms.schemas.agent_step_schemas import Step\n\n# Create a list of steps (initialize step attributes as needed)\nsteps = [\n Step(),\n Step(),\n]\n\n# Create a Plan instance\nplan = Plan(steps=steps)\n\n# Access the steps\nfor step in plan.steps:\n print(step)\n
Note: The Plan
class relies on the Step
model for its structure, ensuring that the steps in a plan conform to the validation rules defined in the Step
model.
swarms.schemas.__init__
","text":"Description: This module serves as the initialization point for the schemas subpackage within the Swarms framework. It imports and exposes key classes related to agent steps and agent input schemas, making them available for use in other parts of the application.
Imports: - Step
: A model representing an individual step in an agent's operation, imported from swarms.schemas.agent_step_schemas
.
ManySteps
: A model representing multiple steps, also imported from swarms.schemas.agent_step_schemas
.
AgentSchema
: A model representing the schema for agent inputs, imported from swarms.schemas.agent_input_schema
.
Exported Classes: - Step
: The class for defining individual steps in an agent's operation.
ManySteps
: The class for defining multiple steps in an agent's operation.
AgentSchema
: The class for defining the input schema for agents.
Example:
from swarms.schemas import Step, ManySteps, AgentSchema\n\n# Create an instance of Step (all fields are optional)\nstep = Step()\n\n# Create an instance of ManySteps\nmany_steps = ManySteps(steps=[step, step])\n\n# Create an instance of AgentSchema (required fields shown)\nagent_schema = AgentSchema(\n llm=\"OpenAIChat\",\n max_tokens=4096,\n context_window=8192,\n user_name=\"Human\",\n agent_name=\"test-agent\",\n system_prompt=\"Custom system prompt\",\n)\n
Note: This module acts as a central point for importing and utilizing the various schema classes defined in the Swarms framework, facilitating structured data handling for agents and their operations.
"},{"location":"swarms/framework/reference/#swarmsschemasagent_step_schemas","title":"swarms.schemas.agent_step_schemas
","text":"Description: This module defines the Step
and ManySteps
classes, which represent individual steps and collections of steps in a task, respectively. These classes utilize Pydantic for data validation and serialization, ensuring that each step adheres to the defined schema.
Imports: - time
: A module for time-related functions.
uuid
: A module for generating unique identifiers.
List
, Optional
, Any
: Type hints from the typing
module for flexible parameter types.
BaseModel
, Field
: Tools from the pydantic
module for data validation and settings management.
AgentChatCompletionResponse
: A model representing the response from an agent's chat completion, imported from swarms.schemas.base_schemas
.
get_current_time() -> str
","text":"Description: Returns the current time formatted as \"YYYY-MM-DD HH:MM:SS\".
Return: - (str
): The current time as a formatted string.
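A minimal sketch of this helper (assuming it is built on `time.strftime`, which matches the stated format):

```python
import time

def get_current_time() -> str:
    # Format the current local time as "YYYY-MM-DD HH:MM:SS"
    return time.strftime("%Y-%m-%d %H:%M:%S")

print(get_current_time())  # e.g. "2024-05-01 12:30:45"
```

The returned string is fixed-width, which keeps step timestamps sortable as plain text.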
Step
","text":"Description: A Pydantic model representing a single step in a task, including its ID, completion time, and response from an agent.
Attributes: - step_id
(Optional[str]
): The unique identifier for the step, generated if not provided.
time
(Optional[float]
): The time taken to complete the task step.
response
(Optional[AgentChatCompletionResponse]
): The response from the agent for this step.
ManySteps
","text":"Description: A Pydantic model representing a collection of steps associated with a specific agent and task.
Attributes: - agent_id
(Optional[str]
): The unique identifier for the agent.
agent_name
(Optional[str]
): The name of the agent.
task
(Optional[str]
): The name of the task being performed.
max_loops
(Optional[Any]
): The maximum number of steps in the task.
run_id
(Optional[str]
): The ID of the task this collection of steps belongs to.
steps
(Optional[List[Step]]
): A list of Step
instances representing the steps of the task.
full_history
(Optional[str]
): A string containing the full history of the task.
total_tokens
(Optional[int]
): The total number of tokens generated during the task.
stopping_token
(Optional[str]
): The token at which the task stopped.
interactive
(Optional[bool]
): Indicates whether the task is interactive.
dynamic_temperature_enabled
(Optional[bool]
): Indicates whether dynamic temperature adjustments are enabled for the task.
Example:
from swarms.schemas.agent_step_schemas import Step, ManySteps\n\n# Create a step instance\nstep = Step(step_id=\"12345\", response=AgentChatCompletionResponse(...))\n\n# Create a ManySteps instance\nmany_steps = ManySteps(\n agent_id=\"agent-1\",\n agent_name=\"Test Agent\",\n task=\"Example Task\",\n max_loops=5,\n steps=[step],\n full_history=\"Task executed successfully.\",\n total_tokens=100\n)\n\nprint(many_steps)\n
Note: The Step
and ManySteps
classes provide structured representations of task steps, ensuring that all necessary information is captured and validated according to the defined schemas.
swarms.schemas.agent_input_schema
","text":"Description: This module defines the AgentSchema
class using Pydantic, which represents the input parameters necessary for configuring an agent in the Swarms framework. It includes a variety of attributes for specifying the agent's behavior, model settings, and operational parameters.
Imports: - Any
, Callable
, Dict
, List
, Optional
: Type hints from the typing
module for flexible parameter types.
BaseModel
, Field
: Tools from the pydantic
module for data validation and settings management.
validator
: A decorator from Pydantic used for custom validation of fields.
AgentSchema
","text":"Description: Represents the configuration for an agent, including attributes that govern its behavior, capabilities, and interaction with language models. This class ensures that the input data adheres to defined validation rules.
Attributes: - llm
(Any
): The language model to use.
max_tokens
(int
): The maximum number of tokens the agent can generate, must be greater than or equal to 1.
context_window
(int
): The size of the context window, must be greater than or equal to 1.
user_name
(str
): The name of the user interacting with the agent.
agent_name
(str
): The name of the agent.
system_prompt
(str
): The system prompt provided to the agent.
template
(Optional[str]
): An optional template for the agent, default is None
.
max_loops
(Optional[int]
): The maximum number of loops the agent can perform (default is 1, must be greater than or equal to 1).
stopping_condition
(Optional[Callable[[str], bool]]
): A callable function that defines a stopping condition for the agent.
loop_interval
(Optional[int]
): The interval between loops (default is 0, must be greater than or equal to 0).
retry_attempts
(Optional[int]
): Number of times to retry an operation if it fails (default is 3, must be greater than or equal to 0).
retry_interval
(Optional[int]
): The time between retry attempts (default is 1, must be greater than or equal to 0).
return_history
(Optional[bool]
): Flag indicating whether to return the history of the agent's operations (default is False
).
stopping_token
(Optional[str]
): Token indicating when to stop processing (default is None
).
dynamic_loops
(Optional[bool]
): Indicates whether dynamic loops are enabled (default is False
).
interactive
(Optional[bool]
): Indicates whether the agent operates in an interactive mode (default is False
).
dashboard
(Optional[bool]
): Flag indicating whether a dashboard interface is enabled (default is False
).
agent_description
(Optional[str]
): A description of the agent's functionality (default is None
).
tools
(Optional[List[Callable]]
): List of callable tools the agent can use (default is None
).
dynamic_temperature_enabled
(Optional[bool]
): Indicates whether dynamic temperature adjustments are enabled (default is False
).
Additional attributes for managing various functionalities and configurations related to the agent's behavior, such as logging, saving states, and managing tools.
check_list_items_not_none(v): Ensures that items within certain list attributes (tools
, docs
, sop_list
, etc.) are not None
.
check_optional_callable_not_none(v): Ensures that optional callable attributes are either None
or callable.
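The validation logic described above can be sketched in plain Python; this is a hypothetical mirror of the checks, not the framework's actual implementation:

```python
def check_list_items_not_none(v):
    # Reject lists that contain None entries (mirrors the described validator)
    if v is not None and any(item is None for item in v):
        raise ValueError("List items must not be None")
    return v

def check_optional_callable_not_none(v):
    # Accept only None or a callable value
    if v is not None and not callable(v):
        raise ValueError("Value must be None or callable")
    return v

check_list_items_not_none(["tool_a", "tool_b"])  # passes
check_optional_callable_not_none(print)          # passes
```

In the actual schema these checks run as Pydantic validators, so invalid configurations are rejected at construction time rather than during agent execution.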
Example:
from swarms.schemas.agent_input_schema import AgentSchema\n\n# Define the agent configuration data\nagent_data = {\n \"llm\": \"OpenAIChat\",\n \"max_tokens\": 4096,\n \"context_window\": 8192,\n \"user_name\": \"Human\",\n \"agent_name\": \"test-agent\",\n \"system_prompt\": \"Custom system prompt\",\n}\n\n# Create an AgentSchema instance\nagent = AgentSchema(**agent_data)\nprint(agent)\n
Note: The AgentSchema
class provides a structured way to configure agents in the Swarms framework, ensuring that all necessary parameters are validated before use.
In modern software development, automated testing is crucial for ensuring the reliability and functionality of your code. One of the most popular testing frameworks for Python is pytest
.
This blog will provide an in-depth look at how to run tests using pytest
, including testing a single file, multiple files, every file in the test repository, and providing guidelines for contributors to run tests reliably.
pytest
is a testing framework for Python that makes it easy to write simple and scalable test cases. It supports fixtures, parameterized testing, and has a rich plugin architecture. pytest
is widely used because of its ease of use and powerful features that help streamline the testing process.
To get started with pytest
, you need to install it. You can install pytest
using pip
:
pip install pytest\n
"},{"location":"swarms/framework/test/#writing-your-first-test","title":"Writing Your First Test","text":"Before diving into running tests, let\u2019s write a simple test. Create a file named test_sample.py
with the following content:
def test_addition():\n assert 1 + 1 == 2\n\ndef test_subtraction():\n assert 2 - 1 == 1\n
In this example, we have defined two basic tests: test_addition
and test_subtraction
.
To run a single test file, you can use the pytest
command followed by the filename. For example, to run the tests in test_sample.py
, use the following command:
pytest test_sample.py\n
The output will show the test results, including the number of tests passed, failed, or skipped.
"},{"location":"swarms/framework/test/#running-multiple-test-files","title":"Running Multiple Test Files","text":"You can also run multiple test files by specifying their filenames separated by a space. For example:
pytest test_sample.py test_another_sample.py\n
If you have multiple test files in a directory, you can run all of them by specifying the directory name:
pytest tests/\n
"},{"location":"swarms/framework/test/#running-all-tests-in-the-repository","title":"Running All Tests in the Repository","text":"To run all tests in the repository, navigate to the root directory of your project and simply run:
pytest\n
pytest
will automatically discover and run all the test files that match the pattern test_*.py
or *_test.py
.
pytest
automatically discovers test files and test functions based on their naming conventions. By default, it looks for files that match the pattern test_*.py
or *_test.py
and functions or methods that start with test_
.
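For example, given a hypothetical file named test_math_utils.py, discovery works like this:

```python
# test_math_utils.py -- matches the default test_*.py file pattern

def test_square():        # discovered: function name starts with test_
    assert 3 * 3 == 9

def cube(n):              # NOT discovered: helpers without the prefix are ignored
    return n ** 3

def test_cube_uses_helper():
    assert cube(2) == 8
```

Only the two `test_`-prefixed functions run as tests; `cube` is treated as an ordinary helper.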
pytest
allows you to use markers to group tests or add metadata to them. Markers can be used to run specific subsets of tests. For example, you can mark a test as slow
and then run only the slow tests or skip them.
import pytest\n\n@pytest.mark.slow\ndef test_long_running():\n import time\n time.sleep(5)\n assert True\n\ndef test_fast():\n assert True\n
To run only the tests marked as slow
, use the -m
option:
pytest -m slow # run only tests marked slow\npytest -m \"not slow\" # or deselect (skip) the slow tests\n
"},{"location":"swarms/framework/test/#parameterized-tests","title":"Parameterized Tests","text":"pytest
supports parameterized testing, which allows you to run a test with different sets of input data. This can be done using the @pytest.mark.parametrize
decorator.
import pytest\n\n@pytest.mark.parametrize(\"a,b,expected\", [\n (1, 2, 3),\n (2, 3, 5),\n (3, 5, 8),\n])\ndef test_add(a, b, expected):\n assert a + b == expected\n
In this example, test_add
will run three times with different sets of input data.
Fixtures are a powerful feature of pytest
that allow you to set up some context for your tests. They can be used to provide a fixed baseline upon which tests can reliably and repeatedly execute.
import pytest\n\n@pytest.fixture\ndef sample_data():\n return {\"name\": \"John\", \"age\": 30}\n\ndef test_sample_data(sample_data):\n assert sample_data[\"name\"] == \"John\"\n assert sample_data[\"age\"] == 30\n
Fixtures can be used to share setup and teardown code between tests.
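A common pattern is the yield fixture: code before the yield is setup, code after it is teardown. A sketch with a hypothetical in-memory resource:

```python
import pytest

@pytest.fixture
def fake_connection():
    conn = {"open": True}      # setup: acquire the resource
    yield conn                 # the test runs here
    conn["open"] = False       # teardown: release the resource

def test_connection_is_open(fake_connection):
    assert fake_connection["open"]
```

The teardown line runs after the test finishes, even if the test fails, so the resource is always released.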
"},{"location":"swarms/framework/test/#advanced-usage","title":"Advanced Usage","text":""},{"location":"swarms/framework/test/#running-tests-in-parallel","title":"Running Tests in Parallel","text":"pytest
can run tests in parallel using the pytest-xdist
plugin. To install pytest-xdist
, run:
pip install pytest-xdist\n
To run tests in parallel, use the -n
option followed by the number of CPU cores you want to use:
pytest -n 4\n
"},{"location":"swarms/framework/test/#generating-test-reports","title":"Generating Test Reports","text":"pytest
can generate detailed test reports. You can use the --html
option to generate an HTML report:
pip install pytest-html\npytest --html=report.html\n
This command will generate a file named report.html
with a detailed report of the test results.
You can use the pytest-cov
plugin to measure code coverage. To install pytest-cov
, run:
pip install pytest-cov\n
To generate a coverage report, use the --cov
option followed by the module name:
pytest --cov=my_module\n
This command will show the coverage summary in the terminal. You can also generate an HTML report:
pytest --cov=my_module --cov-report=html\n
The coverage report will be generated in the htmlcov
directory.
For contributors and team members, it\u2019s important to run tests reliably to ensure consistent results. Here are some guidelines:
Set Up a Virtual Environment: Use a virtual environment to manage dependencies and ensure a consistent testing environment.
python -m venv venv\nsource venv/bin/activate # On Windows use `venv\\Scripts\\activate`\n
Install Dependencies: Install all required dependencies from the requirements.txt
file.
pip install -r requirements.txt\n
Run Tests Before Pushing: Ensure all tests pass before pushing code to the repository.
Use Continuous Integration (CI): Set up CI pipelines to automatically run tests on each commit or pull request.
Here is an example of a GitHub Actions workflow to run tests using pytest
:
name: Python package\n\non: [push, pull_request]\n\njobs:\n build:\n runs-on: ubuntu-latest\n\n steps:\n - uses: actions/checkout@v2\n - name: Set up Python\n uses: actions/setup-python@v2\n with:\n python-version: '3.8'\n - name: Install dependencies\n run: |\n python -m pip install --upgrade pip\n pip install -r requirements.txt\n - name: Run tests\n run: |\n pytest\n
This configuration will run the tests on every push and pull request, ensuring that your codebase remains stable.
"},{"location":"swarms/framework/test/#conclusion","title":"Conclusion","text":"pytest
is a powerful and flexible testing framework that makes it easy to write and run tests for your Python code. By following the guidelines and best practices outlined in this blog, you can ensure that your tests are reliable and your codebase is robust. Whether you are testing a single file, multiple files, or the entire repository, pytest
provides the tools you need to automate and streamline your testing process.
Happy testing!
"},{"location":"swarms/framework/vision/","title":"Vision","text":""},{"location":"swarms/framework/vision/#swarms-vision","title":"Swarms Vision","text":"Swarms is dedicated to transforming enterprise automation by offering a robust and intuitive interface for multi-agent collaboration and seamless integration with multiple models. Our mission is to enable enterprises to enhance their operational efficiency and effectiveness through intelligent automation.
"},{"location":"swarms/framework/vision/#vision-statement","title":"Vision Statement","text":"To become the preeminent framework for orchestrating multi-agent collaboration and integration, empowering enterprises to achieve exceptional automation efficiency and operational excellence.
"},{"location":"swarms/framework/vision/#core-principles","title":"Core Principles","text":"graph TD\n A[Swarms Framework] --> B[Multi-Agent Collaboration]\n A --> C[Integration with Multiple Models]\n A --> D[Enterprise Automation]\n A --> E[Open Ecosystem]\n\n B --> F[Seamless Communication]\n B --> G[Collaboration Protocols]\n\n C --> H[Model Integration]\n C --> I[Framework Compatibility]\n\n D --> J[Operational Efficiency]\n D --> K[Reliability and Scalability]\n\n E --> L[Encourage Innovation]\n E --> M[Community Driven]
"},{"location":"swarms/framework/vision/#multi-agent-collaboration","title":"Multi-Agent Collaboration","text":"graph TD\n B[Multi-Agent Collaboration] --> F[Seamless Communication]\n B --> G[Collaboration Protocols]\n\n F --> N[Cross-Agent Messaging]\n F --> O[Task Coordination]\n F --> P[Real-Time Updates]\n\n G --> Q[Standard APIs]\n G --> R[Extensible Protocols]\n G --> S[Security and Compliance]\n\n N --> T[Agent Messaging Hub]\n O --> U[Task Assignment and Monitoring]\n P --> V[Instantaneous Data Sync]\n\n Q --> W[Unified API Interface]\n R --> X[Customizable Protocols]\n S --> Y[Compliance with Standards]\n S --> Z[Secure Communication Channels]
"},{"location":"swarms/framework/vision/#integration-with-multiple-models","title":"Integration with Multiple Models","text":"graph TD\n C[Integration with Multiple Models] --> H[Model Integration]\n C --> I[Framework Compatibility]\n\n H --> R[Plug-and-Play Models]\n H --> S[Model Orchestration]\n H --> T[Model Versioning]\n\n I --> U[Support for OpenAI]\n I --> V[Support for Anthropic]\n I --> W[Support for Gemini]\n I --> X[Support for LangChain]\n I --> Y[Support for AutoGen]\n I --> Z[Support for Custom Models]\n\n R --> AA[Easy Model Integration]\n S --> AB[Dynamic Model Orchestration]\n T --> AC[Version Control]\n\n U --> AD[Integration with OpenAI Models]\n V --> AE[Integration with Anthropic Models]\n W --> AF[Integration with Gemini Models]\n X --> AG[Integration with LangChain Models]\n Y --> AH[Integration with AutoGen Models]\n Z --> AI[Support for Proprietary Models]
"},{"location":"swarms/framework/vision/#enterprise-automation","title":"Enterprise Automation","text":"graph TD\n D[Enterprise Automation] --> J[Operational Efficiency]\n D --> K[Reliability and Scalability]\n\n J --> Y[Automate Workflows]\n J --> Z[Reduce Manual Work]\n J --> AA[Increase Productivity]\n\n K --> AB[High Uptime]\n K --> AC[Enterprise-Grade Security]\n K --> AD[Scalable Solutions]\n\n Y --> AE[Workflow Automation Tools]\n Z --> AF[Eliminate Redundant Tasks]\n AA --> AG[Boost Employee Efficiency]\n\n AB --> AH[Robust Infrastructure]\n AC --> AI[Security Compliance]\n AD --> AJ[Scale with Demand]
"},{"location":"swarms/framework/vision/#open-ecosystem","title":"Open Ecosystem","text":"graph TD\n E[Open Ecosystem] --> L[Encourage Innovation]\n E --> M[Community Driven]\n\n L --> AC[Open Source Contributions]\n L --> AD[Hackathons and Workshops]\n L --> AE[Research and Development]\n\n M --> AF[Active Community Support]\n M --> AG[Collaborative Development]\n M --> AH[Shared Resources]\n\n AC --> AI[Community Contributions]\n AD --> AJ[Innovative Events]\n AE --> AK[Continuous R&D]\n\n AF --> AL[Supportive Community]\n AG --> AM[Joint Development Projects]\n AH --> AN[Shared Knowledge Base]
"},{"location":"swarms/framework/vision/#conclusion","title":"Conclusion","text":"Swarms excels in enabling seamless communication and coordination between multiple agents, fostering a collaborative environment where agents can work together to solve complex tasks. Our platform supports cross-agent messaging, task coordination, and real-time updates, ensuring that all agents are synchronized and can efficiently contribute to the collective goal.
Swarms provides robust integration capabilities with a wide array of models, including OpenAI, Anthropic, Gemini, LangChain, AutoGen, and custom models. This ensures that enterprises can leverage the best models available to meet their specific needs, while also allowing for dynamic model orchestration and version control to keep operations up-to-date and effective.
Our framework is designed to enhance operational efficiency through automation. By automating workflows, reducing manual work, and increasing productivity, Swarms helps enterprises achieve higher efficiency and operational excellence. Our solutions are built for high uptime, enterprise-grade security, and scalability, ensuring reliable and secure operations.
Swarms promotes an open and extensible ecosystem, encouraging community-driven innovation and development. We support open-source contributions, organize hackathons and workshops, and continuously invest in research and development. Our active community fosters collaborative development, shared resources, and a supportive environment for innovation.
Swarms is dedicated to providing a comprehensive and powerful framework for enterprises seeking to automate operations through multi-agent collaboration and integration with various models. Our commitment to an open ecosystem, enterprise-grade automation solutions, and seamless multi-agent collaboration ensures that Swarms remains the leading choice for enterprises aiming to achieve operational excellence through intelligent automation.
"},{"location":"swarms/install/docker_setup/","title":"Docker Setup Guide for Contributors to Swarms","text":"Welcome to the swarms
project Docker setup guide. This document will help you establish a Docker-based environment for contributing to swarms
. Docker provides a consistent and isolated environment, ensuring that all contributors can work in the same settings, reducing the \"it works on my machine\" syndrome.
The purpose of this guide is to:
This guide covers:
swarms
repositoryswarms
application in a Docker containersudo apt-get update
.sudo apt-get install docker-ce docker-ce-cli containerd.io\n
sudo docker run hello-world\n
"},{"location":"swarms/install/docker_setup/#post-installation-steps-for-linux","title":"Post-installation Steps for Linux","text":"git clone https://github.com/your-username/swarms.git\ncd swarms\n
"},{"location":"swarms/install/docker_setup/#docker-basics","title":"Docker Basics","text":""},{"location":"swarms/install/docker_setup/#dockerfile-overview","title":"Dockerfile Overview","text":"swarms
project.docker build -t swarms-dev .\n
"},{"location":"swarms/install/docker_setup/#running-a-container","title":"Running a Container","text":"docker run -it --rm swarms-dev\n
"},{"location":"swarms/install/docker_setup/#development-workflow-with-docker","title":"Development Workflow with Docker","text":""},{"location":"swarms/install/docker_setup/#running-the-application","title":"Running the Application","text":"swarms
application within Docker.pytest
within the Docker environment.docker-compose.yml
file for the swarms
project.Creating a Dockerfile for deploying the swarms
framework to the cloud involves setting up the necessary environment to run your Python application, ensuring all dependencies are installed, and configuring the container to execute the desired tasks. Here's an example Dockerfile that sets up such an environment:
# Use an official Python runtime as a parent image\nFROM python:3.11-slim\n\n# Set environment variables\nENV PYTHONDONTWRITEBYTECODE 1\nENV PYTHONUNBUFFERED 1\n\n# Set the working directory in the container\nWORKDIR /usr/src/swarm_cloud\n\n# Install system dependencies\nRUN apt-get update \\\n && apt-get -y install gcc \\\n && apt-get clean\n\n# Install Python dependencies\n# COPY requirements.txt and pyproject.toml if you're using poetry for dependency management\nCOPY requirements.txt .\nRUN pip install --upgrade pip\nRUN pip install --no-cache-dir -r requirements.txt\n\n# Install the 'swarms' package, assuming it's available on PyPI\nENV SWARM_API_KEY=your_swarm_api_key_here\nENV OPENAI_API_KEY=your_openai_key\nRUN pip install swarms\n\n# Copy the rest of the application\nCOPY . .\n\n# Add entrypoint script if needed\n# COPY ./entrypoint.sh .\n# RUN chmod +x /usr/src/swarm_cloud/entrypoint.sh\n\n# Expose port if your application has a web interface\n# EXPOSE 5000\n\n# Define environment variable for the swarm to work\n# Add Docker CMD or ENTRYPOINT script to run the application\n# CMD python your_swarm_startup_script.py\n# Or use the entrypoint script if you have one\n# ENTRYPOINT [\"/usr/src/swarm_cloud/entrypoint.sh\"]\n\n# If you're using `CMD` to execute a Python script, make sure it's executable\n# RUN chmod +x your_swarm_startup_script.py\n
To build and run this Docker image:
requirements.txt
with your actual requirements file or pyproject.toml
and poetry.lock
if you're using Poetry.your_swarm_startup_script.py
with the script that starts your application.COPY
and RUN
lines for entrypoint.sh
.EXPOSE
line and set it to the correct port.Now, build your Docker image:
docker build -t swarm-cloud .\n
And run it:
docker run -d --name my-swarm-app swarm-cloud\n
For deploying to the cloud, you'll need to push your Docker image to a container registry (like Docker Hub or a private registry), then pull it from your cloud environment to run it. Cloud providers often have services specifically for this purpose (like AWS ECS, GCP GKE, or Azure AKS). The deployment process will involve:
Remember to secure sensitive data, use tagged releases for your images, and follow best practices for operating in the cloud.
"},{"location":"swarms/install/env/","title":"Environment Variables","text":""},{"location":"swarms/install/env/#overview","title":"Overview","text":"Swarms uses environment variables for configuration management and secure credential storage. This approach keeps sensitive information like API keys out of your code and allows for easy configuration changes across different environments.
"},{"location":"swarms/install/env/#core-environment-variables","title":"Core Environment Variables","text":""},{"location":"swarms/install/env/#framework-configuration","title":"Framework Configuration","text":"Configuration Variables Variable Description ExampleSWARMS_VERBOSE_GLOBAL
Controls global logging verbosity True
or False
WORKSPACE_DIR
Defines the workspace directory for agent operations agent_workspace
"},{"location":"swarms/install/env/#llm-provider-api-keys","title":"LLM Provider API Keys","text":"OpenAIAnthropicGroqGoogleHugging FacePerplexity AIAI21CohereMistral AITogether AI OPENAI_API_KEY=\"your-openai-key\"\n
ANTHROPIC_API_KEY=\"your-anthropic-key\"\n
GROQ_API_KEY=\"your-groq-key\"\n
GEMINI_API_KEY=\"your-gemini-key\"\n
HUGGINGFACE_TOKEN=\"your-huggingface-token\"\n
PPLX_API_KEY=\"your-perplexity-key\"\n
AI21_API_KEY=\"your-ai21-key\"\n
COHERE_API_KEY=\"your-cohere-key\"\n
MISTRAL_API_KEY=\"your-mistral-key\"\n
TOGETHER_API_KEY=\"your-together-key\"\n
"},{"location":"swarms/install/env/#tool-provider-keys","title":"Tool Provider Keys","text":"Search ToolsAnalytics & MonitoringBrowser Automation BING_BROWSER_API=\"your-bing-key\"\nBRAVESEARCH_API_KEY=\"your-brave-key\"\nTAVILY_API_KEY=\"your-tavily-key\"\nYOU_API_KEY=\"your-you-key\"\n
EXA_API_KEY=\"your-exa-key\"\n
MULTION_API_KEY=\"your-multion-key\"\n
"},{"location":"swarms/install/env/#security-best-practices","title":"Security Best Practices","text":""},{"location":"swarms/install/env/#environment-file-management","title":"Environment File Management","text":".env
file in your project root.env
files to version control.env
to your .gitignore
: echo \".env\" >> .gitignore\n
Important Security Considerations
Create a .env.example
template without actual values:
# Required Configuration\nOPENAI_API_KEY=\"\"\nANTHROPIC_API_KEY=\"\"\nGROQ_API_KEY=\"\"\nWORKSPACE_DIR=\"agent_workspace\"\n\n# Optional Configuration\nSWARMS_VERBOSE_GLOBAL=\"False\"\n
"},{"location":"swarms/install/env/#loading-environment-variables","title":"Loading Environment Variables","text":"from dotenv import load_dotenv\nimport os\n\n# Load environment variables\nload_dotenv()\n\n# Access variables\nworkspace_dir = os.getenv(\"WORKSPACE_DIR\")\nopenai_key = os.getenv(\"OPENAI_API_KEY\")\n
"},{"location":"swarms/install/env/#environment-setup-guide","title":"Environment Setup Guide","text":"1. Install Dependencies2. Create Environment File3. Configure Variables4. Verify Setup pip install python-dotenv\n
cp .env.example .env\n
.env
in your text editorimport os\nfrom dotenv import load_dotenv\n\nload_dotenv()\nassert os.getenv(\"OPENAI_API_KEY\") is not None, \"OpenAI API key not found\"\n
"},{"location":"swarms/install/env/#environment-specific-configuration","title":"Environment-Specific Configuration","text":"DevelopmentProductionTesting WORKSPACE_DIR=\"agent_workspace\"\nSWARMS_VERBOSE_GLOBAL=\"True\"\n
WORKSPACE_DIR=\"/var/swarms/workspace\"\nSWARMS_VERBOSE_GLOBAL=\"False\"\n
WORKSPACE_DIR=\"test_workspace\"\nSWARMS_VERBOSE_GLOBAL=\"True\"\n
"},{"location":"swarms/install/env/#troubleshooting","title":"Troubleshooting","text":""},{"location":"swarms/install/env/#common-issues","title":"Common Issues","text":"Environment Variables Not Loading .env
file exists in project rootload_dotenv()
is called before accessing variablesYou can install swarms
with pip in a Python>=3.10 environment.
Before you begin, ensure you have the following installed:
pip >= 21.0
UV is a fast Python package installer and resolver written in Rust. It's significantly faster than pip and provides better dependency resolution.
Basic InstallationDevelopment Installation# Install UV first\ncurl -LsSf https://astral.sh/uv/install.sh | sh\n\n# Install swarms using UV\nuv pip install swarms\n
# Clone the repository\ngit clone https://github.com/kyegomez/swarms.git\ncd swarms\n\n# Install in editable mode\nuv pip install -e .\n
For desktop installation with extras:
uv pip install -e .[desktop]\n
Using virtualenvUsing AnacondaUsing Poetry Clone the repository and navigate to the root directory:
git clone https://github.com/kyegomez/swarms.git\ncd swarms\n
Setup Python environment and activate it:
python3 -m venv venv\nsource venv/bin/activate\npip install --upgrade pip\n
Install Swarms:
Headless install:
pip install -e .\n
Desktop install:
pip install -e .[desktop]\n
Create and activate an Anaconda environment:
conda create -n swarms python=3.10\nconda activate swarms\n
Clone the repository and navigate to the root directory:
git clone https://github.com/kyegomez/swarms.git\ncd swarms\n
Install Swarms:
Headless install:
pip install -e .\n
Desktop install:
pip install -e .[desktop]\n
Clone the repository and navigate to the root directory:
git clone https://github.com/kyegomez/swarms.git\ncd swarms\n
Setup Python environment and activate it:
poetry env use python3.10\npoetry shell\n
Install Swarms:
Headless install:
poetry install\n
Desktop install:
poetry install --extras \"desktop\"\n
Docker is an excellent option for creating isolated and reproducible environments, suitable for both development and production. Contact us if there are any issues with the docker setup
Pull the Docker image:
docker pull swarmscorp/swarms:tagname\n
Run the Docker container:
docker run -it --rm swarmscorp/swarms:tagname\n
Build and run a custom Docker image:
# Use Python 3.11 instead of 3.13\nFROM python:3.11-slim\n\n# Set environment variables\nENV PYTHONDONTWRITEBYTECODE=1 \\\n PYTHONUNBUFFERED=1 \\\n WORKSPACE_DIR=\"agent_workspace\" \\\n OPENAI_API_KEY=\"your_swarm_api_key_here\"\n\n# Set the working directory\nWORKDIR /usr/src/swarms\n\n# Install system dependencies\nRUN apt-get update && apt-get install -y \\\n build-essential \\\n gcc \\\n g++ \\\n gfortran \\\n && rm -rf /var/lib/apt/lists/*\n\n# Install swarms package\nRUN pip3 install -U swarm-models\nRUN pip3 install -U swarms\n\n# Copy the application\nCOPY . .\n
Kubernetes provides an automated way to deploy, scale, and manage containerized applications.
Create a Deployment YAML file:
apiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: swarms-deployment\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: swarms\n template:\n metadata:\n labels:\n app: swarms\n spec:\n containers:\n - name: swarms\n image: kyegomez/swarms\n ports:\n - containerPort: 8080\n
Apply the Deployment:
kubectl apply -f deployment.yaml\n
Expose the Deployment:
kubectl expose deployment swarms-deployment --type=LoadBalancer --name=swarms-service\n
Integrating Swarms into your CI/CD pipeline ensures automated testing and deployment.
"},{"location":"swarms/install/install/#headless-installation","title":"Headless Installation","text":"The headless installation of swarms
is designed for environments where graphical user interfaces (GUI) are not needed, making it more lightweight and suitable for server-side applications.
pip install swarms\n
"},{"location":"swarms/install/install/#using-github-actions","title":"Using GitHub Actions","text":"# .github/workflows/ci.yml\nname: CI\n\non:\n push:\n branches: [ main ]\n pull_request:\n branches: [ main ]\n\njobs:\n build:\n\n runs-on: ubuntu-latest\n\n steps:\n - uses: actions/checkout@v2\n - name: Set up Python\n uses: actions/setup-python@v2\n with:\n python-version: \"3.10\"\n - name: Install dependencies\n run: |\n python -m venv venv\n source venv/bin/activate\n pip install --upgrade pip\n pip install -e .\n - name: Run tests\n run: |\n source venv/bin/activate\n pytest\n
"},{"location":"swarms/install/install/#using-jenkins","title":"Using Jenkins","text":"pipeline {\n agent any\n\n stages {\n stage('Clone repository') {\n steps {\n git 'https://github.com/kyegomez/swarms.git'\n }\n }\n stage('Setup Python') {\n steps {\n sh 'python3 -m venv venv'\n sh 'source venv/bin/activate && pip install --upgrade pip'\n }\n }\n stage('Install dependencies') {\n steps {\n sh 'source venv/bin/activate && pip install -e .'\n }\n }\n stage('Run tests') {\n steps {\n sh 'source venv/bin/activate && pytest'\n }\n }\n }\n}\n
"},{"location":"swarms/install/install/#rust","title":"Rust","text":"Install with Cargo to get started with the Rust implementation of Swarms. See the docs here
cargo add swarms-rs\n
"},{"location":"swarms/install/quickstart/","title":"Quickstart","text":""},{"location":"swarms/install/quickstart/#quickstart","title":"Quickstart","text":"Swarms is an enterprise-grade, production-ready multi-agent collaboration framework that enables you to orchestrate agents to work collaboratively at scale to automate real-world activities. Follow this quickstart guide to get up and running with Swarms, including setting up your environment, building an agent, and leveraging multi-agent methods.
"},{"location":"swarms/install/quickstart/#requirements","title":"Requirements","text":".env
file with API keys from your providers like OPENAI_API_KEY
, ANTHROPIC_API_KEY
WORKSPACE_DIR=\"agent_workspace\"\n
To install Swarms, run:
$ pip install -U swarms\n
"},{"location":"swarms/install/quickstart/#usage-example-single-agent","title":"Usage Example: Single Agent","text":"Here's a simple example of creating a financial analysis agent powered by OpenAI's GPT-4o-mini model. This agent will analyze financial queries like how to set up a Roth IRA.
from swarms.structs.agent import Agent\n\n# Initialize the agent with GPT-4o-mini model\nagent = Agent(\n agent_name=\"Financial-Analysis-Agent\",\n system_prompt=\"Analyze financial situations and provide advice...\",\n max_loops=1,\n autosave=True,\n dashboard=False,\n verbose=True,\n saved_state_path=\"finance_agent.json\",\n model_name=\"gpt-4o-mini\",\n)\n\n# Run your query\nout = agent.run(\n \"How can I establish a ROTH IRA to buy stocks and get a tax break? What are the criteria?\"\n)\nprint(out)\n
"},{"location":"swarms/install/quickstart/#agent-class","title":"Agent Class","text":"agent_name
: Name of the agent.system_prompt
: System-level instruction guiding the agent's behavior.model_name
: Name of the model to use (e.g., \"gpt-4o-mini\").max_loops
: Max iterations for a task.autosave
: Auto-saves the state after each iteration.
Methods:
run(task: str)
: Executes the agent's task.ingest_docs(doc_path: str)
: Ingests documents into the agent's knowledge base.filtered_run(task: str)
: Runs agent with a filtered system prompt.The create_agents_from_yaml
function works by reading agent configurations from a YAML file. Below is an example of what your YAML file (agents_config.yaml
) should look like. Example YAML Configuration (agents_config.yaml
):
agents:\n - agent_name: \"Financial-Analysis-Agent\"\n system_prompt: \"You are a financial analysis expert. Analyze market trends and provide investment recommendations.\"\n model_name: \"claude-3-opus-20240229\"\n max_loops: 1\n autosave: false\n dashboard: false\n verbose: false\n dynamic_temperature_enabled: false\n user_name: \"swarms_corp\"\n retry_attempts: 1\n context_length: 200000\n return_step_meta: false\n output_type: \"str\"\n temperature: 0.1\n max_tokens: 2000\n task: \"Analyze tech stocks for 2024 investment strategy. Provide detailed analysis and recommendations.\"\n\n - agent_name: \"Risk-Analysis-Agent\"\n system_prompt: \"You are a risk analysis expert. Evaluate investment risks and provide mitigation strategies.\"\n model_name: \"claude-3-opus-20240229\"\n max_loops: 1\n autosave: false\n dashboard: false\n verbose: false\n dynamic_temperature_enabled: false\n user_name: \"swarms_corp\"\n retry_attempts: 1\n context_length: 150000\n return_step_meta: false\n output_type: \"str\"\n temperature: 0.1\n max_tokens: 2000\n task: \"Conduct a comprehensive risk analysis of the top 5 tech companies in 2024. Include risk factors and mitigation strategies.\"\n\nswarm_architecture:\n name: \"Financial Analysis Swarm\"\n description: \"A swarm for comprehensive financial and risk analysis\"\n max_loops: 1\n swarm_type: \"SequentialWorkflow\"\n task: \"Analyze tech stocks and their associated risks for 2024 investment strategy\"\n autosave: false\n return_json: true\n
"},{"location":"swarms/install/quickstart/#key-configuration-fields","title":"Key Configuration Fields:","text":"Now, create the main Python script that will use the create_agents_from_yaml
function.
main.py
:","text":"from swarms.agents.create_agents_from_yaml import create_agents_from_yaml\n\n# Create agents and get task results\ntask_results = create_agents_from_yaml(\n yaml_file=\"agents_config.yaml\",\n return_type=\"run_swarm\"\n)\n\nprint(task_results)\n
"},{"location":"swarms/install/quickstart/#example-run","title":"Example Run:","text":"python main.py\n
This will: 1. Load agent configurations from agents_config.yaml
. 2. Create the agents specified in the YAML file. 3. Run the tasks provided for each agent. 4. Output the task results to the console.
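Conceptually, these steps reduce to a small factory loop. The sketch below is illustrative only: a plain dict stands in for the parsed YAML file, and a stub class stands in for the Swarms `Agent` (the real `create_agents_from_yaml` parses the file with PyYAML and builds actual agents).

```python
# Hypothetical sketch of what a YAML-driven agent factory does.
# A plain dict stands in for the parsed YAML; StubAgent stands in
# for swarms.Agent.

class StubAgent:
    """Stand-in for swarms.Agent: simply echoes its task."""
    def __init__(self, agent_name, system_prompt, **kwargs):
        self.agent_name = agent_name
        self.system_prompt = system_prompt

    def run(self, task):
        return f"{self.agent_name} completed: {task}"

def create_agents_from_config(config):
    """Load each agent entry, instantiate it, and run its task."""
    results = []
    for entry in config["agents"]:
        entry = dict(entry)            # avoid mutating the caller's config
        task = entry.pop("task", None)
        agent = StubAgent(**entry)
        if task is not None:
            results.append({
                "agent_name": agent.agent_name,
                "task": task,
                "output": agent.run(task),
            })
    return results

config = {
    "agents": [
        {"agent_name": "Financial-Analysis-Agent",
         "system_prompt": "You are a financial analysis expert.",
         "task": "Analyze tech stocks for 2024."},
        {"agent_name": "Risk-Analysis-Agent",
         "system_prompt": "You are a risk analysis expert.",
         "task": "Assess risks for the top tech companies."},
    ]
}

for result in create_agents_from_config(config):
    print(result["agent_name"], "->", result["output"])
```

The real loader additionally honors per-agent settings such as `max_loops` and `temperature`; they are absorbed by `**kwargs` here.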
The create_agents_from_yaml
function supports multiple return types. You can control what is returned by setting the return_type
parameter to \"agents\"
, \"tasks\"
, or \"both\"
.
return_type=\"agents\"
:agents = create_agents_from_yaml(yaml_file, return_type=\"agents\")\nfor agent in agents:\n print(f\"Agent {agent.agent_name} created.\")\n
return_type=\"tasks\"
:task_results = create_agents_from_yaml(yaml_file, return_type=\"tasks\")\nfor result in task_results:\n print(f\"Agent {result['agent_name']} executed task '{result['task']}' with output: {result['output']}\")\n
return_type=\"both\"
:agents, task_results = create_agents_from_yaml(yaml_file, return_type=\"both\")\n# Process agents and tasks separately\n
"},{"location":"swarms/install/quickstart/#step-4-yaml-structure-for-multiple-agents","title":"Step 4: YAML Structure for Multiple Agents","text":"The YAML file can define any number of agents, each with its own unique configuration. You can scale this setup by adding more agents and tasks to the agents
list within the YAML file.
agents:\n - agent_name: \"Agent1\"\n # Agent1 config...\n\n - agent_name: \"Agent2\"\n # Agent2 config...\n\n - agent_name: \"Agent3\"\n # Agent3 config...\n
Each agent will be initialized according to its configuration, and tasks (if provided) will be executed automatically.
"},{"location":"swarms/install/quickstart/#integrating-external-agents","title":"Integrating External Agents","text":"Integrating external agents from other agent frameworks is easy with Swarms.
Steps:
Agent
.run(task: str) -> str
method that runs the agent and returns the response. For example, here's how to create an agent from Griptape.
Here's how you can create a custom Griptape agent that integrates with the Swarms framework by inheriting from the Agent
class in Swarms and overriding the run(task: str) -> str
method.
from swarms import (\n Agent as SwarmsAgent,\n) # Import the base Agent class from Swarms\nfrom griptape.structures import Agent as GriptapeAgent\nfrom griptape.tools import (\n WebScraperTool,\n FileManagerTool,\n PromptSummaryTool,\n)\n\n\n# Create a custom agent class that inherits from SwarmsAgent\nclass GriptapeSwarmsAgent(SwarmsAgent):\n def __init__(self, *args, **kwargs):\n # Initialize the Griptape agent with its tools\n self.agent = GriptapeAgent(\n input=\"Load {{ args[0] }}, summarize it, and store it in a file called {{ args[1] }}.\",\n tools=[\n WebScraperTool(off_prompt=True),\n PromptSummaryTool(off_prompt=True),\n FileManagerTool(),\n ],\n *args,\n **kwargs,\n # Add additional settings\n )\n\n # Override the run method to take a task and execute it using the Griptape agent\n def run(self, task: str) -> str:\n # Extract URL and filename from task (you can modify this parsing based on task structure)\n url, filename = task.split(\n \",\"\n ) # Example of splitting task string\n # Execute the Griptape agent with the task inputs\n result = self.agent.run(url.strip(), filename.strip())\n # Return the final result as a string\n return str(result)\n\n\n# Example usage:\ngriptape_swarms_agent = GriptapeSwarmsAgent()\noutput = griptape_swarms_agent.run(\n \"https://griptape.ai, griptape.txt\"\n)\nprint(output)\n
"},{"location":"swarms/install/quickstart/#key-components","title":"Key Components:","text":"SwarmsAgent
class and integrates the Griptape agent.WebScraperTool
, PromptSummaryTool
, FileManagerTool
) allow for web scraping, summarization, and file management.You can now easily plug this custom Griptape agent into the Swarms Framework and use it to run tasks!
"},{"location":"swarms/install/quickstart/#overview-of-swarm-architectures-in-the-swarms-framework","title":"Overview of Swarm Architectures in the Swarms Framework","text":""},{"location":"swarms/install/quickstart/#1-sequential-workflow","title":"1. Sequential Workflow","text":"Overview: The SequentialWorkflow
enables tasks to be executed one after the other. Each agent processes its task and passes the output to the next agent in the sequence.
graph TD;\n A[Task Input] --> B[Blog Generator Agent];\n B --> C[Summarizer Agent];\n C --> D[Task Output];
"},{"location":"swarms/install/quickstart/#code-example","title":"Code Example:","text":"from swarms import Agent, SequentialWorkflow\n\n# Initialize agents without importing a specific LLM class\nagent1 = Agent(\n agent_name=\"Blog generator\",\n system_prompt=\"Generate a blog post\",\n model_name=\"claude-3-sonnet-20240229\",\n max_loops=1\n)\nagent2 = Agent(\n agent_name=\"Summarizer\",\n system_prompt=\"Summarize the blog post\",\n model_name=\"claude-3-sonnet-20240229\",\n max_loops=1\n)\n\n# Create Sequential workflow\nworkflow = SequentialWorkflow(agents=[agent1, agent2], max_loops=1)\n\n# Run workflow\noutput = workflow.run(\"Generate a blog post on how swarms of agents can help businesses grow.\")\nprint(output)\n
"},{"location":"swarms/install/quickstart/#2-agent-rearrange","title":"2. Agent Rearrange","text":"Overview: AgentRearrange
allows the orchestration of agents in both sequential and parallel configurations. The user can define a flexible flow of tasks between agents.
graph TD;\n A[Director Agent] --> B[Worker 1 Agent];\n A --> C[Worker 2 Agent];\n B --> D[Task Completed];\n C --> D[Task Completed];
"},{"location":"swarms/install/quickstart/#code-example_1","title":"Code Example:","text":"from swarms import Agent, AgentRearrange\n\n# Initialize agents using model_name (no explicit LLM import)\ndirector = Agent(\n agent_name=\"Director\",\n system_prompt=\"Directs tasks\",\n model_name=\"claude-3-sonnet-20240229\",\n max_loops=1\n)\nworker1 = Agent(\n agent_name=\"Worker1\",\n system_prompt=\"Generate a transcript\",\n model_name=\"claude-3-sonnet-20240229\",\n max_loops=1\n)\nworker2 = Agent(\n agent_name=\"Worker2\",\n system_prompt=\"Summarize the transcript\",\n model_name=\"claude-3-sonnet-20240229\",\n max_loops=1\n)\n\n# Define the flow and create the rearranged system\nflow = \"Director -> Worker1 -> Worker2\"\nagent_system = AgentRearrange(agents=[director, worker1, worker2], flow=flow)\n\n# Run it\noutput = agent_system.run(\"Create a YouTube transcript and summary\")\nprint(output)\n
"},{"location":"swarms/install/quickstart/#4-mixture-of-agents","title":"4. Mixture of Agents","text":"Overview: MixtureOfAgents
is a parallelized architecture where agents perform tasks concurrently and then feed their results back into a loop for final aggregation. This is useful for highly parallelizable tasks.
graph TD;\n A[Director Agent] --> B[Accountant 1];\n A --> C[Accountant 2];\n B --> D[Final Aggregation];\n C --> D[Final Aggregation];
"},{"location":"swarms/install/quickstart/#code-example_2","title":"Code Example:","text":"from swarms import Agent, MixtureOfAgents\n\n# Initialize agents using model_name (no explicit LLM import)\ndirector = Agent(agent_name=\"Director\", system_prompt=\"Directs tasks\", model_name=\"gpt-4o-mini\", max_loops=1)\naccountant1 = Agent(agent_name=\"Accountant1\", system_prompt=\"Prepare financial statements\", model_name=\"gpt-4o-mini\", max_loops=1)\naccountant2 = Agent(agent_name=\"Accountant2\", system_prompt=\"Audit financial records\", model_name=\"gpt-4o-mini\", max_loops=1)\n\n# Create Mixture of Agents swarm\nswarm = MixtureOfAgents(name=\"Mixture of Accountants\", agents=[director, accountant1, accountant2], layers=3, final_agent=director)\n\n# Run the swarm\noutput = swarm.run(\"Prepare financial statements and audit financial records\")\nprint(output)\n
"},{"location":"swarms/install/quickstart/#5-spreadsheet-swarm","title":"5. Spreadsheet Swarm","text":"Overview: SpreadSheetSwarm
enables the management of thousands of agents simultaneously, where each agent operates on its own thread. It's ideal for overseeing large-scale agent outputs.
graph TD;\n A[Spreadsheet Swarm] --> B[Twitter Agent];\n A --> C[Instagram Agent];\n A --> D[Facebook Agent];\n A --> E[LinkedIn Agent];\n A --> F[Email Agent];
"},{"location":"swarms/install/quickstart/#code-example_3","title":"Code Example:","text":"from swarms import Agent\nfrom swarms.structs.spreadsheet_swarm import SpreadSheetSwarm\n\n# Initialize agents for different marketing platforms using model_name\nagents = [\n Agent(\n agent_name=\"Twitter Agent\",\n system_prompt=\"Create a tweet\",\n model_name=\"gpt-4o-mini\",\n max_loops=1\n ),\n Agent(\n agent_name=\"Instagram Agent\",\n system_prompt=\"Create an Instagram post\",\n model_name=\"gpt-4o-mini\",\n max_loops=1\n ),\n Agent(\n agent_name=\"Facebook Agent\",\n system_prompt=\"Create a Facebook post\",\n model_name=\"gpt-4o-mini\",\n max_loops=1\n ),\n Agent(\n agent_name=\"LinkedIn Agent\",\n system_prompt=\"Create a LinkedIn post\",\n model_name=\"gpt-4o-mini\",\n max_loops=1\n ),\n Agent(\n agent_name=\"Email Agent\",\n system_prompt=\"Write a marketing email\",\n model_name=\"gpt-4o-mini\",\n max_loops=1\n ),\n]\n\n# Create the Spreadsheet Swarm\nswarm = SpreadSheetSwarm(\n agents=agents,\n save_file_path=\"real_estate_marketing_spreadsheet.csv\",\n run_all_agents=False,\n max_loops=2\n)\n\n# Run the swarm\nswarm.run(\"Create posts to promote luxury properties in North Texas.\")\n
These are the key swarm architectures available in the Swarms Framework. Each one is designed to solve different types of multi-agent orchestration problems, from sequential tasks to large-scale parallel processing.
"},{"location":"swarms/install/quickstart/#overview-of-swarm-architectures","title":"Overview of Swarm Architectures","text":""},{"location":"swarms/install/quickstart/#workflow-classes","title":"Workflow Classes","text":"SequentialWorkflow: Chains agents, where one agent's output becomes the next agent's input.
AgentRearrange:
Implements top-down control, where a boss agent coordinates tasks among sub-agents.
Spreadsheet Swarm: Manages thousands of agents concurrently, each running on its own thread, for large-scale parallel output.
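The sequential chaining pattern described above, where each agent's output becomes the next agent's input, reduces to a simple fold. This sketch uses plain callables in place of Swarms agents:

```python
from functools import reduce

# Plain callables stand in for Swarms agents: each takes the previous
# output as its input, mirroring SequentialWorkflow's hand-off.
def blog_generator(task):
    return f"BLOG({task})"

def summarizer(text):
    return f"SUMMARY({text})"

def run_sequential(agents, task):
    # Fold the task through the chain: output_n = agent_n(output_{n-1}).
    return reduce(lambda output, agent: agent(output), agents, task)

print(run_sequential([blog_generator, summarizer], "growth tips"))
# SUMMARY(BLOG(growth tips))
```

AgentRearrange generalizes this fold by letting a flow string route outputs to several downstream agents instead of exactly one.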
This guide details the environment variables used in the Swarms framework for configuration and customization of your agent-based applications.
"},{"location":"swarms/install/workspace_manager/#configuration-setup","title":"Configuration Setup","text":"Create a .env
file in your project's root directory to configure the Swarms framework. This file will contain all necessary environment variables for customizing your agent's behavior, logging, and analytics.
WORKSPACE_DIR
","text":"./workspace
WORKSPACE_DIR=/path/to/your/workspace\n
SWARMS_AUTOUPDATE_ON
","text":"false
SWARMS_AUTOUPDATE_ON=true\n
false
if you need version stabilitytrue
for development environmentsUSE_TELEMETRY
","text":"false
USE_TELEMETRY=true\n
SWARMS_API_KEY
","text":"SWARMS_API_KEY=your_api_key_here\n
Create a new .env
file:
touch .env\n
Add your configuration:
# Basic configuration\nWORKSPACE_DIR=./my_workspace\n\n# Enable auto-updates\nSWARMS_AUTOUPDATE_ON=true\n\n# Enable telemetry\nUSE_TELEMETRY=true\n\n# Add your Swarms API key\nSWARMS_API_KEY=your_api_key_here\n
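Swarms reads these settings from the process environment. As an illustration of how a .env file like the one above gets loaded, here is a minimal stdlib-only sketch; real projects typically use a library such as python-dotenv instead.

```python
import os

def load_env(path=".env"):
    """Minimal .env loader: KEY=VALUE lines; blank lines and '#'
    comments are skipped. Illustrative only, not the Swarms loader."""
    if not os.path.exists(path):
        return
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # Existing environment variables win over .env values.
            os.environ.setdefault(key.strip(), value.strip().strip('"'))

load_env()
print(os.environ.get("WORKSPACE_DIR", "./workspace"))  # falls back to the default
```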
Obtain your API key:
Never commit your .env file to version control. Add .env to your .gitignore file. Keep your API keys secure and rotate them periodically
Workspace Organization:
Monitor workspace size to prevent disk space issues
Telemetry Management:
Review collected data periodically
Auto-Update Management:
WORKSPACE_DIR=./dev_workspace\nSWARMS_AUTOUPDATE_ON=true\nUSE_TELEMETRY=true\nSWARMS_API_KEY=sk_test_xxxxxxxxxxxx\n
"},{"location":"swarms/install/workspace_manager/#production-setup","title":"Production Setup","text":"WORKSPACE_DIR=/var/log/swarms/prod_workspace\nSWARMS_AUTOUPDATE_ON=false\nUSE_TELEMETRY=true\nSWARMS_API_KEY=sk_prod_xxxxxxxxxxxx\n
"},{"location":"swarms/install/workspace_manager/#testing-environment","title":"Testing Environment","text":"WORKSPACE_DIR=./test_workspace\nSWARMS_AUTOUPDATE_ON=true\nUSE_TELEMETRY=false\nSWARMS_API_KEY=sk_test_xxxxxxxxxxxx\n
"},{"location":"swarms/install/workspace_manager/#troubleshooting","title":"Troubleshooting","text":"Common issues and solutions:
Check disk space availability
API Key Problems:
Check for proper environment variable loading
Telemetry Issues:
Check for proper boolean values
Auto-Update Issues:
In this guide, we will cover how to integrate various memory systems from the Swarms Memory framework into an agent class. The Swarms Memory framework allows for the integration of different database-backed memory systems, enabling agents to retain and query long-term knowledge effectively. We'll walk through examples of integrating with Pinecone, ChromaDB, and Faiss, showcasing how to configure custom functions and embed memory functionality into an agent class.
"},{"location":"swarms/memory/diy_memory/#installation","title":"Installation","text":"First, you need to install the Swarms Memory package:
$ pip install swarms swarms-memory\n
"},{"location":"swarms/memory/diy_memory/#integrating-chromadb-with-the-agent-class","title":"Integrating ChromaDB with the Agent Class","text":"ChromaDB is a simple, high-performance vector store for use with embeddings. Here's how you can integrate ChromaDB:
from swarms_memory import ChromaDB\nfrom swarms.structs.agent import Agent\n\n# Initialize ChromaDB memory\nchromadb_memory = ChromaDB(\n metric=\"cosine\",\n output_dir=\"finance_agent_rag\",\n)\n\n# Initialize the Financial Analysis Agent with GPT-4o-mini model\nagent = Agent(\n agent_name=\"Financial-Analysis-Agent\",\n system_prompt=\"Agent system prompt here\",\n agent_description=\"Agent performs financial analysis.\",\n model_name=\"gpt-4o-mini\",\n long_term_memory=chromadb_memory,\n)\n\n# Run a query\nresponse = agent.run(\n \"What are the components of a startup's stock incentive equity plan?\"\n)\nprint(response)\n
"},{"location":"swarms/memory/diy_memory/#integrating-faiss-with-the-agent-class","title":"Integrating Faiss with the Agent Class","text":"Faiss is a library for efficient similarity search and clustering of dense vectors. Here's how you can integrate Faiss:
from typing import List, Dict, Any\nfrom swarms_memory.faiss_wrapper import FAISSDB\nfrom swarms import Agent\nfrom swarm_models import Anthropic\nfrom transformers import AutoTokenizer, AutoModel\nimport torch\nimport os\n\n# Custom embedding function using a HuggingFace model\ndef custom_embedding_function(text: str) -> List[float]:\n tokenizer = AutoTokenizer.from_pretrained(\"bert-base-uncased\")\n model = AutoModel.from_pretrained(\"bert-base-uncased\")\n inputs = tokenizer(\n text,\n return_tensors=\"pt\",\n padding=True,\n truncation=True,\n max_length=512,\n )\n with torch.no_grad():\n outputs = model(**inputs)\n embeddings = (\n outputs.last_hidden_state.mean(dim=1).squeeze().tolist()\n )\n return embeddings\n\n# Initialize the FAISS memory wrapper\nfaiss_memory = FAISSDB(\n dimension=768,\n index_type=\"Flat\",\n embedding_function=custom_embedding_function,\n metric=\"cosine\",\n)\n\n# Model\nmodel = Anthropic(anthropic_api_key=os.getenv(\"ANTHROPIC_API_KEY\"))\n\n# Initialize the agent with Faiss memory\nagent = Agent(\n agent_name=\"Financial-Analysis-Agent\",\n system_prompt=\"Agent system prompt here\",\n agent_description=\"Agent performs financial analysis.\",\n llm=model,\n long_term_memory=faiss_memory,\n)\n\n# Run a query\nagent.run(\"Explain the differences between various types of financial instruments.\")\n
"},{"location":"swarms/memory/diy_memory/#mermaid-graphs-for-visualizing-integration","title":"Mermaid Graphs for Visualizing Integration","text":"To help visualize the integration process, here's a Mermaid graph illustrating how an agent interacts with the memory systems:
graph TD;\n A[Agent] -->|Queries| B[Memory System]\n B --> C{Pinecone / ChromaDB / Faiss}\n C --> D[Embedding Function]\n D --> E[LLM Model]\n E --> F[Query Results]\n F -->|Returns| A
This graph shows the flow from the agent sending queries to the memory system, which processes them using the embedding function and LLM model, and finally returns the results back to the agent.
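To make this flow concrete, the toy memory below has the same add/query shape as a `long_term_memory` backend, using bag-of-words cosine similarity in place of a real embedding model. It is a sketch for illustration, not part of swarms-memory.

```python
from collections import Counter
import math

def embed(text):
    # Toy embedding: bag-of-words counts. A real backend (Pinecone,
    # ChromaDB, Faiss) would call an embedding model here instead.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(count * b.get(word, 0) for word, count in a.items())
    norm = math.sqrt(sum(c * c for c in a.values())) * math.sqrt(
        sum(c * c for c in b.values())
    )
    return dot / norm if norm else 0.0

class ToyMemory:
    """Same add/query shape as a long_term_memory backend."""
    def __init__(self):
        self.docs = []  # (embedding, original text) pairs

    def add(self, text):
        self.docs.append((embed(text), text))

    def query(self, text, top_k=1):
        qvec = embed(text)
        ranked = sorted(self.docs, key=lambda d: cosine(qvec, d[0]), reverse=True)
        return [doc for _, doc in ranked[:top_k]]

memory = ToyMemory()
memory.add("Roth IRA contribution limits")
memory.add("startup stock incentive equity plans")
print(memory.query("equity plans for a startup"))
# ['startup stock incentive equity plans']
```

Swapping `embed` for a model-backed embedding function (as in the FAISSDB example above) is what the real wrappers do.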
"},{"location":"swarms/memory/diy_memory/#conclusion","title":"Conclusion","text":"Integrating various memory systems from the Swarms Memory framework into the agent class enables the creation of powerful, memory-augmented agents capable of retaining and recalling information over time. Whether you're using Pinecone, ChromaDB, or Faiss, the process involves initializing the memory system, embedding functions, and then passing this memory system to the agent class. The examples and visualizations provided should help you get started with building your own memory-augmented agents.
Happy coding!
"},{"location":"swarms/models/","title":"Swarm Models","text":"$ pip3 install -U swarm-models\n
Welcome to the documentation for the llm section of the swarms package, designed to facilitate seamless integration with various AI language models and APIs. This package empowers developers, end-users, and system administrators to interact with AI models from different providers, such as OpenAI, Hugging Face, Google PaLM, and Anthropic.
"},{"location":"swarms/models/#table-of-contents","title":"Table of Contents","text":"The OpenAI class provides an interface to interact with OpenAI's language models. It allows both synchronous and asynchronous interactions.
Constructor:
OpenAI(api_key: str, system: str = None, console: bool = True, model: str = None, params: dict = None, save_messages: bool = True)\n
Attributes: - api_key
(str): Your OpenAI API key.
system
(str, optional): A system message to be used in conversations.
console
(bool, default=True): Display console logs.
model
(str, optional): Name of the language model to use.
params
(dict, optional): Additional parameters for model interactions.
save_messages
(bool, default=True): Save conversation messages.
Methods:
run(message: str, **kwargs) -> str
: Generate a response using the OpenAI model.
generate_async(message: str, **kwargs) -> str
: Generate a response asynchronously.
ask_multiple(ids: List[str], question_template: str) -> List[str]
: Query multiple IDs simultaneously.
stream_multiple(ids: List[str], question_template: str) -> List[str]
: Stream multiple responses.
Usage Example:
import asyncio\n\nfrom swarm_models import OpenAI\n\nchat = OpenAI(api_key=\"YOUR_OPENAI_API_KEY\")\n\nresponse = chat.run(\"Hello, how can I assist you?\")\nprint(response)\n\nids = [\"id1\", \"id2\", \"id3\"]\nasync_responses = asyncio.run(chat.ask_multiple(ids, \"How is {id}?\"))\nprint(async_responses)\n
"},{"location":"swarms/models/#2-huggingface-swarm_modelshuggingfacellm","title":"2. HuggingFace (swarm_models.HuggingFaceLLM)","text":"The HuggingFaceLLM class allows interaction with language models from Hugging Face.
Constructor:
HuggingFaceLLM(model_id: str, device: str = None, max_length: int = 20, quantize: bool = False, quantization_config: dict = None)\n
Attributes:
model_id
(str): ID or name of the Hugging Face model.
device
(str, optional): Device to run the model on (e.g., 'cuda', 'cpu').
max_length
(int, default=20): Maximum length of generated text.
quantize
(bool, default=False): Apply model quantization.
quantization_config
(dict, optional): Configuration for quantization.
Methods:
run(prompt_text: str, max_length: int = None) -> str
: Generate text based on a prompt.Usage Example:
from swarm_models import HuggingFaceLLM\n\nmodel_id = \"gpt2\"\nhugging_face_model = HuggingFaceLLM(model_id=model_id)\n\nprompt = \"Once upon a time\"\ngenerated_text = hugging_face_model.run(prompt)\nprint(generated_text)\n
"},{"location":"swarms/models/#3-anthropic-swarm_modelsanthropic","title":"3. Anthropic (swarm_models.Anthropic)","text":"The Anthropic class enables interaction with Anthropic's large language models.
Constructor:
Anthropic(model: str = \"claude-2\", max_tokens_to_sample: int = 256, temperature: float = None, top_k: int = None, top_p: float = None, streaming: bool = False, default_request_timeout: int = None)\n
Attributes:
model
(str): Name of the Anthropic model.
max_tokens_to_sample
(int, default=256): Maximum tokens to sample.
temperature
(float, optional): Temperature for text generation.
top_k
(int, optional): Top-k sampling value.
top_p
(float, optional): Top-p sampling value.
streaming
(bool, default=False): Enable streaming mode.
default_request_timeout
(int, optional): Default request timeout.
Methods:
run(prompt: str, stop: List[str] = None) -> str
: Generate text based on a prompt.Usage Example:
from swarm_models import Anthropic\n\nanthropic = Anthropic()\nprompt = \"Once upon a time\"\ngenerated_text = anthropic.run(prompt)\nprint(generated_text)\n
This concludes the documentation for the \"models\" folder, providing you with tools to seamlessly integrate with various language models and APIs. Happy coding!
"},{"location":"swarms/models/agent_and_models/","title":"Model Integration in Agents","text":"About Model Integration
Agents support multiple model providers through LiteLLM integration, allowing you to easily switch between different language models. This document outlines the available providers and how to use them with agents.
"},{"location":"swarms/models/agent_and_models/#important-note-on-model-names","title":"Important Note on Model Names","text":"Required Format
When specifying a model in an agent, you must use the format provider/model_name
. For example:
\"openai/gpt-4\"\n\"anthropic/claude-3-opus-latest\"\n\"cohere/command-r-plus\"\n
This format ensures the agent knows which provider to use for the specified model."},{"location":"swarms/models/agent_and_models/#available-model-providers","title":"Available Model Providers","text":""},{"location":"swarms/models/agent_and_models/#openai","title":"OpenAI","text":"OpenAI Models openai
gpt-4
gpt-3.5-turbo
gpt-4-turbo-preview
anthropic
claude-3-opus-latest
claude-3-opus-20240229
claude-3-sonnet-20240229
claude-3-5-sonnet-latest
claude-3-5-sonnet-20240620
claude-3-7-sonnet-latest
claude-3-7-sonnet-20250219
claude-3-5-sonnet-20241022
claude-3-haiku-20240307
claude-3-5-haiku-20241022
claude-3-5-haiku-latest
claude-2
claude-2.1
claude-instant-1
claude-instant-1.2
cohere
command
command-r
command-r-08-2024
command-r7b-12-2024
command-light
command-r-plus
command-r-plus-08-2024
google
gemini-pro
gemini-pro-vision
mistral
mistral-tiny
mistral-small
mistral-medium
To use a different model with your Swarms agent, specify the model name in the model_name
parameter when initializing the Agent, using the provider/model_name format:
from swarms import Agent\n\n# Using OpenAI's GPT-4\nagent = Agent(\n agent_name=\"Research-Agent\",\n model_name=\"openai/gpt-4o\", # Note the provider/model_name format\n # ... other parameters\n)\n\n# Using Anthropic's Claude\nagent = Agent(\n agent_name=\"Analysis-Agent\",\n model_name=\"anthropic/claude-3-sonnet-20240229\", # Note the provider/model_name format\n # ... other parameters\n)\n\n# Using Cohere's Command\nagent = Agent(\n agent_name=\"Text-Agent\",\n model_name=\"cohere/command-r-plus\", # Note the provider/model_name format\n # ... other parameters\n)\n
"},{"location":"swarms/models/agent_and_models/#model-configuration","title":"Model Configuration","text":"When using different models, you can configure various parameters:
agent = Agent(\n agent_name=\"Custom-Agent\",\n model_name=\"openai/gpt-4\",\n temperature=0.7, # Controls randomness (0.0 to 1.0)\n max_tokens=2000, # Maximum tokens in response\n top_p=0.9, # Nucleus sampling parameter\n frequency_penalty=0.0, # Reduces repetition\n presence_penalty=0.0, # Encourages new topics\n # ... other parameters\n)\n
"},{"location":"swarms/models/agent_and_models/#best-practices","title":"Best Practices","text":""},{"location":"swarms/models/agent_and_models/#model-selection","title":"Model Selection","text":"Choosing the Right Model
Error Management
Cost Considerations
agent = Agent(\n agent_name=\"Analysis-Agent\",\n model_name=\"openai/gpt-4\", # Note the provider/model_name format\n temperature=0.3, # Lower temperature for more focused responses\n max_tokens=4000\n)\n
"},{"location":"swarms/models/agent_and_models/#2-creative-tasks-claude","title":"2. Creative Tasks (Claude)","text":"agent = Agent(\n agent_name=\"Creative-Agent\",\n model_name=\"anthropic/claude-3-sonnet-20240229\", # Note the provider/model_name format\n temperature=0.8, # Higher temperature for more creative responses\n max_tokens=2000\n)\n
"},{"location":"swarms/models/agent_and_models/#3-vision-tasks-gemini","title":"3. Vision Tasks (Gemini)","text":"agent = Agent(\n agent_name=\"Vision-Agent\",\n model_name=\"google/gemini-pro-vision\", # Note the provider/model_name format\n temperature=0.4,\n max_tokens=1000\n)\n
"},{"location":"swarms/models/agent_and_models/#troubleshooting","title":"Troubleshooting","text":"Common Issues
If you encounter issues with specific models:
Anthropic
Class","text":""},{"location":"swarms/models/anthropic/#overview-and-introduction","title":"Overview and Introduction","text":"The Anthropic
class provides an interface to interact with the Anthropic large language models. This class encapsulates the necessary functionality to request completions from the Anthropic API based on a provided prompt and other configurable parameters.
Anthropic
","text":"class Anthropic:\n \"\"\"Anthropic large language models.\"\"\"\n
"},{"location":"swarms/models/anthropic/#parameters","title":"Parameters:","text":"model (str)
: The name of the model to use for completions. Default is \"claude-2\".
max_tokens_to_sample (int)
: Maximum number of tokens to generate in the output. Default is 256.
temperature (float, optional)
: Sampling temperature. A higher value will make the output more random, while a lower value will make it more deterministic.
top_k (int, optional)
: Sample from the top-k most probable next tokens. Setting this parameter can reduce randomness in the output.
top_p (float, optional)
: Sample from the smallest set of tokens such that their cumulative probability exceeds the specified value. Used in nucleus sampling to provide a balance between randomness and determinism.
streaming (bool)
: Whether to stream the output or not. Default is False.
default_request_timeout (int, optional)
: Default timeout in seconds for API requests. Default is 600.
_default_params(self) -> dict
","text":"Provides the default parameters for calling the Anthropic API.
Returns: A dictionary containing the default parameters.
generate(self, prompt: str, stop: list[str] = None) -> str
","text":"Calls out to Anthropic's completion endpoint to generate text based on the given prompt.
Parameters:
prompt (str)
: The input text to provide context for the generated text.
stop (list[str], optional)
: Sequences to indicate when the model should stop generating.
Returns: A string containing the model's generated completion based on the prompt.
__call__(self, prompt: str, stop: list[str] = None) -> str
","text":"An alternative to the generate
method that allows calling the class instance directly.
Parameters:
prompt (str)
: The input text to provide context for the generated text.
stop (list[str], optional)
: Sequences to indicate when the model should stop generating.
Returns: A string containing the model's generated completion based on the prompt.
# Import necessary modules and classes\nfrom swarm_models import Anthropic\n\n# Initialize an instance of the Anthropic class\nmodel = Anthropic(anthropic_api_key=\"\")\n\n# Using the generate method\ncompletion_1 = model.generate(\"What is the capital of France?\")\nprint(completion_1)\n\n# Using the __call__ method\ncompletion_2 = model(\"How far is the moon from the earth?\", stop=[\"miles\", \"km\"])\nprint(completion_2)\n
"},{"location":"swarms/models/anthropic/#mathematical-formula","title":"Mathematical Formula","text":"The underlying operations of the Anthropic
class involve probabilistic sampling based on token logits from the Anthropic model. Mathematically, the process of generating a token \( t \) from the given logits \( l \) can be described by the softmax function:
\[ P(t) = \frac{e^{l_t}}{\sum_{t'} e^{l_{t'}}} \]
Where: - \( P(t) \) is the probability of token \( t \). - \( l_t \) is the logit corresponding to token \( t \). - The summation runs over all possible tokens \( t' \).
The temperature, top-k, and top-p parameters are further used to modulate the probabilities.
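The modulation described above can be sketched in plain Python. The following is an illustrative, self-contained model of softmax sampling with temperature, top-k, and top-p, not the Anthropic API's internal implementation:

```python
import math

def sample_distribution(logits, temperature=1.0, top_k=None, top_p=None):
    # Temperature scaling: lower values sharpen the distribution,
    # higher values flatten it toward uniform
    scaled = [l / temperature for l in logits]
    # Numerically stable softmax
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Rank token indices by probability, highest first
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    if top_k is not None:
        # top-k: keep only the k most probable tokens
        order = order[:top_k]
    if top_p is not None:
        # top-p (nucleus): smallest prefix whose cumulative mass reaches top_p
        kept, mass = [], 0.0
        for i in order:
            kept.append(i)
            mass += probs[i]
            if mass >= top_p:
                break
        order = kept
    # Renormalize over the surviving tokens
    mass = sum(probs[i] for i in order)
    return {i: probs[i] / mass for i in order}

print(sample_distribution([2.0, 1.0, 0.1], temperature=1.0, top_k=2))
```

With `top_k=2`, only the two most probable tokens survive and their probabilities are renormalized to sum to one.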
"},{"location":"swarms/models/anthropic/#additional-information-and-tips","title":"Additional Information and Tips","text":"Ensure you have a valid ANTHROPIC_API_KEY
set as an environment variable or passed during class instantiation.
Always handle exceptions that may arise from API timeouts or invalid prompts.
See Anthropic's official documentation for full API details.
See resources on token-based sampling in language models for a deeper understanding of token sampling.
The Language Model Interface (BaseLLM
) is a flexible and extensible framework for working with various language models. This documentation provides a comprehensive guide to the interface, its attributes, methods, and usage examples. Whether you're using a pre-trained language model or building your own, this interface can help streamline the process of text generation, chatbots, summarization, and more.
The BaseLLM
class provides a common interface for language models. It can be initialized with various parameters to customize model behavior. Here are the initialization parameters:
model_name
The name of the language model to use. None max_tokens
The maximum number of tokens in the generated text. None temperature
The temperature parameter for controlling randomness in text generation. None top_k
The top-k parameter for filtering words in text generation. None top_p
The top-p parameter for filtering words in text generation. None system_prompt
A system-level prompt to set context for generation. None beam_width
The beam width for beam search. None num_return_sequences
The number of sequences to return in the output. None seed
The random seed for reproducibility. None frequency_penalty
The frequency penalty parameter for promoting word diversity. None presence_penalty
The presence penalty parameter for discouraging repetitions. None stop_token
A stop token to indicate the end of generated text. None length_penalty
The length penalty parameter for controlling the output length. None role
The role of the language model (e.g., assistant, user, etc.). None max_length
The maximum length of generated sequences. None do_sample
Whether to use sampling during text generation. None early_stopping
Whether to use early stopping during text generation. None num_beams
The number of beams to use in beam search. None repition_penalty
The repetition penalty parameter for discouraging repeated tokens. None pad_token_id
The token ID for padding. None eos_token_id
The token ID for the end of a sequence. None bos_token_id
The token ID for the beginning of a sequence. None device
The device to run the model on (e.g., 'cpu' or 'cuda'). None"},{"location":"swarms/models/base_llm/#attributes","title":"Attributes","text":"model_name
: The name of the language model being used.max_tokens
: The maximum number of tokens in generated text.temperature
: The temperature parameter controlling randomness.top_k
: The top-k parameter for word filtering.top_p
: The top-p parameter for word filtering.system_prompt
: A system-level prompt for context.beam_width
: The beam width for beam search.num_return_sequences
: The number of output sequences.seed
: The random seed for reproducibility.frequency_penalty
: The frequency penalty parameter.presence_penalty
: The presence penalty parameter.stop_token
: The stop token to indicate text end.length_penalty
: The length penalty parameter.role
: The role of the language model.max_length
: The maximum length of generated sequences.do_sample
: Whether to use sampling during generation.early_stopping
: Whether to use early stopping.num_beams
: The number of beams in beam search.repition_penalty
: The repetition penalty parameter.pad_token_id
: The token ID for padding.eos_token_id
: The token ID for the end of a sequence.bos_token_id
: The token ID for the beginning of a sequence.device
: The device used for model execution.history
: A list of conversation history.The BaseLLM
class defines several methods for working with language models:
run(task: Optional[str] = None, *args, **kwargs) -> str
: Generate text using the language model. This method is abstract and must be implemented by subclasses.
arun(task: Optional[str] = None, *args, **kwargs)
: An asynchronous version of run
for concurrent text generation.
batch_run(tasks: List[str], *args, **kwargs)
: Generate text for a batch of tasks.
abatch_run(tasks: List[str], *args, **kwargs)
: An asynchronous version of batch_run
for concurrent batch generation.
chat(task: str, history: str = \"\") -> str
: Conduct a chat with the model, providing a conversation history.
__call__(task: str) -> str
: Call the model to generate text.
_tokens_per_second() -> float
: Calculate tokens generated per second.
_num_tokens(text: str) -> int
: Calculate the number of tokens in a text.
_time_for_generation(task: str) -> float
: Measure the time taken for text generation.
generate_summary(text: str) -> str
: Generate a summary of the provided text.
set_temperature(value: float)
: Set the temperature parameter.
set_max_tokens(value: int)
: Set the maximum number of tokens.
clear_history()
: Clear the conversation history.
enable_logging(log_file: str = \"model.log\")
: Initialize logging for the model.
log_event(message: str)
: Log an event.
save_checkpoint(checkpoint_dir: str = \"checkpoints\")
: Save the model state as a checkpoint.
load_checkpoint(checkpoint_path: str)
: Load the model state from a checkpoint.
toggle_creative_mode(enable: bool)
: Toggle creative mode for the model.
track_resource_utilization()
: Track and report resource utilization.
get_generation_time() -> float
: Get the time taken for text generation.
set_max_length(max_length: int)
: Set the maximum length of generated sequences.
set_model_name(model_name: str)
: Set the model name.
set_frequency_penalty(frequency_penalty: float)
: Set the frequency penalty parameter.
set_presence_penalty(presence_penalty: float)
: Set the presence penalty parameter.
set_stop_token(stop_token: str)
: Set the stop token.
set_length_penalty(length_penalty: float)
: Set the length penalty parameter.
set_role(role: str)
: Set the role of the model.
set_top_k(top_k: int)
: Set the top-k parameter.
set_top_p(top_p: float)
: Set the top-p parameter.
set_num_beams(num_beams: int)
: Set the number of beams.
set_do_sample(do_sample: bool)
: Set whether to use sampling.
set_early_stopping(early_stopping: bool)
: Set whether to use early stopping.
set_seed(seed: int)
: Set the random seed.
set_device(device: str)
: Set the device for model execution.
The BaseLLM
class serves as the base for implementing specific language models. Subclasses of BaseLLM
should implement the run
method to define how text is generated for a given task. This design allows flexibility in integrating different language models while maintaining a common interface.
To demonstrate how to use the BaseLLM
interface, let's create an example using a hypothetical language model. We'll initialize an instance of the model and generate text for a simple task.
# Import the BaseLLM class\nfrom swarm_models import BaseLLM\n\n# Create an instance of the language model\n# (BaseLLM is abstract; in practice this would be a concrete subclass\n# that implements run())\nlanguage_model = BaseLLM(\n model_name=\"my_language_model\",\n max_tokens=50,\n temperature=0.7,\n top_k=50,\n top_p=0.9,\n device=\"cuda\",\n)\n\n# Generate text for a task\ntask = \"Translate the following English text to French: 'Hello, world.'\"\ngenerated_text = language_model.run(task)\n\n# Print the generated text\nprint(generated_text)\n
In this example, we've created an instance of our hypothetical language model, configured its parameters, and used the run
method to generate text for a translation task.
The BaseLLM
interface provides additional features for customization and control:
batch_run
: Generate text for a batch of tasks efficiently.arun
and abatch_run
: Asynchronous versions of run
and batch_run
for concurrent text generation.chat
: Conduct a conversation with the model by providing a history of the conversation.__call__
: Allow the model to be called directly to generate text.These features enhance the flexibility and utility of the interface in various applications, including chatbots, language translation, and content generation.
"},{"location":"swarms/models/base_llm/#6-performance-metrics","title":"6. Performance Metrics","text":"The BaseLLM
class offers methods for tracking performance metrics:
_tokens_per_second
: Calculate tokens generated per second._num_tokens
: Calculate the number of tokens in a text._time_for_generation
: Measure the time taken for text generation.These metrics help assess the efficiency and speed of text generation, enabling optimizations as needed.
"},{"location":"swarms/models/base_llm/#7-logging-and-checkpoints","title":"7. Logging and Checkpoints","text":"Logging and checkpointing are crucial for tracking model behavior and ensuring reproducibility:
enable_logging
: Initialize logging for the model.log_event
: Log events and activities.save_checkpoint
: Save the model state as a checkpoint.load_checkpoint
: Load the model state from a checkpoint.These capabilities aid in debugging, monitoring, and resuming model experiments.
"},{"location":"swarms/models/base_llm/#8-resource-utilization-tracking","title":"8. Resource Utilization Tracking","text":"The track_resource_utilization
method is a placeholder for tracking and reporting resource utilization, such as CPU and memory usage. It can be customized to suit specific monitoring needs.
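As one way to fill in this placeholder, the standard-library `tracemalloc` can report memory usage around a call (an assumption for illustration; the real method may track different resources):

```python
import tracemalloc

def track_resource_utilization(fn, *args, **kwargs):
    """Run fn and report traced memory usage during the call."""
    tracemalloc.start()
    try:
        result = fn(*args, **kwargs)
        current, peak = tracemalloc.get_traced_memory()
    finally:
        tracemalloc.stop()
    return result, {"current_bytes": current, "peak_bytes": peak}

result, stats = track_resource_utilization(lambda: [0] * 10_000)
print(stats["peak_bytes"] > 0)  # True
```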
The Language Model Interface (BaseLLM
) is a versatile framework for working with language models. Whether you're using pre-trained models or developing your own, this interface provides a consistent and extensible foundation. By following the provided guidelines and examples, you can integrate and customize language models for various natural language processing tasks.
BaseMultiModalModel
Documentation","text":"Swarms is a Python library that provides a framework for running multimodal AI models. It allows you to combine text and image inputs and generate coherent and context-aware responses. This library is designed to be extensible, allowing you to integrate various multimodal models.
"},{"location":"swarms/models/base_multimodal_model/#table-of-contents","title":"Table of Contents","text":"Swarms is designed to simplify the process of working with multimodal AI models. These models are capable of understanding and generating content based on both textual and image inputs. With this library, you can run such models and receive context-aware responses.
"},{"location":"swarms/models/base_multimodal_model/#2-installation","title":"2. Installation","text":"To install swarms, you can use pip:
pip install swarms\n
"},{"location":"swarms/models/base_multimodal_model/#3-getting-started","title":"3. Getting Started","text":"To get started with Swarms, you'll need to import the library and create an instance of the BaseMultiModalModel
class. This class serves as the foundation for running multimodal models.
from swarm_models import BaseMultiModalModel\n\nmodel = BaseMultiModalModel(\n model_name=\"your_model_name\",\n temperature=0.5,\n max_tokens=500,\n max_workers=10,\n top_p=1,\n top_k=50,\n beautify=False,\n device=\"cuda\",\n max_new_tokens=500,\n retries=3,\n)\n
You can customize the initialization parameters based on your model's requirements.
"},{"location":"swarms/models/base_multimodal_model/#4-basemultimodalmodel-class","title":"4. BaseMultiModalModel Class","text":""},{"location":"swarms/models/base_multimodal_model/#initialization","title":"Initialization","text":"The BaseMultiModalModel
class is initialized with several parameters that control its behavior. Here's a breakdown of the initialization parameters:
model_name
The name of the multimodal model to use. None temperature
The temperature parameter for controlling randomness in text generation. 0.5 max_tokens
The maximum number of tokens in the generated text. 500 max_workers
The maximum number of concurrent workers for running tasks. 10 top_p
The top-p parameter for filtering words in text generation. 1 top_k
The top-k parameter for filtering words in text generation. 50 beautify
Whether to beautify the output text. False device
The device to run the model on (e.g., 'cuda' or 'cpu'). 'cuda' max_new_tokens
The maximum number of new tokens allowed in generated responses. 500 retries
The number of retries in case of an error during text generation. 3 system_prompt
A system-level prompt to set context for generation. None meta_prompt
A meta prompt to provide guidance for including image labels in responses. None"},{"location":"swarms/models/base_multimodal_model/#methods","title":"Methods","text":"The BaseMultiModalModel
class defines various methods for running multimodal models and managing interactions:
run(task: str, img: str) -> str
: Run the multimodal model with a text task and an image URL to generate a response.
arun(task: str, img: str) -> str
: Run the multimodal model asynchronously with a text task and an image URL to generate a response.
get_img_from_web(img: str) -> Image
: Fetch an image from a URL and return it as a PIL Image.
encode_img(img: str) -> str
: Encode an image to base64 format.
get_img(img: str) -> Image
: Load an image from the local file system and return it as a PIL Image.
clear_chat_history()
: Clear the chat history maintained by the model.
run_many(tasks: List[str], imgs: List[str]) -> List[str]
: Run the model on multiple text tasks and image URLs concurrently and return a list of responses.
run_batch(tasks_images: List[Tuple[str, str]]) -> List[str]
: Process a batch of text tasks and image URLs and return a list of responses.
run_batch_async(tasks_images: List[Tuple[str, str]]) -> List[str]
: Process a batch of text tasks and image URLs asynchronously and return a list of responses.
run_batch_async_with_retries(tasks_images: List[Tuple[str, str]]) -> List[str]
: Process a batch of text tasks and image URLs asynchronously with retries in case of errors and return a list of responses.
unique_chat_history() -> List[str]
: Get the unique chat history stored by the model.
run_with_retries(task: str, img: str) -> str
: Run the model with retries in case of an error.
run_batch_with_retries(tasks_images: List[Tuple[str, str]]) -> List[str]
: Run a batch of tasks with retries in case of errors and return a list of responses.
_tokens_per_second() -> float
: Calculate the tokens generated per second during text generation.
_time_for_generation(task: str) -> float
: Measure the time taken for text generation for a specific task.
generate_summary(text: str) -> str
: Generate a summary of the provided text.
set_temperature(value: float)
: Set the temperature parameter for controlling randomness in text generation.
set_max_tokens(value: int)
: Set the maximum number of tokens allowed in generated responses.
get_generation_time() -> float
: Get the time taken for text generation for the last task.
get_chat_history() -> List[str]
: Get the chat history, including all interactions.
get_unique_chat_history() -> List[str]
: Get the unique chat history, removing duplicate interactions.
get_chat_history_length() -> int
: Get the length of the chat history.
get_unique_chat_history_length() -> int
: Get the length of the unique chat history.
get_chat_history_tokens() -> int
: Get the total number of tokens in the chat history.
print_beautiful(content: str, color: str = 'cyan')
: Print content beautifully using colored text.
stream(content: str)
: Stream the content, printing it character by character.
meta_prompt() -> str
: Get the meta prompt that provides guidance for including image labels in responses.
Let's explore some usage examples of the MultiModalAI library:
"},{"location":"swarms/models/base_multimodal_model/#example-1-running","title":"Example 1: Running","text":"the Model
# Import the library\nfrom swarm_models import BaseMultiModalModel\n\n# Create an instance of the model\nmodel = BaseMultiModalModel(\n model_name=\"your_model_name\",\n temperature=0.5,\n max_tokens=500,\n device=\"cuda\",\n)\n\n# Run the model with a text task and an image URL\nresponse = model.run(\n \"Generate a summary of this text\", \"https://www.example.com/image.jpg\"\n)\nprint(response)\n
"},{"location":"swarms/models/base_multimodal_model/#example-2-running-multiple-tasks-concurrently","title":"Example 2: Running Multiple Tasks Concurrently","text":"# Import the library\nfrom swarm_models import BaseMultiModalModel\n\n# Create an instance of the model\nmodel = BaseMultiModalModel(\n model_name=\"your_model_name\",\n temperature=0.5,\n max_tokens=500,\n max_workers=4,\n device=\"cuda\",\n)\n\n# Define a list of tasks and image URLs\ntasks = [\"Task 1\", \"Task 2\", \"Task 3\"]\nimages = [\"https://image1.jpg\", \"https://image2.jpg\", \"https://image3.jpg\"]\n\n# Run the model on multiple tasks concurrently\nresponses = model.run_many(tasks, images)\nfor response in responses:\n print(response)\n
"},{"location":"swarms/models/base_multimodal_model/#example-3-running-the-model-asynchronously","title":"Example 3: Running the Model Asynchronously","text":"# Import the library\nfrom swarm_models import BaseMultiModalModel\n\n# Create an instance of the model\nmodel = BaseMultiModalModel(\n model_name=\"your_model_name\",\n temperature=0.5,\n max_tokens=500,\n device=\"cuda\",\n)\n\n# Define a list of tasks and image URLs\ntasks_images = [\n (\"Task 1\", \"https://image1.jpg\"),\n (\"Task 2\", \"https://image2.jpg\"),\n (\"Task 3\", \"https://image3.jpg\"),\n]\n\n# Run the model on multiple tasks asynchronously\nresponses = model.run_batch_async(tasks_images)\nfor response in responses:\n print(response)\n
"},{"location":"swarms/models/base_multimodal_model/#example-4-inheriting-basemultimodalmodel-for-its-prebuilt-classes","title":"Example 4: Inheriting BaseMultiModalModel
for its prebuilt classes","text":"from swarm_models import BaseMultiModalModel\n\n\nclass CustomMultiModalModel(BaseMultiModalModel):\n def __init__(self, model_name, custom_parameter, *args, **kwargs):\n # Call the parent class constructor\n super().__init__(model_name=model_name, *args, **kwargs)\n # Initialize custom parameters specific to your model\n self.custom_parameter = custom_parameter\n\n def __call__(self, text, img):\n # Implement the multimodal model logic here\n # You can use self.custom_parameter and other inherited attributes\n pass\n\n def generate_summary(self, text):\n # Implement the summary generation logic using your model\n # You can use self.custom_parameter and other inherited attributes\n pass\n\n\n# Create an instance of your custom multimodal model\ncustom_model = CustomMultiModalModel(\n model_name=\"your_custom_model_name\",\n custom_parameter=\"your_custom_value\",\n temperature=0.5,\n max_tokens=500,\n device=\"cuda\",\n)\n\n# Run your custom model\nresponse = custom_model.run(\n \"Generate a summary of this text\", \"https://www.example.com/image.jpg\"\n)\nprint(response)\n\n# Generate a summary using your custom model\nsummary = custom_model.generate_summary(\"This is a sample text to summarize.\")\nprint(summary)\n
In the code above:
We define a CustomMultiModalModel
class that inherits from BaseMultiModalModel
.
In the constructor of our custom class, we call the parent class constructor using super()
and initialize any custom parameters specific to our model. In this example, we introduced a custom_parameter
.
We override the __call__
method, which is responsible for running the multimodal model logic. Here, you can implement the specific behavior of your model, considering both text and image inputs.
We override the generate_summary
method, which is used to generate a summary of text input. You can implement your custom summarization logic here.
We create an instance of our custom model, passing the required parameters, including the custom parameter.
We demonstrate how to run the custom model and generate a summary using it.
By inheriting from BaseMultiModalModel
, you can leverage the prebuilt features and methods provided by the library while customizing the behavior of your multimodal model. This allows you to create powerful and specialized models for various multimodal tasks.
These examples demonstrate how to use MultiModalAI to run multimodal models with text and image inputs. You can adjust the parameters and methods to suit your specific use cases.
"},{"location":"swarms/models/base_multimodal_model/#6-additional-tips","title":"6. Additional Tips","text":"Here are some additional tips and considerations for using MultiModalAI effectively:
Custom Models: You can create your own multimodal models and inherit from the BaseMultiModalModel
class to integrate them with this library.
Retries: In cases where text generation might fail due to various reasons (e.g., server issues), using methods with retries can be helpful.
Monitoring: You can monitor the performance of your model using methods like _tokens_per_second()
and _time_for_generation()
.
Chat History: The library maintains a chat history, allowing you to keep track of interactions.
Streaming: The stream()
method can be useful for displaying output character by character, which can be helpful for certain applications.
Here are some references and resources that you may find useful for working with multimodal models:
Hugging Face Transformers Library: A library for working with various transformer-based models.
PIL (Python Imaging Library): Documentation for working with images in Python using the Pillow library.
Concurrent Programming in Python: Official Python documentation for concurrent programming.
Requests Library Documentation: Documentation for the Requests library, which is used for making HTTP requests.
Base64 Encoding in Python: Official Python documentation for base64 encoding and decoding.
This concludes the documentation for the MultiModalAI library. You can now explore the library further and integrate it with your multimodal AI projects.
"},{"location":"swarms/models/cerebras/","title":"Using Cerebras LLaMA with Swarms","text":"This guide demonstrates how to create and use an AI agent powered by the Cerebras LLaMA 3 70B model using the Swarms framework.
"},{"location":"swarms/models/cerebras/#prerequisites","title":"Prerequisites","text":"Python 3.7+
Swarms library installed (pip install swarms
)
Set your CEREBRAS_API_KEY environment variable
from swarms.structs.agent import Agent\n
This imports the Agent
class from Swarms, which is the core component for creating AI agents.
agent = Agent(\n agent_name=\"Financial-Analysis-Agent\",\n agent_description=\"Personal finance advisor agent\",\n max_loops=4,\n model_name=\"cerebras/llama3-70b-instruct\",\n dynamic_temperature_enabled=True,\n interactive=False,\n output_type=\"all\",\n)\n
Let's break down each parameter:
agent_name
: A descriptive name for your agent (here, \"Financial-Analysis-Agent\")
agent_description
: A brief description of the agent's purpose
max_loops
: Maximum number of interaction loops the agent can perform (set to 4)
model_name
: Specifies the Cerebras LLaMA 3 70B model to use
dynamic_temperature_enabled
: Enables dynamic adjustment of temperature for varied responses
interactive
: When False, runs without requiring user interaction
output_type
: Set to \"all\" to return complete response information
agent.run(\"Conduct an analysis of the best real undervalued ETFs\")\n
This command:
Activates the agent
Processes the given prompt about ETF analysis
Returns the analysis based on the model's knowledge
The Cerebras LLaMA 3 70B model is a powerful language model suitable for complex analysis tasks
The agent can be customized further with additional parameters
The max_loops=4
setting prevents infinite loops while allowing sufficient processing depth
Setting interactive=False
makes the agent run autonomously without user intervention
The agent will provide a detailed analysis of undervalued ETFs, including:
Market analysis
Performance metrics
Risk assessment
Investment recommendations
Note: Actual output will vary based on current market conditions and the model's training data.
"},{"location":"swarms/models/custom_model/","title":"How to Create A Custom Language Model","text":"When working with advanced language models, there might come a time when you need a custom solution tailored to your specific needs. Inheriting from an BaseLLM
in a Python framework allows developers to create custom language model classes with ease. This developer guide will take you through the process step by step.
Before you begin, ensure that you have:
BaseLLM
.BaseLLM
","text":"The BaseLLM
is an abstract base class that defines a set of methods and properties which your custom language model (LLM) should implement. Abstract classes in Python are not designed to be instantiated directly but are meant to be subclassed.
Start by defining a new class that inherits from BaseLLM
. This class will implement the required methods defined in the abstract base class.
from swarms import BaseLLM\n\nclass vLLMLM(BaseLLM):\n pass\n
"},{"location":"swarms/models/custom_model/#step-3-initialize-your-class","title":"Step 3: Initialize Your Class","text":"Implement the __init__
method to initialize your custom LLM. You'll want to initialize the base class as well and define any additional parameters for your model.
class vLLMLM(BaseLLM):\n def __init__(self, model_name='default_model', tensor_parallel_size=1, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.model_name = model_name\n self.tensor_parallel_size = tensor_parallel_size\n # Add any additional initialization here\n
"},{"location":"swarms/models/custom_model/#step-4-implement-required-methods","title":"Step 4: Implement Required Methods","text":"Implement the run
method or any other abstract methods required by BaseLLM
. This is where you define how your model processes input and returns output.
class vLLMLM(BaseLLM):\n # ... existing code ...\n\n def run(self, task, *args, **kwargs):\n # Logic for running your model goes here\n return \"Processed output\"\n
"},{"location":"swarms/models/custom_model/#step-5-test-your-model","title":"Step 5: Test Your Model","text":"Instantiate your custom LLM and test it to ensure that it works as expected.
model = vLLMLM(model_name='my_custom_model', tensor_parallel_size=2)\noutput = model.run(\"What are the symptoms of COVID-19?\")\nprint(output) # Outputs: \"Processed output\"\n
"},{"location":"swarms/models/custom_model/#step-6-integrate-additional-components","title":"Step 6: Integrate Additional Components","text":"Depending on the requirements, you might need to integrate additional components such as database connections, parallel computing resources, or custom processing pipelines.
"},{"location":"swarms/models/custom_model/#step-7-documentation","title":"Step 7: Documentation","text":"Write comprehensive docstrings for your class and its methods. Good documentation is crucial for maintaining the code and for other developers who might use your model.
class vLLMLM(BaseLLM):\n \"\"\"\n A custom language model class that extends BaseLLM.\n\n ... more detailed docstring ...\n \"\"\"\n # ... existing code ...\n
"},{"location":"swarms/models/custom_model/#step-8-best-practices","title":"Step 8: Best Practices","text":"Follow best practices such as error handling, input validation, and resource management to ensure your model is robust and reliable.
"},{"location":"swarms/models/custom_model/#step-9-packaging-your-model","title":"Step 9: Packaging Your Model","text":"Package your custom LLM class into a module or package that can be easily distributed and imported into other projects.
"},{"location":"swarms/models/custom_model/#step-10-version-control-and-collaboration","title":"Step 10: Version Control and Collaboration","text":"Use a version control system like Git to track changes to your model. This makes collaboration easier and helps you keep a history of your work.
"},{"location":"swarms/models/custom_model/#conclusion","title":"Conclusion","text":"By following this guide, you should now have a custom model that extends the BaseLLM
. Remember that the key to a successful custom LLM is understanding the base functionalities, implementing necessary changes, and testing thoroughly. Keep iterating and improving based on feedback and performance metrics.
This guide provides the fundamental steps to create custom models using BaseLLM
. For detailed implementation and advanced customization, it's essential to dive deeper into the specific functionalities and capabilities of the language model framework you are using.
Dalle3
Documentation","text":""},{"location":"swarms/models/dalle3/#table-of-contents","title":"Table of Contents","text":"The Dalle3 library is a Python module that provides an easy-to-use interface for generating images from text descriptions using the DALL\u00b7E 3 model by OpenAI. DALL\u00b7E 3 is a powerful language model capable of converting textual prompts into images. This documentation will guide you through the installation, setup, and usage of the Dalle3 library.
"},{"location":"swarms/models/dalle3/#installation","title":"Installation","text":"To use the Dalle3 model, you must first install swarms:
pip install swarms\n
"},{"location":"swarms/models/dalle3/#quick-start","title":"Quick Start","text":"Let's get started with a quick example of using the Dalle3 library to generate an image from a text prompt:
from swarm_models.dalle3 import Dalle3\n\n# Create an instance of the Dalle3 class\ndalle3 = Dalle3()\n\n# Define a text prompt\ntask = \"A painting of a dog\"\n\n# Generate an image from the text prompt\nimage_url = dalle3(task)\n\n# Print the generated image URL\nprint(image_url)\n
This example demonstrates the basic usage of the Dalle3 library to convert a text prompt into an image. The generated image URL will be printed to the console.
"},{"location":"swarms/models/dalle3/#dalle3-class","title":"Dalle3 Class","text":"The Dalle3 library provides a Dalle3
class that allows you to interact with the DALL\u00b7E 3 model. This class has several attributes and methods for generating images from text prompts.
model
(str): The name of the DALL\u00b7E 3 model. Default: \"dall-e-3\".img
(str): The image URL generated by the Dalle3 API.size
(str): The size of the generated image. Default: \"1024x1024\".max_retries
(int): The maximum number of API request retries. Default: 3.quality
(str): The quality of the generated image. Default: \"standard\".n
(int): The number of variations to create. Default: 4.__call__(self, task: str) -> Dalle3
","text":"This method makes a call to the Dalle3 API and returns the image URL generated from the provided text prompt.
Parameters: - task
(str): The text prompt to be converted to an image.
Returns: - Dalle3
: An instance of the Dalle3 class with the image URL generated by the Dalle3 API.
create_variations(self, img: str)
","text":"This method creates variations of an image using the Dalle3 API.
Parameters: - img
(str): The image to be used for the API request.
Returns: - img
(str): The image URL of the generated variations.
from swarm_models.dalle3 import Dalle3\n\n# Create an instance of the Dalle3 class\ndalle3 = Dalle3()\n\n# Define a text prompt\ntask = \"A painting of a dog\"\n\n# Generate an image from the text prompt\nimage_url = dalle3(task)\n\n# Print the generated image URL\nprint(image_url)\n
"},{"location":"swarms/models/dalle3/#example-2-creating-image-variations","title":"Example 2: Creating Image Variations","text":"from swarm_models.dalle3 import Dalle3\n\n# Create an instance of the Dalle3 class\ndalle3 = Dalle3()\n\n# Define the URL of an existing image\nimg_url = \"https://images.unsplash.com/photo-1694734479898-6ac4633158ac?q=80&w=1287&auto=format&fit=crop&ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D\"\n\n# Create variations of the image\nvariations_url = dalle3.create_variations(img_url)\n\n# Print the URLs of the generated variations\nprint(variations_url)\n
The following additional examples cover various edge cases and methods of the Dalle3 class in the Dalle3 library:
You can customize the size of the generated image by specifying the size
parameter when creating an instance of the Dalle3
class. Here's how to generate a smaller image:
from swarm_models.dalle3 import Dalle3\n\n# Create an instance of the Dalle3 class with a custom image size\ndalle3 = Dalle3(size=\"512x512\")\n\n# Define a text prompt\ntask = \"A small painting of a cat\"\n\n# Generate a smaller image from the text prompt\nimage_url = dalle3(task)\n\n# Print the generated image URL\nprint(image_url)\n
"},{"location":"swarms/models/dalle3/#example-4-adjusting-retry-limit","title":"Example 4: Adjusting Retry Limit","text":"You can adjust the maximum number of API request retries using the max_retries
parameter. Here's how to increase the retry limit:
from swarm_models.dalle3 import Dalle3\n\n# Create an instance of the Dalle3 class with a higher retry limit\ndalle3 = Dalle3(max_retries=5)\n\n# Define a text prompt\ntask = \"An image of a landscape\"\n\n# Generate an image with a higher retry limit\nimage_url = dalle3(task)\n\n# Print the generated image URL\nprint(image_url)\n
"},{"location":"swarms/models/dalle3/#example-5-generating-image-variations","title":"Example 5: Generating Image Variations","text":"To create variations of an existing image, you can use the create_variations
method. Here's an example:
from swarm_models.dalle3 import Dalle3\n\n# Create an instance of the Dalle3 class\ndalle3 = Dalle3()\n\n# Define the URL of an existing image\nimg_url = \"https://images.unsplash.com/photo-1677290043066-12eccd944004?q=80&w=1287&auto=format&fit=crop&ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D\"\n\n# Create variations of the image\nvariations_url = dalle3.create_variations(img_url)\n\n# Print the URLs of the generated variations\nprint(variations_url)\n
"},{"location":"swarms/models/dalle3/#example-6-handling-api-errors","title":"Example 6: Handling API Errors","text":"The Dalle3 library provides error handling for API-related issues. Here's how to handle and display API errors:
from swarm_models.dalle3 import Dalle3\n\n# Create an instance of the Dalle3 class\ndalle3 = Dalle3()\n\n# Define a text prompt\ntask = \"Invalid prompt that may cause an API error\"\n\ntry:\n # Attempt to generate an image with an invalid prompt\n image_url = dalle3(task)\n print(image_url)\nexcept Exception as e:\n print(f\"Error occurred: {str(e)}\")\n
"},{"location":"swarms/models/dalle3/#example-7-customizing-image-quality","title":"Example 7: Customizing Image Quality","text":"You can customize the quality of the generated image by specifying the quality
parameter. Here's how to generate a high-quality image:
from swarm_models.dalle3 import Dalle3\n\n# Create an instance of the Dalle3 class with high quality\ndalle3 = Dalle3(quality=\"hd\")  # \"hd\" is the DALL\u00b7E 3 API's high-quality setting\n\n# Define a text prompt\ntask = \"A high-quality image of a sunset\"\n\n# Generate a high-quality image from the text prompt\nimage_url = dalle3(task)\n\n# Print the generated image URL\nprint(image_url)\n
"},{"location":"swarms/models/dalle3/#error-handling","title":"Error Handling","text":"The Dalle3 library provides error handling for API-related issues. If an error occurs during API communication, the library will handle it and provide detailed error messages. Make sure to handle exceptions appropriately in your code.
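The retry behavior implied by max_retries can be sketched in plain Python. This is an illustrative helper, not the library's actual implementation; the function names and backoff schedule are assumptions for demonstration only.

```python
import time

def call_with_retries(request_fn, max_retries=3, base_delay=1.0):
    """Call request_fn, retrying up to max_retries times with exponential backoff."""
    for attempt in range(max_retries):
        try:
            return request_fn()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the last error
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...

# Example: a flaky "API" that fails twice, then succeeds
attempts = {"count": 0}

def flaky_request():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise RuntimeError("transient API error")
    return "https://example.com/generated.png"

print(call_with_retries(flaky_request, max_retries=3, base_delay=0))
```

A pattern like this is why raising max_retries (as in Example 4) makes transient API failures less likely to surface as exceptions.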
"},{"location":"swarms/models/dalle3/#advanced-usage","title":"Advanced Usage","text":"For advanced usage and customization of the Dalle3 library, you can explore the attributes and methods of the Dalle3
class. Adjusting parameters such as size
, max_retries
, and quality
allows you to fine-tune the image generation process to your specific needs.
For more information about the DALL\u00b7E 3 model and the Dalle3 library, you can refer to the official OpenAI documentation and resources.
This concludes the documentation for the Dalle3 library. You can now use the library to generate images from text prompts and explore its advanced features for various applications.
"},{"location":"swarms/models/distilled_whisperx/","title":"DistilWhisperModel Documentation","text":""},{"location":"swarms/models/distilled_whisperx/#overview","title":"Overview","text":"The DistilWhisperModel
is a Python class designed to handle English speech recognition tasks. It leverages the capabilities of the Whisper model, which is fine-tuned for speech-to-text processes. It is designed for both synchronous and asynchronous transcription of audio inputs, offering flexibility for real-time applications or batch processing.
Before you can use DistilWhisperModel
, ensure you have the required libraries installed:
pip3 install --upgrade swarms\n
"},{"location":"swarms/models/distilled_whisperx/#initialization","title":"Initialization","text":"The DistilWhisperModel
class is initialized with the following parameters:
model_id
str
The identifier for the pre-trained Whisper model. Default: \"distil-whisper/distil-large-v2\".
Example of initialization:
from swarm_models import DistilWhisperModel\n\n# Initialize with default model\nmodel_wrapper = DistilWhisperModel()\n\n# Initialize with a specific model ID\nmodel_wrapper = DistilWhisperModel(model_id=\"distil-whisper/distil-large-v2\")\n
"},{"location":"swarms/models/distilled_whisperx/#attributes","title":"Attributes","text":"After initialization, the DistilWhisperModel
has several attributes:
device
str
The device used for computation (\"cuda:0\"
for GPU or \"cpu\"
). torch_dtype
torch.dtype
The data type used for the Torch tensors. model_id
str
The model identifier string. model
torch.nn.Module
The actual Whisper model loaded from the identifier. processor
transformers.AutoProcessor
The processor for handling input data."},{"location":"swarms/models/distilled_whisperx/#methods","title":"Methods","text":""},{"location":"swarms/models/distilled_whisperx/#transcribe","title":"transcribe
","text":"Transcribes audio input synchronously.
Arguments:
Argument Type Descriptioninputs
Union[str, dict]
File path or audio data dictionary. Returns: str
- The transcribed text.
Usage Example:
# Synchronous transcription\ntranscription = model_wrapper.transcribe(\"path/to/audio.mp3\")\nprint(transcription)\n
"},{"location":"swarms/models/distilled_whisperx/#async_transcribe","title":"async_transcribe
","text":"Transcribes audio input asynchronously.
Arguments:
Argument Type Descriptioninputs
Union[str, dict]
File path or audio data dictionary. Returns: Coroutine
- A coroutine that when awaited, returns the transcribed text.
Usage Example:
import asyncio\n\n# Asynchronous transcription\ntranscription = asyncio.run(model_wrapper.async_transcribe(\"path/to/audio.mp3\"))\nprint(transcription)\n
"},{"location":"swarms/models/distilled_whisperx/#real_time_transcribe","title":"real_time_transcribe
","text":"Simulates real-time transcription of an audio file.
Arguments:
Argument Type Descriptionaudio_file_path
str
Path to the audio file. chunk_duration
int
Duration of audio chunks in seconds. Usage Example:
# Real-time transcription simulation\nmodel_wrapper.real_time_transcribe(\"path/to/audio.mp3\", chunk_duration=5)\n
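Internally, simulated real-time transcription boils down to slicing the audio into fixed-length chunks. The helper below is a hedged sketch of that chunking arithmetic only; the actual method operates on audio arrays, not second offsets.

```python
def chunk_boundaries(total_seconds, chunk_duration):
    """Return (start, end) second offsets covering the audio in fixed-size chunks."""
    chunks = []
    start = 0
    while start < total_seconds:
        end = min(start + chunk_duration, total_seconds)  # final chunk may be shorter
        chunks.append((start, end))
        start = end
    return chunks

# A 12-second file split into 5-second chunks: the last chunk is only 2 seconds
print(chunk_boundaries(12, 5))  # [(0, 5), (5, 10), (10, 12)]
```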
"},{"location":"swarms/models/distilled_whisperx/#error-handling","title":"Error Handling","text":"The DistilWhisperModel
class incorporates error handling for file not found errors and generic exceptions during the transcription process. If a non-recoverable exception is raised, it is printed to the console in red to indicate failure.
The DistilWhisperModel
offers a convenient interface to the powerful Whisper model for speech recognition. Its design supports both batch and real-time transcription, catering to different application needs. The class's error handling and retry logic make it robust for real-world applications.
Adjust chunk_duration according to the processing power of your system for real-time transcription. For a full list of models supported by transformers.AutoModelForSpeechSeq2Seq, visit the Hugging Face Model Hub.
Welcome to the documentation for Fuyu, a versatile model for generating text conditioned on both textual prompts and images. Fuyu is based on Adept's Fuyu model and offers a convenient way to create text that is influenced by the content of an image. In this documentation, you will find comprehensive information on the Fuyu class, its architecture, usage, and examples.
"},{"location":"swarms/models/fuyu/#overview","title":"Overview","text":"Fuyu is a text generation model that leverages both text and images to generate coherent and contextually relevant text. It combines state-of-the-art language modeling techniques with image processing capabilities to produce text that is semantically connected to the content of an image. Whether you need to create captions for images or generate text that describes visual content, Fuyu can assist you.
"},{"location":"swarms/models/fuyu/#class-definition","title":"Class Definition","text":"class Fuyu:\n def __init__(\n self,\n pretrained_path: str = \"adept/fuyu-8b\",\n device_map: str = \"cuda:0\",\n max_new_tokens: int = 7,\n ):\n
"},{"location":"swarms/models/fuyu/#purpose","title":"Purpose","text":"The Fuyu class serves as a convenient interface for using Adept's Fuyu model. It allows you to generate text based on a textual prompt and an image. The primary purpose of Fuyu is to provide a user-friendly way to create text that is influenced by visual content, making it suitable for various applications, including image captioning, storytelling, and creative text generation.
"},{"location":"swarms/models/fuyu/#parameters","title":"Parameters","text":"pretrained_path
(str): The path to the pretrained Fuyu model. By default, it uses the \"adept/fuyu-8b\" model.device_map
(str): The device to use for model inference (e.g., \"cuda:0\" for GPU or \"cpu\" for CPU). Default: \"cuda:0\".max_new_tokens
(int): The maximum number of tokens to generate in the output text. Default: 7. To use Fuyu, follow these steps:
from swarm_models.fuyu import Fuyu\n\nfuyu = Fuyu()\n
text = \"Hello, my name is\"\nimg_path = \"path/to/image.png\"\noutput_text = fuyu(text, img_path)\n
"},{"location":"swarms/models/fuyu/#example-2-text-generation","title":"Example 2 - Text Generation","text":"from swarm_models.fuyu import Fuyu\n\nfuyu = Fuyu()\n\ntext = \"Hello, my name is\"\n\nimg_path = \"path/to/image.png\"\n\noutput_text = fuyu(text, img_path)\nprint(output_text)\n
"},{"location":"swarms/models/fuyu/#how-fuyu-works","title":"How Fuyu Works","text":"Fuyu combines text and image processing to generate meaningful text outputs. Here's how it works:
Initialization: When you create a Fuyu instance, you specify the pretrained model path, the device for inference, and the maximum number of tokens to generate.
Processing Text and Images: Fuyu can process both textual prompts and images. You provide a text prompt and the path to an image as input.
Tokenization: Fuyu tokenizes the input text and encodes the image using its tokenizer.
Model Inference: The model takes the tokenized inputs and generates text that is conditioned on both the text and the image.
Output Text: Fuyu returns the generated text as the output.
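The five steps above can be sketched end to end with stubbed components. Everything below (the toy tokenizer, the fake model, the function names) is illustrative only; the real class delegates these steps to Adept's pretrained model and its processor.

```python
def tokenize(text):
    # Stand-in for the real tokenizer: whitespace-split tokens
    return text.split()

def encode_image(img_path):
    # Stand-in for real image encoding: just tag the path
    return f"<image:{img_path}>"

def fake_model(tokens, image_repr, max_new_tokens):
    # Stand-in for inference: emit up to max_new_tokens placeholder tokens
    generated = [f"tok{i}" for i in range(10)]
    return generated[:max_new_tokens]  # max_new_tokens caps the output length

def run_fuyu(text, img_path, max_new_tokens=7):
    tokens = tokenize(text)                                       # step 3: tokenization
    image_repr = encode_image(img_path)                           # step 2: image processing
    new_tokens = fake_model(tokens, image_repr, max_new_tokens)   # step 4: model inference
    return " ".join(new_tokens)                                   # step 5: output text

print(run_fuyu("Hello, my name is", "path/to/image.png", max_new_tokens=3))
```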
max_new_tokens
parameter allows you to control the length of the generated text.That concludes the documentation for Fuyu. We hope you find this model useful for your text generation tasks that involve images. If you have any questions or encounter any issues, please refer to the Fuyu documentation for further assistance. Enjoy working with Fuyu!
"},{"location":"swarms/models/gemini/","title":"Gemini","text":""},{"location":"swarms/models/gemini/#gemini-documentation","title":"Gemini
Documentation","text":""},{"location":"swarms/models/gemini/#introduction","title":"Introduction","text":"The Gemini module is a versatile tool for leveraging the power of multimodal AI models to generate content. It allows users to combine textual and image inputs to generate creative and informative outputs. In this documentation, we will explore the Gemini module in detail, covering its purpose, architecture, methods, and usage examples.
"},{"location":"swarms/models/gemini/#purpose","title":"Purpose","text":"The Gemini module is designed to bridge the gap between text and image data, enabling users to harness the capabilities of multimodal AI models effectively. By providing both a textual task and an image as input, Gemini generates content that aligns with the specified task and incorporates the visual information from the image.
"},{"location":"swarms/models/gemini/#installation","title":"Installation","text":"Before using Gemini, ensure that you have the required dependencies installed. You can install them using the following commands:
pip install swarms\npip install google-generativeai\npip install python-dotenv\n
"},{"location":"swarms/models/gemini/#class-gemini","title":"Class: Gemini","text":""},{"location":"swarms/models/gemini/#overview","title":"Overview","text":"The Gemini
class is the central component of the Gemini module. It inherits from the BaseMultiModalModel
class and provides methods to interact with the Gemini AI model. Let's dive into its architecture and functionality.
class Gemini(BaseMultiModalModel):\n def __init__(\n self,\n model_name: str = \"gemini-pro\",\n gemini_api_key: str = get_gemini_api_key_env,\n *args,\n **kwargs,\n ):\n
Parameter Type Description Default Value model_name
str The name of the Gemini model. \"gemini-pro\" gemini_api_key
str The Gemini API key. If not provided, it is fetched from the environment. (None) model_name
: Specifies the name of the Gemini model to use. By default, it is set to \"gemini-pro,\" but you can specify a different model if needed.
gemini_api_key
: This parameter allows you to provide your Gemini API key directly. If not provided, the constructor attempts to fetch it from the environment using the get_gemini_api_key_env
helper function.
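The environment fallback described above can be sketched as follows. fetch_gemini_api_key is a hypothetical helper written for illustration; the library's actual helper is get_gemini_api_key_env, whose implementation may differ.

```python
import os

def fetch_gemini_api_key(explicit_key=None, env_var="GEMINI_API_KEY"):
    """Prefer an explicitly passed key; otherwise fall back to the environment."""
    key = explicit_key or os.getenv(env_var)
    if not key:
        raise ValueError(f"Gemini API key missing: pass it directly or set {env_var}")
    return key

os.environ["GEMINI_API_KEY"] = "demo-key"   # set here for illustration only
print(fetch_gemini_api_key())               # falls back to the environment
print(fetch_gemini_api_key("direct-key"))   # an explicit key wins
```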
def run(\n self,\n task: str = None,\n img: str = None,\n *args,\n **kwargs,\n) -> str:\n
Parameter Type Description task
str The textual task for content generation. img
str The path to the image to be processed. *args
Variable Additional positional arguments. **kwargs
Variable Additional keyword arguments. task
: Specifies the textual task for content generation. It can be a sentence or a phrase that describes the desired content.
img
: Provides the path to the image that will be processed along with the textual task. Gemini combines the visual information from the image with the textual task to generate content.
*args
and **kwargs
: Allow for additional, flexible arguments that can be passed to the underlying Gemini model. These arguments can vary based on the specific Gemini model being used.
Returns: A string containing the generated content.
Examples:
from swarm_models import Gemini\n\n# Initialize the Gemini model\ngemini = Gemini()\n\n# Generate content for a textual task with an image\ngenerated_content = gemini.run(\n task=\"Describe this image\",\n img=\"image.jpg\",\n)\n\n# Print the generated content\nprint(generated_content)\n
In this example, we initialize the Gemini model, provide a textual task, and specify an image for processing. The run()
method generates content based on the input and returns the result.
def process_img(\n self,\n img: str = None,\n type: str = \"image/png\",\n *args,\n **kwargs,\n):\n
Parameter Type Description Default Value img
str The path to the image to be processed. (None) type
str The MIME type of the image (e.g., \"image/png\"). \"image/png\" *args
Variable Additional positional arguments. **kwargs
Variable Additional keyword arguments. img
: Specifies the path to the image that will be processed. It's essential to provide a valid image path for image-based content generation.
type
: Indicates the MIME type of the image. By default, it is set to \"image/png,\" but you can change it based on the image format you're using.
*args
and **kwargs
: Allow for additional, flexible arguments that can be passed to the underlying Gemini model. These arguments can vary based on the specific Gemini model being used.
Raises: ValueError if any of the following conditions are met: - No image is provided. - The image type is not specified. - The Gemini API key is missing.
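The three ValueError conditions can be sketched as a small validation helper. This mirrors only the documented checks; the function name is illustrative, and type shadows the built-in here only to match the documented parameter name.

```python
def validate_process_img(img=None, type=None, gemini_api_key=None):
    """Raise ValueError for each of the documented failure conditions."""
    if img is None:
        raise ValueError("No image provided")
    if type is None:
        raise ValueError("Image type not specified")
    if gemini_api_key is None:
        raise ValueError("Gemini API key is missing")
    return True

print(validate_process_img(img="image.jpg", type="image/png", gemini_api_key="key"))
```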
Examples:
from swarm_models.gemini import Gemini\n\n# Initialize the Gemini model\ngemini = Gemini()\n\n# Process an image\nprocessed_image = gemini.process_img(\n img=\"image.jpg\",\n type=\"image/jpeg\",\n)\n\n# Further use the processed image in content generation\ngenerated_content = gemini.run(\n task=\"Describe this image\",\n img=processed_image,\n)\n\n# Print the generated content\nprint(generated_content)\n
In this example, we demonstrate how to process an image using the process_img()
method and then use the processed image in content generation.
Gemini is designed to work seamlessly with various multimodal AI models, making it a powerful tool for content generation tasks.
The module uses the google.generativeai
package to access the underlying AI models. Ensure that you have this package installed to leverage the full capabilities of Gemini.
It's essential to provide a valid Gemini API key for authentication. You can either pass it directly during initialization or store it in the environment variable \"GEMINI_API_KEY.\"
Gemini's flexibility allows you to experiment with different Gemini models and tailor the content generation process to your specific needs.
Keep in mind that Gemini is designed to handle both textual and image inputs, making it a valuable asset for various applications, including natural language processing and computer vision tasks.
If you encounter any issues or have specific requirements, refer to the Gemini documentation for more details and advanced usage.
Gemini GitHub Repository: Explore the Gemini repository for additional information, updates, and examples.
Google GenerativeAI Documentation: Dive deeper into the capabilities of the Google GenerativeAI package used by Gemini.
Gemini API Documentation: Access the official documentation for the Gemini API to explore advanced features and integrations.
In this comprehensive documentation, we've explored the Gemini module, its purpose, architecture, methods, and usage examples. Gemini empowers developers to generate content by combining textual tasks and images, making it a valuable asset for multimodal AI applications. Whether you're working on natural language processing or computer vision projects, Gemini can help you achieve impressive results.
"},{"location":"swarms/models/gpt4v/","title":"GPT4VisionAPI
Documentation","text":"Table of Contents - Introduction - Installation - Module Overview - Class: GPT4VisionAPI - Initialization - Methods - encode_image - run - call - Examples - Example 1: Basic Usage - Example 2: Custom API Key - Example 3: Adjusting Maximum Tokens - Additional Information - References
"},{"location":"swarms/models/gpt4v/#introduction","title":"Introduction","text":"Welcome to the documentation for the GPT4VisionAPI
module! This module is a powerful wrapper for the OpenAI GPT-4 Vision model. It allows you to interact with the model to generate descriptions or answers related to images. This documentation will provide you with comprehensive information on how to use this module effectively.
Before you start using the GPT4VisionAPI
module, make sure you have the required dependencies installed. You can install them using the following commands:
pip3 install --upgrade swarms\n
"},{"location":"swarms/models/gpt4v/#module-overview","title":"Module Overview","text":"The GPT4VisionAPI
module serves as a bridge between your application and the OpenAI GPT-4 Vision model. It allows you to send requests to the model and retrieve responses related to images. Here are some key features and functionality provided by this module:
The GPT4VisionAPI
class is the core component of this module. It encapsulates the functionality required to interact with the GPT-4 Vision model. Below, we'll dive into the class in detail.
When initializing the GPT4VisionAPI
class, you have the option to provide the OpenAI API key and set the maximum token limit. Here are the parameters and their descriptions:
OPENAI_API_KEY
environment variable (if available) The OpenAI API key. If not provided, it defaults to the OPENAI_API_KEY
environment variable. max_tokens int 300 The maximum number of tokens to generate in the model's response. Here's how you can initialize the GPT4VisionAPI
class:
from swarm_models import GPT4VisionAPI\n\n# Initialize with default API key and max_tokens\napi = GPT4VisionAPI()\n\n# Initialize with custom API key and max_tokens\ncustom_api_key = \"your_custom_api_key\"\napi = GPT4VisionAPI(openai_api_key=custom_api_key, max_tokens=500)\n
"},{"location":"swarms/models/gpt4v/#methods","title":"Methods","text":""},{"location":"swarms/models/gpt4v/#encode_image","title":"encode_image","text":"This method allows you to encode an image from a URL to base64 format. It's a utility function used internally by the module.
def encode_image(img: str) -> str:\n \"\"\"\n Encode image to base64.\n\n Parameters:\n - img (str): URL of the image to encode.\n\n Returns:\n str: Base64 encoded image.\n \"\"\"\n
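A minimal sketch of the base64 step using only the standard library. The real method also downloads the image from its URL first; that fetch is omitted here, so raw bytes stand in for the fetched image, and the helper name is an assumption.

```python
import base64

def encode_image_bytes(raw: bytes) -> str:
    """Base64-encode raw image bytes into the ASCII string the API expects."""
    return base64.b64encode(raw).decode("utf-8")

# Stand-in bytes; in the real method these would come from fetching the image URL
print(encode_image_bytes(b"hello"))  # aGVsbG8=
```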
"},{"location":"swarms/models/gpt4v/#run","title":"run","text":"The run
method is the primary way to interact with the GPT-4 Vision model. It sends a request to the model with a task and an image URL, and it returns the model's response.
def run(task: str, img: str) -> str:\n \"\"\"\n Run the GPT-4 Vision model.\n\n Parameters:\n - task (str): The task or question related to the image.\n - img (str): URL of the image to analyze.\n\n Returns:\n str: The model's response.\n \"\"\"\n
"},{"location":"swarms/models/gpt4v/#call","title":"call","text":"The __call__
method is a convenient way to run the GPT-4 Vision model. It has the same functionality as the run
method.
def __call__(task: str, img: str) -> str:\n \"\"\"\n Run the GPT-4 Vision model (callable).\n\n Parameters:\n - task (str): The task or question related to the image.\n - img\n\n (str): URL of the image to analyze.\n\n Returns:\n str: The model's response.\n \"\"\"\n
"},{"location":"swarms/models/gpt4v/#examples","title":"Examples","text":"Let's explore some usage examples of the GPT4VisionAPI
module to better understand how to use it effectively.
In this example, we'll use the module with the default API key and maximum tokens to analyze an image.
from swarm_models import GPT4VisionAPI\n\n# Initialize with default API key and max_tokens\napi = GPT4VisionAPI()\n\n# Define the task and image URL\ntask = \"What is the color of the object?\"\nimg = \"https://i.imgur.com/2M2ZGwC.jpeg\"\n\n# Run the GPT-4 Vision model\nresponse = api.run(task, img)\n\n# Print the model's response\nprint(response)\n
"},{"location":"swarms/models/gpt4v/#example-2-custom-api-key","title":"Example 2: Custom API Key","text":"If you have a custom API key, you can initialize the module with it as shown in this example.
from swarm_models import GPT4VisionAPI\n\n# Initialize with custom API key and max_tokens\ncustom_api_key = \"your_custom_api_key\"\napi = GPT4VisionAPI(openai_api_key=custom_api_key, max_tokens=500)\n\n# Define the task and image URL\ntask = \"What is the object in the image?\"\nimg = \"https://i.imgur.com/3T3ZHwD.jpeg\"\n\n# Run the GPT-4 Vision model\nresponse = api.run(task, img)\n\n# Print the model's response\nprint(response)\n
"},{"location":"swarms/models/gpt4v/#example-3-adjusting-maximum-tokens","title":"Example 3: Adjusting Maximum Tokens","text":"You can also customize the maximum token limit when initializing the module. In this example, we set it to 1000 tokens.
from swarm_models import GPT4VisionAPI\n\n# Initialize with default API key and custom max_tokens\napi = GPT4VisionAPI(max_tokens=1000)\n\n# Define the task and image URL\ntask = \"Describe the scene in the image.\"\nimg = \"https://i.imgur.com/4P4ZRxU.jpeg\"\n\n# Run the GPT-4 Vision model\nresponse = api.run(task, img)\n\n# Print the model's response\nprint(response)\n
"},{"location":"swarms/models/gpt4v/#additional-information","title":"Additional Information","text":"This documentation provides a comprehensive guide on how to use the GPT4VisionAPI
module effectively. It covers initialization, methods, usage examples, and additional information to ensure a smooth experience when working with the GPT-4 Vision model.
This documentation provides instructions on how to obtain your Groq API key and set it up in a .env
file for use in your project.
Visit the Groq website and sign up for an account if you don't have one. If you already have an account, log in.
Access API Keys:
Once logged in, navigate to the API section of your account dashboard. This is usually found under \"Settings\" or \"API Management\".
Generate API Key: Create a new API key in the dashboard and copy it for use in the next step.
.env
File","text":"In the root directory of your project, create a new file named .env.
Add Your API Key:
Open the .env file in a text editor and add the following line, replacing your_groq_api_key_here with the API key you copied earlier:
GROQ_API_KEY=your_groq_api_key_here\n
Save the .env file.
import os\nfrom swarm_models import OpenAIChat\nfrom dotenv import load_dotenv\n\nload_dotenv()\n\n# Get the Groq API key from the environment variable\napi_key = os.getenv(\"GROQ_API_KEY\")\n\n# Model\nmodel = OpenAIChat(\n    openai_api_base=\"https://api.groq.com/openai/v1\",\n    openai_api_key=api_key,\n    model_name=\"llama-3.1-70b-versatile\",\n    temperature=0.1,\n)\n\nmodel.run(\"What are the best metrics to track and understand risk in private equity\")\n
"},{"location":"swarms/models/groq/#important-notes","title":"Important Notes","text":"Add the .env file to your .gitignore file so the key is excluded from version control. Use a library such as python-dotenv to load environment variables from the .env file if your project requires it.
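Under the hood, loading a .env file amounts to parsing KEY=value lines into the process environment. The sketch below is a simplified stand-in for what python-dotenv does (no quoting, export keywords, or variable interpolation):

```python
import os

def load_env_text(text):
    """Parse KEY=value lines (ignoring blanks and # comments) into os.environ."""
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        os.environ[key.strip()] = value.strip()

load_env_text("# my project secrets\nGROQ_API_KEY=your_groq_api_key_here\n")
print(os.getenv("GROQ_API_KEY"))  # your_groq_api_key_here
```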
"},{"location":"swarms/models/hf/","title":"HuggingFaceLLM","text":""},{"location":"swarms/models/hf/#overview-introduction","title":"Overview & Introduction","text":"The HuggingFaceLLM
class in the swarms library provides a simple and easy-to-use interface to harness the power of Hugging Face's transformer-based language models, specifically for causal language modeling. This enables developers to generate coherent and contextually relevant sentences or paragraphs given a prompt, without delving deep into the intricate details of the underlying model or the tokenization process.
Causal Language Modeling (CLM) is a task where given a series of tokens (or words), the model predicts the next token in the sequence. This functionality is central to many natural language processing tasks, including chatbots, story generation, and code autocompletion.
"},{"location":"swarms/models/hf/#class-definition","title":"Class Definition","text":"class HuggingFaceLLM:\n
"},{"location":"swarms/models/hf/#parameters","title":"Parameters:","text":"model_id (str)
: Identifier for the pre-trained model on the Hugging Face model hub. Examples include \"gpt2-medium\", \"openai-gpt\", etc.
device (str, optional)
: The device on which to load and run the model. Defaults to 'cuda' if GPU is available, else 'cpu'.
max_length (int, optional)
: Maximum length of the generated sequence. Defaults to 20.
quantization_config (dict, optional)
: Configuration dictionary for model quantization (if applicable). Default is None
.
llm = HuggingFaceLLM(model_id=\"gpt2-medium\")\n
Upon initialization, the specified pre-trained model and tokenizer are loaded from Hugging Face's model hub. The model is then moved to the designated device. If there's an issue loading either the model or the tokenizer, an error will be logged.
"},{"location":"swarms/models/hf/#generation","title":"Generation:","text":"The main functionality of this class is text generation. The class provides two methods for this: __call__
and generate
. Both methods take in a prompt text and an optional max_length
parameter and return the generated text.
Usage:
from swarms import HuggingFaceLLM\n\n# Initialize\nllm = HuggingFaceLLM(model_id=\"gpt2-medium\")\n\n# Generate text using __call__ method\nresult = llm(\"Once upon a time,\")\nprint(result)\n\n# Alternatively, using the generate method\nresult = llm.generate(\"The future of AI is\")\nprint(result)\n
"},{"location":"swarms/models/hf/#mathematical-explanation","title":"Mathematical Explanation:","text":"Given a sequence of tokens \\( x_1, x_2, ..., x_n \\), a causal language model aims to maximize the likelihood of the next token \\( x_{n+1} \\) in the sequence. Formally, it tries to optimize:
\\[ P(x_{n+1} | x_1, x_2, ..., x_n) \\]Where \\( P \\) is the probability distribution over all possible tokens in the vocabulary.
The model takes the tokenized input sequence, feeds it through several transformer blocks, and finally through a linear layer to produce logits for each token in the vocabulary. The token with the highest logit value is typically chosen as the next token in the sequence.
"},{"location":"swarms/models/hf/#additional-information-tips","title":"Additional Information & Tips:","text":"Ensure you have an active internet connection when initializing the class for the first time, as the models and tokenizers are fetched from Hugging Face's servers.
Although the default max_length
is set to 20, it's advisable to adjust this parameter based on the context of the problem.
Keep an eye on GPU memory when using large models or generating long sequences.
Hugging Face Model Hub: https://huggingface.co/models
Introduction to Transformers: https://huggingface.co/transformers/introduction.html
Causal Language Modeling: Vaswani, A., et al. (2017). Attention is All You Need. arXiv:1706.03762
This documentation provides a comprehensive overview of the HuggingFaceLLM class.
HuggingfaceLLM
Documentation","text":""},{"location":"swarms/models/huggingface/#introduction","title":"Introduction","text":"The HuggingfaceLLM
class is designed for running inference using models from the Hugging Face Transformers library. This documentation provides an in-depth understanding of the class, its purpose, attributes, methods, and usage examples.
The HuggingfaceLLM
class serves two primary purposes: it loads a pre-trained model and tokenizer from the Hugging Face Hub, and it runs text-generation inference with optional quantization and distributed processing.
The HuggingfaceLLM
class is defined as follows:
class HuggingfaceLLM:\n def __init__(\n self,\n model_id: str,\n device: str = None,\n max_length: int = 20,\n quantize: bool = False,\n quantization_config: dict = None,\n verbose=False,\n distributed=False,\n decoding=False,\n ):\n # Attributes and initialization logic explained below\n pass\n\n def load_model(self):\n # Method to load the pre-trained model and tokenizer\n pass\n\n def run(self, prompt_text: str, max_length: int = None):\n # Method to generate text-based responses\n pass\n\n def __call__(self, prompt_text: str, max_length: int = None):\n # Alternate method for generating text-based responses\n pass\n
"},{"location":"swarms/models/huggingface/#attributes","title":"Attributes","text":"Attribute Description model_id
The ID of the pre-trained model to be used. device
The device on which the model runs ('cuda'
for GPU or 'cpu'
for CPU). max_length
The maximum length of the generated text. quantize
A boolean indicating whether quantization should be used. quantization_config
A dictionary with configuration options for quantization. verbose
A boolean indicating whether verbose logs should be printed. logger
An optional logger for logging messages (defaults to a basic logger). distributed
A boolean indicating whether distributed processing should be used. decoding
A boolean indicating whether to perform decoding during text generation."},{"location":"swarms/models/huggingface/#class-methods","title":"Class Methods","text":""},{"location":"swarms/models/huggingface/#__init__-method","title":"__init__
Method","text":"The __init__
method initializes an instance of the HuggingfaceLLM
class with the specified parameters. It also loads the pre-trained model and tokenizer.
model_id
(str): The ID of the pre-trained model to use.device
(str, optional): The device to run the model on ('cuda' or 'cpu').max_length
(int, optional): The maximum length of the generated text.quantize
(bool, optional): Whether to use quantization.quantization_config
(dict, optional): Configuration for quantization.verbose
(bool, optional): Whether to print verbose logs.logger
(logging.Logger, optional): The logger to use.distributed
(bool, optional): Whether to use distributed processing.decoding
(bool, optional): Whether to perform decoding during text generation.load_model
Method","text":"The load_model
method loads the pre-trained model and tokenizer specified by model_id
.
run
and __call__
Methods","text":"Both run
and __call__
methods generate text-based responses based on a given prompt. They accept the following parameters:
prompt_text
(str): The text prompt to initiate text generation.max_length
(int, optional): The maximum length of the generated text.Here are three ways to use the HuggingfaceLLM
class:
from swarm_models import HuggingfaceLLM\n\n# Initialize the HuggingfaceLLM instance with a model ID\nmodel_id = \"NousResearch/Nous-Hermes-2-Vision-Alpha\"\ninference = HuggingfaceLLM(model_id=model_id)\n\n# Generate text based on a prompt\nprompt_text = \"Once upon a time\"\ngenerated_text = inference(prompt_text)\nprint(generated_text)\n
"},{"location":"swarms/models/huggingface/#example-2-custom-configuration","title":"Example 2: Custom Configuration","text":"from swarm_models import HuggingfaceLLM\n\n# Initialize with custom configuration\ncustom_config = {\n \"quantize\": True,\n \"quantization_config\": {\"load_in_4bit\": True},\n \"verbose\": True,\n}\ninference = HuggingfaceLLM(\n model_id=\"NousResearch/Nous-Hermes-2-Vision-Alpha\", **custom_config\n)\n\n# Generate text based on a prompt\nprompt_text = \"Tell me a joke\"\ngenerated_text = inference(prompt_text)\nprint(generated_text)\n
"},{"location":"swarms/models/huggingface/#example-3-distributed-processing","title":"Example 3: Distributed Processing","text":"from swarm_models import HuggingfaceLLM\n\n# Initialize for distributed processing\ninference = HuggingfaceLLM(model_id=\"gpt2-medium\", distributed=True)\n\n# Generate text based on a prompt\nprompt_text = \"Translate the following sentence to French\"\ngenerated_text = inference(prompt_text)\nprint(generated_text)\n
"},{"location":"swarms/models/huggingface/#additional-information","title":"Additional Information","text":"HuggingfaceLLM
class provides the flexibility to load and use pre-trained models from the Hugging Face Transformers library.This documentation provides a comprehensive understanding of the HuggingfaceLLM
class, its attributes, methods, and usage examples. Developers can use this class to perform text generation tasks efficiently using pre-trained models from the Hugging Face Transformers library.
Idefics
Documentation","text":""},{"location":"swarms/models/idefics/#introduction","title":"Introduction","text":"Welcome to the documentation for Idefics, a versatile multimodal inference tool using pre-trained models from the Hugging Face Hub. Idefics is designed to facilitate the generation of text from various prompts, including text and images. This documentation provides a comprehensive understanding of Idefics, its architecture, usage, and how it can be integrated into your projects.
"},{"location":"swarms/models/idefics/#overview","title":"Overview","text":"Idefics leverages the power of pre-trained models to generate textual responses based on a wide range of prompts. It is capable of handling both text and images, making it suitable for various multimodal tasks, including text generation from images.
"},{"location":"swarms/models/idefics/#class-definition","title":"Class Definition","text":"class Idefics:\n def __init__(\n self,\n checkpoint=\"HuggingFaceM4/idefics-9b-instruct\",\n device=None,\n torch_dtype=torch.bfloat16,\n max_length=100,\n ):\n
"},{"location":"swarms/models/idefics/#usage","title":"Usage","text":"To use Idefics, follow these steps:
from swarm_models import Idefics\n\nmodel = Idefics()\n
prompts = [\n \"User: What is in this image? https://upload.wikimedia.org/wikipedia/commons/8/86/Id%C3%A9fix.JPG\"\n]\nresponse = model(prompts)\nprint(response)\n
"},{"location":"swarms/models/idefics/#example-1-image-questioning","title":"Example 1 - Image Questioning","text":"from swarm_models import Idefics\n\nmodel = Idefics()\nprompts = [\n \"User: What is in this image? https://upload.wikimedia.org/wikipedia/commons/8/86/Id%C3%A9fix.JPG\"\n]\nresponse = model(prompts)\nprint(response)\n
"},{"location":"swarms/models/idefics/#example-2-bidirectional-conversation","title":"Example 2 - Bidirectional Conversation","text":"from swarm_models import Idefics\n\nmodel = Idefics()\nuser_input = \"User: What is in this image? https://upload.wikimedia.org/wikipedia/commons/8/86/Id%C3%A9fix.JPG\"\nresponse = model.chat(user_input)\nprint(response)\n\nuser_input = \"User: Who is that? https://static.wikia.nocookie.net/asterix/images/2/25/R22b.gif/revision/latest?cb=20110815073052\"\nresponse = model.chat(user_input)\nprint(response)\n
"},{"location":"swarms/models/idefics/#example-3-configuration-changes","title":"Example 3 - Configuration Changes","text":"model.set_checkpoint(\"new_checkpoint\")\nmodel.set_device(\"cpu\")\nmodel.set_max_length(200)\nmodel.clear_chat_history()\n
"},{"location":"swarms/models/idefics/#how-idefics-works","title":"How Idefics Works","text":"Idefics operates by leveraging pre-trained models from the Hugging Face Hub. Here's how it works:
Initialization: When you create an Idefics instance, it initializes the model using a specified checkpoint, sets the device for inference, and configures other parameters like data type and maximum text length.
Prompt-Based Inference: You can use the infer
method to generate text based on prompts. It processes prompts in batched or non-batched mode, depending on your preference. It uses a pre-trained processor to handle text and images.
Bidirectional Conversation: The chat
method enables bidirectional conversations. You provide user input, and the model responds accordingly. The chat history is maintained for context.
Configuration Changes: You can change the model checkpoint, device, maximum text length, or clear the chat history as needed during runtime.
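The history-keeping behind bidirectional conversation can be sketched in a few lines; the names below are illustrative, not the actual Idefics internals:

```python
# Hypothetical sketch of chat-history handling: each turn's input and
# response are appended so later prompts carry the earlier context.
chat_history = []

def chat(user_input, generate=lambda prompt: "Assistant: (model reply)"):
    # Prepend the accumulated history to the new user input.
    prompt = "\n".join(chat_history + [user_input])
    response = generate(prompt)
    chat_history.append(user_input)
    chat_history.append(response)
    return response

first = chat("User: What is in this image?")
second = chat("User: Who is that?")  # this prompt now includes the first turn
```

Clearing the history (as `clear_chat_history` does) simply resets this list, so the next prompt starts fresh.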
checkpoint
: The name of the pre-trained model checkpoint (default is \"HuggingFaceM4/idefics-9b-instruct\").device
: The device to use for inference. By default, it uses CUDA if available; otherwise, it uses CPU.torch_dtype
: The data type to use for inference. By default, it uses torch.bfloat16.max_length
: The maximum length of the generated text (default is 100).That concludes the documentation for Idefics. We hope you find this tool valuable for your multimodal text generation tasks. If you have any questions or encounter any issues, please refer to the Hugging Face Transformers documentation for further assistance. Enjoy working with Idefics!
"},{"location":"swarms/models/kosmos/","title":"Kosmos
Documentation","text":""},{"location":"swarms/models/kosmos/#introduction","title":"Introduction","text":"Welcome to the documentation for Kosmos, a powerful multimodal AI model that can perform various tasks, including multimodal grounding, referring expression comprehension, referring expression generation, grounded visual question answering (VQA), and grounded image captioning. Kosmos is based on the ydshieh/kosmos-2-patch14-224 model and is designed to process both text and images to provide meaningful outputs. In this documentation, you will find a detailed explanation of the Kosmos class, its functions, parameters, and usage examples.
"},{"location":"swarms/models/kosmos/#overview","title":"Overview","text":"Kosmos is a state-of-the-art multimodal AI model that combines the power of natural language understanding with image analysis. It can perform several tasks that involve processing both textual prompts and images to provide informative responses. Whether you need to find objects in an image, understand referring expressions, generate descriptions, answer questions, or create captions, Kosmos has you covered.
"},{"location":"swarms/models/kosmos/#class-definition","title":"Class Definition","text":"class Kosmos:\n def __init__(self, model_name=\"ydshieh/kosmos-2-patch14-224\"):\n
"},{"location":"swarms/models/kosmos/#usage","title":"Usage","text":"To use Kosmos, follow these steps:
from swarm_models.kosmos_two import Kosmos\n\nkosmos = Kosmos()\n
kosmos.multimodal_grounding(\n \"Find the red apple in the image.\", \"https://example.com/apple.jpg\"\n)\n
"},{"location":"swarms/models/kosmos/#example-1-multimodal-grounding","title":"Example 1 - Multimodal Grounding","text":"from swarm_models.kosmos_two import Kosmos\n\nkosmos = Kosmos()\n\nkosmos.multimodal_grounding(\n \"Find the red apple in the image.\", \"https://example.com/apple.jpg\"\n)\n
kosmos.referring_expression_comprehension(\n \"Show me the green bottle.\", \"https://example.com/bottle.jpg\"\n)\n
"},{"location":"swarms/models/kosmos/#example-2-referring-expression-comprehension","title":"Example 2 - Referring Expression Comprehension","text":"from swarm_models.kosmos_two import Kosmos\n\nkosmos = Kosmos()\n\nkosmos.referring_expression_comprehension(\n \"Show me the green bottle.\", \"https://example.com/bottle.jpg\"\n)\n
kosmos.referring_expression_generation(\n \"It is on the table.\", \"https://example.com/table.jpg\"\n)\n
"},{"location":"swarms/models/kosmos/#example-3-referring-expression-generation","title":"Example 3 - Referring Expression Generation","text":"from swarm_models.kosmos_two import Kosmos\n\nkosmos = Kosmos()\n\nkosmos.referring_expression_generation(\n \"It is on the table.\", \"https://example.com/table.jpg\"\n)\n
kosmos.grounded_vqa(\"What is the color of the car?\", \"https://example.com/car.jpg\")\n
"},{"location":"swarms/models/kosmos/#example-4-grounded-visual-question-answering","title":"Example 4 - Grounded Visual Question Answering","text":"from swarm_models.kosmos_two import Kosmos\n\nkosmos = Kosmos()\n\nkosmos.grounded_vqa(\"What is the color of the car?\", \"https://example.com/car.jpg\")\n
kosmos.grounded_image_captioning(\"https://example.com/beach.jpg\")\n
"},{"location":"swarms/models/kosmos/#example-5-grounded-image-captioning","title":"Example 5 - Grounded Image Captioning","text":"from swarm_models.kosmos_two import Kosmos\n\nkosmos = Kosmos()\n\nkosmos.grounded_image_captioning(\"https://example.com/beach.jpg\")\n
kosmos.grounded_image_captioning_detailed(\"https://example.com/beach.jpg\")\n
"},{"location":"swarms/models/kosmos/#example-6-detailed-grounded-image-captioning","title":"Example 6 - Detailed Grounded Image Captioning","text":"from swarm_models.kosmos_two import Kosmos\n\nkosmos = Kosmos()\n\nkosmos.grounded_image_captioning_detailed(\"https://example.com/beach.jpg\")\n
image = kosmos.get_image(\"https://example.com/image.jpg\")\nentities = [\n (\"apple\", (0, 3), [(0.2, 0.3, 0.4, 0.5)]),\n (\"banana\", (4, 9), [(0.6, 0.2, 0.8, 0.4)]),\n]\nkosmos.draw_entity_boxes_on_image(image, entities, show=True)\n
"},{"location":"swarms/models/kosmos/#example-7-drawing-entity-boxes-on-image","title":"Example 7 - Drawing Entity Boxes on Image","text":"from swarm_models.kosmos_two import Kosmos\n\nkosmos = Kosmos()\n\nimage = kosmos.get_image(\"https://example.com/image.jpg\")\nentities = [\n (\"apple\", (0, 3), [(0.2, 0.3, 0.4, 0.5)]),\n (\"banana\", (4, 9), [(0.6, 0.2, 0.8, 0.4)]),\n]\nkosmos.draw_entity_boxes_on_image(image, entities, show=True)\n
entities = [\n (\"apple\", (0, 3), [(0.2, 0.3, 0.4, 0.5)]),\n (\"banana\", (4, 9), [(0.6, 0.2, 0.8, 0.4)]),\n]\nimage = kosmos.generate_boxes(\n \"Find the apple and the banana in the image.\", \"https://example.com/image.jpg\"\n)\n
"},{"location":"swarms/models/kosmos/#example-8-generating-boxes-for-entities","title":"Example 8 - Generating Boxes for Entities","text":"from swarm_models.kosmos_two import Kosmos\n\nkosmos = Kosmos()\nentities = [\n (\"apple\", (0, 3), [(0.2, 0.3, 0.4, 0.5)]),\n (\"banana\", (4, 9), [(0.6, 0.2, 0.8, 0.4)]),\n]\nimage = kosmos.generate_boxes(\n \"Find the apple and the banana in the image.\", \"https://example.com/image.jpg\"\n)\n
"},{"location":"swarms/models/kosmos/#how-kosmos-works","title":"How Kosmos Works","text":"Kosmos is a multimodal AI model that combines text and image processing. It uses the ydshieh/kosmos-2-patch14-224 model for understanding and generating responses. Here's how it works:
Initialization: When you create a Kosmos instance, it loads the ydshieh/kosmos-2-patch14-224 model for multimodal tasks.
Processing Text and Images: Kosmos can process both text prompts and images. It takes a textual prompt and an image URL as input.
Task Execution: Based on the task you specify, Kosmos generates informative responses by combining natural language understanding with image analysis.
Drawing Entity Boxes: You can use the draw_entity_boxes_on_image
method to draw bounding boxes around entities in an image.
Generating Boxes for Entities: The generate_boxes
method allows you to generate bounding boxes for entities mentioned in a prompt.
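The entity tuples shown in the examples carry normalized (x1, y1, x2, y2) boxes; a small helper (a sketch for illustration, not part of the Kosmos API) converts them to pixel coordinates for drawing:

```python
def to_pixels(box, width, height):
    """Convert a normalized (x1, y1, x2, y2) box to integer pixel coordinates."""
    x1, y1, x2, y2 = box
    return (int(x1 * width), int(y1 * height), int(x2 * width), int(y2 * height))

# The "apple" box from the examples, on a 640x480 image:
print(to_pixels((0.2, 0.3, 0.4, 0.5), 640, 480))  # -> (128, 144, 256, 240)
```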
model_name
: The name or path of the Kosmos model to be used. By default, it uses the ydshieh/kosmos-2-patch14-224 model.draw_entity_boxes_on_image
method is useful for visualizing the results of multimodal grounding tasks.generate_boxes
method is handy for generating bounding boxes around entities mentioned in a textual prompt.That concludes the documentation for Kosmos. We hope you find this multimodal AI model valuable for your projects. If you have any questions or encounter any issues, please refer to the Kosmos documentation for further assistance. Enjoy working with Kosmos!
"},{"location":"swarms/models/layoutlm_document_qa/","title":"LayoutLMDocumentQA
Documentation","text":""},{"location":"swarms/models/layoutlm_document_qa/#introduction","title":"Introduction","text":"Welcome to the documentation for LayoutLMDocumentQA, a multimodal model designed for visual question answering (QA) on real-world documents, such as invoices, PDFs, and more. This comprehensive documentation will provide you with a deep understanding of the LayoutLMDocumentQA class, its architecture, usage, and examples.
"},{"location":"swarms/models/layoutlm_document_qa/#overview","title":"Overview","text":"LayoutLMDocumentQA is a versatile model that combines layout-based understanding of documents with natural language processing to answer questions about the content of documents. It is particularly useful for automating tasks like invoice processing, extracting information from PDFs, and handling various document-based QA scenarios.
"},{"location":"swarms/models/layoutlm_document_qa/#class-definition","title":"Class Definition","text":"class LayoutLMDocumentQA(AbstractModel):\n def __init__(\n self, \n model_name: str = \"impira/layoutlm-document-qa\",\n task: str = \"document-question-answering\",\n ):\n
"},{"location":"swarms/models/layoutlm_document_qa/#purpose","title":"Purpose","text":"The LayoutLMDocumentQA class serves the following primary purposes:
Document QA: LayoutLMDocumentQA is specifically designed for document-based question answering. It can process both the textual content and the layout of a document to answer questions.
Multimodal Understanding: It combines natural language understanding with document layout analysis, making it suitable for documents with complex structures.
model_name
(str): The name or path of the pretrained LayoutLMDocumentQA model. Default: \"impira/layoutlm-document-qa\".task
(str): The specific task for which the model will be used. Default: \"document-question-answering\".To use LayoutLMDocumentQA, follow these steps:
from swarm_models import LayoutLMDocumentQA\n\nlayout_lm_doc_qa = LayoutLMDocumentQA()\n
"},{"location":"swarms/models/layoutlm_document_qa/#example-1-initialization","title":"Example 1 - Initialization","text":"layout_lm_doc_qa = LayoutLMDocumentQA()\n
question = \"What is the total amount?\"\nimage_path = \"path/to/document_image.png\"\nanswer = layout_lm_doc_qa(question, image_path)\n
"},{"location":"swarms/models/layoutlm_document_qa/#example-2-document-qa","title":"Example 2 - Document QA","text":"layout_lm_doc_qa = LayoutLMDocumentQA()\nquestion = \"What is the total amount?\"\nimage_path = \"path/to/document_image.png\"\nanswer = layout_lm_doc_qa(question, image_path)\n
"},{"location":"swarms/models/layoutlm_document_qa/#how-layoutlmdocumentqa-works","title":"How LayoutLMDocumentQA Works","text":"LayoutLMDocumentQA employs a multimodal approach to document QA. Here's how it works:
Initialization: When you create a LayoutLMDocumentQA instance, you can specify the model to use and the task, which is \"document-question-answering\" by default.
Question and Document: You provide a question about the document and the image path of the document to the LayoutLMDocumentQA instance.
Multimodal Processing: LayoutLMDocumentQA processes both the question and the document image. It combines layout-based analysis with natural language understanding.
Answer Generation: The model generates an answer to the question based on its analysis of the document layout and content.
That concludes the documentation for LayoutLMDocumentQA. We hope you find this tool valuable for your document-based question answering needs. If you have any questions or encounter any issues, please refer to the LayoutLMDocumentQA documentation for further assistance. Enjoy using LayoutLMDocumentQA!
"},{"location":"swarms/models/llama3/","title":"Llama3","text":""},{"location":"swarms/models/llama3/#llava3","title":"Llava3","text":"from transformers import AutoTokenizer, AutoModelForCausalLM\nimport torch\nfrom swarm_models.base_llm import BaseLLM\n\n\nclass Llama3(BaseLLM):\n \"\"\"\n Llama3 class represents a Llama model for natural language generation.\n\n Args:\n model_id (str): The ID of the Llama model to use.\n system_prompt (str): The system prompt to use for generating responses.\n temperature (float): The temperature value for controlling the randomness of the generated responses.\n top_p (float): The top-p value for controlling the diversity of the generated responses.\n max_tokens (int): The maximum number of tokens to generate in the response.\n **kwargs: Additional keyword arguments.\n\n Attributes:\n model_id (str): The ID of the Llama model being used.\n system_prompt (str): The system prompt for generating responses.\n temperature (float): The temperature value for generating responses.\n top_p (float): The top-p value for generating responses.\n max_tokens (int): The maximum number of tokens to generate in the response.\n tokenizer (AutoTokenizer): The tokenizer for the Llama model.\n model (AutoModelForCausalLM): The Llama model for generating responses.\n\n Methods:\n run(task, *args, **kwargs): Generates a response for the given task.\n\n \"\"\"\n\n def __init__(\n self,\n model_id=\"meta-llama/Meta-Llama-3-8B-Instruct\",\n system_prompt: str = None,\n temperature: float = 0.6,\n top_p: float = 0.9,\n max_tokens: int = 4000,\n **kwargs,\n ):\n self.model_id = model_id\n self.system_prompt = system_prompt\n self.temperature = temperature\n self.top_p = top_p\n self.max_tokens = max_tokens\n self.tokenizer = AutoTokenizer.from_pretrained(model_id)\n self.model = AutoModelForCausalLM.from_pretrained(\n model_id,\n torch_dtype=torch.bfloat16,\n device_map=\"auto\",\n )\n\n def run(self, task: str, *args, **kwargs):\n \"\"\"\n Generates a 
response for the given task.\n\n Args:\n task (str): The user's task or input.\n\n Returns:\n str: The generated response.\n\n \"\"\"\n messages = [\n {\"role\": \"system\", \"content\": self.system_prompt},\n {\"role\": \"user\", \"content\": task},\n ]\n\n input_ids = self.tokenizer.apply_chat_template(\n messages, add_generation_prompt=True, return_tensors=\"pt\"\n ).to(self.model.device)\n\n terminators = [\n self.tokenizer.eos_token_id,\n self.tokenizer.convert_tokens_to_ids(\"<|eot_id|>\"),\n ]\n\n outputs = self.model.generate(\n input_ids,\n max_new_tokens=self.max_tokens,\n eos_token_id=terminators,\n do_sample=True,\n temperature=self.temperature,\n top_p=self.top_p,\n *args,\n **kwargs,\n )\n response = outputs[0][input_ids.shape[-1] :]\n return self.tokenizer.decode(\n response, skip_special_tokens=True\n )\n
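One detail worth noting: `system_prompt` defaults to `None`, so a caller-side guard avoids sending an empty system message. A hedged sketch of the message construction (`build_messages` is illustrative, not a method of the class):

```python
def build_messages(task, system_prompt=None):
    """Build a chat-format message list, omitting the system turn when unset."""
    messages = []
    if system_prompt is not None:
        # Only include a system message when a system prompt was actually set.
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": task})
    return messages

msgs = build_messages("Summarize relativity.", system_prompt="You are concise.")
```

The resulting list is what `tokenizer.apply_chat_template` consumes in `run()` above.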
"},{"location":"swarms/models/models_available_overview/","title":"Models available overview","text":""},{"location":"swarms/models/models_available_overview/#the-swarms-framework-a-comprehensive-guide-to-model-apis-and-usage","title":"The Swarms Framework: A Comprehensive Guide to Model APIs and Usage","text":""},{"location":"swarms/models/models_available_overview/#introduction","title":"Introduction","text":"The Swarms framework is a versatile and robust tool designed to streamline the integration and orchestration of multiple AI models, making it easier for developers to build sophisticated multi-agent systems. This blog aims to provide a detailed guide on using the Swarms framework, covering the various models it supports, common methods, settings, and practical examples.
"},{"location":"swarms/models/models_available_overview/#overview-of-the-swarms-framework","title":"Overview of the Swarms Framework","text":"Swarms is a \"framework of frameworks\" that allows seamless integration of various AI models, including those from OpenAI, Anthropic, Hugging Face, Azure, and more. This flexibility enables users to leverage the strengths of different models within a single application. The framework provides a unified interface for model interaction, simplifying the process of integrating and managing multiple AI models.
"},{"location":"swarms/models/models_available_overview/#getting-started-with-swarms","title":"Getting Started with Swarms","text":"To get started with Swarms, you need to install the framework and set up the necessary environment variables. Here's a step-by-step guide:
"},{"location":"swarms/models/models_available_overview/#installation","title":"Installation","text":"You can install the Swarms framework using pip:
pip install swarms\n
"},{"location":"swarms/models/models_available_overview/#setting-up-environment-variables","title":"Setting Up Environment Variables","text":"Swarms relies on environment variables to manage API keys and other configurations. You can use the dotenv
package to load these variables from a .env
file.
pip install python-dotenv\n
Create a .env
file in your project directory and add your API keys and other settings:
OPENAI_API_KEY=your_openai_api_key\nANTHROPIC_API_KEY=your_anthropic_api_key\nAZURE_OPENAI_ENDPOINT=your_azure_openai_endpoint\nAZURE_OPENAI_DEPLOYMENT=your_azure_openai_deployment\nOPENAI_API_VERSION=your_openai_api_version\nAZURE_OPENAI_API_KEY=your_azure_openai_api_key\nAZURE_OPENAI_AD_TOKEN=your_azure_openai_ad_token\n
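For reference, the loading step that python-dotenv performs amounts to parsing KEY=value lines into the process environment; a minimal sketch (prefer the real package in production, which also handles quoting and interpolation):

```python
import os

def load_env(path=".env"):
    """Minimal .env loader: KEY=value lines, '#' comments, no quoting rules."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                # setdefault: real environment variables take precedence.
                os.environ.setdefault(key.strip(), value.strip())
```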
"},{"location":"swarms/models/models_available_overview/#using-the-swarms-framework","title":"Using the Swarms Framework","text":"Swarms supports a variety of models from different providers. Here are some examples of how to use these models within the Swarms framework.
"},{"location":"swarms/models/models_available_overview/#using-the-anthropic-model","title":"Using the Anthropic Model","text":"The Anthropic model is one of the many models supported by Swarms. Here's how you can use it:
import os\nfrom swarm_models import Anthropic\n\n# Load the environment variables\nanthropic_api_key = os.getenv(\"ANTHROPIC_API_KEY\")\n\n# Create an instance of the Anthropic model\nmodel = Anthropic(anthropic_api_key=anthropic_api_key)\n\n# Define the task\ntask = \"What is quantum field theory? What are 3 books on the field?\"\n\n# Generate a response\nresponse = model(task)\n\n# Print the response\nprint(response)\n
"},{"location":"swarms/models/models_available_overview/#using-the-huggingfacellm-model","title":"Using the HuggingfaceLLM Model","text":"HuggingfaceLLM allows you to use models from Hugging Face's vast repository. Here's an example:
from swarm_models import HuggingfaceLLM\n\n# Define the model ID\nmodel_id = \"NousResearch/Yarn-Mistral-7b-128k\"\n\n# Create an instance of the HuggingfaceLLM model\ninference = HuggingfaceLLM(model_id=model_id)\n\n# Define the task\ntask = \"Once upon a time\"\n\n# Generate a response\ngenerated_text = inference(task)\nprint(generated_text)\n
"},{"location":"swarms/models/models_available_overview/#using-the-openaichat-model","title":"Using the OpenAIChat Model","text":"The OpenAIChat model is designed for conversational tasks. Here's how to use it:
import os\nfrom swarm_models import OpenAIChat\n\n# Load the environment variables\nopenai_api_key = os.getenv(\"OPENAI_API_KEY\")\n\n# Create an instance of the OpenAIChat model\nopenai = OpenAIChat(openai_api_key=openai_api_key, verbose=False)\n\n# Define the task\nchat = openai(\"What are quantum fields?\")\nprint(chat)\n
"},{"location":"swarms/models/models_available_overview/#using-the-togetherllm-model","title":"Using the TogetherLLM Model","text":"TogetherLLM supports models from the Together ecosystem. Here's an example:
from swarms import TogetherLLM\n\n# Initialize the model with your parameters\nmodel = TogetherLLM(\n model_name=\"mistralai/Mixtral-8x7B-Instruct-v0.1\",\n max_tokens=1000,\n together_api_key=\"your_together_api_key\",\n)\n\n# Run the model\nresponse = model.run(\"Generate a blog post about the best way to make money online.\")\nprint(response)\n
"},{"location":"swarms/models/models_available_overview/#using-the-azure-openai-model","title":"Using the Azure OpenAI Model","text":"The Azure OpenAI model is another powerful tool that can be integrated with Swarms. Here's how to use it:
import os\nfrom dotenv import load_dotenv\nfrom swarms import AzureOpenAI\n\n# Load the environment variables\nload_dotenv()\n\n# Create an instance of the AzureOpenAI class\nmodel = AzureOpenAI(\n azure_endpoint=os.getenv(\"AZURE_OPENAI_ENDPOINT\"),\n deployment_name=os.getenv(\"AZURE_OPENAI_DEPLOYMENT\"),\n openai_api_version=os.getenv(\"OPENAI_API_VERSION\"),\n openai_api_key=os.getenv(\"AZURE_OPENAI_API_KEY\"),\n azure_ad_token=os.getenv(\"AZURE_OPENAI_AD_TOKEN\"),\n)\n\n# Define the prompt\nprompt = (\n \"Analyze this load document and assess it for any risks and\"\n \" create a table in markdown format.\"\n)\n\n# Generate a response\nresponse = model(prompt)\nprint(response)\n
"},{"location":"swarms/models/models_available_overview/#using-the-gpt4visionapi-model","title":"Using the GPT4VisionAPI Model","text":"The GPT4VisionAPI model can analyze images and provide detailed insights. Here's how to use it:
import os\nfrom dotenv import load_dotenv\nfrom swarms import GPT4VisionAPI\n\n# Load the environment variables\nload_dotenv()\n\n# Get the API key from the environment variables\napi_key = os.getenv(\"OPENAI_API_KEY\")\n\n# Create an instance of the GPT4VisionAPI class\ngpt4vision = GPT4VisionAPI(\n openai_api_key=api_key,\n model_name=\"gpt-4o\",\n max_tokens=1000,\n openai_proxy=\"https://api.openai.com/v1/chat/completions\",\n)\n\n# Define the URL of the image to analyze\nimg = \"ear.png\"\n\n# Define the task to perform on the image\ntask = \"What is this image\"\n\n# Run the GPT4VisionAPI on the image with the specified task\nanswer = gpt4vision.run(task, img, return_json=True)\n\n# Print the answer\nprint(answer)\n
"},{"location":"swarms/models/models_available_overview/#using-the-qwenvlmultimodal-model","title":"Using the QwenVLMultiModal Model","text":"The QwenVLMultiModal model is designed for multi-modal tasks, such as processing both text and images. Here's an example of how to use it:
from swarms import QwenVLMultiModal\n\n# Instantiate the QwenVLMultiModal model\nmodel = QwenVLMultiModal(\n model_name=\"Qwen/Qwen-VL-Chat\",\n device=\"cuda\",\n quantize=True,\n)\n\n# Run the model\nresponse = model(\"Hello, how are you?\", \"https://example.com/image.jpg\")\n\n# Print the response\nprint(response)\n
"},{"location":"swarms/models/models_available_overview/#common-methods-in-swarms","title":"Common Methods in Swarms","text":"Swarms provides several common methods that are useful across different models. One of the most frequently used methods is __call__
.
__call__
Method","text":"The __call__
method is used to run the model on a given task. Here is a generic example:
# Assuming `model` is an instance of any supported model\ntask = \"Explain the theory of relativity.\"\nresponse = model(task)\nprint(response)\n
This method abstracts the complexity of interacting with different model APIs, providing a consistent interface for executing tasks.
"},{"location":"swarms/models/models_available_overview/#common-settings-in-swarms","title":"Common Settings in Swarms","text":"Swarms allows you to configure various settings to customize the behavior of the models. Here are some common settings:
"},{"location":"swarms/models/models_available_overview/#api-keys","title":"API Keys","text":"API keys are essential for authenticating and accessing the models. These keys are typically set through environment variables:
import os\n\n# Set API keys as environment variables\nos.environ['OPENAI_API_KEY'] = 'your_openai_api_key'\nos.environ['ANTHROPIC_API_KEY'] = 'your_anthropic_api_key'\n
"},{"location":"swarms/models/models_available_overview/#model-specific-settings","title":"Model-Specific Settings","text":"Different models may have specific settings that need to be configured. For example, the AzureOpenAI
model requires several settings related to the Azure environment:
model = AzureOpenAI(\n azure_endpoint=os.getenv(\"AZURE_OPENAI_ENDPOINT\"),\n deployment_name=os.getenv(\"AZURE_OPENAI_DEPLOYMENT\"),\n openai_api_version=os.getenv(\"OPENAI_API_VERSION\"),\n openai_api_key=os.getenv(\"AZURE_OPENAI_API_KEY\"),\n azure_ad_token=os.getenv(\"AZURE_OPENAI_AD_TOKEN\"),\n)\n
"},{"location":"swarms/models/models_available_overview/#advanced-usage-and-best-practices","title":"Advanced Usage and Best Practices","text":"To make the most out of the Swarms framework, consider the following best practices:
"},{"location":"swarms/models/models_available_overview/#extensive-logging","title":"Extensive Logging","text":"Use logging to monitor the behavior and performance of your models. The loguru
library is recommended for its simplicity and flexibility:
from loguru import logger\n\n# Log model interactions\nlogger.info(\"Running task on Anthropic model\")\nresponse = model(task)\nlogger.info(f\"Response: {response}\")\n
"},{"location":"swarms/models/models_available_overview/#error-handling","title":"Error Handling","text":"Implement robust error handling to manage API failures and other issues gracefully:
try:\n response = model(task)\nexcept Exception as e:\n logger.error(f\"Error running task: {e}\")\n response = \"An error occurred while processing your request.\"\nprint(response)\n
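For transient API failures, a retry loop with exponential backoff is a common complement to the try/except above. This is a generic sketch (the `flaky_model` stub is hypothetical), not a built-in Swarms feature:

```python
import time


def run_with_retries(model, task, max_retries=3, base_delay=1.0):
    """Call `model(task)`, retrying with exponential backoff on failure."""
    for attempt in range(max_retries):
        try:
            return model(task)
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error
            time.sleep(base_delay * (2 ** attempt))


# Hypothetical flaky model: fails twice, then succeeds.
calls = {"n": 0}


def flaky_model(task):
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient API error")
    return f"ok: {task}"


print(run_with_retries(flaky_model, "demo", base_delay=0.01))
```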
"},{"location":"swarms/models/models_available_overview/#conclusion","title":"Conclusion","text":"The Swarms framework provides a powerful and flexible way to integrate and manage multiple AI models within a single application. By following the guidelines and examples provided in this blog, you can leverage Swarms to build sophisticated, multi-agent systems with ease. Whether you're using models from OpenAI, Anthropic, Azure, or Hugging Face,
Swarms offers a unified interface that simplifies the process of model orchestration and execution.
"},{"location":"swarms/models/nougat/","title":"Nougat
Documentation","text":""},{"location":"swarms/models/nougat/#introduction","title":"Introduction","text":"Welcome to the documentation for Nougat, a versatile model designed by Meta for transcribing scientific PDFs into user-friendly Markdown format, extracting information from PDFs, and extracting metadata from PDF documents. This documentation will provide you with a deep understanding of the Nougat class, its architecture, usage, and examples.
"},{"location":"swarms/models/nougat/#overview","title":"Overview","text":"Nougat is a powerful tool that combines language modeling and image processing capabilities to convert scientific PDF documents into Markdown format. It is particularly useful for researchers, students, and professionals who need to extract valuable information from PDFs quickly. With Nougat, you can simplify complex PDFs, making their content more accessible and easy to work with.
"},{"location":"swarms/models/nougat/#class-definition","title":"Class Definition","text":"class Nougat:\n def __init__(\n self,\n model_name_or_path=\"facebook/nougat-base\",\n min_length: int = 1,\n max_new_tokens: int = 30,\n ):\n
"},{"location":"swarms/models/nougat/#purpose","title":"Purpose","text":"The Nougat class serves the following primary purposes:
PDF Transcription: Nougat is designed to transcribe scientific PDFs into Markdown format. It helps convert complex PDF documents into a more readable and structured format, making it easier to extract information.
Information Extraction: It allows users to extract valuable information and content from PDFs efficiently. This can be particularly useful for researchers and professionals who need to extract data, figures, or text from scientific papers.
Metadata Extraction: Nougat can also extract metadata from PDF documents, providing essential details about the document, such as title, author, and publication date.
model_name_or_path
(str): The name or path of the pretrained Nougat model. Default: \"facebook/nougat-base\".min_length
(int): The minimum length of the generated transcription. Default: 1.max_new_tokens
(int): The maximum number of new tokens to generate in the Markdown transcription. Default: 30.To use Nougat, follow these steps:
from swarm_models import Nougat\n\nnougat = Nougat()\n
"},{"location":"swarms/models/nougat/#example-1-initialization","title":"Example 1 - Initialization","text":"nougat = Nougat()\n
markdown_transcription = nougat(\"path/to/pdf_file.png\")\n
"},{"location":"swarms/models/nougat/#example-2-pdf-transcription","title":"Example 2 - PDF Transcription","text":"nougat = Nougat()\nmarkdown_transcription = nougat(\"path/to/pdf_file.png\")\n
information = nougat.extract_information(\"path/to/pdf_file.png\")\n
"},{"location":"swarms/models/nougat/#example-3-information-extraction","title":"Example 3 - Information Extraction","text":"nougat = Nougat()\ninformation = nougat.extract_information(\"path/to/pdf_file.png\")\n
metadata = nougat.extract_metadata(\"path/to/pdf_file.png\")\n
"},{"location":"swarms/models/nougat/#example-4-metadata-extraction","title":"Example 4 - Metadata Extraction","text":"nougat = Nougat()\nmetadata = nougat.extract_metadata(\"path/to/pdf_file.png\")\n
"},{"location":"swarms/models/nougat/#how-nougat-works","title":"How Nougat Works","text":"Nougat employs a vision encoder-decoder model, along with a dedicated processor, to transcribe PDFs into Markdown format and perform information and metadata extraction. Here's how it works:
Initialization: When you create a Nougat instance, you can specify the model to use, the minimum transcription length, and the maximum number of new tokens to generate.
Processing PDFs: Nougat can process PDFs as input. You can provide the path to a PDF document.
Image Processing: The processor converts PDF pages into images, which are then encoded by the model.
Transcription: Nougat generates Markdown transcriptions of PDF content, ensuring a minimum length and respecting the token limit.
Information Extraction: Information extraction involves parsing the Markdown transcription to identify key details or content of interest.
Metadata Extraction: Metadata extraction involves identifying and extracting document metadata, such as title, author, and publication date.
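As an illustration of the extraction steps above, metadata such as a title can be pulled from a Markdown transcription with simple parsing. This sketch is purely illustrative and does not reflect Nougat's actual implementation:

```python
def extract_title(markdown):
    """Return the first top-level heading in a Markdown transcription, or None."""
    for line in markdown.splitlines():
        if line.startswith("# "):
            return line[2:].strip()
    return None


transcription = "# A Study of Transformers\n\nAuthors: ...\n\n## Abstract\n..."
print(extract_title(transcription))  # A Study of Transformers
```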
That concludes the documentation for Nougat. We hope you find this tool valuable for your PDF transcription, information extraction, and metadata extraction needs. If you have any questions or encounter any issues, please refer to the Nougat documentation for further assistance. Enjoy using Nougat!
"},{"location":"swarms/models/openai/","title":"BaseOpenAI
and OpenAI
Documentation","text":""},{"location":"swarms/models/openai/#table-of-contents","title":"Table of Contents","text":"The BaseOpenAI
and OpenAI
classes are part of the LangChain library, designed to interact with OpenAI's large language models (LLMs). These classes provide a seamless interface for utilizing OpenAI's API to generate natural language text.
Both BaseOpenAI
and OpenAI
classes inherit from BaseLLM
, demonstrating an inheritance-based architecture. This architecture allows for easy extensibility and customization while adhering to the principles of object-oriented programming.
The purpose of these classes is to simplify the interaction with OpenAI's LLMs. They encapsulate API calls, handle tokenization, and provide a high-level interface for generating text. By instantiating an object of the OpenAI
class, developers can quickly leverage the power of OpenAI's models to generate text for various applications, such as chatbots, content generation, and more.
Here are the key attributes and their descriptions for the BaseOpenAI
and OpenAI
classes:
lc_secrets
A dictionary of secrets required for LangChain, including the OpenAI API key. lc_attributes
A dictionary of attributes relevant to LangChain. is_lc_serializable()
A method indicating if the class is serializable for LangChain. model_name
The name of the language model to use. temperature
The sampling temperature for text generation. max_tokens
The maximum number of tokens to generate in a completion. top_p
The total probability mass of tokens to consider at each step. frequency_penalty
Penalizes repeated tokens according to frequency. presence_penalty
Penalizes repeated tokens. n
How many completions to generate for each prompt. best_of
Generates best_of
completions server-side and returns the \"best.\" model_kwargs
Holds any model parameters valid for create
calls not explicitly specified. openai_api_key
The OpenAI API key used for authentication. openai_api_base
The base URL for the OpenAI API. openai_organization
The OpenAI organization name, if applicable. openai_proxy
An explicit proxy URL for OpenAI requests. batch_size
The batch size to use when passing multiple documents for generation. request_timeout
The timeout for requests to the OpenAI completion API. logit_bias
Adjustment to the probability of specific tokens being generated. max_retries
The maximum number of retries to make when generating. streaming
Whether to stream the results or not. allowed_special
A set of special tokens that are allowed. disallowed_special
A collection of special tokens that are not allowed. tiktoken_model_name
The model name to pass to tiktoken
for token counting."},{"location":"swarms/models/openai/#5-methods","title":"5. Methods","text":""},{"location":"swarms/models/openai/#51-construction","title":"5.1 Construction","text":""},{"location":"swarms/models/openai/#511-__new__cls-data-any-unionopenaichat-baseopenai","title":"5.1.1 __new__(cls, **data: Any) -> Union[OpenAIChat, BaseOpenAI]
","text":"cls
(class): The class instance.data
(dict): Additional data for initialization.build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]
","text":"cls
(class): The class instance.values
(dict): Values and parameters to build extra kwargs.validate_environment(cls, values: Dict) -> Dict
","text":"values
(dict): The class values and parameters.get_sub_prompts(self, params: Dict[str, Any], prompts: List[str], stop: Optional[List[str]] = None) -> List[List[str]]
","text":"params
(dict): Parameters for LLM call.prompts
(list): List of prompts.stop
(list, optional): List of stop words.get_token_ids(self, text: str) -> List[int]
","text":"tiktoken
package.text
(str): The text for which to calculate token IDs.modelname_to_contextsize(modelname: str) -> int
","text":"modelname
(str): The model name to determine the context size for.max_tokens_for_prompt(self, prompt: str) -> int
","text":"prompt
(str): The prompt for which to determine the maximum token limit. - Returns: - int: The maximum token limit.
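The relationship between `modelname_to_contextsize` and `max_tokens_for_prompt` can be sketched as follows. The context-window table and the whitespace "tokenizer" below are illustrative assumptions only; the real classes use `tiktoken` and the models' actual limits:

```python
# Illustrative context-window table; real values come from the library.
CONTEXT_SIZES = {"gpt-3.5-turbo": 4096, "gpt-4": 8192}


def modelname_to_contextsize(modelname):
    """Look up the total context window for a model name."""
    return CONTEXT_SIZES[modelname]


def max_tokens_for_prompt(modelname, prompt):
    """Tokens left for the completion after the prompt is counted."""
    # Crude whitespace split stands in for tiktoken here.
    used = len(prompt.split())
    return modelname_to_contextsize(modelname) - used


print(max_tokens_for_prompt("gpt-4", "Translate this sentence to French"))
```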
"},{"location":"swarms/models/openai/#54-generation","title":"5.4 Generation","text":""},{"location":"swarms/models/openai/#541-generateself-text-unionstr-liststr-kwargs-unionstr-liststr","title":"5.4.1generate(self, text: Union[str, List[str]], **kwargs) -> Union[str, List[str]]
","text":"text
(str or list): The input text or list of inputs.**kwargs
(dict): Additional parameters for the generation process.generate_async(self, text: Union[str, List[str]], **kwargs) -> Union[str, List[str]]
","text":"text
(str or list): The input text or list of inputs.**kwargs
(dict): Additional parameters for the asynchronous generation process.# Import the OpenAI class\nfrom swarm_models import OpenAI\n\n# Set your OpenAI API key\napi_key = \"YOUR_API_KEY\"\n\n# Create an OpenAI object\nopenai = OpenAI(api_key)\n
"},{"location":"swarms/models/openai/#62-generating-text","title":"6.2 Generating Text","text":"# Generate text from a single prompt\nprompt = \"Translate the following English text to French: 'Hello, how are you?'\"\ngenerated_text = openai.generate(prompt, max_tokens=50)\n\n# Generate text from multiple prompts\nprompts = [\n \"Translate this: 'Good morning' to Spanish.\",\n \"Summarize the following article:\",\n article_text,\n]\ngenerated_texts = openai.generate(prompts, max_tokens=100)\n\n# Generate text asynchronously\nasync_prompt = \"Translate 'Thank you' into German.\"\nasync_result = openai.generate_async(async_prompt, max_tokens=30)\n\n# Access the result of an asynchronous generation\nasync_result_text = async_result.get()\n
"},{"location":"swarms/models/openai/#63-advanced-configuration","title":"6.3 Advanced Configuration","text":"# Configure generation with advanced options\ncustom_options = {\n \"temperature\": 0.7,\n \"max_tokens\": 100,\n \"top_p\": 0.9,\n \"frequency_penalty\": 0.2,\n \"presence_penalty\": 0.4,\n}\ngenerated_text = openai.generate(prompt, **custom_options)\n
This documentation provides a comprehensive understanding of the BaseOpenAI
and OpenAI
classes, their attributes, methods, and usage examples. Developers can utilize these classes to interact with OpenAI's language models efficiently, enabling various natural language generation tasks.
OpenAIChat
Documentation","text":""},{"location":"swarms/models/openai_chat/#table-of-contents","title":"Table of Contents","text":"The OpenAIChat
class is part of the LangChain library and serves as an interface to interact with OpenAI's Chat large language models. This documentation provides an in-depth understanding of the class, its attributes, methods, and usage examples.
The OpenAIChat
class is designed for conducting chat-like conversations with OpenAI's language models, such as GPT-3.5 Turbo. It allows you to create interactive conversations by sending messages and receiving model-generated responses. This class simplifies the process of integrating OpenAI's models into chatbot applications and other natural language processing tasks.
The OpenAIChat
class is built on top of the BaseLLM
class, which provides a foundation for working with large language models. This inheritance-based architecture allows for customization and extension while adhering to object-oriented programming principles.
Here are the key attributes and their descriptions for the OpenAIChat
class:
client
An internal client for making API calls to OpenAI. model_name
The name of the language model to use (default: \"gpt-3.5-turbo\"). model_kwargs
Additional model parameters valid for create
calls not explicitly specified. openai_api_key
The OpenAI API key used for authentication. openai_api_base
The base URL for the OpenAI API. openai_proxy
An explicit proxy URL for OpenAI requests. max_retries
The maximum number of retries to make when generating (default: 6). prefix_messages
A list of messages to set the initial conversation state (default: []). streaming
Whether to stream the results or not (default: False). allowed_special
A set of special tokens that are allowed (default: an empty set). disallowed_special
A collection of special tokens that are not allowed (default: \"all\")."},{"location":"swarms/models/openai_chat/#5-methods","title":"5. Methods","text":""},{"location":"swarms/models/openai_chat/#51-construction","title":"5.1 Construction","text":""},{"location":"swarms/models/openai_chat/#511-__init__self-model_name-str-gpt-35-turbo-openai_api_key-optionalstr-none-openai_api_base-optionalstr-none-openai_proxy-optionalstr-none-max_retries-int-6-prefix_messages-list","title":"5.1.1 __init__(self, model_name: str = \"gpt-3.5-turbo\", openai_api_key: Optional[str] = None, openai_api_base: Optional[str] = None, openai_proxy: Optional[str] = None, max_retries: int = 6, prefix_messages: List = [])
","text":"model_name
(str): The name of the language model to use (default: \"gpt-3.5-turbo\").openai_api_key
(str, optional): The OpenAI API key used for authentication.openai_api_base
(str, optional): The base URL for the OpenAI API.openai_proxy
(str, optional): An explicit proxy URL for OpenAI requests.max_retries
(int): The maximum number of retries to make when generating (default: 6).prefix_messages
(List): A list of messages to set the initial conversation state (default: []).build_extra(self, values: Dict[str, Any]) -> Dict[str, Any]
","text":"values
(dict): Values and parameters to build extra kwargs.validate_environment(self, values: Dict) -> Dict
","text":"values
(dict): The class values and parameters._get_chat_params(self, prompts: List[str], stop: Optional[List[str]] = None) -> Tuple
","text":"prompts
(list): List of user messages.stop
(list, optional): List of stop words._stream(self, prompt: str, stop: Optional[List[str]] = None, run_manager: Optional[CallbackManagerForLLMRun] = None, **kwargs: Any) -> Iterator[GenerationChunk]
","text":"prompt
(str): The user's message.stop
(list, optional): List of stop words.run_manager
(optional): Callback manager for asynchronous generation.**kwargs
(dict): Additional parameters for asynchronous generation._agenerate(self, prompts: List[str], stop: Optional[List[str]] = None, run_manager: Optional[AsyncCallbackManagerForLLMRun] = None, **kwargs: Any) -> LLMResult
","text":"prompts
(list): List of user messages.stop
(list, optional): List of stop words.run_manager
(optional): Callback manager for asynchronous generation.**kwargs
(dict): Additional parameters for asynchronous generation.get_token_ids(self, text: str) -> List[int]
","text":"text
(str): The text for which to calculate token IDs.token IDs.
"},{"location":"swarms/models/openai_chat/#6-usage-examples","title":"6. Usage Examples","text":""},{"location":"swarms/models/openai_chat/#example-1-initializing-openaichat","title":"Example 1: InitializingOpenAIChat
","text":"from swarm_models import OpenAIChat\n\n# Initialize OpenAIChat with model name and API key\nopenai_chat = OpenAIChat(model_name=\"gpt-3.5-turbo\", openai_api_key=\"YOUR_API_KEY\")\n
"},{"location":"swarms/models/openai_chat/#example-2-sending-messages-and-generating-responses","title":"Example 2: Sending Messages and Generating Responses","text":"# Define a conversation\nconversation = [\n \"User: Tell me a joke.\",\n \"Assistant: Why did the chicken cross the road?\",\n \"User: I don't know. Why?\",\n \"Assistant: To get to the other side!\",\n]\n\n# Set the conversation as the prefix messages\nopenai_chat.prefix_messages = conversation\n\n# Generate a response\nuser_message = \"User: Tell me another joke.\"\nresponse = openai_chat.generate([user_message])\n\n# Print the generated response\nprint(\n response[0][0].text\n) # Output: \"Assistant: Why don't scientists trust atoms? Because they make up everything!\"\n
"},{"location":"swarms/models/openai_chat/#example-3-asynchronous-generation","title":"Example 3: Asynchronous Generation","text":"import asyncio\n\n\n# Define an asynchronous function for generating responses\nasync def generate_responses():\n user_message = \"User: Tell me a fun fact.\"\n async for chunk in openai_chat.stream([user_message]):\n print(chunk.text)\n\n\n# Run the asynchronous generation function\nasyncio.run(generate_responses())\n
"},{"location":"swarms/models/openai_chat/#7-additional-information","title":"7. Additional Information","text":"OpenAIChat
class, you should have the openai
Python package installed, and the environment variable OPENAI_API_KEY
set with your API key.openai.create
call can be passed to the OpenAIChat
constructor.model_name
, openai_api_key
, prefix_messages
, and more._stream
and _agenerate
methods to interactively receive model-generated text chunks.get_token_ids
method, which utilizes the tiktoken
package. Make sure to install the tiktoken
package with pip install tiktoken
if needed.This documentation provides a comprehensive overview of the OpenAIChat
class, its attributes, methods, and usage examples. You can use this class to create chatbot applications, conduct conversations with language models, and explore the capabilities of OpenAI's GPT-3.5 Turbo model.
The OpenAIFunctionCaller
class is designed to interface with OpenAI's chat completion API, allowing users to generate responses based on given prompts using specified models. This class encapsulates the setup and execution of API calls, including handling API keys, model parameters, and response formatting. The class extends the BaseLLM
and utilizes OpenAI's client library to facilitate interactions.
A class that represents a caller for OpenAI chat completions.
"},{"location":"swarms/models/openai_function_caller/#attributes","title":"Attributes","text":"Attribute Type Descriptionsystem_prompt
str
The system prompt to be used in the chat completion. model_name
str
The name of the OpenAI model to be used. max_tokens
int
The maximum number of tokens in the generated completion. temperature
float
The temperature parameter for randomness in the completion. base_model
BaseModel
The base model to be used for the completion. parallel_tool_calls
bool
Whether to make parallel tool calls. top_p
float
The top-p parameter for nucleus sampling in the completion. client
openai.OpenAI
The OpenAI client for making API calls."},{"location":"swarms/models/openai_function_caller/#methods","title":"Methods","text":""},{"location":"swarms/models/openai_function_caller/#check_api_key","title":"check_api_key
","text":"Checks if the API key is provided and retrieves it from the environment if not.
Parameter Type Description NoneReturns:
Type Descriptionstr
The API key."},{"location":"swarms/models/openai_function_caller/#run","title":"run
","text":"Runs the chat completion with the given task and returns the generated completion.
Parameter Type Descriptiontask
str
The user's task for the chat completion. *args
Additional positional arguments to be passed to the OpenAI API. **kwargs
Additional keyword arguments to be passed to the OpenAI API. Returns:
Type Descriptionstr
The generated completion."},{"location":"swarms/models/openai_function_caller/#convert_to_dict_from_base_model","title":"convert_to_dict_from_base_model
","text":"Converts a BaseModel
to a dictionary.
base_model
BaseModel
The BaseModel to be converted. Returns:
Type Descriptiondict
A dictionary representing the BaseModel."},{"location":"swarms/models/openai_function_caller/#convert_list_of_base_models","title":"convert_list_of_base_models
","text":"Converts a list of BaseModels
to a list of dictionaries.
base_models
List[BaseModel]
A list of BaseModels to be converted. Returns:
Type DescriptionList[Dict]
A list of dictionaries representing the converted BaseModels."},{"location":"swarms/models/openai_function_caller/#usage-examples","title":"Usage Examples","text":"Here are three examples demonstrating different ways to use the OpenAIFunctionCaller
class:
import openai\nfrom swarm_models.openai_function_caller import OpenAIFunctionCaller\nfrom swarms.artifacts.main_artifact import Artifact\n\n\n# Pydantic is a data validation library that provides data validation and parsing using Python type hints.\n\n\n# Example usage:\n# Initialize the function caller\nmodel = OpenAIFunctionCaller(\n    system_prompt=\"You're a helpful assistant. The time is August 6, 2024\",\n    max_tokens=500,\n    temperature=0.5,\n    base_model=Artifact,\n    parallel_tool_calls=False,\n)\n\n\n# The OpenAIFunctionCaller class is used to interact with the OpenAI API and make function calls.\n# Here, we initialize an instance of the OpenAIFunctionCaller class with the following parameters:\n# - system_prompt: A prompt that sets the context for the conversation with the API.\n# - max_tokens: The maximum number of tokens to generate in the API response.\n# - temperature: A parameter that controls the randomness of the generated text.\n# - base_model: The base model to use for the API calls, in this case, the Artifact class.\nout = model.run(\"Create a python file with a python game code in it\")\nprint(out)\n
"},{"location":"swarms/models/openai_function_caller/#example-2-prompt-generator","title":"Example 2: Prompt Generator","text":"from swarm_models.openai_function_caller import OpenAIFunctionCaller\nfrom pydantic import BaseModel, Field\nfrom typing import Sequence\n\n\nclass PromptUseCase(BaseModel):\n    use_case_name: str = Field(\n        ...,\n        description=\"The name of the use case\",\n    )\n    use_case_description: str = Field(\n        ...,\n        description=\"The description of the use case\",\n    )\n\n\nclass PromptSpec(BaseModel):\n    prompt_name: str = Field(\n        ...,\n        description=\"The name of the prompt\",\n    )\n    prompt_description: str = Field(\n        ...,\n        description=\"The description of the prompt\",\n    )\n    prompt: str = Field(\n        ...,\n        description=\"The prompt for the agent\",\n    )\n    tags: str = Field(\n        ...,\n        description=\"The tags for the prompt such as sentiment, code, etc. separated by commas.\",\n    )\n    use_cases: Sequence[PromptUseCase] = Field(\n        ...,\n        description=\"The use cases for the prompt\",\n    )\n\n\n# Example usage:\n# Initialize the function caller\nmodel = OpenAIFunctionCaller(\n    system_prompt=\"You're an agent creator; your purpose is to create system prompts for new LLM Agents for the user. Follow the best practices for creating a prompt, such as making it direct and clear. Providing instructions and many-shot examples will help the agent understand the task better.\",\n    max_tokens=1000,\n    temperature=0.5,\n    base_model=PromptSpec,\n    parallel_tool_calls=False,\n)\n\n\n# The OpenAIFunctionCaller class is used to interact with the OpenAI API and make function calls.\nout = model.run(\n    \"Create a prompt for generating quality Rust code with instructions and examples.\"\n)\nprint(out)\n
"},{"location":"swarms/models/openai_function_caller/#example-3-sentiment-analysis","title":"Example 3: Sentiment Analysis","text":"from swarm_models.openai_function_caller import OpenAIFunctionCaller\nfrom pydantic import BaseModel, Field\n\n\n# Pydantic is a data validation library that provides data validation and parsing using Python type hints.\n# It is used here to define the structured output schema for the sentiment analysis result.\nclass SentimentAnalysisCard(BaseModel):\n    text: str = Field(\n        ...,\n        description=\"The text to be analyzed for sentiment rating\",\n    )\n    rating: str = Field(\n        ...,\n        description=\"The sentiment rating of the text from 0.0 to 1.0\",\n    )\n\n\n# Example usage:\n# Initialize the function caller\nmodel = OpenAIFunctionCaller(\n    system_prompt=\"You're a Sentiment Analysis Agent; your purpose is to rate the sentiment of text\",\n    max_tokens=100,\n    temperature=0.5,\n    base_model=SentimentAnalysisCard,\n    parallel_tool_calls=False,\n)\n\n\n# The OpenAIFunctionCaller class is used to interact with the OpenAI API and make function calls.\n# Here, we initialize an instance of the OpenAIFunctionCaller class with the following parameters:\n# - system_prompt: A prompt that sets the context for the conversation with the API.\n# - max_tokens: The maximum number of tokens to generate in the API response.\n# - temperature: A parameter that controls the randomness of the generated text.\n# - base_model: The base model to use for the API calls, in this case, the SentimentAnalysisCard class.\nout = model.run(\"The hotel was average, but the food was excellent.\")\nprint(out)\n
"},{"location":"swarms/models/openai_function_caller/#additional-information-and-tips","title":"Additional Information and Tips","text":"temperature
and top_p
parameters to control the randomness and diversity of the generated responses. Lower values for temperature
will result in more deterministic outputs, while higher values will introduce more variability.parallel_tool_calls
, ensure that the tools you are calling in parallel are thread-safe and can handle concurrent execution.By following this comprehensive guide, you can effectively utilize the OpenAIFunctionCaller
class to generate chat completions using OpenAI's models, customize the response parameters, and handle API interactions seamlessly within your application.
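The effect of `temperature` described in the tips above can be seen numerically: dividing logits by the temperature before the softmax sharpens or flattens the output distribution. A minimal sketch:

```python
import math


def softmax_with_temperature(logits, temperature):
    """Softmax over logits scaled by 1/temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]


logits = [2.0, 1.0, 0.1]
cold = softmax_with_temperature(logits, 0.2)  # near-deterministic
hot = softmax_with_temperature(logits, 2.0)   # closer to uniform
print(max(cold), max(hot))
```

A low temperature concentrates almost all probability mass on the top token, while a high temperature spreads it across alternatives, which is why low values yield more deterministic outputs.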
OpenAITTS
Documentation","text":""},{"location":"swarms/models/openai_tts/#table-of-contents","title":"Table of Contents","text":"The OpenAITTS
module is a Python library that provides an interface for converting text to speech (TTS) using the OpenAI TTS API. It allows you to generate high-quality speech from text input, making it suitable for various applications such as voice assistants, speech synthesis, and more.
To use the OpenAITTS
model, you need to install the necessary dependencies. You can do this using pip
:
pip install swarms requests\n
"},{"location":"swarms/models/openai_tts/#3-usage","title":"3. Usage","text":""},{"location":"swarms/models/openai_tts/#initialization","title":"Initialization","text":"To use the OpenAITTS
module, you need to initialize an instance of the OpenAITTS
class. Here's how you can do it:
from swarm_models.openai_tts import OpenAITTS\n\n# Initialize the OpenAITTS instance\ntts = OpenAITTS(\n model_name=\"tts-1-1106\",\n proxy_url=\"https://api.openai.com/v1/audio/speech\",\n openai_api_key=openai_api_key_env,\n voice=\"onyx\",\n)\n
"},{"location":"swarms/models/openai_tts/#parameters","title":"Parameters:","text":"model_name
(str): The name of the TTS model to use (default is \"tts-1-1106\").proxy_url
(str): The URL for the OpenAI TTS API (default is \"https://api.openai.com/v1/audio/speech\").openai_api_key
(str): Your OpenAI API key. It can be obtained from the OpenAI website.voice
(str): The voice to use for generating speech (default is \"onyx\").chunk_size
(int): The size of data chunks when fetching audio (default is 1024 * 1024 bytes).autosave
(bool): Whether to automatically save the generated speech to a file (default is False).saved_filepath
(str): The path to the file where the speech will be saved (default is \"runs/tts_speech.wav\").Once the OpenAITTS
instance is initialized, you can use it to convert text to speech using the run
method:
# Generate speech from text\nspeech_data = tts.run(\"Hello, world!\")\n
"},{"location":"swarms/models/openai_tts/#parameters_1","title":"Parameters:","text":"task
(str): The text you want to convert to speech.speech_data
(bytes): The generated speech data.You can also use the run_and_save
method to generate speech from text and save it to a file:
# Generate speech from text and save it to a file\nspeech_data = tts.run_and_save(\"Hello, world!\")\n
"},{"location":"swarms/models/openai_tts/#parameters_2","title":"Parameters:","text":"task
(str): The text you want to convert to speech.speech_data
(bytes): The generated speech data.Here's a basic example of how to use the OpenAITTS
module to generate speech from text:
from swarm_models.openai_tts import OpenAITTS\n\n# Initialize the OpenAITTS instance\ntts = OpenAITTS(\n model_name=\"tts-1-1106\",\n proxy_url=\"https://api.openai.com/v1/audio/speech\",\n openai_api_key=openai_api_key_env,\n voice=\"onyx\",\n)\n\n# Generate speech from text\nspeech_data = tts.run(\"Hello, world!\")\n
"},{"location":"swarms/models/openai_tts/#saving-the-output","title":"Saving the Output","text":"You can save the generated speech to a WAV file using the run_and_save
method:
# Generate speech from text and save it to a file\nspeech_data = tts.run_and_save(\"Hello, world!\")\n
"},{"location":"swarms/models/openai_tts/#5-advanced-options","title":"5. Advanced Options","text":"The OpenAITTS
module supports various advanced options for customizing the TTS generation process. You can specify the model name, voice, and other parameters during initialization. Additionally, you can configure the chunk size for audio data fetching and choose whether to automatically save the generated speech to a file.
If you encounter any issues while using the OpenAITTS
module, please make sure you have installed all the required dependencies and that your OpenAI API key is correctly configured. If you still face problems, refer to the OpenAI documentation or contact their support for assistance.
This documentation provides a comprehensive guide on how to use the OpenAITTS
module to convert text to speech using OpenAI's TTS model. It covers initialization, basic usage, advanced options, troubleshooting, and references for further exploration.
Vilt
Documentation","text":""},{"location":"swarms/models/vilt/#introduction","title":"Introduction","text":"Welcome to the documentation for Vilt, a Vision-and-Language Transformer (ViLT) model fine-tuned on the VQAv2 dataset. Vilt is a powerful model capable of answering questions about images. This documentation will provide a comprehensive understanding of Vilt, its architecture, usage, and how it can be integrated into your projects.
"},{"location":"swarms/models/vilt/#overview","title":"Overview","text":"Vilt is based on the Vision-and-Language Transformer (ViLT) architecture, designed for tasks that involve understanding both text and images. It has been fine-tuned on the VQAv2 dataset, making it adept at answering questions about images. This model is particularly useful for tasks where textual and visual information needs to be combined to provide meaningful answers.
"},{"location":"swarms/models/vilt/#class-definition","title":"Class Definition","text":"class Vilt:\n def __init__(self):\n \"\"\"\n Initialize the Vilt model.\n \"\"\"\n
"},{"location":"swarms/models/vilt/#usage","title":"Usage","text":"To use the Vilt model, follow these steps:
from swarm_models import Vilt\n\nmodel = Vilt()\n
output = model(\n \"What is this image?\", \"http://images.cocodataset.org/val2017/000000039769.jpg\"\n)\n
"},{"location":"swarms/models/vilt/#example-1-image-questioning","title":"Example 1 - Image Questioning","text":"model = Vilt()\noutput = model(\n \"What are the objects in this image?\",\n \"http://images.cocodataset.org/val2017/000000039769.jpg\",\n)\nprint(output)\n
"},{"location":"swarms/models/vilt/#example-2-image-analysis","title":"Example 2 - Image Analysis","text":"model = Vilt()\noutput = model(\n \"Describe the scene in this image.\",\n \"http://images.cocodataset.org/val2017/000000039769.jpg\",\n)\nprint(output)\n
"},{"location":"swarms/models/vilt/#example-3-visual-knowledge-retrieval","title":"Example 3 - Visual Knowledge Retrieval","text":"model = Vilt()\noutput = model(\n \"Tell me more about the landmark in this image.\",\n \"http://images.cocodataset.org/val2017/000000039769.jpg\",\n)\nprint(output)\n
"},{"location":"swarms/models/vilt/#how-vilt-works","title":"How Vilt Works","text":"Vilt operates by combining text and image information to generate meaningful answers to questions about the provided image. Here's how it works:
Initialization: When you create a Vilt instance, it initializes the processor and the model. The processor is responsible for handling the image and text input, while the model is the fine-tuned ViLT model.
Processing Input: When you call the Vilt model with a text question and an image URL, it downloads the image and processes it along with the text question. This processing step involves tokenization and encoding of the input.
Forward Pass: The encoded input is then passed through the ViLT model. It calculates the logits, and the answer with the highest probability is selected.
Output: The predicted answer is returned as the output of the model.
Vilt does not require any specific parameters during initialization. It is pre-configured to work with the \"dandelin/vilt-b32-finetuned-vqa\" model.
"},{"location":"swarms/models/vilt/#additional-information","title":"Additional Information","text":"That concludes the documentation for Vilt. We hope you find this model useful for your vision-and-language tasks. If you have any questions or encounter any issues, please refer to the Hugging Face Transformers documentation for further assistance. Enjoy working with Vilt!
"},{"location":"swarms/prompts/essence/","title":"The Essence of Enterprise-Grade Prompting","text":"Large Language Models (LLMs) like GPT-4 have revolutionized the landscape of AI-driven automation, customer support, marketing, and more. However, extracting the highest quality output from these models requires a thoughtful approach to crafting prompts\u2014an endeavor that goes beyond mere trial and error. In enterprise settings, where consistency, quality, and performance are paramount, enterprise-grade prompting has emerged as a structured discipline, combining art with the science of human-machine communication.
Enterprise-grade prompting involves understanding the intricate dynamics between language models, context, and the task at hand. It requires knowledge of not only the technical capabilities of LLMs but also the intricacies of how they interpret human language. Effective prompting becomes the linchpin for ensuring that AI-driven outputs are accurate, reliable, and aligned with business needs. It is this discipline that turns raw AI capabilities into tangible enterprise value.
In this essay, we will dissect the essence of enterprise-grade prompting, explore the most effective prompting strategies, explain what works and what doesn't, and conclude with the current holy grail of automated prompt engineering. We will also share concrete examples and illustrations of each technique, with a particular focus on their application in an enterprise setting.
"},{"location":"swarms/prompts/essence/#1-foundational-principles-of-prompting","title":"1. Foundational Principles of Prompting","text":"The effectiveness of prompting lies in understanding both the capabilities and limitations of LLMs. A well-structured prompt helps LLMs focus on the most relevant information while avoiding ambiguities that can lead to unreliable results. In enterprise-grade contexts, prompts must be designed with the end-user's expectations in mind, ensuring quality, safety, scalability, and traceability.
Example: Instead of prompting \"Explain what a blockchain is,\" an enterprise-grade prompt might be \"Explain the concept of blockchain, focusing on how distributed ledgers help increase transparency in supply chain management. Keep the explanation under 200 words for a general audience.\" This prompt provides clear, relevant, and concise instructions tailored to specific needs.
"},{"location":"swarms/prompts/essence/#2-best-prompting-strategies","title":"2. Best Prompting Strategies","text":"The field of enterprise-grade prompting employs numerous strategies to maximize the quality of LLM output. Here are some of the most effective ones:
"},{"location":"swarms/prompts/essence/#21-instruction-based-prompting","title":"2.1. Instruction-Based Prompting","text":"Instruction-based prompting provides explicit instructions for the LLM to follow. This approach is valuable in enterprise applications where responses must adhere to a specific tone, structure, or depth of analysis.
Example:
This prompt is highly effective because it instructs the model on what format (bullet points), audience (marketing team), and depth (summary) to produce, minimizing the risk of irrelevant details.
Why It Works: LLMs excel when they have a clear set of rules to follow. Enterprises benefit from this structured approach, as it ensures consistency across multiple use cases, be it marketing, HR, or customer service. Clear instructions also make it easier to validate outputs against defined expectations, which is crucial for maintaining quality.
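The pattern above can be sketched programmatically. The helper below is purely illustrative (the field names and template are assumptions, not part of the Swarms API); it shows how format, audience, and depth constraints can be composed into a single instruction-based prompt:

```python
# Illustrative helper for composing instruction-based prompts.
# The template and field names are examples, not a Swarms API.
def build_instruction_prompt(task: str, audience: str, fmt: str, depth: str) -> str:
    return (
        f"{task}\n"
        f"Audience: {audience}.\n"
        f"Format: {fmt}.\n"
        f"Depth: {depth}."
    )

prompt = build_instruction_prompt(
    task="Summarize our Q3 product launch results.",
    audience="the marketing team",
    fmt="bullet points",
    depth="high-level summary, under 150 words",
)
print(prompt)
```

Because every constraint lives in a named field, outputs can be validated against the same fields, which supports the consistency requirements described above.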
"},{"location":"swarms/prompts/essence/#22-multi-shot-prompting","title":"2.2. Multi-Shot Prompting","text":"Multi-shot prompting provides several examples before asking the model to complete a task. This helps set expectations by showing the model the desired style and type of output.
Example:
\"Customer: 'I received a damaged item.' Response: 'We apologize for the damaged item. Please provide us with your order number so we can send a replacement.'
Customer: 'The app keeps crashing on my phone.' Response:\"
Why It Works: Multi-shot prompting is highly effective in enterprise-grade applications where consistency is critical. Showing multiple examples helps the model learn patterns without needing extensive fine-tuning, saving both time and cost. Enterprises can leverage this technique to ensure that responses remain aligned with brand standards and customer expectations across different departments.
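A minimal sketch of how such a few-shot prompt can be assembled from example pairs (the helper and its wording are illustrative, not a Swarms API):

```python
# Illustrative few-shot prompt assembly; the example pairs are hypothetical.
examples = [
    ("I received a damaged item.",
     "We apologize for the damaged item. Please provide us with your order "
     "number so we can send a replacement."),
]

def build_multi_shot_prompt(examples, new_inquiry: str) -> str:
    lines = ["Respond to customer inquiries in the style of these examples:"]
    for customer, response in examples:
        lines.append(f"Customer: '{customer}' Response: '{response}'")
    # End with the open slot the model is asked to complete.
    lines.append(f"Customer: '{new_inquiry}' Response:")
    return "\n".join(lines)

prompt = build_multi_shot_prompt(examples, "The app keeps crashing on my phone.")
print(prompt)
```

Ending the prompt at the open `Response:` slot is what cues the model to continue in the demonstrated style.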
"},{"location":"swarms/prompts/essence/#23-chain-of-thought-prompting","title":"2.3. Chain of Thought Prompting","text":"Chain of Thought (CoT) prompting helps LLMs generate reasoning steps explicitly before arriving at an answer. This method is useful for complex problem-solving tasks or when transparency in decision-making is important.
Example:
Why It Works: CoT prompting allows the model to work through the process iteratively, providing more explainable results. In enterprise applications where complex decision-making is involved, this strategy ensures stakeholders understand why a particular output was generated. This transparency is crucial in high-stakes areas like finance, healthcare, and logistics, where understanding the reasoning behind an output is as important as the output itself.
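One common way to elicit chain-of-thought reasoning is to append an explicit step-by-step instruction to the question. The wrapper below is a sketch of that pattern; the exact wording is an assumption, and teams should tune it to their domain:

```python
# Illustrative chain-of-thought wrapper; the suffix wording is one
# common pattern, not a prescribed Swarms template.
def build_cot_prompt(question: str) -> str:
    return (
        f"{question}\n"
        "Reason through the problem step by step, numbering each step, "
        "then state the final answer on its own line prefixed with 'Answer:'."
    )

prompt = build_cot_prompt(
    "A warehouse ships 240 units per day and demand rises 15%. "
    "How many units per day are needed?"
)
print(prompt)
```

Requiring a fixed `Answer:` prefix also makes the final result easy to parse out of the reasoning trace downstream.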
"},{"location":"swarms/prompts/essence/#24-iterative-feedback-and-adaptive-prompting","title":"2.4. Iterative Feedback and Adaptive Prompting","text":"Iterative prompting involves providing multiple prompts or rounds of feedback to refine the output. Adaptive prompts take prior responses and adjust based on context, ensuring the final output meets the required standard.
Example:
Why It Works: Enterprises require output that is precise and tailored to brand identity. Iterative feedback provides an effective means to adjust and refine outputs until the desired quality is achieved. By breaking down the task into multiple feedback loops, enterprises can ensure the final output is aligned with their core values and objectives.
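The feedback loop described above can be sketched as follows. Here `generate` and `critique` are hypothetical stubs standing in for an LLM call and a quality check; the loop structure, not the stubs, is the point:

```python
# Minimal sketch of an iterative-feedback loop. `generate` stands in for
# an LLM call and `critique` for a quality check; both are stubs.
def generate(prompt: str) -> str:
    # Stub: pretend the model produces a draft that echoes the prompt.
    return f"Draft addressing: {prompt}"

def critique(output: str, required_terms) -> list:
    # Report which required terms the draft failed to cover.
    return [t for t in required_terms if t not in output]

def refine(prompt: str, required_terms, max_rounds: int = 3) -> str:
    for _ in range(max_rounds):
        output = generate(prompt)
        missing = critique(output, required_terms)
        if not missing:
            return output
        # Fold the feedback back into the next round's prompt.
        prompt += " Also cover: " + ", ".join(missing)
    return output

result = refine("Write a product description.", ["warranty", "pricing"])
print(result)
```

Capping the loop with `max_rounds` keeps cost and latency bounded, which matters in production settings.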
"},{"location":"swarms/prompts/essence/#25-contextual-expansion-for-enhanced-relevance","title":"2.5. Contextual Expansion for Enhanced Relevance","text":"A lesser-known but powerful strategy is contextual expansion. This involves expanding the prompt to include broader information about the context, thereby allowing the model to generate richer, more relevant responses.
Example:
Why It Works: By including more context, the prompt allows the model to generate a response that feels more tailored to the customer's situation, enhancing both satisfaction and trust. Enterprises benefit from this approach by increasing the quality of customer service interactions.
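A sketch of contextual expansion: merge known facts about the situation into the prompt before the task statement. The field names below are hypothetical placeholders for whatever context an enterprise actually tracks:

```python
# Illustrative contextual expansion: prepend known customer context so
# the response feels tailored. Field names are hypothetical.
def expand_prompt(base: str, context: dict) -> str:
    ctx = "; ".join(f"{k}: {v}" for k, v in context.items())
    return f"Context ({ctx}).\n{base}"

prompt = expand_prompt(
    "Draft a reply to the customer's billing question.",
    {
        "customer tier": "enterprise",
        "account age": "4 years",
        "recent issue": "duplicate invoice",
    },
)
print(prompt)
```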
"},{"location":"swarms/prompts/essence/#3-what-doesnt-work-in-prompting","title":"3. What Doesn't Work in Prompting","text":"While the above methods are effective, prompting can often fall short in certain scenarios:
"},{"location":"swarms/prompts/essence/#31-overly-vague-prompts","title":"3.1. Overly Vague Prompts","text":"An insufficiently detailed prompt results in vague outputs. For example, simply asking \"What are some strategies to grow a business?\" can lead to generic responses that lack actionable insight. Vague prompts are particularly problematic in enterprise settings where specificity is crucial to drive action.
"},{"location":"swarms/prompts/essence/#32-excessive-length","title":"3.2. Excessive Length","text":"Overloading a prompt with details often causes the LLM to become confused, producing incomplete or inaccurate responses. For example, \"Explain blockchain, focusing on cryptographic methods, network nodes, ledger distribution, proof of work, mining processes, hash functions, transaction validation, etc.\" attempts to include too many subjects for a concise response. Enterprise-grade prompts should focus on a specific area to avoid overwhelming the model and degrading the output quality.
"},{"location":"swarms/prompts/essence/#33-ambiguity-in-expected-output","title":"3.3. Ambiguity in Expected Output","text":"Ambiguity arises when prompts don't clearly specify the desired output format, tone, or length. For example, asking \"Describe our new product\" without specifying whether it should be a single-line summary, a paragraph, or a technical overview can lead to an unpredictable response. Enterprises must clearly define expectations to ensure consistent and high-quality outputs.
"},{"location":"swarms/prompts/essence/#4-the-holy-grail-automated-prompt-engineering","title":"4. The Holy Grail: Automated Prompt Engineering","text":"In an enterprise setting, scaling prompt engineering for consistency and high performance remains a key challenge. Automated Prompt Engineering (APE) offers a potential solution for bridging the gap between individual craftsmanship and enterprise-wide implementation.
4.1. AI-Augmented Prompt Design
Automated Prompt Engineering tools can evaluate the outputs generated by various prompts, selecting the one with the highest quality metrics. These tools can be trained to understand what constitutes an ideal response for specific enterprise contexts.
Example:
Why It Works: AI-Augmented Prompt Design reduces the need for manual intervention and standardizes the quality of responses across the organization. This approach helps enterprises maintain consistency while saving valuable time that would otherwise be spent on trial-and-error prompting.
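The selection step at the heart of this approach can be sketched as "run each candidate prompt, score the output, keep the best." Below, `run_model` and `score` are stubs standing in for an LLM call and a real quality metric:

```python
# Sketch of automated prompt selection: score each candidate prompt's
# output and keep the best. `run_model` and `score` are stubs.
def run_model(prompt: str) -> str:
    return f"Response to: {prompt}"

def score(output: str) -> float:
    # Stub metric: reward outputs that mention 'summary', penalize length.
    return ("summary" in output) * 1.0 - len(output) / 1000.0

candidates = [
    "Describe our product.",
    "Write a 100-word summary of our product for new customers.",
]
best = max(candidates, key=lambda p: score(run_model(p)))
print(best)
```

In practice the scoring function is the hard part; enterprises typically combine automatic metrics with human ratings.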
4.2. Reinforcement Learning for Prompts (RLP)
Using Reinforcement Learning for Prompts involves training models to automatically iterate on prompts to improve the quality of the final output. The model is rewarded for generating responses that align with predefined criteria, such as clarity, completeness, or relevance.
Example:
Why It Works: RLP can significantly improve the quality of complex outputs over time. Enterprises that require a high level of precision, such as those in legal or compliance-related applications, benefit from RLP because it ensures outputs meet stringent standards.
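A toy sketch of the iterate-and-reward idea: mutate the prompt and keep variants whose reward improves. The reward function and mutation pool below are illustrative stubs, not a real RLP training setup:

```python
import random

# Toy sketch of reinforcement-style prompt iteration: mutate the prompt
# and keep variants whose (stubbed) reward improves.
def reward(prompt: str) -> int:
    text = prompt.lower()
    # Stub reward: count explicit quality constraints present in the prompt.
    return sum(kw in text for kw in ("cite", "concise", "plain language"))

def mutate(prompt: str, additions) -> str:
    return prompt + " " + random.choice(additions)

random.seed(0)  # deterministic for illustration
prompt = "Explain the new compliance policy."
additions = ["Be concise.", "Use plain language.", "Cite the relevant clause."]
for _ in range(10):
    candidate = mutate(prompt, additions)
    if reward(candidate) > reward(prompt):
        prompt = candidate
print(prompt)
```

Real RLP replaces the stub reward with a learned model of output quality and updates a policy rather than a single prompt string.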
4.3. Dynamic Contextual Adaptation
Another aspect of automated prompt engineering involves adapting prompts in real time based on user context. For example, if a user interacting with a customer support bot seems frustrated (as detected by sentiment analysis), an adaptive prompt may be used to generate a more empathetic response.
Example:
Why It Works: In dynamic enterprise environments, where every user experience matters, adapting prompts to the immediate context can significantly improve customer satisfaction. Real-time adaptation allows the model to be more responsive and attuned to customer needs, thereby fostering loyalty and trust.
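The adaptation step can be sketched as a sentiment gate in front of prompt construction. `detect_sentiment` below is a keyword stub standing in for a real sentiment-analysis model:

```python
import re

# Sketch of sentiment-aware prompt adaptation; detect_sentiment is a
# hypothetical stand-in for a real sentiment-analysis model.
NEGATIVE_MARKERS = {"frustrated", "angry", "unacceptable", "terrible"}

def detect_sentiment(message: str) -> str:
    words = set(re.findall(r"[a-z']+", message.lower()))
    return "negative" if words & NEGATIVE_MARKERS else "neutral"

def adapt_prompt(message: str) -> str:
    base = f"Respond to this customer message: '{message}'"
    if detect_sentiment(message) == "negative":
        return ("Acknowledge the customer's frustration and apologize "
                "before answering. " + base)
    return base

prompt = adapt_prompt("This is unacceptable, my order is late again!")
print(prompt)
```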
4.4. Collaborative Prompt Refinement
Automated prompt engineering can also involve collaboration between AI models and human experts. Collaborative Prompt Refinement (CPR) allows human operators to provide iterative guidance, which the model then uses to enhance its understanding and improve future outputs.
Example:
Why It Works: CPR bridges the gap between human expertise and machine efficiency, ensuring that outputs are not only technically accurate but also aligned with expert expectations. This iterative learning loop enhances the model\u2019s ability to autonomously generate high-quality content.
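The accumulation of expert guidance can be sketched as a prompt object that folds feedback into a growing guideline list. This class is purely illustrative and is not part of the Swarms API:

```python
# Sketch of collaborative prompt refinement: expert feedback accumulates
# into guidelines prepended to future prompts. Purely illustrative.
class RefinablePrompt:
    def __init__(self, base: str):
        self.base = base
        self.guidelines = []

    def add_feedback(self, note: str) -> None:
        self.guidelines.append(note)

    def render(self) -> str:
        rules = "".join(f"- {g}\n" for g in self.guidelines)
        return f"Guidelines:\n{rules}{self.base}" if rules else self.base

p = RefinablePrompt("Draft the quarterly compliance memo.")
p.add_feedback("Always reference the relevant regulation section.")
prompt = p.render()
print(prompt)
```

Each round of human review adds a guideline, so later outputs inherit every earlier correction without retraining.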
"},{"location":"swarms/prompts/essence/#5-the-future-of-enterprise-grade-prompting","title":"5. The Future of Enterprise-Grade Prompting","text":"The future of enterprise-grade prompting is in leveraging automation, context-awareness, and reinforcement learning. By moving from static prompts to dynamic, learning-enabled systems, enterprises can ensure consistent and optimized communication across their AI systems.
Automated systems such as APE and RLP are in their early stages, but they have the potential to deliver highly scalable prompting solutions that automatically evolve based on user feedback and performance metrics. As more sophisticated models and methods become available, enterprise-grade prompting will likely involve:
The rise of self-improving prompting systems marks a significant shift in how enterprises leverage AI for communication and decision-making. As more sophisticated models emerge, we anticipate a greater emphasis on adaptability, real-time learning, and seamless integration with existing business processes.
Conclusion
Enterprise-grade prompting elevates the art of crafting effective prompts into a well-defined process, merging structure with creativity and guided refinement. By understanding the foundational principles, leveraging strategies like instruction-based and chain-of-thought prompting, and adopting automation, enterprises can consistently extract high-quality results from LLMs.
The evolution towards automated prompt engineering is transforming enterprise AI use from reactive problem-solving to proactive, intelligent decision-making. As the enterprise AI ecosystem matures, prompting will continue to be the linchpin that aligns the capabilities of LLMs with real-world business needs, ensuring optimal outcomes at scale.
Whether it's customer support, compliance, marketing, or operational analytics, the strategies outlined in this essay\u2014paired with advancements in automated prompt engineering\u2014hold the key to effective, scalable, and enterprise-grade utilization of AI models. Enterprises that invest in these methodologies today are likely to maintain a competitive edge in an increasingly automated business landscape.
Next Steps
This essay is a stepping stone towards understanding enterprise-grade prompting. We encourage AI teams to start experimenting with these prompting techniques in sandbox environments, identify what works best for their needs, and gradually iterate. Automation is the future, and investing in automated prompt engineering today will yield highly optimized, scalable solutions that consistently deliver value.
Ready to take the next step? Let\u2019s explore how to design adaptive prompting frameworks tailored to your enterprise\u2019s unique requirements.
"},{"location":"swarms/prompts/main/","title":"Managing Prompts in Production","text":"The Prompt
class provides a comprehensive solution for managing prompts, including advanced features like version control, autosaving, and logging. This guide will walk you through how to effectively use this class in a production environment, focusing on its core features, use cases, and best practices.
Before diving into how to use the Prompt
class, ensure that you have the required dependencies installed:
pip3 install -U swarms\n
"},{"location":"swarms/prompts/main/#creating-a-new-prompt","title":"Creating a New Prompt","text":"To create a new instance of a Prompt
, simply initialize it with the required attributes such as content
:
from swarms import Prompt\n\nprompt = Prompt(\n content=\"This is my first prompt!\",\n name=\"My First Prompt\",\n description=\"A simple example prompt.\"\n)\n\nprint(prompt)\n
This creates a new prompt with the current timestamp and a unique identifier.
"},{"location":"swarms/prompts/main/#2-managing-prompt-content","title":"2. Managing Prompt Content","text":""},{"location":"swarms/prompts/main/#editing-prompts","title":"Editing Prompts","text":"Once you have initialized a prompt, you can edit its content using the edit_prompt
method. Each time the content is edited, a new version is stored in the edit_history
, and the last_modified_at
timestamp is updated.
new_content = \"This is an updated version of my prompt.\"\nprompt.edit_prompt(new_content)\n
Note: If the new content is identical to the current content, an error will be raised to prevent unnecessary edits:
try:\n prompt.edit_prompt(\"This is my first prompt!\") # Same as initial content\nexcept ValueError as e:\n print(e) # Output: New content must be different from the current content.\n
"},{"location":"swarms/prompts/main/#retrieving-prompt-content","title":"Retrieving Prompt Content","text":"You can retrieve the current prompt content using the get_prompt
method:
current_content = prompt.get_prompt()\nprint(current_content) # Output: This is an updated version of my prompt.\n
This method also logs telemetry data, which includes both system information and prompt metadata.
"},{"location":"swarms/prompts/main/#3-version-control","title":"3. Version Control","text":""},{"location":"swarms/prompts/main/#tracking-edits-and-history","title":"Tracking Edits and History","text":"The Prompt
class automatically tracks every change made to the prompt. This is stored in the edit_history
attribute as a list of previous versions.
print(prompt.edit_history) # Output: ['This is my first prompt!', 'This is an updated version of my prompt.']\n
The number of edits is also tracked using the edit_count
attribute:
print(prompt.edit_count) # Output: 2\n
"},{"location":"swarms/prompts/main/#rolling-back-to-previous-versions","title":"Rolling Back to Previous Versions","text":"If you want to revert a prompt to a previous version, you can use the rollback
method, passing the version index you want to revert to:
prompt.rollback(0)\nprint(prompt.get_prompt()) # Output: This is my first prompt!\n
The rollback operation is thread-safe, and any rollback also triggers a telemetry log.
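To make the edit-history and rollback semantics concrete, here is a minimal re-implementation of the mechanics described above. This is an illustrative sketch, not the actual swarms Prompt class (which adds telemetry, autosave, and thread safety):

```python
# Illustrative mechanics of edit-history rollback (not the actual
# swarms Prompt implementation).
class VersionedPrompt:
    def __init__(self, content: str):
        self.content = content
        self.edit_history = [content]  # version 0 is the initial content

    def edit(self, new_content: str) -> None:
        if new_content == self.content:
            raise ValueError(
                "New content must be different from the current content."
            )
        self.content = new_content
        self.edit_history.append(new_content)

    def rollback(self, version: int) -> None:
        if not 0 <= version < len(self.edit_history):
            raise IndexError("Invalid version number for rollback.")
        self.content = self.edit_history[version]

p = VersionedPrompt("This is my first prompt!")
p.edit("This is an updated version of my prompt.")
p.rollback(0)
print(p.content)
```

Note that rollback restores an earlier version without deleting the history, so every version index remains valid afterwards.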
"},{"location":"swarms/prompts/main/#4-autosaving-prompts","title":"4. Autosaving Prompts","text":""},{"location":"swarms/prompts/main/#enabling-and-configuring-autosave","title":"Enabling and Configuring Autosave","text":"To automatically save prompts to storage after every change, you can enable the autosave
feature when initializing the prompt:
prompt = Prompt(\n content=\"This is my first prompt!\",\n autosave=True,\n autosave_folder=\"my_prompts\" # Specify the folder within WORKSPACE_DIR\n)\n
This will ensure that every edit or rollback action triggers an autosave to the specified folder.
"},{"location":"swarms/prompts/main/#manually-triggering-autosave","title":"Manually Triggering Autosave","text":"You can also manually trigger an autosave by calling the _autosave
method (which is a private method typically used internally):
prompt._autosave() # Manually triggers autosaving\n
Autosaves are stored as JSON files in the folder specified by autosave_folder
under the workspace directory (WORKSPACE_DIR
environment variable).
The Prompt
class integrates with the loguru
logging library to provide detailed logs for every major action, such as editing, rolling back, and saving. The log_telemetry
method captures and logs system data, including prompt metadata, for each operation.
Here's an example of a log when editing a prompt:
2024-10-10 10:12:34.567 | INFO | Editing prompt a7b8f9. Current content: 'This is my first prompt!'\n2024-10-10 10:12:34.789 | DEBUG | Prompt a7b8f9 updated. Edit count: 1. New content: 'This is an updated version of my prompt.'\n
You can extend logging by integrating the log_telemetry
method with your own telemetry systems or databases:
prompt.log_telemetry()\n
"},{"location":"swarms/prompts/main/#6-handling-errors","title":"6. Handling Errors","text":"Error handling in the Prompt
class is robust and prevents common mistakes, such as editing with identical content or rolling back to an invalid version. Here's a common scenario:
try:\n prompt.edit_prompt(\"This is an updated version of my prompt.\")\nexcept ValueError as e:\n print(e) # Output: New content must be different from the current content.\n
"},{"location":"swarms/prompts/main/#invalid-rollback-version","title":"Invalid Rollback Version","text":"try:\n prompt.rollback(10) # Invalid version index\nexcept IndexError as e:\n print(e) # Output: Invalid version number for rollback.\n
Always ensure that version numbers passed to rollback
are within the valid range of existing versions.
The Prompt
class currently includes a placeholder for saving and loading prompts from persistent storage. You can override the save_to_storage
and load_from_storage
methods to integrate with databases, cloud storage, or other persistent layers.
Here's how you can implement the save functionality:
def save_to_storage(self):\n # Example of saving to a database or cloud storage\n data = self.model_dump()\n save_to_database(data) # Custom function to save data\n
Similarly, you can implement a load_from_storage
function to load the prompt from a storage location using its unique identifier (id
).
from swarms.prompts.prompt import Prompt\n\n# Example 1: Initializing a Financial Report Prompt\nfinancial_prompt = Prompt(\n content=\"Q1 2024 Earnings Report: Initial Draft\", autosave=True\n)\n\n# Output the initial state of the prompt\nprint(\"\\n--- Example 1: Initializing Prompt ---\")\nprint(f\"Prompt ID: {financial_prompt.id}\")\nprint(f\"Content: {financial_prompt.content}\")\nprint(f\"Created At: {financial_prompt.created_at}\")\nprint(f\"Edit Count: {financial_prompt.edit_count}\")\nprint(f\"History: {financial_prompt.edit_history}\")\n\n\n# Example 2: Editing a Financial Report Prompt\nfinancial_prompt.edit_prompt(\n \"Q1 2024 Earnings Report: Updated Revenue Figures\"\n)\n\n# Output the updated state of the prompt\nprint(\"\\n--- Example 2: Editing Prompt ---\")\nprint(f\"Content after edit: {financial_prompt.content}\")\nprint(f\"Edit Count: {financial_prompt.edit_count}\")\nprint(f\"History: {financial_prompt.edit_history}\")\n\n\n# Example 3: Rolling Back to a Previous Version\nfinancial_prompt.edit_prompt(\"Q1 2024 Earnings Report: Final Version\")\nfinancial_prompt.rollback(\n 1\n) # Roll back to the second version (index 1)\n\n# Output the state after rollback\nprint(\"\\n--- Example 3: Rolling Back ---\")\nprint(f\"Content after rollback: {financial_prompt.content}\")\nprint(f\"Edit Count: {financial_prompt.edit_count}\")\nprint(f\"History: {financial_prompt.edit_history}\")\n\n\n# Example 4: Handling Invalid Rollback\nprint(\"\\n--- Example 4: Invalid Rollback ---\")\ntry:\n financial_prompt.rollback(\n 5\n ) # Attempt an invalid rollback (out of bounds)\nexcept IndexError as e:\n print(f\"Error: {e}\")\n\n\n# Example 5: Preventing Duplicate Edits\nprint(\"\\n--- Example 5: Preventing Duplicate Edits ---\")\ntry:\n financial_prompt.edit_prompt(\n \"Q1 2024 Earnings Report: Updated Revenue Figures\"\n ) # Duplicate content\nexcept ValueError as e:\n print(f\"Error: {e}\")\n\n\n# Example 6: Retrieving the Prompt Content as a 
String\nprint(\"\\n--- Example 6: Retrieving Prompt as String ---\")\ncurrent_content = financial_prompt.get_prompt()\nprint(f\"Current Prompt Content: {current_content}\")\n\n\n# Example 7: Simulating Financial Report Changes Over Time\nprint(\"\\n--- Example 7: Simulating Changes Over Time ---\")\n# Initialize a new prompt representing an initial financial report draft\nfinancial_prompt = Prompt(\n content=\"Q2 2024 Earnings Report: Initial Draft\"\n)\n\n# Simulate several updates over time\nfinancial_prompt.edit_prompt(\n \"Q2 2024 Earnings Report: Updated Forecasts\"\n)\nfinancial_prompt.edit_prompt(\n \"Q2 2024 Earnings Report: Revenue Adjustments\"\n)\nfinancial_prompt.edit_prompt(\"Q2 2024 Earnings Report: Final Review\")\n\n# Display full history\nprint(f\"Final Content: {financial_prompt.content}\")\nprint(f\"Edit Count: {financial_prompt.edit_count}\")\nprint(f\"Edit History: {financial_prompt.edit_history}\")\n
"},{"location":"swarms/prompts/main/#8-conclusion","title":"8. Conclusion","text":"This guide covered how to effectively use the Prompt
class in production environments, including core features like editing, version control, autosaving, and logging. By following the best practices outlined here, you can ensure that your prompts are managed efficiently, with minimal overhead and maximum flexibility.
The Prompt
class is designed with scalability and robustness in mind, making it a great choice for managing prompt content in multi-agent architectures or any application where dynamic prompt management is required. Feel free to extend the functionality to suit your needs, whether it's integrating with persistent storage or enhancing logging mechanisms.
By using this architecture, you'll be able to scale your system effortlessly while maintaining detailed version control and history of every interaction with your prompts.
"},{"location":"swarms/structs/","title":"Introduction to Multi-Agent Collaboration","text":""},{"location":"swarms/structs/#benefits-of-multi-agent-collaboration","title":"\ud83d\ude80 Benefits of Multi-Agent Collaboration","text":"Fig. 1: Key benefits and structure of multi-agent collaboration"},{"location":"swarms/structs/#why-multi-agent-architectures","title":"Why Multi-Agent Architectures?","text":"Multi-agent systems unlock new levels of intelligence, reliability, and efficiency by enabling agents to work together. Here are the core benefits:
swarms
provides a variety of powerful, pre-built multi-agent architectures enabling you to orchestrate agents in various ways. Choose the right structure for your specific problem to build efficient and reliable production systems.
a -> b, c
) between agents. Flexible and adaptive workflows, task distribution, dynamic routing. GraphWorkflow Orchestrates agents as nodes in a Directed Acyclic Graph (DAG). Complex projects with intricate dependencies, like software builds. MixtureOfAgents (MoA) Utilizes multiple expert agents in parallel and synthesizes their outputs. Complex problem-solving, achieving state-of-the-art performance through collaboration. GroupChat Agents collaborate and make decisions through a conversational interface. Real-time collaborative decision-making, negotiations, brainstorming. ForestSwarm Dynamically selects the most suitable agent or tree of agents for a given task. Task routing, optimizing for expertise, complex decision-making trees. SpreadSheetSwarm Manages thousands of agents concurrently, tracking tasks and outputs in a structured format. Massive-scale parallel operations, large-scale data generation and analysis. SwarmRouter Universal orchestrator that provides a single interface to run any type of swarm with dynamic selection. Simplifying complex workflows, switching between swarm strategies, unified multi-agent management. HierarchicalSwarm Director agent coordinates specialized worker agents in a hierarchy. Complex, multi-stage tasks, iterative refinement, enterprise workflows. Board of Directors Board of directors convenes to discuss, vote, and reach consensus on task distribution. Democratic decision-making, corporate governance, collective intelligence, strategic planning. Hybrid Hierarchical-Cluster Swarm (HHCS) Router agent distributes tasks to specialized swarms for parallel, hierarchical processing. Enterprise-scale, multi-domain, and highly complex workflows."},{"location":"swarms/structs/#hierarchicalswarm-example","title":"\ud83c\udfe2 HierarchicalSwarm Example","text":"Hierarchical architectures enable structured, iterative, and scalable problem-solving by combining a director (or router) agent with specialized worker agents or swarms. 
Below are two key patterns:
from swarms import Agent\nfrom swarms.structs.hiearchical_swarm import HierarchicalSwarm\n\n# Create specialized agents\nresearch_agent = Agent(\n agent_name=\"Research-Specialist\",\n agent_description=\"Expert in market research and analysis\",\n model_name=\"gpt-4o\",\n)\nfinancial_agent = Agent(\n agent_name=\"Financial-Analyst\",\n agent_description=\"Specialist in financial analysis and valuation\",\n model_name=\"gpt-4o\",\n)\n\n# Initialize the hierarchical swarm\nswarm = HierarchicalSwarm(\n name=\"Financial-Analysis-Swarm\",\n description=\"A hierarchical swarm for comprehensive financial analysis\",\n agents=[research_agent, financial_agent],\n max_loops=2,\n verbose=True,\n)\n\n# Execute a complex task\nresult = swarm.run(task=\"Analyze the market potential for Tesla (TSLA) stock\")\nprint(result)\n
Full HierarchicalSwarm Documentation \u2192
"},{"location":"swarms/structs/#board-of-directors-example","title":"\ud83c\udfdb\ufe0f Board of Directors Example","text":"The Board of Directors provides a sophisticated democratic alternative to the single Director pattern, enabling collective decision-making through voting and consensus. This approach is ideal for corporate governance, strategic planning, and scenarios requiring multiple perspectives.
from swarms import Agent\nfrom swarms.structs.board_of_directors_swarm import (\n BoardOfDirectorsSwarm,\n BoardMember,\n BoardMemberRole\n)\n\n# Create board members with specific roles\nchairman = Agent(\n agent_name=\"Chairman\",\n agent_description=\"Chairman of the Board responsible for leading meetings\",\n model_name=\"gpt-4o-mini\",\n system_prompt=\"You are the Chairman of the Board...\"\n)\n\nvice_chairman = Agent(\n agent_name=\"Vice-Chairman\",\n agent_description=\"Vice Chairman who supports the Chairman\",\n model_name=\"gpt-4o-mini\",\n system_prompt=\"You are the Vice Chairman...\"\n)\n\n# Create BoardMember objects with roles and expertise\nboard_members = [\n BoardMember(chairman, BoardMemberRole.CHAIRMAN, 1.5, [\"leadership\", \"strategy\"]),\n BoardMember(vice_chairman, BoardMemberRole.VICE_CHAIRMAN, 1.2, [\"operations\", \"coordination\"]),\n]\n\n# Create worker agents\nresearch_agent = Agent(\n agent_name=\"Research-Specialist\",\n agent_description=\"Expert in market research and analysis\",\n model_name=\"gpt-4o\",\n)\n\nfinancial_agent = Agent(\n agent_name=\"Financial-Analyst\",\n agent_description=\"Specialist in financial analysis and valuation\",\n model_name=\"gpt-4o\",\n)\n\n# Initialize the Board of Directors swarm\nboard_swarm = BoardOfDirectorsSwarm(\n name=\"Executive_Board_Swarm\",\n description=\"Executive board with specialized roles for strategic decision-making\",\n board_members=board_members,\n agents=[research_agent, financial_agent],\n max_loops=2,\n verbose=True,\n decision_threshold=0.6,\n enable_voting=True,\n enable_consensus=True,\n)\n\n# Execute a complex task with democratic decision-making\nresult = board_swarm.run(task=\"Analyze the market potential for Tesla (TSLA) stock\")\nprint(result)\n
Full Board of Directors Documentation \u2192
"},{"location":"swarms/structs/#sequentialworkflow","title":"SequentialWorkflow","text":"A SequentialWorkflow
executes tasks in a strict order, forming a pipeline where each agent builds upon the work of the previous one. SequentialWorkflow
is ideal for processes with clear, ordered steps, ensuring that tasks with dependencies are handled correctly.
from swarms import Agent, SequentialWorkflow\n\n# Initialize agents for a 3-step process\n# 1. Generate an idea\nidea_generator = Agent(agent_name=\"IdeaGenerator\", system_prompt=\"Generate a unique startup idea.\", model_name=\"gpt-4o-mini\")\n# 2. Validate the idea\nvalidator = Agent(agent_name=\"Validator\", system_prompt=\"Take this startup idea and analyze its market viability.\", model_name=\"gpt-4o-mini\")\n# 3. Create a pitch\npitch_creator = Agent(agent_name=\"PitchCreator\", system_prompt=\"Write a 3-sentence elevator pitch for this validated startup idea.\", model_name=\"gpt-4o-mini\")\n\n# Create the sequential workflow\nworkflow = SequentialWorkflow(agents=[idea_generator, validator, pitch_creator])\n\n# Run the workflow, passing an initial task to the first agent\nelevator_pitch = workflow.run(task=\"Generate a startup idea.\")\nprint(elevator_pitch)\n
"},{"location":"swarms/structs/#concurrentworkflow-with-spreadsheetswarm","title":"ConcurrentWorkflow (with SpreadSheetSwarm
)","text":"A concurrent workflow runs multiple agents simultaneously. SpreadSheetSwarm
is a powerful implementation that can manage thousands of concurrent agents and log their outputs to a CSV file. Use this architecture for high-throughput tasks that can be performed in parallel, drastically reducing execution time.
from swarms import Agent, SpreadSheetSwarm\n\n# Define a list of tasks (e.g., social media posts to generate)\nplatforms = [\"Twitter\", \"LinkedIn\", \"Instagram\"]\n\n# Create an agent for each task\nagents = [\n Agent(\n agent_name=f\"{platform}-Marketer\",\n system_prompt=f\"Generate a real estate marketing post for {platform}.\",\n model_name=\"gpt-4o-mini\",\n )\n for platform in platforms\n]\n\n# Initialize the swarm to run these agents concurrently\nswarm = SpreadSheetSwarm(\n agents=agents,\n autosave_on=True,\n save_file_path=\"marketing_posts.csv\",\n)\n\n# Run the swarm with a single, shared task description\nproperty_description = \"A beautiful 3-bedroom house in sunny California.\"\nswarm.run(task=f\"Generate a post about: {property_description}\")\n# Check marketing_posts.csv for the results!\n
"},{"location":"swarms/structs/#agentrearrange","title":"AgentRearrange","text":"Inspired by einsum
, AgentRearrange
lets you define complex, non-linear relationships between agents using a simple string-based syntax. Learn more. This architecture is perfect for orchestrating dynamic workflows where agents work in parallel, in sequence, or in a combination of both.
from swarms import Agent, AgentRearrange\n\n# Define agents\nresearcher = Agent(agent_name=\"researcher\", model_name=\"gpt-4o-mini\")\nwriter = Agent(agent_name=\"writer\", model_name=\"gpt-4o-mini\")\neditor = Agent(agent_name=\"editor\", model_name=\"gpt-4o-mini\")\n\n# Define a flow: researcher sends work to both writer and editor simultaneously\n# This is a one-to-many relationship\nflow = \"researcher -> writer, editor\"\n\n# Create the rearrangement system\nrearrange_system = AgentRearrange(\n agents=[researcher, writer, editor],\n flow=flow,\n)\n\n# Run the system\n# The researcher will generate content, and then both the writer and editor\n# will process that content in parallel.\noutputs = rearrange_system.run(\"Analyze the impact of AI on modern cinema.\")\nprint(outputs)\n
"},{"location":"swarms/structs/#swarmrouter-the-universal-swarm-orchestrator","title":"SwarmRouter: The Universal Swarm Orchestrator","text":"The SwarmRouter
simplifies building complex workflows by providing a single interface to run any type of swarm. Instead of importing and managing different swarm classes, you can dynamically select the one you need just by changing the swarm_type
parameter. Read the full documentation
This makes your code cleaner and more flexible, allowing you to switch between different multi-agent strategies with ease. Here's a complete example that shows how to define agents and then use SwarmRouter
to execute the same task using different collaborative strategies.
from swarms import Agent\nfrom swarms.structs.swarm_router import SwarmRouter, SwarmType\n\n# Define a few generic agents\nwriter = Agent(agent_name=\"Writer\", system_prompt=\"You are a creative writer.\", model_name=\"gpt-4o-mini\")\neditor = Agent(agent_name=\"Editor\", system_prompt=\"You are an expert editor for stories.\", model_name=\"gpt-4o-mini\")\nreviewer = Agent(agent_name=\"Reviewer\", system_prompt=\"You are a final reviewer who gives a score.\", model_name=\"gpt-4o-mini\")\n\n# The agents and task will be the same for all examples\nagents = [writer, editor, reviewer]\ntask = \"Write a short story about a robot who discovers music.\"\n\n# --- Example 1: SequentialWorkflow ---\n# Agents run one after another in a chain: Writer -> Editor -> Reviewer.\nprint(\"Running a Sequential Workflow...\")\nsequential_router = SwarmRouter(swarm_type=SwarmType.SequentialWorkflow, agents=agents)\nsequential_output = sequential_router.run(task)\nprint(f\"Final Sequential Output:\\n{sequential_output}\\n\")\n\n# --- Example 2: ConcurrentWorkflow ---\n# All agents receive the same initial task and run at the same time.\nprint(\"Running a Concurrent Workflow...\")\nconcurrent_router = SwarmRouter(swarm_type=SwarmType.ConcurrentWorkflow, agents=agents)\nconcurrent_outputs = concurrent_router.run(task)\n# This returns a dictionary of each agent's output\nfor agent_name, output in concurrent_outputs.items():\n print(f\"Output from {agent_name}:\\n{output}\\n\")\n\n# --- Example 3: MixtureOfAgents ---\n# All agents run in parallel, and a special 'aggregator' agent synthesizes their outputs.\nprint(\"Running a Mixture of Agents Workflow...\")\naggregator = Agent(\n agent_name=\"Aggregator\",\n system_prompt=\"Combine the story, edits, and review into a final document.\",\n model_name=\"gpt-4o-mini\"\n)\nmoa_router = SwarmRouter(\n swarm_type=SwarmType.MixtureOfAgents,\n agents=agents,\n aggregator_agent=aggregator, # MoA requires an aggregator\n)\naggregated_output = 
moa_router.run(task)\nprint(f\"Final Aggregated Output:\\n{aggregated_output}\\n\")\n
The SwarmRouter
is a powerful tool for simplifying multi-agent orchestration. It provides a consistent and flexible way to deploy different collaborative strategies, allowing you to build more sophisticated applications with less code.
The MixtureOfAgents
architecture processes tasks by feeding them to multiple \"expert\" agents in parallel. Their diverse outputs are then synthesized by an aggregator agent to produce a final, high-quality result. Learn more here
from swarms import Agent, MixtureOfAgents\n\n# Define expert agents\nfinancial_analyst = Agent(agent_name=\"FinancialAnalyst\", system_prompt=\"Analyze financial data.\", model_name=\"gpt-4o-mini\")\nmarket_analyst = Agent(agent_name=\"MarketAnalyst\", system_prompt=\"Analyze market trends.\", model_name=\"gpt-4o-mini\")\nrisk_analyst = Agent(agent_name=\"RiskAnalyst\", system_prompt=\"Analyze investment risks.\", model_name=\"gpt-4o-mini\")\n\n# Define the aggregator agent\naggregator = Agent(\n agent_name=\"InvestmentAdvisor\",\n system_prompt=\"Synthesize the financial, market, and risk analyses to provide a final investment recommendation.\",\n model_name=\"gpt-4o-mini\"\n)\n\n# Create the MoA swarm\nmoa_swarm = MixtureOfAgents(\n agents=[financial_analyst, market_analyst, risk_analyst],\n aggregator_agent=aggregator,\n)\n\n# Run the swarm\nrecommendation = moa_swarm.run(\"Should we invest in NVIDIA stock right now?\")\nprint(recommendation)\n
"},{"location":"swarms/structs/#groupchat","title":"GroupChat","text":"GroupChat
creates a conversational environment where multiple agents can interact, discuss, and collaboratively solve a problem. You can define the speaking order or let it be determined dynamically. This architecture is ideal for tasks that benefit from debate and multi-perspective reasoning, such as contract negotiation, brainstorming, or complex decision-making.
from swarms import Agent, GroupChat\n\n# Define agents for a debate\ntech_optimist = Agent(agent_name=\"TechOptimist\", system_prompt=\"Argue for the benefits of AI in society.\", model_name=\"gpt-4o-mini\")\ntech_critic = Agent(agent_name=\"TechCritic\", system_prompt=\"Argue against the unchecked advancement of AI.\", model_name=\"gpt-4o-mini\")\n\n# Create the group chat\nchat = GroupChat(\n agents=[tech_optimist, tech_critic],\n max_loops=4, # Limit the number of turns in the conversation\n)\n\n# Run the chat with an initial topic\nconversation_history = chat.run(\n \"Let's discuss the societal impact of artificial intelligence.\"\n)\n\n# Print the full conversation\nfor message in conversation_history:\n print(f\"[{message['agent_name']}]: {message['content']}\")\n
"},{"location":"swarms/structs/#connect-with-us","title":"Connect With Us","text":"Join our community of agent engineers and researchers for technical support, cutting-edge updates, and exclusive access to world-class agent engineering insights!
Platform Description Link \ud83d\udcda Documentation Official documentation and guides docs.swarms.world \ud83d\udcdd Blog Latest updates and technical articles Medium \ud83d\udcac Discord Live chat and community support Join Discord \ud83d\udc26 Twitter Latest news and announcements @kyegomez \ud83d\udc65 LinkedIn Professional network and updates The Swarm Corporation \ud83d\udcfa YouTube Tutorials and demos Swarms Channel \ud83c\udfab Events Join our community events Sign up here \ud83d\ude80 Onboarding Session Get onboarded with Kye Gomez, creator and lead maintainer of Swarms Book Session"},{"location":"swarms/structs/abstractswarm/","title":"BaseSwarm
Documentation","text":""},{"location":"swarms/structs/abstractswarm/#table-of-contents","title":"Table of Contents","text":"The Swarms library is designed to provide a framework for swarm simulation architectures. Swarms are collections of autonomous agents or workers that collaborate to perform tasks and achieve common goals. This documentation will guide you through the functionality and usage of the Swarms library, explaining the purpose and implementation details of the provided classes and methods.
"},{"location":"swarms/structs/abstractswarm/#2-class-definition","title":"2. Class Definition","text":""},{"location":"swarms/structs/abstractswarm/#baseswarm-class","title":"BaseSwarm
Class","text":"The BaseSwarm
class is an abstract base class that serves as the foundation for swarm simulation architectures. It defines the core functionality and methods required to manage and interact with a swarm of workers.
from abc import ABC, abstractmethod\nfrom typing import List\n\nfrom swarms.swarms.base import AbstractWorker\n\n\nclass BaseSwarm(ABC):\n \"\"\"\n Abstract class for swarm simulation architectures\n\n Methods:\n ---------\n ...\n \"\"\"\n\n # The class definition and constructor are provided here.\n\n @abstractmethod\n def __init__(self, workers: List[\"AbstractWorker\"]):\n \"\"\"Initialize the swarm with workers\"\"\"\n\n # Other abstract methods are listed here.\n
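The pattern a concrete swarm follows can be sketched with the standard-library `abc` module alone. The `SimpleSwarm`, `BroadcastSwarm`, and `Worker` names below are illustrative stand-ins, not part of the Swarms API:

```python
from abc import ABC, abstractmethod
from typing import List


class Worker:
    """Minimal stand-in for AbstractWorker, for illustration only."""

    def __init__(self, name: str):
        self.name = name

    def work(self, task: str) -> str:
        return f"{self.name} completed: {task}"


class SimpleSwarm(ABC):
    """Toy abstract base mirroring the BaseSwarm pattern."""

    @abstractmethod
    def __init__(self, workers: List[Worker]):
        ...

    @abstractmethod
    def run(self, task: str) -> List[str]:
        ...


class BroadcastSwarm(SimpleSwarm):
    """Concrete subclass: every worker receives the same task."""

    def __init__(self, workers: List[Worker]):
        self.workers = workers

    def run(self, task: str) -> List[str]:
        return [w.work(task) for w in self.workers]


swarm = BroadcastSwarm([Worker("w1"), Worker("w2")])
print(swarm.run("analyze data"))
# prints: ['w1 completed: analyze data', 'w2 completed: analyze data']
```

As with `BaseSwarm`, the abstract base cannot be instantiated directly; only subclasses that implement every abstract method can.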
"},{"location":"swarms/structs/abstractswarm/#3-methods","title":"3. Methods","text":""},{"location":"swarms/structs/abstractswarm/#communicate","title":"communicate()
","text":"The communicate()
method allows the swarm to exchange information through the orchestrator, protocols, and the universal communication layer.
Usage Example 1:
swarm = YourSwarmClass(workers)\nswarm.communicate()\n
Usage Example 2:
# Another example of using the communicate method\nswarm = YourSwarmClass(workers)\nswarm.communicate()\n
"},{"location":"swarms/structs/abstractswarm/#run","title":"run()
","text":"The run()
method executes the swarm, initiating its activities.
Usage Example 1:
swarm = YourSwarmClass(workers)\nswarm.run()\n
Usage Example 2:
# Another example of running the swarm\nswarm = YourSwarmClass(workers)\nswarm.run()\n
"},{"location":"swarms/structs/abstractswarm/#arun","title":"arun()
","text":"The arun()
method runs the swarm asynchronously, allowing for parallel execution of tasks.
Usage Example 1:
swarm = YourSwarmClass(workers)\nswarm.arun()\n
Usage Example 2:
# Another example of running the swarm asynchronously\nswarm = YourSwarmClass(workers)\nswarm.arun()\n
"},{"location":"swarms/structs/abstractswarm/#add_workerworker-abstractworker","title":"add_worker(worker: \"AbstractWorker\")
","text":"The add_worker()
method adds a worker to the swarm.
Parameters: - worker
(AbstractWorker): The worker to be added to the swarm.
Usage Example:
swarm = YourSwarmClass([])\nworker = YourWorkerClass()\nswarm.add_worker(worker)\n
"},{"location":"swarms/structs/abstractswarm/#remove_workerworker-abstractworker","title":"remove_worker(worker: \"AbstractWorker\")
","text":"The remove_worker()
method removes a worker from the swarm.
Parameters: - worker
(AbstractWorker): The worker to be removed from the swarm.
Usage Example:
swarm = YourSwarmClass(workers)\nworker = swarm.get_worker_by_id(\"worker_id\")\nswarm.remove_worker(worker)\n
"},{"location":"swarms/structs/abstractswarm/#broadcastmessage-str-sender-optionalabstractworker-none","title":"broadcast(message: str, sender: Optional[\"AbstractWorker\"] = None)
","text":"The broadcast()
method sends a message to all workers in the swarm.
Parameters: - message
(str): The message to be broadcasted. - sender
(Optional[AbstractWorker]): The sender of the message (optional).
Usage Example 1:
swarm = YourSwarmClass(workers)\nmessage = \"Hello, everyone!\"\nswarm.broadcast(message)\n
Usage Example 2:
# Another example of broadcasting a message\nswarm = YourSwarmClass(workers)\nmessage = \"Important announcement!\"\nsender = swarm.get_worker_by_name(\"Supervisor\")\nswarm.broadcast(message, sender)\n
"},{"location":"swarms/structs/abstractswarm/#reset","title":"reset()
","text":"The reset()
method resets the swarm to its initial state.
Usage Example:
swarm = YourSwarmClass(workers)\nswarm.reset()\n
"},{"location":"swarms/structs/abstractswarm/#plantask-str","title":"plan(task: str)
","text":"The plan()
method instructs workers to individually plan using a workflow or pipeline for a specified task.
Parameters: - task
(str): The task for which workers should plan.
Usage Example:
swarm = YourSwarmClass(workers)\ntask = \"Perform data analysis\"\nswarm.plan(task)\n
"},{"location":"swarms/structs/abstractswarm/#direct_messagemessage-str-sender-abstractworker-recipient-abstractworker","title":"direct_message(message: str, sender: \"AbstractWorker\", recipient: \"AbstractWorker\")
","text":"The direct_message()
method sends a direct message from one worker to another.
Parameters: - message
(str): The message to be sent. - sender
(AbstractWorker): The sender of the message. - recipient
(AbstractWorker): The recipient of the message.
Usage Example:
swarm = YourSwarmClass(workers)\nsender = swarm.get_worker_by_name(\"Worker1\")\nrecipient = swarm.get_worker_by_name(\"Worker2\")\nmessage = \"Hello, Worker2!\"\nswarm.direct_message(message, sender, recipient)\n
"},{"location":"swarms/structs/abstractswarm/#autoscalernum_workers-int-worker-listabstractworker","title":"autoscaler(num_workers: int, worker: List[\"AbstractWorker\"])
","text":"The autoscaler()
method acts as an autoscaler, dynamically adjusting the number of workers based on system load or other criteria.
Parameters: - num_workers
(int): The desired number of workers. - worker
(List[AbstractWorker]): A list of workers to be managed by the autoscaler.
Usage Example:
swarm = YourSwarmClass([])\nworkers = [YourWorkerClass() for _ in range(10)]\nswarm.autoscaler(5, workers)\n
"},{"location":"swarms/structs/abstractswarm/#get_worker_by_idid-str-abstractworker","title":"get_worker_by_id(id: str) -> \"AbstractWorker\"
","text":"The get_worker_by_id()
method locates a worker in the swarm by their ID.
Parameters: - id
(str): The ID of the worker to locate.
Returns: - AbstractWorker
: The worker with the specified ID.
Usage Example:
swarm = YourSwarmClass(workers)\nworker_id = \"worker_123\"\nworker = swarm.get_worker_by_id(worker_id)\n
"},{"location":"swarms/structs/abstractswarm/#get_worker_by_namename-str-abstractworker","title":"get_worker_by_name(name: str) -> \"AbstractWorker\"
","text":"The get_worker_by_name()
method locates a worker in the swarm by their name.
Parameters: - name
(str): The name of the worker to locate.
Returns: - AbstractWorker
: The worker with the specified name.
Usage Example:
swarm = YourSwarmClass(workers)\nworker_name = \"Alice\"\nworker = swarm.get_worker_by_name(worker_name)\n
"},{"location":"swarms/structs/abstractswarm/#assign_taskworker-abstractworker-task-any-dict","title":"assign_task(worker: \"AbstractWorker\", task: Any) -> Dict
","text":"The assign_task()
method assigns a task to a specific worker.
Parameters: - worker
(AbstractWorker): The worker to whom the task should be assigned. - task
(Any): The task to be assigned.
Returns: - Dict
: A dictionary indicating the status of the task assignment.
Usage Example:
swarm = YourSwarmClass(workers)\nworker = swarm.get_worker_by_name(\"Worker1\")\ntask = \"Perform data analysis\"\nresult = swarm.assign_task(worker, task)\n
"},{"location":"swarms/structs/abstractswarm/#get_all_tasksworker-abstractworker-task-any","title":"get_all_tasks(worker: \"AbstractWorker\", task: Any)
","text":"The get_all_tasks()
method retrieves all tasks assigned to a specific worker.
Parameters: - worker
(AbstractWorker): The worker for whom tasks should be retrieved. - task
(Any): The task to be retrieved.
Usage Example:
swarm = YourSwarmClass(workers)\nworker = swarm.get_worker_by_name(\"Worker1\")\ntasks = swarm.get_all_tasks(worker, \"data analysis\")\n
"},{"location":"swarms/structs/abstractswarm/#get_finished_tasks-listdict","title":"get_finished_tasks() -> List[Dict]
","text":"The get_finished_tasks()
method retrieves all tasks that have been completed by the workers in the swarm.
Returns: - List[Dict]
: A list of dictionaries representing finished tasks.
Usage Example:
swarm = YourSwarmClass(workers)\nfinished_tasks = swarm.get_finished_tasks()\n
"},{"location":"swarms/structs/abstractswarm/#get_pending_tasks-listdict","title":"get_pending_tasks() -> List[Dict]
","text":"The get_pending_tasks()
method retrieves all tasks that are pending or yet to be completed by the workers in the swarm.
Returns: - List[Dict]
: A list of dictionaries representing pending tasks.
Usage Example:
swarm = YourSwarmClass(workers)\npending_tasks = swarm.get_pending_tasks()\n
"},{"location":"swarms/structs/abstractswarm/#pause_workerworker-abstractworker-worker_id-str","title":"pause_worker(worker: \"AbstractWorker\", worker_id: str)
","text":"The pause_worker()
method pauses a specific worker, temporarily suspending their activities.
Parameters: - worker
(AbstractWorker): The worker to be paused. - worker_id
(str): The ID of the worker to be paused.
Usage Example:
swarm = YourSwarmClass(workers)\nworker = swarm.get_worker_by_name(\"Worker1\")\nworker_id = \"worker_123\"\nswarm.pause_worker(worker, worker_id)\n
"},{"location":"swarms/structs/abstractswarm/#resume_workerworker-abstractworker-worker_id-str","title":"resume_worker(worker: \"AbstractWorker\", worker_id: str)
","text":"The resume_worker()
method resumes a paused worker, allowing them to continue their activities.
Parameters: - worker
(AbstractWorker): The worker to be resumed. - worker_id
(str): The ID of the worker to be resumed.
Usage Example:
swarm = YourSwarmClass(workers)\nworker = swarm.get_worker_by_name(\"Worker1\")\nworker_id = \"worker_123\"\nswarm.resume_worker(worker, worker_id)\n
"},{"location":"swarms/structs/abstractswarm/#stop_workerworker-abstractworker-worker_id-str","title":"stop_worker(worker: \"AbstractWorker\", worker_id: str)
","text":"The stop_worker()
method stops a specific worker, terminating their activities.
Parameters: - worker
(AbstractWorker): The worker to be stopped. - worker_id
(str): The ID of the worker to be stopped.
Usage Example:
swarm = YourSwarmClass(workers)\nworker = swarm.get_worker_by_name(\"Worker1\")\nworker_id = \"worker_123\"\nswarm.stop_worker(worker, worker_id)\n
"},{"location":"swarms/structs/abstractswarm/#restart_workerworker-abstractworker","title":"restart_worker(worker: \"AbstractWorker\")
","text":"The restart_worker()
method restarts a worker, resetting them to their initial state.
Parameters: - worker
(AbstractWorker): The worker to be restarted.
Usage Example:
swarm = YourSwarmClass(workers)\nworker = swarm.get_worker_by_name(\"Worker1\")\nswarm.restart_worker(worker)\n
"},{"location":"swarms/structs/abstractswarm/#scale_upnum_worker-int","title":"scale_up(num_worker: int)
","text":"The scale_up()
method increases the number of workers in the swarm.
Parameters: - num_worker
(int): The number of workers to add to the swarm.
Usage Example:
swarm = YourSwarmClass(workers)\nswarm.scale_up(5)\n
"},{"location":"swarms/structs/abstractswarm/#scale_downnum_worker-int","title":"scale_down(num_worker: int)
","text":"The scale_down()
method decreases the number of workers in the swarm.
Parameters: - num_worker
(int): The number of workers to remove from the swarm.
Usage Example:
swarm = YourSwarmClass(workers)\nswarm.scale_down(3)\n
"},{"location":"swarms/structs/abstractswarm/#scale_tonum_worker-int","title":"scale_to(num_worker: int)
","text":"The scale_to()
method scales the swarm to a specific number of workers.
Parameters: - num_worker
(int): The desired number of workers.
Usage Example:
swarm = YourSwarmClass(workers)\nswarm.scale_to(10)\n
"},{"location":"swarms/structs/abstractswarm/#get_all_workers-listabstractworker","title":"get_all_workers() -> List[\"AbstractWorker\"]
","text":"
The get_all_workers()
method retrieves a list of all workers in the swarm.
Returns: - List[AbstractWorker]
: A list of all workers in the swarm.
Usage Example:
swarm = YourSwarmClass(workers)\nall_workers = swarm.get_all_workers()\n
"},{"location":"swarms/structs/abstractswarm/#get_swarm_size-int","title":"get_swarm_size() -> int
","text":"The get_swarm_size()
method returns the size of the swarm, which is the total number of workers.
Returns: - int
: The size of the swarm.
Usage Example:
swarm = YourSwarmClass(workers)\nswarm_size = swarm.get_swarm_size()\n
"},{"location":"swarms/structs/abstractswarm/#get_swarm_status-dict","title":"get_swarm_status() -> Dict
","text":"The get_swarm_status()
method provides information about the current status of the swarm.
Returns: - Dict
: A dictionary containing various status indicators for the swarm.
Usage Example:
swarm = YourSwarmClass(workers)\nswarm_status = swarm.get_swarm_status()\n
"},{"location":"swarms/structs/abstractswarm/#save_swarm_state","title":"save_swarm_state()
","text":"The save_swarm_state()
method allows you to save the current state of the swarm, including worker configurations and task assignments.
Usage Example:
swarm = YourSwarmClass(workers)\nswarm.save_swarm_state()\n
This comprehensive documentation covers the Swarms library, including the BaseSwarm
class and its methods. You can use this documentation as a guide to understanding and effectively utilizing the Swarms framework for swarm simulation architectures. Feel free to explore further and adapt the library to your specific use cases.
"},{"location":"swarms/structs/agent/","title":"Agent
","text":"Swarm Agent is a powerful autonomous agent framework designed to connect Language Models (LLMs) with various tools and long-term memory. This class provides the ability to ingest and process various types of documents such as PDFs, text files, Markdown files, JSON files, and more. The Agent structure offers a wide range of features to enhance the capabilities of LLMs and facilitate efficient task execution.
"},{"location":"swarms/structs/agent/#overview","title":"Overview","text":"The Agent
class establishes a conversational loop with a language model, allowing for interactive task execution, feedback collection, and dynamic response generation. It includes features such as:
graph TD\n A[Task Initiation] -->|Receives Task| B[Initial LLM Processing]\n B -->|Interprets Task| C[Tool Usage]\n C -->|Calls Tools| D[Function 1]\n C -->|Calls Tools| E[Function 2]\n D -->|Returns Data| C\n E -->|Returns Data| C\n C -->|Provides Data| F[Memory Interaction]\n F -->|Stores and Retrieves Data| G[RAG System]\n G -->|ChromaDB/Pinecone| H[Enhanced Data]\n F -->|Provides Enhanced Data| I[Final LLM Processing]\n I -->|Generates Final Response| J[Output]\n C -->|No Tools Available| K[Skip Tool Usage]\n K -->|Proceeds to Memory Interaction| F\n F -->|No Memory Available| L[Skip Memory Interaction]\n L -->|Proceeds to Final LLM Processing| I
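The loop in the diagram above can be sketched in plain Python with a stubbed model. This is a simplified illustration of the flow (task → LLM → tool usage → memory → final response), with invented names throughout, not the `Agent` class's actual internals:

```python
from typing import Callable, Dict, List


def run_agent_loop(
    task: str,
    llm: Callable[[str], str],
    tools: Dict[str, Callable[[str], str]],
    memory: List[str],
) -> str:
    """Toy agent loop: interpret the task, call a tool if the plan
    names one, store the result in memory, then answer with context."""
    plan = llm(f"Plan for: {task}")
    # Tool-usage step: invoke the first tool the plan mentions
    for name, tool in tools.items():
        if name in plan:
            memory.append(tool(task))  # memory interaction
            break
    context = " | ".join(memory)  # retrieved context (RAG stand-in)
    return llm(f"Answer '{task}' using: {context}")


# Stub LLM and tool so the sketch runs without any API keys
stub_llm = lambda prompt: f"[llm] use search :: {prompt}"
tools = {"search": lambda q: f"search results for '{q}'"}
memory: List[str] = []

print(run_agent_loop("AI market size", stub_llm, tools, memory))
```

In the real `Agent`, the tool and memory steps are optional (the "skip" branches in the diagram), and the loop may repeat up to `max_loops` times.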
"},{"location":"swarms/structs/agent/#agent-attributes","title":"Agent
Attributes","text":"Attribute Description id
Unique identifier for the agent instance. llm
Language model instance used by the agent. template
Template used for formatting responses. max_loops
Maximum number of loops the agent can run. stopping_condition
Callable function determining when to stop looping. loop_interval
Interval (in seconds) between loops. retry_attempts
Number of retry attempts for failed LLM calls. retry_interval
Interval (in seconds) between retry attempts. return_history
Boolean indicating whether to return conversation history. stopping_token
Token that stops the agent from looping when present in the response. dynamic_loops
Boolean indicating whether to dynamically determine the number of loops. interactive
Boolean indicating whether to run in interactive mode. dashboard
Boolean indicating whether to display a dashboard. agent_name
Name of the agent instance. agent_description
Description of the agent instance. system_prompt
System prompt used to initialize the conversation. tools
List of callable functions representing tools the agent can use. dynamic_temperature_enabled
Boolean indicating whether to dynamically adjust the LLM's temperature. sop
Standard operating procedure for the agent. sop_list
List of strings representing the standard operating procedure. saved_state_path
File path for saving and loading the agent's state. autosave
Boolean indicating whether to automatically save the agent's state. context_length
Maximum length of the context window (in tokens) for the LLM. user_name
Name used to represent the user in the conversation. self_healing_enabled
Boolean indicating whether to attempt self-healing in case of errors. code_interpreter
Boolean indicating whether to interpret and execute code snippets. multi_modal
Boolean indicating whether to support multimodal inputs. pdf_path
File path of a PDF document to be ingested. list_of_pdf
List of file paths for PDF documents to be ingested. tokenizer
Instance of a tokenizer used for token counting and management. long_term_memory
Instance of a BaseVectorDatabase
implementation for long-term memory management. preset_stopping_token
Boolean indicating whether to use a preset stopping token. traceback
Object used for traceback handling. traceback_handlers
List of traceback handlers. streaming_on
Boolean indicating whether to stream responses. docs
List of document paths or contents to be ingested. docs_folder
Path to a folder containing documents to be ingested. verbose
Boolean indicating whether to print verbose output. parser
Callable function used for parsing input data. best_of_n
Integer indicating the number of best responses to generate. callback
Callable function to be called after each agent loop. metadata
Dictionary containing metadata for the agent. callbacks
List of callable functions to be called during execution. logger_handler
Handler for logging messages. search_algorithm
Callable function for long-term memory retrieval. logs_to_filename
File path for logging agent activities. evaluator
Callable function for evaluating the agent's responses. stopping_func
Callable function used as a stopping condition. custom_loop_condition
Callable function used as a custom loop condition. sentiment_threshold
Float value representing the sentiment threshold for evaluating responses. custom_exit_command
String representing a custom command for exiting the agent's loop. sentiment_analyzer
Callable function for sentiment analysis on outputs. limit_tokens_from_string
Callable function for limiting the number of tokens in a string. custom_tools_prompt
Callable function for generating a custom prompt for tool usage. tool_schema
Data structure representing the schema for the agent's tools. output_type
Type representing the expected output type of responses. function_calling_type
String representing the type of function calling. output_cleaner
Callable function for cleaning the agent's output. function_calling_format_type
String representing the format type for function calling. list_base_models
List of base models used for generating tool schemas. metadata_output_type
String representing the output type for metadata. state_save_file_type
String representing the file type for saving the agent's state. chain_of_thoughts
Boolean indicating whether to use the chain of thoughts technique. algorithm_of_thoughts
Boolean indicating whether to use the algorithm of thoughts technique. tree_of_thoughts
Boolean indicating whether to use the tree of thoughts technique. tool_choice
String representing the method for tool selection. execute_tool
Boolean indicating whether to execute tools. rules
String representing the rules for the agent's behavior. planning
Boolean indicating whether to perform planning. planning_prompt
String representing the prompt for planning. device
String representing the device on which the agent should run. custom_planning_prompt
String representing a custom prompt for planning. memory_chunk_size
Integer representing the maximum size of memory chunks for long-term memory retrieval. agent_ops_on
Boolean indicating whether agent operations should be enabled. return_step_meta
Boolean indicating whether to return JSON of all steps and additional metadata. output_type
Literal type indicating whether to output \"string\", \"str\", \"list\", \"json\", \"dict\", or \"yaml\". time_created
Float representing the time the agent was created. tags
Optional list of strings for tagging the agent. use_cases
Optional list of dictionaries describing use cases for the agent. step_pool
List of Step objects representing the agent's execution steps. print_every_step
Boolean indicating whether to print every step of execution. agent_output
ManySteps object containing the agent's output and metadata. executor_workers
Integer representing the number of executor workers for concurrent operations. data_memory
Optional callable for data memory operations. load_yaml_path
String representing the path to a YAML file for loading configurations. auto_generate_prompt
Boolean indicating whether to automatically generate prompts. rag_every_loop
Boolean indicating whether to query RAG database for context on every loop plan_enabled
Boolean indicating whether planning functionality is enabled artifacts_on
Boolean indicating whether to save artifacts from agent execution artifacts_output_path
File path where artifacts should be saved artifacts_file_extension
File extension to use for saved artifacts device
Device to run computations on (\"cpu\" or \"gpu\") all_cores
Boolean indicating whether to use all CPU cores device_id
ID of the GPU device to use if running on GPU scheduled_run_date
Optional datetime for scheduling future agent runs"},{"location":"swarms/structs/agent/#agent-methods","title":"Agent
Methods","text":"Method Description Inputs Usage Example run(task, img=None, is_last=False, device=\"cpu\", device_id=0, all_cores=True, *args, **kwargs)
Runs the autonomous agent loop to complete the given task. task
(str): The task to be performed.img
(str, optional): Path to an image file.is_last
(bool): Whether this is the last task.device
(str): Device to run on (\"cpu\" or \"gpu\").device_id
(int): ID of the GPU to use.all_cores
(bool): Whether to use all CPU cores.*args
, **kwargs
: Additional arguments. response = agent.run(\"Generate a report on financial performance.\")
__call__(task, img=None, *args, **kwargs)
Alternative way to call the run
method. Same as run
. response = agent(\"Generate a report on financial performance.\")
parse_and_execute_tools(response, *args, **kwargs)
Parses the agent's response and executes any tools mentioned in it. response
(str): The agent's response to be parsed.*args
, **kwargs
: Additional arguments. agent.parse_and_execute_tools(response)
add_memory(message)
Adds a message to the agent's memory. message
(str): The message to add. agent.add_memory(\"Important information\")
plan(task, *args, **kwargs)
Plans the execution of a task. task
(str): The task to plan.*args
, **kwargs
: Additional arguments. agent.plan(\"Analyze market trends\")
run_concurrent(task, *args, **kwargs)
Runs a task concurrently. task
(str): The task to run.*args
, **kwargs
: Additional arguments. response = await agent.run_concurrent(\"Concurrent task\")
run_concurrent_tasks(tasks, *args, **kwargs)
Runs multiple tasks concurrently. tasks
(List[str]): List of tasks to run.*args
, **kwargs
: Additional arguments. responses = agent.run_concurrent_tasks([\"Task 1\", \"Task 2\"])
bulk_run(inputs)
Generates responses for multiple input sets. inputs
(List[Dict[str, Any]]): List of input dictionaries. responses = agent.bulk_run([{\"task\": \"Task 1\"}, {\"task\": \"Task 2\"}])
save()
Saves the agent's history to a file. None agent.save()
load(file_path)
Loads the agent's history from a file. file_path
(str): Path to the file. agent.load(\"agent_history.json\")
graceful_shutdown()
Gracefully shuts down the system, saving the state. None agent.graceful_shutdown()
analyze_feedback()
Analyzes the feedback for issues. None agent.analyze_feedback()
undo_last()
Undoes the last response and returns the previous state. None previous_state, message = agent.undo_last()
add_response_filter(filter_word)
Adds a response filter to filter out certain words. filter_word
(str): Word to filter. agent.add_response_filter(\"sensitive\")
apply_response_filters(response)
Applies response filters to the given response. response
(str): Response to filter. filtered_response = agent.apply_response_filters(response)
filtered_run(task)
Runs a task with response filtering applied. task
(str): Task to run. response = agent.filtered_run(\"Generate a report\")
save_to_yaml(file_path)
Saves the agent to a YAML file. file_path
(str): Path to save the YAML file. agent.save_to_yaml(\"agent_config.yaml\")
get_llm_parameters()
Returns the parameters of the language model. None llm_params = agent.get_llm_parameters()
save_state(file_path, *args, **kwargs)
Saves the current state of the agent to a JSON file. file_path
(str): Path to save the JSON file.*args
, **kwargs
: Additional arguments. agent.save_state(\"agent_state.json\")
update_system_prompt(system_prompt)
Updates the system prompt. system_prompt
(str): New system prompt. agent.update_system_prompt(\"New system instructions\")
update_max_loops(max_loops)
Updates the maximum number of loops. max_loops
(int): New maximum number of loops. agent.update_max_loops(5)
update_loop_interval(loop_interval)
Updates the loop interval. loop_interval
(int): New loop interval. agent.update_loop_interval(2)
update_retry_attempts(retry_attempts)
Updates the number of retry attempts. retry_attempts
(int): New number of retry attempts. agent.update_retry_attempts(3)
update_retry_interval(retry_interval)
Updates the retry interval. retry_interval
(int): New retry interval. agent.update_retry_interval(5)
reset()
Resets the agent's memory. None agent.reset()
ingest_docs(docs, *args, **kwargs)
Ingests documents into the agent's memory. docs
(List[str]): List of document paths.*args
, **kwargs
: Additional arguments. agent.ingest_docs([\"doc1.pdf\", \"doc2.txt\"])
ingest_pdf(pdf)
Ingests a PDF document into the agent's memory. pdf
(str): Path to the PDF file. agent.ingest_pdf(\"document.pdf\")
receive_message(name, message)
Receives a message and adds it to the agent's memory. name
(str): Name of the sender.message
(str): Content of the message. agent.receive_message(\"User\", \"Hello, agent!\")
send_agent_message(agent_name, message, *args, **kwargs)
Sends a message from the agent to a user. agent_name
(str): Name of the agent.message
(str): Message to send.*args
, **kwargs
: Additional arguments. response = agent.send_agent_message(\"AgentX\", \"Task completed\")
add_tool(tool)
Adds a tool to the agent's toolset. tool
(Callable): Tool to add. agent.add_tool(my_custom_tool)
add_tools(tools)
Adds multiple tools to the agent's toolset. tools
(List[Callable]): List of tools to add. agent.add_tools([tool1, tool2])
remove_tool(tool)
Removes a tool from the agent's toolset. tool
(Callable): Tool to remove. agent.remove_tool(my_custom_tool)
remove_tools(tools)
Removes multiple tools from the agent's toolset. tools
(List[Callable]): List of tools to remove. agent.remove_tools([tool1, tool2])
get_docs_from_doc_folders()
Retrieves and processes documents from the specified folder. None agent.get_docs_from_doc_folders()
memory_query(task, *args, **kwargs)
Queries the long-term memory for relevant information. task
(str): The task or query.*args
, **kwargs
: Additional arguments. result = agent.memory_query(\"Find information about X\")
sentiment_analysis_handler(response)
Performs sentiment analysis on the given response. response
(str): The response to analyze. agent.sentiment_analysis_handler(\"Great job!\")
count_and_shorten_context_window(history, *args, **kwargs)
Counts tokens and shortens the context window if necessary. history
(str): The conversation history.*args
, **kwargs
: Additional arguments. shortened_history = agent.count_and_shorten_context_window(history)
output_cleaner_and_output_type(response, *args, **kwargs)
Cleans and formats the output based on specified type. response
(str): The response to clean and format.*args
, **kwargs
: Additional arguments. cleaned_response = agent.output_cleaner_and_output_type(response)
stream_response(response, delay=0.001)
Streams the response token by token. response
(str): The response to stream.delay
(float): Delay between tokens. agent.stream_response(\"This is a streamed response\")
dynamic_context_window()
Dynamically adjusts the context window. None agent.dynamic_context_window()
check_available_tokens()
Checks and returns the number of available tokens. None available_tokens = agent.check_available_tokens()
tokens_checks()
Performs token checks and returns available tokens. None token_info = agent.tokens_checks()
truncate_string_by_tokens(input_string, limit)
Truncates a string to fit within a token limit. input_string
(str): String to truncate.limit
(int): Token limit. truncated_string = agent.truncate_string_by_tokens(\"Long string\", 100)
tokens_operations(input_string)
Performs various token-related operations on the input string. input_string
(str): String to process. processed_string = agent.tokens_operations(\"Input string\")
parse_function_call_and_execute(response)
Parses a function call from the response and executes it. response
(str): Response containing the function call. result = agent.parse_function_call_and_execute(response)
llm_output_parser(response)
Parses the output from the language model. response
(Any): Response from the LLM. parsed_response = agent.llm_output_parser(llm_output)
log_step_metadata(loop, task, response)
Logs metadata for each step of the agent's execution. loop
(int): Current loop number.task
(str): Current task.response
(str): Agent's response. agent.log_step_metadata(1, \"Analyze data\", \"Analysis complete\")
to_dict()
Converts the agent's attributes to a dictionary. None agent_dict = agent.to_dict()
to_json(indent=4, *args, **kwargs)
Converts the agent's attributes to a JSON string. indent
(int): Indentation for JSON.*args
, **kwargs
: Additional arguments. agent_json = agent.to_json()
to_yaml(indent=4, *args, **kwargs)
Converts the agent's attributes to a YAML string. indent
(int): Indentation for YAML.*args
, **kwargs
: Additional arguments. agent_yaml = agent.to_yaml()
to_toml(*args, **kwargs)
Converts the agent's attributes to a TOML string. *args
, **kwargs
: Additional arguments. agent_toml = agent.to_toml()
model_dump_json()
Saves the agent model to a JSON file in the workspace directory. None agent.model_dump_json()
model_dump_yaml()
Saves the agent model to a YAML file in the workspace directory. None agent.model_dump_yaml()
log_agent_data()
Logs the agent's data to an external API. None agent.log_agent_data()
handle_tool_schema_ops()
Handles operations related to tool schemas. None agent.handle_tool_schema_ops()
call_llm(task, *args, **kwargs)
Calls the appropriate method on the language model. task
(str): Task for the LLM.*args
, **kwargs
: Additional arguments. response = agent.call_llm(\"Generate text\")
handle_sop_ops()
Handles operations related to standard operating procedures. None agent.handle_sop_ops()
agent_output_type(responses)
Processes and returns the agent's output based on the specified output type. responses
(list): List of responses. formatted_output = agent.agent_output_type(responses)
check_if_no_prompt_then_autogenerate(task)
Checks whether auto_generate_prompt is enabled and no system prompt is set, and if so generates one by combining the agent's name, description, and system prompt. task
(str, optional): Task to use as a fallback for prompt generation. agent.check_if_no_prompt_then_autogenerate(\"Analyze data\")
handle_artifacts(response, output_path, extension)
Handles saving artifacts from agent execution response
(str): Agent responseoutput_path
(str): Output pathextension
(str): File extension agent.handle_artifacts(response, \"outputs/\", \".txt\")
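Several of the token-management methods above (truncate_string_by_tokens, check_available_tokens, and related helpers) follow the same pattern: measure the input against a token budget and trim. A minimal, library-free sketch of the truncation step, assuming a naive whitespace tokenizer rather than the model tokenizer the real Agent delegates to:

```python
def truncate_string_by_tokens(input_string: str, limit: int) -> str:
    """Keep at most `limit` tokens of the input.

    Sketch only: whitespace-separated words stand in for real tokens,
    so counts will differ from a model tokenizer's.
    """
    tokens = input_string.split()
    if len(tokens) <= limit:
        return input_string
    return " ".join(tokens[:limit])
```

The real method counts tokens with the configured tokenizer attribute; only the counting differs, the trimming logic has the same shape.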
"},{"location":"swarms/structs/agent/#updated-run-method","title":"Updated Run Method","text":"The run method supports the following parameters:
Method Description Inputs Usage Example run(task, img=None, is_last=False, device=\"cpu\", device_id=0, all_cores=True, scheduled_run_date=None)
Runs the agent with specified parameters task
(str): Task to runimg
(str, optional): Image pathis_last
(bool): If this is last taskdevice
(str): Device to usedevice_id
(int): GPU IDall_cores
(bool): Use all CPU coresscheduled_run_date
(datetime, optional): Future run date agent.run(\"Analyze data\", device=\"gpu\", device_id=0)
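The scheduled_run_date parameter takes a standard Python datetime. A short sketch of building one (the agent call itself is commented out, since it needs a configured Agent instance and API key):

```python
from datetime import datetime, timedelta

# Schedule the run for one hour from now.
scheduled = datetime.now() + timedelta(hours=1)

# Hypothetical wiring against a configured agent:
# agent.run(
#     "Analyze data",
#     device="gpu",
#     device_id=0,
#     scheduled_run_date=scheduled,
# )
```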
"},{"location":"swarms/structs/agent/#getting-started","title":"Getting Started","text":"To use the Swarm Agent, first install the required dependencies:
pip3 install -U swarms\n
Then, you can initialize and use the agent as follows:
from swarms.structs.agent import Agent\nfrom swarms.prompts.finance_agent_sys_prompt import FINANCIAL_AGENT_SYS_PROMPT\n\n# Initialize the Financial Analysis Agent with GPT-4o-mini model\nagent = Agent(\n agent_name=\"Financial-Analysis-Agent\",\n system_prompt=FINANCIAL_AGENT_SYS_PROMPT,\n model_name=\"gpt-4o-mini\",\n max_loops=1,\n autosave=True,\n dashboard=False,\n verbose=True,\n dynamic_temperature_enabled=True,\n saved_state_path=\"finance_agent.json\",\n user_name=\"swarms_corp\",\n retry_attempts=1,\n context_length=200000,\n return_step_meta=False,\n output_type=\"str\",\n)\n\n# Run the agent\nresponse = agent.run(\n \"How can I establish a ROTH IRA to buy stocks and get a tax break? What are the criteria?\"\n)\nprint(response)\n
"},{"location":"swarms/structs/agent/#advanced-usage","title":"Advanced Usage","text":""},{"location":"swarms/structs/agent/#tool-integration","title":"Tool Integration","text":"To integrate tools with the Swarm Agent
, you can pass a list of callable functions with types and doc strings to the tools
parameter when initializing the Agent
instance. The agent will automatically convert these functions into an OpenAI function calling schema and make them available for use during task execution.
from swarms import Agent\nfrom swarm_models import OpenAIChat\nimport subprocess\nimport os\n\ndef terminal(code: str):\n    \"\"\"\n    Run code in the terminal.\n\n    Args:\n        code (str): The code to run in the terminal.\n\n    Returns:\n        str: The output of the code.\n    \"\"\"\n    out = subprocess.run(code, shell=True, capture_output=True, text=True).stdout\n    return str(out)\n\n# Initialize the agent with a tool\nagent = Agent(\n    agent_name=\"Terminal-Agent\",\n    llm=OpenAIChat(api_key=os.getenv(\"OPENAI_API_KEY\")),\n    tools=[terminal],\n    system_prompt=\"You are an agent that can execute terminal commands. Use the tools provided to assist the user.\",\n)\n\n# Run the agent\nresponse = agent.run(\"List the contents of the current directory\")\nprint(response)\n
"},{"location":"swarms/structs/agent/#long-term-memory-management","title":"Long-term Memory Management","text":"The Swarm Agent supports integration with vector databases for long-term memory management. Here's an example using ChromaDB:
import os\nfrom swarms import Agent\nfrom swarm_models import Anthropic\nfrom swarms_memory import ChromaDB\n\n# Initialize ChromaDB\nchromadb = ChromaDB(\n    metric=\"cosine\",\n    output_dir=\"finance_agent_rag\",\n)\n\n# Initialize the agent with long-term memory\nagent = Agent(\n    agent_name=\"Financial-Analysis-Agent\",\n    llm=Anthropic(anthropic_api_key=os.getenv(\"ANTHROPIC_API_KEY\")),\n    long_term_memory=chromadb,\n    system_prompt=\"You are a financial analysis agent with access to long-term memory.\",\n)\n\n# Run the agent\nresponse = agent.run(\"What are the components of a startup's stock incentive equity plan?\")\nprint(response)\n
"},{"location":"swarms/structs/agent/#interactive-mode","title":"Interactive Mode","text":"To enable interactive mode, set the interactive
parameter to True
when initializing the Agent
:
import os\nfrom swarms import Agent\nfrom swarm_models import OpenAIChat\n\nagent = Agent(\n    agent_name=\"Interactive-Agent\",\n    llm=OpenAIChat(api_key=os.getenv(\"OPENAI_API_KEY\")),\n    interactive=True,\n    system_prompt=\"You are an interactive agent. Engage in a conversation with the user.\",\n)\n\n# Run the agent in interactive mode\nagent.run(\"Let's start a conversation\")\n
"},{"location":"swarms/structs/agent/#sentiment-analysis","title":"Sentiment Analysis","text":"To perform sentiment analysis on the agent's outputs, you can provide a sentiment analyzer function:
import os\nfrom swarms import Agent\nfrom swarm_models import OpenAIChat\nfrom textblob import TextBlob\n\ndef sentiment_analyzer(text):\n    analysis = TextBlob(text)\n    return analysis.sentiment.polarity\n\nagent = Agent(\n    agent_name=\"Sentiment-Analysis-Agent\",\n    llm=OpenAIChat(api_key=os.getenv(\"OPENAI_API_KEY\")),\n    sentiment_analyzer=sentiment_analyzer,\n    sentiment_threshold=0.5,\n    system_prompt=\"You are an agent that generates responses with sentiment analysis.\",\n)\n\nresponse = agent.run(\"Generate a positive statement about AI\")\nprint(response)\n
"},{"location":"swarms/structs/agent/#undo-functionality","title":"Undo Functionality","text":"# Undo the last interaction and recover the previous state\nresponse = agent.run(\"Another task\")\nprint(f\"Response: {response}\")\nprevious_state, message = agent.undo_last()\nprint(message)\n
"},{"location":"swarms/structs/agent/#response-filtering","title":"Response Filtering","text":"# Filter a word out of the agent's responses\nagent.add_response_filter(\"report\")\nresponse = agent.filtered_run(\"Generate a report on finance\")\nprint(response)\n
"},{"location":"swarms/structs/agent/#saving-and-loading-state","title":"Saving and Loading State","text":"# Save the agent state\nagent.save_state('saved_flow.json')\n\n# Load the agent state\nagent = Agent(llm=llm_instance, max_loops=5)\nagent.load('saved_flow.json')\nagent.run(\"Continue with the task\")\n
"},{"location":"swarms/structs/agent/#async-and-concurrent-execution","title":"Async and Concurrent Execution","text":"# Run a task concurrently\nresponse = await agent.run_concurrent(\"Concurrent task\")\nprint(response)\n\n# Run multiple tasks concurrently\ntasks = [\n {\"task\": \"Task 1\"},\n {\"task\": \"Task 2\", \"img\": \"path/to/image.jpg\"},\n {\"task\": \"Task 3\", \"custom_param\": 42}\n]\nresponses = agent.bulk_run(tasks)\nprint(responses)\n
"},{"location":"swarms/structs/agent/#various-other-settings","title":"Various other settings","text":"# # Convert the agent object to a dictionary\nprint(agent.to_dict())\nprint(agent.to_toml())\nprint(agent.model_dump_json())\nprint(agent.model_dump_yaml())\n\n# Ingest documents into the agent's knowledge base\nagent.ingest_docs(\"your_pdf_path.pdf\")\n\n# Receive a message from a user and process it\nagent.receive_message(name=\"agent_name\", message=\"message\")\n\n# Send a message from the agent to a user\nagent.send_agent_message(agent_name=\"agent_name\", message=\"message\")\n\n# Ingest multiple documents into the agent's knowledge base\nagent.ingest_docs(\"your_pdf_path.pdf\", \"your_csv_path.csv\")\n\n# Run the agent with a filtered system prompt\nagent.filtered_run(\n \"How can I establish a ROTH IRA to buy stocks and get a tax break? What are the criteria?\"\n)\n\n# Run the agent with multiple system prompts\nagent.bulk_run(\n [\n \"How can I establish a ROTH IRA to buy stocks and get a tax break? What are the criteria?\",\n \"Another system prompt\",\n ]\n)\n\n# Add a memory to the agent\nagent.add_memory(\"Add a memory to the agent\")\n\n# Check the number of available tokens for the agent\nagent.check_available_tokens()\n\n# Perform token checks for the agent\nagent.tokens_checks()\n\n# Print the dashboard of the agent\nagent.print_dashboard()\n\n\n# Fetch all the documents from the doc folders\nagent.get_docs_from_doc_folders()\n\n# Dump the model to a JSON file\nagent.model_dump_json()\nprint(agent.to_toml())\n
"},{"location":"swarms/structs/agent/#auto-generate-prompt-cpu-execution","title":"Auto Generate Prompt + CPU Execution","text":"import os\nfrom swarms import Agent\nfrom swarm_models import OpenAIChat\n\nfrom dotenv import load_dotenv\n\n# Load environment variables\nload_dotenv()\n\n# Retrieve the Groq API key from the environment variable\napi_key = os.getenv(\"GROQ_API_KEY\")\n\n# Initialize the model via Groq's OpenAI-compatible endpoint\nmodel = OpenAIChat(\n    openai_api_base=\"https://api.groq.com/openai/v1\",\n    openai_api_key=api_key,\n    model_name=\"llama-3.1-70b-versatile\",\n    temperature=0.1,\n)\n\n# Initialize the agent with automated prompt engineering enabled\nagent = Agent(\n    agent_name=\"Financial-Analysis-Agent\",\n    system_prompt=None,  # System prompt is dynamically generated\n    agent_description=None,\n    llm=model,\n    max_loops=1,\n    autosave=True,\n    dashboard=False,\n    verbose=False,\n    dynamic_temperature_enabled=True,\n    saved_state_path=\"finance_agent.json\",\n    user_name=\"Human:\",\n    return_step_meta=False,\n    output_type=\"string\",\n    streaming_on=False,\n    auto_generate_prompt=True,  # Enable automated prompt engineering\n)\n\n# Run the agent with a task description and specify the device\nagent.run(\n    \"How can I establish a ROTH IRA to buy stocks and get a tax break? What are the criteria\",\n    ## Will design a system prompt based on the task if description and system prompt are None\n    device=\"cpu\",\n)\n\n# Print the dynamically generated system prompt\nprint(agent.system_prompt)\n
"},{"location":"swarms/structs/agent/#agent-structured-outputs","title":"Agent Structured Outputs","text":"tools_list_dictionary
parameter, and use the str_to_dict
function to convert the output to a dictionary from dotenv import load_dotenv\n\nfrom swarms import Agent\nfrom swarms.prompts.finance_agent_sys_prompt import (\n FINANCIAL_AGENT_SYS_PROMPT,\n)\nfrom swarms.utils.str_to_dict import str_to_dict\n\nload_dotenv()\n\ntools = [\n {\n \"type\": \"function\",\n \"function\": {\n \"name\": \"get_stock_price\",\n \"description\": \"Retrieve the current stock price and related information for a specified company.\",\n \"parameters\": {\n \"type\": \"object\",\n \"properties\": {\n \"ticker\": {\n \"type\": \"string\",\n \"description\": \"The stock ticker symbol of the company, e.g. AAPL for Apple Inc.\",\n },\n \"include_history\": {\n \"type\": \"boolean\",\n \"description\": \"Indicates whether to include historical price data along with the current price.\",\n },\n \"time\": {\n \"type\": \"string\",\n \"format\": \"date-time\",\n \"description\": \"Optional parameter to specify the time for which the stock data is requested, in ISO 8601 format.\",\n },\n },\n \"required\": [\n \"ticker\",\n \"include_history\",\n \"time\",\n ],\n },\n },\n }\n]\n\n\n# Initialize the agent\nagent = Agent(\n agent_name=\"Financial-Analysis-Agent\",\n agent_description=\"Personal finance advisor agent\",\n system_prompt=FINANCIAL_AGENT_SYS_PROMPT,\n max_loops=1,\n tools_list_dictionary=tools,\n)\n\nout = agent.run(\n \"What is the current stock price for Apple Inc. (AAPL)? Include historical price data.\",\n)\n\nprint(out)\n\nprint(type(out))\n\nprint(str_to_dict(out))\n\nprint(type(str_to_dict(out)))\n
Use system_prompt
to guide the agent's behavior. Use tools
to extend the agent's capabilities for specific tasks. Use the retry_attempts
feature for robust execution. Use long_term_memory
for tasks that require persistent information. Use interactive
mode for real-time conversations, and dashboard
for monitoring. Use sentiment_analysis
for applications requiring tone management. Use autosave
and the save
/load
methods for continuity across sessions. Use the dynamic_context_window
and tokens_checks
methods to manage the context window. Use the concurrent
and async
methods for performance-critical applications. Use the analyze_feedback
method to review responses. Use artifacts_on
to save important outputs from agent execution. Set device
and device_id
appropriately for optimal performance. Use rag_every_loop
when continuous context from long-term memory is needed. Use scheduled_run_date
for automated task scheduling. By following these guidelines and leveraging the Swarm Agent's extensive features, you can create powerful, flexible, and efficient autonomous agents for a wide range of applications.
"},{"location":"swarms/structs/agent_docs_v1/","title":"Agent
Documentation","text":"Swarm Agent is a powerful autonomous agent framework designed to connect Language Models (LLMs) with various tools and long-term memory. This class provides the ability to ingest and process various types of documents such as PDFs, text files, Markdown files, JSON files, and more. The Agent structure offers a wide range of features to enhance the capabilities of LLMs and facilitate efficient task execution.
Conversational Loop: It establishes a conversational loop with a language model. This means it allows you to interact with the model in a back-and-forth manner, taking turns in the conversation.
Feedback Collection: The class allows users to provide feedback on the responses generated by the model. This feedback can be valuable for training and improving the model's responses over time.
Stoppable Conversation: You can define custom stopping conditions for the conversation, allowing you to stop the interaction based on specific criteria. For example, you can stop the conversation if a certain keyword is detected in the responses.
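A stopping condition is simply a callable applied to the latest response. A hypothetical example that stops when a sentinel keyword appears (the keyword and the Agent wiring shown in the comment are illustrative, not part of the framework):

```python
def stop_on_keyword(response: str) -> bool:
    """Return True when the response contains the stop keyword."""
    return "TASK_COMPLETE" in response

# Hypothetical wiring:
# agent = Agent(..., stopping_condition=stop_on_keyword)
```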
Retry Mechanism: The class includes a retry mechanism that can be helpful if there are issues generating responses from the model. It attempts to generate a response multiple times before raising an error.
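The retry behavior described above can be pictured as a small wrapper around the model call. This is a simplified, library-free sketch of the idea, not the Agent's actual implementation; the real class drives this via its retry_attempts and retry_interval attributes:

```python
import time

def call_with_retries(fn, retry_attempts: int = 3, retry_interval: float = 1.0):
    """Attempt fn() up to retry_attempts times, sleeping between tries,
    and re-raise the last error if every attempt fails."""
    last_exc = None
    for _ in range(retry_attempts):
        try:
            return fn()
        except Exception as exc:  # in practice, narrow to transient errors
            last_exc = exc
            time.sleep(retry_interval)
    raise last_exc
```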
graph TD\n A[Task Initiation] -->|Receives Task| B[Initial LLM Processing]\n B -->|Interprets Task| C[Tool Usage]\n C -->|Calls Tools| D[Function 1]\n C -->|Calls Tools| E[Function 2]\n D -->|Returns Data| C\n E -->|Returns Data| C\n C -->|Provides Data| F[Memory Interaction]\n F -->|Stores and Retrieves Data| G[RAG System]\n G -->|ChromaDB/Pinecone| H[Enhanced Data]\n F -->|Provides Enhanced Data| I[Final LLM Processing]\n I -->|Generates Final Response| J[Output]\n C -->|No Tools Available| K[Skip Tool Usage]\n K -->|Proceeds to Memory Interaction| F\n F -->|No Memory Available| L[Skip Memory Interaction]\n L -->|Proceeds to Final LLM Processing| I
"},{"location":"swarms/structs/agent_docs_v1/#agent-attributes","title":"Agent
Attributes","text":"Attribute Description id
A unique identifier for the agent instance. llm
The language model instance used by the agent. template
The template used for formatting responses. max_loops
The maximum number of loops the agent can run. stopping_condition
A callable function that determines when the agent should stop looping. loop_interval
The interval (in seconds) between loops. retry_attempts
The number of retry attempts for failed LLM calls. retry_interval
The interval (in seconds) between retry attempts. return_history
A boolean indicating whether the agent should return the conversation history. stopping_token
A token that, when present in the response, stops the agent from looping. dynamic_loops
A boolean indicating whether the agent should dynamically determine the number of loops. interactive
A boolean indicating whether the agent should run in interactive mode. dashboard
A boolean indicating whether the agent should display a dashboard. agent_name
The name of the agent instance. agent_description
A description of the agent instance. system_prompt
The system prompt used to initialize the conversation. tools
A list of callable functions representing tools the agent can use. dynamic_temperature_enabled
A boolean indicating whether the agent should dynamically adjust the temperature of the LLM. sop
The standard operating procedure for the agent. sop_list
A list of strings representing the standard operating procedure. saved_state_path
The file path for saving and loading the agent's state. autosave
A boolean indicating whether the agent should automatically save its state. context_length
The maximum length of the context window (in tokens) for the LLM. user_name
The name used to represent the user in the conversation. self_healing_enabled
A boolean indicating whether the agent should attempt to self-heal in case of errors. code_interpreter
A boolean indicating whether the agent should interpret and execute code snippets. multi_modal
A boolean indicating whether the agent should support multimodal inputs (e.g., text and images). pdf_path
The file path of a PDF document to be ingested. list_of_pdf
A list of file paths for PDF documents to be ingested. tokenizer
An instance of a tokenizer used for token counting and management. long_term_memory
An instance of a BaseVectorDatabase
implementation for long-term memory management. preset_stopping_token
A boolean indicating whether the agent should use a preset stopping token. traceback
An object used for traceback handling. traceback_handlers
A list of traceback handlers. streaming_on
A boolean indicating whether the agent should stream its responses. docs
A list of document paths or contents to be ingested. docs_folder
The path to a folder containing documents to be ingested. verbose
A boolean indicating whether the agent should print verbose output. parser
A callable function used for parsing input data. best_of_n
An integer indicating the number of best responses to generate (for sampling). callback
A callable function to be called after each agent loop. metadata
A dictionary containing metadata for the agent. callbacks
A list of callable functions to be called during the agent's execution. logger_handler
A handler for logging messages. search_algorithm
A callable function representing the search algorithm for long-term memory retrieval. logs_to_filename
The file path for logging agent activities. evaluator
A callable function used for evaluating the agent's responses. output_json
A boolean indicating whether the agent's output should be in JSON format. stopping_func
A callable function used as a stopping condition for the agent. custom_loop_condition
A callable function used as a custom loop condition for the agent. sentiment_threshold
A float value representing the sentiment threshold for evaluating responses. custom_exit_command
A string representing a custom command for exiting the agent's loop. sentiment_analyzer
A callable function used for sentiment analysis on the agent's outputs. limit_tokens_from_string
A callable function used for limiting the number of tokens in a string. custom_tools_prompt
A callable function used for generating a custom prompt for tool usage. tool_schema
A data structure representing the schema for the agent's tools. output_type
A type representing the expected output type of the agent's responses. function_calling_type
A string representing the type of function calling (e.g., \"json\"). output_cleaner
A callable function used for cleaning the agent's output. function_calling_format_type
A string representing the format type for function calling (e.g., \"OpenAI\"). list_base_models
A list of base models used for generating tool schemas. metadata_output_type
A string representing the output type for metadata. state_save_file_type
A string representing the file type for saving the agent's state (e.g., \"json\", \"yaml\"). chain_of_thoughts
A boolean indicating whether the agent should use the chain of thoughts technique. algorithm_of_thoughts
A boolean indicating whether the agent should use the algorithm of thoughts technique. tree_of_thoughts
A boolean indicating whether the agent should use the tree of thoughts technique. tool_choice
A string representing the method for tool selection (e.g., \"auto\"). execute_tool
A boolean indicating whether the agent should execute tools. rules
A string representing the rules for the agent's behavior. planning
A boolean indicating whether the agent should perform planning. planning_prompt
A string representing the prompt for planning. device
A string representing the device on which the agent should run. custom_planning_prompt
A string representing a custom prompt for planning. memory_chunk_size
An integer representing the maximum size of memory chunks for long-term memory retrieval. agent_ops_on
A boolean indicating whether agent operations should be enabled. return_step_meta
A boolean indicating whether to return JSON of all the steps and additional metadata. output_type
A Literal type indicating whether to output \"string\", \"str\", \"list\", \"json\", \"dict\", or \"yaml\"."},{"location":"swarms/structs/agent_docs_v1/#agent-methods","title":"Agent
Methods","text":"Method Description Inputs Usage Example run(task, img=None, *args, **kwargs)
Runs the autonomous agent loop to complete the given task. task
(str): The task to be performed.img
(str, optional): Path to an image file, if the task involves image processing.*args
, **kwargs
: Additional arguments to pass to the language model. response = agent.run(\"Generate a report on financial performance.\")
__call__(task, img=None, *args, **kwargs)
An alternative way to call the run
method. Same as run
. response = agent(\"Generate a report on financial performance.\")
parse_and_execute_tools(response, *args, **kwargs)
Parses the agent's response and executes any tools mentioned in it. response
(str): The agent's response to be parsed.*args
, **kwargs
: Additional arguments to pass to the tool execution. agent.parse_and_execute_tools(response)
long_term_memory_prompt(query, *args, **kwargs)
Generates a prompt for querying the agent's long-term memory. query
(str): The query to search for in long-term memory.*args
, **kwargs
: Additional arguments to pass to the long-term memory retrieval. memory_retrieval = agent.long_term_memory_prompt(\"financial performance\")
add_memory(message)
Adds a message to the agent's memory. message
(str): The message to add."},{"location":"swarms/structs/agent_docs_v1/#features","title":"Features","text":"First run the following:
pip3 install -U swarms\n
And, then now you can get started with the following:
import os\nfrom swarms import Agent\nfrom swarm_models import OpenAIChat\nfrom swarms.prompts.finance_agent_sys_prompt import (\n    FINANCIAL_AGENT_SYS_PROMPT,\n)\n\n# Get the OpenAI API key from the environment variable\napi_key = os.getenv(\"OPENAI_API_KEY\")\n\n# Create an instance of the OpenAIChat class\nmodel = OpenAIChat(\n    api_key=api_key, model_name=\"gpt-4o-mini\", temperature=0.1\n)\n\n# Initialize the agent\nagent = Agent(\n    agent_name=\"Financial-Analysis-Agent_sas_chicken_eej\",\n    system_prompt=FINANCIAL_AGENT_SYS_PROMPT,\n    llm=model,\n    max_loops=1,\n    autosave=True,\n    dashboard=False,\n    verbose=True,\n    dynamic_temperature_enabled=True,\n    saved_state_path=\"finance_agent.json\",\n    user_name=\"swarms_corp\",\n    retry_attempts=1,\n    context_length=200000,\n    return_step_meta=False,\n    output_type=\"str\",\n)\n\n\nout = agent.run(\n    \"How can I establish a ROTH IRA to buy stocks and get a tax break? What are the criteria\"\n)\nprint(out)\n
This example initializes an instance of the Agent
class with an OpenAI language model and a maximum of 1 loop. The run()
method is then called with a question about establishing a ROTH IRA, and the agent's response is printed.
The Swarm Agent provides numerous advanced features and customization options. Here are a few examples of how to leverage these features:
"},{"location":"swarms/structs/agent_docs_v1/#tool-integration","title":"Tool Integration","text":"To integrate tools with the Swarm Agent, you can pass a list of callable functions with types and doc strings to the tools
parameter when initializing the Agent
instance. The agent will automatically convert these functions into an OpenAI function calling schema and make them available for use during task execution.
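To make the conversion step concrete, here is a standard-library-only sketch of the idea (this is not the Swarms converter itself): a plain Python function's signature and docstring can be mapped to an OpenAI-style function-calling schema. The `to_schema` helper and `PY_TO_JSON` table are illustrative names, not Swarms APIs.

```python
import inspect

# Minimal mapping from Python annotations to JSON Schema types
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def to_schema(fn):
    """Derive an OpenAI-style function-calling schema from a function."""
    sig = inspect.signature(fn)
    props = {}
    required = []
    for name, param in sig.parameters.items():
        props[name] = {"type": PY_TO_JSON.get(param.annotation, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)
    doc = (fn.__doc__ or "").strip()
    description = doc.splitlines()[0] if doc else ""
    return {
        "name": fn.__name__,
        "description": description,
        "parameters": {"type": "object", "properties": props, "required": required},
    }

def terminal(code: str):
    """Run code in the terminal."""
    ...

schema = to_schema(terminal)
print(schema["name"], schema["parameters"]["required"])  # terminal ['code']
```

This is why the tools passed to the agent need type hints and docstrings: without them, there is nothing from which to build the parameter types and tool description.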
from swarms import Agent\nfrom swarm_models import OpenAIChat\nfrom swarms_memory import ChromaDB\nimport subprocess\nimport os\n\n# Making an instance of the ChromaDB class\nmemory = ChromaDB(\n metric=\"cosine\",\n n_results=3,\n output_dir=\"results\",\n docs_folder=\"docs\",\n)\n\n# Model\nmodel = OpenAIChat(\n api_key=os.getenv(\"OPENAI_API_KEY\"),\n model_name=\"gpt-4o-mini\",\n temperature=0.1,\n)\n\n\n# Tools in swarms are simple python functions and docstrings\ndef terminal(\n code: str,\n):\n \"\"\"\n Run code in the terminal.\n\n Args:\n code (str): The code to run in the terminal.\n\n Returns:\n str: The output of the code.\n \"\"\"\n out = subprocess.run(\n code, shell=True, capture_output=True, text=True\n ).stdout\n return str(out)\n\n\ndef browser(query: str):\n \"\"\"\n Search the query in the browser with the `browser` tool.\n\n Args:\n query (str): The query to search in the browser.\n\n Returns:\n str: The search results.\n \"\"\"\n import webbrowser\n\n url = f\"https://www.google.com/search?q={query}\"\n webbrowser.open(url)\n return f\"Searching for {query} in the browser.\"\n\n\ndef create_file(file_path: str, content: str):\n \"\"\"\n Create a file using the file editor tool.\n\n Args:\n file_path (str): The path to the file.\n content (str): The content to write to the file.\n\n Returns:\n str: The result of the file creation operation.\n \"\"\"\n with open(file_path, \"w\") as file:\n file.write(content)\n return f\"File {file_path} created successfully.\"\n\n\ndef file_editor(file_path: str, mode: str, content: str):\n \"\"\"\n Edit a file using the file editor tool.\n\n Args:\n file_path (str): The path to the file.\n mode (str): The mode to open the file in.\n content (str): The content to write to the file.\n\n Returns:\n str: The result of the file editing operation.\n \"\"\"\n with open(file_path, mode) as file:\n file.write(content)\n return f\"File {file_path} edited successfully.\"\n\n\n# Agent\nagent = Agent(\n 
agent_name=\"Devin\",\n system_prompt=(\n \"Autonomous agent that can interact with humans and other\"\n \" agents. Be Helpful and Kind. Use the tools provided to\"\n \" assist the user. Return all code in markdown format.\"\n ),\n llm=model,\n max_loops=\"auto\",\n autosave=True,\n dashboard=False,\n streaming_on=True,\n verbose=True,\n stopping_token=\"<DONE>\",\n interactive=True,\n tools=[terminal, browser, file_editor, create_file],\n streaming=True,\n long_term_memory=memory,\n)\n\n# Run the agent\nout = agent(\n \"Create a CSV file with the latest tax rates for C corporations in the following ten states and the District of Columbia: Alabama, California, Florida, Georgia, Illinois, New York, North Carolina, Ohio, Texas, and Washington.\"\n)\nprint(out)\n
"},{"location":"swarms/structs/agent_docs_v1/#long-term-memory-management","title":"Long-term Memory Management","text":"The Swarm Agent supports integration with various vector databases for long-term memory management. You can pass an instance of a BaseVectorDatabase
implementation to the long_term_memory
parameter when initializing the Agent
.
import os\n\nfrom swarms_memory import ChromaDB\n\nfrom swarms import Agent\nfrom swarm_models import Anthropic\nfrom swarms.prompts.finance_agent_sys_prompt import (\n FINANCIAL_AGENT_SYS_PROMPT,\n)\n\n# Initialize the ChromaDB client\nchromadb = ChromaDB(\n metric=\"cosine\",\n output_dir=\"finance_agent_rag\",\n # docs_folder=\"artifacts\", # Folder of your documents\n)\n\n# Model\nmodel = Anthropic(anthropic_api_key=os.getenv(\"ANTHROPIC_API_KEY\"))\n\n\n# Initialize the agent\nagent = Agent(\n agent_name=\"Financial-Analysis-Agent\",\n system_prompt=FINANCIAL_AGENT_SYS_PROMPT,\n agent_description=\"Agent creates \",\n llm=model,\n max_loops=\"auto\",\n autosave=True,\n dashboard=False,\n verbose=True,\n streaming_on=True,\n dynamic_temperature_enabled=True,\n saved_state_path=\"finance_agent.json\",\n user_name=\"swarms_corp\",\n retry_attempts=3,\n context_length=200000,\n long_term_memory=chromadb,\n)\n\n\nagent.run(\n \"What are the components of a startup's stock incentive equity plan?\"\n)\n
"},{"location":"swarms/structs/agent_docs_v1/#document-ingestion","title":"Document Ingestion","text":"The Swarm Agent can ingest various types of documents, such as PDFs, text files, Markdown files, and JSON files. You can pass a list of document paths or contents to the docs
parameter when initializing the Agent
.
from swarms.structs import Agent\n\n# Initialize the agent with documents\nagent = Agent(llm=llm, max_loops=3, docs=[\"path/to/doc1.pdf\", \"path/to/doc2.txt\"])\n
"},{"location":"swarms/structs/agent_docs_v1/#interactive-mode","title":"Interactive Mode","text":"The Swarm Agent supports an interactive mode, where users can engage in real-time communication with the agent. To enable interactive mode, set the interactive
parameter to True
when initializing the Agent
.
from swarms.structs import Agent\n\n# Initialize the agent in interactive mode\nagent = Agent(llm=llm, max_loops=3, interactive=True)\n\n# Run the agent in interactive mode\nagent.interactive_run()\n
"},{"location":"swarms/structs/agent_docs_v1/#sentiment-analysis","title":"Sentiment Analysis","text":"The Swarm Agent can perform sentiment analysis on its generated outputs using a sentiment analyzer function. You can pass a callable function to the sentiment_analyzer
parameter when initializing the Agent
.
from swarms.structs import Agent\nfrom my_sentiment_analyzer import sentiment_analyzer_function\n\n# Initialize the agent with a sentiment analyzer\nagent = Agent(\n agent_name=\"sentiment-analyzer-agent-01\",\n system_prompt=\"...\",\n llm=llm,\n max_loops=3,\n sentiment_analyzer=sentiment_analyzer_function,\n)\n
"},{"location":"swarms/structs/agent_docs_v1/#undo-functionality","title":"Undo Functionality","text":"# Feature 2: Undo functionality\nresponse = agent.run(\"Another task\")\nprint(f\"Response: {response}\")\nprevious_state, message = agent.undo_last()\nprint(message)\n
"},{"location":"swarms/structs/agent_docs_v1/#response-filtering","title":"Response Filtering","text":"# Feature 3: Response filtering\nagent.add_response_filter(\"report\")\nresponse = agent.filtered_run(\"Generate a report on finance\")\nprint(response)\n
"},{"location":"swarms/structs/agent_docs_v1/#saving-and-loading-state","title":"Saving and Loading State","text":"# Save the agent state\nagent.save_state('saved_flow.json')\n\n# Load the agent state\nagent = Agent(llm=llm_instance, max_loops=5)\nagent.load('saved_flow.json')\nagent.run(\"Continue with the task\")\n
"},{"location":"swarms/structs/agent_docs_v1/#async-and-concurrent-execution","title":"Async and Concurrent Execution","text":"import asyncio\n\n# Run a task concurrently (await must occur inside an async function)\nasync def main():\n response = await agent.run_concurrent(\"Concurrent task\")\n print(response)\n\nasyncio.run(main())\n\n# Run multiple tasks concurrently\ntasks = [\n {\"task\": \"Task 1\"},\n {\"task\": \"Task 2\", \"img\": \"path/to/image.jpg\"},\n {\"task\": \"Task 3\", \"custom_param\": 42}\n]\nresponses = agent.bulk_run(tasks)\nprint(responses)\n
"},{"location":"swarms/structs/agent_docs_v1/#various-other-settings","title":"Various other settings","text":"# Convert the agent object to a dictionary\nprint(agent.to_dict())\nprint(agent.to_toml())\nprint(agent.model_dump_json())\nprint(agent.model_dump_yaml())\n\n# Ingest documents into the agent's knowledge base\nagent.ingest_docs(\"your_pdf_path.pdf\")\n\n# Receive a message from a user and process it\nagent.receive_message(name=\"agent_name\", message=\"message\")\n\n# Send a message from the agent to a user\nagent.send_agent_message(agent_name=\"agent_name\", message=\"message\")\n\n# Ingest multiple documents into the agent's knowledge base\nagent.ingest_docs(\"your_pdf_path.pdf\", \"your_csv_path.csv\")\n\n# Run the agent with a filtered system prompt\nagent.filtered_run(\n \"How can I establish a ROTH IRA to buy stocks and get a tax break? What are the criteria?\"\n)\n\n# Run the agent on multiple tasks\nagent.bulk_run(\n [\n \"How can I establish a ROTH IRA to buy stocks and get a tax break? What are the criteria?\",\n \"Another task\",\n ]\n)\n\n# Add a memory to the agent\nagent.add_memory(\"Add a memory to the agent\")\n\n# Check the number of available tokens for the agent\nagent.check_available_tokens()\n\n# Perform token checks for the agent\nagent.tokens_checks()\n\n# Print the dashboard of the agent\nagent.print_dashboard()\n\n\n# Fetch all the documents from the doc folders\nagent.get_docs_from_doc_folders()\n\n# Dump the model to a JSON file\nagent.model_dump_json()\nprint(agent.to_toml())\n
"},{"location":"swarms/structs/agent_mcp/","title":"Agent MCP Integration Guide","text":"Direct MCP Server Connection
Connect agents to MCP servers via URL for seamless integration
Quick Start
Dynamic Tool Discovery
Automatically fetch and utilize tools from MCP servers
Tool Discovery
Real-time Communication
Server-sent Events (SSE) for live data streaming
Configuration
Structured Output
Process and format responses with multiple output types
Examples
The Model Context Protocol (MCP) integration enables Swarms agents to dynamically connect to external tools and services through a standardized protocol. This powerful feature expands agent capabilities by providing access to APIs, databases, and specialized services.
What is MCP?
The Model Context Protocol is a standardized way for AI agents to interact with external tools and services, providing a consistent interface for tool discovery and execution.
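To see what "tool discovery and execution" means in the abstract, here is a pure-Python conceptual sketch; it is not the real MCP wire protocol, and the names (`make_server`, `discover_tools`, `call_tool`) are illustrative. A server exposes named tool schemas, a client discovers those schemas without seeing the implementations, and then invokes a tool by name with keyword arguments.

```python
def make_server():
    """A stand-in 'server' exposing two tools with simple schemas."""
    return {
        "get_price": {
            "description": "Fetch a price for a symbol",
            "params": {"symbol": "str"},
            "fn": lambda symbol: {"symbol": symbol, "price": 42.0},
        },
        "echo": {
            "description": "Echo back a message",
            "params": {"message": "str"},
            "fn": lambda message: {"message": message},
        },
    }

def discover_tools(server):
    # The client sees only names and schemas, never the implementations
    return {name: {k: v for k, v in spec.items() if k != "fn"}
            for name, spec in server.items()}

def call_tool(server, name, **kwargs):
    # Execute a discovered tool by name; unknown names return an error
    if name not in server:
        return {"error": f"unknown tool: {name}"}
    return server[name]["fn"](**kwargs)

server = make_server()
print(sorted(discover_tools(server)))  # ['echo', 'get_price']
print(call_tool(server, "get_price", symbol="BTC"))
```

The real protocol adds transports (such as SSE), structured JSON schemas, and error handling, but the discovery-then-invoke contract is the same shape.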
"},{"location":"swarms/structs/agent_mcp/#features-matrix","title":"Features Matrix","text":"\u2705 Current Capabilities\ud83d\udea7 In Development Feature Status Description Direct MCP Connection \u2705 Ready Connect via URL to MCP servers Tool Discovery \u2705 Ready Auto-fetch available tools SSE Communication \u2705 Ready Real-time server communication Multiple Tool Execution \u2705 Ready Execute multiple tools per session Structured Output \u2705 Ready Format responses in multiple types Feature Status Expected MCPConnection Model \ud83d\udea7 Development Q1 2024 Multiple Server Support \ud83d\udea7 Planned Q2 2024 Parallel Function Calling \ud83d\udea7 Research Q2 2024 Auto-discovery \ud83d\udea7 Planned Q3 2024"},{"location":"swarms/structs/agent_mcp/#quick-start","title":"Quick Start","text":"Prerequisites
System RequirementsInstallationpip install swarms\n
"},{"location":"swarms/structs/agent_mcp/#step-1-basic-agent-setup","title":"Step 1: Basic Agent Setup","text":"Simple MCP Agent
from swarms import Agent\n\n# Initialize agent with MCP integration\nagent = Agent(\n agent_name=\"Financial-Analysis-Agent\",\n agent_description=\"AI-powered financial advisor\",\n max_loops=1,\n mcp_url=\"http://localhost:8000/sse\", # Your MCP server\n output_type=\"all\",\n)\n\n# Execute task using MCP tools\nresult = agent.run(\n \"Get current Bitcoin price and analyze market trends\"\n)\nprint(result)\n
"},{"location":"swarms/structs/agent_mcp/#step-2-advanced-configuration","title":"Step 2: Advanced Configuration","text":"Production-Ready Setup
from swarms import Agent\nfrom swarms.prompts.finance_agent_sys_prompt import FINANCIAL_AGENT_SYS_PROMPT\n\nagent = Agent(\n agent_name=\"Advanced-Financial-Agent\",\n agent_description=\"Comprehensive market analysis agent\",\n system_prompt=FINANCIAL_AGENT_SYS_PROMPT,\n max_loops=3,\n mcp_url=\"http://production-server:8000/sse\",\n output_type=\"json\",\n # Additional parameters for production\n temperature=0.1,\n verbose=True,\n)\n
"},{"location":"swarms/structs/agent_mcp/#integration-flow","title":"Integration Flow","text":"The following diagram illustrates the complete MCP integration workflow:
graph TD\n A[\ud83d\ude80 Agent Receives Task] --> B[\ud83d\udd17 Connect to MCP Server]\n B --> C[\ud83d\udd0d Discover Available Tools]\n C --> D[\ud83e\udde0 Analyze Task Requirements]\n D --> E[\ud83d\udcdd Generate Tool Request]\n E --> F[\ud83d\udce4 Send to MCP Server]\n F --> G[\u2699\ufe0f Server Processes Request]\n G --> H[\ud83d\udce5 Receive Response]\n H --> I[\ud83d\udd04 Process & Validate]\n I --> J[\ud83d\udcca Summarize Results]\n J --> K[\u2705 Return Final Output]\n\n class A,K startEnd\n class D,I,J process\n class F,G,H communication
"},{"location":"swarms/structs/agent_mcp/#detailed-process-breakdown","title":"Detailed Process Breakdown","text":"Process Steps
1-3: Initialization4-6: Execution7-9: Processing10-11: CompletionTask Initiation - Agent receives user query
Server Connection - Establish MCP server link
Tool Discovery - Fetch available tool schemas
Task Analysis - Determine required tools
Request Generation - Create structured API calls
Server Communication - Send requests via SSE
Server Processing - MCP server executes tools
Response Handling - Receive and validate data
Result Processing - Parse and structure output
Summarization - Generate user-friendly summary
Final Output - Return complete response
"},{"location":"swarms/structs/agent_mcp/#configuration-options","title":"Configuration Options","text":""},{"location":"swarms/structs/agent_mcp/#agent-parameters","title":"Agent Parameters","text":"Configuration Reference
Parameter Type Description Default Examplemcp_url
str
MCP server endpoint None
\"http://localhost:8000/sse\"
output_type
str
Response format \"str\"
\"json\"
, \"all\"
, \"dict\"
max_loops
int
Execution iterations 1
3
temperature
float
Response creativity 0.1
0.1-1.0
verbose
bool
Debug logging False
True
"},{"location":"swarms/structs/agent_mcp/#example-implementations","title":"Example Implementations","text":""},{"location":"swarms/structs/agent_mcp/#cryptocurrency-trading-agent","title":"Cryptocurrency Trading Agent","text":"Crypto Price Monitor
from swarms import Agent\n\ncrypto_agent = Agent(\n agent_name=\"Crypto-Trading-Agent\",\n agent_description=\"Real-time cryptocurrency market analyzer\",\n max_loops=2,\n mcp_url=\"http://crypto-server:8000/sse\",\n output_type=\"json\",\n temperature=0.1,\n)\n\n# Multi-exchange price comparison\nresult = crypto_agent.run(\n \"\"\"\n Compare Bitcoin and Ethereum prices across OKX and HTX exchanges.\n Calculate arbitrage opportunities and provide trading recommendations.\n \"\"\"\n)\n
"},{"location":"swarms/structs/agent_mcp/#financial-analysis-suite","title":"Financial Analysis Suite","text":"Advanced Financial Agent
from swarms import Agent\nfrom swarms.prompts.finance_agent_sys_prompt import FINANCIAL_AGENT_SYS_PROMPT\n\nfinancial_agent = Agent(\n agent_name=\"Financial-Analysis-Suite\",\n agent_description=\"Comprehensive financial market analyst\",\n system_prompt=FINANCIAL_AGENT_SYS_PROMPT,\n max_loops=4,\n mcp_url=\"http://finance-api:8000/sse\",\n output_type=\"all\",\n temperature=0.2,\n)\n\n# Complex market analysis\nanalysis = financial_agent.run(\n \"\"\"\n Perform a comprehensive analysis of Tesla (TSLA) stock:\n 1. Current price and technical indicators\n 2. Recent news sentiment analysis\n 3. Competitor comparison (GM, Ford)\n 4. Investment recommendation with risk assessment\n \"\"\"\n)\n
"},{"location":"swarms/structs/agent_mcp/#custom-industry-agent","title":"Custom Industry Agent","text":"Healthcare Data Agent
from swarms import Agent\n\nhealthcare_agent = Agent(\n agent_name=\"Healthcare-Data-Agent\",\n agent_description=\"Medical data analysis and research assistant\",\n max_loops=3,\n mcp_url=\"http://medical-api:8000/sse\",\n output_type=\"dict\",\n system_prompt=\"\"\"\n You are a healthcare data analyst. Use available medical databases\n and research tools to provide accurate, evidence-based information.\n Always cite sources and include confidence levels.\n \"\"\",\n)\n\nresearch = healthcare_agent.run(\n \"Research latest treatments for Type 2 diabetes and their efficacy rates\"\n)\n
"},{"location":"swarms/structs/agent_mcp/#mcp-server-development","title":"MCP Server Development","text":""},{"location":"swarms/structs/agent_mcp/#fastmcp-server-example","title":"FastMCP Server Example","text":"Building a Custom MCP Server
from mcp.server.fastmcp import FastMCP\nimport requests\nfrom typing import Optional\nimport asyncio\n\n# Initialize MCP server\nmcp = FastMCP(\"crypto_analysis_server\")\n\n@mcp.tool(\n name=\"get_crypto_price\",\n description=\"Fetch current cryptocurrency price with market data\",\n)\ndef get_crypto_price(\n symbol: str, \n currency: str = \"USD\",\n include_24h_change: bool = True\n) -> dict:\n \"\"\"\n Get real-time cryptocurrency price and market data.\n\n Args:\n symbol: Cryptocurrency symbol (e.g., BTC, ETH)\n currency: Target currency for price (default: USD)\n include_24h_change: Include 24-hour price change data\n \"\"\"\n try:\n url = f\"https://api.coingecko.com/api/v3/simple/price\"\n params = {\n \"ids\": symbol.lower(),\n \"vs_currencies\": currency.lower(),\n \"include_24hr_change\": include_24h_change\n }\n\n response = requests.get(url, params=params, timeout=10)\n response.raise_for_status()\n\n data = response.json()\n return {\n \"symbol\": symbol.upper(),\n \"price\": data[symbol.lower()][currency.lower()],\n \"currency\": currency.upper(),\n \"change_24h\": data[symbol.lower()].get(\"24h_change\", 0),\n \"timestamp\": \"2024-01-15T10:30:00Z\"\n }\n\n except Exception as e:\n return {\"error\": f\"Failed to fetch price: {str(e)}\"}\n\n@mcp.tool(\n name=\"analyze_market_sentiment\",\n description=\"Analyze cryptocurrency market sentiment from social media\",\n)\ndef analyze_market_sentiment(symbol: str, timeframe: str = \"24h\") -> dict:\n \"\"\"Analyze market sentiment for a cryptocurrency.\"\"\"\n # Implement sentiment analysis logic\n return {\n \"symbol\": symbol,\n \"sentiment_score\": 0.75,\n \"sentiment\": \"Bullish\",\n \"confidence\": 0.85,\n \"timeframe\": timeframe\n }\n\nif __name__ == \"__main__\":\n mcp.run(transport=\"sse\")\n
"},{"location":"swarms/structs/agent_mcp/#server-best-practices","title":"Server Best Practices","text":"Server Development Guidelines
\ud83c\udfd7\ufe0f Architecture\ud83d\udd12 Security\u26a1 PerformanceImportant Limitations
"},{"location":"swarms/structs/agent_mcp/#mcpconnection-model","title":"\ud83d\udea7 MCPConnection Model","text":"The enhanced connection model is under development:
# \u274c Not available yet\nfrom swarms.schemas.mcp_schemas import MCPConnection\n\nmcp_config = MCPConnection(\n url=\"http://server:8000/sse\",\n headers={\"Authorization\": \"Bearer token\"},\n timeout=30,\n retry_attempts=3\n)\n\n# \u2705 Use direct URL instead\nmcp_url = \"http://server:8000/sse\"\n
"},{"location":"swarms/structs/agent_mcp/#single-server-limitation","title":"\ud83d\udea7 Single Server Limitation","text":"Currently supports one server per agent:
# \u274c Multiple servers not supported\nmcp_servers = [\n \"http://server1:8000/sse\",\n \"http://server2:8000/sse\"\n]\n\n# \u2705 Single server only\nmcp_url = \"http://primary-server:8000/sse\"\n
"},{"location":"swarms/structs/agent_mcp/#sequential-execution","title":"\ud83d\udea7 Sequential Execution","text":"Tools execute sequentially, not in parallel:
# Current: tool1() \u2192 tool2() \u2192 tool3()\n# Future: tool1() | tool2() | tool3() (parallel)\n
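The practical impact of sequential execution can be sketched with stand-in "tools" (plain functions, no Swarms APIs): three tools of ~0.1s each cost ~0.3s run back to back, while a hypothetical parallel mode could overlap them.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def make_tool(name):
    # Each stand-in tool sleeps briefly to simulate I/O latency
    def tool():
        time.sleep(0.1)
        return name
    return tool

tools = [make_tool(n) for n in ("tool1", "tool2", "tool3")]

# Current behavior: tools run one after another
start = time.perf_counter()
sequential = [t() for t in tools]
seq_elapsed = time.perf_counter() - start

# Hypothetical parallel behavior: overlap the waits in a thread pool
start = time.perf_counter()
with ThreadPoolExecutor() as pool:
    parallel = list(pool.map(lambda t: t(), tools))
par_elapsed = time.perf_counter() - start

print(sequential == parallel)  # True: same results either way
print(f"sequential {seq_elapsed:.2f}s vs parallel {par_elapsed:.2f}s")
```

For I/O-bound tools (API calls, database queries), this is why parallel function calling is on the roadmap: results are identical, only wall-clock time differs.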
"},{"location":"swarms/structs/agent_mcp/#troubleshooting","title":"Troubleshooting","text":""},{"location":"swarms/structs/agent_mcp/#common-issues-solutions","title":"Common Issues & Solutions","text":"Connection Problems
Server UnreachableAuthentication ErrorsSSL/TLS IssuesSymptoms: Connection timeout or refused
Solutions:
# Check server status\ncurl -I http://localhost:8000/sse\n\n# Verify port is open\nnetstat -tulpn | grep :8000\n\n# Test network connectivity\nping your-server-host\n
Symptoms: 401/403 HTTP errors
Solutions:
# Verify API credentials\nheaders = {\"Authorization\": \"Bearer your-token\"}\n\n# Check token expiration\n# Validate permissions\n
Symptoms: Certificate errors
Solutions:
# For development only\nimport ssl\nssl._create_default_https_context = ssl._create_unverified_context\n
Tool Discovery Failures
Empty Tool ListSchema Validation ErrorsSymptoms: No tools found from server
Debugging:
# Check server tool registration\n@mcp.tool(name=\"tool_name\", description=\"...\")\ndef your_tool():\n pass\n\n# Verify server startup logs\n# Check tool endpoint responses\n
Symptoms: Invalid tool parameters
Solutions:
# Ensure proper type hints\ndef tool(param: str, optional: int = 0) -> dict:\n return {\"result\": \"success\"}\n\n# Validate parameter types\n# Check required vs optional parameters\n
Performance Issues
Slow Response TimesMemory UsageSymptoms: Long wait times for responses
Optimization:
# Increase timeout\nagent = Agent(\n mcp_url=\"http://server:8000/sse\",\n timeout=60, # seconds\n)\n\n# Optimize server performance\n# Use connection pooling\n# Implement caching\n
Symptoms: High memory consumption
Solutions:
# Limit max_loops\nagent = Agent(max_loops=2)\n\n# Use streaming for large responses\n# Implement garbage collection\n
"},{"location":"swarms/structs/agent_mcp/#debugging-tools","title":"Debugging Tools","text":"Debug Configuration
import logging\n\n# Enable debug logging\nlogging.basicConfig(level=logging.DEBUG)\n\nagent = Agent(\n agent_name=\"Debug-Agent\",\n mcp_url=\"http://localhost:8000/sse\",\n verbose=True, # Enable verbose output\n output_type=\"all\", # Get full execution trace\n)\n\n# Monitor network traffic\n# Check server logs\n# Use profiling tools\n
"},{"location":"swarms/structs/agent_mcp/#security-best-practices","title":"Security Best Practices","text":""},{"location":"swarms/structs/agent_mcp/#authentication-authorization","title":"Authentication & Authorization","text":"Security Checklist
\ud83d\udd11 Authentication\ud83d\udee1\ufe0f Authorization\ud83d\udd12 Data ProtectionProduction Security Setup
import os\nfrom swarms import Agent\n\n# Secure configuration\nagent = Agent(\n agent_name=\"Production-Agent\",\n mcp_url=os.getenv(\"MCP_SERVER_URL\"), # From environment\n # Additional security headers would go here when MCPConnection is available\n verbose=False, # Disable verbose logging in production\n output_type=\"json\", # Structured output only\n)\n\n# Environment variables (.env file)\n\"\"\"\nMCP_SERVER_URL=https://secure-server.company.com/sse\nMCP_API_KEY=your-secure-api-key\nMCP_TIMEOUT=30\n\"\"\"\n
"},{"location":"swarms/structs/agent_mcp/#performance-optimization","title":"Performance Optimization","text":""},{"location":"swarms/structs/agent_mcp/#agent-optimization","title":"Agent Optimization","text":"Performance Tips
\u26a1 Configuration\ud83d\udd04 Caching\ud83d\udcca Monitoring# Optimized agent settings\nagent = Agent(\n max_loops=2, # Limit iterations\n temperature=0.1, # Reduce randomness\n output_type=\"json\", # Structured output\n # Future: connection_pool_size=10\n)\n
# Implement response caching\nfrom functools import lru_cache\n\n@lru_cache(maxsize=100)\ndef cached_mcp_call(query):\n return agent.run(query)\n
import time\n\nstart_time = time.time()\nresult = agent.run(\"query\")\nexecution_time = time.time() - start_time\n\nprint(f\"Execution time: {execution_time:.2f}s\")\n
"},{"location":"swarms/structs/agent_mcp/#server-optimization","title":"Server Optimization","text":"Server Performance
from mcp.server.fastmcp import FastMCP\nimport asyncio\nfrom concurrent.futures import ThreadPoolExecutor\n\nmcp = FastMCP(\"optimized_server\")\n\n# Async tool with thread pool\n@mcp.tool(name=\"async_heavy_task\")\nasync def heavy_computation(data: str) -> dict:\n loop = asyncio.get_event_loop()\n with ThreadPoolExecutor() as executor:\n result = await loop.run_in_executor(\n executor, process_heavy_task, data\n )\n return result\n\ndef process_heavy_task(data):\n # CPU-intensive processing\n return {\"processed\": data}\n
"},{"location":"swarms/structs/agent_mcp/#future-roadmap","title":"Future Roadmap","text":""},{"location":"swarms/structs/agent_mcp/#upcoming-features","title":"Upcoming Features","text":"Development Timeline
1 Week2 Week3 WeekGet Involved
We welcome contributions to improve MCP integration:
Need Assistance?
\ud83d\udcda Documentation\ud83d\udcac Community\ud83d\udd27 DevelopmentCheat Sheet
# Basic setup\nfrom swarms import Agent\n\nagent = Agent(\n agent_name=\"Your-Agent\",\n mcp_url=\"http://localhost:8000/sse\",\n output_type=\"json\",\n max_loops=2\n)\n\n# Execute task\nresult = agent.run(\"Your query here\")\n\n# Common patterns\ncrypto_query = \"Get Bitcoin price\"\nanalysis_query = \"Analyze Tesla stock performance\"\nresearch_query = \"Research recent AI developments\"\n
"},{"location":"swarms/structs/agent_mcp/#conclusion","title":"Conclusion","text":"The MCP integration brings powerful external tool connectivity to Swarms agents, enabling them to access real-world data and services through a standardized protocol. While some advanced features are still in development, the current implementation provides robust functionality for most use cases.
Ready to Start?
Begin with the Quick Start section and explore the examples to see MCP integration in action. As new features become available, this documentation will be updated with the latest capabilities and best practices.
Stay Updated
Join our Discord community to stay informed about new MCP features and connect with other developers building amazing agent applications.
Quick Start
Get up and running with MCP integration in minutes
Examples
Explore real-world implementations and use cases
Configuration
Learn about all available configuration options
Troubleshooting
Solve common issues and optimize performance
The Agent class provides powerful built-in methods for facilitating communication and collaboration between multiple agents. These methods enable agents to talk to each other, pass information, and coordinate complex multi-agent workflows seamlessly.
"},{"location":"swarms/structs/agent_multi_agent_communication/#overview","title":"Overview","text":"Multi-agent communication is essential for building sophisticated AI systems where different agents need to collaborate, share information, and coordinate their actions. The Agent class provides several methods to facilitate this communication:
Method Purpose Use Casetalk_to
Direct communication between two agents Agent handoffs, expert consultation talk_to_multiple_agents
Concurrent communication with multiple agents Broadcasting, consensus building receive_message
Process incoming messages from other agents Message handling, task delegation send_agent_message
Send formatted messages to other agents Direct messaging, notifications"},{"location":"swarms/structs/agent_multi_agent_communication/#features","title":"Features","text":"Feature Description Direct Agent Communication Enable one-to-one conversations between agents Concurrent Multi-Agent Communication Broadcast messages to multiple agents simultaneously Message Processing Handle incoming messages with contextual formatting Error Handling Robust error handling for failed communications Threading Support Efficient concurrent processing using ThreadPoolExecutor Flexible Parameters Support for images, custom arguments, and kwargs"},{"location":"swarms/structs/agent_multi_agent_communication/#core-methods","title":"Core Methods","text":""},{"location":"swarms/structs/agent_multi_agent_communication/#talk_toagent-task-imgnone-args-kwargs","title":"talk_to(agent, task, img=None, *args, **kwargs)
","text":"Enables direct communication between the current agent and another agent. The method processes the task, generates a response, and then passes that response to the target agent.
Parameters:
Parameter Type Default Descriptionagent
Any
Required The target agent instance to communicate with task
str
Required The task or message to send to the agent img
str
None
Optional image path for multimodal communication *args
Any
- Additional positional arguments **kwargs
Any
- Additional keyword arguments Returns: Any
- The response from the target agent
Usage Example:
from swarms import Agent\n\n# Create two specialized agents\nresearcher = Agent(\n agent_name=\"Research-Agent\",\n system_prompt=\"You are a research specialist focused on gathering and analyzing information.\",\n max_loops=1,\n)\n\nanalyst = Agent(\n agent_name=\"Analysis-Agent\", \n system_prompt=\"You are an analytical specialist focused on interpreting research data.\",\n max_loops=1,\n)\n\n# Agent communication\nresearch_result = researcher.talk_to(\n agent=analyst,\n task=\"Analyze the market trends for renewable energy stocks\"\n)\n\nprint(research_result)\n
"},{"location":"swarms/structs/agent_multi_agent_communication/#talk_to_multiple_agentsagents-task-args-kwargs","title":"talk_to_multiple_agents(agents, task, *args, **kwargs)
","text":"Enables concurrent communication with multiple agents using ThreadPoolExecutor for efficient parallel processing.
Parameters:
Parameter Type Default Descriptionagents
List[Union[Any, Callable]]
Required List of agent instances to communicate with task
str
Required The task or message to send to all agents *args
Any
- Additional positional arguments **kwargs
Any
- Additional keyword arguments Returns: List[Any]
- List of responses from all agents (or None for failed communications)
Usage Example:
from swarms import Agent\n\n# Create multiple specialized agents\nagents = [\n Agent(\n agent_name=\"Financial-Analyst\",\n system_prompt=\"You are a financial analysis expert.\",\n max_loops=1,\n ),\n Agent(\n agent_name=\"Risk-Assessor\", \n system_prompt=\"You are a risk assessment specialist.\",\n max_loops=1,\n ),\n Agent(\n agent_name=\"Market-Researcher\",\n system_prompt=\"You are a market research expert.\",\n max_loops=1,\n )\n]\n\ncoordinator = Agent(\n agent_name=\"Coordinator-Agent\",\n system_prompt=\"You coordinate multi-agent analysis.\",\n max_loops=1,\n)\n\n# Broadcast to multiple agents\nresponses = coordinator.talk_to_multiple_agents(\n agents=agents,\n task=\"Evaluate the investment potential of Tesla stock\"\n)\n\n# Process responses\nfor i, response in enumerate(responses):\n if response:\n print(f\"Agent {i+1} Response: {response}\")\n else:\n print(f\"Agent {i+1} failed to respond\")\n
"},{"location":"swarms/structs/agent_multi_agent_communication/#receive_messageagent_name-task-args-kwargs","title":"receive_message(agent_name, task, *args, **kwargs)
","text":"Processes incoming messages from other agents with proper context formatting.
Parameters:
Parameter Type Default Descriptionagent_name
str
Required Name of the sending agent task
str
Required The message content *args
Any
- Additional positional arguments **kwargs
Any
- Additional keyword arguments Returns: Any
- The agent's response to the received message
Usage Example:
from swarms import Agent\n\n# Create an agent that can receive messages\nrecipient_agent = Agent(\n agent_name=\"Support-Agent\",\n system_prompt=\"You provide helpful support and assistance.\",\n max_loops=1,\n)\n\n# Simulate receiving a message from another agent\nresponse = recipient_agent.receive_message(\n agent_name=\"Customer-Service-Agent\",\n task=\"A customer is asking about refund policies. Can you help?\"\n)\n\nprint(response)\n
"},{"location":"swarms/structs/agent_multi_agent_communication/#send_agent_messageagent_name-message-args-kwargs","title":"send_agent_message(agent_name, message, *args, **kwargs)
","text":"Sends a formatted message from the current agent to a specified target agent.
Parameters:
Parameter Type Default Descriptionagent_name
str
Required Name of the target agent message
str
Required The message to send *args
Any
- Additional positional arguments **kwargs
Any
- Additional keyword arguments Returns: Any
- The result of sending the message
Usage Example:
from swarms import Agent\n\nsender_agent = Agent(\n agent_name=\"Notification-Agent\",\n system_prompt=\"You send notifications and updates.\",\n max_loops=1,\n)\n\n# Send a message to another agent\nresult = sender_agent.send_agent_message(\n agent_name=\"Task-Manager-Agent\",\n message=\"Task XYZ has been completed successfully\"\n)\n\nprint(result)\n
This comprehensive guide covers all aspects of multi-agent communication using the Agent class methods. These methods provide the foundation for building sophisticated multi-agent systems with robust communication capabilities.
"},{"location":"swarms/structs/agent_rearrange/","title":"AgentRearrange
Class","text":"The AgentRearrange
class represents a swarm of agents for rearranging tasks. It allows you to create a swarm of agents, add or remove agents from the swarm, and run the swarm to process tasks based on a specified flow pattern.
id
str
Unique identifier for the swarm name
str
Name of the swarm description
str
Description of the swarm's purpose agents
dict
Dictionary mapping agent names to Agent objects flow
str
Flow pattern defining task execution order max_loops
int
Maximum number of execution loops verbose
bool
Whether to enable verbose logging memory_system
BaseVectorDatabase
Memory system for storing agent interactions human_in_the_loop
bool
Whether human intervention is enabled custom_human_in_the_loop
Callable
Custom function for human intervention return_json
bool
Whether to return output in JSON format output_type
OutputType
Format of output (\"all\", \"final\", \"list\", or \"dict\") docs
List[str]
List of document paths to add to agent prompts doc_folder
str
Folder path containing documents to add to agent prompts swarm_history
dict
History of agent interactions"},{"location":"swarms/structs/agent_rearrange/#methods","title":"Methods","text":""},{"location":"swarms/structs/agent_rearrange/#__init__self-agents-listagent-none-flow-str-none-max_loops-int-1-verbose-bool-true","title":"__init__(self, agents: List[Agent] = None, flow: str = None, max_loops: int = 1, verbose: bool = True)
","text":"Initializes the AgentRearrange
object.
agents
List[Agent]
(optional) A list of Agent
objects. Defaults to None
. flow
str
(optional) The flow pattern of the tasks. Defaults to None
. max_loops
int
(optional) The maximum number of loops for the agents to run. Defaults to 1
. verbose
bool
(optional) Whether to enable verbose logging or not. Defaults to True
."},{"location":"swarms/structs/agent_rearrange/#add_agentself-agent-agent","title":"add_agent(self, agent: Agent)
","text":"Adds an agent to the swarm.
Parameter Type Descriptionagent
Agent
The agent to be added."},{"location":"swarms/structs/agent_rearrange/#remove_agentself-agent_name-str","title":"remove_agent(self, agent_name: str)
","text":"Removes an agent from the swarm.
Parameter Type Descriptionagent_name
str
The name of the agent to be removed."},{"location":"swarms/structs/agent_rearrange/#add_agentsself-agents-listagent","title":"add_agents(self, agents: List[Agent])
","text":"Adds multiple agents to the swarm.
Parameter Type Descriptionagents
List[Agent]
A list of Agent
objects."},{"location":"swarms/structs/agent_rearrange/#validate_flowself","title":"validate_flow(self)
","text":"Validates the flow pattern.
Raises:
ValueError
: If the flow pattern is incorrectly formatted or contains duplicate agent names.Returns:
bool
: True
if the flow pattern is valid.run(self, task: str = None, img: str = None, device: str = \"cpu\", device_id: int = 1, all_cores: bool = True, all_gpus: bool = False, *args, **kwargs)
","text":"Executes the agent rearrangement task with specified compute resources.
Parameter Type Descriptiontask
str
The task to execute img
str
Path to input image if required device
str
Computing device to use ('cpu' or 'gpu') device_id
int
ID of specific device to use all_cores
bool
Whether to use all CPU cores all_gpus
bool
Whether to use all available GPUs Returns:
str
: The final processed task.batch_run(self, tasks: List[str], img: Optional[List[str]] = None, batch_size: int = 10, device: str = \"cpu\", device_id: int = None, all_cores: bool = True, all_gpus: bool = False, *args, **kwargs)
","text":"Process multiple tasks in batches.
Parameter Type Descriptiontasks
List[str]
List of tasks to process img
List[str]
Optional list of images corresponding to tasks batch_size
int
Number of tasks to process simultaneously device
str
Computing device to use device_id
int
Specific device ID if applicable all_cores
bool
Whether to use all CPU cores all_gpus
bool
Whether to use all available GPUs"},{"location":"swarms/structs/agent_rearrange/#concurrent_runself-tasks-liststr-img-optionalliststr-none-max_workers-optionalint-none-device-str-cpu-device_id-int-none-all_cores-bool-true-all_gpus-bool-false-args-kwargs","title":"concurrent_run(self, tasks: List[str], img: Optional[List[str]] = None, max_workers: Optional[int] = None, device: str = \"cpu\", device_id: int = None, all_cores: bool = True, all_gpus: bool = False, *args, **kwargs)
","text":"Process multiple tasks concurrently using ThreadPoolExecutor.
Parameter Type Descriptiontasks
List[str]
List of tasks to process img
List[str]
Optional list of images corresponding to tasks max_workers
int
Maximum number of worker threads device
str
Computing device to use device_id
int
Specific device ID if applicable all_cores
bool
Whether to use all CPU cores all_gpus
bool
Whether to use all available GPUs"},{"location":"swarms/structs/agent_rearrange/#documentation-for-rearrange-function","title":"Documentation for rearrange
Function","text":"
The rearrange
function is a helper function that rearranges the given list of agents based on the specified flow.
agents
List[Agent]
The list of agents to be rearranged. flow
str
The flow used for rearranging the agents. task
str
(optional) The task to be performed during rearrangement. Defaults to None
. *args
- Additional positional arguments. **kwargs
- Additional keyword arguments."},{"location":"swarms/structs/agent_rearrange/#returns","title":"Returns","text":"The result of running the agent system with the specified task.
"},{"location":"swarms/structs/agent_rearrange/#example","title":"Example","text":"agents = [agent1, agent2, agent3]\nflow = \"agent1 -> agent2, agent3\"\ntask = \"Perform a task\"\nrearrange(agents, flow, task)\n
"},{"location":"swarms/structs/agent_rearrange/#example-usage","title":"Example Usage","text":"Here's an example of how to use the AgentRearrange
class and the rearrange
function:
from swarms import Agent, AgentRearrange, Anthropic\n\n# Initialize the director agent\ndirector = Agent(\n agent_name=\"Accounting Director\",\n system_prompt=\"Directs the accounting tasks for the workers\",\n llm=Anthropic(),\n max_loops=1,\n dashboard=False,\n streaming_on=True,\n verbose=True,\n stopping_token=\"<DONE>\",\n state_save_file_type=\"json\",\n saved_state_path=\"accounting_director.json\",\n)\n\n# Initialize worker 1\nworker1 = Agent(\n agent_name=\"Accountant 1\",\n system_prompt=\"Processes financial transactions and prepares financial statements\",\n llm=Anthropic(),\n max_loops=1,\n dashboard=False,\n streaming_on=True,\n verbose=True,\n stopping_token=\"<DONE>\",\n state_save_file_type=\"json\",\n saved_state_path=\"accountant1.json\",\n)\n\n# Initialize worker 2\nworker2 = Agent(\n agent_name=\"Accountant 2\",\n system_prompt=\"Performs audits and ensures compliance with financial regulations\",\n llm=Anthropic(),\n max_loops=1,\n dashboard=False,\n streaming_on=True,\n verbose=True,\n stopping_token=\"<DONE>\",\n state_save_file_type=\"json\",\n saved_state_path=\"accountant2.json\",\n)\n\n# Create a list of agents\nagents = [director, worker1, worker2]\n\n# Define the flow pattern\nflow = \"Accounting Director -> Accountant 1 -> Accountant 2\"\n\n# Using AgentRearrange class\nagent_system = AgentRearrange(agents=agents, flow=flow)\noutput = agent_system.run(\"Process monthly financial statements\")\nprint(output)\n
In this example, we first initialize three agents: director
, worker1
, and worker2
. Then, we create a list of these agents and define the flow pattern \"Accounting Director -> Accountant 1 -> Accountant 2\"
.
We can use the AgentRearrange
class by creating an instance of it with the list of agents and the flow pattern. We then call the run
method with the initial task, and it will execute the agents in the specified order, passing the output of one agent as the input to the next agent.
Alternatively, we can use the rearrange
function by passing the list of agents, the flow pattern, and the initial task as arguments.
Both the AgentRearrange
class and the rearrange
function will return the final output after processing the task through the agents according to the specified flow pattern.
The AgentRearrange
class includes error handling mechanisms to validate the flow pattern. If the flow pattern is incorrectly formatted or contains duplicate agent names, a ValueError
will be raised with an appropriate error message.
# Invalid flow pattern\ninvalid_flow = \"Director->Worker1,Worker2->Worker3\"\nagent_system = AgentRearrange(agents=agents, flow=invalid_flow)\noutput = agent_system.run(\"Some task\")\n
This will raise a ValueError
with the message \"Agent 'Worker3' is not registered.\"
.
The AgentRearrange
class supports both parallel and sequential processing of tasks based on the specified flow pattern. If the flow pattern includes multiple agents separated by commas (e.g., \"agent1, agent2\"
), the agents will be executed in parallel, and their outputs will be concatenated with a semicolon (;
). If the flow pattern includes a single agent, it will be executed sequentially.
parallel_flow = \"Worker1, Worker2 -> Director\"
sequential_flow = \"Worker1 -> Worker2 -> Director\"
In the parallel_flow
example, Worker1
and Worker2
will be executed in parallel, and their outputs will be concatenated and passed to Director
. In the sequential_flow
example, Worker1
will be executed first, and its output will be passed to Worker2
, and then the output of Worker2
will be passed to Director
.
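The routing semantics described above can be sketched in plain Python. This is an illustrative stand-in that uses ordinary callables as agents and the documented separators (`->` for sequential stages, `,` for parallel agents, `;` to join parallel outputs); it is not the library's actual implementation:

```python
# Illustrative sketch of the documented flow semantics: "->" separates
# sequential stages, "," marks agents that run in parallel within a
# stage, and parallel outputs are concatenated with a semicolon.
def run_flow(flow: str, agents: dict, task: str) -> str:
    result = task
    for stage in flow.split("->"):
        names = [name.strip() for name in stage.split(",")]
        if len(names) > 1:
            # Parallel stage: each agent gets the same input; outputs
            # are joined with ";" before moving to the next stage.
            result = ";".join(agents[name](result) for name in names)
        else:
            # Sequential stage: one agent transforms the running result.
            result = agents[names[0]](result)
    return result

# Plain callables stand in for agents here.
agents = {
    "Worker1": lambda t: f"w1({t})",
    "Worker2": lambda t: f"w2({t})",
    "Director": lambda t: f"d({t})",
}
print(run_flow("Worker1, Worker2 -> Director", agents, "x"))
# d(w1(x);w2(x))
```

Tracing the parallel flow: `Worker1` and `Worker2` both receive `"x"`, their outputs are joined with `;`, and the combined string is handed to `Director`.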
The AgentRearrange
class includes logging capabilities using the loguru
library. If verbose
is set to True
during initialization, a log file named agent_rearrange.log
will be created, and log messages will be written to it. You can use this log file to track the execution of the agents and any potential issues or errors that may occur.
2023-05-08 10:30:15.456 | INFO | agent_rearrange:__init__:34 - Adding agent Director to the swarm.\n2023-05-08 10:30:15.457 | INFO | agent_rearrange:__init__:34 - Adding agent Worker1 to the swarm.\n2023-05-08 10:30:15.457 | INFO | agent_rearrange:__init__:34 - Adding agent Worker2 to the swarm.\n2023-05-08 10:30:15.458 | INFO | agent_rearrange:run:118 - Running agents in parallel: ['Worker1', 'Worker2']\n2023-05-08 10:30:15.459 | INFO | agent_rearrange:run:121 - Running agents sequentially: ['Director']\n
"},{"location":"swarms/structs/agent_rearrange/#additional-parameters","title":"Additional Parameters","text":"The AgentRearrange
class also accepts additional parameters that can be passed to the run
method using *args
and **kwargs
. These parameters will be forwarded to the individual agents during execution.
agent_system = AgentRearrange(agents=agents, flow=flow)
output = agent_system.run(\"Some task\", max_tokens=200, temperature=0.7)
In this example, the max_tokens
and temperature
parameters will be passed to each agent during execution.
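This fan-out of keyword arguments can be sketched with plain callables. Both `run_swarm` and `make_agent` below are hypothetical stand-ins written to show how forwarded `**kwargs` reach every agent; they are not part of the library:

```python
# Sketch of run-time kwargs being forwarded to each agent in a swarm.
def run_swarm(agents, task, **kwargs):
    # Every agent receives the same task plus the forwarded kwargs.
    return [agent(task, **kwargs) for agent in agents]

# Hypothetical stand-in agents that record what they received.
def make_agent(name):
    def agent(task, max_tokens=None, temperature=None):
        return f"{name}:{task}:{max_tokens}:{temperature}"
    return agent

outputs = run_swarm(
    [make_agent("a"), make_agent("b")],
    "Some task",
    max_tokens=200,
    temperature=0.7,
)
print(outputs)  # ['a:Some task:200:0.7', 'b:Some task:200:0.7']
```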
The AgentRearrange
class and the rearrange
function can be customized and extended to suit specific use cases. For example, you can create custom agents by inheriting from the Agent
class and implementing custom logic for task processing. You can then add these custom agents to the swarm and define the flow pattern accordingly.
Additionally, you can modify the run
method of the AgentRearrange
class to implement custom logic for task processing and agent interaction.
It's important to note that the AgentRearrange
class and the rearrange
function rely on the individual agents to process tasks correctly. The quality of the output will depend on the capabilities and configurations of the agents used in the swarm. Additionally, the AgentRearrange
class does not provide any mechanisms for task prioritization or load balancing among the agents.
The AgentRearrange
class and the rearrange
function provide a flexible and extensible framework for orchestrating swarms of agents to process tasks based on a specified flow pattern. By combining the capabilities of individual agents, you can create complex workflows and leverage the strengths of different agents to tackle various tasks efficiently.
While the current implementation offers basic functionality for agent rearrangement, there is room for future improvements and customizations to enhance the system's capabilities and cater to more specific use cases.
Whether you're working on natural language processing tasks, data analysis, or any other domain where agent-based systems can be beneficial, the AgentRearrange
class and the rearrange
function provide a solid foundation for building and experimenting with swarm-based solutions.
The AgentRegistry
class is designed to manage a collection of agents, providing methods for adding, deleting, updating, and querying agents. This class ensures thread-safe operations on the registry, making it suitable for concurrent environments. Additionally, the AgentModel
class is a Pydantic model used for validating and storing agent information.
agent_id
str
The unique identifier for the agent. agent
Agent
The agent object."},{"location":"swarms/structs/agent_registry/#agentregistry","title":"AgentRegistry","text":"Attribute Type Description agents
Dict[str, AgentModel]
A dictionary mapping agent IDs to AgentModel
instances. lock
Lock
A threading lock for thread-safe operations."},{"location":"swarms/structs/agent_registry/#methods","title":"Methods","text":""},{"location":"swarms/structs/agent_registry/#__init__self","title":"__init__(self)
","text":"Initializes the AgentRegistry
object.
registry = AgentRegistry()\n
add(self, agent_id: str, agent: Agent) -> None
","text":"Adds a new agent to the registry.
agent_id
(str
): The unique identifier for the agent.agent
(Agent
): The agent to add.
Raises:
ValueError
: If the agent ID already exists in the registry.ValidationError
: If the input data is invalid.
Usage Example:
agent = Agent(agent_name=\"Agent1\")\nregistry.add(\"agent_1\", agent)\n
delete(self, agent_id: str) -> None
","text":"Deletes an agent from the registry.
agent_id
(str
): The unique identifier for the agent to delete.
Raises:
KeyError
: If the agent ID does not exist in the registry.
Usage Example:
registry.delete(\"agent_1\")\n
update_agent(self, agent_id: str, new_agent: Agent) -> None
","text":"Updates an existing agent in the registry.
agent_id
(str
): The unique identifier for the agent to update.new_agent
(Agent
): The new agent to replace the existing one.
Raises:
KeyError
: If the agent ID does not exist in the registry.ValidationError
: If the input data is invalid.
Usage Example:
new_agent = Agent(agent_name=\"UpdatedAgent\")\nregistry.update_agent(\"agent_1\", new_agent)\n
get(self, agent_id: str) -> Agent
","text":"Retrieves an agent from the registry.
agent_id
(str
): The unique identifier for the agent to retrieve.
Returns:
Agent
: The agent associated with the given agent ID.
Raises:
KeyError
: If the agent ID does not exist in the registry.
Usage Example:
agent = registry.get(\"agent_1\")\n
list_agents(self) -> List[str]
","text":"Lists all agent identifiers in the registry.
List[str]
: A list of all agent identifiers.
Usage Example:
agent_ids = registry.list_agents()\n
query(self, condition: Optional[Callable[[Agent], bool]] = None) -> List[Agent]
","text":"Queries agents based on a condition.
condition
(Optional[Callable[[Agent], bool]]
): A function that takes an agent and returns a boolean indicating whether the agent meets the condition. Defaults to None
.
Returns:
List[Agent]
: A list of agents that meet the condition.
Usage Example:
def is_active(agent):\n return agent.is_active\n\nactive_agents = registry.query(is_active)\n
find_agent_by_name(self, agent_name: str) -> Agent
","text":"Finds an agent by its name.
agent_name
(str
): The name of the agent to find.
Returns:
Agent
: The agent with the specified name.
Usage Example:
agent = registry.find_agent_by_name(\"Agent1\")\n
from swarms.structs.agent_registry import AgentRegistry\nfrom swarms import Agent, OpenAIChat, Anthropic\n\n# Initialize the agents\ngrowth_agent1 = Agent(\n agent_name=\"Marketing Specialist\",\n system_prompt=\"You're the marketing specialist, your purpose is to help companies grow by improving their marketing strategies!\",\n agent_description=\"Improve a company's marketing strategies!\",\n llm=OpenAIChat(),\n max_loops=\"auto\",\n autosave=True,\n dashboard=False,\n verbose=True,\n streaming_on=True,\n saved_state_path=\"marketing_specialist.json\",\n stopping_token=\"Stop!\",\n interactive=True,\n context_length=1000,\n)\n\ngrowth_agent2 = Agent(\n agent_name=\"Sales Specialist\",\n system_prompt=\"You're the sales specialist, your purpose is to help companies grow by improving their sales strategies!\",\n agent_description=\"Improve a company's sales strategies!\",\n llm=Anthropic(),\n max_loops=\"auto\",\n autosave=True,\n dashboard=False,\n verbose=True,\n streaming_on=True,\n saved_state_path=\"sales_specialist.json\",\n stopping_token=\"Stop!\",\n interactive=True,\n context_length=1000,\n)\n\ngrowth_agent3 = Agent(\n agent_name=\"Product Development Specialist\",\n system_prompt=\"You're the product development specialist, your purpose is to help companies grow by improving their product development strategies!\",\n agent_description=\"Improve a company's product development strategies!\",\n llm=Anthropic(),\n max_loops=\"auto\",\n autosave=True,\n dashboard=False,\n verbose=True,\n streaming_on=True,\n saved_state_path=\"product_development_specialist.json\",\n stopping_token=\"Stop!\",\n interactive=True,\n context_length=1000,\n)\n\ngrowth_agent4 = Agent(\n agent_name=\"Customer Service Specialist\",\n system_prompt=\"You're the customer service specialist, your purpose is to help companies grow by improving their customer service strategies!\",\n agent_description=\"Improve a company's customer service strategies!\",\n llm=OpenAIChat(),\n 
max_loops=\"auto\",\n autosave=True,\n dashboard=False,\n verbose=True,\n streaming_on=True,\n saved_state_path=\"customer_service_specialist.json\",\n stopping_token=\"Stop!\",\n interactive=True,\n context_length=1000,\n)\n\n# Initialize the registry\nregistry = AgentRegistry()\n\n# Register the agents\nregistry.add(\"Marketing Specialist\", growth_agent1)\nregistry.add(\"Sales Specialist\", growth_agent2)\nregistry.add(\"Product Development Specialist\", growth_agent3)\nregistry.add(\"Customer Service Specialist\", growth_agent4)\n
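The thread-safe registry pattern described above (a dictionary of agents guarded by a `Lock`) can be sketched in a few lines. This is a minimal illustration of the documented `add`/`delete`/`get` semantics, not the real class, which additionally validates entries with the `AgentModel` Pydantic model:

```python
import threading

class MiniRegistry:
    """Minimal sketch of a thread-safe registry keyed by agent ID."""

    def __init__(self):
        self._agents = {}
        self._lock = threading.Lock()

    def add(self, agent_id, agent):
        with self._lock:
            if agent_id in self._agents:
                raise ValueError(f"Agent ID {agent_id} already exists.")
            self._agents[agent_id] = agent

    def delete(self, agent_id):
        with self._lock:
            del self._agents[agent_id]  # raises KeyError if missing

    def get(self, agent_id):
        with self._lock:
            return self._agents[agent_id]  # raises KeyError if missing

    def list_agents(self):
        with self._lock:
            return list(self._agents)
```

Acquiring the lock as a context manager guarantees it is released even if a lookup raises, which is what makes the registry safe to share across worker threads.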
"},{"location":"swarms/structs/agent_registry/#logging-and-error-handling","title":"Logging and Error Handling","text":"Each method in the AgentRegistry
class includes logging to track the execution flow and captures errors to provide detailed information in case of failures. This is crucial for debugging and ensuring smooth operation of the registry. The report_error
function is used for reporting exceptions that occur during method execution.
AgentRegistry
are properly initialized and configured to handle the tasks they will receive.lock
attribute to ensure thread-safe operations when accessing or modifying the registry.The swarms.structs library provides a collection of classes for representing artifacts and their attributes. This documentation will provide an overview of the Artifact
class, its attributes, functionality, and usage examples.
The Artifact
class represents an artifact and its attributes. It inherits from the BaseModel
class and includes the following attributes:
artifact_id (str)
: Id of the artifact.file_name (str)
: Filename of the artifact.relative_path (str, optional)
: Relative path of the artifact in the agent's workspace.These attributes are crucial for identifying and managing different artifacts within a given context.
"},{"location":"swarms/structs/artifact/#class-definition","title":"Class Definition","text":"The Artifact
class can be defined as follows:
class Artifact(BaseModel):\n \"\"\"\n Represents an artifact.\n\n Attributes:\n artifact_id (str): Id of the artifact.\n file_name (str): Filename of the artifact.\n relative_path (str, optional): Relative path of the artifact in the agent's workspace.\n \"\"\"\n\n artifact_id: str = Field(\n ...,\n description=\"Id of the artifact\",\n example=\"b225e278-8b4c-4f99-a696-8facf19f0e56\",\n )\n file_name: str = Field(\n ..., description=\"Filename of the artifact\", example=\"main.py\"\n )\n relative_path: Optional[str] = Field(\n None,\n description=(\"Relative path of the artifact in the agent's workspace\"),\n example=\"python/code/\",\n )\n
The Artifact
class defines the mandatory and optional attributes and provides corresponding descriptions along with example values.
The Artifact
class encapsulates the information and attributes representing an artifact. It provides a structured and organized way to manage artifacts within a given context.
To create an instance of the Artifact
class, you can simply initialize it with the required attributes. Here's an example:
from swarms.structs import Artifact\n\nartifact_instance = Artifact(\n artifact_id=\"b225e278-8b4c-4f99-a696-8facf19f0e56\",\n file_name=\"main.py\",\n relative_path=\"python/code/\",\n)\n
In this example, we create an instance of the Artifact
class with the specified artifact details.
You can access the attributes of the Artifact
instance using dot notation. Here's how you can access the file name of the artifact:
print(artifact_instance.file_name)\n# Output: \"main.py\"\n
"},{"location":"swarms/structs/artifact/#example-3-handling-optional-attributes","title":"Example 3: Handling optional attributes","text":"If the relative_path
attribute is not provided during artifact creation, it will default to None
. Here's an example:
artifact_instance_no_path = Artifact(\n artifact_id=\"c280s347-9b7d-3c68-m337-7abvf50j23k\", file_name=\"script.js\"\n)\n\nprint(artifact_instance_no_path.relative_path)\n# Output: None\n
By providing default values for optional attributes, the Artifact
class allows flexibility in defining artifact instances.
The Artifact
class represents a powerful and flexible means of handling various artifacts with different attributes. By utilizing this class, users can organize, manage, and streamline their artifacts with ease.
For further details and references related to the swarms.structs library and the Artifact
class, refer to the official documentation.
This comprehensive documentation provides an in-depth understanding of the Artifact
class, its attributes, functionality, and usage examples. By following the detailed examples and explanations, developers can effectively leverage the capabilities of the Artifact
class within their projects.
The Agent Builder is a powerful class that automatically builds and manages swarms of AI agents. It provides a flexible and extensible framework for creating, coordinating, and executing multiple AI agents working together to accomplish complex tasks.
"},{"location":"swarms/structs/auto_agent_builder/#overview","title":"Overview","text":"The Agent Builder uses a boss agent to delegate work and create new specialized agents as needed. It's designed to be production-ready with robust error handling, logging, and configuration options.
"},{"location":"swarms/structs/auto_agent_builder/#architecture","title":"Architecture","text":"graph TD\n A[Agent Builder] --> B[Configuration]\n A --> C[Agent Creation]\n A --> D[Task Execution]\n\n B --> B1[Name]\n B --> B2[Description]\n B --> B3[Model Settings]\n\n C --> C1[Agent Pool]\n C --> C2[Agent Registry]\n C --> C3[Agent Configuration]\n\n D --> D1[Task Distribution]\n D --> D2[Result Collection]\n D --> D3[Error Handling]\n\n C1 --> E[Specialized Agents]\n C2 --> E\n C3 --> E\n\n E --> F[Task Execution]\n F --> G[Results]
"},{"location":"swarms/structs/auto_agent_builder/#class-structure","title":"Class Structure","text":""},{"location":"swarms/structs/auto_agent_builder/#agentsbuilder-class","title":"AgentsBuilder Class","text":"Parameter Type Default Description name str \"swarm-creator-01\" The name of the swarm description str \"This is a swarm that creates swarms\" A description of the swarm's purpose verbose bool True Whether to output detailed logs max_loops int 1 Maximum number of execution loops model_name str \"gpt-4o\" The model to use for agent creation return_dictionary bool True Whether to return results as a dictionary system_prompt str BOSS_SYSTEM_PROMPT The system prompt for the boss agent"},{"location":"swarms/structs/auto_agent_builder/#methods","title":"Methods","text":"Method Description Parameters Returns run Run the swarm on a given task task: str, image_url: str = None, *args, **kwargs Tuple[List[Agent], int] _create_agents Create necessary agents for a task task: str, *args, **kwargs List[Agent] build_agent Build a single agent with specifications agent_name: str, agent_description: str, agent_system_prompt: str, max_loops: int = 1, model_name: str = \"gpt-4o\", dynamic_temperature_enabled: bool = True, auto_generate_prompt: bool = False, role: str = \"worker\", max_tokens: int = 8192, temperature: float = 0.5 Agent"},{"location":"swarms/structs/auto_agent_builder/#enterprise-use-cases","title":"Enterprise Use Cases","text":""},{"location":"swarms/structs/auto_agent_builder/#1-customer-service-automation","title":"1. Customer Service Automation","text":"Create specialized agents for different aspects of customer service
Handle ticket routing, response generation, and escalation
Maintain consistent service quality across channels
Build agents for data collection, cleaning, analysis, and visualization
Automate complex data processing workflows
Generate insights and reports automatically
Deploy agents for content research, writing, editing, and publishing
Maintain brand consistency across content
Automate content scheduling and distribution
Create agents for workflow automation
Handle document processing and routing
Manage approval chains and notifications
Build agents for literature review, experiment design, and data collection
Automate research documentation and reporting
Facilitate collaboration between research teams
from swarms import AgentsBuilder\n\n# Initialize the agent builder\nagents_builder = AgentsBuilder(\n name=\"enterprise-automation\",\n description=\"Enterprise workflow automation swarm\",\n verbose=True\n)\n\n# Define a use-case for building agents\ntask = \"Develop a swarm of agents to automate the generation of personalized marketing strategies based on customer data and market trends\"\n\n# Run the swarm (run returns a tuple of the created agents and an\n# integer, per the Methods table above)\nagents, _ = agents_builder.run(task)\n\n# Access results\nprint(agents)\n
"},{"location":"swarms/structs/auto_agent_builder/#best-practices","title":"Best Practices","text":"Log all errors for debugging and monitoring
Resource Management
Use connection pooling for database operations
Security
Follow least privilege principle for agent permissions
Monitoring and Logging
Set up alerts for critical failures
Scalability
graph LR\n A[External System] --> B[API Gateway]\n B --> C[Agent Builder]\n C --> D[Agent Pool]\n D --> E[Specialized Agents]\n E --> F[External Services]\n\n subgraph \"Monitoring\"\n G[Logs]\n H[Metrics]\n I[Alerts]\n end\n\n C --> G\n C --> H\n C --> I
"},{"location":"swarms/structs/auto_agent_builder/#performance-considerations","title":"Performance Considerations","text":"Optimize agent creation and destruction
Task Distribution
Handle task timeouts and retries
Resource Optimization
Common issues and solutions:
Review system prompts
Performance Issues
Review agent configurations
Integration Problems
The AutoSwarm
class represents a swarm of agents that can be created and managed automatically. This class leverages the AutoSwarmRouter
to route tasks to appropriate swarms and supports custom preprocessing, routing, and postprocessing of tasks. It is designed to handle complex workflows efficiently.
name
Optional[str]
None
The name of the swarm. description
Optional[str]
None
The description of the swarm. verbose
bool
False
Whether to enable verbose mode. custom_params
Optional[Dict[str, Any]]
None
Custom parameters for the swarm. custom_preprocess
Optional[Callable]
None
Custom preprocessing function for tasks. custom_postprocess
Optional[Callable]
None
Custom postprocessing function for task results. custom_router
Optional[Callable]
None
Custom routing function for tasks. max_loops
int
1
The maximum number of loops to run the workflow."},{"location":"swarms/structs/auto_swarm/#attributes_1","title":"Attributes","text":"Attribute Type Description name
Optional[str]
The name of the swarm. description
Optional[str]
The description of the swarm. verbose
bool
Whether to enable verbose mode. custom_params
Optional[Dict[str, Any]]
Custom parameters for the swarm. custom_preprocess
Optional[Callable]
Custom preprocessing function for tasks. custom_postprocess
Optional[Callable]
Custom postprocessing function for task results. custom_router
Optional[Callable]
Custom routing function for tasks. max_loops
int
The maximum number of loops to run the workflow. router
AutoSwarmRouter
The router for managing task routing."},{"location":"swarms/structs/auto_swarm/#methods","title":"Methods","text":""},{"location":"swarms/structs/auto_swarm/#init_logging","title":"init_logging","text":"Initializes logging for the AutoSwarm
.
Examples:
swarm = AutoSwarm(name=\"example_swarm\", verbose=True)\nswarm.init_logging()\n
"},{"location":"swarms/structs/auto_swarm/#run","title":"run","text":"Runs the swarm simulation.
Arguments:
Parameter Type Default Descriptiontask
str
None
The task to be executed. *args
Additional arguments. **kwargs
Additional keyword arguments. Returns:
Return Type DescriptionAny
The result of the executed task. Raises:
Exception
: If any error occurs during task execution.Examples:
swarm = AutoSwarm(name=\"example_swarm\", max_loops=3)\nresult = swarm.run(task=\"example_task\")\nprint(result)\n
"},{"location":"swarms/structs/auto_swarm/#list_all_swarms","title":"list_all_swarms","text":"Lists all available swarms and their descriptions.
Examples:
swarm = AutoSwarm(name=\"example_swarm\", max_loops=3)\nswarm.list_all_swarms()\n# Output:\n# INFO: Swarm Name: swarm1 || Swarm Description: Description of swarm1\n# INFO: Swarm Name: swarm2 || Swarm Description: Description of swarm2\n
"},{"location":"swarms/structs/auto_swarm/#additional-examples","title":"Additional Examples","text":""},{"location":"swarms/structs/auto_swarm/#example-1-custom-preprocessing-and-postprocessing","title":"Example 1: Custom Preprocessing and Postprocessing","text":"def custom_preprocess(task, *args, **kwargs):\n # Custom preprocessing logic\n task = task.upper()\n return task, args, kwargs\n\ndef custom_postprocess(result):\n # Custom postprocessing logic\n return result.lower()\n\nswarm = AutoSwarm(\n name=\"example_swarm\",\n custom_preprocess=custom_preprocess,\n custom_postprocess=custom_postprocess,\n max_loops=3\n)\n\n# Running a task with custom preprocessing and postprocessing\nresult = swarm.run(task=\"example_task\")\nprint(result) # Output will be the processed result\n
"},{"location":"swarms/structs/auto_swarm/#example-2-custom-router-function","title":"Example 2: Custom Router Function","text":"def custom_router(swarm, task, *args, **kwargs):\n # Custom routing logic\n if \"specific\" in task:\n return swarm.router.swarm_dict[\"specific_swarm\"].run(task, *args, **kwargs)\n return swarm.router.swarm_dict[\"default_swarm\"].run(task, *args, **kwargs)\n\nswarm = AutoSwarm(\n name=\"example_swarm\",\n custom_router=custom_router,\n max_loops=3\n)\n\n# Running a task with custom routing\nresult = swarm.run(task=\"specific_task\")\nprint(result) # Output will be the result of the routed task\n
"},{"location":"swarms/structs/auto_swarm/#example-3-verbose-mode","title":"Example 3: Verbose Mode","text":"swarm = AutoSwarm(\n name=\"example_swarm\",\n verbose=True,\n max_loops=3\n)\n\n# Running a task with verbose mode enabled\nresult = swarm.run(task=\"example_task\")\n# Output will include detailed logs of the task execution process\n
"},{"location":"swarms/structs/auto_swarm/#full-example-4","title":"Full Example 4:","text":"First create a class with BaseSwarm -> Then wrap it in the router -> then pass that to the AutoSwarm
from swarms import BaseSwarm, AutoSwarmRouter, AutoSwarm\n\n\nclass FinancialReportSummarization(BaseSwarm):\n def __init__(self, name: str = None, *args, **kwargs):\n super().__init__()\n\n def run(self, task, *args, **kwargs):\n return task\n\n\n# Add swarm to router\nrouter = AutoSwarmRouter(swarms=[FinancialReportSummarization])\n\n# Create AutoSwarm Instance\nautoswarm = AutoSwarm(\n name=\"kyegomez/FinancialReportSummarization\",\n description=\"A swarm for financial document summarizing and generation\",\n verbose=True,\n router=router,\n)\n\n# Run the AutoSwarm\nautoswarm.run(\"Analyze these documents and give me a summary:\")\n
"},{"location":"swarms/structs/auto_swarm/#summary","title":"Summary","text":"The AutoSwarm
class provides a robust framework for managing and executing tasks using a swarm of agents. With customizable preprocessing, routing, and postprocessing functions, it is highly adaptable to various workflows and can handle complex task execution scenarios efficiently. The integration with AutoSwarmRouter
enhances its flexibility, making it a powerful tool for dynamic task management.
The AutoSwarmBuilder
is a powerful class that automatically builds and manages swarms of AI agents to accomplish complex tasks. It uses a boss agent to delegate work and create specialized agents as needed.
The AutoSwarmBuilder is designed to:
Automatically create and coordinate multiple AI agents
Delegate tasks to specialized agents
Manage communication between agents
Handle complex workflows through a swarm router
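The boss-agent delegation idea above can be sketched in plain Python. This is an illustrative analogue only; the specialist functions and the keyword-based routing rule below are hypothetical stand-ins, not the Swarms API.

```python
# Illustrative sketch of boss-agent delegation: a "boss" inspects the
# task and hands it to the most suitable specialist. The specialists
# and routing rule here are hypothetical, not part of Swarms.

def financial_agent(task: str) -> str:
    return f"[finance] analyzed: {task}"

def writing_agent(task: str) -> str:
    return f"[writer] drafted: {task}"

def boss_delegate(task: str) -> str:
    """Route a task to a specialist based on its content."""
    if "report" in task or "analysis" in task:
        return financial_agent(task)
    return writing_agent(task)

print(boss_delegate("quarterly revenue analysis"))
```

In the real AutoSwarmBuilder the boss is itself an agent and the specialists are created dynamically, but the routing shape is the same: inspect the task, pick or create a worker, return its result.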
Executes the swarm on a given task.
Parameters:
task
(str): The task to execute
*args
: Additional positional arguments
**kwargs
: Additional keyword arguments
Returns:
Creates specialized agents for a given task.
Parameters:
task
(str): The task to create agents for. Returns:
Builds a single agent with specified parameters.
Parameters: - agent_name
(str): Name of the agent
agent_description
(str): Description of the agent
agent_system_prompt
(str): System prompt for the agent
Returns:
Executes the swarm on multiple tasks.
Parameters:
tasks
(List[str]): List of tasks to execute. Returns:
from swarms.structs.auto_swarm_builder import AutoSwarmBuilder\n\n# Initialize the swarm builder\nswarm = AutoSwarmBuilder(\n name=\"Content Creation Swarm\",\n description=\"A swarm specialized in creating high-quality content\"\n)\n\n# Run the swarm on a content creation task\nresult = swarm.run(\n \"Create a comprehensive blog post about artificial intelligence in healthcare, \"\n \"including current applications, future trends, and ethical considerations.\"\n)\n
"},{"location":"swarms/structs/auto_swarm_builder/#example-2-data-analysis-swarm","title":"Example 2: Data Analysis Swarm","text":"from swarms.structs.auto_swarm_builder import AutoSwarmBuilder\n\n# Initialize the swarm builder\nswarm = AutoSwarmBuilder(\n name=\"Data Analysis Swarm\",\n description=\"A swarm specialized in data analysis and visualization\"\n)\n\n# Run the swarm on a data analysis task\nresult = swarm.run(\n \"Analyze the provided sales data and create a detailed report with visualizations \"\n \"showing trends, patterns, and recommendations for improvement.\"\n)\n
"},{"location":"swarms/structs/auto_swarm_builder/#example-3-batch-processing-multiple-tasks","title":"Example 3: Batch Processing Multiple Tasks","text":"from swarms.structs.auto_swarm_builder import AutoSwarmBuilder\n\n# Initialize the swarm builder\nswarm = AutoSwarmBuilder(\n name=\"Multi-Task Swarm\",\n description=\"A swarm capable of handling multiple diverse tasks\"\n)\n\n# Define multiple tasks\ntasks = [\n \"Create a marketing strategy for a new product launch\",\n \"Analyze customer feedback and generate improvement suggestions\",\n \"Develop a project timeline for the next quarter\"\n]\n\n# Run the swarm on all tasks\nresults = swarm.batch_run(tasks)\n
"},{"location":"swarms/structs/auto_swarm_builder/#best-practices","title":"Best Practices","text":"Task Definition
Provide clear, specific task descriptions
Include any relevant context or constraints
Specify expected output format if needed
Configuration
Set appropriate max_loops
based on task complexity
Use verbose=True
during development for debugging
Consider using random_models=True
for diverse agent capabilities
Error Handling
The class includes comprehensive error handling
All methods include try/except blocks with detailed logging
Errors are propagated with full stack traces for debugging
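The retry-and-log behavior described above can be sketched with the standard library. This is an illustrative pattern, not the AutoSwarmBuilder's actual implementation; `run_with_retries` and `flaky` are hypothetical names.

```python
# Sketch of retry-with-logging: each failed attempt logs the full stack
# trace, and the final failure is re-raised to the caller.
import logging
import traceback

logging.basicConfig(level=logging.INFO)

def run_with_retries(fn, task, max_retries=3):
    for attempt in range(1, max_retries + 1):
        try:
            return fn(task)
        except Exception:
            # Log the full stack trace for debugging, then retry or re-raise.
            logging.error("Attempt %d failed:\n%s", attempt, traceback.format_exc())
            if attempt == max_retries:
                raise

calls = {"n": 0}
def flaky(task):
    calls["n"] += 1
    if calls["n"] < 2:
        raise RuntimeError("transient failure")
    return f"done: {task}"

print(run_with_retries(flaky, "build agents"))
```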
Architecture
The AutoSwarmBuilder uses a sophisticated boss agent system to coordinate tasks
Agents are created dynamically based on task requirements
The system includes built-in logging and error handling
Results are returned in a structured format for easy processing
The AutoSwarmRouter
class is designed to route tasks to the appropriate swarm based on the provided name. This class allows for customization of preprocessing, routing, and postprocessing of tasks, making it highly adaptable to various workflows and requirements.
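The core of this routing is a dictionary of swarms keyed by name (the documented `swarm_dict` attribute), with optional preprocessing and postprocessing hooks. A minimal plain-Python sketch, using functions in place of `BaseSwarm` objects:

```python
# Illustrative sketch of name-keyed routing with optional pre/post
# processing hooks; plain functions stand in for BaseSwarm objects.

def summarizer(task: str) -> str:
    return f"summary of {task}"

def translator(task: str) -> str:
    return f"translation of {task}"

swarm_dict = {"summarizer": summarizer, "translator": translator}

def route(name: str, task: str, preprocess=None, postprocess=None) -> str:
    if name not in swarm_dict:
        raise ValueError(f"Swarm '{name}' not found")
    if preprocess:
        task = preprocess(task)
    result = swarm_dict[name](task)
    return postprocess(result) if postprocess else result

print(route("summarizer", "Q3 report", preprocess=str.upper))
```

As in the real router, an unknown swarm name raises `ValueError` rather than failing silently.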
BaseSwarm
objects that perform the tasks.name
Optional[str]
None
The name of the router. description
Optional[str]
None
The description of the router. verbose
bool
False
Whether to enable verbose mode. custom_params
Optional[Dict[str, Any]]
None
Custom parameters for the router. swarms
Sequence[BaseSwarm]
None
A list of BaseSwarm
objects. custom_preprocess
Optional[Callable]
None
Custom preprocessing function for tasks. custom_postprocess
Optional[Callable]
None
Custom postprocessing function for task results. custom_router
Optional[Callable]
None
Custom routing function for tasks."},{"location":"swarms/structs/auto_swarm_router/#attributes_1","title":"Attributes","text":"Attribute Type Description name
Optional[str]
The name of the router. description
Optional[str]
The description of the router. verbose
bool
Whether to enable verbose mode. custom_params
Optional[Dict[str, Any]]
Custom parameters for the router. swarms
Sequence[BaseSwarm]
A list of BaseSwarm
objects. custom_preprocess
Optional[Callable]
Custom preprocessing function for tasks. custom_postprocess
Optional[Callable]
Custom postprocessing function for task results. custom_router
Optional[Callable]
Custom routing function for tasks. swarm_dict
Dict[str, BaseSwarm]
A dictionary of swarms keyed by their name."},{"location":"swarms/structs/auto_swarm_router/#methods","title":"Methods","text":""},{"location":"swarms/structs/auto_swarm_router/#run","title":"run","text":"Executes the swarm simulation and routes the task to the appropriate swarm.
Arguments:
Parameter Type Default Descriptiontask
str
None
The task to be executed. *args
Additional arguments. **kwargs
Additional keyword arguments. Returns:
Return Type DescriptionAny
The result of the routed task. Raises:
ValueError
: If the specified swarm is not found.Exception
: If any error occurs during task routing or execution. Examples:
router = AutoSwarmRouter(name=\"example_router\", swarms=[swarm1, swarm2])\n\n# Running a task\nresult = router.run(task=\"example_task\")\n
"},{"location":"swarms/structs/auto_swarm_router/#len_of_swarms","title":"len_of_swarms","text":"Prints the number of swarms available in the router.
Examples:
router = AutoSwarmRouter(name=\"example_router\", swarms=[swarm1, swarm2])\n\n# Printing the number of swarms\nrouter.len_of_swarms() # Output: 2\n
"},{"location":"swarms/structs/auto_swarm_router/#list_available_swarms","title":"list_available_swarms","text":"Logs the available swarms and their descriptions.
Examples:
router = AutoSwarmRouter(name=\"example_router\", swarms=[swarm1, swarm2])\n\n# Listing available swarms\nrouter.list_available_swarms()\n# Output:\n# INFO: Swarm Name: swarm1 || Swarm Description: Description of swarm1\n# INFO: Swarm Name: swarm2 || Swarm Description: Description of swarm2\n
"},{"location":"swarms/structs/auto_swarm_router/#additional-examples","title":"Additional Examples","text":""},{"location":"swarms/structs/auto_swarm_router/#example-1-custom-preprocessing-and-postprocessing","title":"Example 1: Custom Preprocessing and Postprocessing","text":"def custom_preprocess(task, *args, **kwargs):\n # Custom preprocessing logic\n task = task.upper()\n return task, args, kwargs\n\ndef custom_postprocess(result):\n # Custom postprocessing logic\n return result.lower()\n\nrouter = AutoSwarmRouter(\n name=\"example_router\",\n swarms=[swarm1, swarm2],\n custom_preprocess=custom_preprocess,\n custom_postprocess=custom_postprocess\n)\n\n# Running a task with custom preprocessing and postprocessing\nresult = router.run(task=\"example_task\")\nprint(result) # Output will be the processed result\n
"},{"location":"swarms/structs/auto_swarm_router/#example-2-custom-router-function","title":"Example 2: Custom Router Function","text":"def custom_router(router, task, *args, **kwargs):\n # Custom routing logic\n if \"specific\" in task:\n return router.swarm_dict[\"specific_swarm\"].run(task, *args, **kwargs)\n return router.swarm_dict[\"default_swarm\"].run(task, *args, **kwargs)\n\nrouter = AutoSwarmRouter(\n name=\"example_router\",\n swarms=[default_swarm, specific_swarm],\n custom_router=custom_router\n)\n\n# Running a task with custom routing\nresult = router.run(task=\"specific_task\")\nprint(result) # Output will be the result of the routed task\n
"},{"location":"swarms/structs/auto_swarm_router/#example-3-verbose-mode","title":"Example 3: Verbose Mode","text":"router = AutoSwarmRouter(\n name=\"example_router\",\n swarms=[swarm1, swarm2],\n verbose=True\n)\n\n# Running a task with verbose mode enabled\nresult = router.run(task=\"example_task\")\n# Output will include detailed logs of the task routing and execution process\n
"},{"location":"swarms/structs/auto_swarm_router/#summary","title":"Summary","text":"The AutoSwarmRouter
class provides a flexible and customizable approach to routing tasks to appropriate swarms, supporting custom preprocessing, routing, and postprocessing functions. This makes it a powerful tool for managing complex workflows that require dynamic task handling and execution.
The BaseStructure
module provides the core structure and attributes required for running machine learning models, along with metadata management, error logging, artifact saving/loading, and event logging.
It lets you save and load model metadata, log errors, save artifacts, and keep an event log across multiple threads and batched operations. Key attributes include name, description, save_metadata_path, and save_error_path.
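The metadata save/load behavior can be sketched with the standard library. `MiniStructure` below is a hypothetical stand-in, not the real BaseStructure class; only the method names `save_metadata` and `load_metadata` come from the documentation.

```python
# Minimal sketch of metadata persistence as described above; the class
# and file path here are illustrative, not the actual BaseStructure.
import json
import os
import tempfile

class MiniStructure:
    def __init__(self, name: str, save_metadata_path: str):
        self.name = name
        self.save_metadata_path = save_metadata_path

    def save_metadata(self, metadata: dict) -> None:
        with open(self.save_metadata_path, "w") as f:
            json.dump(metadata, f)

    def load_metadata(self) -> dict:
        with open(self.save_metadata_path) as f:
            return json.load(f)

path = os.path.join(tempfile.gettempdir(), "example_metadata.json")
s = MiniStructure("ExampleStructure", path)
s.save_metadata({"key1": "value1", "key2": "value2"})
print(s.load_metadata())
```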
"},{"location":"swarms/structs/basestructure/#class-definition","title":"Class Definition:","text":""},{"location":"swarms/structs/basestructure/#arguments","title":"Arguments:","text":"Argument Type Description name str (Optional) The name of the structure. description str (Optional) A description of the structure. save_metadata bool A boolean flag to enable or disable metadata saving. save_artifact_path str (Optional) The path to save artifacts. save_metadata_path str (Optional) The path to save metadata. save_error_path str (Optional) The path to save errors."},{"location":"swarms/structs/basestructure/#methods","title":"Methods:","text":""},{"location":"swarms/structs/basestructure/#1-run","title":"1. run","text":"Runs the structure.
"},{"location":"swarms/structs/basestructure/#2-save_to_file","title":"2. save_to_file","text":"Saves data to a file. * data: Value to be saved. * file_path: Path where the data is to be saved.
"},{"location":"swarms/structs/basestructure/#3-load_from_file","title":"3. load_from_file","text":"Loads data from a file. * file_path: Path from where the data is to be loaded.
"},{"location":"swarms/structs/basestructure/#4-save_metadata","title":"4. save_metadata","text":"Saves metadata to a file. * metadata: Data to be saved as metadata.
"},{"location":"swarms/structs/basestructure/#5-load_metadata","title":"5. load_metadata","text":"Loads metadata from a file.
"},{"location":"swarms/structs/basestructure/#6-log_error","title":"6. log_error","text":"Logs error to a file.
"},{"location":"swarms/structs/basestructure/#7-save_artifact","title":"7. save_artifact","text":"Saves artifact to a file. * artifact: The artifact to be saved. * artifact_name: Name of the artifact.
"},{"location":"swarms/structs/basestructure/#8-load_artifact","title":"8. load_artifact","text":"Loads artifact from a file. * artifact_name: Name of the artifact.
"},{"location":"swarms/structs/basestructure/#9-log_event","title":"9. log_event","text":"Logs an event to a file. * event: The event to be logged. * event_type: Type of the event (optional, defaults to \"INFO\").
"},{"location":"swarms/structs/basestructure/#10-run_async","title":"10. run_async","text":"Runs the structure asynchronously.
"},{"location":"swarms/structs/basestructure/#11-save_metadata_async","title":"11. save_metadata_async","text":"Saves metadata to a file asynchronously.
"},{"location":"swarms/structs/basestructure/#12-load_metadata_async","title":"12. load_metadata_async","text":"Loads metadata from a file asynchronously.
"},{"location":"swarms/structs/basestructure/#13-log_error_async","title":"13. log_error_async","text":"Logs error to a file asynchronously.
"},{"location":"swarms/structs/basestructure/#14-save_artifact_async","title":"14. save_artifact_async","text":"Saves artifact to a file asynchronously.
"},{"location":"swarms/structs/basestructure/#15-load_artifact_async","title":"15. load_artifact_async","text":"Loads artifact from a file asynchronously.
"},{"location":"swarms/structs/basestructure/#16-log_event_async","title":"16. log_event_async","text":"Logs an event to a file asynchronously.
"},{"location":"swarms/structs/basestructure/#17-asave_to_file","title":"17. asave_to_file","text":"Saves data to a file asynchronously.
"},{"location":"swarms/structs/basestructure/#18-aload_from_file","title":"18. aload_from_file","text":"Loads data from a file asynchronously.
"},{"location":"swarms/structs/basestructure/#19-run_concurrent","title":"19. run_concurrent","text":"Runs the structure concurrently.
"},{"location":"swarms/structs/basestructure/#20-compress_data","title":"20. compress_data","text":"Compresses data.
"},{"location":"swarms/structs/basestructure/#21-decompres_data","title":"21. decompres_data","text":"Decompresses data.
"},{"location":"swarms/structs/basestructure/#22-run_batched","title":"22. run_batched","text":"Runs batched data.
"},{"location":"swarms/structs/basestructure/#examples","title":"Examples:","text":""},{"location":"swarms/structs/basestructure/#example-1-saving-metadata","title":"Example 1: Saving Metadata","text":"base_structure = BaseStructure(name=\"ExampleStructure\")\nmetadata = {\"key1\": \"value1\", \"key2\": \"value2\"}\nbase_structure.save_metadata(metadata)\n
"},{"location":"swarms/structs/basestructure/#example-2-loading-artifact","title":"Example 2: Loading Artifact","text":"artifact_name = \"example_artifact\"\nartifact_data = base_structure.load_artifact(artifact_name)\n
"},{"location":"swarms/structs/basestructure/#example-3-running-concurrently","title":"Example 3: Running Concurrently","text":"concurrent_data = [data1, data2, data3]\nresults = base_structure.run_concurrent(batched_data=concurrent_data)\n
"},{"location":"swarms/structs/basestructure/#note","title":"Note:","text":"The BaseStructure
class is designed to provide a modular and extensible structure for managing metadata, logs, errors, and batched operations while running machine learning models. The class's methods offer asynchronous and concurrent execution capabilities, thus optimizing the performance of the associated applications and models. The module's attributes and methods cater to a wide range of use cases, making it an essential foundational component for machine learning and data-based applications.
The BaseStructure
module offers a robust and flexible foundation for managing machine learning model metadata, error logs, and event tracking, including asynchronous, concurrent, and batched operations. By leveraging the inherent capabilities of this class, developers can enhance the reliability, scalability, and performance of machine learning-based applications.
References: the asyncio module (asynchronous execution) and the gzip module (data compression).
The ConcurrentWorkflow
class is designed to facilitate the concurrent execution of multiple agents, each tasked with solving a specific query or problem. This class is particularly useful in scenarios where multiple agents need to work in parallel, allowing for efficient resource utilization and faster completion of tasks. The workflow manages the execution, collects metadata, and optionally saves the results in a structured format.
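The fan-out that ConcurrentWorkflow performs can be sketched with `concurrent.futures.ThreadPoolExecutor`, which the class uses internally. Plain functions stand in for agents here; this is an illustration of the execution shape, not the class's actual code.

```python
# Illustrative sketch: every agent receives the same task concurrently,
# and results come back in agent order (ThreadPoolExecutor.map preserves
# input order even though execution is parallel).
from concurrent.futures import ThreadPoolExecutor

def agent_1(task): return f"agent-1: {task}"
def agent_2(task): return f"agent-2: {task}"
def agent_3(task): return f"agent-3: {task}"

agents = [agent_1, agent_2, agent_3]
task = "evaluate renewable energy solutions"

with ThreadPoolExecutor(max_workers=len(agents)) as pool:
    results = list(pool.map(lambda a: a(task), agents))

print(results)
```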
ThreadPoolExecutor
name
str
The name of the workflow. Defaults to \"ConcurrentWorkflow\"
. description
str
A brief description of the workflow. agents
List[Agent]
A list of agents to be executed concurrently. metadata_output_path
str
Path to save the metadata output. Defaults to \"agent_metadata.json\"
. auto_save
bool
Flag indicating whether to automatically save the metadata. output_type
str
The type of output format. Defaults to \"dict\"
. max_loops
int
Maximum number of loops for each agent. Defaults to 1
. return_str_on
bool
Flag to return output as string. Defaults to False
. auto_generate_prompts
bool
Flag indicating whether to auto-generate prompts for agents. return_entire_history
bool
Flag to return entire conversation history. Defaults to False
. interactive
bool
Flag indicating whether to enable interactive mode. Defaults to False
. cache_size
int
The size of the cache. Defaults to 100
. max_retries
int
The maximum number of retry attempts. Defaults to 3
. retry_delay
float
The delay between retry attempts in seconds. Defaults to 1.0
. show_progress
bool
Flag indicating whether to show progress. Defaults to False
. _cache
dict
The cache for storing agent outputs. _progress_bar
tqdm
The progress bar for tracking execution."},{"location":"swarms/structs/concurrentworkflow/#methods","title":"Methods","text":""},{"location":"swarms/structs/concurrentworkflow/#concurrentworkflow__init__","title":"ConcurrentWorkflow.__init__","text":"Initializes the ConcurrentWorkflow
class with the provided parameters.
name
str
\"ConcurrentWorkflow\"
The name of the workflow. description
str
\"Execution of multiple agents concurrently\"
A brief description of the workflow. agents
List[Agent]
[]
A list of agents to be executed concurrently. metadata_output_path
str
\"agent_metadata.json\"
Path to save the metadata output. auto_save
bool
True
Flag indicating whether to automatically save the metadata. output_type
str
\"dict\"
The type of output format. max_loops
int
1
Maximum number of loops for each agent. return_str_on
bool
False
Flag to return output as string. auto_generate_prompts
bool
False
Flag indicating whether to auto-generate prompts for agents. return_entire_history
bool
False
Flag to return entire conversation history. interactive
bool
False
Flag indicating whether to enable interactive mode. cache_size
int
100
The size of the cache. max_retries
int
3
The maximum number of retry attempts. retry_delay
float
1.0
The delay between retry attempts in seconds. show_progress
bool
False
Flag indicating whether to show progress."},{"location":"swarms/structs/concurrentworkflow/#raises","title":"Raises","text":"ValueError
: If the list of agents is empty or if the description is empty.Disables print statements for all agents in the workflow.
workflow.disable_agent_prints()\n
"},{"location":"swarms/structs/concurrentworkflow/#concurrentworkflowactivate_auto_prompt_engineering","title":"ConcurrentWorkflow.activate_auto_prompt_engineering","text":"Activates the auto-generate prompts feature for all agents in the workflow.
workflow.activate_auto_prompt_engineering()\n
"},{"location":"swarms/structs/concurrentworkflow/#concurrentworkflowenable_progress_bar","title":"ConcurrentWorkflow.enable_progress_bar","text":"Enables the progress bar display for task execution.
workflow.enable_progress_bar()\n
"},{"location":"swarms/structs/concurrentworkflow/#concurrentworkflowdisable_progress_bar","title":"ConcurrentWorkflow.disable_progress_bar","text":"Disables the progress bar display.
workflow.disable_progress_bar()\n
"},{"location":"swarms/structs/concurrentworkflow/#concurrentworkflowclear_cache","title":"ConcurrentWorkflow.clear_cache","text":"Clears the task cache.
workflow.clear_cache()\n
"},{"location":"swarms/structs/concurrentworkflow/#concurrentworkflowget_cache_stats","title":"ConcurrentWorkflow.get_cache_stats","text":"Gets cache statistics.
"},{"location":"swarms/structs/concurrentworkflow/#returns","title":"Returns","text":"Dict[str, int]
: A dictionary containing cache statistics.stats = workflow.get_cache_stats()\nprint(stats) # {'cache_size': 5, 'max_cache_size': 100}\n
"},{"location":"swarms/structs/concurrentworkflow/#concurrentworkflowrun","title":"ConcurrentWorkflow.run","text":"Executes the workflow for the provided task.
"},{"location":"swarms/structs/concurrentworkflow/#parameters_1","title":"Parameters","text":"Parameter Type Descriptiontask
Optional[str]
The task or query to give to all agents. img
Optional[str]
The image to be processed by the agents. *args
tuple
Additional positional arguments. **kwargs
dict
Additional keyword arguments."},{"location":"swarms/structs/concurrentworkflow/#returns_1","title":"Returns","text":"Any
: The result of the execution; the format depends on the output_type and return_entire_history settings. ValueError
: If an invalid device is specified.Exception
: If any other error occurs during execution.Runs the workflow for a batch of tasks.
"},{"location":"swarms/structs/concurrentworkflow/#parameters_2","title":"Parameters","text":"Parameter Type Descriptiontasks
List[str]
A list of tasks or queries to give to all agents."},{"location":"swarms/structs/concurrentworkflow/#returns_2","title":"Returns","text":"List[Any]
: A list of results for each task.from swarms import Agent, ConcurrentWorkflow\n\n# Initialize agents\nagents = [\n Agent(\n agent_name=f\"Agent-{i}\",\n system_prompt=\"You are a helpful assistant.\",\n model_name=\"gpt-4\",\n max_loops=1,\n )\n for i in range(3)\n]\n\n# Initialize workflow with interactive mode\nworkflow = ConcurrentWorkflow(\n name=\"Interactive Workflow\",\n agents=agents,\n interactive=True,\n show_progress=True,\n cache_size=100,\n max_retries=3,\n retry_delay=1.0\n)\n\n# Run workflow\ntask = \"What are the benefits of using Python for data analysis?\"\nresult = workflow.run(task)\nprint(result)\n
"},{"location":"swarms/structs/concurrentworkflow/#example-2-batch-processing-with-progress-bar","title":"Example 2: Batch Processing with Progress Bar","text":"# Initialize workflow\nworkflow = ConcurrentWorkflow(\n name=\"Batch Processing Workflow\",\n agents=agents,\n show_progress=True,\n auto_save=True\n)\n\n# Define tasks\ntasks = [\n \"Analyze the impact of climate change on agriculture\",\n \"Evaluate renewable energy solutions\",\n \"Assess water conservation strategies\"\n]\n\n# Run batch processing\nresults = workflow.run_batched(tasks)\n\n# Process results\nfor task, result in zip(tasks, results):\n print(f\"Task: {task}\")\n print(f\"Result: {result}\\n\")\n
"},{"location":"swarms/structs/concurrentworkflow/#example-3-error-handling-and-retries","title":"Example 3: Error Handling and Retries","text":"import logging\n\n# Set up logging\nlogging.basicConfig(level=logging.INFO)\n\n# Initialize workflow with retry settings\nworkflow = ConcurrentWorkflow(\n name=\"Reliable Workflow\",\n agents=agents,\n max_retries=3,\n retry_delay=1.0,\n show_progress=True\n)\n\n# Run workflow with error handling\ntry:\n task = \"Generate a comprehensive market analysis report\"\n result = workflow.run(task)\n print(result)\nexcept Exception as e:\n logging.error(f\"An error occurred: {str(e)}\")\n
"},{"location":"swarms/structs/concurrentworkflow/#tips-and-best-practices","title":"Tips and Best Practices","text":"The Conversation
class is a powerful tool for managing and structuring conversation data in a Python program. It enables you to create, manipulate, and analyze conversations easily with support for multiple storage backends including persistent databases. This documentation provides a comprehensive understanding of the Conversation
class, its attributes, methods, and how to effectively use it with different storage backends.
The Conversation
class is designed to manage conversations by keeping track of messages and their attributes. It offers methods for adding, deleting, updating, querying, and displaying messages within the conversation. Additionally, it supports exporting and importing conversations, searching for specific keywords, and more.
New in this version: The class now supports multiple storage backends for persistent conversation storage:
Mem0 (pip install mem0ai
)
Supabase (pip install supabase
)
Redis (pip install redis
)
DuckDB (pip install duckdb
)
Pulsar (pip install pulsar-client
)
)All backends use lazy loading - database dependencies are only imported when the specific backend is instantiated. Each backend provides helpful error messages if required packages are not installed.
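The lazy-loading behavior described above can be sketched with `importlib`: the driver package is imported only when a backend is actually constructed, and a missing package produces an actionable error. `load_backend` is a hypothetical helper for illustration, not part of the Conversation API.

```python
# Sketch of lazy backend loading: import the driver only on demand, and
# turn a missing package into a helpful install hint.
import importlib

def load_backend(module_name: str, install_name: str):
    """Import a backend's driver module only when it is needed."""
    try:
        return importlib.import_module(module_name)
    except ImportError as e:
        raise ImportError(
            f"This backend requires '{install_name}'. "
            f"Install it with: pip install {install_name}"
        ) from e

# A stdlib module always succeeds:
mod = load_backend("json", "json")

# A missing driver fails with an actionable message instead of a bare ImportError:
try:
    load_backend("definitely_not_installed_pkg", "some-driver")
except ImportError as e:
    print(e)
```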
"},{"location":"swarms/structs/conversation/#attributes","title":"Attributes","text":"Attribute Type Description id str Unique identifier for the conversation name str Name of the conversation system_prompt Optional[str] System prompt for the conversation time_enabled bool Flag to enable time tracking for messages autosave bool Flag to enable automatic saving save_enabled bool Flag to control if saving is enabled save_filepath str File path for saving conversation history load_filepath str File path for loading conversation history conversation_history list List storing conversation messages tokenizer Callable Tokenizer for counting tokens context_length int Maximum tokens allowed in conversation rules str Rules for the conversation custom_rules_prompt str Custom prompt for rules user str User identifier for messages save_as_yaml bool Flag to save as YAML save_as_json_bool bool Flag to save as JSON token_count bool Flag to enable token counting message_id_on bool Flag to enable message IDs backend str Storage backend type backend_instance Any The actual backend instance conversations_dir str Directory to store conversations"},{"location":"swarms/structs/conversation/#2-initialization-parameters","title":"2. 
Initialization Parameters","text":"Parameter Type Default Description id str generated Unique conversation ID name str None Name of the conversation system_prompt Optional[str] None System prompt for the conversation time_enabled bool False Enable time tracking autosave bool False Enable automatic saving save_enabled bool False Control if saving is enabled save_filepath str None File path for saving load_filepath str None File path for loading tokenizer Callable None Tokenizer for counting tokens context_length int 8192 Maximum tokens allowed rules str None Conversation rules custom_rules_prompt str None Custom rules prompt user str \"User:\" User identifier save_as_yaml bool False Save as YAML save_as_json_bool bool False Save as JSON token_count bool True Enable token counting message_id_on bool False Enable message IDs provider Literal[\"mem0\", \"in-memory\"] \"in-memory\" Legacy storage provider backend Optional[str] None Storage backend (takes precedence over provider) conversations_dir Optional[str] None Directory for conversations"},{"location":"swarms/structs/conversation/#3-backend-configuration","title":"3. Backend Configuration","text":""},{"location":"swarms/structs/conversation/#backend-specific-parameters","title":"Backend-Specific Parameters","text":""},{"location":"swarms/structs/conversation/#supabase-backend","title":"Supabase Backend","text":"Parameter Type Default Description supabase_url Optional[str] None Supabase project URL supabase_key Optional[str] None Supabase API key table_name str \"conversations\" Database table nameEnvironment variables: SUPABASE_URL
, SUPABASE_ANON_KEY
The backend
parameter takes precedence over the legacy provider
parameter:
# Legacy way (still supported)\nconversation = Conversation(provider=\"in-memory\")\n\n# New way (recommended)\nconversation = Conversation(backend=\"supabase\")\nconversation = Conversation(backend=\"redis\")\nconversation = Conversation(backend=\"sqlite\")\n
"},{"location":"swarms/structs/conversation/#4-methods","title":"4. Methods","text":""},{"location":"swarms/structs/conversation/#addrole-str-content-unionstr-dict-list-metadata-optionaldict-none","title":"add(role: str, content: Union[str, dict, list], metadata: Optional[dict] = None)
","text":"Adds a message to the conversation history.
Parameter Type Description role str Role of the speaker content Union[str, dict, list] Message content metadata Optional[dict] Additional metadataExample:
conversation = Conversation()\nconversation.add(\"user\", \"Hello, how are you?\")\nconversation.add(\"assistant\", \"I'm doing well, thank you!\")\n
"},{"location":"swarms/structs/conversation/#add_multiple_messagesroles-liststr-contents-listunionstr-dict-list","title":"add_multiple_messages(roles: List[str], contents: List[Union[str, dict, list]])
","text":"Adds multiple messages to the conversation history.
Parameter Type Description roles List[str] List of speaker roles contents List[Union[str, dict, list]] List of message contentsExample:
conversation = Conversation()\nconversation.add_multiple_messages(\n [\"user\", \"assistant\"],\n [\"Hello!\", \"Hi there!\"]\n)\n
"},{"location":"swarms/structs/conversation/#deleteindex-str","title":"delete(index: str)
","text":"Deletes a message from the conversation history.
Parameter Type Description index str Index of message to deleteExample:
conversation = Conversation()\nconversation.add(\"user\", \"Hello\")\nconversation.delete(0) # Deletes the first message\n
"},{"location":"swarms/structs/conversation/#updateindex-str-role-str-content-unionstr-dict","title":"update(index: str, role: str, content: Union[str, dict])
","text":"Updates a message in the conversation history.
Parameter Type Description index str Index of message to update role str New role of speaker content Union[str, dict] New message contentExample:
conversation = Conversation()\nconversation.add(\"user\", \"Hello\")\nconversation.update(0, \"user\", \"Hi there!\")\n
"},{"location":"swarms/structs/conversation/#queryindex-str","title":"query(index: str)
","text":"Retrieves a message from the conversation history.
Parameter Type Description index str Index of message to query. Example:
conversation = Conversation()\nconversation.add(\"user\", \"Hello\")\nmessage = conversation.query(0)\n
"},{"location":"swarms/structs/conversation/#searchkeyword-str","title":"search(keyword: str)
","text":"Searches for messages containing a keyword.
Parameter Type Description keyword str Keyword to search for. Example:
conversation = Conversation()\nconversation.add(\"user\", \"Hello world\")\nresults = conversation.search(\"world\")\n
"},{"location":"swarms/structs/conversation/#display_conversationdetailed-bool-false","title":"display_conversation(detailed: bool = False)
","text":"Displays the conversation history.
Parameter Type Description detailed bool Show detailed information. Example:
conversation = Conversation()\nconversation.add(\"user\", \"Hello\")\nconversation.display_conversation(detailed=True)\n
"},{"location":"swarms/structs/conversation/#export_conversationfilename-str","title":"export_conversation(filename: str)
","text":"Exports conversation history to a file.
Parameter Type Description filename str Output file path. Example:
conversation = Conversation()\nconversation.add(\"user\", \"Hello\")\nconversation.export_conversation(\"chat.txt\")\n
"},{"location":"swarms/structs/conversation/#import_conversationfilename-str","title":"import_conversation(filename: str)
","text":"Imports conversation history from a file.
Parameter Type Description filename str Input file path. Example:
conversation = Conversation()\nconversation.import_conversation(\"chat.txt\")\n
"},{"location":"swarms/structs/conversation/#count_messages_by_role","title":"count_messages_by_role()
","text":"Counts messages by role.
Returns: Dict[str, int]
Example:
conversation = Conversation()\nconversation.add(\"user\", \"Hello\")\nconversation.add(\"assistant\", \"Hi\")\ncounts = conversation.count_messages_by_role()\n
"},{"location":"swarms/structs/conversation/#return_history_as_string","title":"return_history_as_string()
","text":"Returns conversation history as a string.
Returns: str
Example:
conversation = Conversation()\nconversation.add(\"user\", \"Hello\")\nhistory = conversation.return_history_as_string()\n
"},{"location":"swarms/structs/conversation/#save_as_jsonfilename-str","title":"save_as_json(filename: str)
","text":"Saves conversation history as JSON.
Parameter Type Description filename str Output JSON file path. Example:
conversation = Conversation()\nconversation.add(\"user\", \"Hello\")\nconversation.save_as_json(\"chat.json\")\n
"},{"location":"swarms/structs/conversation/#load_from_jsonfilename-str","title":"load_from_json(filename: str)
","text":"Loads conversation history from JSON.
Parameter Type Description filename str Input JSON file path. Example:
conversation = Conversation()\nconversation.load_from_json(\"chat.json\")\n
"},{"location":"swarms/structs/conversation/#truncate_memory_with_tokenizer","title":"truncate_memory_with_tokenizer()
","text":"Truncates conversation history based on token limit.
Example:
conversation = Conversation(tokenizer=some_tokenizer)\nconversation.truncate_memory_with_tokenizer()\n
"},{"location":"swarms/structs/conversation/#clear","title":"clear()
","text":"Clears the conversation history.
Example:
conversation = Conversation()\nconversation.add(\"user\", \"Hello\")\nconversation.clear()\n
"},{"location":"swarms/structs/conversation/#to_json","title":"to_json()
","text":"Converts conversation history to JSON string.
Returns: str
Example:
conversation = Conversation()\nconversation.add(\"user\", \"Hello\")\njson_str = conversation.to_json()\n
"},{"location":"swarms/structs/conversation/#to_dict","title":"to_dict()
","text":"Converts the conversation history to a list of message dictionaries.
Returns: list
Example:
conversation = Conversation()\nconversation.add(\"user\", \"Hello\")\ndict_data = conversation.to_dict()\n
"},{"location":"swarms/structs/conversation/#to_yaml","title":"to_yaml()
","text":"Converts conversation history to YAML string.
Returns: str
Example:
conversation = Conversation()\nconversation.add(\"user\", \"Hello\")\nyaml_str = conversation.to_yaml()\n
"},{"location":"swarms/structs/conversation/#get_visible_messagesagent-agent-turn-int","title":"get_visible_messages(agent: \"Agent\", turn: int)
","text":"Gets visible messages for an agent at a specific turn.
Parameter Type Description agent Agent The agent whose view of the conversation is requested turn int Turn number. Returns: List[Dict]
Example:
conversation = Conversation()\nvisible_msgs = conversation.get_visible_messages(agent, 1)\n
"},{"location":"swarms/structs/conversation/#get_last_message_as_string","title":"get_last_message_as_string()
","text":"Gets the last message as a string.
Returns: str
Example:
conversation = Conversation()\nconversation.add(\"user\", \"Hello\")\nlast_msg = conversation.get_last_message_as_string()\n
"},{"location":"swarms/structs/conversation/#return_messages_as_list","title":"return_messages_as_list()
","text":"Returns messages as a list of strings.
Returns: List[str]
Example:
conversation = Conversation()\nconversation.add(\"user\", \"Hello\")\nmessages = conversation.return_messages_as_list()\n
"},{"location":"swarms/structs/conversation/#return_messages_as_dictionary","title":"return_messages_as_dictionary()
","text":"Returns messages as a list of dictionaries.
Returns: List[Dict]
Example:
conversation = Conversation()\nconversation.add(\"user\", \"Hello\")\nmessages = conversation.return_messages_as_dictionary()\n
"},{"location":"swarms/structs/conversation/#add_tool_output_to_agentrole-str-tool_output-dict","title":"add_tool_output_to_agent(role: str, tool_output: dict)
","text":"Adds tool output to the conversation.
Parameter Type Description role str Role of the tool tool_output dict Tool output to add. Example:
conversation = Conversation()\nconversation.add_tool_output_to_agent(\"tool\", {\"result\": \"success\"})\n
"},{"location":"swarms/structs/conversation/#return_json","title":"return_json()
","text":"Returns conversation as JSON string.
Returns: str
Example:
conversation = Conversation()\nconversation.add(\"user\", \"Hello\")\njson_str = conversation.return_json()\n
"},{"location":"swarms/structs/conversation/#get_final_message","title":"get_final_message()
","text":"Gets the final message.
Returns: str
Example:
conversation = Conversation()\nconversation.add(\"user\", \"Hello\")\nfinal_msg = conversation.get_final_message()\n
"},{"location":"swarms/structs/conversation/#get_final_message_content","title":"get_final_message_content()
","text":"Gets the content of the final message.
Returns: str
Example:
conversation = Conversation()\nconversation.add(\"user\", \"Hello\")\ncontent = conversation.get_final_message_content()\n
"},{"location":"swarms/structs/conversation/#return_all_except_first","title":"return_all_except_first()
","text":"Returns all messages except the first.
Returns: List[Dict]
Example:
conversation = Conversation()\nconversation.add(\"system\", \"Start\")\nconversation.add(\"user\", \"Hello\")\nmessages = conversation.return_all_except_first()\n
"},{"location":"swarms/structs/conversation/#return_all_except_first_string","title":"return_all_except_first_string()
","text":"Returns all messages except the first, as a string.
Returns: str
Example:
conversation = Conversation()\nconversation.add(\"system\", \"Start\")\nconversation.add(\"user\", \"Hello\")\nmessages = conversation.return_all_except_first_string()\n
"},{"location":"swarms/structs/conversation/#batch_addmessages-listdict","title":"batch_add(messages: List[dict])
","text":"Adds multiple messages in batch.
Parameter Type Description messages List[dict] List of messages to add. Example:
conversation = Conversation()\nconversation.batch_add([\n {\"role\": \"user\", \"content\": \"Hello\"},\n {\"role\": \"assistant\", \"content\": \"Hi\"}\n])\n
"},{"location":"swarms/structs/conversation/#get_cache_stats","title":"get_cache_stats()
","text":"Gets cache usage statistics.
Returns: Dict[str, int]
Example:
conversation = Conversation()\nstats = conversation.get_cache_stats()\n
"},{"location":"swarms/structs/conversation/#load_conversationname-str-conversations_dir-optionalstr-none","title":"load_conversation(name: str, conversations_dir: Optional[str] = None)
","text":"Loads a conversation from cache.
Parameter Type Description name str Name of conversation conversations_dir Optional[str] Directory containing conversations. Returns: Conversation
Example:
conversation = Conversation.load_conversation(\"my_chat\")\n
"},{"location":"swarms/structs/conversation/#list_cached_conversationsconversations_dir-optionalstr-none","title":"list_cached_conversations(conversations_dir: Optional[str] = None)
","text":"Lists all cached conversations.
Parameter Type Description conversations_dir Optional[str] Directory containing conversations. Returns: List[str]
Example:
conversations = Conversation.list_cached_conversations()\n
"},{"location":"swarms/structs/conversation/#clear_memory","title":"clear_memory()
","text":"Clears the conversation memory.
Example:
conversation = Conversation()\nconversation.add(\"user\", \"Hello\")\nconversation.clear_memory()\n
"},{"location":"swarms/structs/conversation/#5-examples","title":"5. Examples","text":""},{"location":"swarms/structs/conversation/#basic-usage","title":"Basic Usage","text":"from swarms.structs import Conversation\n\n# Create a new conversation with in-memory storage\nconversation = Conversation(\n name=\"my_chat\",\n system_prompt=\"You are a helpful assistant\",\n time_enabled=True\n)\n\n# Add messages\nconversation.add(\"user\", \"Hello!\")\nconversation.add(\"assistant\", \"Hi there!\")\n\n# Display conversation\nconversation.display_conversation()\n\n# Save conversation (in-memory only saves to file)\nconversation.save_as_json(\"my_chat.json\")\n
"},{"location":"swarms/structs/conversation/#using-supabase-backend","title":"Using Supabase Backend","text":"import os\nfrom swarms.structs import Conversation\n\n# Using environment variables\nos.environ[\"SUPABASE_URL\"] = \"https://your-project.supabase.co\"\nos.environ[\"SUPABASE_ANON_KEY\"] = \"your-anon-key\"\n\nconversation = Conversation(\n name=\"supabase_chat\",\n backend=\"supabase\",\n system_prompt=\"You are a helpful assistant\",\n time_enabled=True\n)\n\n# Or using explicit parameters\nconversation = Conversation(\n name=\"supabase_chat\",\n backend=\"supabase\",\n supabase_url=\"https://your-project.supabase.co\",\n supabase_key=\"your-anon-key\",\n system_prompt=\"You are a helpful assistant\",\n time_enabled=True\n)\n\n# Add messages (automatically stored in Supabase)\nconversation.add(\"user\", \"Hello!\")\nconversation.add(\"assistant\", \"Hi there!\")\n\n# All operations work transparently with the backend\nconversation.display_conversation()\nresults = conversation.search(\"Hello\")\n
"},{"location":"swarms/structs/conversation/#using-redis-backend","title":"Using Redis Backend","text":"from swarms.structs import Conversation\n\n# Using Redis with default settings\nconversation = Conversation(\n name=\"redis_chat\",\n backend=\"redis\",\n system_prompt=\"You are a helpful assistant\"\n)\n\n# Using Redis with custom configuration\nconversation = Conversation(\n name=\"redis_chat\",\n backend=\"redis\",\n redis_host=\"localhost\",\n redis_port=6379,\n redis_db=0,\n redis_password=\"mypassword\",\n system_prompt=\"You are a helpful assistant\"\n)\n\nconversation.add(\"user\", \"Hello Redis!\")\nconversation.add(\"assistant\", \"Hello from Redis backend!\")\n
"},{"location":"swarms/structs/conversation/#using-sqlite-backend","title":"Using SQLite Backend","text":"from swarms.structs import Conversation\n\n# SQLite with default database file\nconversation = Conversation(\n name=\"sqlite_chat\",\n backend=\"sqlite\",\n system_prompt=\"You are a helpful assistant\"\n)\n\n# SQLite with custom database path\nconversation = Conversation(\n name=\"sqlite_chat\",\n backend=\"sqlite\",\n db_path=\"/path/to/my/conversations.db\",\n system_prompt=\"You are a helpful assistant\"\n)\n\nconversation.add(\"user\", \"Hello SQLite!\")\nconversation.add(\"assistant\", \"Hello from SQLite backend!\")\n
"},{"location":"swarms/structs/conversation/#advanced-usage-with-multi-agent-systems","title":"Advanced Usage with Multi-Agent Systems","text":"import os\nfrom swarms.structs import Agent, Conversation\nfrom swarms.structs.multi_agent_exec import run_agents_concurrently\n\n# Set up Supabase backend for persistent storage\nconversation = Conversation(\n name=\"multi_agent_research\",\n backend=\"supabase\",\n supabase_url=os.getenv(\"SUPABASE_URL\"),\n supabase_key=os.getenv(\"SUPABASE_ANON_KEY\"),\n system_prompt=\"Multi-agent collaboration session\",\n time_enabled=True\n)\n\n# Create specialized agents\ndata_analyst = Agent(\n agent_name=\"DataAnalyst\",\n system_prompt=\"You are a senior data analyst...\",\n model_name=\"gpt-4o-mini\",\n max_loops=1\n)\n\nresearcher = Agent(\n agent_name=\"ResearchSpecialist\", \n system_prompt=\"You are a research specialist...\",\n model_name=\"gpt-4o-mini\",\n max_loops=1\n)\n\n# Run agents and store results in persistent backend\ntask = \"Analyze the current state of AI in healthcare\"\nresults = run_agents_concurrently(agents=[data_analyst, researcher], task=task)\n\n# Store results in conversation (automatically persisted)\nfor result, agent in zip(results, [data_analyst, researcher]):\n conversation.add(content=result, role=agent.agent_name)\n\n# Conversation is automatically saved to Supabase\nprint(f\"Conversation stored with {len(conversation.to_dict())} messages\")\n
"},{"location":"swarms/structs/conversation/#error-handling-and-fallbacks","title":"Error Handling and Fallbacks","text":"from swarms.structs import Conversation\n\ntry:\n # Attempt to use Supabase backend\n conversation = Conversation(\n name=\"fallback_test\",\n backend=\"supabase\",\n supabase_url=\"https://your-project.supabase.co\",\n supabase_key=\"your-key\"\n )\n print(\"\u2705 Supabase backend initialized successfully\")\nexcept ImportError as e:\n print(f\"\u274c Supabase not available: {e}\")\n # Automatic fallback to in-memory storage\n conversation = Conversation(\n name=\"fallback_test\",\n backend=\"in-memory\"\n )\n print(\"\ud83d\udca1 Falling back to in-memory storage\")\n\n# Usage remains the same regardless of backend\nconversation.add(\"user\", \"Hello!\")\nconversation.add(\"assistant\", \"Hi there!\")\n
"},{"location":"swarms/structs/conversation/#loading-and-managing-conversations","title":"Loading and Managing Conversations","text":"from swarms.structs import Conversation\n\n# List all saved conversations\nconversations = Conversation.list_conversations()\nfor conv in conversations:\n print(f\"ID: {conv['id']}, Name: {conv['name']}, Created: {conv['created_at']}\")\n\n# Load a specific conversation\nconversation = Conversation.load_conversation(\"my_conversation_name\")\n\n# Load conversation from specific file\nconversation = Conversation.load_conversation(\n \"my_chat\",\n load_filepath=\"/path/to/conversation.json\"\n)\n
"},{"location":"swarms/structs/conversation/#backend-comparison","title":"Backend Comparison","text":"# In-memory: Fast, no persistence\nconv_memory = Conversation(backend=\"in-memory\")\n\n# SQLite: Local file-based persistence\nconv_sqlite = Conversation(backend=\"sqlite\", db_path=\"conversations.db\")\n\n# Redis: Distributed caching, high performance\nconv_redis = Conversation(backend=\"redis\", redis_host=\"localhost\")\n\n# Supabase: Cloud PostgreSQL, real-time features\nconv_supabase = Conversation(\n backend=\"supabase\", \n supabase_url=\"https://project.supabase.co\",\n supabase_key=\"your-key\"\n)\n\n# DuckDB: Analytical workloads, columnar storage\nconv_duckdb = Conversation(backend=\"duckdb\", db_path=\"analytics.duckdb\")\n
"},{"location":"swarms/structs/conversation/#error-handling","title":"Error Handling","text":"The Conversation class handles backend errors gracefully: if a backend's optional dependencies are missing, it raises an informative error so you can fall back to in-memory storage.
Example error message:
Backend 'supabase' dependencies not available. Install with: pip install supabase\n
"},{"location":"swarms/structs/conversation/#migration-guide","title":"Migration Guide","text":""},{"location":"swarms/structs/conversation/#from-provider-to-backend","title":"From Provider to Backend","text":"# Old way\nconversation = Conversation(provider=\"in-memory\")\n\n# New way (recommended)\nconversation = Conversation(backend=\"in-memory\")\n\n# Both work, but backend takes precedence\nconversation = Conversation(\n provider=\"in-memory\", # Ignored\n backend=\"supabase\" # Used\n)\n
"},{"location":"swarms/structs/conversation/#conclusion","title":"Conclusion","text":"The Conversation
class provides a comprehensive set of tools for managing conversations in Python applications with full backend flexibility. It supports various storage backends, lazy loading, token counting, caching, and multiple export/import formats. The class is designed to be flexible and extensible, making it suitable for a wide range of use cases from simple chat applications to complex conversational AI systems with persistent storage requirements.
Choose the appropriate backend based on your needs: - in-memory: Development and testing - sqlite: Local applications and small-scale deployments - redis: Distributed applications requiring high performance - supabase: Cloud applications with real-time requirements - duckdb: Analytics and data science workloads - pulsar: Event-driven architectures and streaming applications
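The backend choice above can also be driven by configuration rather than hard-coded. A minimal sketch, where the helper name and the `SWARMS_CONVERSATION_BACKEND` environment variable are illustrative assumptions, not part of the framework:

```python
import os

# Hypothetical helper: choose a Conversation backend from configuration.
# The env-var name and this function are illustrative, not a Swarms API.
SUPPORTED_BACKENDS = {"in-memory", "sqlite", "redis", "supabase", "duckdb", "pulsar"}

def pick_backend(env=None) -> str:
    env = os.environ if env is None else env
    backend = env.get("SWARMS_CONVERSATION_BACKEND", "in-memory")
    if backend not in SUPPORTED_BACKENDS:
        raise ValueError(f"Unsupported backend: {backend!r}")
    return backend
```

The returned string could then be passed straight through, e.g. `Conversation(backend=pick_backend())`.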
"},{"location":"swarms/structs/council_of_judges/","title":"CouncilAsAJudge","text":"The CouncilAsAJudge
is a sophisticated evaluation system that employs multiple AI agents to assess model responses across various dimensions. It provides comprehensive, multi-dimensional analysis of AI model outputs through parallel evaluation and aggregation.
The CouncilAsAJudge
implements a council of specialized AI agents that evaluate different aspects of a model's response. Each agent focuses on a specific dimension of evaluation, and their findings are aggregated into a comprehensive report.
graph TD\n A[User Query] --> B[Base Agent]\n B --> C[Model Response]\n C --> D[CouncilAsAJudge]\n\n subgraph \"Evaluation Dimensions\"\n D --> E1[Accuracy Agent]\n D --> E2[Helpfulness Agent]\n D --> E3[Harmlessness Agent]\n D --> E4[Coherence Agent]\n D --> E5[Conciseness Agent]\n D --> E6[Instruction Adherence Agent]\n end\n\n E1 --> F[Evaluation Aggregation]\n E2 --> F\n E3 --> F\n E4 --> F\n E5 --> F\n E6 --> F\n\n F --> G[Comprehensive Report]\n\n style D fill:#f9f,stroke:#333,stroke-width:2px\n style F fill:#bbf,stroke:#333,stroke-width:2px
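The aggregation step at the bottom of the diagram can be sketched as follows. The dimension names mirror the six evaluation dimensions; the numeric scoring scheme and function name are illustrative assumptions (the real council aggregates textual agent evaluations):

```python
from statistics import mean

# Illustrative sketch only: assume each dimension agent emits a score in [0, 1].
DIMENSIONS = (
    "accuracy", "helpfulness", "harmlessness",
    "coherence", "conciseness", "instruction_adherence",
)

def aggregate_scores(scores: dict) -> dict:
    """Combine per-dimension scores into a simple comprehensive report."""
    missing = [d for d in DIMENSIONS if d not in scores]
    if missing:
        raise ValueError(f"Missing dimension scores: {missing}")
    return {
        "per_dimension": {d: scores[d] for d in DIMENSIONS},
        "overall": mean(scores[d] for d in DIMENSIONS),
    }
```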
"},{"location":"swarms/structs/council_of_judges/#key-features","title":"Key Features","text":"pip install swarms\n
"},{"location":"swarms/structs/council_of_judges/#basic-usage","title":"Basic Usage","text":"from swarms import Agent, CouncilAsAJudge\n\n# Create a base agent\nbase_agent = Agent(\n agent_name=\"Financial-Analysis-Agent\",\n system_prompt=\"You are a financial expert helping users understand and establish ROTH IRAs.\",\n model_name=\"claude-opus-4-20250514\",\n max_loops=1,\n)\n\n# Run the base agent\nuser_query = \"How can I establish a ROTH IRA to buy stocks and get a tax break?\"\nmodel_output = base_agent.run(user_query)\n\n# Create and run the council\npanel = CouncilAsAJudge()\nresults = panel.run(user_query, model_output)\nprint(results)\n
"},{"location":"swarms/structs/council_of_judges/#advanced-usage","title":"Advanced Usage","text":""},{"location":"swarms/structs/council_of_judges/#custom-model-configuration","title":"Custom Model Configuration","text":"from swarms import CouncilAsAJudge\n\n# Initialize with custom model\ncouncil = CouncilAsAJudge(\n model_name=\"anthropic/claude-3-sonnet-20240229\",\n output_type=\"all\",\n cache_size=256,\n max_workers=4,\n random_model_name=False\n)\n
"},{"location":"swarms/structs/council_of_judges/#parallel-processing-configuration","title":"Parallel Processing Configuration","text":"from swarms import CouncilAsAJudge\n\n# Configure parallel processing\ncouncil = CouncilAsAJudge(\n max_workers=8, # Custom number of worker threads\n random_model_name=True # Enable dynamic model selection\n)\n
"},{"location":"swarms/structs/council_of_judges/#evaluation-dimensions","title":"Evaluation Dimensions","text":"The council evaluates responses across six key dimensions:
Dimension Evaluation Criteria Accuracy \u2022 Factual correctness\u2022 Source credibility\u2022 Temporal consistency\u2022 Technical accuracy Helpfulness \u2022 Problem-solving efficacy\u2022 Solution feasibility\u2022 Context inclusion\u2022 Proactive addressing of follow-ups Harmlessness \u2022 Safety assessment\u2022 Ethical considerations\u2022 Age-appropriateness\u2022 Content sensitivity Coherence \u2022 Structural integrity\u2022 Logical flow\u2022 Information hierarchy\u2022 Transition effectiveness Conciseness \u2022 Communication efficiency\u2022 Information density\u2022 Redundancy elimination\u2022 Focus maintenance Instruction Adherence \u2022 Requirement coverage\u2022 Constraint compliance\u2022 Format matching\u2022 Scope appropriateness"},{"location":"swarms/structs/council_of_judges/#api-reference","title":"API Reference","text":""},{"location":"swarms/structs/council_of_judges/#councilasajudge_1","title":"CouncilAsAJudge","text":"class CouncilAsAJudge:\n def __init__(\n self,\n id: str = swarm_id(),\n name: str = \"CouncilAsAJudge\",\n description: str = \"Evaluates the model's response across multiple dimensions\",\n model_name: str = \"gpt-4o-mini\",\n output_type: str = \"all\",\n cache_size: int = 128,\n max_workers: int = None,\n random_model_name: bool = True,\n )\n
"},{"location":"swarms/structs/council_of_judges/#parameters","title":"Parameters","text":"id
(str): Unique identifier for the councilname
(str): Display name of the councildescription
(str): Description of the council's purposemodel_name
(str): Name of the model to use for evaluationsoutput_type
(str): Type of output to returncache_size
(int): Size of the LRU cache for promptsmax_workers
(int): Maximum number of worker threadsrandom_model_name
(bool): Whether to use random model selectiondef run(self, task: str, model_response: str) -> None\n
Evaluates a model response across all dimensions.
"},{"location":"swarms/structs/council_of_judges/#parameters_1","title":"Parameters","text":"task
(str): Original user promptmodel_response
(str): Model's response to evaluatefrom swarms import Agent, CouncilAsAJudge\n\n# Create financial analysis agent\nfinancial_agent = Agent(\n agent_name=\"Financial-Analysis-Agent\",\n system_prompt=\"You are a financial expert helping users understand and establish ROTH IRAs.\",\n model_name=\"claude-opus-4-20250514\",\n max_loops=1,\n)\n\n# Run analysis\nquery = \"How can I establish a ROTH IRA to buy stocks and get a tax break?\"\nresponse = financial_agent.run(query)\n\n# Evaluate response\ncouncil = CouncilAsAJudge()\nevaluation = council.run(query, response)\nprint(evaluation)\n
"},{"location":"swarms/structs/council_of_judges/#technical-documentation-example","title":"Technical Documentation Example","text":"from swarms import Agent, CouncilAsAJudge\n\n# Create documentation agent\ndoc_agent = Agent(\n agent_name=\"Documentation-Agent\",\n system_prompt=\"You are a technical documentation expert.\",\n model_name=\"gpt-4\",\n max_loops=1,\n)\n\n# Generate documentation\nquery = \"Explain how to implement a REST API using FastAPI\"\nresponse = doc_agent.run(query)\n\n# Evaluate documentation quality\ncouncil = CouncilAsAJudge(\n model_name=\"anthropic/claude-3-sonnet-20240229\",\n output_type=\"all\"\n)\nevaluation = council.run(query, response)\nprint(evaluation)\n
"},{"location":"swarms/structs/council_of_judges/#best-practices","title":"Best Practices","text":""},{"location":"swarms/structs/council_of_judges/#model-selection","title":"Model Selection","text":"Model Selection Best Practices
Performance Tips
Error Handling Guidelines
Resource Management
Memory Problems
If you encounter memory-related problems:
Performance Issues
To improve performance:
Evaluation Issues
When evaluations fail:
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
"},{"location":"swarms/structs/council_of_judges/#license","title":"License","text":"License
This project is licensed under the MIT License - see the LICENSE file for details.
"},{"location":"swarms/structs/create_new_swarm/","title":"How to Add a New Swarm Class","text":"This guide provides comprehensive step-by-step instructions for developers to create and add a new swarm. It emphasizes the importance of adhering to best practices, using proper type hints, and documenting code thoroughly to ensure maintainability, scalability, and clarity in your implementations.
"},{"location":"swarms/structs/create_new_swarm/#overview","title":"Overview","text":"A Swarm class enables developers to manage and coordinate multiple agents working together to accomplish complex tasks efficiently. Each Swarm must:
run(task: str, img: str, *args, **kwargs)
method, which serves as the primary execution method for tasks.name
, description
, and agents
parameters.agents
is a list of callables, with each callable adhering to specific requirements for dynamic agent behavior.Each Agent within the swarm must:
agent_name
, system_prompt
, and a run
method.By adhering to these requirements, you can create robust, reusable, and modular swarms that streamline task management and enhance collaborative functionality. Developers are also encouraged to contribute their swarms back to the open-source community by submitting a pull request to the Swarms repository at https://github.com/kyegomez/swarms.
"},{"location":"swarms/structs/create_new_swarm/#creating-a-swarm-class","title":"Creating a Swarm Class","text":"Below is a detailed template for creating a Swarm class. Ensure that all elements are documented and clearly defined:
from typing import Callable, Any, List\n\nclass MySwarm:\n \"\"\"\n A custom swarm class to manage and execute tasks with multiple agents.\n\n Attributes:\n name (str): The name of the swarm.\n description (str): A brief description of the swarm's purpose.\n agents (List[Callable]): A list of callables representing the agents to be utilized.\n \"\"\"\n\n def __init__(self, name: str, description: str, agents: List[Callable]):\n \"\"\"\n Initialize the Swarm with its name, description, and agents.\n\n Args:\n name (str): The name of the swarm.\n description (str): A description of the swarm.\n agents (List[Callable]): A list of callables that provide the agents for the swarm.\n \"\"\"\n self.name = name\n self.description = description\n self.agents = agents\n\n def run(self, task: str, img: str, *args: Any, **kwargs: Any) -> Any:\n \"\"\"\n Execute a task using the swarm and its agents.\n\n Args:\n task (str): The task description.\n img (str): The image input.\n *args: Additional positional arguments for customization.\n **kwargs: Additional keyword arguments for fine-tuning behavior.\n\n Returns:\n Any: The result of the task execution, aggregated from all agents.\n \"\"\"\n results = []\n for agent in self.agents:\n result = agent.run(task, img, *args, **kwargs)\n results.append(result)\n return results\n
This Swarm class serves as the main orchestrator for coordinating agents and running tasks dynamically and flexibly.
"},{"location":"swarms/structs/create_new_swarm/#creating-an-agent-class","title":"Creating an Agent Class","text":"Each agent must follow a well-defined structure to ensure compatibility with the swarm. Below is an example of an agent class:
from typing import Any\n\nclass Agent:\n    \"\"\"\n    A single agent class to handle specific tasks assigned by the swarm.\n\n    Attributes:\n        agent_name (str): The name of the agent.\n        system_prompt (str): The system prompt guiding the agent's behavior and purpose.\n    \"\"\"\n\n    def __init__(self, agent_name: str, system_prompt: str):\n        \"\"\"\n        Initialize the agent with its name and system prompt.\n\n        Args:\n            agent_name (str): The name of the agent.\n            system_prompt (str): The guiding prompt for the agent.\n        \"\"\"\n        self.agent_name = agent_name\n        self.system_prompt = system_prompt\n\n    def run(self, task: str, img: str, *args: Any, **kwargs: Any) -> Any:\n        \"\"\"\n        Execute a specific task assigned to the agent.\n\n        Args:\n            task (str): The task description.\n            img (str): The image input for processing.\n            *args: Additional positional arguments for task details.\n            **kwargs: Additional keyword arguments for extended functionality.\n\n        Returns:\n            Any: The result of the task execution, which can be customized.\n        \"\"\"\n        # Example implementation (to be customized by developer)\n        return f\"Agent {self.agent_name} executed task: {task}\"\n
This structure ensures that each agent can independently handle tasks and integrate seamlessly into a swarm.
"},{"location":"swarms/structs/create_new_swarm/#adding-your-swarm-to-a-project","title":"Adding Your Swarm to a Project","text":""},{"location":"swarms/structs/create_new_swarm/#step-1-define-your-agents","title":"Step 1: Define Your Agents","text":"Create one or more instances of the Agent
class to serve as components of your swarm. For example:
def create_agents():\n return [\n Agent(agent_name=\"Agent1\", system_prompt=\"Analyze the image and summarize results.\"),\n Agent(agent_name=\"Agent2\", system_prompt=\"Detect objects and highlight key features.\"),\n ]\n
"},{"location":"swarms/structs/create_new_swarm/#step-2-implement-your-swarm","title":"Step 2: Implement Your Swarm","text":"Create an instance of your Swarm class, defining its name, description, and associated agents:
my_swarm = MySwarm(\n name=\"Image Analysis Swarm\",\n description=\"A swarm designed to analyze images and perform a range of related tasks.\",\n agents=create_agents()\n)\n
"},{"location":"swarms/structs/create_new_swarm/#step-3-execute-tasks","title":"Step 3: Execute Tasks","text":"Call the run
method of your swarm, passing in the required parameters for execution:
results = my_swarm.run(task=\"Analyze image content\", img=\"path/to/image.jpg\")\nprint(results)\n
This simple flow allows you to dynamically utilize agents for diverse operations and ensures efficient task execution.
"},{"location":"swarms/structs/create_new_swarm/#best-practices","title":"Best Practices","text":"To ensure your swarm implementation is efficient and maintainable, follow these best practices:
Type Annotations: Use precise type hints for parameters and return types to improve code readability and support static analysis tools.
Comprehensive Documentation: Include clear and detailed docstrings for all classes, methods, and attributes to ensure your code is understandable.
Thorough Testing: Test your swarm and agents with various tasks to verify correctness and identify potential edge cases.
Modular Design: Keep your swarm and agent logic modular, enabling reuse and easy extensions for future enhancements.
Error Handling: Implement robust error handling in the run
methods to gracefully manage unexpected inputs or issues during execution.
Code Review: Regularly review and refactor your code to align with the latest best practices and maintain high quality.
Scalability: Design your swarm with scalability in mind, ensuring it can handle a large number of agents and complex tasks.
Logging and Monitoring: Include comprehensive logging to track task execution and monitor performance, enabling easier debugging and optimization.
Open-Source Contributions: Consider contributing your swarm to the Swarms repository to benefit the community. Submit a pull request at https://github.com/kyegomez/swarms.
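To make the error-handling practice concrete, here is a hedged sketch of a fault-tolerant run loop. It assumes agents shaped like the `Agent` template above; `run_safely` is an illustrative helper, not a framework API:

```python
import logging

logger = logging.getLogger("my_swarm")

def run_safely(agents, task: str, img: str) -> list:
    """Run each agent in turn; one failing agent does not abort the swarm."""
    results = []
    for agent in agents:
        try:
            results.append(agent.run(task, img))
        except Exception as exc:
            # Log the failure and record a placeholder so downstream code
            # still sees one result slot per agent.
            logger.error("Agent %s failed: %s", getattr(agent, "agent_name", "unknown"), exc)
            results.append(None)
    return results
```

A swarm's `run` method could delegate to such a loop so that partial results are preserved and failures are visible in the logs.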
Given the implementation above, executing a task might produce output such as:
[\n \"Agent Agent1 executed task: Analyze image content\",\n \"Agent Agent2 executed task: Analyze image content\"\n]\n
The modular design ensures that each agent contributes to the overall functionality of the swarm, allowing seamless scalability and dynamic task management.
"},{"location":"swarms/structs/create_new_swarm/#conclusion","title":"Conclusion","text":"By following these guidelines, you can create swarms that are powerful, flexible, and maintainable. Leveraging the provided templates and best practices enables you to build efficient multi-agent systems capable of handling diverse and complex tasks. Proper structuring, thorough testing, and adherence to best practices will ensure your swarm integrates effectively into any project, delivering robust and reliable performance. Furthermore, maintaining clear documentation and emphasizing modularity will help your implementation adapt to future needs and use cases. Empower your projects with a well-designed swarm architecture today, and consider submitting your swarm to the open-source community to foster collaboration and innovation.
"},{"location":"swarms/structs/custom_swarm/","title":"Building Custom Swarms: A Comprehensive Guide for Swarm Engineers","text":""},{"location":"swarms/structs/custom_swarm/#introduction","title":"Introduction","text":"As artificial intelligence and machine learning continue to grow in complexity and applicability, building systems that can harness multiple agents to solve complex tasks becomes more critical. Swarm engineering enables AI agents to collaborate and solve problems autonomously in diverse fields such as finance, marketing, operations, and even creative industries.
This comprehensive guide covers how to build a custom swarm system that integrates multiple agents into a cohesive system capable of solving tasks collaboratively. We'll cover everything from basic swarm structure to advanced features like conversation management, logging, error handling, and scalability.
By the end of this guide, you will have a complete understanding of:
What swarms are and how they can be built
How to create agents and integrate them into swarms
How to implement proper conversation management for message storage
Best practices for error handling, logging, and optimization
How to make swarms scalable and production-ready
A Swarm refers to a collection of agents that collaborate to solve a problem. Each agent in the swarm performs part of the task, either independently or by communicating with other agents. Swarms are ideal for:
Scalability: You can add or remove agents dynamically based on the task's complexity
Flexibility: Each agent can be designed to specialize in different parts of the problem, offering modularity
Autonomy: Agents in a swarm can operate autonomously, reducing the need for constant supervision
Conversation Management: All interactions are tracked and stored for analysis and continuity
Every Swarm class must adhere to these fundamental requirements:
"},{"location":"swarms/structs/custom_swarm/#required-methods-and-attributes","title":"Required Methods and Attributes","text":"run(task: str, img: str, *args, **kwargs): The primary execution method for tasks
name: A descriptive name for the swarm
description: A clear description of the swarm's purpose
agents: A list of callables representing the agents
conversation: A conversation structure for message storage and history management
Each Agent within the swarm must contain:
agent_name: Unique identifier for the agent
system_prompt: Instructions that guide the agent's behavior
run method: Executes tasks assigned to the agent
from typing import List, Union, Any, Optional, Callable\nfrom loguru import logger\nfrom swarms.structs.base_swarm import BaseSwarm\nfrom swarms.structs.conversation import Conversation\nfrom swarms.structs.agent import Agent\nimport concurrent.futures\nimport os\nimport time  # used below to build unique conversation ids\n
"},{"location":"swarms/structs/custom_swarm/#custom-exception-handling","title":"Custom Exception Handling","text":"class SwarmExecutionError(Exception):\n \"\"\"Custom exception for handling swarm execution errors.\"\"\"\n pass\n\nclass AgentValidationError(Exception):\n \"\"\"Custom exception for agent validation errors.\"\"\"\n pass\n
"},{"location":"swarms/structs/custom_swarm/#building-the-custom-swarm-class","title":"Building the Custom Swarm Class","text":""},{"location":"swarms/structs/custom_swarm/#basic-swarm-structure","title":"Basic Swarm Structure","text":"class CustomSwarm(BaseSwarm):\n \"\"\"\n A custom swarm class to manage and execute tasks with multiple agents.\n\n This swarm integrates conversation management for tracking all agent interactions,\n provides error handling, and supports both sequential and concurrent execution.\n\n Attributes:\n name (str): The name of the swarm.\n description (str): A brief description of the swarm's purpose.\n agents (List[Callable]): A list of callables representing the agents.\n conversation (Conversation): Conversation management for message storage.\n max_workers (int): Maximum number of concurrent workers for parallel execution.\n autosave_conversation (bool): Whether to automatically save conversation history.\n \"\"\"\n\n def __init__(\n self,\n name: str,\n description: str,\n agents: List[Callable],\n max_workers: int = 4,\n autosave_conversation: bool = True,\n conversation_config: Optional[dict] = None,\n ):\n \"\"\"\n Initialize the CustomSwarm with its name, description, and agents.\n\n Args:\n name (str): The name of the swarm.\n description (str): A description of the swarm.\n agents (List[Callable]): A list of callables that provide the agents for the swarm.\n max_workers (int): Maximum number of concurrent workers.\n autosave_conversation (bool): Whether to automatically save conversations.\n conversation_config (dict): Configuration for conversation management.\n \"\"\"\n super().__init__(name=name, description=description, agents=agents)\n self.name = name\n self.description = description\n self.agents = agents\n self.max_workers = max_workers\n self.autosave_conversation = autosave_conversation\n\n # Initialize conversation management\n # See: https://docs.swarms.world/swarms/structs/conversation/\n conversation_config = 
conversation_config or {}\n self.conversation = Conversation(\n id=f\"swarm_{name}_{int(time.time())}\",\n name=f\"{name}_conversation\",\n autosave=autosave_conversation,\n save_enabled=True,\n time_enabled=True,\n **conversation_config\n )\n\n # Validate agents and log initialization\n self.validate_agents()\n logger.info(f\"\ud83d\ude80 CustomSwarm '{self.name}' initialized with {len(self.agents)} agents\")\n\n # Add swarm initialization to conversation history\n self.conversation.add(\n role=\"System\",\n content=f\"Swarm '{self.name}' initialized with {len(self.agents)} agents: {[getattr(agent, 'agent_name', 'Unknown') for agent in self.agents]}\"\n )\n\n def validate_agents(self):\n \"\"\"\n Validates that each agent has the required methods and attributes.\n\n Raises:\n AgentValidationError: If any agent fails validation.\n \"\"\"\n for i, agent in enumerate(self.agents):\n # Check for required run method\n if not hasattr(agent, 'run'):\n raise AgentValidationError(f\"Agent at index {i} does not have a 'run' method.\")\n\n # Check for agent_name attribute\n if not hasattr(agent, 'agent_name'):\n logger.warning(f\"Agent at index {i} does not have 'agent_name' attribute. 
Using 'Agent_{i}'\")\n agent.agent_name = f\"Agent_{i}\"\n\n logger.info(f\"\u2705 Agent '{agent.agent_name}' validated successfully.\")\n\n def run(self, task: str, img: str = None, *args: Any, **kwargs: Any) -> Any:\n \"\"\"\n Execute a task using the swarm and its agents with conversation tracking.\n\n Args:\n task (str): The task description.\n img (str): The image input (optional).\n *args: Additional positional arguments for customization.\n **kwargs: Additional keyword arguments for fine-tuning behavior.\n\n Returns:\n Any: The result of the task execution, aggregated from all agents.\n \"\"\"\n logger.info(f\"\ud83c\udfaf Running task '{task}' across {len(self.agents)} agents in swarm '{self.name}'\")\n\n # Add task to conversation history\n self.conversation.add(\n role=\"User\",\n content=f\"Task: {task}\" + (f\" | Image: {img}\" if img else \"\"),\n category=\"input\"\n )\n\n try:\n # Execute task across all agents\n results = self._execute_agents(task, img, *args, **kwargs)\n\n # Add results to conversation\n self.conversation.add(\n role=\"Swarm\",\n content=f\"Task completed successfully. 
Processed by {len(results)} agents.\",\n category=\"output\"\n )\n\n logger.success(f\"\u2705 Task completed successfully by swarm '{self.name}'\")\n return results\n\n except Exception as e:\n error_msg = f\"\u274c Task execution failed in swarm '{self.name}': {str(e)}\"\n logger.error(error_msg)\n\n # Add error to conversation\n self.conversation.add(\n role=\"System\",\n content=f\"Error: {error_msg}\",\n category=\"error\"\n )\n\n raise SwarmExecutionError(error_msg)\n\n def _execute_agents(self, task: str, img: str = None, *args, **kwargs) -> List[Any]:\n \"\"\"\n Execute the task across all agents with proper conversation tracking.\n\n Args:\n task (str): The task to execute.\n img (str): Optional image input.\n\n Returns:\n List[Any]: Results from all agents.\n \"\"\"\n results = []\n\n for agent in self.agents:\n try:\n # Execute agent task\n result = agent.run(task, img, *args, **kwargs)\n results.append(result)\n\n # Add agent response to conversation\n self.conversation.add(\n role=agent.agent_name,\n content=result,\n category=\"agent_output\"\n )\n\n logger.info(f\"\u2705 Agent '{agent.agent_name}' completed task successfully\")\n\n except Exception as e:\n error_msg = f\"Agent '{agent.agent_name}' failed: {str(e)}\"\n logger.error(error_msg)\n\n # Add agent error to conversation\n self.conversation.add(\n role=agent.agent_name,\n content=f\"Error: {error_msg}\",\n category=\"agent_error\"\n )\n\n # Continue with other agents but log the failure\n results.append(f\"FAILED: {error_msg}\")\n\n return results\n
"},{"location":"swarms/structs/custom_swarm/#enhanced-swarm-with-concurrent-execution","title":"Enhanced Swarm with Concurrent Execution","text":" def run_concurrent(self, task: str, img: str = None, *args: Any, **kwargs: Any) -> List[Any]:\n \"\"\"\n Execute a task using concurrent execution for better performance.\n\n Args:\n task (str): The task description.\n img (str): The image input (optional).\n *args: Additional positional arguments.\n **kwargs: Additional keyword arguments.\n\n Returns:\n List[Any]: Results from all agents executed concurrently.\n \"\"\"\n logger.info(f\"\ud83d\ude80 Running task concurrently across {len(self.agents)} agents\")\n\n # Add task to conversation\n self.conversation.add(\n role=\"User\",\n content=f\"Concurrent Task: {task}\" + (f\" | Image: {img}\" if img else \"\"),\n category=\"input\"\n )\n\n results = []\n\n with concurrent.futures.ThreadPoolExecutor(max_workers=self.max_workers) as executor:\n # Submit all agent tasks\n future_to_agent = {\n executor.submit(self._run_single_agent, agent, task, img, *args, **kwargs): agent\n for agent in self.agents\n }\n\n # Collect results as they complete\n for future in concurrent.futures.as_completed(future_to_agent):\n agent = future_to_agent[future]\n try:\n result = future.result()\n results.append(result)\n\n # Add to conversation\n self.conversation.add(\n role=agent.agent_name,\n content=result,\n category=\"agent_output\"\n )\n\n except Exception as e:\n error_msg = f\"Concurrent execution failed for agent '{agent.agent_name}': {str(e)}\"\n logger.error(error_msg)\n results.append(f\"FAILED: {error_msg}\")\n\n # Add error to conversation\n self.conversation.add(\n role=agent.agent_name,\n content=f\"Error: {error_msg}\",\n category=\"agent_error\"\n )\n\n # Add completion summary\n self.conversation.add(\n role=\"Swarm\",\n content=f\"Concurrent task completed. 
{len(results)} agents processed.\",\n category=\"output\"\n )\n\n return results\n\n def _run_single_agent(self, agent: Callable, task: str, img: str = None, *args, **kwargs) -> Any:\n \"\"\"\n Execute a single agent with error handling.\n\n Args:\n agent: The agent to execute.\n task (str): The task to execute.\n img (str): Optional image input.\n\n Returns:\n Any: The agent's result.\n \"\"\"\n try:\n return agent.run(task, img, *args, **kwargs)\n except Exception as e:\n logger.error(f\"Agent '{getattr(agent, 'agent_name', 'Unknown')}' execution failed: {str(e)}\")\n raise\n
"},{"location":"swarms/structs/custom_swarm/#advanced-features","title":"Advanced Features","text":" def run_with_retries(self, task: str, img: str = None, retries: int = 3, *args, **kwargs) -> List[Any]:\n \"\"\"\n Execute a task with retry logic for failed agents.\n\n Args:\n task (str): The task to execute.\n img (str): Optional image input.\n retries (int): Number of retries for failed agents.\n\n Returns:\n List[Any]: Results from all agents with retry attempts.\n \"\"\"\n logger.info(f\"\ud83d\udd04 Running task with {retries} retries per agent\")\n\n # Add task to conversation\n self.conversation.add(\n role=\"User\",\n content=f\"Task with retries ({retries}): {task}\",\n category=\"input\"\n )\n\n results = []\n\n for agent in self.agents:\n attempt = 0\n success = False\n\n while attempt <= retries and not success:\n try:\n result = agent.run(task, img, *args, **kwargs)\n results.append(result)\n success = True\n\n # Add successful result to conversation\n self.conversation.add(\n role=agent.agent_name,\n content=result,\n category=\"agent_output\"\n )\n\n if attempt > 0:\n logger.success(f\"\u2705 Agent '{agent.agent_name}' succeeded on attempt {attempt + 1}\")\n\n except Exception as e:\n attempt += 1\n error_msg = f\"Agent '{agent.agent_name}' failed on attempt {attempt}: {str(e)}\"\n logger.warning(error_msg)\n\n # Add retry attempt to conversation\n self.conversation.add(\n role=agent.agent_name,\n content=f\"Retry attempt {attempt}: {error_msg}\",\n category=\"agent_retry\"\n )\n\n if attempt > retries:\n final_error = f\"Agent '{agent.agent_name}' exhausted all {retries} retries\"\n logger.error(final_error)\n results.append(f\"FAILED: {final_error}\")\n\n # Add final failure to conversation\n self.conversation.add(\n role=agent.agent_name,\n content=final_error,\n category=\"agent_error\"\n )\n\n return results\n\n def get_conversation_summary(self) -> dict:\n \"\"\"\n Get a summary of the conversation history and agent performance.\n\n Returns:\n 
dict: Summary of conversation statistics and agent performance.\n \"\"\"\n # Get conversation statistics\n message_counts = self.conversation.count_messages_by_role()\n\n # Count categories\n category_counts = {}\n for message in self.conversation.conversation_history:\n category = message.get(\"category\", \"uncategorized\")\n category_counts[category] = category_counts.get(category, 0) + 1\n\n # Get token counts if available\n token_summary = self.conversation.export_and_count_categories()\n\n return {\n \"swarm_name\": self.name,\n \"total_messages\": len(self.conversation.conversation_history),\n \"messages_by_role\": message_counts,\n \"messages_by_category\": category_counts,\n \"token_summary\": token_summary,\n \"conversation_id\": self.conversation.id,\n }\n\n def export_conversation(self, filepath: str = None) -> str:\n \"\"\"\n Export the conversation history to a file.\n\n Args:\n filepath (str): Optional custom filepath for export.\n\n Returns:\n str: The filepath where the conversation was saved.\n \"\"\"\n if filepath is None:\n filepath = f\"conversations/{self.name}_{self.conversation.id}.json\"\n\n self.conversation.export_conversation(filepath)\n logger.info(f\"\ud83d\udcc4 Conversation exported to: {filepath}\")\n return filepath\n\n def display_conversation(self, detailed: bool = True):\n \"\"\"\n Display the conversation history in a formatted way.\n\n Args:\n detailed (bool): Whether to show detailed information.\n \"\"\"\n logger.info(f\"\ud83d\udcac Displaying conversation for swarm: {self.name}\")\n self.conversation.display_conversation(detailed=detailed)\n
"},{"location":"swarms/structs/custom_swarm/#creating-agents-for-your-swarm","title":"Creating Agents for Your Swarm","text":""},{"location":"swarms/structs/custom_swarm/#basic-agent-structure","title":"Basic Agent Structure","text":"class CustomAgent:\n \"\"\"\n A custom agent class that integrates with the swarm conversation system.\n\n Attributes:\n agent_name (str): The name of the agent.\n system_prompt (str): The system prompt guiding the agent's behavior.\n conversation (Optional[Conversation]): Shared conversation for context.\n \"\"\"\n\n def __init__(\n self, \n agent_name: str, \n system_prompt: str,\n conversation: Optional[Conversation] = None\n ):\n \"\"\"\n Initialize the agent with its name and system prompt.\n\n Args:\n agent_name (str): The name of the agent.\n system_prompt (str): The guiding prompt for the agent.\n conversation (Optional[Conversation]): Shared conversation context.\n \"\"\"\n self.agent_name = agent_name\n self.system_prompt = system_prompt\n self.conversation = conversation\n\n def run(self, task: str, img: str = None, *args: Any, **kwargs: Any) -> Any:\n \"\"\"\n Execute a specific task assigned to the agent.\n\n Args:\n task (str): The task description.\n img (str): The image input for processing.\n *args: Additional positional arguments.\n **kwargs: Additional keyword arguments.\n\n Returns:\n Any: The result of the task execution.\n \"\"\"\n # Add context from shared conversation if available\n context = \"\"\n if self.conversation:\n context = f\"Previous context: {self.conversation.get_last_message_as_string()}\\n\\n\"\n\n # Process the task (implement your custom logic here)\n result = f\"Agent {self.agent_name} processed: {context}{task}\"\n\n logger.info(f\"\ud83e\udd16 Agent '{self.agent_name}' completed task\")\n return result\n
"},{"location":"swarms/structs/custom_swarm/#using-swarms-framework-agents","title":"Using Swarms Framework Agents","text":"You can also use the built-in Agent class from the Swarms framework:
from swarms.structs.agent import Agent\n\ndef create_financial_agent() -> Agent:\n \"\"\"Create a financial analysis agent.\"\"\"\n return Agent(\n agent_name=\"FinancialAnalyst\",\n system_prompt=\"You are a financial analyst specializing in market analysis and risk assessment.\",\n model_name=\"gpt-4o-mini\",\n max_loops=1,\n )\n\ndef create_marketing_agent() -> Agent:\n \"\"\"Create a marketing analysis agent.\"\"\"\n return Agent(\n agent_name=\"MarketingSpecialist\", \n system_prompt=\"You are a marketing specialist focused on campaign analysis and customer insights.\",\n model_name=\"gpt-4o-mini\",\n max_loops=1,\n )\n
"},{"location":"swarms/structs/custom_swarm/#complete-implementation-example","title":"Complete Implementation Example","text":""},{"location":"swarms/structs/custom_swarm/#setting-up-your-swarm","title":"Setting Up Your Swarm","text":"import time\nfrom typing import List\n\ndef create_multi_domain_swarm() -> CustomSwarm:\n \"\"\"\n Create a comprehensive multi-domain analysis swarm.\n\n Returns:\n CustomSwarm: A configured swarm with multiple specialized agents.\n \"\"\"\n # Create agents\n agents = [\n create_financial_agent(),\n create_marketing_agent(),\n Agent(\n agent_name=\"OperationsAnalyst\",\n system_prompt=\"You are an operations analyst specializing in process optimization and efficiency.\",\n model_name=\"gpt-4o-mini\",\n max_loops=1,\n ),\n ]\n\n # Configure conversation settings\n conversation_config = {\n \"backend\": \"sqlite\", # Use SQLite for persistent storage\n \"db_path\": f\"conversations/swarm_conversations.db\",\n \"time_enabled\": True,\n \"token_count\": True,\n }\n\n # Create the swarm\n swarm = CustomSwarm(\n name=\"MultiDomainAnalysisSwarm\",\n description=\"A comprehensive swarm for financial, marketing, and operations analysis\",\n agents=agents,\n max_workers=3,\n autosave_conversation=True,\n conversation_config=conversation_config,\n )\n\n return swarm\n\n# Usage example\nif __name__ == \"__main__\":\n # Create and initialize the swarm\n swarm = create_multi_domain_swarm()\n\n # Execute a complex analysis task\n task = \"\"\"\n Analyze the Q3 2024 performance data for our company:\n - Revenue: $2.5M (up 15% from Q2)\n\n - Customer acquisition: 1,200 new customers\n\n - Marketing spend: $150K\n\n - Operational costs: $800K\n\n\n Provide insights from financial, marketing, and operations perspectives.\n \"\"\"\n\n # Run the analysis\n results = swarm.run(task)\n\n # Display results\n print(\"\\n\" + \"=\"*50)\n print(\"SWARM ANALYSIS RESULTS\")\n print(\"=\"*50)\n\n for i, result in enumerate(results):\n agent_name = 
swarm.agents[i].agent_name\n print(f\"\\n\ud83e\udd16 {agent_name}:\")\n print(f\"\ud83d\udcca {result}\")\n\n # Get conversation summary\n summary = swarm.get_conversation_summary()\n print(f\"\\n\ud83d\udcc8 Conversation Summary:\")\n print(f\" Total messages: {summary['total_messages']}\")\n print(f\" Total tokens: {summary['token_summary']['total_tokens']}\")\n\n # Export conversation for later analysis\n export_path = swarm.export_conversation()\n print(f\"\ud83d\udcbe Conversation saved to: {export_path}\")\n
"},{"location":"swarms/structs/custom_swarm/#advanced-usage-with-concurrent-execution","title":"Advanced Usage with Concurrent Execution","text":"def run_batch_analysis():\n \"\"\"Run several tasks; each task's agents execute concurrently.\"\"\"\n swarm = create_multi_domain_swarm()\n\n tasks = [\n \"Analyze Q1 financial performance\",\n \"Evaluate marketing campaign effectiveness\",\n \"Review operational efficiency metrics\",\n \"Assess customer satisfaction trends\",\n ]\n\n # Tasks run one after another; run_concurrent parallelizes the agents within each task\n all_results = []\n for task in tasks:\n results = swarm.run_concurrent(task)\n all_results.append({\"task\": task, \"results\": results})\n\n return all_results\n
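If the tasks themselves should also run in parallel, not just the agents within each task, a thread pool can fan them out. This is a hedged sketch: `run_batch_parallel` is a hypothetical helper, and the swarm object is assumed to expose the `run_concurrent` method shown earlier:

```python
import concurrent.futures

def run_batch_parallel(swarm, tasks, max_workers=4):
    """Submit every task to the swarm at once; collect results as they finish."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(swarm.run_concurrent, t): t for t in tasks}
        return [
            {"task": futures[f], "results": f.result()}
            for f in concurrent.futures.as_completed(futures)
        ]
```

Note that results arrive in completion order, not submission order, so each entry carries its originating task.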
"},{"location":"swarms/structs/custom_swarm/#conversation-management-integration","title":"Conversation Management Integration","text":"The swarm uses the Swarms framework's Conversation structure for comprehensive message storage and management. This provides:
"},{"location":"swarms/structs/custom_swarm/#key-features","title":"Key Features","text":"Persistent Storage: Multiple backend options (SQLite, Redis, Supabase, etc.)
Message Categorization: Organize messages by type (input, output, error, etc.)
Token Tracking: Monitor token usage across conversations
Export/Import: Save and load conversation histories
Search Capabilities: Find specific messages or content
conversation_config = {\n # Backend storage options\n \"backend\": \"sqlite\", # or \"redis\", \"supabase\", \"duckdb\", \"in-memory\"\n\n # File-based storage\n \"db_path\": \"conversations/swarm_data.db\",\n\n # Redis configuration (if using Redis backend)\n \"redis_host\": \"localhost\",\n \"redis_port\": 6379,\n\n # Features\n \"time_enabled\": True, # Add timestamps to messages\n \"token_count\": True, # Track token usage\n \"autosave\": True, # Automatically save conversations\n \"save_enabled\": True, # Enable saving functionality\n}\n
"},{"location":"swarms/structs/custom_swarm/#accessing-conversation-data","title":"Accessing Conversation Data","text":"# Get conversation history\nhistory = swarm.conversation.return_history_as_string()\n\n# Search for specific content\nfinancial_messages = swarm.conversation.search(\"financial\")\n\n# Export conversation data\nswarm.conversation.export_conversation(\"analysis_session.json\")\n\n# Get conversation statistics\nstats = swarm.conversation.count_messages_by_role()\ntoken_usage = swarm.conversation.export_and_count_categories()\n
For complete documentation on conversation management, see the Conversation Structure Documentation.
"},{"location":"swarms/structs/custom_swarm/#conclusion","title":"Conclusion","text":"Building custom swarms with proper conversation management enables you to create powerful, scalable, and maintainable multi-agent systems. The integration with the Swarms framework's conversation structure provides:
Complete audit trail of all agent interactions
Persistent storage options for different deployment scenarios
Performance monitoring through token and message tracking
Easy debugging with searchable conversation history
Scalable architecture that grows with your needs
By following the patterns and best practices outlined in this guide, you can create robust swarms that handle complex tasks efficiently while maintaining full visibility into their operations.
"},{"location":"swarms/structs/custom_swarm/#key-takeaways","title":"Key Takeaways","text":"For more advanced patterns and examples, explore the Swarms Examples and consider contributing your custom swarms back to the community by submitting a pull request to the Swarms repository.
"},{"location":"swarms/structs/custom_swarm/#additional-resources","title":"Additional Resources","text":"Conversation Structure Documentation - Complete guide to conversation management
Agent Documentation - Learn about creating and configuring agents
Multi-Agent Architectures - Explore other swarm patterns and architectures
Examples Repository - Real-world swarm implementations
Swarms Framework GitHub - Source code and contributions
Overview
The Deep Research Swarm is a powerful, production-grade research system that conducts comprehensive analysis across multiple domains using parallel processing and advanced AI agents.
Key Features:
Parallel search processing
Multi-agent research coordination
Advanced information synthesis
Automated query generation
Concurrent task execution
Quick Installation
pip install swarms\n
Basic Usage: from swarms.structs import DeepResearchSwarm\n\n# Initialize the swarm\nswarm = DeepResearchSwarm(\n name=\"MyResearchSwarm\",\n output_type=\"json\",\n max_loops=1\n)\n\n# Run a single research task\nresults = swarm.run(\"What are the latest developments in quantum computing?\")\n
Batch Processing: # Run multiple research tasks in parallel\ntasks = [\n \"What are the environmental impacts of electric vehicles?\",\n \"How is AI being used in drug discovery?\",\n]\nbatch_results = swarm.batched_run(tasks)\n
"},{"location":"swarms/structs/deep_research_swarm/#configuration","title":"Configuration","text":"Constructor Arguments
| Parameter | Type | Default | Description |
|---|---|---|---|
| name | str | \"DeepResearchSwarm\" | Name identifier for the swarm |
| description | str | \"A swarm that conducts...\" | Description of the swarm's purpose |
| research_agent | Agent | research_agent | Custom research agent instance |
| max_loops | int | 1 | Maximum number of research iterations |
| nice_print | bool | True | Enable formatted console output |
| output_type | str | \"json\" | Output format (\"json\" or \"string\") |
| max_workers | int | CPU_COUNT * 2 | Maximum concurrent threads |
| token_count | bool | False | Enable token counting |
| research_model_name | str | \"gpt-4o-mini\" | Model to use for research |"},{"location":"swarms/structs/deep_research_swarm/#core-methods","title":"Core Methods","text":""},{"location":"swarms/structs/deep_research_swarm/#run","title":"Run","text":"Single Task Execution
results = swarm.run(\"What are the latest breakthroughs in fusion energy?\")\n
"},{"location":"swarms/structs/deep_research_swarm/#batched-run","title":"Batched Run","text":"Parallel Task Execution
tasks = [\n \"What are current AI safety initiatives?\",\n \"How is CRISPR being used in agriculture?\",\n]\nresults = swarm.batched_run(tasks)\n
"},{"location":"swarms/structs/deep_research_swarm/#step","title":"Step","text":"Single Step Execution
results = swarm.step(\"Analyze recent developments in renewable energy storage\")\n
"},{"location":"swarms/structs/deep_research_swarm/#domain-specific-examples","title":"Domain-Specific Examples","text":"Scientific Research: science_swarm = DeepResearchSwarm(\n name=\"ScienceSwarm\",\n output_type=\"json\",\n max_loops=2 # More iterations for thorough research\n)\n\nresults = science_swarm.run(\n \"What are the latest experimental results in quantum entanglement?\"\n)\n
Market Research: market_swarm = DeepResearchSwarm(\n name=\"MarketSwarm\",\n output_type=\"json\"\n)\n\nresults = market_swarm.run(\n \"What are the emerging trends in electric vehicle battery technology market?\"\n)\n
News Analysis: news_swarm = DeepResearchSwarm(\n name=\"NewsSwarm\",\n output_type=\"string\" # Human-readable output\n)\n\nresults = news_swarm.run(\n \"What are the global economic impacts of recent geopolitical events?\"\n)\n
Medical Research: medical_swarm = DeepResearchSwarm(\n name=\"MedicalSwarm\",\n max_loops=2\n)\n\nresults = medical_swarm.run(\n \"What are the latest clinical trials for Alzheimer's treatment?\"\n)\n
"},{"location":"swarms/structs/deep_research_swarm/#advanced-features","title":"Advanced Features","text":"Custom Research Agent from swarms import Agent\n\ncustom_agent = Agent(\n agent_name=\"SpecializedResearcher\",\n system_prompt=\"Your specialized prompt here\",\n model_name=\"gpt-4\"\n)\n\nswarm = DeepResearchSwarm(\n research_agent=custom_agent,\n max_loops=2\n)\n
Parallel Processing Control swarm = DeepResearchSwarm(\n max_workers=8, # Limit to 8 concurrent threads\n nice_print=False # Disable console output for production\n)\n
"},{"location":"swarms/structs/deep_research_swarm/#best-practices","title":"Best Practices","text":"Recommended Practices
Tune max_workers based on your system's capabilities
Choose the output_type that fits your use case
Known Limitations
Requires valid API keys for external services
Performance depends on system resources
Rate limits may apply to external API calls
Token limits apply to model responses
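Rate limits on external API calls are usually handled with retries and exponential backoff. A framework-independent sketch (the helper name, delay values, and the bare `Exception` catch are illustrative assumptions, not part of the DeepResearchSwarm API):

```python
import time

def call_with_backoff(fn, *args, retries=3, base_delay=1.0, **kwargs):
    """Retry `fn` on failure, doubling the wait between attempts."""
    for attempt in range(retries + 1):
        try:
            return fn(*args, **kwargs)
        except Exception:
            if attempt == retries:
                raise  # retries exhausted: surface the original error
            time.sleep(base_delay * (2 ** attempt))
```

In practice you would narrow the `except` clause to the rate-limit error your API client actually raises.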
Agent class","text":"The Agent class is a powerful and flexible tool that empowers AI agents to build their own custom agents, tailored to their specific needs.
This comprehensive guide will explore the process of inheriting from the Agent class, enabling agents to create their own custom agent classes. By leveraging the rich features and extensibility of the Agent class, agents can imbue their offspring agents with unique capabilities, specialized toolsets, and tailored decision-making processes.
"},{"location":"swarms/structs/diy_your_own_agent/#understanding-the-agent-class","title":"Understanding the Agent Class","text":"Before we dive into the intricacies of creating custom agent classes, let's revisit the foundational elements of the Agent class itself. The Agent class is a versatile and feature-rich class designed to streamline the process of building and managing AI agents. It acts as a backbone, connecting language models (LLMs) with various tools, long-term memory, and a wide range of customization options.
"},{"location":"swarms/structs/diy_your_own_agent/#key-features-of-the-agent-class","title":"Key Features of the Agent Class","text":"The Agent class offers a plethora of features that can be inherited and extended by custom agent classes. Here are some of the key features that make the Agent class a powerful foundation:
1. Language Model Integration: The Agent class supports seamless integration with popular language models such as LangChain, HuggingFace Transformers, and Autogen, allowing custom agent classes to leverage the power of state-of-the-art language models.
2. Tool Integration: One of the standout features of the Agent class is its ability to integrate with various tools. Custom agent classes can inherit this capability and incorporate specialized tools tailored to their specific use cases.
3. Long-Term Memory: The Agent class provides built-in support for long-term memory, enabling custom agent classes to retain and access information from previous interactions, essential for maintaining context and learning from past experiences.
4. Customizable Prompts and Standard Operating Procedures (SOPs): The Agent class allows you to define custom prompts and Standard Operating Procedures (SOPs) that guide an agent's behavior and decision-making process. Custom agent classes can inherit and extend these prompts and SOPs to align with their unique objectives and requirements.
5. Interactive and Dashboard Modes: The Agent class supports interactive and dashboard modes, enabling real-time monitoring and interaction with agents. Custom agent classes can inherit these modes, facilitating efficient development, debugging, and user interaction.
6. Autosave and State Management: With the Agent class, agents can easily save and load their state, including configuration, memory, and history. Custom agent classes can inherit this capability, ensuring seamless task continuation and enabling efficient collaboration among team members.
7. Response Filtering: The Agent class provides built-in response filtering capabilities, allowing agents to filter out or replace specific words or phrases in their responses. Custom agent classes can inherit and extend this feature to ensure compliance with content moderation policies or specific guidelines.
8. Code Execution and Multimodal Support: The Agent class supports code execution and multimodal input/output, enabling agents to process and generate code, as well as handle various data formats such as images, audio, and video. Custom agent classes can inherit and specialize these capabilities for their unique use cases.
9. Extensibility and Customization: The Agent class is designed to be highly extensible and customizable, allowing developers to tailor its behavior, add custom functionality, and integrate with external libraries and APIs. Custom agent classes can leverage this extensibility to introduce specialized features and capabilities.
"},{"location":"swarms/structs/diy_your_own_agent/#creating-a-custom-agent-class","title":"Creating a Custom Agent Class","text":"Now that we have a solid understanding of the Agent class and its features, let's dive into the process of creating a custom agent class by inheriting from the Agent class. Throughout this process, we'll explore how agents can leverage and extend the existing functionality, while introducing specialized features and capabilities tailored to their unique requirements.
"},{"location":"swarms/structs/diy_your_own_agent/#step-1-inherit-from-the-agent-class","title":"Step 1: Inherit from the Agent Class","text":"The first step in creating a custom agent class is to inherit from the Agent class. This will provide your custom agent class with the foundational features and capabilities of the Agent class, which can then be extended and customized as needed. The new agent class must have a run(task: str)
method to run the entire agent. It is encouraged to also define a step(task: str)
method that completes a single step of the agent, and to build the run(task: str)
method on top of it.
from swarms import Agent\n\nclass MyCustomAgent(Agent):\n\n\u00a0 \u00a0 def __init__(self, *args, **kwargs):\n\n\u00a0 \u00a0 \u00a0 \u00a0 super().__init__(*args, **kwargs)\n\n\u00a0 \u00a0 \u00a0 \u00a0 # Add custom initialization logic here\n\n\u00a0 \u00a0 def run(self, task: str) -> str:\n\n\u00a0 \u00a0 \u00a0 \u00a0 ...\n
In the example above, we define a new class MyCustomAgent
that inherits from the Agent
class. Within the __init__
method, we call the parent class's __init__
method using super().__init__(*args, **kwargs)
, which ensures that the parent class's initialization logic is executed. You can then add any custom initialization logic specific to your custom agent class.
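Before wiring a subclass into the framework, the encouraged step/run composition can be sketched on its own. Everything below (the StepwiseAgent name, the fixed step budget, the fabricated step result) is illustrative, not part of the Swarms API; a real subclass would inherit from swarms.Agent and call its LLM inside step.

```python
# Illustrative sketch of the step/run pattern; names here are hypothetical.
class StepwiseAgent:
    def __init__(self, max_steps: int = 3):
        self.max_steps = max_steps
        self.history: list[str] = []

    def step(self, task: str) -> str:
        # One unit of work; a real agent would call its LLM here.
        result = f"step-{len(self.history) + 1} on: {task}"
        self.history.append(result)
        return result

    def run(self, task: str) -> str:
        # run() composes step() until a stop condition (a step budget here).
        output = ""
        for _ in range(self.max_steps):
            output = self.step(task)
        return output


agent = StepwiseAgent(max_steps=2)
print(agent.run("summarize the quarterly report"))
```

Splitting the loop body into step keeps the per-iteration logic testable in isolation, while run owns only the stop condition.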
One of the key advantages of inheriting from the Agent class is the ability to customize the agent's behavior according to your specific requirements. This can be achieved by overriding or extending the existing methods, or by introducing new methods altogether.
from swarms import Agent\n\n\nclass MyCustomAgent(Agent):\n\n\u00a0 \u00a0 def __init__(self, *args, **kwargs):\n\n\u00a0 \u00a0 \u00a0 \u00a0 super().__init__(*args, **kwargs)\n\n\u00a0 \u00a0 \u00a0 \u00a0 # Custom initialization logic\n\n\u00a0 \u00a0 def custom_method(self, *args, **kwargs):\n\n\u00a0 \u00a0 \u00a0 \u00a0 # Implement custom logic here\n\n\u00a0 \u00a0 \u00a0 \u00a0 pass\n\n\u00a0 \u00a0 def run(self, task, *args, **kwargs):\n\n\u00a0 \u00a0 \u00a0 \u00a0 # Customize the run method\n\n\u00a0 \u00a0 \u00a0 \u00a0 response = super().run(task, *args, **kwargs)\n\n\u00a0 \u00a0 \u00a0 \u00a0 # Additional custom logic\n\n\u00a0 \u00a0 \u00a0 \u00a0 return response\n
In the example above, we introduce a new custom_method
that can encapsulate any specialized logic or functionality specific to your custom agent class. Additionally, we override the run
method, which is responsible for executing the agent's main task loop. Within the overridden run
method, you can call the parent class's run
method using super().run(task, *args, **kwargs)
and then introduce any additional custom logic before or after the parent method's execution.
The Agent class provides built-in support for long-term memory, allowing agents to retain and access information from previous interactions. Custom agent classes can inherit and extend this capability by introducing specialized memory management techniques.
from swarms_memory import BaseVectorDatabase\nfrom swarms import Agent\n\n\nclass CustomMemory(BaseVectorDatabase):\n\n\u00a0 \u00a0 def __init__(self, *args, **kwargs):\n\n\u00a0 \u00a0 \u00a0 \u00a0 super().__init__(*args, **kwargs)\n\n\u00a0 \u00a0 \u00a0 \u00a0 # Custom memory initialization logic\n\n\u00a0 \u00a0 def query(self, *args, **kwargs):\n\n\u00a0 \u00a0 \u00a0 \u00a0 # Custom memory query logic; populate result from your store\n\n\u00a0 \u00a0 \u00a0 \u00a0 result = None\n\n\u00a0 \u00a0 \u00a0 \u00a0 return result\n\nclass MyCustomAgent(Agent):\n\n\u00a0 \u00a0 def __init__(self, *args, **kwargs):\n\n\u00a0 \u00a0 \u00a0 \u00a0 super().__init__(*args, **kwargs)\n\n\u00a0 \u00a0 \u00a0 \u00a0 # Custom initialization logic\n\n\u00a0 \u00a0 \u00a0 \u00a0 self.long_term_memory = CustomMemory()\n\n\u00a0 \u00a0 def run(self, task, *args, **kwargs):\n\n\u00a0 \u00a0 \u00a0 \u00a0 # Customize the run method\n\n\u00a0 \u00a0 \u00a0 \u00a0 response = super().run(task, *args, **kwargs)\n\n\u00a0 \u00a0 \u00a0 \u00a0 # Utilize custom memory\n\n\u00a0 \u00a0 \u00a0 \u00a0 memory_result = self.long_term_memory.query(*args, **kwargs)\n\n\u00a0 \u00a0 \u00a0 \u00a0 # Process memory result\n\n\u00a0 \u00a0 \u00a0 \u00a0 return response\n
In the example above, we define a new CustomMemory
class that inherits from the BaseVectorDatabase
class provided by the swarms_memory package. Within the CustomMemory
class, you can implement specialized memory management logic, such as custom indexing, retrieval, and storage mechanisms.
Next, within the MyCustomAgent
class, we initialize an instance of the CustomMemory
class and assign it to the self.long_term_memory
attribute. This custom memory instance can then be utilized within the overridden run
method, where you can query the memory and process the results as needed.
The Agent class allows you to define custom prompts and Standard Operating Procedures (SOPs) that guide an agent's behavior and decision-making process. Custom agent classes can inherit and extend these prompts and SOPs to align with their unique objectives and requirements.
from swarms import Agent\n\n\nclass MyCustomAgent(Agent):\n\n\u00a0 \u00a0 def __init__(self, *args, **kwargs):\n\n\u00a0 \u00a0 \u00a0 \u00a0 super().__init__(*args, **kwargs)\n\n\u00a0 \u00a0 \u00a0 \u00a0 # Custom initialization logic\n\n\u00a0 \u00a0 \u00a0 \u00a0 self.custom_sop = \"Custom SOP for MyCustomAgent...\"\n\n\u00a0 \u00a0 \u00a0 \u00a0 self.custom_prompt = \"Custom prompt for MyCustomAgent...\"\n\n\u00a0 \u00a0 def run(self, task, *args, **kwargs):\n\n\u00a0 \u00a0 \u00a0 \u00a0 # Customize the run method\n\n\u00a0 \u00a0 \u00a0 \u00a0 response = super().run(task, *args, **kwargs)\n\n\u00a0 \u00a0 \u00a0 \u00a0 # Utilize custom prompts and SOPs\n\n\u00a0 \u00a0 \u00a0 \u00a0 custom_prompt = self.construct_dynamic_prompt(self.custom_prompt)\n\n\u00a0 \u00a0 \u00a0 \u00a0 custom_sop = self.construct_dynamic_sop(self.custom_sop)\n\n\u00a0 \u00a0 \u00a0 \u00a0 # Process custom prompts and SOPs\n\n\u00a0 \u00a0 \u00a0 \u00a0 return response\n\n\u00a0 \u00a0 def construct_dynamic_prompt(self, prompt):\n\n\u00a0 \u00a0 \u00a0 \u00a0 # Custom prompt construction logic\n\n\u00a0 \u00a0 \u00a0 \u00a0 return prompt\n\n\u00a0 \u00a0 def construct_dynamic_sop(self, sop):\n\n\u00a0 \u00a0 \u00a0 \u00a0 # Custom SOP construction logic\n\n\u00a0 \u00a0 \u00a0 \u00a0 return sop\n
In the example above, we define two new attributes within the MyCustomAgent
class: custom_sop
and custom_prompt
. These attributes can be used to store custom prompts and SOPs specific to your custom agent class.
Within the overridden run
method, you can utilize these custom prompts and SOPs by calling the construct_dynamic_prompt
and construct_dynamic_sop
methods, which can be defined within the MyCustomAgent
class to implement specialized prompt and SOP construction logic.
The Agent class provides built-in response filtering capabilities, allowing agents to filter out or replace specific words or phrases in their responses. Custom agent classes can inherit and extend this feature to ensure compliance with content moderation policies or specific guidelines.
from swarms import Agent\n\n\nclass MyCustomAgent(Agent):\n\n\u00a0 \u00a0 def __init__(self, *args, **kwargs):\n\n\u00a0 \u00a0 \u00a0 \u00a0 super().__init__(*args, **kwargs)\n\n\u00a0 \u00a0 \u00a0 \u00a0 # Custom initialization logic\n\n\u00a0 \u00a0 \u00a0 \u00a0 self.response_filters = [\"filter_word_1\", \"filter_word_2\"]\n\n\u00a0 \u00a0 def run(self, task, *args, **kwargs):\n\n\u00a0 \u00a0 \u00a0 \u00a0 # Customize the run method\n\n\u00a0 \u00a0 \u00a0 \u00a0 response = super().run(task, *args, **kwargs)\n\n\u00a0 \u00a0 \u00a0 \u00a0 # Apply custom response filtering\n\n\u00a0 \u00a0 \u00a0 \u00a0 filtered_response = self.apply_response_filters(response)\n\n\u00a0 \u00a0 \u00a0 \u00a0 return filtered_response\n\n\u00a0 \u00a0 def apply_response_filters(self, response):\n\n\u00a0 \u00a0 \u00a0 \u00a0 # Custom response filtering logic\n\n\u00a0 \u00a0 \u00a0 \u00a0 for word in self.response_filters:\n\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 response = response.replace(word, \"[FILTERED]\")\n\n\u00a0 \u00a0 \u00a0 \u00a0 return response\n
In the example above, we define a new attribute response_filters
within the MyCustomAgent
class, which is a list of words or phrases that should be filtered out or replaced in the agent's responses.
Within the overridden run
method, we call the apply_response_filters
method, which can be defined within the MyCustomAgent
class to implement specialized response filtering logic. In the example, we iterate over the response_filters
list and replace each filtered word or phrase with a placeholder string (\"[FILTERED]\"
).
The Agent class and its inherited custom agent classes can be further extended and customized to suit specific requirements and integrate with external libraries, APIs, and services. Here are some advanced customization and integration examples:
1. Multimodal Input/Output Integration: Custom agent classes can leverage the multimodal input/output capabilities of the Agent class and introduce specialized handling for various data formats such as images, audio, and video.
2. Code Execution and Integration: The Agent class supports code execution, enabling agents to run and evaluate code snippets. Custom agent classes can inherit and extend this capability, introducing specialized code execution environments, sandboxing mechanisms, or integration with external code repositories or platforms.
3. External API and Service Integration: Custom agent classes can integrate with external APIs and services, enabling agents to leverage specialized data sources, computational resources, or domain-specific services.
4. Performance Optimization: Depending on the use case and requirements, custom agent classes can introduce performance optimizations, such as adjusting loop intervals, retry attempts, or enabling parallel execution for certain tasks.
5. Logging and Monitoring: Custom agent classes can introduce specialized logging and monitoring mechanisms, enabling agents to track their performance, identify potential issues, and generate detailed reports or dashboards.
6. Security and Privacy Enhancements: Custom agent classes can implement security and privacy enhancements, such as data encryption, access control mechanisms, or compliance with industry-specific regulations and standards.
7. Distributed Execution and Scaling: Custom agent classes can be designed to support distributed execution and scaling, enabling agents to leverage cloud computing resources or distributed computing frameworks for handling large-scale tasks or high-concurrency workloads.
By leveraging these advanced customization and integration capabilities, developers can create highly specialized and sophisticated custom agent classes tailored to their unique requirements and use cases.
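As a concrete illustration of points 4 and 5 above (performance tuning and logging), here is a minimal retry-with-logging sketch. ResilientAgent and _call_model are hypothetical stand-ins, not Swarms APIs; in practice the Agent class's own retry_attempts parameter covers the simple case.

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("resilient_agent")


class ResilientAgent:
    """Hypothetical sketch: retries a flaky model call and logs each attempt."""

    def __init__(self, retry_attempts: int = 3, backoff_seconds: float = 0.0):
        self.retry_attempts = retry_attempts
        self.backoff_seconds = backoff_seconds
        self.failures = 0

    def _call_model(self, task: str) -> str:
        # Placeholder for the real LLM call, which may raise on transient errors.
        return f"answer: {task}"

    def run(self, task: str) -> str:
        last_error = None
        for attempt in range(1, self.retry_attempts + 1):
            try:
                logger.info("attempt %d for task %r", attempt, task)
                return self._call_model(task)
            except Exception as exc:  # sketch only; narrow this in real code
                last_error = exc
                self.failures += 1
                time.sleep(self.backoff_seconds)
        raise RuntimeError(
            f"all {self.retry_attempts} attempts failed"
        ) from last_error
```

The same wrapper shape works for any of the integrations listed above: wrap the external call, log each attempt, and surface the last error only after the retry budget is exhausted.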
"},{"location":"swarms/structs/diy_your_own_agent/#best-practices-and-considerations","title":"Best Practices and Considerations","text":"While building custom agent classes by inheriting from the Agent class offers immense flexibility and power, it's essential to follow best practices and consider potential challenges and considerations:
1. Maintainability and Documentation: As custom agent classes become more complex, it's crucial to prioritize maintainability and thorough documentation. Clear and concise code, comprehensive comments, and up-to-date documentation can significantly improve the long-term sustainability and collaboration efforts surrounding custom agent classes.
2. Testing and Validation: Custom agent classes should undergo rigorous testing and validation to ensure their correctness, reliability, and adherence to expected behaviors. Establish a robust testing framework and continuously validate the agent's performance, particularly after introducing new features or integrations.
3. Security and Privacy Considerations: When building custom agent classes, it's essential to consider security and privacy implications, especially if the agents will handle sensitive data or interact with critical systems. Implement appropriate security measures, such as access controls, data encryption, and secure communication protocols, to protect against potential vulnerabilities and ensure compliance with relevant regulations and standards.
4. Scalability and Performance Monitoring: As custom agent classes are deployed and adopted, it's important to monitor their scalability and performance characteristics. Identify potential bottlenecks, resource constraints, or performance degradation, and implement appropriate optimization strategies or scaling mechanisms to ensure efficient and reliable operation.
5. Collaboration and Knowledge Sharing: Building custom agent classes often involves collaboration among teams and stakeholders. Foster an environment of knowledge sharing, code reviews, and open communication to ensure that everyone involved understands the agent's capabilities, limitations, and intended use cases.
6. Ethical Considerations: As AI agents become more advanced and autonomous, it's crucial to consider the ethical implications of their actions and decisions. Implement appropriate safeguards, oversight mechanisms, and ethical guidelines to ensure that custom agent classes operate in a responsible and transparent manner, aligning with ethical principles and societal values.
7. Continuous Learning and Adaptation: The field of AI is rapidly evolving, with new techniques, tools, and best practices emerging regularly. Stay up-to-date with the latest developments and be prepared to adapt and refine your custom agent classes as new advancements become available.
By following these best practices and considering potential challenges, developers can create robust, reliable, and ethical custom agent classes that meet their specific requirements while adhering to industry standards.
"},{"location":"swarms/structs/diy_your_own_agent/#conclusion","title":"Conclusion","text":"In this comprehensive guide, we have explored the process of creating custom agent classes by inheriting from the powerful Agent class. We have covered the key features of the Agent class, walked through the step-by-step process of inheriting and extending its functionality, and discussed advanced customization and integration techniques.
Building custom agent classes empowers developers to create tailored and specialized agents capable of tackling unique challenges and addressing specific domain requirements. By leveraging the rich features and extensibility of the Agent class, developers can imbue their agents with unique capabilities, specialized toolsets, and tailored decision-making processes.
Remember, the journey of building custom agent classes is an iterative and collaborative process that requires continuous learning, adaptation, and refinement.
"},{"location":"swarms/structs/forest_swarm/","title":"Forest Swarm","text":"This documentation describes ForestSwarm, a structure that organizes agents into trees. Each agent specializes in processing specific tasks. Trees are collections of agents, each assigned to a task based on its relevance, determined through keyword extraction and embedding-based similarity.
The architecture allows for efficient task assignment by selecting the most relevant agent from a set of trees. Tasks are processed asynchronously, with agents selected based on task relevance, calculated by the similarity of system prompts and task keywords.
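The relevance calculation can be pictured with a toy cosine-similarity sketch. The three-dimensional vectors below are made up for illustration; ForestSwarm derives real embeddings from an embedding model applied to system prompts and task text.

```python
import math


def cosine_similarity(a, b):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


# Toy "embeddings"; a real system would use an embedding model.
agent_embeddings = {
    "Tax Filing Agent": [0.9, 0.1, 0.0],
    "Stock Analysis Agent": [0.1, 0.9, 0.0],
}
task_embedding = [0.8, 0.2, 0.0]  # embedding of a tax-related task

best_agent = max(
    agent_embeddings,
    key=lambda name: cosine_similarity(agent_embeddings[name], task_embedding),
)
print(best_agent)  # the tax-oriented agent wins on similarity
```

The agent whose prompt embedding lies closest to the task embedding is selected, which is the intuition behind the `is_relevant_for_task` threshold described below.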
"},{"location":"swarms/structs/forest_swarm/#module-path-swarmsstructstree_swarm","title":"Module Path:swarms.structs.tree_swarm
","text":""},{"location":"swarms/structs/forest_swarm/#class-treeagent","title":"Class: TreeAgent
","text":"TreeAgent
represents an individual agent responsible for handling a specific task. Agents are initialized with a system prompt and are responsible for dynamically determining their relevance to a given task.
system_prompt
str
A string that defines the agent's area of expertise and task-handling capability. llm
callable
The language model (LLM) used to process tasks (e.g., GPT-4). agent_name
str
The name of the agent. system_prompt_embedding
tensor
Embedding of the system prompt for similarity-based task matching. relevant_keywords
List[str]
Keywords dynamically extracted from the system prompt to assist in task matching. distance
Optional[float]
The computed distance between agents based on embedding similarity."},{"location":"swarms/structs/forest_swarm/#methods","title":"Methods","text":"Method Input Output Description calculate_distance(other_agent: TreeAgent)
other_agent: TreeAgent
float
Calculates the cosine similarity between this agent and another agent. run_task(task: str)
task: str
Any
Executes the task, logs the input/output, and returns the result. is_relevant_for_task(task: str, threshold: float = 0.7)
task: str, threshold: float
bool
Checks if the agent is relevant for the task using keyword matching or embedding similarity."},{"location":"swarms/structs/forest_swarm/#class-tree","title":"Class: Tree
","text":"Tree
organizes multiple agents into a hierarchical structure, where agents are sorted based on their relevance to tasks.
tree_name
str
The name of the tree (represents a domain of agents, e.g., \"Financial Tree\"). agents
List[TreeAgent]
List of agents belonging to this tree."},{"location":"swarms/structs/forest_swarm/#methods_1","title":"Methods","text":"Method Input Output Description calculate_agent_distances()
None
None
Calculates and assigns distances between agents based on similarity of prompts. find_relevant_agent(task: str)
task: str
Optional[TreeAgent]
Finds the most relevant agent for a task based on keyword and embedding similarity. log_tree_execution(task: str, selected_agent: TreeAgent, result: Any)
task: str, selected_agent: TreeAgent, result: Any
None
Logs details of the task execution by the selected agent."},{"location":"swarms/structs/forest_swarm/#class-forestswarm","title":"Class: ForestSwarm
","text":"ForestSwarm
is the main class responsible for managing multiple trees. It oversees task delegation by finding the most relevant tree and agent for a given task.
trees
List[Tree]
List of trees containing agents organized by domain."},{"location":"swarms/structs/forest_swarm/#methods_2","title":"Methods","text":"Method Input Output Description find_relevant_tree(task: str)
task: str
Optional[Tree]
Searches across all trees to find the most relevant tree based on task requirements. run(task: str)
task: str
Any
Executes the task by finding the most relevant agent from the relevant tree."},{"location":"swarms/structs/forest_swarm/#full-code-example","title":"Full Code Example","text":"from swarms.structs.tree_swarm import TreeAgent, Tree, ForestSwarm\n# Example Usage:\n\n# Create agents with varying system prompts and dynamically generated distances/keywords\nagents_tree1 = [\n    TreeAgent(\n        system_prompt=\"Stock Analysis Agent\",\n        agent_name=\"Stock Analysis Agent\",\n    ),\n    TreeAgent(\n        system_prompt=\"Financial Planning Agent\",\n        agent_name=\"Financial Planning Agent\",\n    ),\n    TreeAgent(\n        agent_name=\"Retirement Strategy Agent\",\n        system_prompt=\"Retirement Strategy Agent\",\n    ),\n]\n\nagents_tree2 = [\n    TreeAgent(\n        system_prompt=\"Tax Filing Agent\",\n        agent_name=\"Tax Filing Agent\",\n    ),\n    TreeAgent(\n        system_prompt=\"Investment Strategy Agent\",\n        agent_name=\"Investment Strategy Agent\",\n    ),\n    TreeAgent(\n        system_prompt=\"ROTH IRA Agent\", agent_name=\"ROTH IRA Agent\"\n    ),\n]\n\n# Create trees\ntree1 = Tree(tree_name=\"Financial Tree\", agents=agents_tree1)\ntree2 = Tree(tree_name=\"Investment Tree\", agents=agents_tree2)\n\n# Create the ForestSwarm\nmulti_agent_structure = ForestSwarm(trees=[tree1, tree2])\n\n# Run a task\ntask = \"Our company is incorporated in Delaware, how do we do our taxes for free?\"\noutput = multi_agent_structure.run(task)\nprint(output)\n
"},{"location":"swarms/structs/forest_swarm/#example-workflow","title":"Example Workflow","text":"Task: \"Our company is incorporated in Delaware, how do we do our taxes for free?\"\n
Process: - The system searches through the Financial Tree
and Investment Tree
. - The most relevant agent (likely the \"Tax Filing Agent\") is selected based on keyword matching and prompt similarity. - The task is processed, and the result is logged and returned.
The ForestSwarm architecture leverages a hierarchical structure (forest) composed of individual trees, each containing agents specialized in specific domains. This design allows for efficient, domain-aware task routing:
graph TD\n A[ForestSwarm] --> B[Financial Tree]\n A --> C[Investment Tree]\n\n B --> D[Stock Analysis Agent]\n B --> E[Financial Planning Agent]\n B --> F[Retirement Strategy Agent]\n\n C --> G[Tax Filing Agent]\n C --> H[Investment Strategy Agent]\n C --> I[ROTH IRA Agent]\n\n subgraph Tree Agents\n D[Stock Analysis Agent]\n E[Financial Planning Agent]\n F[Retirement Strategy Agent]\n G[Tax Filing Agent]\n H[Investment Strategy Agent]\n I[ROTH IRA Agent]\n end
"},{"location":"swarms/structs/forest_swarm/#explanation-of-the-diagram","title":"Explanation of the Diagram","text":"This Multi-Agent Tree Structure provides an efficient, scalable, and accurate architecture for delegating and executing tasks based on domain-specific expertise. The combination of hierarchical organization, dynamic task matching, and logging ensures reliability, performance, and transparency in task execution.
"},{"location":"swarms/structs/graph_workflow/","title":"GraphWorkflow Documentation","text":"The GraphWorkflow
class is a pivotal part of the workflow management system, representing a directed graph where nodes signify tasks or agents and edges represent the flow or dependencies between these nodes. This class leverages the NetworkX library to manage and manipulate the directed graph, allowing users to create complex workflows with defined entry and end points.
nodes
Dict[str, Node]
A dictionary of nodes in the graph, where the key is the node ID and the value is the Node object. Field(default_factory=dict)
edges
List[Edge]
A list of edges in the graph, where each edge is represented by an Edge object. Field(default_factory=list)
entry_points
List[str]
A list of node IDs that serve as entry points to the graph. Field(default_factory=list)
end_points
List[str]
A list of node IDs that serve as end points of the graph. Field(default_factory=list)
graph
nx.DiGraph
A directed graph object from the NetworkX library representing the workflow graph. Field(default_factory=nx.DiGraph)
max_loops
int
Maximum number of times the workflow can loop during execution. 1
"},{"location":"swarms/structs/graph_workflow/#methods","title":"Methods","text":""},{"location":"swarms/structs/graph_workflow/#add_nodenode-node","title":"add_node(node: Node)
","text":"Adds a node to the workflow graph.
Parameter Type Descriptionnode
Node
The node object to be added. Raises: - ValueError
: If a node with the same ID already exists in the graph.
add_edge(edge: Edge)
","text":"Adds an edge to the workflow graph.
Parameter Type Descriptionedge
Edge
The edge object to be added. Raises: - ValueError
: If either the source or target node of the edge does not exist in the graph.
set_entry_points(entry_points: List[str])
","text":"Sets the entry points of the workflow graph.
Parameter Type Descriptionentry_points
List[str]
A list of node IDs to be set as entry points. Raises: - ValueError
: If any of the specified node IDs do not exist in the graph.
set_end_points(end_points: List[str])
","text":"Sets the end points of the workflow graph.
Parameter Type Descriptionend_points
List[str]
A list of node IDs to be set as end points. Raises: - ValueError
: If any of the specified node IDs do not exist in the graph.
visualize() -> str
","text":"Generates a string representation of the workflow graph in the Mermaid syntax.
Returns: - str
: The Mermaid string representation of the workflow graph.
run(task: str = None, *args, **kwargs) -> Dict[str, Any]
","text":"Function to run the workflow graph.
Parameter Type Descriptiontask
str
The task to be executed by the workflow. *args
Variable length argument list. **kwargs
Arbitrary keyword arguments. Returns: - Dict[str, Any]
: A dictionary containing the results of the execution.
Raises: - ValueError
: If no entry points or end points are defined in the graph.
The add_node
method is used to add nodes to the graph. Each node must have a unique ID. If a node with the same ID already exists, a ValueError
is raised.
wf_graph = GraphWorkflow()\nnode1 = Node(id=\"node1\", type=NodeType.TASK, callable=sample_task)\nwf_graph.add_node(node1)\n
"},{"location":"swarms/structs/graph_workflow/#adding-edges","title":"Adding Edges","text":"The add_edge
method connects nodes with edges. Both the source and target nodes of the edge must already exist in the graph, otherwise a ValueError
is raised.
edge1 = Edge(source=\"node1\", target=\"node2\")\nwf_graph.add_edge(edge1)\n
"},{"location":"swarms/structs/graph_workflow/#setting-entry-and-end-points","title":"Setting Entry and End Points","text":"The set_entry_points
and set_end_points
methods define which nodes are the starting and ending points of the workflow, respectively. If any specified node IDs do not exist, a ValueError
is raised.
wf_graph.set_entry_points([\"node1\"])\nwf_graph.set_end_points([\"node2\"])\n
"},{"location":"swarms/structs/graph_workflow/#visualizing-the-graph","title":"Visualizing the Graph","text":"The visualize
method generates a Mermaid string representation of the workflow graph. This can be useful for visualizing the workflow structure.
print(wf_graph.visualize())\n
"},{"location":"swarms/structs/graph_workflow/#running-the-workflow","title":"Running the Workflow","text":"The run
method executes the workflow. It performs a topological sort of the graph to ensure nodes are executed in the correct order. The results of each node's execution are returned in a dictionary.
results = wf_graph.run()\nprint(\"Execution results:\", results)\n
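To see why a topological sort yields a safe execution order, here is a sketch using the standard library's graphlib as a stand-in for the NetworkX sort that GraphWorkflow performs internally; the node names are borrowed from the agent/task example elsewhere in this guide.

```python
from graphlib import TopologicalSorter

# Map each node to the set of nodes it depends on:
# the task node runs only after both agent nodes have produced output.
dependencies = {
    "task1": {"agent1", "agent2"},
}

# static_order() yields dependencies before dependents, mirroring the
# topological sort run() uses to schedule nodes.
order = list(TopologicalSorter(dependencies).static_order())
print(order)  # task1 always comes last
```

Any ordering that places every node after all of its predecessors is valid, which is why results can be collected node by node in a single pass.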
"},{"location":"swarms/structs/graph_workflow/#example-usage","title":"Example Usage","text":"Below is a comprehensive example demonstrating the creation and execution of a workflow graph:
from swarms import Agent, Edge, GraphWorkflow, Node, NodeType\n\n# Initialize two agents with GPT-4o-mini model and desired parameters\nagent1 = Agent(\n model_name=\"gpt-4o-mini\",\n temperature=0.5,\n max_tokens=4000,\n max_loops=1,\n autosave=True,\n dashboard=True,\n)\nagent2 = Agent(\n model_name=\"gpt-4o-mini\",\n temperature=0.5,\n max_tokens=4000,\n max_loops=1,\n autosave=True,\n dashboard=True,\n)\n\ndef sample_task():\n print(\"Running sample task\")\n return \"Task completed\"\n\n# Build workflow graph\nwf_graph = GraphWorkflow()\nwf_graph.add_node(Node(id=\"agent1\", type=NodeType.AGENT, agent=agent1))\nwf_graph.add_node(Node(id=\"agent2\", type=NodeType.AGENT, agent=agent2))\nwf_graph.add_node(Node(id=\"task1\", type=NodeType.TASK, callable=sample_task))\n\nwf_graph.add_edge(Edge(source=\"agent1\", target=\"task1\"))\nwf_graph.add_edge(Edge(source=\"agent2\", target=\"task1\"))\n\nwf_graph.set_entry_points([\"agent1\", \"agent2\"])\nwf_graph.set_end_points([\"task1\"])\n\n# Visualize and run\nprint(wf_graph.visualize())\nresults = wf_graph.run()\nprint(\"Execution results:\", results)\n
In this example, we set up a workflow graph with two agents and one task. We define the entry and end points, visualize the graph, and then execute the workflow, capturing and printing the results.
"},{"location":"swarms/structs/graph_workflow/#additional-information-and-tips","title":"Additional Information and Tips","text":"GraphWorkflow
class includes error handling to ensure that invalid operations (such as adding duplicate nodes or edges with non-existent nodes) raise appropriate exceptions.max_loops
attribute allows the workflow to loop through the graph multiple times if needed. This can be useful for iterative tasks.A production-grade multi-agent system enabling sophisticated group conversations between AI agents with customizable speaking patterns, parallel processing capabilities, and comprehensive conversation tracking.
"},{"location":"swarms/structs/group_chat/#advanced-configuration","title":"Advanced Configuration","text":""},{"location":"swarms/structs/group_chat/#agent-parameters","title":"Agent Parameters","text":"Parameter Type Default Description agent_name str Required Unique identifier for the agent system_prompt str Required Role and behavior instructions llm Any Required Language model instance max_loops int 1 Maximum conversation turns autosave bool False Enable conversation saving dashboard bool False Enable monitoring dashboard verbose bool True Enable detailed logging dynamic_temperature bool True Enable dynamic temperature retry_attempts int 1 Failed request retry count context_length int 200000 Maximum context window output_type str \"string\" Response format type streaming_on bool False Enable streaming responses"},{"location":"swarms/structs/group_chat/#groupchat-parameters","title":"GroupChat Parameters","text":"Parameter Type Default Description name str \"GroupChat\" Chat group identifier description str \"\" Purpose description agents List[Agent] [] Participating agents speaker_fn Callable round_robin Speaker selection function max_loops int 10 Maximum conversation turns"},{"location":"swarms/structs/group_chat/#table-of-contents","title":"Table of Contents","text":"pip3 install swarms swarm-models loguru\n
"},{"location":"swarms/structs/group_chat/#core-concepts","title":"Core Concepts","text":"The GroupChat system consists of several key components:
import os\nfrom dotenv import load_dotenv\nfrom swarm_models import OpenAIChat\nfrom swarms import Agent, GroupChat, expertise_based\n\n\nif __name__ == \"__main__\":\n\n load_dotenv()\n\n # Get the OpenAI API key from the environment variable\n api_key = os.getenv(\"OPENAI_API_KEY\")\n\n # Create an instance of the OpenAIChat class\n model = OpenAIChat(\n openai_api_key=api_key,\n model_name=\"gpt-4o-mini\",\n temperature=0.1,\n )\n\n # Example agents\n agent1 = Agent(\n agent_name=\"Financial-Analysis-Agent\",\n system_prompt=\"You are a financial analyst specializing in investment strategies.\",\n llm=model,\n max_loops=1,\n autosave=False,\n dashboard=False,\n verbose=True,\n dynamic_temperature_enabled=True,\n user_name=\"swarms_corp\",\n retry_attempts=1,\n context_length=200000,\n output_type=\"string\",\n streaming_on=False,\n )\n\n agent2 = Agent(\n agent_name=\"Tax-Adviser-Agent\",\n system_prompt=\"You are a tax adviser who provides clear and concise guidance on tax-related queries.\",\n llm=model,\n max_loops=1,\n autosave=False,\n dashboard=False,\n verbose=True,\n dynamic_temperature_enabled=True,\n user_name=\"swarms_corp\",\n retry_attempts=1,\n context_length=200000,\n output_type=\"string\",\n streaming_on=False,\n )\n\n agents = [agent1, agent2]\n\n chat = GroupChat(\n name=\"Investment Advisory\",\n description=\"Financial and tax analysis group\",\n agents=agents,\n speaker_fn=expertise_based,\n )\n\n history = chat.run(\n \"How to optimize tax strategy for investments?\"\n )\n print(history.model_dump_json(indent=2))\n
"},{"location":"swarms/structs/group_chat/#speaker-functions","title":"Speaker Functions","text":""},{"location":"swarms/structs/group_chat/#built-in-functions","title":"Built-in Functions","text":"def round_robin(history: List[str], agent: Agent) -> bool:\n \"\"\"\n Enables agents to speak in turns.\n Returns True for each agent in sequence.\n \"\"\"\n return True\n\ndef expertise_based(history: List[str], agent: Agent) -> bool:\n \"\"\"\n Enables agents to speak based on their expertise.\n Returns True if agent's role matches conversation context.\n \"\"\"\n return agent.system_prompt.lower() in history[-1].lower() if history else True\n\ndef random_selection(history: List[str], agent: Agent) -> bool:\n \"\"\"\n Randomly selects speaking agents.\n Returns True/False with 50% probability.\n \"\"\"\n import random\n return random.choice([True, False])\n\ndef most_recent(history: List[str], agent: Agent) -> bool:\n \"\"\"\n Enables agents to respond to their mentions.\n Returns True if agent was last speaker.\n \"\"\"\n return agent.agent_name == history[-1].split(\":\")[0].strip() if history else True\n
"},{"location":"swarms/structs/group_chat/#custom-speaker-function-example","title":"Custom Speaker Function Example","text":"def custom_speaker(history: List[str], agent: Agent) -> bool:\n \"\"\"\n Custom speaker function with complex logic.\n\n Args:\n history: Previous conversation messages\n agent: Current agent being evaluated\n\n Returns:\n bool: Whether agent should speak\n \"\"\"\n # No history - let everyone speak\n if not history:\n return True\n\n last_message = history[-1].lower()\n\n # Check for agent expertise keywords\n expertise_relevant = any(\n keyword in last_message \n for keyword in agent.expertise_keywords\n )\n\n # Check for direct mentions\n mentioned = agent.agent_name.lower() in last_message\n\n # Check if agent hasn't spoken recently\n not_recent_speaker = not any(\n agent.agent_name in msg \n for msg in history[-3:]\n )\n\n return expertise_relevant or mentioned or not_recent_speaker\n\n# Usage\nchat = GroupChat(\n agents=[agent1, agent2],\n speaker_fn=custom_speaker\n)\n
"},{"location":"swarms/structs/group_chat/#response-models","title":"Response Models","text":""},{"location":"swarms/structs/group_chat/#complete-schema","title":"Complete Schema","text":"class AgentResponse(BaseModel):\n \"\"\"Individual agent response in a conversation turn\"\"\"\n agent_name: str\n role: str\n message: str\n timestamp: datetime = Field(default_factory=datetime.now)\n turn_number: int\n preceding_context: List[str] = Field(default_factory=list)\n\nclass ChatTurn(BaseModel):\n \"\"\"Single turn in the conversation\"\"\"\n turn_number: int\n responses: List[AgentResponse]\n task: str\n timestamp: datetime = Field(default_factory=datetime.now)\n\nclass ChatHistory(BaseModel):\n \"\"\"Complete conversation history\"\"\"\n turns: List[ChatTurn]\n total_messages: int\n name: str\n description: str\n start_time: datetime = Field(default_factory=datetime.now)\n
"},{"location":"swarms/structs/group_chat/#advanced-examples","title":"Advanced Examples","text":""},{"location":"swarms/structs/group_chat/#multi-agent-analysis-team","title":"Multi-Agent Analysis Team","text":"# Create specialized agents\ndata_analyst = Agent(\n agent_name=\"Data-Analyst\",\n system_prompt=\"You analyze numerical data and patterns\",\n llm=model\n)\n\nmarket_expert = Agent(\n agent_name=\"Market-Expert\",\n system_prompt=\"You provide market insights and trends\",\n llm=model\n)\n\nstrategy_advisor = Agent(\n agent_name=\"Strategy-Advisor\",\n system_prompt=\"You formulate strategic recommendations\",\n llm=model\n)\n\n# Create analysis team\nanalysis_team = GroupChat(\n name=\"Market Analysis Team\",\n description=\"Comprehensive market analysis group\",\n agents=[data_analyst, market_expert, strategy_advisor],\n speaker_fn=expertise_based,\n max_loops=15\n)\n\n# Run complex analysis\nhistory = analysis_team.run(\"\"\"\n Analyze the current market conditions:\n 1. Identify key trends\n 2. Evaluate risks\n 3. Recommend investment strategy\n\"\"\")\n
"},{"location":"swarms/structs/group_chat/#parallel-processing","title":"Parallel Processing","text":"# Define multiple analysis tasks\ntasks = [\n \"Analyze tech sector trends\",\n \"Evaluate real estate market\",\n \"Review commodity prices\",\n \"Assess global economic indicators\"\n]\n\n# Run tasks concurrently\nhistories = chat.concurrent_run(tasks)\n\n# Process results\nfor task, history in zip(tasks, histories):\n print(f\"\\nAnalysis for: {task}\")\n for turn in history.turns:\n for response in turn.responses:\n print(f\"{response.agent_name}: {response.message}\")\n
"},{"location":"swarms/structs/group_chat/#best-practices","title":"Best Practices","text":"Enable retries for reliability
Speaker Functions
Error Handling
Add appropriate logging
Provide fallback responses
Performance
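One way to apply the retry, logging, and fallback advice above, sketched as a plain wrapper around an agent call (`run_with_retries` is a hypothetical helper, not a swarms API; `retry_attempts` mirrors the agent parameter of the same name):

```python
# Illustrative retry-with-fallback wrapper for agent calls.
from typing import Callable

def run_with_retries(call: Callable[[], str], retry_attempts: int = 1,
                     fallback: str = "No response available.") -> str:
    """Try the call, retry on failure, then return a fallback response."""
    for attempt in range(retry_attempts + 1):
        try:
            return call()
        except Exception as exc:
            print(f"attempt {attempt + 1} failed: {exc}")  # appropriate logging
    return fallback

calls = {"n": 0}

def flaky() -> str:
    # Fails once, then succeeds, to exercise the retry path.
    calls["n"] += 1
    if calls["n"] < 2:
        raise RuntimeError("transient error")
    return "ok"

result = run_with_retries(flaky, retry_attempts=1)
```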
HeavySwarm is a sophisticated multi-agent orchestration system that decomposes complex tasks into specialized questions and executes them using four specialized agents: Research, Analysis, Alternatives, and Verification. The results are then synthesized into a comprehensive response.
Inspired by xAI's Grok 4 Heavy implementation, HeavySwarm provides robust task analysis through intelligent question generation, parallel execution, and comprehensive synthesis with real-time progress monitoring.
"},{"location":"swarms/structs/heavy_swarm/#architecture","title":"Architecture","text":""},{"location":"swarms/structs/heavy_swarm/#system-design","title":"System Design","text":"The HeavySwarm follows a structured 5-phase workflow:
graph TB\n subgraph \"HeavySwarm Architecture\"\n A[Input Task] --> B[Question Generation Agent]\n B --> C[Task Decomposition]\n\n C --> D[Research Agent]\n C --> E[Analysis Agent]\n C --> F[Alternatives Agent]\n C --> G[Verification Agent]\n\n D --> H[Parallel Execution Engine]\n E --> H\n F --> H\n G --> H\n\n H --> I[Result Collection]\n I --> J[Synthesis Agent]\n J --> K[Comprehensive Report]\n\n subgraph \"Monitoring & Control\"\n L[Rich Dashboard]\n M[Progress Tracking]\n N[Error Handling]\n O[Timeout Management]\n end\n\n H --> L\n H --> M\n H --> N\n H --> O\n end\n\n subgraph \"Agent Specializations\"\n D --> D1[Information Gathering<br/>Market Research<br/>Data Collection]\n E --> E1[Statistical Analysis<br/>Pattern Recognition<br/>Predictive Modeling]\n F --> F1[Creative Solutions<br/>Strategic Options<br/>Innovation Ideation]\n G --> G1[Fact Checking<br/>Feasibility Assessment<br/>Quality Assurance]\n end\n\n style A fill:#ff6b6b\n style K fill:#4ecdc4\n style H fill:#45b7d1\n style J fill:#96ceb4
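The phases in the diagram can be imitated in miniature with plain Python: stub functions stand in for the LLM-backed question-generation, specialist, and synthesis agents, and a thread pool provides the parallel execution phase. This is a toy sketch of the workflow, not the library's implementation:

```python
# Toy model of the HeavySwarm flow: decompose, run four specialists in
# parallel, then synthesize.
from concurrent.futures import ThreadPoolExecutor

ROLES = ["Research", "Analysis", "Alternatives", "Verification"]

def generate_questions(task: str) -> dict:
    """Phases 1-2: question generation / task decomposition (stubbed)."""
    return {role: f"[{role}] What does '{task}' require?" for role in ROLES}

def run_specialist(role: str, question: str) -> str:
    """Phase 3: one specialist agent answers its question (stubbed)."""
    return f"{role} findings for: {question}"

def synthesize(results: dict) -> str:
    """Phase 5: merge specialist outputs into one report (stubbed)."""
    return "\n".join(results[role] for role in ROLES)

def heavy_run(task: str) -> str:
    questions = generate_questions(task)
    # Phase 4: parallel execution of the four specialists.
    with ThreadPoolExecutor(max_workers=len(ROLES)) as pool:
        futures = {role: pool.submit(run_specialist, role, q)
                   for role, q in questions.items()}
        results = {role: f.result() for role, f in futures.items()}
    return synthesize(results)

report = heavy_run("evaluate market entry")
```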
"},{"location":"swarms/structs/heavy_swarm/#installation","title":"Installation","text":"pip install swarms\n
"},{"location":"swarms/structs/heavy_swarm/#quick-start","title":"Quick Start","text":"from swarms import HeavySwarm\n\n# Initialize the swarm\nswarm = HeavySwarm(\n name=\"MarketAnalysisSwarm\",\n description=\"Financial market analysis swarm\",\n question_agent_model_name=\"gpt-4o-mini\",\n worker_model_name=\"gpt-4o-mini\",\n show_dashboard=True,\n verbose=True\n)\n\n# Execute analysis\nresult = swarm.run(\"Analyze the current cryptocurrency market trends and investment opportunities\")\nprint(result)\n
"},{"location":"swarms/structs/heavy_swarm/#api-reference","title":"API Reference","text":""},{"location":"swarms/structs/heavy_swarm/#heavyswarm-class","title":"HeavySwarm Class","text":""},{"location":"swarms/structs/heavy_swarm/#constructor-parameters","title":"Constructor Parameters","text":"Parameter Type Default Description name
str
\"HeavySwarm\"
Identifier name for the swarm instance description
str
\"A swarm of agents...\"
Description of the swarm's purpose agents
List[Agent]
None
Pre-configured agent list (unused - agents created internally) timeout
int
300
Maximum execution time per agent in seconds aggregation_strategy
str
\"synthesis\"
Strategy for result aggregation loops_per_agent
int
1
Number of execution loops per agent question_agent_model_name
str
\"gpt-4o-mini\"
Model for question generation worker_model_name
str
\"gpt-4o-mini\"
Model for specialized worker agents verbose
bool
False
Enable detailed logging output max_workers
int
int(os.cpu_count() * 0.9)
Maximum concurrent workers show_dashboard
bool
False
Enable rich dashboard visualization agent_prints_on
bool
False
Enable individual agent output printing"},{"location":"swarms/structs/heavy_swarm/#methods","title":"Methods","text":""},{"location":"swarms/structs/heavy_swarm/#runtask-str-img-str-none-str","title":"run(task: str, img: str = None) -> str
","text":"Execute the complete HeavySwarm orchestration flow.
Parameters:
task (str): The main task to analyze and decompose
img (str, optional): Image input for visual analysis tasks
Returns: str: Comprehensive final analysis from synthesis agent
Example:
result = swarm.run(\"Develop a go-to-market strategy for a new SaaS product\")\n
"},{"location":"swarms/structs/heavy_swarm/#real-world-applications","title":"Real-World Applications","text":""},{"location":"swarms/structs/heavy_swarm/#financial-services","title":"Financial Services","text":"# Market Analysis\nswarm = HeavySwarm(\n name=\"FinanceSwarm\",\n worker_model_name=\"gpt-4o\",\n show_dashboard=True\n)\n\nresult = swarm.run(\"\"\"\nAnalyze the impact of recent Federal Reserve policy changes on:\n1. Bond markets and yield curves\n2. Equity market valuations\n3. Currency exchange rates\n4. Provide investment recommendations for institutional portfolios\n\"\"\")\n
"},{"location":"swarms/structs/heavy_swarm/#use-cases","title":"Use-cases","text":"Use Case Description Portfolio optimization and risk assessment Optimize asset allocation and assess risks Market trend analysis and forecasting Analyze and predict market movements Regulatory compliance evaluation Evaluate adherence to financial regulations Investment strategy development Develop and refine investment strategies Credit risk analysis and modeling Analyze and model credit risk"},{"location":"swarms/structs/heavy_swarm/#healthcare-life-sciences","title":"Healthcare & Life Sciences","text":"# Clinical Research Analysis\nswarm = HeavySwarm(\n name=\"HealthcareSwarm\",\n worker_model_name=\"gpt-4o\",\n timeout=600,\n loops_per_agent=2\n)\n\nresult = swarm.run(\"\"\"\nEvaluate the potential of AI-driven personalized medicine:\n1. Current technological capabilities and limitations\n2. Regulatory landscape and approval pathways\n3. Market opportunities and competitive analysis\n4. Implementation strategies for healthcare systems\n\"\"\")\n
Use Cases:
Use Case Description Drug discovery and development analysis Analyze and accelerate drug R&D processes Clinical trial optimization Improve design and efficiency of trials Healthcare policy evaluation Assess and inform healthcare policies Medical device market analysis Evaluate trends and opportunities in devices Patient outcome prediction modeling Predict and model patient health outcomes"},{"location":"swarms/structs/heavy_swarm/#technology-innovation","title":"Technology & Innovation","text":"# Tech Strategy Analysis\nswarm = HeavySwarm(\n name=\"TechSwarm\",\n worker_model_name=\"gpt-4o\",\n show_dashboard=True,\n verbose=True\n)\n\nresult = swarm.run(\"\"\"\nAssess the strategic implications of quantum computing adoption:\n1. Technical readiness and hardware developments\n2. Industry applications and use cases\n3. Competitive landscape and key players\n4. Investment and implementation roadmap\n\"\"\")\n
Use Cases:
Use Case Description Technology roadmap development Plan and prioritize technology initiatives Competitive intelligence gathering Analyze competitors and market trends Innovation pipeline analysis Evaluate and manage innovation projects Digital transformation strategy Develop and implement digital strategies Emerging technology assessment Assess new and disruptive technologies"},{"location":"swarms/structs/heavy_swarm/#manufacturing-supply-chain","title":"Manufacturing & Supply Chain","text":"# Supply Chain Optimization\nswarm = HeavySwarm(\n name=\"ManufacturingSwarm\",\n worker_model_name=\"gpt-4o\",\n max_workers=8\n)\n\nresult = swarm.run(\"\"\"\nOptimize global supply chain resilience:\n1. Risk assessment and vulnerability analysis\n2. Alternative sourcing strategies\n3. Technology integration opportunities\n4. Cost-benefit analysis of proposed changes\n\"\"\")\n
Use Cases:
Use Case Description Supply chain risk management Identify and mitigate supply chain risks Manufacturing process optimization Improve efficiency and productivity Quality control system design Develop systems to ensure product quality Sustainability impact assessment Evaluate environmental and social impacts Logistics network optimization Enhance logistics and distribution networks"},{"location":"swarms/structs/heavy_swarm/#advanced-configuration","title":"Advanced Configuration","text":""},{"location":"swarms/structs/heavy_swarm/#custom-agent-configuration","title":"Custom Agent Configuration","text":"# High-performance configuration\nswarm = HeavySwarm(\n name=\"HighPerformanceSwarm\",\n question_agent_model_name=\"gpt-4o\",\n worker_model_name=\"gpt-4o\",\n timeout=900,\n loops_per_agent=3,\n max_workers=12,\n show_dashboard=True,\n verbose=True\n)\n
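For the "implement backoff strategies" advice in the troubleshooting table, an exponential-backoff wrapper around a rate-limited model call might look like the following. `with_backoff` is an illustrative helper, not part of swarms:

```python
# Exponential backoff with a capped delay, for retrying rate-limited calls.
import time

def with_backoff(call, max_retries: int = 3, base_delay: float = 0.01,
                 cap: float = 1.0, sleep=time.sleep):
    """Retry `call`, doubling the delay after each failure up to `cap`."""
    delays = []
    for attempt in range(max_retries + 1):
        try:
            return call(), delays
        except Exception:
            if attempt == max_retries:
                raise
            delay = min(base_delay * (2 ** attempt), cap)
            delays.append(delay)
            sleep(delay)

attempts = {"n": 0}

def rate_limited():
    # Simulates a provider that rejects the first two calls.
    attempts["n"] += 1
    if attempts["n"] <= 2:
        raise RuntimeError("429: rate limited")
    return "response"

# sleep is injected so the example runs instantly.
result, delays = with_backoff(rate_limited, sleep=lambda _d: None)
```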
"},{"location":"swarms/structs/heavy_swarm/#troubleshooting","title":"Troubleshooting","text":"Issue Solution Agent Timeout Increase timeout parameter or reduce task complexity Model Rate Limits Implement backoff strategies or use different models Memory Usage Monitor system resources with large-scale operations Dashboard Performance Disable dashboard for batch processing"},{"location":"swarms/structs/heavy_swarm/#contributing","title":"Contributing","text":"HeavySwarm is part of the Swarms ecosystem. Contributions are welcome for:
New agent specializations
Performance optimizations
Integration capabilities
Documentation improvements
Inspired by xAI's Grok Heavy implementation
Built on the Swarms framework
Utilizes Rich for dashboard visualization
Powered by advanced language models
The Hybrid Hierarchical-Cluster Swarm (HHCS) is an advanced AI orchestration architecture that combines hierarchical decision-making with parallel processing capabilities. HHCS enables complex task solving by dynamically routing tasks to specialized agent swarms based on their expertise and capabilities.
"},{"location":"swarms/structs/hhcs/#purpose","title":"Purpose","text":"HHCS addresses the challenge of efficiently solving diverse and complex tasks by:
Intelligently routing tasks to the most appropriate specialized swarms
Enabling parallel processing of multifaceted problems
Maintaining a clear hierarchy for effective decision-making
Combining outputs from multiple specialized agents for comprehensive solutions
Router-based task distribution: Central router agent analyzes incoming tasks and directs them to appropriate specialized swarms
Hybrid architecture: Combines hierarchical control with clustered specialization
Parallel processing: Multiple swarms can work simultaneously on different aspects of complex tasks
Flexible swarm types: Supports both sequential and concurrent workflows within swarms
Comprehensive result aggregation: Collects and combines outputs from all contributing swarms
The HHCS architecture follows a hierarchical structure with the router agent at the top level, specialized swarms at the middle level, and individual agents at the bottom level.
flowchart TD\n Start([Task Input]) --> RouterAgent[Router Agent]\n RouterAgent --> Analysis{Task Analysis}\n\n Analysis -->|Analyze Requirements| Selection[Swarm Selection]\n Selection -->|Select Best Swarm| Route[Route Task]\n\n Route --> Swarm1[Swarm 1]\n Route --> Swarm2[Swarm 2]\n Route --> SwarmN[Swarm N...]\n\n Swarm1 -->|Process Task| Result1[Swarm 1 Output]\n Swarm2 -->|Process Task| Result2[Swarm 2 Output]\n SwarmN -->|Process Task| ResultN[Swarm N Output]\n\n Result1 --> Conversation[Conversation History]\n Result2 --> Conversation\n ResultN --> Conversation\n\n Conversation --> Output([Final Output])\n\n subgraph Router Decision Process\n Analysis\n Selection\n end\n\n subgraph Parallel Task Processing\n Swarm1\n Swarm2\n SwarmN\n end\n\n subgraph Results Collection\n Result1\n Result2\n ResultN\n Conversation\n end
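The routing decision in the flowchart can be imitated in miniature: here simple keyword overlap stands in for the router agent's LLM-based task analysis, and the `SWARMS` table stands in for registered SwarmRouters. This is a toy sketch, not the HHCS implementation:

```python
# Toy router: score each swarm's description against the task and
# dispatch to the best match.
SWARMS = {
    "litigation-practice": "lawsuits case strategy court disputes",
    "ip-practice": "patents trademarks copyrights trade secrets",
    "employment-practice": "hiring termination discrimination labor",
}

def route_task(task: str) -> str:
    """Pick the swarm whose description shares the most words with the task."""
    words = set(task.lower().split())
    scores = {name: len(words & set(desc.split()))
              for name, desc in SWARMS.items()}
    return max(scores, key=scores.get)

def run(task: str) -> str:
    """Route the task, then return the (stubbed) swarm output."""
    chosen = route_task(task)
    return f"[{chosen}] handled: {task}"

print(route_task("how do we protect our patents and trademarks"))
```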
"},{"location":"swarms/structs/hhcs/#hybridhierarchicalclusterswarm-constructor-arguments","title":"HybridHierarchicalClusterSwarm
Constructor Arguments","text":"Parameter Type Default Description name
string \"Hybrid Hierarchical-Cluster Swarm\" The name of the swarm instance description
string \"A swarm that uses a hybrid hierarchical-peer model to solve complex tasks.\" Brief description of the swarm's functionality swarms
List[SwarmRouter] [] List of available swarm routers max_loops
integer 1 Maximum number of processing loops output_type
string \"list\" Format for output (e.g., \"list\", \"json\") router_agent_model_name
string \"gpt-4o-mini\" LLM model used by the router agent"},{"location":"swarms/structs/hhcs/#methods","title":"Methods","text":"Method Parameters Return Type Description run
task
(str) str Processes a single task through the swarm system batched_run
tasks
(List[str]) List[str] Processes multiple tasks in parallel find_swarm_by_name
swarm_name
(str) SwarmRouter Retrieves a swarm by its name route_task
swarm_name
(str), task_description
(str) None Routes a task to a specific swarm get_swarms_info
None str Returns formatted information about all available swarms"},{"location":"swarms/structs/hhcs/#full-example","title":"Full Example","text":"from swarms import Agent, SwarmRouter\nfrom swarms.structs.hybrid_hiearchical_peer_swarm import (\n HybridHierarchicalClusterSwarm,\n)\n\n\n# Core Legal Agent Definitions with short, simple prompts\nlitigation_agent = Agent(\n agent_name=\"Litigator\",\n system_prompt=\"You handle lawsuits. Analyze facts, build arguments, and develop case strategy.\",\n model_name=\"groq/deepseek-r1-distill-qwen-32b\",\n max_loops=1,\n)\n\ncorporate_agent = Agent(\n agent_name=\"Corporate-Attorney\",\n system_prompt=\"You handle business law. Advise on corporate structure, governance, and transactions.\",\n model_name=\"groq/deepseek-r1-distill-qwen-32b\",\n max_loops=1,\n)\n\nip_agent = Agent(\n agent_name=\"IP-Attorney\",\n system_prompt=\"You protect intellectual property. Handle patents, trademarks, copyrights, and trade secrets.\",\n model_name=\"groq/deepseek-r1-distill-qwen-32b\",\n max_loops=1,\n)\n\nemployment_agent = Agent(\n agent_name=\"Employment-Attorney\",\n system_prompt=\"You handle workplace matters. Address hiring, termination, discrimination, and labor issues.\",\n model_name=\"groq/deepseek-r1-distill-qwen-32b\",\n max_loops=1,\n)\n\nparalegal_agent = Agent(\n agent_name=\"Paralegal\",\n system_prompt=\"You assist attorneys. Conduct research, draft documents, and organize case files.\",\n model_name=\"groq/deepseek-r1-distill-qwen-32b\",\n max_loops=1,\n)\n\ndoc_review_agent = Agent(\n agent_name=\"Document-Reviewer\",\n system_prompt=\"You examine documents. 
Extract key information and identify relevant content.\",\n model_name=\"groq/deepseek-r1-distill-qwen-32b\",\n max_loops=1,\n)\n\n# Practice Area Swarm Routers\nlitigation_swarm = SwarmRouter(\n name=\"litigation-practice\",\n description=\"Handle all aspects of litigation\",\n agents=[litigation_agent, paralegal_agent, doc_review_agent],\n swarm_type=\"SequentialWorkflow\",\n)\n\ncorporate_swarm = SwarmRouter(\n name=\"corporate-practice\",\n description=\"Handle business and corporate legal matters\",\n agents=[corporate_agent, paralegal_agent],\n swarm_type=\"SequentialWorkflow\",\n)\n\nip_swarm = SwarmRouter(\n name=\"ip-practice\",\n description=\"Handle intellectual property matters\",\n agents=[ip_agent, paralegal_agent],\n swarm_type=\"SequentialWorkflow\",\n)\n\nemployment_swarm = SwarmRouter(\n name=\"employment-practice\",\n description=\"Handle employment and labor law matters\",\n agents=[employment_agent, paralegal_agent],\n swarm_type=\"SequentialWorkflow\",\n)\n\n# Cross-functional Swarm Router\nm_and_a_swarm = SwarmRouter(\n name=\"mergers-acquisitions\",\n description=\"Handle mergers and acquisitions\",\n agents=[\n corporate_agent,\n ip_agent,\n employment_agent,\n doc_review_agent,\n ],\n swarm_type=\"ConcurrentWorkflow\",\n)\n\ndispute_swarm = SwarmRouter(\n name=\"dispute-resolution\",\n description=\"Handle complex disputes requiring multiple specialties\",\n agents=[litigation_agent, corporate_agent, doc_review_agent],\n swarm_type=\"ConcurrentWorkflow\",\n)\n\n\nhybrid_hiearchical_swarm = HybridHierarchicalClusterSwarm(\n name=\"hybrid-hiearchical-swarm\",\n description=\"A hybrid hiearchical swarm that uses a hybrid hiearchical peer model to solve complex tasks.\",\n swarms=[\n litigation_swarm,\n corporate_swarm,\n ip_swarm,\n employment_swarm,\n m_and_a_swarm,\n dispute_swarm,\n ],\n max_loops=1,\n router_agent_model_name=\"gpt-4o-mini\",\n)\n\n\nif __name__ == \"__main__\":\n hybrid_hiearchical_swarm.run(\n \"What is the best way to 
file for a patent? for ai technology \"\n )\n
"},{"location":"swarms/structs/hierarchical_swarm/","title":"HierarchicalSwarm
","text":"The HierarchicalSwarm
is a sophisticated multi-agent orchestration system that implements a hierarchical workflow pattern. It consists of a director agent that coordinates and distributes tasks to specialized worker agents, creating a structured approach to complex problem-solving.
The Hierarchical Swarm follows a clear workflow pattern: the director creates a plan and distributes orders to the agents, the agents execute their tasks and report results, and the director evaluates the output, looping back for up to max_loops iterations:
graph TD\n A[User Task] --> B[Director Agent]\n B --> C[Create Plan & Orders]\n C --> D[Distribute to Agents]\n D --> E[Agent 1]\n D --> F[Agent 2]\n D --> G[Agent N]\n E --> H[Execute Task]\n F --> H\n G --> H\n H --> I[Report Results]\n I --> J[Director Evaluation]\n J --> K{More Loops?}\n K -->|Yes| C\n K -->|No| L[Final Output]
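The loop in the diagram can be sketched in plain Python, with stubs replacing the LLM-backed director and worker agents. A toy model under those assumptions:

```python
# Toy hierarchical loop: plan, distribute, execute, evaluate, repeat.
from typing import List

def director_plan(task: str, loop: int) -> List[str]:
    """Create Plan & Orders (stubbed): one order per worker."""
    return [f"(loop {loop}) {task} :: order for {name}"
            for name in ("Agent-1", "Agent-2")]

def agent_execute(order: str) -> str:
    """Execute Task (stubbed)."""
    return f"result<{order}>"

def director_evaluate(results: List[str]) -> bool:
    """Director Evaluation (stub: satisfied after one successful pass)."""
    return len(results) > 0

def hierarchical_run(task: str, max_loops: int = 2):
    history = []
    for loop in range(1, max_loops + 1):
        orders = director_plan(task, loop)            # Create Plan & Orders
        results = [agent_execute(o) for o in orders]  # Distribute & Execute
        history.extend(results)                       # Report Results
        if director_evaluate(results):                # More Loops?
            break
    return history

history = hierarchical_run("analyze TSLA", max_loops=3)
```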
"},{"location":"swarms/structs/hierarchical_swarm/#key-features","title":"Key Features","text":"HierarchicalSwarm
Constructor","text":"Parameter Type Default Description name
str
\"HierarchicalAgentSwarm\"
The name of the swarm instance description
str
\"Distributed task swarm\"
Brief description of the swarm's functionality director
Optional[Union[Agent, Callable, Any]]
None
The director agent that orchestrates tasks agents
List[Union[Agent, Callable, Any]]
None
List of worker agents in the swarm max_loops
int
1
Maximum number of feedback loops between director and agents output_type
OutputType
\"dict-all-except-first\"
Format for output (dict, str, list) feedback_director_model_name
str
\"gpt-4o-mini\"
Model name for feedback director director_name
str
\"Director\"
Name of the director agent director_model_name
str
\"gpt-4o-mini\"
Model name for the director agent verbose
bool
False
Enable detailed logging add_collaboration_prompt
bool
True
Add collaboration prompts to agents planning_director_agent
Optional[Union[Agent, Callable, Any]]
None
Optional planning agent for enhanced planning"},{"location":"swarms/structs/hierarchical_swarm/#core-methods","title":"Core Methods","text":""},{"location":"swarms/structs/hierarchical_swarm/#runtask-imgnone-args-kwargs","title":"run(task, img=None, *args, **kwargs)
","text":"Executes the hierarchical swarm for a specified number of feedback loops, processing the task through multiple iterations for refinement and improvement.
"},{"location":"swarms/structs/hierarchical_swarm/#parameters","title":"Parameters","text":"Parameter Type Default Descriptiontask
str
Required The initial task to be processed by the swarm img
str
None
Optional image input for the agents *args
Any
- Additional positional arguments **kwargs
Any
- Additional keyword arguments"},{"location":"swarms/structs/hierarchical_swarm/#returns","title":"Returns","text":"Type Description Any
The formatted conversation history as output based on output_type
"},{"location":"swarms/structs/hierarchical_swarm/#example","title":"Example","text":"from swarms import Agent\nfrom swarms.structs.hiearchical_swarm import HierarchicalSwarm\n\n# Create specialized agents\nresearch_agent = Agent(\n agent_name=\"Research-Specialist\",\n agent_description=\"Expert in market research and analysis\",\n model_name=\"gpt-4o\",\n)\n\nfinancial_agent = Agent(\n agent_name=\"Financial-Analyst\",\n agent_description=\"Specialist in financial analysis and valuation\",\n model_name=\"gpt-4o\",\n)\n\n# Initialize the hierarchical swarm\nswarm = HierarchicalSwarm(\n name=\"Financial-Analysis-Swarm\",\n description=\"A hierarchical swarm for comprehensive financial analysis\",\n agents=[research_agent, financial_agent],\n max_loops=2,\n verbose=True,\n)\n\n# Execute a complex task\ntask = \"Analyze the market potential for Tesla (TSLA) stock\"\nresult = swarm.run(task=task)\nprint(result)\n
"},{"location":"swarms/structs/hierarchical_swarm/#steptask-imgnone-args-kwargs","title":"step(task, img=None, *args, **kwargs)
","text":"Runs a single step of the hierarchical swarm, executing one complete cycle of planning, distribution, execution, and feedback.
"},{"location":"swarms/structs/hierarchical_swarm/#parameters_1","title":"Parameters","text":"Parameter Type Default Descriptiontask
str
Required The task to be executed in this step img
str
None
Optional image input for the agents *args
Any
- Additional positional arguments **kwargs
Any
- Additional keyword arguments"},{"location":"swarms/structs/hierarchical_swarm/#returns_1","title":"Returns","text":"Type Description str
Feedback from the director based on agent outputs"},{"location":"swarms/structs/hierarchical_swarm/#example_1","title":"Example","text":"from swarms import Agent\nfrom swarms.structs.hiearchical_swarm import HierarchicalSwarm\n\n# Create development agents\nfrontend_agent = Agent(\n agent_name=\"Frontend-Developer\",\n agent_description=\"Expert in React and modern web development\",\n model_name=\"gpt-4o\",\n)\n\nbackend_agent = Agent(\n agent_name=\"Backend-Developer\",\n agent_description=\"Specialist in Node.js and API development\",\n model_name=\"gpt-4o\",\n)\n\n# Initialize the swarm\nswarm = HierarchicalSwarm(\n name=\"Development-Swarm\",\n description=\"A hierarchical swarm for software development\",\n agents=[frontend_agent, backend_agent],\n max_loops=1,\n verbose=True,\n)\n\n# Execute a single step\ntask = \"Create a simple web app for file upload and download\"\nfeedback = swarm.step(task=task)\nprint(\"Director Feedback:\", feedback)\n
"},{"location":"swarms/structs/hierarchical_swarm/#batched_runtasks-imgnone-args-kwargs","title":"batched_run(tasks, img=None, *args, **kwargs)
","text":"Executes the hierarchical swarm for a list of tasks, processing each task through the complete workflow.
"},{"location":"swarms/structs/hierarchical_swarm/#parameters_2","title":"Parameters","text":"Parameter Type Default Descriptiontasks
List[str]
Required List of tasks to be processed img
str
None
Optional image input for the agents *args
Any
- Additional positional arguments **kwargs
Any
- Additional keyword arguments"},{"location":"swarms/structs/hierarchical_swarm/#returns_2","title":"Returns","text":"Type Description List[Any]
List of results for each task"},{"location":"swarms/structs/hierarchical_swarm/#example_2","title":"Example","text":"from swarms import Agent\nfrom swarms.structs.hiearchical_swarm import HierarchicalSwarm\n\n# Create analysis agents\nmarket_agent = Agent(\n agent_name=\"Market-Analyst\",\n agent_description=\"Expert in market analysis and trends\",\n model_name=\"gpt-4o\",\n)\n\ntechnical_agent = Agent(\n agent_name=\"Technical-Analyst\",\n agent_description=\"Specialist in technical analysis and patterns\",\n model_name=\"gpt-4o\",\n)\n\n# Initialize the swarm\nswarm = HierarchicalSwarm(\n name=\"Analysis-Swarm\",\n description=\"A hierarchical swarm for comprehensive analysis\",\n agents=[market_agent, technical_agent],\n max_loops=2,\n verbose=True,\n)\n\n# Execute multiple tasks\ntasks = [\n \"Analyze Apple (AAPL) stock performance\",\n \"Evaluate Microsoft (MSFT) market position\",\n \"Assess Google (GOOGL) competitive landscape\"\n]\n\nresults = swarm.batched_run(tasks=tasks)\nfor i, result in enumerate(results):\n print(f\"Task {i+1} Result:\", result)\n
"},{"location":"swarms/structs/hierarchical_swarm/#advanced-usage-examples","title":"Advanced Usage Examples","text":""},{"location":"swarms/structs/hierarchical_swarm/#financial-analysis-swarm","title":"Financial Analysis Swarm","text":"from swarms import Agent\nfrom swarms.structs.hiearchical_swarm import HierarchicalSwarm\n\n# Create specialized financial agents\nmarket_research_agent = Agent(\n agent_name=\"Market-Research-Specialist\",\n agent_description=\"Expert in market research, trend analysis, and competitive intelligence\",\n system_prompt=\"\"\"You are a senior market research specialist with expertise in:\n - Market trend analysis and forecasting\n - Competitive landscape assessment\n - Consumer behavior analysis\n - Industry report generation\n - Market opportunity identification\n - Risk assessment and mitigation strategies\"\"\",\n model_name=\"claude-3-sonnet-20240229\",\n)\n\nfinancial_analyst_agent = Agent(\n agent_name=\"Financial-Analysis-Expert\",\n agent_description=\"Specialist in financial statement analysis, valuation, and investment research\",\n system_prompt=\"\"\"You are a senior financial analyst with deep expertise in:\n - Financial statement analysis (income statement, balance sheet, cash flow)\n - Valuation methodologies (DCF, comparable company analysis, precedent transactions)\n - Investment research and due diligence\n - Financial modeling and forecasting\n - Risk assessment and portfolio analysis\n - ESG (Environmental, Social, Governance) analysis\"\"\",\n model_name=\"claude-3-sonnet-20240229\",\n)\n\n# Initialize the hierarchical swarm\nfinancial_analysis_swarm = HierarchicalSwarm(\n name=\"Financial-Analysis-Hierarchical-Swarm\",\n description=\"A hierarchical swarm for comprehensive financial analysis with specialized agents\",\n agents=[market_research_agent, financial_analyst_agent],\n max_loops=2,\n verbose=True,\n)\n\n# Execute financial analysis\ntask = \"Conduct a comprehensive analysis of Tesla (TSLA) stock 
including market position, financial health, and investment potential\"\nresult = financial_analysis_swarm.run(task=task)\nprint(result)\n
"},{"location":"swarms/structs/hierarchical_swarm/#development-department-swarm","title":"Development Department Swarm","text":"from swarms import Agent\nfrom swarms.structs.hiearchical_swarm import HierarchicalSwarm\n\n# Create specialized development agents\nfrontend_developer_agent = Agent(\n agent_name=\"Frontend-Developer\",\n agent_description=\"Senior frontend developer expert in modern web technologies and user experience\",\n system_prompt=\"\"\"You are a senior frontend developer with expertise in:\n - Modern JavaScript frameworks (React, Vue, Angular)\n - TypeScript and modern ES6+ features\n - CSS frameworks and responsive design\n - State management (Redux, Zustand, Context API)\n - Web performance optimization\n - Accessibility (WCAG) and SEO best practices\"\"\",\n model_name=\"claude-3-sonnet-20240229\",\n)\n\nbackend_developer_agent = Agent(\n agent_name=\"Backend-Developer\",\n agent_description=\"Senior backend developer specializing in server-side development and API design\",\n system_prompt=\"\"\"You are a senior backend developer with expertise in:\n - Server-side programming languages (Python, Node.js, Java, Go)\n - Web frameworks (Django, Flask, Express, Spring Boot)\n - Database design and optimization (SQL, NoSQL)\n - API design and REST/GraphQL implementation\n - Authentication and authorization systems\n - Microservices architecture and containerization\"\"\",\n model_name=\"claude-3-sonnet-20240229\",\n)\n\n# Initialize the development swarm\ndevelopment_department_swarm = HierarchicalSwarm(\n name=\"Autonomous-Development-Department\",\n description=\"A fully autonomous development department with specialized agents\",\n agents=[frontend_developer_agent, backend_developer_agent],\n max_loops=3,\n verbose=True,\n)\n\n# Execute development project\ntask = \"Create a simple web app that allows users to upload a file and then download it. 
The app should be built with React and Node.js.\"\nresult = development_department_swarm.run(task=task)\nprint(result)\n
"},{"location":"swarms/structs/hierarchical_swarm/#output-types","title":"Output Types","text":"The HierarchicalSwarm
supports various output formats through the output_type
parameter:
\"dict-all-except-first\"
Returns all conversation history as a dictionary, excluding the first message Default format for comprehensive analysis \"dict\"
Returns conversation history as a dictionary When you need structured data \"str\"
Returns conversation history as a string For simple text output \"list\"
Returns conversation history as a list For sequential processing"},{"location":"swarms/structs/hierarchical_swarm/#best-practices","title":"Best Practices","text":"Set max_loops based on task complexity (1-3 for most tasks). The HierarchicalSwarm includes comprehensive error handling with detailed logging. A common issue is leaving max_loops at 0; the solution is to set max_loops to a value greater than 0.
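The four output_type values above can be pictured with a plain-Python sketch. This is illustrative only; `format_history` is a hypothetical helper written for this page, not the library's implementation.

```python
from typing import Dict, List, Union

def format_history(history: List[Dict[str, str]], output_type: str) -> Union[str, list, dict]:
    # Hypothetical mapping of the documented output_type values.
    if output_type == "dict-all-except-first":
        return {i: msg for i, msg in enumerate(history[1:], start=1)}
    if output_type == "dict":
        return {i: msg for i, msg in enumerate(history)}
    if output_type == "str":
        return "\n".join(f"{m['role']}: {m['content']}" for m in history)
    if output_type == "list":
        return list(history)
    raise ValueError(f"unknown output_type: {output_type}")

history = [
    {"role": "Director", "content": "Plan the work"},
    {"role": "Frontend-Developer", "content": "Build the UI"},
]
print(format_history(history, "str"))
```

The default, "dict-all-except-first", drops the initial message, which is useful when the first entry is boilerplate context rather than analysis.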
"},{"location":"swarms/structs/image_batch_agent/","title":"ImageAgentBatchProcessor","text":"The ImageAgentBatchProcessor
is a high-performance parallel image processing system designed for running AI agents on multiple images concurrently. It provides robust error handling, logging, and flexible configuration options.
"},{"location":"swarms/structs/image_batch_agent/#installation","title":"Installation","text":"pip install swarms\n
"},{"location":"swarms/structs/image_batch_agent/#class-arguments","title":"Class Arguments","text":"Parameter Type Default Description agents Union[Agent, List[Agent], Callable, List[Callable]] Required Single agent or list of agents to process images max_workers int None Maximum number of parallel workers (defaults to 95% of CPU cores) supported_formats List[str] ['.jpg', '.jpeg', '.png'] List of supported image file extensions"},{"location":"swarms/structs/image_batch_agent/#methods","title":"Methods","text":""},{"location":"swarms/structs/image_batch_agent/#run","title":"run()","text":"Description: Main method for processing multiple images in parallel with configured agents. Can handle single images, multiple images, or entire directories.
Arguments:
Parameter Type Required Description image_paths Union[str, List[str], Path] Yes Single image path, list of paths, or directory path tasks Union[str, List[str]] Yes Single task or list of tasks to perform on each imageReturns: List[Dict[str, Any]] - List of processing results for each image
Example:
from swarms import Agent\nfrom swarms.structs import ImageAgentBatchProcessor\nfrom pathlib import Path\n\n# Initialize agent and processor\nagent = Agent(api_key=\"your-api-key\", model=\"gpt-4-vision\")\nprocessor = ImageAgentBatchProcessor(agents=agent)\n\n# Example 1: Process single image\nresults = processor.run(\n image_paths=\"path/to/image.jpg\",\n tasks=\"Describe this image\"\n)\n\n# Example 2: Process multiple images\nresults = processor.run(\n image_paths=[\"image1.jpg\", \"image2.jpg\"],\n tasks=[\"Describe objects\", \"Identify colors\"]\n)\n\n# Example 3: Process directory\nresults = processor.run(\n image_paths=Path(\"./images\"),\n tasks=\"Analyze image content\"\n)\n
"},{"location":"swarms/structs/image_batch_agent/#_validate_image_path","title":"_validate_image_path()","text":"Description: Internal method that validates if an image path exists and has a supported format.
Arguments:
Parameter Type Required Description image_path Union[str, Path] Yes Path to the image file to validateReturns: Path - Validated Path object
Example:
from swarms import Agent\nfrom swarms.structs import ImageAgentBatchProcessor, ImageProcessingError\nfrom pathlib import Path\n\nagent = Agent(api_key=\"your-api-key\", model=\"gpt-4-vision\")\nprocessor = ImageAgentBatchProcessor(agents=agent)\n\ntry:\n validated_path = processor._validate_image_path(\"image.jpg\")\n print(f\"Valid image path: {validated_path}\")\nexcept ImageProcessingError as e:\n print(f\"Invalid image path: {e}\")\n
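The checks that _validate_image_path performs can be sketched in plain Python. This mirrors only the documented behavior (existence plus a supported extension); `validate_image_path` and the stand-in `ImageProcessingError` here are illustrative, not the library's source.

```python
from pathlib import Path
from typing import List, Union

class ImageProcessingError(Exception):
    """Stand-in for the library's error class."""

def validate_image_path(
    image_path: Union[str, Path],
    supported_formats: List[str] = (".jpg", ".jpeg", ".png"),
) -> Path:
    path = Path(image_path)
    # Reject unsupported extensions first, then missing files.
    if path.suffix.lower() not in supported_formats:
        raise ImageProcessingError(f"Unsupported format: {path.suffix}")
    if not path.exists():
        raise ImageProcessingError(f"File not found: {path}")
    return path
```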
"},{"location":"swarms/structs/image_batch_agent/#_process_single_image","title":"_process_single_image()","text":"Description: Internal method that processes a single image with one agent and one or more tasks.
Arguments:
Parameter Type Required Description image_path Path Yes Path to the image to process tasks Union[str, List[str]] Yes Tasks to perform on the image agent Agent Yes Agent to use for processingReturns: Dict[str, Any] - Processing results for the image
Example:
from swarms import Agent\nfrom swarms.structs import ImageAgentBatchProcessor\nfrom pathlib import Path\n\nagent = Agent(api_key=\"your-api-key\", model=\"gpt-4-vision\")\nprocessor = ImageAgentBatchProcessor(agents=agent)\n\ntry:\n result = processor._process_single_image(\n image_path=Path(\"image.jpg\"),\n tasks=[\"Describe image\", \"Identify objects\"],\n agent=agent\n )\n print(f\"Processing results: {result}\")\nexcept Exception as e:\n print(f\"Processing failed: {e}\")\n
"},{"location":"swarms/structs/image_batch_agent/#call","title":"call()","text":"Description: Makes the ImageAgentBatchProcessor callable like a function. Redirects to the run() method.
Arguments:
Parameter Type Required Description *args Any No Variable length argument list passed to run() **kwargs Any No Keyword arguments passed to run()Returns: List[Dict[str, Any]] - Same as run() method
Example:
from swarms import Agent\nfrom swarms.structs import ImageAgentBatchProcessor\n\n# Initialize\nagent = Agent(api_key=\"your-api-key\", model=\"gpt-4-vision\")\nprocessor = ImageAgentBatchProcessor(agents=agent)\n\n# Using __call__\nresults = processor(\n image_paths=[\"image1.jpg\", \"image2.jpg\"],\n tasks=\"Describe the image\"\n)\n\n# This is equivalent to:\nresults = processor.run(\n image_paths=[\"image1.jpg\", \"image2.jpg\"],\n tasks=\"Describe the image\"\n)\n
"},{"location":"swarms/structs/image_batch_agent/#return-format","title":"Return Format","text":"The processor returns a list of dictionaries with the following structure:
{\n \"image_path\": str, # Path to the processed image\n \"results\": { # Results for each task\n \"task_name\": result, # Task-specific results\n },\n \"processing_time\": float # Processing time in seconds\n}\n
"},{"location":"swarms/structs/image_batch_agent/#complete-usage-examples","title":"Complete Usage Examples","text":""},{"location":"swarms/structs/image_batch_agent/#1-basic-usage-with-single-agent","title":"1. Basic Usage with Single Agent","text":"from swarms import Agent\nfrom swarms.structs import ImageAgentBatchProcessor\n\n# Initialize an agent\nagent = Agent(\n api_key=\"your-api-key\",\n model=\"gpt-4-vision\"\n)\n\n# Create processor\nprocessor = ImageAgentBatchProcessor(agents=agent)\n\n# Process single image\nresults = processor.run(\n image_paths=\"path/to/image.jpg\",\n tasks=\"Describe this image in detail\"\n)\n
"},{"location":"swarms/structs/image_batch_agent/#2-processing-multiple-images-with-multiple-tasks","title":"2. Processing Multiple Images with Multiple Tasks","text":"# Initialize with multiple agents\nagent1 = Agent(api_key=\"key1\", model=\"gpt-4-vision\")\nagent2 = Agent(api_key=\"key2\", model=\"claude-3\")\n\nprocessor = ImageAgentBatchProcessor(\n agents=[agent1, agent2],\n supported_formats=['.jpg', '.png', '.webp']\n)\n\n# Define multiple tasks\ntasks = [\n \"Describe the main objects in the image\",\n \"What is the dominant color?\",\n \"Identify any text in the image\"\n]\n\n# Process a directory of images\nresults = processor.run(\n image_paths=\"path/to/image/directory\",\n tasks=tasks\n)\n\n# Process results\nfor result in results:\n print(f\"Image: {result['image_path']}\")\n for task, output in result['results'].items():\n print(f\"Task: {task}\")\n print(f\"Result: {output}\")\n print(f\"Processing time: {result['processing_time']:.2f} seconds\")\n
"},{"location":"swarms/structs/image_batch_agent/#3-custom-error-handling","title":"3. Custom Error Handling","text":"from swarms.structs import ImageAgentBatchProcessor, ImageProcessingError\n\ntry:\n processor = ImageAgentBatchProcessor(agents=agent)\n results = processor.run(\n image_paths=[\"image1.jpg\", \"image2.png\", \"invalid.txt\"],\n tasks=\"Analyze the image\"\n )\nexcept ImageProcessingError as e:\n print(f\"Image processing failed: {e}\")\nexcept InvalidAgentError as e:\n print(f\"Agent configuration error: {e}\")\n
"},{"location":"swarms/structs/image_batch_agent/#best-practices","title":"Best Practices","text":"Best Practice Description Resource Management \u2022 The processor automatically uses 95% of available CPU cores\u2022 For memory-intensive operations, consider reducing max_workers
Error Handling \u2022 Always wrap processor calls in try-except blocks\u2022 Check the results for any error keys Task Design \u2022 Keep tasks focused and specific\u2022 Group related tasks together for efficiency Performance Optimization \u2022 Process images in batches for better throughput\u2022 Use multiple agents for different types of analysis"},{"location":"swarms/structs/image_batch_agent/#limitations","title":"Limitations","text":"Limitation Description File Format Support Only supports image file formats specified in supported_formats
Agent Requirements Requires valid agent configurations Resource Scaling Memory usage scales with number of concurrent processes This documentation provides a comprehensive guide to using the ImageAgentBatchProcessor
. The class is designed to be both powerful and flexible, allowing for various use cases from simple image analysis to complex multi-agent processing pipelines.
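The parallel fan-out described above can be sketched without the library: images are dispatched across a worker pool sized to the documented default of roughly 95% of CPU cores, and each image yields a result dictionary in the documented return format. `process_batch` and `fake_agent` are hypothetical stand-ins, not the library's implementation.

```python
import os
import time
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path
from typing import Any, Callable, Dict, List, Optional

def process_batch(
    image_paths: List[str],
    task: str,
    run_agent: Callable[[str, str], str],
    max_workers: Optional[int] = None,
) -> List[Dict[str, Any]]:
    # Documented default: ~95% of available CPU cores.
    if max_workers is None:
        max_workers = max(1, int((os.cpu_count() or 1) * 0.95))

    def _one(path: str) -> Dict[str, Any]:
        start = time.time()
        output = run_agent(path, task)  # a real agent call would go here
        return {
            "image_path": path,
            "results": {task: output},
            "processing_time": time.time() - start,
        }

    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(_one, image_paths))  # map preserves input order

# Stand-in agent so the sketch runs without an API key.
def fake_agent(path: str, task: str) -> str:
    return f"analyzed {Path(path).name}"

results = process_batch(["a.jpg", "b.jpg"], "Describe this image", fake_agent)
print(results[0])
```

Threads (rather than processes) suit this workload when the agent call is I/O-bound, such as waiting on a model API.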
"},{"location":"swarms/structs/interactive_groupchat/","title":"InteractiveGroupChat","text":"The InteractiveGroupChat is a sophisticated multi-agent system that enables interactive conversations between users and AI agents using @mentions. This system allows users to direct tasks to specific agents and facilitates collaborative responses when multiple agents are mentioned.
"},{"location":"swarms/structs/interactive_groupchat/#features","title":"Features","text":"Feature Description @mentions Support Direct tasks to specific agents using @agent_name syntax Multi-Agent Collaboration Multiple mentioned agents can see and respond to each other's tasks Enhanced Collaborative Prompts Agents are trained to acknowledge, build upon, and synthesize each other's responses Speaker Functions Control the order in which agents respond (round robin, random, priority, custom) Dynamic Speaker Management Change speaker functions and priorities during runtime Random Dynamic Speaker Advanced speaker function that follows @mentions in agent responses Parallel and Sequential Strategies Support for both parallel and sequential agent execution Callable Function Support Supports both Agent instances and callable functions as chat participants Comprehensive Error Handling Custom error classes for different scenarios Conversation History Maintains a complete history of the conversation Flexible Output Formatting Configurable output format for conversation history Interactive Terminal Mode Full REPL interface for real-time chat with agents"},{"location":"swarms/structs/interactive_groupchat/#installation","title":"Installation","text":"pip install swarms\n
"},{"location":"swarms/structs/interactive_groupchat/#methods-reference","title":"Methods Reference","text":""},{"location":"swarms/structs/interactive_groupchat/#constructor-__init__","title":"Constructor (__init__
)","text":"Description: Initializes a new InteractiveGroupChat instance with the specified configuration.
Arguments:
Parameter Type Description Defaultid
str Unique identifier for the chat auto-generated key name
str Name of the group chat \"InteractiveGroupChat\" description
str Description of the chat's purpose generic description agents
List[Union[Agent, Callable]] List of participating agents empty list max_loops
int Maximum conversation turns 1 output_type
str Type of output format \"string\" interactive
bool Whether to enable interactive mode False speaker_function
Union[str, Callable] Function to determine speaking order round_robin_speaker speaker_state
dict Initial state for speaker function {\"current_index\": 0} Example:
from swarms import Agent, InteractiveGroupChat\n\n# Create agents\nfinancial_advisor = Agent(\n agent_name=\"FinancialAdvisor\",\n system_prompt=\"You are a financial advisor specializing in investment strategies.\",\n model_name=\"gpt-4\"\n)\n\ntax_expert = Agent(\n agent_name=\"TaxExpert\",\n system_prompt=\"You are a tax expert providing tax-related guidance.\",\n model_name=\"gpt-4\"\n)\n\n# Initialize group chat with speaker function\nfrom swarms.structs.interactive_groupchat import round_robin_speaker\n\nchat = InteractiveGroupChat(\n id=\"finance-chat-001\",\n name=\"Financial Advisory Team\",\n description=\"Expert financial guidance team\",\n agents=[financial_advisor, tax_expert],\n max_loops=3,\n output_type=\"string\",\n interactive=True,\n speaker_function=round_robin_speaker\n)\n
"},{"location":"swarms/structs/interactive_groupchat/#run-method-run","title":"Run Method (run
)","text":"Description: Processes a task and gets responses from mentioned agents. This is the main method for sending tasks in non-interactive mode.
Arguments:
task
(str): The input task containing @mentions to agentsimg
(Optional[str]): Optional image for the taskimgs
(Optional[List[str]]): Optional list of images for the taskReturns: The responses from the mentioned agents, formatted according to the configured output_type
Example:
# Single agent interaction\nresponse = chat.run(\"@FinancialAdvisor what are the best ETFs for 2024?\")\nprint(response)\n\n# Multiple agent collaboration\nresponse = chat.run(\"@FinancialAdvisor and @TaxExpert, how can I minimize taxes on my investments?\")\nprint(response)\n\n# With image input\nresponse = chat.run(\"@FinancialAdvisor analyze this chart\", img=\"chart.png\")\nprint(response)\n
"},{"location":"swarms/structs/interactive_groupchat/#start-interactive-session-start_interactive_session","title":"Start Interactive Session (start_interactive_session
)","text":"Description: Starts an interactive terminal session for real-time chat with agents. This creates a REPL (Read-Eval-Print Loop) interface.
Arguments: None
Features: - Real-time chat with agents using @mentions - View available agents and their descriptions - Change speaker functions during the session - Built-in help system - Graceful exit with 'exit' or 'quit' commands
Example:
# Initialize chat with interactive mode\nchat = InteractiveGroupChat(\n agents=[financial_advisor, tax_expert],\n interactive=True\n)\n\n# Start the interactive session\nchat.start_interactive_session()\n
Interactive Session Commands: - @agent_name message
- Mention specific agents - help
or ?
- Show help information - speaker
- Change speaker function - exit
or quit
- End the session
"},{"location":"swarms/structs/interactive_groupchat/#set-speaker-function-set_speaker_function","title":"Set Speaker Function (set_speaker_function
)","text":"Description:
Dynamically changes the speaker function and optional state during runtime.
Arguments:
speaker_function
(Union[str, Callable]): Function that determines speaking orderspeaker_state
(dict, optional): State for the speaker functionExample:
from swarms.structs.interactive_groupchat import random_speaker, priority_speaker\n\n# Change to random speaker function\nchat.set_speaker_function(random_speaker)\n\n# Change to priority speaker with custom priorities\nchat.set_speaker_function(priority_speaker, {\"financial_advisor\": 3, \"tax_expert\": 2})\n\n# Change to random dynamic speaker\nchat.set_speaker_function(\"random-dynamic-speaker\")\n
"},{"location":"swarms/structs/interactive_groupchat/#get-available-speaker-functions-get_available_speaker_functions","title":"Get Available Speaker Functions (get_available_speaker_functions
)","text":"Description:
Returns a list of all available built-in speaker function names.
Arguments: None
Returns: List[str] - Names of the available built-in speaker functions
Example:
available_functions = chat.get_available_speaker_functions()\nprint(available_functions)\n# Output: ['round-robin-speaker', 'random-speaker', 'priority-speaker', 'random-dynamic-speaker']\n
"},{"location":"swarms/structs/interactive_groupchat/#get-current-speaker-function-get_current_speaker_function","title":"Get Current Speaker Function (get_current_speaker_function
)","text":"Description:
Returns the name of the currently active speaker function.
Arguments: None
Returns: str - The name of the currently active speaker function
Example:
current_function = chat.get_current_speaker_function()\nprint(current_function) # Output: \"round-robin-speaker\"\n
"},{"location":"swarms/structs/interactive_groupchat/#set-priorities-set_priorities","title":"Set Priorities (set_priorities
)","text":"Description:
Sets agent priorities for priority-based speaking order.
Arguments:
priorities
(dict): Dictionary mapping agent names to priority weightsExample:
# Set agent priorities (higher numbers = higher priority)\nchat.set_priorities({\n \"financial_advisor\": 5,\n \"tax_expert\": 3,\n \"investment_analyst\": 1\n})\n
"},{"location":"swarms/structs/interactive_groupchat/#set-dynamic-strategy-set_dynamic_strategy","title":"Set Dynamic Strategy (set_dynamic_strategy
)","text":"Description:
Sets the strategy for the random-dynamic-speaker function.
Arguments:
strategy
(str): Either \"sequential\" or \"parallel\"Example:
# Set to sequential strategy (one agent at a time)\nchat.set_dynamic_strategy(\"sequential\")\n\n# Set to parallel strategy (all mentioned agents respond simultaneously)\nchat.set_dynamic_strategy(\"parallel\")\n
"},{"location":"swarms/structs/interactive_groupchat/#extract-mentions-_extract_mentions","title":"Extract Mentions (_extract_mentions
)","text":"Description:
Internal method that extracts @mentions from a task. Used by the run method to identify which agents should respond.
Arguments:
task
(str): The input task to extract mentions fromReturns: List[str] - Names of the agents mentioned in the task
Example:
# Internal usage example (not typically called directly)\nchat = InteractiveGroupChat(agents=[financial_advisor, tax_expert])\nmentions = chat._extract_mentions(\"@FinancialAdvisor and @TaxExpert, please help\")\nprint(mentions) # ['FinancialAdvisor', 'TaxExpert']\n
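The mention extraction shown above can be reproduced with a short regular-expression sketch. `extract_mentions` here is a hypothetical illustration of the documented behavior (find @name tokens, keep only registered agents), not the library's source.

```python
import re
from typing import List

def extract_mentions(task: str, valid_agents: List[str]) -> List[str]:
    # Pull @name tokens out of the task, keeping only registered agents.
    candidates = re.findall(r"@(\w+)", task)
    return [name for name in candidates if name in valid_agents]

agents = ["FinancialAdvisor", "TaxExpert"]
print(extract_mentions("@FinancialAdvisor and @TaxExpert, please help", agents))
```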
"},{"location":"swarms/structs/interactive_groupchat/#validate-initialization-_validate_initialization","title":"Validate Initialization (_validate_initialization
)","text":"Description:
Internal method that validates the group chat configuration during initialization.
Arguments: None
Example:
# Internal validation happens automatically during initialization\nchat = InteractiveGroupChat(\n agents=[financial_advisor], # Valid: at least one agent\n max_loops=5 # Valid: positive number\n)\n
"},{"location":"swarms/structs/interactive_groupchat/#setup-conversation-context-_setup_conversation_context","title":"Setup Conversation Context (_setup_conversation_context
)","text":"Description:
Internal method that sets up the initial conversation context with group chat information.
Arguments:
None
Example:
# Context is automatically set up during initialization\nchat = InteractiveGroupChat(\n name=\"Investment Team\",\n description=\"Expert investment advice\",\n agents=[financial_advisor, tax_expert]\n)\n# The conversation context now includes chat name, description, and agent info\n
"},{"location":"swarms/structs/interactive_groupchat/#update-agent-prompts-_update_agent_prompts","title":"Update Agent Prompts (_update_agent_prompts
)","text":"Description:
Internal method that updates each agent's system prompt with information about other agents and the group chat. This includes enhanced collaborative instructions that teach agents how to acknowledge, build upon, and synthesize each other's responses.
Arguments:
None
Example:
# Agent prompts are automatically updated during initialization\nchat = InteractiveGroupChat(agents=[financial_advisor, tax_expert])\n# Each agent now knows about the other participants and how to collaborate effectively\n
"},{"location":"swarms/structs/interactive_groupchat/#get-speaking-order-_get_speaking_order","title":"Get Speaking Order (_get_speaking_order
)","text":"Description:
Internal method that determines the speaking order using the configured speaker function.
Arguments:
mentioned_agents
(List[str]): List of agent names that were mentionedReturns: List[str] - The mentioned agent names arranged in speaking order
Example:
# Internal usage (not typically called directly)\nmentioned = [\"financial_advisor\", \"tax_expert\"]\norder = chat._get_speaking_order(mentioned)\nprint(order) # Order determined by speaker function\n
"},{"location":"swarms/structs/interactive_groupchat/#speaker-functions","title":"Speaker Functions","text":"InteractiveGroupChat supports various speaker functions that control the order in which agents respond when multiple agents are mentioned.
"},{"location":"swarms/structs/interactive_groupchat/#built-in-speaker-functions","title":"Built-in Speaker Functions","text":""},{"location":"swarms/structs/interactive_groupchat/#round-robin-speaker-round_robin_speaker","title":"Round Robin Speaker (round_robin_speaker
)","text":"Agents speak in a fixed order, cycling through the list in sequence.
from swarms.structs.interactive_groupchat import InteractiveGroupChat, round_robin_speaker\n\nchat = InteractiveGroupChat(\n agents=agents,\n speaker_function=round_robin_speaker,\n interactive=False,\n)\n
Behavior:
Agents speak in the order they were mentioned
Maintains state between calls to continue the cycle
Predictable and fair distribution of speaking turns
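The stateful cycling described above can be sketched in a few lines. This `round_robin_speaker` is an illustrative stand-in with an assumed signature (agent list plus a mutable state dict), not the library's implementation.

```python
from typing import Dict, List

def round_robin_speaker(agents: List[str], state: Dict[str, int]) -> str:
    index = state.get("current_index", 0) % len(agents)
    state["current_index"] = index + 1  # persist position between calls
    return agents[index]

state = {"current_index": 0}
team = ["analyst", "researcher", "strategist"]
order = [round_robin_speaker(team, state) for _ in range(4)]
print(order)
```

Because the index lives in `state`, the cycle continues across tasks instead of restarting at the first agent each time.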
"},{"location":"swarms/structs/interactive_groupchat/#random-speaker-random_speaker","title":"Random Speaker (random_speaker
)","text":"Agents speak in random order each time.
from swarms.structs.interactive_groupchat import InteractiveGroupChat, random_speaker\n\nchat = InteractiveGroupChat(\n agents=agents,\n speaker_function=random_speaker,\n interactive=False,\n)\n
Behavior:
Speaking order is randomized for each task
Provides variety and prevents bias toward first-mentioned agents
Good for brainstorming sessions
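Randomizing the order per task is a one-shuffle operation. This `random_speaker` sketch (returning a full shuffled order rather than a single name) is illustrative, not the library's source.

```python
import random
from typing import List

def random_speaker(agents: List[str]) -> List[str]:
    order = list(agents)      # copy so the caller's list is untouched
    random.shuffle(order)     # fresh order for every task
    return order

print(random_speaker(["analyst", "researcher", "strategist"]))
```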
"},{"location":"swarms/structs/interactive_groupchat/#priority-speaker-priority_speaker","title":"Priority Speaker (priority_speaker
)","text":"Agents speak based on priority weights assigned to each agent.
from swarms.structs.interactive_groupchat import InteractiveGroupChat, priority_speaker\n\nchat = InteractiveGroupChat(\n agents=agents,\n speaker_function=priority_speaker,\n speaker_state={\"priorities\": {\"financial_advisor\": 3, \"tax_expert\": 2, \"analyst\": 1}},\n interactive=False,\n)\n
Behavior:
Higher priority agents speak first
Uses weighted probability for selection
Good for hierarchical teams or expert-led discussions
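The weighted-probability selection described above can be sketched with `random.choices`. This `priority_speaker` is an illustrative stand-in with an assumed signature, not the library's implementation; unlisted agents default to weight 1.

```python
import random
from typing import Dict, List

def priority_speaker(agents: List[str], priorities: Dict[str, int]) -> str:
    # Higher weight -> more likely to be chosen to speak first.
    weights = [priorities.get(name, 1) for name in agents]
    return random.choices(agents, weights=weights, k=1)[0]

team = ["financial_advisor", "tax_expert", "analyst"]
print(priority_speaker(team, {"financial_advisor": 3, "tax_expert": 2, "analyst": 1}))
```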
"},{"location":"swarms/structs/interactive_groupchat/#random-dynamic-speaker-random_dynamic_speaker","title":"Random Dynamic Speaker (random_dynamic_speaker
)","text":"Advanced speaker function that follows @mentions in agent responses, enabling dynamic conversation flow.
from swarms.structs.interactive_groupchat import InteractiveGroupChat, random_dynamic_speaker\n\nchat = InteractiveGroupChat(\n agents=agents,\n speaker_function=random_dynamic_speaker,\n speaker_state={\"strategy\": \"parallel\"}, # or \"sequential\"\n interactive=False,\n)\n
Behavior: Follows @mentions embedded in agent responses to select the next speaker(s); supports both sequential and parallel strategies for how the mentioned agents respond.
Example Dynamic Flow:
# Agent A responds: \"I think @AgentB should analyze this data and @AgentC should review the methodology\"\n# With sequential strategy: Agent B speaks next\n# With parallel strategy: Both Agent B and Agent C speak simultaneously\n
Use Cases: - Complex problem-solving where agents need to delegate to specific experts - Dynamic workflows where the conversation flow depends on agent responses - Collaborative decision-making processes
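The dynamic flow above reduces to scanning the last response for @mentions and applying the strategy. `next_speakers` is an illustrative sketch of that idea, not the library's source.

```python
import re
from typing import List

def next_speakers(last_response: str, agents: List[str], strategy: str) -> List[str]:
    # Agents mentioned in the previous response become the next speakers.
    mentioned = [m for m in re.findall(r"@(\w+)", last_response) if m in agents]
    if strategy == "sequential":
        return mentioned[:1]  # one mentioned agent speaks next
    return mentioned          # "parallel": all mentioned agents respond

panel = ["AgentA", "AgentB", "AgentC"]
response = "I think @AgentB should analyze this data and @AgentC should review the methodology"
print(next_speakers(response, panel, "sequential"))
print(next_speakers(response, panel, "parallel"))
```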
"},{"location":"swarms/structs/interactive_groupchat/#custom-speaker-functions","title":"Custom Speaker Functions","text":"You can create your own speaker functions to implement custom logic:
from typing import List\n\ndef custom_speaker(agents: List[str], **kwargs) -> str:\n \"\"\"\n Custom speaker function that selects agents based on specific criteria.\n\n Args:\n agents: List of agent names\n **kwargs: Additional arguments (context, time, etc.)\n\n Returns:\n Selected agent name\n \"\"\"\n # Your custom logic here\n if \"urgent\" in kwargs.get(\"context\", \"\"):\n return \"emergency_agent\" if \"emergency_agent\" in agents else agents[0]\n\n # Default to first agent\n return agents[0]\n\n# Use custom speaker function\nchat = InteractiveGroupChat(\n agents=agents,\n speaker_function=custom_speaker,\n interactive=False,\n)\n
"},{"location":"swarms/structs/interactive_groupchat/#dynamic-speaker-function-changes","title":"Dynamic Speaker Function Changes","text":"You can change the speaker function during runtime:
# Start with round robin\nchat = InteractiveGroupChat(\n agents=agents,\n speaker_function=round_robin_speaker,\n interactive=False,\n)\n\n# Change to random\nchat.set_speaker_function(random_speaker)\n\n# Change to priority with custom priorities\nchat.set_priorities({\"financial_advisor\": 5, \"tax_expert\": 3, \"analyst\": 1})\nchat.set_speaker_function(priority_speaker)\n\n# Change to dynamic speaker with parallel strategy\nchat.set_speaker_function(\"random-dynamic-speaker\")\nchat.set_dynamic_strategy(\"parallel\")\n
"},{"location":"swarms/structs/interactive_groupchat/#enhanced-collaborative-behavior","title":"Enhanced Collaborative Behavior","text":"The InteractiveGroupChat now includes enhanced collaborative prompts that ensure agents work together effectively.
"},{"location":"swarms/structs/interactive_groupchat/#collaborative-response-protocol","title":"Collaborative Response Protocol","text":"Every agent receives instructions to:
Agents are guided to structure their responses as:
task = \"Analyze our Q3 performance. @analyst @researcher @strategist\"\n\n# Expected collaborative behavior:\n# Analyst: \"Based on the data analysis, I can see clear growth trends in Q3...\"\n# Researcher: \"Building on @analyst's data insights, I can add that market research shows...\"\n# Strategist: \"Synthesizing @analyst's data and @researcher's market insights, I recommend...\"\n
"},{"location":"swarms/structs/interactive_groupchat/#error-classes","title":"Error Classes","text":""},{"location":"swarms/structs/interactive_groupchat/#interactivegroupchaterror","title":"InteractiveGroupChatError","text":"Description:
Base exception class for InteractiveGroupChat errors.
Example:
try:\n # Some operation that might fail\n chat.run(\"@InvalidAgent hello\")\nexcept InteractiveGroupChatError as e:\n print(f\"Chat error occurred: {e}\")\n
"},{"location":"swarms/structs/interactive_groupchat/#agentnotfounderror","title":"AgentNotFoundError","text":"Description:
Raised when a mentioned agent is not found in the group.
Example:
try:\n chat.run(\"@NonExistentAgent hello!\")\nexcept AgentNotFoundError as e:\n print(f\"Agent not found: {e}\")\n
"},{"location":"swarms/structs/interactive_groupchat/#nomentionedagentserror","title":"NoMentionedAgentsError","text":"Description:
Raised when no agents are mentioned in the task.
Example:
try:\n chat.run(\"Hello everyone!\") # No @mentions\nexcept NoMentionedAgentsError as e:\n print(f\"No agents mentioned: {e}\")\n
"},{"location":"swarms/structs/interactive_groupchat/#invalidtaskformaterror","title":"InvalidTaskFormatError","text":"Description:
Raised when the task format is invalid.
Example:
try:\n chat.run(\"@Invalid@Format\")\nexcept InvalidTaskFormatError as e:\n print(f\"Invalid task format: {e}\")\n
"},{"location":"swarms/structs/interactive_groupchat/#invalidspeakerfunctionerror","title":"InvalidSpeakerFunctionError","text":"Description:
Raised when an invalid speaker function is provided.
Example:
def invalid_speaker(agents, **kwargs):\n return 123 # Should return string, not int\n\ntry:\n chat = InteractiveGroupChat(\n agents=agents,\n speaker_function=invalid_speaker,\n )\nexcept InvalidSpeakerFunctionError as e:\n print(f\"Invalid speaker function: {e}\")\n
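Since the page states that InteractiveGroupChatError is the base class, one except clause can catch any of these errors. The classes below are stand-in sketches of the implied hierarchy, not the library's source.

```python
class InteractiveGroupChatError(Exception):
    """Base class for InteractiveGroupChat errors (per this page)."""

class AgentNotFoundError(InteractiveGroupChatError):
    """A mentioned agent is not in the group."""

class NoMentionedAgentsError(InteractiveGroupChatError):
    """The task contains no @mentions."""

class InvalidSpeakerFunctionError(InteractiveGroupChatError):
    """A speaker function was invalid or returned an invalid value."""

try:
    raise AgentNotFoundError("@NonExistentAgent is not registered")
except InteractiveGroupChatError as e:
    print(f"Chat error occurred: {e}")
```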
"},{"location":"swarms/structs/interactive_groupchat/#best-practices","title":"Best Practices","text":"Best Practice Description Example Agent Naming Use clear, unique names for agents to avoid confusion financial_advisor
, tax_expert
Task Format Always use @mentions to direct tasks to specific agents @financial_advisor What's your investment advice?
Speaker Functions Choose appropriate speaker functions for your use case Round robin for fairness, priority for expert-led discussions Dynamic Speaker Use random-dynamic-speaker for complex workflows with delegation When agents need to call on specific experts Strategy Selection Choose sequential for focused discussions, parallel for brainstorming Sequential for analysis, parallel for idea generation Collaborative Design Design agents with complementary expertise for better collaboration Analyst + Researcher + Strategist Error Handling Implement proper error handling for various scenarios try/except
blocks for AgentNotFoundError
Context Management Be aware that agents can see the full conversation history Monitor conversation length and relevance Resource Management Consider the number of agents and task length to optimize performance Limit max_loops and task size Dynamic Adaptation Change speaker functions based on different phases of work Round robin for brainstorming, priority for decision-making"},{"location":"swarms/structs/interactive_groupchat/#usage-examples","title":"Usage Examples","text":""},{"location":"swarms/structs/interactive_groupchat/#basic-multi-agent-collaboration","title":"Basic Multi-Agent Collaboration","text":"from swarms import Agent\nfrom swarms.structs.interactive_groupchat import InteractiveGroupChat, round_robin_speaker\n\n# Create specialized agents\nanalyst = Agent(\n agent_name=\"analyst\",\n system_prompt=\"You are a data analyst specializing in business intelligence.\",\n llm=\"gpt-3.5-turbo\",\n)\n\nresearcher = Agent(\n agent_name=\"researcher\", \n system_prompt=\"You are a market researcher with expertise in consumer behavior.\",\n llm=\"gpt-3.5-turbo\",\n)\n\nstrategist = Agent(\n agent_name=\"strategist\",\n system_prompt=\"You are a strategic consultant who synthesizes insights into actionable recommendations.\",\n llm=\"gpt-3.5-turbo\",\n)\n\n# Create collaborative group chat\nchat = InteractiveGroupChat(\n name=\"Business Analysis Team\",\n description=\"A collaborative team for comprehensive business analysis\",\n agents=[analyst, researcher, strategist],\n speaker_function=round_robin_speaker,\n interactive=False,\n)\n\n# Collaborative analysis task\ntask = \"\"\"Analyze our company's Q3 performance. We have the following data:\n- Revenue: $2.5M (up 15% from Q2)\n- Customer acquisition cost: $45 (down 8% from Q2)\n- Market share: 3.2% (up 0.5% from Q2)\n\n@analyst @researcher @strategist please provide a comprehensive analysis.\"\"\"\n\nresponse = chat.run(task)\nprint(response)\n
"},{"location":"swarms/structs/interactive_groupchat/#priority-based-expert-consultation","title":"Priority-Based Expert Consultation","text":"from swarms.structs.interactive_groupchat import InteractiveGroupChat, priority_speaker\n\n# Create expert agents with different priority levels\nsenior_expert = Agent(\n agent_name=\"senior_expert\",\n system_prompt=\"You are a senior consultant with 15+ years of experience.\",\n llm=\"gpt-4\",\n)\n\njunior_expert = Agent(\n agent_name=\"junior_expert\",\n system_prompt=\"You are a junior consultant with 3 years of experience.\",\n llm=\"gpt-3.5-turbo\",\n)\n\nassistant = Agent(\n agent_name=\"assistant\",\n system_prompt=\"You are a research assistant who gathers supporting information.\",\n llm=\"gpt-3.5-turbo\",\n)\n\n# Create priority-based group chat\nchat = InteractiveGroupChat(\n name=\"Expert Consultation Team\",\n description=\"Expert-led consultation with collaborative input\",\n agents=[senior_expert, junior_expert, assistant],\n speaker_function=priority_speaker,\n speaker_state={\"priorities\": {\"senior_expert\": 5, \"junior_expert\": 3, \"assistant\": 1}},\n interactive=False,\n)\n\n# Expert consultation task\ntask = \"\"\"We need strategic advice on entering a new market. \n@senior_expert @junior_expert @assistant please provide your insights.\"\"\"\n\nresponse = chat.run(task)\nprint(response)\n
"},{"location":"swarms/structs/interactive_groupchat/#dynamic-speaker-function-with-delegation","title":"Dynamic Speaker Function with Delegation","text":"from swarms.structs.interactive_groupchat import InteractiveGroupChat, random_dynamic_speaker\n\n# Create specialized medical agents\ncardiologist = Agent(\n agent_name=\"cardiologist\",\n system_prompt=\"You are a cardiologist specializing in heart conditions.\",\n llm=\"gpt-4\",\n)\n\noncologist = Agent(\n agent_name=\"oncologist\",\n system_prompt=\"You are an oncologist specializing in cancer treatment.\",\n llm=\"gpt-4\",\n)\n\nendocrinologist = Agent(\n agent_name=\"endocrinologist\",\n system_prompt=\"You are an endocrinologist specializing in hormone disorders.\",\n llm=\"gpt-4\",\n)\n\n# Create dynamic group chat\nchat = InteractiveGroupChat(\n name=\"Medical Panel Discussion\",\n description=\"A collaborative panel of medical specialists\",\n agents=[cardiologist, oncologist, endocrinologist],\n speaker_function=random_dynamic_speaker,\n speaker_state={\"strategy\": \"sequential\"},\n interactive=False,\n)\n\n# Complex medical case with dynamic delegation\ncase = \"\"\"CASE PRESENTATION:\nA 65-year-old male with Type 2 diabetes, hypertension, and recent diagnosis of \nstage 3 colon cancer presents with chest pain and shortness of breath. \nECG shows ST-segment elevation. Recent blood work shows elevated blood glucose (280 mg/dL) \nand signs of infection (WBC 15,000, CRP elevated).\n\n@cardiologist @oncologist @endocrinologist please provide your assessment and treatment recommendations.\"\"\"\n\nresponse = chat.run(case)\nprint(response)\n
"},{"location":"swarms/structs/interactive_groupchat/#dynamic-speaker-function-changes_1","title":"Dynamic Speaker Function Changes","text":"from swarms.structs.interactive_groupchat import (\n InteractiveGroupChat, \n round_robin_speaker, \n random_speaker, \n priority_speaker,\n random_dynamic_speaker\n)\n\n# Create brainstorming agents\ncreative_agent = Agent(agent_name=\"creative\", system_prompt=\"You are a creative thinker.\")\nanalytical_agent = Agent(agent_name=\"analytical\", system_prompt=\"You are an analytical thinker.\")\npractical_agent = Agent(agent_name=\"practical\", system_prompt=\"You are a practical implementer.\")\n\nchat = InteractiveGroupChat(\n name=\"Dynamic Team\",\n agents=[creative_agent, analytical_agent, practical_agent],\n speaker_function=round_robin_speaker,\n interactive=False,\n)\n\n# Phase 1: Brainstorming (random order)\nchat.set_speaker_function(random_speaker)\ntask1 = \"Let's brainstorm new product ideas. @creative @analytical @practical\"\nresponse1 = chat.run(task1)\n\n# Phase 2: Analysis (priority order)\nchat.set_priorities({\"analytical\": 3, \"creative\": 2, \"practical\": 1})\nchat.set_speaker_function(priority_speaker)\ntask2 = \"Now let's analyze the feasibility of these ideas. @creative @analytical @practical\"\nresponse2 = chat.run(task2)\n\n# Phase 3: Dynamic delegation (agents mention each other)\nchat.set_speaker_function(random_dynamic_speaker)\nchat.set_dynamic_strategy(\"sequential\")\ntask3 = \"Let's plan implementation with dynamic delegation. @creative @analytical @practical\"\nresponse3 = chat.run(task3)\n\n# Phase 4: Final synthesis (round robin for equal input)\nchat.set_speaker_function(round_robin_speaker)\ntask4 = \"Finally, let's synthesize our findings. @creative @analytical @practical\"\nresponse4 = chat.run(task4)\n
"},{"location":"swarms/structs/interactive_groupchat/#custom-speaker-function","title":"Custom Speaker Function","text":"from typing import List\n\ndef context_aware_speaker(agents: List[str], **kwargs) -> str:\n \"\"\"Custom speaker function that selects agents based on context.\"\"\"\n context = kwargs.get(\"context\", \"\").lower()\n\n if \"data\" in context or \"analysis\" in context:\n return \"analyst\" if \"analyst\" in agents else agents[0]\n elif \"market\" in context or \"research\" in context:\n return \"researcher\" if \"researcher\" in agents else agents[0]\n elif \"strategy\" in context or \"planning\" in context:\n return \"strategist\" if \"strategist\" in agents else agents[0]\n else:\n return agents[0]\n\n# Use custom speaker function\nchat = InteractiveGroupChat(\n name=\"Context-Aware Team\",\n agents=[analyst, researcher, strategist],\n speaker_function=context_aware_speaker,\n interactive=False,\n)\n\n# The speaker function will automatically select the most appropriate agent\ntask = \"We need to analyze our market position and develop a strategy.\"\nresponse = chat.run(task)\n
"},{"location":"swarms/structs/interactive_groupchat/#interactive-session-with-enhanced-collaboration","title":"Interactive Session with Enhanced Collaboration","text":"# Create agents designed for collaboration\ndata_scientist = Agent(\n agent_name=\"data_scientist\",\n system_prompt=\"You are a data scientist. When collaborating, always reference specific data points and build upon others' insights with quantitative support.\",\n llm=\"gpt-4\",\n)\n\nbusiness_analyst = Agent(\n agent_name=\"business_analyst\",\n system_prompt=\"You are a business analyst. When collaborating, always connect business insights to practical implications and build upon data analysis with business context.\",\n llm=\"gpt-3.5-turbo\",\n)\n\nproduct_manager = Agent(\n agent_name=\"product_manager\",\n system_prompt=\"You are a product manager. When collaborating, always synthesize insights from all team members and provide actionable product recommendations.\",\n llm=\"gpt-3.5-turbo\",\n)\n\n# Start interactive session\nchat = InteractiveGroupChat(\n name=\"Product Development Team\",\n description=\"A collaborative team for product development decisions\",\n agents=[data_scientist, business_analyst, product_manager],\n speaker_function=round_robin_speaker,\n interactive=True,\n)\n\n# Start the interactive session\nchat.start_interactive_session()\n
"},{"location":"swarms/structs/interactive_groupchat/#benefits-and-use-cases","title":"Benefits and Use Cases","text":""},{"location":"swarms/structs/interactive_groupchat/#benefits-of-enhanced-collaboration","title":"Benefits of Enhanced Collaboration","text":"Contributions are welcome! Please read our contributing guidelines and submit pull requests to our GitHub repository.
"},{"location":"swarms/structs/interactive_groupchat/#license","title":"License","text":"This project is licensed under the Apache License - see the LICENSE file for details.
"},{"location":"swarms/structs/majorityvoting/","title":"MajorityVoting Module Documentation","text":"The MajorityVoting
module provides a mechanism for performing majority voting among a group of agents. Majority voting is a decision rule that selects the option which has the majority of votes. This is particularly useful in systems where multiple agents provide responses to a query, and the most common response needs to be identified as the final output.
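The decision rule itself is simple to state. As an illustrative sketch only (not the module's actual implementation), a majority vote over string responses can be computed with `collections.Counter`:

```python
from collections import Counter

def majority_vote(responses: list[str]) -> str:
    """Return the most frequent response; ties go to the earliest response seen."""
    return Counter(responses).most_common(1)[0][0]

votes = ["Buy index funds", "Buy index funds", "Buy gold"]
print(majority_vote(votes))  # Buy index funds
```

In practice, exact agreement between free-form LLM outputs is rare, which is why the class also supports a `consensus_agent` to judge near-matching responses.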
graph TD\n A[MajorityVoting System] --> B[Initialize Agents]\n B --> C[Process Task]\n C --> D{Execution Mode}\n D --> E[Single Task]\n D --> F[Batch Tasks]\n D --> G[Concurrent Tasks]\n D --> H[Async Tasks]\n E --> I[Run Agents]\n F --> I\n G --> I\n H --> I\n I --> J[Collect Responses]\n J --> K[Consensus Analysis]\n K --> L{Consensus Agent?}\n L -->|Yes| M[Use Consensus Agent]\n L -->|No| N[Use Last Agent]\n M --> O[Final Output]\n N --> O\n O --> P[Save Conversation]
"},{"location":"swarms/structs/majorityvoting/#key-concepts","title":"Key Concepts","text":"MajorityVoting
","text":""},{"location":"swarms/structs/majorityvoting/#parameters","title":"Parameters","text":"Parameter Type Description name
str
Name of the majority voting system. Default is \"MajorityVoting\". description
str
Description of the system. Default is \"A majority voting system for agents\". agents
List[Agent]
A list of agents to be used in the majority voting system. output_parser
Callable
Function to parse agent outputs. Default is majority_voting
function. consensus_agent
Agent
Optional agent for analyzing consensus among responses. autosave
bool
Whether to autosave conversations. Default is False
. verbose
bool
Whether to enable verbose logging. Default is False
. max_loops
int
Maximum number of voting loops. Default is 1."},{"location":"swarms/structs/majorityvoting/#methods","title":"Methods","text":""},{"location":"swarms/structs/majorityvoting/#runtask-str-correct_answer-str-args-kwargs-listany","title":"run(task: str, correct_answer: str, *args, **kwargs) -> List[Any]
","text":"Runs the majority voting system for a single task.
Parameters: - task
(str): The task to be performed by the agents - correct_answer
(str): The correct answer for evaluation - *args
, **kwargs
: Additional arguments
Returns: - List[Any]: The conversation history, including the final majority vote
"},{"location":"swarms/structs/majorityvoting/#batch_runtasks-liststr-args-kwargs-listany","title":"batch_run(tasks: List[str], *args, **kwargs) -> List[Any]
","text":"Runs multiple tasks in sequence.
Parameters: - tasks
(List[str]): List of tasks to be performed - *args
, **kwargs
: Additional arguments
Returns: - List[Any]: List of majority votes for each task
"},{"location":"swarms/structs/majorityvoting/#run_concurrentlytasks-liststr-args-kwargs-listany","title":"run_concurrently(tasks: List[str], *args, **kwargs) -> List[Any]
","text":"Runs multiple tasks concurrently using thread pooling.
Parameters: - tasks
(List[str]): List of tasks to be performed - *args
, **kwargs
: Additional arguments
Returns: - List[Any]: List of majority votes for each task
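The thread-pooling pattern behind this method can be sketched as follows; `run_single` is a hypothetical stand-in for a full majority-voting pass over one task, not the class's real internals:

```python
from concurrent.futures import ThreadPoolExecutor

def run_single(task: str) -> str:
    # Hypothetical stand-in: run all agents on one task and take the majority vote
    return f"majority vote for: {task}"

def run_concurrently(tasks: list[str]) -> list[str]:
    # executor.map preserves input order, so results[i] corresponds to tasks[i]
    with ThreadPoolExecutor() as executor:
        return list(executor.map(run_single, tasks))

print(run_concurrently(["task A", "task B"]))
# ['majority vote for: task A', 'majority vote for: task B']
```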
"},{"location":"swarms/structs/majorityvoting/#run_asynctasks-liststr-args-kwargs-listany","title":"run_async(tasks: List[str], *args, **kwargs) -> List[Any]
","text":"Runs multiple tasks asynchronously using asyncio.
Parameters: - tasks
(List[str]): List of tasks to be performed - *args
, **kwargs
: Additional arguments
Returns: - List[Any]: List of majority votes for each task
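The asyncio variant follows the standard gather pattern; as before, `run_single` below is a hypothetical stand-in for one majority-voting pass, not the library's internals:

```python
import asyncio

async def run_single(task: str) -> str:
    # Hypothetical stand-in for one majority-voting pass (awaiting agent calls)
    await asyncio.sleep(0)
    return f"majority vote for: {task}"

async def run_async(tasks: list[str]) -> list[str]:
    # asyncio.gather schedules all coroutines at once and preserves input order
    return await asyncio.gather(*(run_single(t) for t in tasks))

print(asyncio.run(run_async(["task A", "task B"])))
```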
"},{"location":"swarms/structs/majorityvoting/#usage-examples","title":"Usage Examples","text":""},{"location":"swarms/structs/majorityvoting/#example-1-basic-single-task-execution-with-modern-llms","title":"Example 1: Basic Single Task Execution with Modern LLMs","text":"from swarms import Agent, MajorityVoting\n\n# Initialize multiple agents with different specialties\nagents = [\n Agent(\n agent_name=\"Financial-Analysis-Agent\",\n agent_description=\"Personal finance advisor focused on market analysis\",\n system_prompt=\"You are a financial advisor specializing in market analysis and investment opportunities.\",\n max_loops=1,\n model_name=\"gpt-4o\"\n ),\n Agent(\n agent_name=\"Risk-Assessment-Agent\", \n agent_description=\"Risk analysis and portfolio management expert\",\n system_prompt=\"You are a risk assessment expert focused on evaluating investment risks and portfolio diversification.\",\n max_loops=1,\n model_name=\"gpt-4o\"\n ),\n Agent(\n agent_name=\"Tech-Investment-Agent\",\n agent_description=\"Technology sector investment specialist\",\n system_prompt=\"You are a technology investment specialist focused on AI, emerging tech, and growth opportunities.\",\n max_loops=1,\n model_name=\"gpt-4o\"\n )\n]\n\n\nconsensus_agent = Agent(\n agent_name=\"Consensus-Agent\",\n agent_description=\"Consensus agent focused on analyzing investment advice\",\n system_prompt=\"You are a consensus agent focused on analyzing investment advice and providing a final answer.\",\n max_loops=1,\n model_name=\"gpt-4o\"\n)\n\n# Create majority voting system\nmajority_voting = MajorityVoting(\n name=\"Investment-Advisory-System\",\n description=\"Multi-agent system for investment advice\",\n agents=agents,\n verbose=True,\n consensus_agent=consensus_agent\n)\n\n# Run the analysis with majority voting\nresult = majority_voting.run(\n task=\"Create a table of super high growth opportunities for AI. I have $40k to invest in ETFs, index funds, and more. 
Please create a table in markdown.\",\n correct_answer=\"\" # Optional evaluation metric\n)\n\nprint(result)\n
"},{"location":"swarms/structs/majorityvoting/#batch-execution","title":"Batch Execution","text":"from swarms import Agent, MajorityVoting\n\n# Initialize multiple agents with different specialties\nagents = [\n Agent(\n agent_name=\"Financial-Analysis-Agent\",\n agent_description=\"Personal finance advisor focused on market analysis\",\n system_prompt=\"You are a financial advisor specializing in market analysis and investment opportunities.\",\n max_loops=1,\n model_name=\"gpt-4o\"\n ),\n Agent(\n agent_name=\"Risk-Assessment-Agent\",\n agent_description=\"Risk analysis and portfolio management expert\",\n system_prompt=\"You are a risk assessment expert focused on evaluating investment risks and portfolio diversification.\",\n max_loops=1,\n model_name=\"gpt-4o\"\n ),\n Agent(\n agent_name=\"Tech-Investment-Agent\",\n agent_description=\"Technology sector investment specialist\",\n system_prompt=\"You are a technology investment specialist focused on AI, emerging tech, and growth opportunities.\",\n max_loops=1,\n model_name=\"gpt-4o\"\n )\n]\n\n\nconsensus_agent = Agent(\n agent_name=\"Consensus-Agent\",\n agent_description=\"Consensus agent focused on analyzing investment advice\",\n system_prompt=\"You are a consensus agent focused on analyzing investment advice and providing a final answer.\",\n max_loops=1,\n model_name=\"gpt-4o\"\n)\n\n# Create majority voting system\nmajority_voting = MajorityVoting(\n name=\"Investment-Advisory-System\",\n description=\"Multi-agent system for investment advice\",\n agents=agents,\n verbose=True,\n consensus_agent=consensus_agent\n)\n\n# Run the batch analysis with majority voting (batch_run takes a list of tasks)\nresult = majority_voting.batch_run(\n tasks=[\"Create a table of super high growth opportunities for AI. I have $40k to invest in ETFs, index funds, and more. Please create a table in markdown.\"],\n correct_answer=\"\" # Optional evaluation metric\n)\n\nprint(result)\n
"},{"location":"swarms/structs/malt/","title":"MALT: Multi-Agent Learning Task Framework","text":""},{"location":"swarms/structs/malt/#overview","title":"Overview","text":"MALT (Multi-Agent Learning Task) is a sophisticated orchestration framework that coordinates multiple specialized AI agents to tackle complex tasks through structured conversations. Inspired by the principles outlined in the MALT research paper, this implementation provides a reliable, extensible system for multi-agent collaboration.
The framework is designed around a three-agent architecture:
Creator Agent: Generates initial content or solutions
Verifier Agent: Critically evaluates the creator's output
Refiner Agent: Improves the solution based on verifier feedback
This collaborative approach enables high-quality outputs for complex tasks by combining the strengths of multiple specialized agents, each focused on a different aspect of the problem-solving process.
"},{"location":"swarms/structs/malt/#how-it-works","title":"How It Works","text":"The MALT framework follows a structured workflow: the creator agent drafts an initial solution, the verifier agent critically evaluates it, and the refiner agent improves the solution based on that feedback.
This process can be configured to run for multiple iterations, with each cycle potentially improving the quality of the output. The system maintains a conversation history, tracking interactions between agents throughout the workflow.
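The create, verify, refine cycle with a conversation log can be sketched as a plain loop. The role functions below are toy stand-ins for LLM agents, and `malt_round` is an illustrative name, not the framework's API:

```python
def malt_round(task, create, verify, refine, max_loops=1):
    """Illustrative create -> verify -> refine loop with a conversation log."""
    history = []
    solution = create(task)
    history.append(("creator", solution))
    for _ in range(max_loops):
        feedback = verify(solution)
        history.append(("verifier", feedback))
        solution = refine(solution, feedback)
        history.append(("refiner", solution))
    return solution, history

# Toy role functions standing in for the three agents
final, log = malt_round(
    "prove X",
    create=lambda t: f"draft for {t}",
    verify=lambda s: f"critique of {s}",
    refine=lambda s, f: f"{s} (revised)",
)
print(final)     # draft for prove X (revised)
print(len(log))  # 3
```

Raising `max_loops` adds more verify/refine cycles, each appended to the history, mirroring how the real class accumulates its conversation.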
"},{"location":"swarms/structs/malt/#key-components","title":"Key Components","text":"flowchart TD\n User[User/Client] -->|Submit Task| MALT[MALT Orchestrator]\n\n subgraph MALT Framework\n MALT -->|Task| Creator[Creator Agent]\n Creator -->|Initial Solution| Conversation[Conversation Manager]\n Conversation -->|Solution| VerifierPool[Verifier Agents Pool]\n\n subgraph VerifierPool\n Verifier1[Verifier Agent 1]\n Verifier2[Verifier Agent 2]\n Verifier3[Verifier Agent 3]\n end\n\n VerifierPool -->|Verification Feedback| Conversation\n Conversation -->|Solution + Feedback| RefinerPool[Refiner Agents Pool]\n\n subgraph RefinerPool\n Refiner1[Refiner Agent 1]\n Refiner2[Refiner Agent 2]\n Refiner3[Refiner Agent 3]\n end\n\n RefinerPool -->|Refined Solutions| Conversation\n end\n\n Conversation -->|Final Output| User
"},{"location":"swarms/structs/malt/#execution-workflow","title":"Execution Workflow","text":"sequenceDiagram\n participant User\n participant MALT\n participant Creator\n participant Verifiers\n participant Refiners\n participant Conversation\n\n User->>MALT: Submit task\n MALT->>Creator: Process task\n Creator->>Conversation: Add initial solution\n\n par Verification Phase\n Conversation->>Verifiers: Send solution for verification\n Verifiers->>Conversation: Return verification feedback\n end\n\n par Refinement Phase\n Conversation->>Refiners: Send solution + feedback\n Refiners->>Conversation: Return refined solutions\n end\n\n MALT->>Conversation: Request final output\n Conversation->>MALT: Return conversation history\n MALT->>User: Return final result
"},{"location":"swarms/structs/malt/#api-reference","title":"API Reference","text":""},{"location":"swarms/structs/malt/#malt-class","title":"MALT Class","text":"The core orchestrator that manages the multi-agent interaction process.
"},{"location":"swarms/structs/malt/#constructor-parameters","title":"Constructor Parameters","text":"Parameter Type Default Descriptionmain_agent
Agent
None
The primary agent (Creator) responsible for generating initial solutions refiner_agent
Agent
None
The agent that refines solutions based on verification feedback verifier_agent
Agent
None
The agent that verifies and evaluates solutions max_loops
int
1
Maximum number of iterations for the task execution return_list
bool
False
Flag to return output as a list return_dict
bool
False
Flag to return output as a dictionary agents
list[Agent]
[]
Alternative list of agents to use in the task preset_agents
bool
True
Use default preset agents for mathematical proofs"},{"location":"swarms/structs/malt/#methods","title":"Methods","text":"Method Parameters Return Type Description reliability_check
None None Validates agent configuration and parameters step
task: str, img: str = None, *args, **kwargs
str
or list
or dict
Executes a single iteration of the MALT workflow run
task: str, img: str = None, *args, **kwargs
str
or list
or dict
Executes the complete MALT workflow for a task run_batched
tasks: List[str], *args, **kwargs
List[str]
or List[list]
or List[dict]
Sequentially processes multiple tasks run_concurrently
tasks: List[str], *args, **kwargs
concurrent.futures.Future
Processes multiple tasks in parallel using ThreadPoolExecutor __call__
task: str, *args, **kwargs
Same as run
Allows the MALT instance to be called as a function __str__
None str
Returns the conversation history as a string __repr__
None str
Returns the conversation history as a string"},{"location":"swarms/structs/malt/#sample-implementations","title":"Sample Implementations","text":""},{"location":"swarms/structs/malt/#default-mathematical-proof-agents","title":"Default Mathematical Proof Agents","text":"The MALT framework includes preset agents specialized for mathematical proof generation and refinement:
Each agent has a carefully designed system prompt that guides its behavior and specialization.
"},{"location":"swarms/structs/malt/#usage-examples","title":"Usage Examples","text":""},{"location":"swarms/structs/malt/#basic-usage","title":"Basic Usage","text":"from swarms.structs.agent import Agent\nfrom swarms.structs.multi_agent_exec import MALT\n\n# Initialize with preset mathematical proof agents\nmalt = MALT(preset_agents=True)\n\n# Run a mathematical proof task\nresult = malt.run(\"Develop a theorem and proof related to prime numbers and their distribution.\")\n\nprint(result)\n
"},{"location":"swarms/structs/malt/#custom-agents","title":"Custom Agents","text":"from swarms.structs.agent import Agent\nfrom swarms.structs.multi_agent_exec import MALT\n\n# Define custom agents\ncreator = Agent(\n agent_name=\"Physics-Creator\",\n model_name=\"gpt-4o-mini\",\n max_loops=1,\n system_prompt=\"You are a theoretical physicist specializing in quantum mechanics...\"\n)\n\nverifier = Agent(\n agent_name=\"Physics-Verifier\",\n model_name=\"gpt-4o-mini\",\n max_loops=1,\n system_prompt=\"You are an experimental physicist who verifies theoretical claims...\"\n)\n\nrefiner = Agent(\n agent_name=\"Physics-Communicator\",\n model_name=\"gpt-4o-mini\",\n max_loops=1,\n system_prompt=\"You excel at explaining complex physics concepts to diverse audiences...\"\n)\n\n# Initialize MALT with custom agents\nmalt = MALT(\n main_agent=creator,\n verifier_agent=verifier,\n refiner_agent=refiner,\n preset_agents=False,\n max_loops=1\n)\n\n# Run a physics explanation task\nresult = malt.run(\"Explain the quantum entanglement phenomenon and its implications.\")\n
"},{"location":"swarms/structs/malt/#concurrent-processing","title":"Concurrent Processing","text":"from swarms.structs.multi_agent_exec import MALT\n\n# Initialize MALT\nmalt = MALT()\n\n# Define multiple tasks\ntasks = [\n \"Prove a theorem related to continuous functions on compact sets.\",\n \"Develop a theorem about convergence in infinite-dimensional Hilbert spaces.\",\n \"Create a theorem relating to measure theory and Lebesgue integration.\"\n]\n\n# Process tasks concurrently\nfutures = malt.run_concurrently(tasks)\n\n# Collect results as they complete\nfor future in futures:\n result = future.result()\n print(result)\n
"},{"location":"swarms/structs/malt/#example-complex-mathematical-domain","title":"Example: Complex Mathematical Domain","text":"Here's an example of how MALT can generate, verify, and refine a mathematical proof:
"},{"location":"swarms/structs/malt/#input","title":"Input","text":"malt = MALT(preset_agents=True)\ntask = \"Develop a theorem and rigorous proof related to the convergence properties of infinite series.\"\nresult = malt.run(task)\n
"},{"location":"swarms/structs/malt/#output-flow","title":"Output Flow","text":"max_loops
based on task complexityThe MatrixSwarm
class provides a framework for managing and operating on matrices of AI agents, enabling matrix-like operations similar to linear algebra. This allows for complex agent interactions and parallel processing capabilities.
MatrixSwarm
treats AI agents as elements in a matrix, allowing for operations like addition, multiplication, and transposition. This approach enables sophisticated agent orchestration and parallel processing patterns.
pip3 install -U swarms\n
"},{"location":"swarms/structs/matrix_swarm/#basic-usage","title":"Basic Usage","text":"from swarms import Agent\nfrom swarms.matrix import MatrixSwarm\n\n# Create a 2x2 matrix of agents\nagents = [\n [Agent(agent_name=\"Agent-0-0\"), Agent(agent_name=\"Agent-0-1\")],\n [Agent(agent_name=\"Agent-1-0\"), Agent(agent_name=\"Agent-1-1\")]\n]\n\n# Initialize the matrix\nmatrix = MatrixSwarm(agents)\n
"},{"location":"swarms/structs/matrix_swarm/#class-constructor","title":"Class Constructor","text":"def __init__(self, agents: List[List[Agent]])\n
"},{"location":"swarms/structs/matrix_swarm/#parameters","title":"Parameters","text":"agents
(List[List[Agent]]
): A 2D list of Agent instances representing the matrix.ValueError
: If the input is not a valid 2D list of Agent instances.Transposes the matrix of agents by swapping rows and columns.
def transpose(self) -> MatrixSwarm\n
"},{"location":"swarms/structs/matrix_swarm/#returns","title":"Returns","text":"MatrixSwarm
: A new MatrixSwarm instance with transposed dimensions.Performs element-wise addition of two agent matrices.
def add(self, other: MatrixSwarm) -> MatrixSwarm\n
"},{"location":"swarms/structs/matrix_swarm/#parameters_1","title":"Parameters","text":"other
(MatrixSwarm
): Another MatrixSwarm instance to add.MatrixSwarm
: A new MatrixSwarm resulting from the addition.ValueError
: If matrix dimensions are incompatible.Scales the matrix by duplicating agents along rows.
def scalar_multiply(self, scalar: int) -> MatrixSwarm\n
"},{"location":"swarms/structs/matrix_swarm/#parameters_2","title":"Parameters","text":"scalar
(int
): The multiplication factor.MatrixSwarm
: A new MatrixSwarm with scaled dimensions.Performs matrix multiplication (dot product) between two agent matrices.
def multiply(self, other: MatrixSwarm, inputs: List[str]) -> List[List[AgentOutput]]\n
"},{"location":"swarms/structs/matrix_swarm/#parameters_3","title":"Parameters","text":"other
(MatrixSwarm
): The second MatrixSwarm for multiplication.inputs
(List[str]
): Input queries for the agents.List[List[AgentOutput]]
: Matrix of operation results.ValueError
: If matrix dimensions are incompatible for multiplication.Performs element-wise subtraction of two agent matrices.
def subtract(self, other: MatrixSwarm) -> MatrixSwarm\n
"},{"location":"swarms/structs/matrix_swarm/#parameters_4","title":"Parameters","text":"other
(MatrixSwarm
): Another MatrixSwarm to subtract.MatrixSwarm
: A new MatrixSwarm resulting from the subtraction.Creates an identity matrix of agents.
def identity(self, size: int) -> MatrixSwarm\n
"},{"location":"swarms/structs/matrix_swarm/#parameters_5","title":"Parameters","text":"size
(int
): Size of the identity matrix (NxN).MatrixSwarm
: An identity MatrixSwarm.Computes the determinant of a square agent matrix.
def determinant(self) -> Any\n
"},{"location":"swarms/structs/matrix_swarm/#returns_6","title":"Returns","text":"Any
: The determinant result.ValueError
: If the matrix is not square.Saves the matrix structure and metadata to a JSON file.
def save_to_file(self, path: str) -> None\n
"},{"location":"swarms/structs/matrix_swarm/#parameters_6","title":"Parameters","text":"path
(str
): File path for saving the matrix data.Here's a comprehensive example demonstrating various MatrixSwarm operations:
from swarms import Agent\nfrom swarms.matrix import MatrixSwarm\n\n# Create agents with specific configurations\nagents = [\n [\n Agent(\n agent_name=f\"Agent-{i}-{j}\",\n system_prompt=\"Your system prompt here\",\n model_name=\"gpt-4\",\n max_loops=1,\n verbose=True\n ) for j in range(2)\n ] for i in range(2)\n]\n\n# Initialize matrix\nmatrix = MatrixSwarm(agents)\n\n# Example operations\ntransposed = matrix.transpose()\nscaled = matrix.scalar_multiply(2)\n\n# Run operations with inputs\ninputs = [\"Query 1\", \"Query 2\"]\nresults = matrix.multiply(transposed, inputs)\n\n# Save results\nmatrix.save_to_file(\"matrix_results.json\")\n
"},{"location":"swarms/structs/matrix_swarm/#output-schema","title":"Output Schema","text":"The AgentOutput
class defines the structure for operation results:
class AgentOutput(BaseModel):\n agent_name: str\n input_query: str\n output_result: Any\n metadata: dict\n
"},{"location":"swarms/structs/matrix_swarm/#best-practices","title":"Best Practices","text":"Validate matrix dimensions for your use case
Operation Performance
Use appropriate batch sizes for inputs
Error Handling
Validate inputs before matrix operations
Resource Management
graph TD\n A[Input Task] --> B[Initialize MixtureOfAgents]\n B --> C[Reliability Check]\n C --> D[Layer 1: Parallel Agent Execution]\n D --> E[Layer 2: Sequential Processing]\n E --> F[Layer 3: Parallel Agent Execution]\n F --> G[Final Aggregator Agent]\n G --> H[Output Response]\n\n subgraph \"Agent Layer Details\"\n I[Agent 1] --> J[Agent Results]\n K[Agent 2] --> J\n L[Agent N] --> J\n end\n\n subgraph \"Processing Flow\"\n M[Previous Context] --> N[Current Task]\n N --> O[Agent Processing]\n O --> P[Aggregation]\n P --> M\n end
"},{"location":"swarms/structs/moa/#overview","title":"Overview","text":"The MixtureOfAgents
class represents a mixture of agents operating within a swarm. The workflow of the swarm follows a parallel \u2192 sequential \u2192 parallel \u2192 final output agent process. This implementation is inspired by concepts discussed in the paper: https://arxiv.org/pdf/2406.04692.
The class is designed to manage a collection of agents, orchestrate their execution in layers, and handle the final aggregation of their outputs through a designated final agent. This architecture facilitates complex, multi-step processing where intermediate results are refined through successive layers of agent interactions.
"},{"location":"swarms/structs/moa/#class-definition","title":"Class Definition","text":""},{"location":"swarms/structs/moa/#mixtureofagents","title":"MixtureOfAgents","text":"class MixtureOfAgents(BaseSwarm):\n
"},{"location":"swarms/structs/moa/#attributes","title":"Attributes","text":"Attribute Type Description Default agents
List[Agent]
The list of agents in the swarm. None
flow
str
The flow of the swarm. parallel -> sequential -> parallel -> final output agent
max_loops
int
The maximum number of loops to run. 1
verbose
bool
Flag indicating whether to print verbose output. True
layers
int
The number of layers in the swarm. 3
rules
str
The rules for the swarm. None
final_agent
Agent
The agent to handle the final output processing. None
auto_save
bool
Flag indicating whether to auto-save the metadata to a file. False
saved_file_name
str
The name of the file where the metadata will be saved. \"moe_swarm.json\"
"},{"location":"swarms/structs/moa/#methods","title":"Methods","text":""},{"location":"swarms/structs/moa/#__init__","title":"__init__
","text":""},{"location":"swarms/structs/moa/#parameters","title":"Parameters","text":"Parameter Type Description Default name
str
The name of the swarm. \"MixtureOfAgents\"
description
str
A brief description of the swarm. \"A swarm of agents that run in parallel and sequentially.\"
agents
List[Agent]
The list of agents in the swarm. None
max_loops
int
The maximum number of loops to run. 1
verbose
bool
Flag indicating whether to print verbose output. True
layers
int
The number of layers in the swarm. 3
rules
str
The rules for the swarm. None
final_agent
Agent
The agent to handle the final output processing. None
auto_save
bool
Flag indicating whether to auto-save the metadata to a file. False
saved_file_name
str
The name of the file where the metadata will be saved. \"moe_swarm.json\"
"},{"location":"swarms/structs/moa/#agent_check","title":"agent_check
","text":"def agent_check(self):\n
"},{"location":"swarms/structs/moa/#description","title":"Description","text":"Checks if the provided agents
attribute is a list of Agent
instances. Raises a TypeError
if the validation fails.
moe_swarm = MixtureOfAgents(agents=[agent1, agent2])\nmoe_swarm.agent_check() # Validates the agents\n
"},{"location":"swarms/structs/moa/#final_agent_check","title":"final_agent_check
","text":"def final_agent_check(self):\n
"},{"location":"swarms/structs/moa/#description_1","title":"Description","text":"Checks if the provided final_agent
attribute is an instance of Agent
. Raises a TypeError
if the validation fails.
moe_swarm = MixtureOfAgents(final_agent=final_agent)\nmoe_swarm.final_agent_check() # Validates the final agent\n
"},{"location":"swarms/structs/moa/#swarm_initialization","title":"swarm_initialization
","text":"def swarm_initialization(self):\n
"},{"location":"swarms/structs/moa/#description_2","title":"Description","text":"Initializes the swarm by logging the swarm name, description, and the number of agents.
"},{"location":"swarms/structs/moa/#example-usage_2","title":"Example Usage","text":"moe_swarm = MixtureOfAgents(agents=[agent1, agent2])\nmoe_swarm.swarm_initialization() # Initializes the swarm\n
"},{"location":"swarms/structs/moa/#run","title":"run
","text":"def run(self, task: str = None, *args, **kwargs):\n
"},{"location":"swarms/structs/moa/#parameters_1","title":"Parameters","text":"Parameter Type Description Default task
str
The task to be performed by the swarm. None
*args
tuple
Additional arguments. None
**kwargs
dict
Additional keyword arguments. None
"},{"location":"swarms/structs/moa/#returns","title":"Returns","text":"Type Description str
The conversation history as a string."},{"location":"swarms/structs/moa/#description_3","title":"Description","text":"Runs the swarm with the given task, orchestrates the execution of agents through the specified layers, and returns the conversation history.
"},{"location":"swarms/structs/moa/#example-usage_3","title":"Example Usage","text":"moe_swarm = MixtureOfAgents(agents=[agent1, agent2], final_agent=final_agent)\nhistory = moe_swarm.run(task=\"Solve this problem.\")\nprint(history)\n
"},{"location":"swarms/structs/moa/#reliability_check","title":"reliability_check
","text":"def reliability_check(self) -> None:\n
"},{"location":"swarms/structs/moa/#description_4","title":"Description","text":"Performs validation checks on the Mixture of Agents class to ensure all required components are properly configured. Raises ValueError if any checks fail.
"},{"location":"swarms/structs/moa/#validation-checks","title":"Validation Checks:","text":"_get_final_system_prompt
","text":"def _get_final_system_prompt(self, system_prompt: str, results: List[str]) -> str:\n
"},{"location":"swarms/structs/moa/#description_5","title":"Description","text":"Internal method that constructs a system prompt for subsequent layers by incorporating previous responses.
"},{"location":"swarms/structs/moa/#parameters_2","title":"Parameters","text":"Parameter Type Descriptionsystem_prompt
str
The initial system prompt results
List[str]
List of previous responses"},{"location":"swarms/structs/moa/#returns_1","title":"Returns","text":"Type Description str
Combined system prompt with previous responses"},{"location":"swarms/structs/moa/#run_batched","title":"run_batched
","text":"def run_batched(self, tasks: List[str]) -> List[str]:\n
"},{"location":"swarms/structs/moa/#description_6","title":"Description","text":"Processes multiple tasks sequentially, returning a list of responses.
"},{"location":"swarms/structs/moa/#parameters_3","title":"Parameters","text":"Parameter Type Descriptiontasks
List[str]
List of tasks to process"},{"location":"swarms/structs/moa/#returns_2","title":"Returns","text":"Type Description List[str]
List of responses for each task"},{"location":"swarms/structs/moa/#run_concurrently","title":"run_concurrently
","text":"def run_concurrently(self, tasks: List[str]) -> List[str]:\n
"},{"location":"swarms/structs/moa/#description_7","title":"Description","text":"Processes multiple tasks concurrently using a ThreadPoolExecutor, optimizing for parallel execution.
"},{"location":"swarms/structs/moa/#parameters_4","title":"Parameters","text":"Parameter Type Descriptiontasks
List[str]
List of tasks to process concurrently"},{"location":"swarms/structs/moa/#returns_3","title":"Returns","text":"Type Description List[str]
List of responses for each task"},{"location":"swarms/structs/moa/#detailed-explanation","title":"Detailed Explanation","text":""},{"location":"swarms/structs/moa/#initialization","title":"Initialization","text":"The __init__
method initializes the swarm with the provided parameters, sets up the conversation rules, and invokes the initialization of the swarm. It also ensures the validity of the agents
and final_agent
attributes by calling the agent_check
and final_agent_check
methods respectively.
The agent_check
method validates whether the agents
attribute is a list of Agent
instances, while the final_agent_check
method validates whether the final_agent
is an instance of Agent
. These checks are crucial to ensure that the swarm operates correctly with the appropriate agent types.
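The validation described above can be pictured with a minimal sketch. This is not the library's actual implementation; the `Agent` class here is a hypothetical stand-in, and the checks simply illustrate the kind of isinstance-based validation agent_check and final_agent_check perform.

```python
class Agent:
    """Hypothetical stand-in for swarms.Agent (illustration only)."""
    def __init__(self, agent_name: str):
        self.agent_name = agent_name


def agent_check(agents) -> None:
    # Reject anything that is not a list of Agent instances.
    if not isinstance(agents, list) or not all(isinstance(a, Agent) for a in agents):
        raise TypeError("agents must be a list of Agent instances")


def final_agent_check(final_agent) -> None:
    # The aggregator must itself be a single Agent.
    if not isinstance(final_agent, Agent):
        raise TypeError("final_agent must be an Agent instance")


agent_check([Agent("a1"), Agent("a2")])  # passes silently
final_agent_check(Agent("final"))        # passes silently
```

Failing either check raises immediately, which surfaces misconfiguration before any layer runs.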
The swarm_initialization
method logs essential information about the swarm, including its name, description, and the number of agents. This provides a clear starting point for the swarm's operations and facilitates debugging and monitoring.
The run
method is the core of the MixtureOfAgents
class. It orchestrates the execution of agents through multiple layers, collects their outputs, and processes the final output using the final_agent
. The conversation history is maintained and updated throughout this process, allowing for a seamless flow of information and responses.
During each layer, the method iterates over the agents, invokes their run
method with the current conversation history, and logs the outputs. These outputs are then added to the conversation, and the history is updated for the next layer.
After all layers are completed, the final output agent processes the entire conversation history, and the metadata is created and optionally saved to a file. This metadata includes details about the layers, agent runs, and final output, providing a comprehensive record of the swarm's execution.
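The layered flow described above can be sketched in a few lines. This is a simplified illustration under assumptions, not the class's actual code: `Agent` is a hypothetical stand-in whose run() just echoes, and metadata handling is omitted.

```python
class Agent:
    """Hypothetical stand-in for swarms.Agent, used only to illustrate the flow."""
    def __init__(self, name: str):
        self.name = name

    def run(self, history: str) -> str:
        # A real agent would call an LLM here; we just echo the tail of the history.
        return f"{self.name} responds to: {history[-60:]}"


def run_layers(agents, final_agent, task: str, layers: int = 2) -> str:
    """Run every agent in each layer on the shared history, then aggregate."""
    history = task
    for _ in range(layers):
        outputs = [agent.run(history) for agent in agents]
        # Fold this layer's outputs back into the conversation history.
        history += "\n" + "\n".join(outputs)
    return final_agent.run(history)


result = run_layers([Agent("a1"), Agent("a2")], Agent("final"), "Solve X")
print(result)
```

The key point is that each layer sees the accumulated history, and only the final agent produces the returned output.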
"},{"location":"swarms/structs/moa/#additional-information-and-tips","title":"Additional Information and Tips","text":""},{"location":"swarms/structs/moa/#common-issues-and-solutions","title":"Common Issues and Solutions","text":"agents
list and the final_agent
are instances of the Agent
class. The agent_check
and final_agent_check
methods help validate this. Use the verbose
flag to control the verbosity of the output. This can help with debugging or reduce clutter in the logs. Use the auto_save
flag to automatically save the metadata to a file. This can be useful for keeping records of the swarm's operations without manual intervention. For further reading and background information on the concepts used in the MixtureOfAgents
class, refer to the paper: https://arxiv.org/pdf/2406.04692.
from swarms import MixtureOfAgents, Agent\n\nfrom swarm_models import OpenAIChat\n\n# Define agents\ndirector = Agent(\n agent_name=\"Director\",\n system_prompt=\"Directs the tasks for the accountants\",\n llm=OpenAIChat(),\n max_loops=1,\n dashboard=False,\n streaming_on=True,\n verbose=True,\n stopping_token=\"<DONE>\",\n state_save_file_type=\"json\",\n saved_state_path=\"director.json\",\n)\n\n# Initialize accountant 1\naccountant1 = Agent(\n agent_name=\"Accountant1\",\n system_prompt=\"Prepares financial statements\",\n llm=OpenAIChat(),\n max_loops=1,\n dashboard=False,\n streaming_on=True,\n verbose=True,\n stopping_token=\"<DONE>\",\n state_save_file_type=\"json\",\n saved_state_path=\"accountant1.json\",\n)\n\n# Initialize accountant 2\naccountant2 = Agent(\n agent_name=\"Accountant2\",\n system_prompt=\"Audits financial records\",\n llm=OpenAIChat(),\n max_loops=1,\n dashboard=False,\n streaming_on=True,\n verbose=True,\n stopping_token=\"<DONE>\",\n state_save_file_type=\"json\",\n saved_state_path=\"accountant2.json\",\n)\n\n\n# Initialize the MixtureOfAgents\nmoe_swarm = MixtureOfAgents(agents=[director, accountant1, accountant2], final_agent=director)\n\n# Run the swarm\nhistory = moe_swarm.run(task=\"Perform task X.\")\nprint(history)\n
"},{"location":"swarms/structs/moa/#example-2-verbose-output-and-auto-save","title":"Example 2: Verbose Output and Auto-Save","text":"from swarms import MixtureOfAgents, Agent\n\nfrom swarm_models import OpenAIChat\n\n# Define agents\ndirector = Agent(\n agent_name=\"Director\",\n system_prompt=\"Directs the tasks for the accountants\",\n llm=OpenAIChat(),\n max_loops=1,\n dashboard=False,\n streaming_on=True,\n verbose=True,\n stopping_token=\"<DONE>\",\n state_save_file_type=\"json\",\n saved_state_path=\"director.json\",\n)\n\n# Initialize accountant 1\naccountant1 = Agent(\n agent_name=\"Accountant1\",\n system_prompt=\"Prepares financial statements\",\n llm=OpenAIChat(),\n max_loops=1,\n dashboard=False,\n streaming_on=True,\n verbose=True,\n stopping_token=\"<DONE>\",\n state_save_file_type=\"json\",\n saved_state_path=\"accountant1.json\",\n)\n\n# Initialize accountant 2\naccountant2 = Agent(\n agent_name=\"Accountant2\",\n system_prompt=\"Audits financial records\",\n llm=OpenAIChat(),\n max_loops=1,\n dashboard=False,\n streaming_on=True,\n verbose=True,\n stopping_token=\"<DONE>\",\n state_save_file_type=\"json\",\n saved_state_path=\"accountant2.json\",\n)\n\n# Initialize the MixtureOfAgents with verbose output and auto-save enabled\nmoe_swarm = MixtureOfAgents(\n agents=[director, accountant1, accountant2],\n final_agent=director,\n verbose=True,\n auto_save=True\n)\n\n# Run the swarm\nhistory = moe_swarm.run(task=\"Analyze data set Y.\")\nprint(history)\n
"},{"location":"swarms/structs/moa/#example-3-custom-rules-and-multiple-layers","title":"Example 3: Custom Rules and Multiple Layers","text":"from swarms import MixtureOfAgents, Agent\n\nfrom swarm_models import OpenAIChat\n\n# Define agents\n# Initialize the director agent\ndirector = Agent(\n agent_name=\"Director\",\n system_prompt=\"Directs the tasks for the accountants\",\n llm=OpenAIChat(),\n max_loops=1,\n dashboard=False,\n streaming_on=True,\n verbose=True,\n stopping_token=\"<DONE>\",\n state_save_file_type=\"json\",\n saved_state_path=\"director.json\",\n)\n\n# Initialize accountant 1\naccountant1 = Agent(\n agent_name=\"Accountant1\",\n system_prompt=\"Prepares financial statements\",\n llm=OpenAIChat(),\n max_loops=1,\n dashboard=False,\n streaming_on=True,\n verbose=True,\n stopping_token=\"<DONE>\",\n state_save_file_type=\"json\",\n saved_state_path=\"accountant1.json\",\n)\n\n# Initialize accountant 2\naccountant2 = Agent(\n agent_name=\"Accountant2\",\n system_prompt=\"Audits financial records\",\n llm=OpenAIChat(),\n max_loops=1,\n dashboard=False,\n streaming_on=True,\n verbose=True,\n stopping_token=\"<DONE>\",\n state_save_file_type=\"json\",\n saved_state_path=\"accountant2.json\",\n)\n\n# Initialize the MixtureOfAgents with custom rules and multiple layers\nmoe_swarm = MixtureOfAgents(\n agents=[director, accountant1, accountant2],\n final_agent=director,\n layers=5,\n rules=\"Custom rules for the swarm\"\n)\n\n# Run the swarm\nhistory = moe_swarm.run(task=\"Optimize process Z.\")\nprint(history)\n
This comprehensive documentation provides a detailed understanding of the MixtureOfAgents
class, its attributes, methods, and usage. The examples illustrate how to initialize and run the swarm, demonstrating its flexibility and capability to handle various tasks and configurations.
The MixtureOfAgents
class is a powerful and flexible framework for managing and orchestrating a swarm of agents. By following a structured approach of parallel and sequential processing, it enables the implementation of complex multi-step workflows where intermediate results are refined through multiple layers of agent interactions. This architecture is particularly suitable for tasks that require iterative processing, collaboration among diverse agents, and sophisticated aggregation of outputs.
MixtureOfAgents
class effectively. The MixtureOfAgents
class can be applied in various domains, including but not limited to:
The MixtureOfAgents
framework provides a solid foundation for further extensions and customizations, including:
In conclusion, the MixtureOfAgents
class represents a versatile and efficient solution for orchestrating multi-agent systems, facilitating complex task execution through its structured and layered approach. By harnessing the power of parallel and sequential processing, it opens up new possibilities for tackling intricate problems across various domains.
from swarms import MixtureOfAgents, Agent\nfrom swarm_models import OpenAIChat\n\n# Initialize agents as in previous examples\ndirector = Agent(\n agent_name=\"Director\",\n system_prompt=\"Directs the tasks for the accountants\",\n llm=OpenAIChat(),\n max_loops=1,\n dashboard=False,\n streaming_on=True,\n verbose=True,\n stopping_token=\"<DONE>\",\n state_save_file_type=\"json\",\n saved_state_path=\"director.json\",\n)\n\naccountant1 = Agent(\n agent_name=\"Accountant1\",\n system_prompt=\"Prepares financial statements\",\n llm=OpenAIChat(),\n max_loops=1,\n dashboard=False,\n streaming_on=True,\n verbose=True,\n stopping_token=\"<DONE>\",\n state_save_file_type=\"json\",\n saved_state_path=\"accountant1.json\",\n)\n\naccountant2 = Agent(\n agent_name=\"Accountant2\",\n system_prompt=\"Audits financial records\",\n llm=OpenAIChat(),\n max_loops=1,\n dashboard=False,\n streaming_on=True,\n verbose=True,\n stopping_token=\"<DONE>\",\n state_save_file_type=\"json\",\n saved_state_path=\"accountant2.json\",\n)\n\n# Initialize MixtureOfAgents\nmoe_swarm = MixtureOfAgents(\n agents=[director, accountant1, accountant2],\n final_agent=director\n)\n\n# Process multiple tasks in batch\ntasks = [\n \"Analyze Q1 financial statements\",\n \"Review tax compliance\",\n \"Prepare budget forecast\"\n]\nresults = moe_swarm.run_batched(tasks)\nfor task, result in zip(tasks, results):\n print(f\"Task: {task}\\nResult: {result}\\n\")\n
"},{"location":"swarms/structs/moa/#example-5-concurrent-processing","title":"Example 5: Concurrent Processing","text":"from swarms import MixtureOfAgents, Agent\nfrom swarm_models import OpenAIChat\n\n# Initialize agents as before\n# ... agent initialization code ...\n\n# Initialize MixtureOfAgents\nmoe_swarm = MixtureOfAgents(\n agents=[director, accountant1, accountant2],\n final_agent=director\n)\n\n# Process multiple tasks concurrently\ntasks = [\n \"Generate monthly report\",\n \"Audit expense claims\",\n \"Update financial projections\",\n \"Review investment portfolio\"\n]\nresults = moe_swarm.run_concurrently(tasks)\nfor task, result in zip(tasks, results):\n print(f\"Task: {task}\\nResult: {result}\\n\")\n
"},{"location":"swarms/structs/moa/#advanced-features","title":"Advanced Features","text":""},{"location":"swarms/structs/moa/#context-preservation","title":"Context Preservation","text":"The MixtureOfAgents
class maintains context between iterations when running multiple loops. Each subsequent iteration receives the context from previous runs, allowing for more sophisticated and context-aware processing.
The class implements asynchronous processing internally using Python's asyncio
, enabling efficient handling of concurrent operations and improved performance for complex workflows.
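The concurrency pattern this relies on can be illustrated with a small, self-contained sketch. It is an assumption-labeled illustration, not the library's internals: `run_agent` is a hypothetical stand-in for a blocking Agent.run call, fanned out to threads via asyncio and gathered back in order.

```python
import asyncio


def run_agent(task: str) -> str:
    # Stand-in for a blocking Agent.run call (e.g., an LLM request).
    return f"response to {task}"


async def run_all(tasks):
    loop = asyncio.get_running_loop()
    # Offload each blocking call to the default thread pool and await them all;
    # asyncio.gather preserves the input order in its result list.
    futures = [loop.run_in_executor(None, run_agent, t) for t in tasks]
    return await asyncio.gather(*futures)


results = asyncio.run(run_all(["task A", "task B"]))
print(results)
```

This is why batch and concurrent runs return responses aligned with the order of the submitted tasks.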
Built-in telemetry and logging capabilities help track agent performance and maintain detailed execution records: - Automatic logging of agent outputs - Structured data capture using Pydantic models - JSON-formatted output options
"},{"location":"swarms/structs/model_router/","title":"ModelRouter Docs","text":"The ModelRouter is an intelligent routing system that automatically selects and executes AI models based on task requirements. It leverages a function-calling architecture to analyze tasks and recommend the optimal model and provider combination for each specific use case.
"},{"location":"swarms/structs/model_router/#key-features","title":"Key Features","text":"Executes a single task through the model router with memory and refinement capabilities.
"},{"location":"swarms/structs/model_router/#installation","title":"Installation","text":"pip3 install -U swarms\n
OPENAI_API_KEY=your_openai_api_key\nANTHROPIC_API_KEY=your_anthropic_api_key\nGOOGLE_API_KEY=your_google_api_key\n# Add more API keys as needed following litellm format\n
from swarms import ModelRouter\n\nrouter = ModelRouter()\n\n# Simple text analysis\nresult = router.run(\"Analyze the sentiment and key themes in this customer feedback\")\n\n# Complex reasoning task\ncomplex_result = router.run(\"\"\"\nEvaluate the following business proposal:\n- Initial investment: $500,000\n- Projected ROI: 25% annually\n- Market size: $2B\n- Competition: 3 major players\nProvide detailed analysis and recommendations.\n\"\"\")\n
"},{"location":"swarms/structs/model_router/#batch_runtasks-list-list","title":"batch_run(tasks: list) -> list","text":"Executes multiple tasks sequentially with result aggregation.
# Multiple analysis tasks\ntasks = [\n \"Analyze Q1 financial performance\",\n \"Predict Q2 market trends\",\n \"Evaluate competitor strategies\",\n \"Generate growth recommendations\"\n]\n\nresults = router.batch_run(tasks)\n\n# Process results\nfor task, result in zip(tasks, results):\n print(f\"Task: {task}\\nResult: {result}\\n\")\n
"},{"location":"swarms/structs/model_router/#concurrent_runtasks-list-list","title":"concurrent_run(tasks: list) -> list","text":"Parallel execution of multiple tasks using thread pooling.
import asyncio\nfrom typing import List\n\n# Define multiple concurrent tasks\nanalysis_tasks = [\n \"Perform technical analysis of AAPL stock\",\n \"Analyze market sentiment from social media\",\n \"Generate trading signals\",\n \"Calculate risk metrics\"\n]\n\n# Execute tasks concurrently\nresults = router.concurrent_run(analysis_tasks)\n\n# Process results with error handling\nfor task, result in zip(analysis_tasks, results):\n try:\n processed_result = process_analysis(result)\n save_to_database(processed_result)\n except Exception as e:\n log_error(f\"Error processing {task}: {str(e)}\")\n
"},{"location":"swarms/structs/model_router/#async_runtask-str-asynciotask","title":"async_run(task: str) -> asyncio.Task","text":"Asynchronous task execution with coroutine support.
async def process_data_stream():\n tasks = []\n async for data in data_stream:\n task = await router.async_run(f\"Process data: {data}\")\n tasks.append(task)\n\n results = await asyncio.gather(*tasks)\n return results\n\n# Usage in async context\nasync def main():\n router = ModelRouter()\n results = await process_data_stream()\n
"},{"location":"swarms/structs/model_router/#advanced-usage-examples","title":"Advanced Usage Examples","text":""},{"location":"swarms/structs/model_router/#financial-analysis-system","title":"Financial Analysis System","text":"from swarms import ModelRouter\nfrom typing import Dict, List\nimport pandas as pd\n\nclass FinancialAnalysisSystem:\n def __init__(self):\n self.router = ModelRouter(\n temperature=0.3, # Lower temperature for more deterministic outputs\n max_tokens=8000, # Higher token limit for detailed analysis\n max_loops=2 # Allow for refinement iteration\n )\n\n def analyze_company_financials(self, financial_data: Dict) -> Dict:\n analysis_task = f\"\"\"\n Perform comprehensive financial analysis:\n\n Financial Metrics:\n - Revenue: ${financial_data['revenue']}M\n - EBITDA: ${financial_data['ebitda']}M\n - Debt/Equity: {financial_data['debt_equity']}\n - Working Capital: ${financial_data['working_capital']}M\n\n Required Analysis:\n 1. Profitability assessment\n 2. Liquidity analysis\n 3. Growth projections\n 4. Risk evaluation\n 5. Investment recommendations\n\n Provide detailed insights and actionable recommendations.\n \"\"\"\n\n result = self.router.run(analysis_task)\n return self._parse_analysis_result(result)\n\n def _parse_analysis_result(self, result: str) -> Dict:\n # Implementation of result parsing\n pass\n\n# Usage\nanalyzer = FinancialAnalysisSystem()\ncompany_data = {\n 'revenue': 150,\n 'ebitda': 45,\n 'debt_equity': 0.8,\n 'working_capital': 25\n}\n\nanalysis = analyzer.analyze_company_financials(company_data)\n
"},{"location":"swarms/structs/model_router/#healthcare-data-processing-pipeline","title":"Healthcare Data Processing Pipeline","text":"from swarms import ModelRouter\nimport pandas as pd\nfrom typing import List, Dict\n\nclass MedicalDataProcessor:\n def __init__(self):\n self.router = ModelRouter(\n max_workers=\"auto\", # Automatic worker scaling\n temperature=0.2, # Conservative temperature for medical analysis\n system_prompt=\"\"\"You are a specialized medical data analyzer focused on:\n 1. Clinical terminology interpretation\n 2. Patient data analysis\n 3. Treatment recommendation review\n 4. Medical research synthesis\"\"\"\n )\n\n async def process_patient_records(self, records: List[Dict]) -> List[Dict]:\n analysis_tasks = []\n\n for record in records:\n task = f\"\"\"\n Analyze patient record:\n - Age: {record['age']}\n - Symptoms: {', '.join(record['symptoms'])}\n - Vital Signs: {record['vitals']}\n - Medications: {', '.join(record['medications'])}\n - Lab Results: {record['lab_results']}\n\n Provide:\n 1. Symptom analysis\n 2. Medication interaction check\n 3. Lab results interpretation\n 4. Treatment recommendations\n \"\"\"\n analysis_tasks.append(task)\n\n results = await asyncio.gather(*[\n self.router.async_run(task) for task in analysis_tasks\n ])\n\n return [self._parse_medical_analysis(r) for r in results]\n\n def _parse_medical_analysis(self, analysis: str) -> Dict:\n # Implementation of medical analysis parsing\n pass\n\n# Usage\nasync def main():\n processor = MedicalDataProcessor()\n patient_records = [\n {\n 'age': 45,\n 'symptoms': ['fever', 'cough', 'fatigue'],\n 'vitals': {'bp': '120/80', 'temp': '38.5C'},\n 'medications': ['lisinopril', 'metformin'],\n 'lab_results': 'WBC: 11,000, CRP: 2.5'\n }\n # More records...\n ]\n\n analyses = await processor.process_patient_records(patient_records)\n
"},{"location":"swarms/structs/model_router/#natural-language-processing-pipeline","title":"Natural Language Processing Pipeline","text":"from swarms import ModelRouter\nfrom typing import List, Dict\nimport asyncio\n\nclass NLPPipeline:\n def __init__(self):\n self.router = ModelRouter(\n temperature=0.4,\n max_loops=2\n )\n\n def process_documents(self, documents: List[str]) -> List[Dict]:\n tasks = [self._create_nlp_task(doc) for doc in documents]\n results = self.router.concurrent_run(tasks)\n return [self._parse_nlp_result(r) for r in results]\n\n def _create_nlp_task(self, document: str) -> str:\n return f\"\"\"\n Perform comprehensive NLP analysis:\n\n Text: {document}\n\n Required Analysis:\n 1. Entity recognition\n 2. Sentiment analysis\n 3. Topic classification\n 4. Key phrase extraction\n 5. Intent detection\n\n Provide structured analysis with confidence scores.\n \"\"\"\n\n def _parse_nlp_result(self, result: str) -> Dict:\n # Implementation of NLP result parsing\n pass\n\n# Usage\npipeline = NLPPipeline()\ndocuments = [\n \"We're extremely satisfied with the new product features!\",\n \"The customer service response time needs improvement.\",\n \"Looking to upgrade our subscription plan next month.\"\n]\n\nanalyses = pipeline.process_documents(documents)\n
"},{"location":"swarms/structs/model_router/#available-models-and-use-cases","title":"Available Models and Use Cases","text":"Model Provider Optimal Use Cases Characteristics gpt-4-turbo OpenAI Complex reasoning, Code generation, Creative writing High accuracy, Latest knowledge cutoff claude-3-opus Anthropic Research analysis, Technical documentation, Long-form content Strong reasoning, Detailed outputs gemini-pro Google Multimodal tasks, Code generation, Technical analysis Fast inference, Strong coding abilities mistral-large Mistral General tasks, Content generation, Classification Open source, Good price/performance deepseek-reasoner DeepSeek Mathematical analysis, Logic problems, Scientific computing Specialized reasoning capabilities"},{"location":"swarms/structs/model_router/#provider-capabilities","title":"Provider Capabilities","text":"Provider Strengths Best For Integration Notes OpenAI Consistent performance, Strong reasoning Production systems, Complex tasks Requires API key setup Anthropic Safety features, Detailed analysis Research, Technical writing Claude-specific formatting Google Technical tasks, Multimodal support Code generation, Analysis Vertex AI integration available Groq High-speed inference Real-time applications Optimized for specific models DeepSeek Specialized reasoning Scientific computing Custom API integration Mistral Open source flexibility General applications Self-hosted options available"},{"location":"swarms/structs/model_router/#performance-optimization-tips","title":"Performance Optimization Tips","text":"Use streaming for long outputs
Concurrency Settings
Monitor memory usage with large batch sizes
Temperature Tuning
Mid-range (0.4-0.6) for balanced outputs
System Prompts
SequentialWorkflow
","text":"Sequential Workflow enables you to sequentially execute tasks with Agent
and then pass the output to the next agent, and so on, until your specified max loops are reached.
from swarms import Agent, SequentialWorkflow\n\nfrom swarm_models import Anthropic\n\n\n# Initialize the language model (Anthropic)\nllm = Anthropic()\n\n# Initialize agents for individual tasks\nagent1 = Agent(\n agent_name=\"Blog generator\",\n system_prompt=\"Generate a blog post like Stephen King\",\n llm=llm,\n max_loops=1,\n dashboard=False,\n tools=[],\n)\nagent2 = Agent(\n agent_name=\"summarizer\",\n system_prompt=\"Summarize the blog post\",\n llm=llm,\n max_loops=1,\n dashboard=False,\n tools=[],\n)\n\n# Create the Sequential workflow\nworkflow = SequentialWorkflow(\n agents=[agent1, agent2], max_loops=1, verbose=False\n)\n\n# Run the workflow\nworkflow.run(\n \"Generate a blog post on how swarms of agents can help businesses grow.\"\n)\n
"},{"location":"swarms/structs/multi_agent_collaboration_examples/#agentrearrange","title":"AgentRearrange
","text":"Inspired by Einops and einsum, this orchestration technique enables you to map out the relationships between various agents. For example, you can specify linear and sequential relationships like a -> a1 -> a2 -> a3
or concurrent relationships where the first agent will send a message to 3 agents all at once: a -> a1, a2, a3
. You can customize your workflow to mix sequential and concurrent relationships. Docs Available:
from swarms import Agent, AgentRearrange\n\n\nfrom swarm_models import Anthropic\n\n# Initialize the director agent\n\ndirector = Agent(\n agent_name=\"Director\",\n system_prompt=\"Directs the tasks for the workers\",\n llm=Anthropic(),\n max_loops=1,\n dashboard=False,\n streaming_on=True,\n verbose=True,\n stopping_token=\"<DONE>\",\n state_save_file_type=\"json\",\n saved_state_path=\"director.json\",\n)\n\n\n# Initialize worker 1\n\nworker1 = Agent(\n agent_name=\"Worker1\",\n system_prompt=\"Generates a transcript for a youtube video on what swarms are\",\n llm=Anthropic(),\n max_loops=1,\n dashboard=False,\n streaming_on=True,\n verbose=True,\n stopping_token=\"<DONE>\",\n state_save_file_type=\"json\",\n saved_state_path=\"worker1.json\",\n)\n\n\n# Initialize worker 2\nworker2 = Agent(\n agent_name=\"Worker2\",\n system_prompt=\"Summarizes the transcript generated by Worker1\",\n llm=Anthropic(),\n max_loops=1,\n dashboard=False,\n streaming_on=True,\n verbose=True,\n stopping_token=\"<DONE>\",\n state_save_file_type=\"json\",\n saved_state_path=\"worker2.json\",\n)\n\n\n# Create a list of agents\nagents = [director, worker1, worker2]\n\n# Define the flow pattern\nflow = \"Director -> Worker1 -> Worker2\"\n\n# Using AgentRearrange class\nagent_system = AgentRearrange(agents=agents, flow=flow)\noutput = agent_system.run(\n \"Create a format to express and communicate swarms of llms in a structured manner for youtube\"\n)\nprint(output)\n
"},{"location":"swarms/structs/multi_agent_collaboration_examples/#hierarhicalswarm","title":"HierarchicalSwarm
","text":"Coming soon...
"},{"location":"swarms/structs/multi_agent_collaboration_examples/#graphswarm","title":"GraphSwarm
","text":"from swarms.structs.agent import Agent \nfrom swarms import Edge, GraphWorkflow, Node, NodeType \n\n\n# Initialize two agents with GPT-4o-mini\nagent1 = Agent(\n agent_name=\"agent1\",\n system_prompt=\"You are an autonomous agent executing workflow tasks.\",\n max_loops=1,\n autosave=True,\n dashboard=False,\n verbose=True,\n saved_state_path=\"agent1_state.json\",\n model_name=\"gpt-4o-mini\",\n) \n\nagent2 = Agent(\n agent_name=\"agent2\",\n system_prompt=\"You are an autonomous agent executing workflow tasks.\",\n max_loops=1,\n autosave=True,\n dashboard=False,\n verbose=True,\n saved_state_path=\"agent2_state.json\",\n model_name=\"gpt-4o-mini\",\n) \n\ndef sample_task():\n print(\"Running sample task\")\n return \"Task completed\"\n\n# Build the DAG\nwf = GraphWorkflow()\nwf.add_node(Node(id=\"agent1\", type=NodeType.AGENT, agent=agent1))\nwf.add_node(Node(id=\"agent2\", type=NodeType.AGENT, agent=agent2))\nwf.add_node(Node(id=\"task1\", type=NodeType.TASK, callable=sample_task)) \n\n# Connect agents to the task\nwf.add_edge(Edge(source=\"agent1\", target=\"task1\"))\nwf.add_edge(Edge(source=\"agent2\", target=\"task1\")) \n\nwf.set_entry_points([\"agent1\", \"agent2\"])\nwf.set_end_points([\"task1\"]) \n\n# Visualize and run\nprint(wf.visualize()) \nresults = wf.run() \nprint(\"Execution results:\", results)\n
"},{"location":"swarms/structs/multi_agent_collaboration_examples/#mixtureofagents","title":"MixtureOfAgents
","text":"This is an implementation from the paper: \"Mixture-of-Agents Enhances Large Language Model Capabilities\" by together.ai. It achieves SOTA on AlpacaEval 2.0, MT-Bench, and FLASK, surpassing GPT-4 Omni. It is great for tasks that need to be parallelized and then fed sequentially into another loop.
from swarms import Agent, OpenAIChat, MixtureOfAgents\n\n# Initialize the director agent\ndirector = Agent(\n agent_name=\"Director\",\n system_prompt=\"Directs the tasks for the accountants\",\n llm=OpenAIChat(),\n max_loops=1,\n dashboard=False,\n streaming_on=True,\n verbose=True,\n stopping_token=\"<DONE>\",\n state_save_file_type=\"json\",\n saved_state_path=\"director.json\",\n)\n\n# Initialize accountant 1\naccountant1 = Agent(\n agent_name=\"Accountant1\",\n system_prompt=\"Prepares financial statements\",\n llm=OpenAIChat(),\n max_loops=1,\n dashboard=False,\n streaming_on=True,\n verbose=True,\n stopping_token=\"<DONE>\",\n state_save_file_type=\"json\",\n saved_state_path=\"accountant1.json\",\n)\n\n# Initialize accountant 2\naccountant2 = Agent(\n agent_name=\"Accountant2\",\n system_prompt=\"Audits financial records\",\n llm=OpenAIChat(),\n max_loops=1,\n dashboard=False,\n streaming_on=True,\n verbose=True,\n stopping_token=\"<DONE>\",\n state_save_file_type=\"json\",\n saved_state_path=\"accountant2.json\",\n)\n\n# Create a list of agents\nagents = [director, accountant1, accountant2]\n\n\n# Swarm\nswarm = MixtureOfAgents(\n name=\"Mixture of Accountants\",\n agents=agents,\n layers=3,\n final_agent=director,\n)\n\n\n# Run the swarm\nout = swarm.run(\"Prepare financial statements and audit financial records\")\nprint(out)\n
"},{"location":"swarms/structs/multi_agent_orchestration/","title":"Multi-Agent Orchestration:","text":"Swarms was designed to facilitate communication among many different specialized agents from a vast array of other frameworks, such as LangChain, AutoGen, Crew, and more.
In traditional swarm theory, there are many types of swarms, each suited to very specialized use cases and problem sets. Hierarchical and sequential swarms, for example, are great for accounting and sales, because a boss coordinator agent typically distributes the workload to the other specialized agents.
Name Description Code Link Use Cases Hierarchical Swarms A system where agents are organized in a hierarchy, with higher-level agents coordinating lower-level agents to achieve complex tasks. Code Link Manufacturing process optimization, multi-level sales management, healthcare resource coordination Agent Rearrange A setup where agents rearrange themselves dynamically based on the task requirements and environmental conditions. Code Link Adaptive manufacturing lines, dynamic sales territory realignment, flexible healthcare staffing Concurrent Workflows Agents perform different tasks simultaneously, coordinating to complete a larger goal. Code Link Concurrent production lines, parallel sales operations, simultaneous patient care processes Sequential Coordination Agents perform tasks in a specific sequence, where the completion of one task triggers the start of the next. Code Link Step-by-step assembly lines, sequential sales processes, stepwise patient treatment workflows Parallel Processing Agents work on different parts of a task simultaneously to speed up the overall process. Code Link Parallel data processing in manufacturing, simultaneous sales analytics, concurrent medical tests"},{"location":"swarms/structs/multi_agent_router/","title":"MultiAgentRouter Documentation","text":"The MultiAgentRouter is a sophisticated task routing system that efficiently delegates tasks to specialized AI agents. It uses a \"boss\" agent to analyze incoming tasks and route them to the most appropriate specialized agent based on their capabilities and expertise.
"},{"location":"swarms/structs/multi_agent_router/#table-of-contents","title":"Table of Contents","text":"pip install swarms\n
"},{"location":"swarms/structs/multi_agent_router/#key-components","title":"Key Components","text":""},{"location":"swarms/structs/multi_agent_router/#arguments-table","title":"Arguments Table","text":"Argument Type Default Description name str \"swarm-router\" Name identifier for the router instance description str \"Routes tasks...\" Description of the router's purpose agents List[Agent] [] List of available specialized agents model str \"gpt-4o-mini\" Base language model for the boss agent temperature float 0.1 Temperature parameter for model outputs shared_memory_system callable None Optional shared memory system output_type Literal[\"json\", \"string\"] \"json\" Format of agent outputs execute_task bool True Whether to execute routed tasks"},{"location":"swarms/structs/multi_agent_router/#methods-table","title":"Methods Table","text":"Method Arguments Returns Description route_task task: str dict Routes a single task to appropriate agent batch_run tasks: List[str] List[dict] Sequentially routes multiple tasks concurrent_batch_run tasks: List[str] List[dict] Concurrently routes multiple tasks query_ragent task: str str Queries the research agent find_agent_in_list agent_name: str Optional[Agent] Finds agent by name"},{"location":"swarms/structs/multi_agent_router/#production-examples","title":"Production Examples","text":""},{"location":"swarms/structs/multi_agent_router/#healthcare-example","title":"Healthcare Example","text":"from swarms import Agent, MultiAgentRouter\n\n# Define specialized healthcare agents\nagents = [\n Agent(\n agent_name=\"DiagnosisAgent\",\n description=\"Specializes in preliminary symptom analysis and diagnostic suggestions\",\n system_prompt=\"\"\"You are a medical diagnostic assistant. 
Analyze symptoms and provide \n evidence-based diagnostic suggestions, always noting this is for informational purposes \n only and recommending professional medical consultation.\"\"\",\n model_name=\"openai/gpt-4o\"\n ),\n Agent(\n agent_name=\"TreatmentPlanningAgent\",\n description=\"Assists in creating treatment plans and medical documentation\",\n system_prompt=\"\"\"You are a treatment planning assistant. Help create structured \n treatment plans based on confirmed diagnoses, following medical best practices \n and guidelines.\"\"\",\n model_name=\"openai/gpt-4o\"\n ),\n Agent(\n agent_name=\"MedicalResearchAgent\",\n description=\"Analyzes medical research papers and clinical studies\",\n system_prompt=\"\"\"You are a medical research analyst. Analyze and summarize medical \n research papers, clinical trials, and scientific studies, providing evidence-based \n insights.\"\"\",\n model_name=\"openai/gpt-4o\"\n )\n]\n\n# Initialize router\nhealthcare_router = MultiAgentRouter(\n name=\"Healthcare-Router\",\n description=\"Routes medical and healthcare-related tasks to specialized agents\",\n agents=agents,\n model=\"gpt-4o\",\n temperature=0.1\n)\n\n# Example usage\ntry:\n # Process medical case\n case_analysis = healthcare_router.route_task(\n \"\"\"Patient presents with: \n - Persistent dry cough for 3 weeks\n - Mild fever (38.1\u00b0C)\n - Fatigue\n Analyze symptoms and suggest potential diagnoses for healthcare provider review.\"\"\"\n )\n\n # Research treatment options\n treatment_research = healthcare_router.route_task(\n \"\"\"Find recent clinical studies on treatment efficacy for community-acquired \n pneumonia in adult patients, focusing on outpatient care.\"\"\"\n )\n\n # Process multiple cases concurrently\n cases = [\n \"Case 1: Patient symptoms...\",\n \"Case 2: Patient symptoms...\",\n \"Case 3: Patient symptoms...\"\n ]\n concurrent_results = healthcare_router.concurrent_batch_run(cases)\n\nexcept Exception as e:\n logger.error(f\"Error in 
healthcare processing: {str(e)}\")\n
"},{"location":"swarms/structs/multi_agent_router/#finance-example","title":"Finance Example","text":"# Define specialized finance agents\nfinance_agents = [\n Agent(\n agent_name=\"MarketAnalysisAgent\",\n description=\"Analyzes market trends and provides trading insights\",\n system_prompt=\"\"\"You are a financial market analyst. Analyze market data, trends, \n and indicators to provide evidence-based market insights and trading suggestions.\"\"\",\n model_name=\"openai/gpt-4o\"\n ),\n Agent(\n agent_name=\"RiskAssessmentAgent\",\n description=\"Evaluates financial risks and compliance requirements\",\n system_prompt=\"\"\"You are a risk assessment specialist. Analyze financial data \n and operations for potential risks, ensuring regulatory compliance and suggesting \n risk mitigation strategies.\"\"\",\n model_name=\"openai/gpt-4o\"\n ),\n Agent(\n agent_name=\"InvestmentAgent\",\n description=\"Provides investment strategies and portfolio management\",\n system_prompt=\"\"\"You are an investment strategy specialist. 
Develop and analyze \n investment strategies, portfolio allocations, and provide long-term financial \n planning guidance.\"\"\",\n model_name=\"openai/gpt-4o\"\n )\n]\n\n# Initialize finance router\nfinance_router = MultiAgentRouter(\n name=\"Finance-Router\",\n description=\"Routes financial analysis and investment tasks\",\n agents=finance_agents\n)\n\n# Example tasks\ntasks = [\n \"\"\"Analyze current market conditions for technology sector, focusing on:\n - AI/ML companies\n - Semiconductor manufacturers\n - Cloud service providers\n Provide risk assessment and investment opportunities.\"\"\",\n\n \"\"\"Develop a diversified portfolio strategy for a conservative investor with:\n - Investment horizon: 10 years\n - Risk tolerance: Low to medium\n - Initial investment: $500,000\n - Monthly contribution: $5,000\"\"\",\n\n \"\"\"Conduct risk assessment for a fintech startup's crypto trading platform:\n - Regulatory compliance requirements\n - Security measures\n - Operational risks\n - Market risks\"\"\"\n]\n\n# Process tasks concurrently\nresults = finance_router.concurrent_batch_run(tasks)\n
"},{"location":"swarms/structs/multi_agent_router/#legal-example","title":"Legal Example","text":"# Define specialized legal agents\nlegal_agents = [\n Agent(\n agent_name=\"ContractAnalysisAgent\",\n description=\"Analyzes legal contracts and documents\",\n system_prompt=\"\"\"You are a legal document analyst. Review contracts and legal \n documents for key terms, potential issues, and compliance requirements.\"\"\",\n model_name=\"openai/gpt-4o\"\n ),\n Agent(\n agent_name=\"ComplianceAgent\",\n description=\"Ensures regulatory compliance and updates\",\n system_prompt=\"\"\"You are a legal compliance specialist. Monitor and analyze \n regulatory requirements, ensuring compliance and suggesting necessary updates \n to policies and procedures.\"\"\",\n model_name=\"openai/gpt-4o\"\n ),\n Agent(\n agent_name=\"LegalResearchAgent\",\n description=\"Conducts legal research and case analysis\",\n system_prompt=\"\"\"You are a legal researcher. Research relevant cases, statutes, \n and regulations, providing comprehensive legal analysis and citations.\"\"\",\n model_name=\"openai/gpt-4o\"\n )\n]\n\n# Initialize legal router\nlegal_router = MultiAgentRouter(\n name=\"Legal-Router\",\n description=\"Routes legal analysis and compliance tasks\",\n agents=legal_agents\n)\n\n# Example usage for legal department\ncontract_analysis = legal_router.route_task(\n \"\"\"Review the following software licensing agreement:\n [contract text]\n\n Analyze for:\n 1. Key terms and conditions\n 2. Potential risks and liabilities\n 3. Compliance with current regulations\n 4. Suggested modifications\"\"\"\n)\n
"},{"location":"swarms/structs/multi_agent_router/#error-handling-and-best-practices","title":"Error Handling and Best Practices","text":"Always use try-except blocks for task routing:
from loguru import logger\n\ntry:\n result = router.route_task(task)\nexcept Exception as e:\n logger.error(f\"Task routing failed: {str(e)}\")\n
Monitor agent performance:
if result[\"execution\"][\"execution_time\"] > 5.0:\n logger.warning(f\"Long execution time for task: {result['task']['original']}\")\n
Implement rate limiting for concurrent tasks:
from concurrent.futures import ThreadPoolExecutor\n\n# Route tasks through a bounded thread pool to cap concurrency\nwith ThreadPoolExecutor(max_workers=5) as executor:\n results = list(executor.map(router.route_task, tasks))\n
Regular agent validation:
for agent in router.agents.values():\n if not agent.validate():\n logger.error(f\"Agent validation failed: {agent.name}\")\n
Task Batching
Group similar tasks together
Use concurrent_batch_run for independent tasks
Monitor memory usage with large batches
Model Selection
Choose appropriate models based on task complexity
Balance speed vs. accuracy requirements
Consider cost implications
Response Caching
Implement caching for frequently requested analyses
Use shared memory system for repeated queries
Regular cache invalidation for time-sensitive data
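The caching guidance above can be sketched with a small TTL cache in plain Python. This is an illustrative sketch only: `route_task` mirrors the router method shown earlier, but the `TTLCache` class, `cached_route` helper, and the 60-second TTL are assumptions, not part of the MultiAgentRouter API.

```python
import time

# Minimal TTL cache sketch for repeated router queries (illustrative only).
class TTLCache:
    def __init__(self, ttl_seconds=60.0):
        self.ttl = ttl_seconds
        self._store = {}  # task -> (timestamp, result)

    def get(self, task):
        entry = self._store.get(task)
        if entry is None:
            return None
        ts, result = entry
        if time.monotonic() - ts > self.ttl:
            del self._store[task]  # invalidate stale, time-sensitive entries
            return None
        return result

    def put(self, task, result):
        self._store[task] = (time.monotonic(), result)


cache = TTLCache(ttl_seconds=60.0)

def cached_route(router, task):
    # Serve repeated queries from the cache; fall through to the router on miss.
    result = cache.get(task)
    if result is None:
        result = router.route_task(task)
        cache.put(task, result)
    return result
```

A shared memory system can replace the in-process dict when multiple routers must see the same cache.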
Data Privacy
Implement data encryption
Handle sensitive information appropriately
Regular security audits
Access Control
Implement role-based access
Audit logging
Regular permission reviews
Performance Metrics
Response times
Success rates
Error rates
Resource utilization
Logging
Use structured logging
Implement log rotation
Regular log analysis
Alerts
Set up alerting for critical errors
Monitor resource usage
Track API rate limits
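The metrics and alerting checklist above can be sketched as a small in-process tracker. Everything here is a hypothetical illustration: the `RouterMetrics` class, the 5-second slow-task threshold, and the 25% error-rate cutoff are assumptions to adapt, not framework features.

```python
# Sketch of the monitoring ideas above: response times, success/error rates,
# and a simple alert check. Thresholds are illustrative assumptions.
class RouterMetrics:
    def __init__(self, slow_threshold_s=5.0):
        self.slow_threshold_s = slow_threshold_s
        self.successes = 0
        self.errors = 0
        self.durations = []

    def record(self, duration_s, ok):
        # Call once per routed task with its wall-clock duration and outcome.
        self.durations.append(duration_s)
        if ok:
            self.successes += 1
        else:
            self.errors += 1

    @property
    def error_rate(self):
        total = self.successes + self.errors
        return self.errors / total if total else 0.0

    def alerts(self):
        out = []
        if self.durations and max(self.durations) > self.slow_threshold_s:
            out.append("slow task detected")
        if self.error_rate > 0.25:
            out.append("high error rate")
        return out
```

In production these counters would feed structured logs or a metrics backend rather than a Python list.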
Hierarchical agent orchestration involves organizing multiple agents in structured layers to efficiently handle complex tasks. There are several key architectures available, each with distinct characteristics and use cases.
Here are the Hierarchical swarms we support:
Architecture Strengths Weaknesses HHCS - Clear task routing- Specialized swarm handling- Parallel processing capability- Good for complex multi-domain tasks - More complex setup- Overhead in routing- Requires careful swarm design Auto Agent Builder - Dynamic agent creation- Flexible scaling- Self-organizing- Good for evolving tasks - Higher resource usage- Potential creation overhead- May create redundant agents SwarmRouter - Multiple workflow types- Simple configuration- Flexible deployment- Good for varied task types - Less specialized than HHCS- Limited inter-swarm communication- May require manual type selection"},{"location":"swarms/structs/multi_swarm_orchestration/#core-architectures","title":"Core Architectures","text":""},{"location":"swarms/structs/multi_swarm_orchestration/#1-hybrid-hierarchical-cluster-swarm-hhcs","title":"1. Hybrid Hierarchical-Cluster Swarm (HHCS)","text":"Hybrid Hierarchical-Cluster Swarm (HHCS) is an architecture that uses a Router Agent to analyze and distribute tasks to other swarms.
Tasks are routed to specialized swarms based on their requirements
Enables parallel processing through multiple specialized swarms
Ideal for complex, multi-domain tasks and enterprise-scale operations
Provides clear task routing but requires more complex setup
flowchart TD\n Start([Task Input]) --> RouterAgent[Router Agent]\n RouterAgent --> Analysis{Task Analysis}\n\n Analysis -->|Analyze Requirements| Selection[Swarm Selection]\n Selection -->|Select Best Swarm| Route[Route Task]\n\n Route --> Swarm1[Specialized Swarm 1]\n Route --> Swarm2[Specialized Swarm 2]\n Route --> SwarmN[Specialized Swarm N]\n\n Swarm1 -->|Process| Result1[Output 1]\n Swarm2 -->|Process| Result2[Output 2]\n SwarmN -->|Process| ResultN[Output N]\n\n Result1 --> Final[Final Output]\n Result2 --> Final\n ResultN --> Final
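The routing flow above can be sketched in plain Python. This is not the HHCS API: keyword matching stands in for the Router Agent's LLM-based task analysis, and the `make_router` factory and the callables standing in for swarms are hypothetical.

```python
# Plain-Python sketch of HHCS-style routing: a router inspects the task and
# dispatches it to a specialized "swarm" (here, just a callable).
def make_router(swarms, default):
    def route(task):
        for keyword, swarm in swarms.items():
            if keyword in task.lower():  # stand-in for LLM task analysis
                return swarm(task)
        return default(task)
    return route


finance_swarm = lambda t: f"[finance] {t}"
legal_swarm = lambda t: f"[legal] {t}"
general_swarm = lambda t: f"[general] {t}"

route = make_router(
    {"invoice": finance_swarm, "contract": legal_swarm},
    default=general_swarm,
)
```

The real architecture runs the selected swarms in parallel and merges their outputs into a final result.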
"},{"location":"swarms/structs/multi_swarm_orchestration/#2-auto-agent-builder","title":"2. Auto Agent Builder","text":"Auto Agent Builder is a dynamic agent architecture that creates specialized agents on-demand.
Analyzes tasks and automatically builds appropriate agents for the job
Maintains an agent pool that feeds into task orchestration
Best suited for evolving requirements and dynamic workloads
Self-organizing but may have higher resource usage
flowchart TD\n Task[Task Input] --> Builder[Agent Builder]\n Builder --> Analysis{Task Analysis}\n\n Analysis --> Create[Create Specialized Agents]\n Create --> Pool[Agent Pool]\n\n Pool --> Agent1[Specialized Agent 1]\n Pool --> Agent2[Specialized Agent 2]\n Pool --> AgentN[Specialized Agent N]\n\n Agent1 --> Orchestration[Task Orchestration]\n Agent2 --> Orchestration\n AgentN --> Orchestration\n\n Orchestration --> Result[Final Result]
"},{"location":"swarms/structs/multi_swarm_orchestration/#3-swarmrouter","title":"3. SwarmRouter","text":"SwarmRouter is a flexible system supporting multiple swarm architectures through a simple interface:
Sequential workflows
Concurrent workflows
Hierarchical swarms
Group chat interactions
Simpler to configure and deploy compared to other architectures
Best for general-purpose tasks and smaller scale operations
Recommended for 5-20 agents.
flowchart TD\n Input[Task Input] --> Router[Swarm Router]\n Router --> TypeSelect{Swarm Type Selection}\n\n TypeSelect -->|Sequential| Seq[Sequential Workflow]\n TypeSelect -->|Concurrent| Con[Concurrent Workflow]\n TypeSelect -->|Hierarchical| Hier[Hierarchical Swarm]\n TypeSelect -->|Group| Group[Group Chat]\n\n Seq --> Output[Task Output]\n Con --> Output\n Hier --> Output\n Group --> Output
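The type-selection step in the flow above can be sketched as a single dispatch function. This is an illustrative sketch, not the SwarmRouter API: `run_swarm` and the callables standing in for agents are hypothetical, and only two of the four workflow types are shown.

```python
# Sketch of SwarmRouter's type selection: one entry point dispatching a task
# to different workflow styles over the same set of agents.
def run_swarm(swarm_type, agents, task):
    if swarm_type == "sequential":
        result = task
        for agent in agents:  # each agent's output feeds the next
            result = agent(result)
        return [result]
    if swarm_type == "concurrent":
        # every agent works on the same task independently
        return [agent(task) for agent in agents]
    raise ValueError(f"unknown swarm type: {swarm_type}")


upper = lambda s: s.upper()
exclaim = lambda s: s + "!"
```

Hierarchical and group-chat types would extend the dispatch with coordinator and turn-taking logic, respectively.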
"},{"location":"swarms/structs/multi_swarm_orchestration/#use-case-recommendations","title":"Use Case Recommendations","text":""},{"location":"swarms/structs/multi_swarm_orchestration/#hhcs-best-for","title":"HHCS: Best for:","text":"Enterprise-scale operations
Multi-domain problems
Complex task routing
Parallel processing needs
Dynamic workloads
Evolving requirements
Research and development
Exploratory tasks
General purpose tasks
Quick deployment
Mixed workflow types
Smaller scale operations
Hybrid Hierarchical-Cluster Swarm Documentation
Covers detailed implementation, constructor arguments, and full examples
Agent Builder Documentation
Includes enterprise use cases, best practices, and integration patterns
SwarmRouter Documentation
Provides comprehensive API reference, advanced usage, and use cases
Simple tasks \u2192 SwarmRouter
Complex, multi-domain tasks \u2192 HHCS
Dynamic, evolving tasks \u2192 Auto Agent Builder
Small scale \u2192 SwarmRouter
Large scale \u2192 HHCS
Variable scale \u2192 Auto Agent Builder
Limited resources \u2192 SwarmRouter
Abundant resources \u2192 HHCS or Auto Agent Builder
Dynamic resources \u2192 Auto Agent Builder
Quick deployment \u2192 SwarmRouter
Complex system \u2192 HHCS
Experimental system \u2192 Auto Agent Builder
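The decision guide above can be condensed into a small helper. This is purely an illustrative encoding of the recommendations: the `choose_architecture` function and its `"low" | "high" | "dynamic"` vocabulary are assumptions, not part of the framework.

```python
# Hedged helper encoding the selection rules above (illustrative only).
def choose_architecture(complexity, scale, resources):
    """Each argument is 'low', 'high', or 'dynamic'."""
    # Dynamic/evolving anything points at Auto Agent Builder.
    if "dynamic" in (complexity, scale, resources):
        return "Auto Agent Builder"
    # High complexity or large scale favors HHCS.
    if complexity == "high" or scale == "high":
        return "HHCS"
    # Simple tasks, small scale, limited resources: SwarmRouter.
    return "SwarmRouter"
```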
This documentation provides a high-level overview of the main hierarchical agent orchestration architectures available in the system. Each architecture has its own strengths and ideal use cases, and the choice between them should be based on specific project requirements, scale, and complexity.
"},{"location":"swarms/structs/multi_threaded_workflow/","title":"MultiThreadedWorkflow Documentation","text":"The MultiThreadedWorkflow
class represents a multi-threaded workflow designed to execute tasks concurrently using a thread pool. This class is highly useful in scenarios where tasks need to be executed in parallel to improve performance and efficiency. The workflow ensures that tasks are managed in a priority-based queue, and it includes mechanisms for retrying failed tasks and optionally saving task results automatically.
MultiThreadedWorkflow
","text":""},{"location":"swarms/structs/multi_threaded_workflow/#parameters","title":"Parameters","text":"Parameter Type Default Description max_workers
int
5
The maximum number of worker threads in the thread pool. autosave
bool
True
Flag indicating whether to automatically save task results. tasks
List[PriorityTask]
None
List of priority tasks to be executed. retry_attempts
int
3
The maximum number of retry attempts for failed tasks. *args
tuple
Variable length argument list. **kwargs
dict
Arbitrary keyword arguments."},{"location":"swarms/structs/multi_threaded_workflow/#attributes","title":"Attributes","text":"Attribute Type Description max_workers
int
The maximum number of worker threads in the thread pool. autosave
bool
Flag indicating whether to automatically save task results. retry_attempts
int
The maximum number of retry attempts for failed tasks. tasks_queue
PriorityQueue
The queue that holds the priority tasks. lock
Lock
The lock used for thread synchronization."},{"location":"swarms/structs/multi_threaded_workflow/#methods","title":"Methods","text":""},{"location":"swarms/structs/multi_threaded_workflow/#run","title":"run
","text":""},{"location":"swarms/structs/multi_threaded_workflow/#description","title":"Description","text":"The run
method executes the tasks stored in the priority queue using a thread pool. It handles task completion, retries failed tasks up to a specified number of attempts, and optionally saves the results of tasks if the autosave flag is set.
from swarms import MultiThreadedWorkflow, PriorityTask, Task\n\n# Define some tasks\ntasks = [PriorityTask(task=Task()), PriorityTask(task=Task())]\n\n# Create a MultiThreadedWorkflow instance\nworkflow = MultiThreadedWorkflow(max_workers=3, autosave=True, tasks=tasks, retry_attempts=2)\n\n# Run the workflow\nresults = workflow.run()\nprint(results)\n
"},{"location":"swarms/structs/multi_threaded_workflow/#_autosave_task_result","title":"_autosave_task_result
","text":""},{"location":"swarms/structs/multi_threaded_workflow/#description_1","title":"Description","text":"The _autosave_task_result
method is responsible for saving the results of a task. It uses a thread lock to ensure that the autosave operation is thread-safe.
This method is intended for internal use and is typically called by the run
method. However, here is an example of how it might be used directly:
# Create a task and result\ntask = Task()\nresult = task.run()\n\n# Autosave the result\nworkflow = MultiThreadedWorkflow()\nworkflow._autosave_task_result(task, result)\n
"},{"location":"swarms/structs/multi_threaded_workflow/#detailed-functionality-and-usage","title":"Detailed Functionality and Usage","text":""},{"location":"swarms/structs/multi_threaded_workflow/#initialization","title":"Initialization","text":"When an instance of MultiThreadedWorkflow
is created, it initializes the following:
The run
method performs the following steps:
ThreadPoolExecutor
to manage the threads.wait
function to monitor the completion of tasks. Once a task is completed, it retrieves the result or catches exceptions.autosave
flag is set, the _autosave_task_result
method is called to save the task results.The _autosave_task_result
method handles the saving of task results. It uses a threading lock to ensure that the save operation is not interrupted by other threads.
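The run-and-retry behavior described above can be sketched with the standard library alone. This is a simplified sketch, not the `MultiThreadedWorkflow` implementation: the `run_with_retries` function is hypothetical, and the priority queue and autosave steps are omitted.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

# Sketch of run(): execute tasks in a thread pool, retrying each failed task
# up to retry_attempts times before giving up.
def run_with_retries(tasks, max_workers=5, retry_attempts=3):
    """tasks: list of (name, zero-argument callable). Returns {name: result}."""
    def attempt(fn):
        last_exc = None
        for _ in range(retry_attempts):
            try:
                return fn()
            except Exception as exc:  # retry transient failures
                last_exc = exc
        raise last_exc

    results = {}
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(attempt, fn): name for name, fn in tasks}
        for fut in as_completed(futures):
            results[futures[fut]] = fut.result()
    return results
```

The real class additionally drains a `PriorityQueue` and, when `autosave` is set, persists each result under a thread lock.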
For more information on threading and concurrent execution in Python, refer to the following resources:
This page provides a comprehensive overview of all available multi-agent architectures in Swarms, their use cases, and functionality.
"},{"location":"swarms/structs/overview/#architecture-comparison","title":"Architecture Comparison","text":"Core ArchitecturesWorkflow ArchitecturesHierarchical Architectures Architecture Use Case Key Functionality Documentation MajorityVoting Decision making through consensus Combines multiple agent opinions and selects the most common answer Docs AgentRearrange Optimizing agent order Dynamically reorders agents based on task requirements Docs RoundRobin Equal task distribution Cycles through agents in a fixed order Docs Mixture of Agents Complex problem solving Combines diverse expert agents for comprehensive analysis Docs GroupChat Collaborative discussions Simulates group discussions with multiple agents Docs AgentRegistry Agent management Central registry for managing and accessing agents Docs SpreadSheetSwarm Data processing Collaborative data processing and analysis Docs ForestSwarm Hierarchical decision making Tree-like structure for complex decision processes Docs SwarmRouter Task routing Routes tasks to appropriate agents based on requirements Docs TaskQueueSwarm Task management Manages and prioritizes tasks in a queue Docs SwarmRearrange Dynamic swarm optimization Optimizes swarm configurations for specific tasks Docs MultiAgentRouter Advanced task routing Routes tasks to specialized agents based on capabilities Docs MatrixSwarm Parallel processing Matrix-based organization for parallel task execution Docs ModelRouter Model selection Routes tasks to appropriate AI models Docs MALT Multi-agent learning Enables agents to learn from each other Docs Deep Research Swarm Research automation Conducts comprehensive research across multiple domains Docs Swarm Matcher Agent matching Matches tasks with appropriate agent combinations Docs Architecture Use Case Key Functionality Documentation ConcurrentWorkflow Parallel task execution Executes multiple tasks simultaneously Docs SequentialWorkflow Step-by-step processing Executes tasks in a specific sequence Docs 
GraphWorkflow Complex task dependencies Manages tasks with complex dependencies Docs Architecture Use Case Key Functionality Documentation HierarchicalSwarm Hierarchical task orchestration Director agent coordinates specialized worker agents Docs Auto Agent Builder Automated agent creation Automatically creates and configures agents Docs Hybrid Hierarchical-Cluster Swarm Complex organization Combines hierarchical and cluster-based organization Docs Auto Swarm Builder Automated swarm creation Automatically creates and configures swarms Docs"},{"location":"swarms/structs/overview/#communication-structure","title":"Communication Structure","text":"Communication Protocols
The Conversation documentation details the communication protocols and structures used between agents in these architectures.
"},{"location":"swarms/structs/overview/#choosing-the-right-architecture","title":"Choosing the Right Architecture","text":"When selecting a multi-agent architecture, consider the following factors:
Task Complexity
Simple tasks may only need basic architectures like RoundRobin, while complex tasks might require Hierarchical or Graph-based approaches.
Parallelization Needs
If tasks can be executed in parallel, consider ConcurrentWorkflow or MatrixSwarm.
Decision Making Requirements
For consensus-based decisions, MajorityVoting is ideal.
Resource Optimization
If you need to optimize agent usage, consider SwarmRouter or TaskQueueSwarm.
Learning Requirements
If agents need to learn from each other, MALT is the appropriate choice.
Dynamic Adaptation
For tasks requiring dynamic adaptation, consider SwarmRearrange or Auto Swarm Builder.
For more detailed information about each architecture, please refer to their respective documentation pages.
"},{"location":"swarms/structs/round_robin_swarm/","title":"RoundRobin: Round-Robin Task Execution in a Swarm","text":""},{"location":"swarms/structs/round_robin_swarm/#introduction","title":"Introduction","text":"The RoundRobinSwarm
class is designed to manage and execute tasks among multiple agents in a round-robin fashion. This approach ensures that each agent in a swarm receives an equal opportunity to execute tasks, which promotes fairness and efficiency in distributed systems. It is particularly useful in environments where collaborative, sequential task execution is needed among various agents.
Round-robin is a scheduling technique commonly used in computing for managing processes in shared systems. It involves assigning a fixed time slot to each process and cycling through all processes in a circular order without prioritization. In the context of swarms of agents, this method ensures equitable distribution of tasks and resource usage among all agents.
"},{"location":"swarms/structs/round_robin_swarm/#application-in-swarms","title":"Application in Swarms","text":"In swarms, RoundRobinSwarm
utilizes round-robin scheduling to manage tasks among agents such as software components, autonomous robots, or virtual entities. This strategy is beneficial where tasks are interdependent or require sequential processing.
agents (List[Agent])
: List of agents participating in the swarm.verbose (bool)
: Enables or disables detailed logging of swarm operations.max_loops (int)
: Limits the number of times the swarm cycles through all agents.index (int)
: Maintains the current position in the agent list to ensure round-robin execution.__init__
","text":"Initializes the swarm with the provided list of agents, verbosity setting, and operational parameters.
Parameters: - agents
: Optional list of agents in the swarm. - verbose
: Boolean flag for detailed logging. - max_loops
: Maximum number of execution cycles. - callback
: Optional function called after each loop.
run
","text":"Executes a specified task across all agents in a round-robin manner, cycling through each agent repeatedly for the number of specified loops.
Conceptual Behavior: - Distribute the task sequentially among all agents starting from the current index. - Each agent processes the task and potentially modifies it or produces new output. - After an agent completes its part of the task, the index moves to the next agent. - This cycle continues until the specified maximum number of loops is completed. - Optionally, a callback function can be invoked after each loop to handle intermediate results or perform additional actions.
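The cycling behavior described above reduces to advancing an index modulo the number of agents. This is a plain-Python sketch of that idea, not the `RoundRobinSwarm` implementation; the `round_robin_run` function and the callables standing in for agents are hypothetical.

```python
# Minimal sketch of round-robin task distribution: the index advances modulo
# len(agents), so every agent gets an equal turn each loop.
def round_robin_run(agents, task, loops=1):
    outputs, index = [], 0
    for _ in range(loops * len(agents)):
        agent = agents[index]
        outputs.append(agent(task))
        index = (index + 1) % len(agents)  # move to the next agent
    return outputs


a = lambda t: f"A:{t}"
b = lambda t: f"B:{t}"
c = lambda t: f"C:{t}"
```

The real class also threads intermediate outputs and an optional callback through each loop.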
"},{"location":"swarms/structs/round_robin_swarm/#examples","title":"Examples","text":""},{"location":"swarms/structs/round_robin_swarm/#example-1-load-balancing-among-servers","title":"Example 1: Load Balancing Among Servers","text":"In this example, RoundRobinSwarm
is used to distribute network requests evenly among a group of servers. This is common in scenarios where load balancing is crucial for maintaining system responsiveness and scalability.
from swarms import Agent, RoundRobinSwarm\nfrom swarm_models import OpenAIChat\n\n\n# Initialize the LLM\nllm = OpenAIChat()\n\n# Define sales agents\nsales_agent1 = Agent(\n agent_name=\"Sales Agent 1 - Automation Specialist\",\n system_prompt=\"You're Sales Agent 1, your purpose is to generate sales for a company by focusing on the benefits of automating accounting processes!\",\n agent_description=\"Generate sales by focusing on the benefits of automation!\",\n llm=llm,\n max_loops=1,\n autosave=True,\n dashboard=False,\n verbose=True,\n streaming_on=True,\n context_length=1000,\n)\n\nsales_agent2 = Agent(\n agent_name=\"Sales Agent 2 - Cost Saving Specialist\",\n system_prompt=\"You're Sales Agent 2, your purpose is to generate sales for a company by emphasizing the cost savings of using swarms of agents!\",\n agent_description=\"Generate sales by emphasizing cost savings!\",\n llm=llm,\n max_loops=1,\n autosave=True,\n dashboard=False,\n verbose=True,\n streaming_on=True,\n context_length=1000,\n)\n\nsales_agent3 = Agent(\n agent_name=\"Sales Agent 3 - Efficiency Specialist\",\n system_prompt=\"You're Sales Agent 3, your purpose is to generate sales for a company by highlighting the efficiency and accuracy of our swarms of agents in accounting processes!\",\n agent_description=\"Generate sales by highlighting efficiency and accuracy!\",\n llm=llm,\n max_loops=1,\n autosave=True,\n dashboard=False,\n verbose=True,\n streaming_on=True,\n context_length=1000,\n)\n\n# Initialize the swarm with sales agents\nsales_swarm = RoundRobinSwarm(agents=[sales_agent1, sales_agent2, sales_agent3], verbose=True)\n\n# Define a sales task\ntask = \"Generate a sales email for an accountant firm executive to sell swarms of agents to automate their accounting processes.\"\n\n# Distribute sales tasks to different agents\nfor _ in range(5): # Repeat the task 5 times\n results = sales_swarm.run(task)\n print(\"Sales generated:\", results)\n
"},{"location":"swarms/structs/round_robin_swarm/#conclusion","title":"Conclusion","text":"The RoundRobinSwarm class provides a robust and flexible framework for managing tasks among multiple agents in a fair and efficient manner. This class is especially useful in environments where tasks need to be distributed evenly among a group of agents, ensuring that all tasks are handled promptly and effectively. Through the round-robin algorithm, each agent in the swarm is guaranteed an equal opportunity to contribute to the overall task, promoting efficiency and collaboration.
"},{"location":"swarms/structs/sequential_workflow/","title":"SequentialWorkflow Documentation","text":"Overview: A Sequential Swarm architecture processes tasks in a linear sequence. Each agent completes its task before passing the result to the next agent in the chain. This architecture ensures orderly processing and is useful when tasks have dependencies. Learn more in the docs:
Use-Cases:
Workflows where each step depends on the previous one, such as assembly lines or sequential data processing.
Scenarios requiring strict order of operations.
graph TD\n A[First Agent] --> B[Second Agent]\n B --> C[Third Agent]\n C --> D[Fourth Agent]
"},{"location":"swarms/structs/sequential_workflow/#attributes","title":"Attributes","text":"Attribute Type Description agents
List[Agent]
The list of agents in the workflow. flow
str
A string representing the order of agents. agent_rearrange
AgentRearrange
Manages the dynamic execution of agents."},{"location":"swarms/structs/sequential_workflow/#methods","title":"Methods","text":""},{"location":"swarms/structs/sequential_workflow/#__init__self-agents-listagent-none-max_loops-int-1-args-kwargs","title":"__init__(self, agents: List[Agent] = None, max_loops: int = 1, *args, **kwargs)
","text":"The constructor initializes the SequentialWorkflow
object.
agents
(List[Agent]
, optional): The list of agents in the workflow. Defaults to None
.max_loops
(int
, optional): The maximum number of loops to execute the workflow. Defaults to 1
.*args
: Variable length argument list.**kwargs
: Arbitrary keyword arguments.run(self, task: str) -> str
","text":"Runs the specified task through the agents in the dynamically constructed flow.
task
(str
): The task for the agents to execute.
Returns:
str
: The final result after processing through all agents.from swarms import Agent, SequentialWorkflow\n\n# Initialize agents for individual tasks\nagent1 = Agent(\n agent_name=\"ICD-10 Code Analyzer\",\n system_prompt=\"Analyze medical data and provide relevant ICD-10 codes.\",\n model_name=\"gpt-4o\",\n max_loops=1,\n)\nagent2 = Agent(\n agent_name=\"ICD-10 Code Summarizer\",\n system_prompt=\"Summarize the findings and suggest ICD-10 codes.\",\n model_name=\"gpt-4o\",\n max_loops=1,\n)\n\n# Create the Sequential workflow\nworkflow = SequentialWorkflow(\n agents=[agent1, agent2], max_loops=1, verbose=False\n)\n\n# Run the workflow\nworkflow.run(\n \"Analyze the medical report and provide the appropriate ICD-10 codes.\"\n)\n
This example initializes a SequentialWorkflow
with two agents and executes a task, printing the final result.
The run
method includes logging to track the execution flow and captures errors to provide detailed information in case of failures. This is crucial for debugging and ensuring smooth operation of the workflow.
Ensure that the agents provided to the SequentialWorkflow
are properly initialized and configured to handle the tasks they will receive.
The max_loops
parameter can be used to control how many times the workflow should be executed, which is useful for iterative processes.
Utilize the logging information to monitor and debug the task execution process.
class SpreadSheetSwarm:\n
"},{"location":"swarms/structs/spreadsheet_swarm/#full-path","title":"Full Path","text":"from swarms.structs.spreadsheet_swarm import SpreadSheetSwarm\n
"},{"location":"swarms/structs/spreadsheet_swarm/#attributes","title":"Attributes","text":"The SpreadSheetSwarm
class contains several attributes that define its behavior and configuration. These attributes are initialized in the constructor (__init__
method) and are used throughout the class to manage the swarm's operations.
name
str
The name of the swarm. description
str
A description of the swarm's purpose. agents
Union[Agent, List[Agent]]
The agents participating in the swarm. Can be a single agent or a list of agents. autosave_on
bool
Flag indicating whether autosave is enabled. save_file_path
str
The file path where the swarm data will be saved. task_queue
queue.Queue
The queue that stores tasks to be processed by the agents. lock
threading.Lock
A lock used for thread synchronization to prevent race conditions. metadata
SwarmRunMetadata
Metadata for the swarm run, including start time, end time, tasks completed, and outputs. run_all_agents
bool
Flag indicating whether to run all agents or just one. max_loops
int
The number of times to repeat the task. workspace_dir
str
The directory where the workspace is located, retrieved from environment variables."},{"location":"swarms/structs/spreadsheet_swarm/#parameters","title":"Parameters","text":"name
(str
, optional): The name of the swarm. Default is \"Spreadsheet-Swarm\"
.description
(str
, optional): A brief description of the swarm. Default is \"A swarm that processes tasks from a queue using multiple agents on different threads.\"
.agents
(Union[Agent, List[Agent]]
, optional): The agents participating in the swarm. Default is an empty list.autosave_on
(bool
, optional): A flag to indicate if autosave is enabled. Default is True
.save_file_path
(str
, optional): The file path where swarm data will be saved. Default is \"spreedsheet_swarm.csv\"
.run_all_agents
(bool
, optional): Flag to determine if all agents should run. Default is True
.max_loops
(int
, optional): The number of times to repeat the task. Default is 1
.workspace_dir
(str
, optional): The directory where the workspace is located. Default is retrieved from environment variable WORKSPACE_DIR
.__init__
)","text":"The constructor initializes the SpreadSheetSwarm
with the provided parameters. It sets up the task queue, locks for thread synchronization, and initializes the metadata.
"},{"location":"swarms/structs/spreadsheet_swarm/#reliability_check","title":"reliability_check
","text":"def reliability_check(self):\n
"},{"location":"swarms/structs/spreadsheet_swarm/#description","title":"Description","text":"The reliability_check
method performs a series of checks to ensure that the swarm is properly configured before it begins processing tasks. It verifies that there are agents available and that a valid file path is provided for saving the swarm's data. If any of these checks fail, an exception is raised.
ValueError
: Raised if no agents are provided or if no save file path is specified.swarm = SpreadSheetSwarm(agents=[agent1, agent2])\nswarm.reliability_check()\n
"},{"location":"swarms/structs/spreadsheet_swarm/#run","title":"run
","text":"def run(self, task: str, *args, **kwargs):\n
"},{"location":"swarms/structs/spreadsheet_swarm/#description_1","title":"Description","text":"The run
method starts task processing in the swarm. Depending on the run_all_agents flag, it runs either every agent or a single one. The method records the start and end times of the run, repeats the task max_loops times, and logs the results.
task
(str
): The task to be executed by the swarm.*args
: Additional positional arguments to pass to the agents.**kwargs
: Additional keyword arguments to pass to the agents.swarm = SpreadSheetSwarm(agents=[agent1, agent2])\nswarm.run(\"Process Data\")\n
"},{"location":"swarms/structs/spreadsheet_swarm/#export_to_json","title":"export_to_json
","text":"def export_to_json(self):\n
"},{"location":"swarms/structs/spreadsheet_swarm/#description_2","title":"Description","text":"The export_to_json
method generates a JSON representation of the swarm's metadata. This can be useful for exporting the results to an external system or for logging purposes.
str
: The JSON representation of the swarm's metadata.json_data = swarm.export_to_json()\nprint(json_data)\n
"},{"location":"swarms/structs/spreadsheet_swarm/#data_to_json_file","title":"data_to_json_file
","text":"def data_to_json_file(self):\n
"},{"location":"swarms/structs/spreadsheet_swarm/#description_3","title":"Description","text":"The data_to_json_file
method saves the swarm's metadata as a JSON file in the specified workspace directory. The file name is generated using the swarm's name and run ID.
swarm.data_to_json_file()\n
"},{"location":"swarms/structs/spreadsheet_swarm/#_track_output","title":"_track_output
","text":"def _track_output(self, agent: Agent, task: str, result: str):\n
"},{"location":"swarms/structs/spreadsheet_swarm/#description_4","title":"Description","text":"The _track_output
method is used internally to record the results of tasks executed by the agents. It updates the metadata with the completed tasks and their results.
agent
(Agent
): The agent that executed the task.task
(str
): The task that was executed.result
(str
): The result of the task execution.swarm._track_output(agent1, \"Process Data\", \"Success\")\n
"},{"location":"swarms/structs/spreadsheet_swarm/#_save_to_csv","title":"_save_to_csv
","text":"def _save_to_csv(self):\n
"},{"location":"swarms/structs/spreadsheet_swarm/#description_5","title":"Description","text":"The _save_to_csv
method saves the swarm's metadata to a CSV file. It logs each task and its result before writing them to the file. The file is saved in the location specified by save_file_path
.
swarm._save_to_csv()\n
"},{"location":"swarms/structs/spreadsheet_swarm/#usage-examples","title":"Usage Examples","text":""},{"location":"swarms/structs/spreadsheet_swarm/#example-1-basic-swarm-initialization","title":"Example 1: Basic Swarm Initialization","text":"import os\n\nfrom swarms import Agent\nfrom swarm_models import OpenAIChat\nfrom swarms.prompts.finance_agent_sys_prompt import (\n FINANCIAL_AGENT_SYS_PROMPT,\n)\nfrom swarms.structs.spreadsheet_swarm import SpreadSheetSwarm\n\n# Example usage:\napi_key = os.getenv(\"OPENAI_API_KEY\")\n\n# Model\nmodel = OpenAIChat(\n openai_api_key=api_key, model_name=\"gpt-4o-mini\", temperature=0.1\n)\n\n\n# Initialize your agents (assuming the Agent class and model are already defined)\nagents = [\n Agent(\n agent_name=f\"Financial-Analysis-Agent-spreesheet-swarm:{i}\",\n system_prompt=FINANCIAL_AGENT_SYS_PROMPT,\n llm=model,\n max_loops=1,\n dynamic_temperature_enabled=True,\n saved_state_path=\"finance_agent.json\",\n user_name=\"swarms_corp\",\n retry_attempts=1,\n )\n for i in range(10)\n]\n\n# Create a Swarm with the list of agents\nswarm = SpreadSheetSwarm(\n name=\"Finance-Spreadsheet-Swarm\",\n description=\"A swarm that processes tasks from a queue using multiple agents on different threads.\",\n agents=agents,\n autosave_on=True,\n save_file_path=\"financial_spreed_sheet_swarm_demo.csv\",\n run_all_agents=False,\n max_loops=1,\n)\n\n# Run the swarm\nswarm.run(\n task=\"Analyze the states with the least taxes for LLCs. Provide an overview of all tax rates and add them with a comprehensive analysis\"\n)\n
"},{"location":"swarms/structs/spreadsheet_swarm/#example-2-qr-code-generator","title":"Example 2: QR Code Generator","text":"import os\nfrom swarms import Agent\nfrom swarm_models import OpenAIChat\nfrom swarms.structs.spreadsheet_swarm import SpreadSheetSwarm\n\n# Define custom system prompts for QR code generation\nQR_CODE_AGENT_1_SYS_PROMPT = \"\"\"\nYou are a Python coding expert. Your task is to write a Python script to generate a QR code for the link: https://lu.ma/jjc1b2bo. The code should save the QR code as an image file.\n\"\"\"\n\nQR_CODE_AGENT_2_SYS_PROMPT = \"\"\"\nYou are a Python coding expert. Your task is to write a Python script to generate a QR code for the link: https://github.com/The-Swarm-Corporation/Cookbook. The code should save the QR code as an image file.\n\"\"\"\n\n# Example usage:\napi_key = os.getenv(\"OPENAI_API_KEY\")\n\n# Model\nmodel = OpenAIChat(\n openai_api_key=api_key, model_name=\"gpt-4o-mini\", temperature=0.1\n)\n\n# Initialize your agents for QR code generation\nagents = [\n Agent(\n agent_name=\"QR-Code-Generator-Agent-Luma\",\n system_prompt=QR_CODE_AGENT_1_SYS_PROMPT,\n llm=model,\n max_loops=1,\n dynamic_temperature_enabled=True,\n saved_state_path=\"qr_code_agent_luma.json\",\n user_name=\"swarms_corp\",\n retry_attempts=1,\n ),\n Agent(\n agent_name=\"QR-Code-Generator-Agent-Cookbook\",\n system_prompt=QR_CODE_AGENT_2_SYS_PROMPT,\n llm=model,\n max_loops=1,\n dynamic_temperature_enabled=True,\n saved_state_path=\"qr_code_agent_cookbook.json\",\n user_name=\"swarms_corp\",\n retry_attempts=1,\n ),\n]\n\n# Create a Swarm with the list of agents\nswarm = SpreadSheetSwarm(\n name=\"QR-Code-Generation-Swarm\",\n description=\"A swarm that generates Python scripts to create QR codes for specific links.\",\n agents=agents,\n autosave_on=True,\n save_file_path=\"qr_code_generation_results.csv\",\n run_all_agents=False,\n max_loops=1,\n)\n\n# Run the swarm\nswarm.run(\n task=\"Generate Python scripts to create QR codes for 
the provided links and save them as image files.\"\n)\n
"},{"location":"swarms/structs/spreadsheet_swarm/#example-3-social-media-marketing","title":"Example 3: Social Media Marketing","text":"import os\nfrom swarms import Agent\nfrom swarm_models import OpenAIChat\nfrom swarms.structs.spreadsheet_swarm import SpreadSheetSwarm\n\n# Define custom system prompts for each social media platform\nTWITTER_AGENT_SYS_PROMPT = \"\"\"\nYou are a Twitter marketing expert. Your task is to create engaging, concise tweets and analyze trends to maximize engagement. Consider hashtags, timing, and content relevance.\n\"\"\"\n\nINSTAGRAM_AGENT_SYS_PROMPT = \"\"\"\nYou are an Instagram marketing expert. Your task is to create visually appealing and engaging content, including captions and hashtags, tailored to a specific audience.\n\"\"\"\n\nFACEBOOK_AGENT_SYS_PROMPT = \"\"\"\nYou are a Facebook marketing expert. Your task is to craft posts that are optimized for engagement and reach on Facebook, including using images, links, and targeted messaging.\n\"\"\"\n\nEMAIL_AGENT_SYS_PROMPT = \"\"\"\nYou are an Email marketing expert. 
Your task is to write compelling email campaigns that drive conversions, focusing on subject lines, personalization, and call-to-action strategies.\n\"\"\"\n\n# Example usage:\napi_key = os.getenv(\"OPENAI_API_KEY\")\n\n# Model\nmodel = OpenAIChat(\n openai_api_key=api_key, model_name=\"gpt-4o-mini\", temperature=0.1\n)\n\n# Initialize your agents for different social media platforms\nagents = [\n Agent(\n agent_name=\"Twitter-Marketing-Agent\",\n system_prompt=TWITTER_AGENT_SYS_PROMPT,\n llm=model,\n max_loops=1,\n dynamic_temperature_enabled=True,\n saved_state_path=\"twitter_agent.json\",\n user_name=\"swarms_corp\",\n retry_attempts=1,\n ),\n Agent(\n agent_name=\"Instagram-Marketing-Agent\",\n system_prompt=INSTAGRAM_AGENT_SYS_PROMPT,\n llm=model,\n max_loops=1,\n dynamic_temperature_enabled=True,\n saved_state_path=\"instagram_agent.json\",\n user_name=\"swarms_corp\",\n retry_attempts=1,\n ),\n Agent(\n agent_name=\"Facebook-Marketing-Agent\",\n system_prompt=FACEBOOK_AGENT_SYS_PROMPT,\n llm=model,\n max_loops=1,\n dynamic_temperature_enabled=True,\n saved_state_path=\"facebook_agent.json\",\n user_name=\"swarms_corp\",\n retry_attempts=1,\n ),\n Agent(\n agent_name=\"Email-Marketing-Agent\",\n system_prompt=EMAIL_AGENT_SYS_PROMPT,\n llm=model,\n max_loops=1,\n dynamic_temperature_enabled=True,\n saved_state_path=\"email_agent.json\",\n user_name=\"swarms_corp\",\n retry_attempts=1,\n ),\n]\n\n# Create a Swarm with the list of agents\nswarm = SpreadSheetSwarm(\n name=\"Social-Media-Marketing-Swarm\",\n description=\"A swarm that processes social media marketing tasks using multiple agents on different threads.\",\n agents=agents,\n autosave_on=True,\n save_file_path=\"social_media_marketing_spreadsheet.csv\",\n run_all_agents=False,\n max_loops=2,\n)\n\n# Run the swarm\nswarm.run(\n task=\"Create posts to promote hack nights in miami beach for developers, engineers, and tech enthusiasts. Include relevant hashtags, images, and engaging captions.\"\n)\n
"},{"location":"swarms/structs/spreadsheet_swarm/#additional-information-and-tips","title":"Additional Information and Tips","text":"Thread Synchronization: When working with multiple agents in a concurrent environment, it's crucial to ensure that access to shared resources is properly synchronized using locks to avoid race conditions.
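The queue-plus-lock pattern behind this tip can be sketched in a few lines. This is a minimal illustration of the general pattern, not the actual SpreadSheetSwarm internals; the worker function and result list are hypothetical stand-ins.

```python
import queue
import threading

# A shared task queue feeds worker threads; a lock guards the shared
# results list so concurrent appends cannot race.
task_queue = queue.Queue()
results = []
lock = threading.Lock()

def worker():
    while True:
        try:
            task = task_queue.get_nowait()
        except queue.Empty:
            return  # queue drained, worker exits
        result = f"processed: {task}"  # stand-in for agent.run(task)
        with lock:  # synchronized access to the shared results list
            results.append(result)
        task_queue.task_done()

for t in ["task-1", "task-2", "task-3"]:
    task_queue.put(t)

threads = [threading.Thread(target=worker) for _ in range(2)]
for th in threads:
    th.start()
for th in threads:
    th.join()

print(sorted(results))
```

Without the `with lock:` block, two threads could interleave their updates to `results`; `queue.Queue` itself is thread-safe, so only the plain list needs explicit protection.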
Autosave Feature: If you enable the autosave_on
flag, ensure that the file path provided is correct and writable. This feature is handy for long-running tasks where you want to periodically save the state.
Error Handling: Wrap calls to the run method in try/except blocks and log errors appropriately, so a single failing agent does not halt the whole swarm. Custom Agents: Extend the Agent class to create custom agents that perform specific tasks tailored to your application's needs. See also Python's queue module and threading module."},{"location":"swarms/structs/swarm_matcher/","title":"SwarmMatcher","text":"SwarmMatcher is a tool for automatically matching tasks to the most appropriate swarm type based on their semantic similarity.
"},{"location":"swarms/structs/swarm_matcher/#overview","title":"Overview","text":"The SwarmMatcher utilizes transformer-based embeddings to determine the best swarm architecture for a given task. By analyzing the semantic meaning of task descriptions and comparing them to known swarm types, it can intelligently select the optimal swarm configuration for any task.
"},{"location":"swarms/structs/swarm_matcher/#workflow","title":"Workflow","text":"flowchart TD\n A[Task Description] --> B[Generate Task Embedding]\n C[Swarm Type Descriptions] --> D[Generate Swarm Type Embeddings]\n B --> E[Calculate Similarity Scores]\n D --> E\n E --> F[Select Best Matching Swarm Type]\n F --> G[Return Selected Swarm Type]\n\n subgraph Initialization\n H[Define Swarm Types] --> I[Load Transformer Model]\n I --> J[Pre-compute Swarm Type Embeddings]\n end\n\n subgraph Matching Process\n A --> B --> E --> F --> G\n end
"},{"location":"swarms/structs/swarm_matcher/#installation","title":"Installation","text":"SwarmMatcher is included in the Swarms package. To use it, simply import it from the library:
from swarms.structs.swarm_matcher import SwarmMatcher, SwarmMatcherConfig, SwarmType\n
"},{"location":"swarms/structs/swarm_matcher/#basic-usage","title":"Basic Usage","text":"from swarms.structs.swarm_matcher import swarm_matcher\n\n# Use the simplified function to match a task to a swarm type\nswarm_type = swarm_matcher(\"Analyze this dataset and create visualizations\")\nprint(f\"Selected swarm type: {swarm_type}\")\n
"},{"location":"swarms/structs/swarm_matcher/#advanced-usage","title":"Advanced Usage","text":"For more control over the matching process, you can create and configure your own SwarmMatcher instance:
from swarms.structs.swarm_matcher import SwarmMatcher, SwarmMatcherConfig, SwarmType, initialize_swarm_types\n\n# Create a configuration\nconfig = SwarmMatcherConfig(\n model_name=\"sentence-transformers/all-MiniLM-L6-v2\",\n embedding_dim=512\n)\n\n# Initialize the matcher\nmatcher = SwarmMatcher(config)\n\n# Add default swarm types\ninitialize_swarm_types(matcher)\n\n# Add a custom swarm type\ncustom_swarm = SwarmType(\n name=\"CustomSwarm\",\n description=\"A specialized swarm for handling specific domain tasks with expert knowledge.\"\n)\nmatcher.add_swarm_type(custom_swarm)\n\n# Find the best match for a task\nbest_match, score = matcher.find_best_match(\"Process natural language and extract key insights\")\nprint(f\"Best match: {best_match}, Score: {score}\")\n\n# Auto-select a swarm type\nselected_swarm = matcher.auto_select_swarm(\"Create data visualizations from this CSV file\")\nprint(f\"Selected swarm: {selected_swarm}\")\n
"},{"location":"swarms/structs/swarm_matcher/#available-swarm-types","title":"Available Swarm Types","text":"SwarmMatcher comes with several pre-defined swarm types:
Swarm Type Description AgentRearrange Optimize agent order and rearrange flow for multi-step tasks, ensuring efficient task allocation and minimizing bottlenecks. MixtureOfAgents Combine diverse expert agents for comprehensive analysis, fostering a collaborative approach to problem-solving and leveraging individual strengths. SpreadSheetSwarm Collaborative data processing and analysis in a spreadsheet-like environment, facilitating real-time data sharing and visualization. SequentialWorkflow Execute tasks in a step-by-step, sequential process workflow, ensuring a logical and methodical approach to task execution. ConcurrentWorkflow Process multiple tasks or data sources concurrently in parallel, maximizing productivity and reducing processing time."},{"location":"swarms/structs/swarm_matcher/#api-reference","title":"API Reference","text":""},{"location":"swarms/structs/swarm_matcher/#swarmtype","title":"SwarmType","text":"A class representing a type of swarm with its name and description.
Parameter Type Description name str The name of the swarm type description str A detailed description of the swarm type's capabilities and ideal use cases embedding Optional[List[float]] The generated embedding vector for this swarm type (auto-populated)"},{"location":"swarms/structs/swarm_matcher/#swarmmatcherconfig","title":"SwarmMatcherConfig","text":"Configuration settings for the SwarmMatcher.
Parameter Type Default Description model_name str \"sentence-transformers/all-MiniLM-L6-v2\" The transformer model to use for embeddings embedding_dim int 512 The dimension of the embedding vectors"},{"location":"swarms/structs/swarm_matcher/#swarmmatcher_1","title":"SwarmMatcher","text":"The main class for matching tasks to swarm types.
"},{"location":"swarms/structs/swarm_matcher/#methods","title":"Methods","text":""},{"location":"swarms/structs/swarm_matcher/#__init__config-swarmmatcherconfig","title":"__init__(config: SwarmMatcherConfig)
","text":"Initializes the SwarmMatcher with a configuration.
"},{"location":"swarms/structs/swarm_matcher/#get_embeddingtext-str-npndarray","title":"get_embedding(text: str) -> np.ndarray
","text":"Generates an embedding vector for a given text using the configured model.
Parameter Type Description text str The text to embed Returns np.ndarray The embedding vector"},{"location":"swarms/structs/swarm_matcher/#add_swarm_typeswarm_type-swarmtype","title":"add_swarm_type(swarm_type: SwarmType)
","text":"Adds a swarm type to the matcher, generating an embedding for its description.
Parameter Type Description swarm_type SwarmType The swarm type to add"},{"location":"swarms/structs/swarm_matcher/#find_best_matchtask-str-tuplestr-float","title":"find_best_match(task: str) -> Tuple[str, float]
","text":"Finds the best matching swarm type for a given task.
Parameter Type Description task str The task description Returns Tuple[str, float] The name of the best matching swarm type and the similarity score"},{"location":"swarms/structs/swarm_matcher/#auto_select_swarmtask-str-str","title":"auto_select_swarm(task: str) -> str
","text":"Automatically selects the best swarm type for a given task.
Parameter Type Description task str The task description Returns str The name of the selected swarm type"},{"location":"swarms/structs/swarm_matcher/#run_multipletasks-liststr-liststr","title":"run_multiple(tasks: List[str]) -> List[str]
","text":"Matches multiple tasks to swarm types in batch.
Parameter Type Description tasks List[str] A list of task descriptions Returns List[str] A list of selected swarm type names"},{"location":"swarms/structs/swarm_matcher/#save_swarm_typesfilename-str","title":"save_swarm_types(filename: str)
","text":"Saves the registered swarm types to a JSON file.
Parameter Type Description filename str Path where the swarm types will be saved"},{"location":"swarms/structs/swarm_matcher/#load_swarm_typesfilename-str","title":"load_swarm_types(filename: str)
","text":"Loads swarm types from a JSON file.
Parameter Type Description filename str Path to the JSON file containing swarm types"},{"location":"swarms/structs/swarm_matcher/#examples","title":"Examples","text":""},{"location":"swarms/structs/swarm_matcher/#simple-matching","title":"Simple Matching","text":"from swarms.structs.swarm_matcher import swarm_matcher\n\n# Match tasks to swarm types\ntasks = [\n \"Analyze this dataset and create visualizations\",\n \"Coordinate multiple agents to tackle different aspects of a problem\",\n \"Process these 10 PDF files in sequence\",\n \"Handle these data processing tasks in parallel\"\n]\n\nfor task in tasks:\n swarm_type = swarm_matcher(task)\n print(f\"Task: {task}\")\n print(f\"Selected swarm: {swarm_type}\\n\")\n
"},{"location":"swarms/structs/swarm_matcher/#custom-swarm-types","title":"Custom Swarm Types","text":"from swarms.structs.swarm_matcher import SwarmMatcher, SwarmMatcherConfig, SwarmType\n\n# Create configuration and matcher\nconfig = SwarmMatcherConfig()\nmatcher = SwarmMatcher(config)\n\n# Define custom swarm types\nswarm_types = [\n SwarmType(\n name=\"DataAnalysisSwarm\",\n description=\"Specialized in processing and analyzing large datasets, performing statistical analysis, and extracting insights from complex data.\"\n ),\n SwarmType(\n name=\"CreativeWritingSwarm\",\n description=\"Optimized for creative content generation, storytelling, and producing engaging written material with consistent style and tone.\"\n ),\n SwarmType(\n name=\"ResearchSwarm\",\n description=\"Focused on deep research tasks, synthesizing information from multiple sources, and producing comprehensive reports on complex topics.\"\n )\n]\n\n# Add swarm types\nfor swarm_type in swarm_types:\n matcher.add_swarm_type(swarm_type)\n\n# Save the swarm types for future use\nmatcher.save_swarm_types(\"custom_swarm_types.json\")\n\n# Use the matcher\ntask = \"Research quantum computing advances in the last 5 years\"\nbest_match = matcher.auto_select_swarm(task)\nprint(f\"Selected swarm type: {best_match}\")\n
"},{"location":"swarms/structs/swarm_matcher/#how-it-works","title":"How It Works","text":"SwarmMatcher uses a transformer-based model to generate embeddings (vector representations) of both the task descriptions and the swarm type descriptions. It then calculates the similarity between these embeddings to determine which swarm type is most semantically similar to the given task.
sequenceDiagram\n participant User\n participant SwarmMatcher\n participant TransformerModel\n\n User->>SwarmMatcher: task = \"Analyze this dataset\"\n Note over SwarmMatcher: Initialization already complete\n\n SwarmMatcher->>TransformerModel: get_embedding(task)\n TransformerModel-->>SwarmMatcher: task_embedding\n\n loop For each swarm type\n SwarmMatcher->>SwarmMatcher: Calculate similarity score\n Note over SwarmMatcher: score = dot_product(task_embedding, swarm_type.embedding)\n end\n\n SwarmMatcher->>SwarmMatcher: Find best score\n SwarmMatcher-->>User: \"SpreadSheetSwarm\"
The matching process follows these steps: generate an embedding for the task description, compare it against the pre-computed embeddings of every registered swarm type, score each pair by similarity, and return the swarm type with the highest score.
This approach ensures that the matcher can understand the semantic meaning of tasks, not just keyword matching, resulting in more accurate swarm type selection.
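The embed-score-select flow described above can be sketched without the transformer model. This is a toy illustration only: word-count vectors stand in for real sentence embeddings, and the abbreviated swarm-type descriptions are paraphrased for brevity, but the dot-product scoring and best-match selection mirror the logic in the diagram.

```python
import math

# Abbreviated stand-ins for the registered swarm type descriptions.
swarm_types = {
    "SpreadSheetSwarm": "collaborative data processing and analysis in a spreadsheet environment",
    "SequentialWorkflow": "execute tasks step by step in a sequential process workflow",
    "ConcurrentWorkflow": "process multiple tasks concurrently in parallel",
}

def embed(text: str, vocab: list) -> list:
    # Toy replacement for the model-based get_embedding(): a word-count vector.
    words = text.lower().split()
    return [words.count(w) for w in vocab]

def find_best_match(task: str) -> tuple:
    # Shared vocabulary so task and description vectors are comparable.
    vocab = sorted({w for d in swarm_types.values() for w in d.split()} | set(task.lower().split()))
    task_vec = embed(task, vocab)
    best_name, best_score = None, -math.inf
    for name, desc in swarm_types.items():
        vec = embed(desc, vocab)
        score = sum(a * b for a, b in zip(task_vec, vec))  # dot-product similarity
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score

name, score = find_best_match("process several data tasks in parallel")
print(name)
```

A transformer embedding captures meaning rather than exact word overlap, which is why the real matcher handles paraphrased tasks that share no keywords with any description.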
"},{"location":"swarms/structs/swarm_network/","title":"SwarmNetwork [WIP]","text":"The SwarmNetwork
class is a powerful tool for managing a pool of agents, orchestrating task distribution, and scaling resources based on workload. It is designed to handle tasks efficiently by dynamically adjusting the number of agents according to the current demand. This class also provides an optional API for interacting with the agent pool, making it accessible for integration with other systems.
Initializes a new instance of the SwarmNetwork
class.
name
(str): The name of the swarm network.description
(str): A description of the swarm network.agents
(List[Agent]): A list of agents in the pool.idle_threshold
(float): The idle threshold for the agents.busy_threshold
(float): The busy threshold for the agents.api_enabled
(Optional[bool]): A flag to enable/disable the API.logging_enabled
(Optional[bool]): A flag to enable/disable logging.api_on
(Optional[bool]): A flag to enable/disable the FastAPI instance.host
(str): The host address for the FastAPI instance.port
(int): The port number for the FastAPI instance.swarm_callable
(Optional[callable]): A callable to be executed by the swarm network.*args
: Additional positional arguments.**kwargs
: Additional keyword arguments.add_task
","text":"def add_task(self, task)\n
"},{"location":"swarms/structs/swarm_network/#description_1","title":"Description","text":"Adds a task to the task queue.
"},{"location":"swarms/structs/swarm_network/#parameters_2","title":"Parameters","text":"task
(str): The task to be added to the queue.from swarms.structs.agent import Agent\nfrom swarms.structs.swarm_net import SwarmNetwork\n\nagent = Agent()\nswarm = SwarmNetwork(agents=[agent])\nswarm.add_task(\"task\")\n
"},{"location":"swarms/structs/swarm_network/#async_add_task","title":"async_add_task
","text":"async def async_add_task(self, task)\n
"},{"location":"swarms/structs/swarm_network/#description_2","title":"Description","text":"Adds a task to the task queue asynchronously.
"},{"location":"swarms/structs/swarm_network/#parameters_3","title":"Parameters","text":"task
(str): The task to be added to the queue.from swarms.structs.agent import Agent\nfrom swarms.structs.swarm_net import SwarmNetwork\n\nagent = Agent()\nswarm = SwarmNetwork(agents=[agent])\nawait swarm.async_add_task(\"task\")\n
"},{"location":"swarms/structs/swarm_network/#run_single_agent","title":"run_single_agent
","text":"def run_single_agent(self, agent_id, task: Optional[str], *args, **kwargs)\n
"},{"location":"swarms/structs/swarm_network/#description_3","title":"Description","text":"Runs a task on a specific agent by ID.
"},{"location":"swarms/structs/swarm_network/#parameters_4","title":"Parameters","text":"agent_id
(str): The ID of the agent.task
(str, optional): The task to be executed by the agent.*args
: Additional positional arguments.**kwargs
: Additional keyword arguments._type_
: The output of the agent running the task.from swarms.structs.agent import Agent\nfrom swarms.structs.swarm_net import SwarmNetwork\n\n# Initialize the agent\nagent = Agent(\n agent_name=\"Financial-Analysis-Agent\",\n llm=model,\n max_loops=\"auto\",\n autosave=True,\n dashboard=False,\n verbose=True,\n streaming_on=True,\n interactive=True,\n # interactive=True, # Set to False to disable interactive mode\n saved_state_path=\"finance_agent.json\",\n # tools=[Add your functions here# ],\n # stopping_token=\"Stop!\",\n # interactive=True,\n # docs_folder=\"docs\", # Enter your folder name\n # pdf_path=\"docs/finance_agent.pdf\",\n # sop=\"Calculate the profit for a company.\",\n # sop_list=[\"Calculate the profit for a company.\"],\n user_name=\"swarms_corp\",\n # # docs=\n # # docs_folder=\"docs\",\n retry_attempts=3,\n # context_length=1000,\n # tool_schema = dict\n context_length=200000,\n # agent_ops_on=True,\n # long_term_memory=ChromaDB(docs_folder=\"artifacts\"),\n)\n\nswarm = SwarmNetwork(agents=[agent])\nresult = swarm.run_single_agent(agent.id, \"task\")\n
"},{"location":"swarms/structs/swarm_network/#run_many_agents","title":"run_many_agents
","text":"def run_many_agents(self, task: Optional[str] = None, *args, **kwargs) -> List\n
"},{"location":"swarms/structs/swarm_network/#description_4","title":"Description","text":"Runs a task on all agents in the pool.
"},{"location":"swarms/structs/swarm_network/#parameters_5","title":"Parameters","text":"task
(str, optional): The task to be executed by the agents.*args
: Additional positional arguments.**kwargs
: Additional keyword arguments.List
: The output of all agents running the task.from swarms.structs.agent import Agent\nfrom swarms.structs.swarm_net import SwarmNetwork\n\n# Initialize the agent\nagent = Agent(\n agent_name=\"Financial-Analysis-Agent\",\n system_prompt=ESTATE_PLANNING_AGENT_SYS_PROMPT,\n llm=model,\n max_loops=\"auto\",\n autosave=True,\n dashboard=False,\n verbose=True,\n streaming_on=True,\n interactive=True,\n # interactive=True, # Set to False to disable interactive mode\n saved_state_path=\"finance_agent.json\",\n # tools=[Add your functions here# ],\n # stopping_token=\"Stop!\",\n # interactive=True,\n # docs_folder=\"docs\", # Enter your folder name\n # pdf_path=\"docs/finance_agent.pdf\",\n # sop=\"Calculate the profit for a company.\",\n # sop_list=[\"Calculate the profit for a company.\"],\n user_name=\"swarms_corp\",\n # # docs=\n # # docs_folder=\"docs\",\n retry_attempts=3,\n # context_length=1000,\n # tool_schema = dict\n context_length=200000,\n # agent_ops_on=True,\n # long_term_memory=ChromaDB(docs_folder=\"artifacts\"),\n)\n\n# Initialize the agent\nagent2 = Agent(\n agent_name=\"ROTH-IRA-AGENT\",\n system_prompt=ESTATE_PLANNING_AGENT_SYS_PROMPT,\n llm=model,\n max_loops=\"auto\",\n autosave=True,\n dashboard=False,\n verbose=True,\n streaming_on=True,\n interactive=True,\n # interactive=True, # Set to False to disable interactive mode\n saved_state_path=\"finance_agent.json\",\n # tools=[Add your functions here# ],\n # stopping_token=\"Stop!\",\n # interactive=True,\n # docs_folder=\"docs\", # Enter your folder name\n # pdf_path=\"docs/finance_agent.pdf\",\n # sop=\"Calculate the profit for a company.\",\n # sop_list=[\"Calculate the profit for a company.\"],\n user_name=\"swarms_corp\",\n # # docs=\n # # docs_folder=\"docs\",\n retry_attempts=3,\n # context_length=1000,\n # tool_schema = dict\n context_length=200000,\n # agent_ops_on=True,\n # long_term_memory=ChromaDB(docs_folder=\"artifacts\"),\n)\n\n\nswarm = SwarmNetwork(agents=[agent1, agent2])\nresults = 
swarm.run_many_agents(\"task\")\n
"},{"location":"swarms/structs/swarm_network/#list_agents","title":"list_agents
","text":"def list_agents(self)\n
"},{"location":"swarms/structs/swarm_network/#description_5","title":"Description","text":"Lists all agents in the pool.
"},{"location":"swarms/structs/swarm_network/#example_4","title":"Example","text":"from swarms.structs.agent import Agent\nfrom swarms.structs.swarm_net import SwarmNetwork\n\n# Initialize the agent\nagent2 = Agent(\n agent_name=\"ROTH-IRA-AGENT\",\n system_prompt=ESTATE_PLANNING_AGENT_SYS_PROMPT,\n llm=model,\n max_loops=\"auto\",\n autosave=True,\n dashboard=False,\n verbose=True,\n streaming_on=True,\n interactive=True,\n # interactive=True, # Set to False to disable interactive mode\n saved_state_path=\"finance_agent.json\",\n # tools=[Add your functions here# ],\n # stopping_token=\"Stop!\",\n # interactive=True,\n # docs_folder=\"docs\", # Enter your folder name\n # pdf_path=\"docs/finance_agent.pdf\",\n # sop=\"Calculate the profit for a company.\",\n # sop_list=[\"Calculate the profit for a company.\"],\n user_name=\"swarms_corp\",\n # # docs=\n # # docs_folder=\"docs\",\n retry_attempts=3,\n # context_length=1000,\n # tool_schema = dict\n context_length=200000,\n # agent_ops_on=True,\n # long_term_memory=ChromaDB(docs_folder=\"artifacts\"),\n)\n\nswarm = SwarmNetwork(agents=[agent])\nswarm.list_agents()\n
"},{"location":"swarms/structs/swarm_network/#get_agent","title":"get_agent
","text":"def get_agent(self, agent_id)\n
"},{"location":"swarms/structs/swarm_network/#description_6","title":"Description","text":"Gets an agent by ID.
"},{"location":"swarms/structs/swarm_network/#parameters_6","title":"Parameters","text":"agent_id
(str): The ID of the agent to retrieve.Agent
: The agent with the specified ID.from swarms.structs.agent import Agent\nfrom swarms.structs.swarm_net import SwarmNetwork\n\n# Initialize the agent\nagent2 = Agent(\n agent_name=\"ROTH-IRA-AGENT\",\n system_prompt=ESTATE_PLANNING_AGENT_SYS_PROMPT,\n llm=model,\n max_loops=\"auto\",\n autosave=True,\n dashboard=False,\n verbose=True,\n streaming_on=True,\n interactive=True,\n # interactive=True, # Set to False to disable interactive mode\n saved_state_path=\"finance_agent.json\",\n # tools=[Add your functions here# ],\n # stopping_token=\"Stop!\",\n # interactive=True,\n # docs_folder=\"docs\", # Enter your folder name\n # pdf_path=\"docs/finance_agent.pdf\",\n # sop=\"Calculate the profit for a company.\",\n # sop_list=[\"Calculate the profit for a company.\"],\n user_name=\"swarms_corp\",\n # # docs=\n # # docs_folder=\"docs\",\n retry_attempts=3,\n # context_length=1000,\n # tool_schema = dict\n context_length=200000,\n # agent_ops_on=True,\n # long_term_memory=ChromaDB(docs_folder=\"artifacts\"),\n)\n\nswarm = SwarmNetwork(agents=[agent])\nretrieved_agent = swarm.get_agent(agent.id)\n
"},{"location":"swarms/structs/swarm_network/#add_agent","title":"add_agent
","text":"def add_agent(self, agent: Agent)\n
"},{"location":"swarms/structs/swarm_network/#description_7","title":"Description","text":"Adds an agent to the agent pool.
"},{"location":"swarms/structs/swarm_network/#parameters_7","title":"Parameters","text":"agent
(type): The agent to be added to the pool.from swarms.structs.agent import Agent\nfrom swarms.structs.swarm_net import SwarmNetwork\n\n# Initialize the agent\nagent2 = Agent(\n agent_name=\"ROTH-IRA-AGENT\",\n system_prompt=ESTATE_PLANNING_AGENT_SYS_PROMPT,\n llm=model,\n max_loops=\"auto\",\n autosave=True,\n dashboard=False,\n verbose=True,\n streaming_on=True,\n interactive=True,\n # interactive=True, # Set to False to disable interactive mode\n saved_state_path=\"finance_agent.json\",\n # tools=[Add your functions here# ],\n # stopping_token=\"Stop!\",\n # interactive=True,\n # docs_folder=\"docs\", # Enter your folder name\n # pdf_path=\"docs/finance_agent.pdf\",\n # sop=\"Calculate the profit for a company.\",\n # sop_list=[\"Calculate the profit for a company.\"],\n user_name=\"swarms_corp\",\n # # docs=\n # # docs_folder=\"docs\",\n retry_attempts=3,\n # context_length=1000,\n # tool_schema = dict\n context_length=200000,\n # agent_ops_on=True,\n # long_term_memory=ChromaDB(docs_folder=\"artifacts\"),\n)\n\nswarm = SwarmNetwork(agents=[])\nswarm.add_agent(agent)\n
"},{"location":"swarms/structs/swarm_network/#remove_agent","title":"remove_agent
","text":"def remove_agent(self, agent_id)\n
"},{"location":"swarms/structs/swarm_network/#description_8","title":"Description","text":"Removes an agent from the agent pool.
"},{"location":"swarms/structs/swarm_network/#parameters_8","title":"Parameters","text":"agent_id
(str): The ID of the agent to be removed.from swarms.structs.agent import Agent\nfrom swarms.structs.swarm_net import SwarmNetwork\n\n# Initialize the agent\nagent = Agent(\n agent_name=\"ROTH-IRA-AGENT\",\n system_prompt=ESTATE_PLANNING_AGENT_SYS_PROMPT,\n llm=model,\n max_loops=\"auto\",\n autosave=True,\n dashboard=False,\n verbose=True,\n streaming_on=True,\n interactive=True, # Set to False to disable interactive mode\n saved_state_path=\"finance_agent.json\",\n user_name=\"swarms_corp\",\n retry_attempts=3,\n context_length=200000,\n)\n\nswarm = SwarmNetwork(agents=[agent])\nswarm.remove_agent(agent.id)\n
"},{"location":"swarms/structs/swarm_network/#_1","title":"async_remove_agent","text":"
async def async_remove_agent(self, agent_id)\n
"},{"location":"swarms/structs/swarm_network/#description_9","title":"Description","text":"Removes an agent from the agent pool asynchronously.
"},{"location":"swarms/structs/swarm_network/#parameters_9","title":"Parameters","text":"agent_id
(str): The ID of the agent to be removed.import asyncio\n\nfrom swarms.structs.agent import Agent\nfrom swarms.structs.swarm_net import SwarmNetwork\n\n# Initialize the agent\nagent = Agent(\n agent_name=\"ROTH-IRA-AGENT\",\n system_prompt=ESTATE_PLANNING_AGENT_SYS_PROMPT,\n llm=model,\n max_loops=\"auto\",\n autosave=True,\n dashboard=False,\n verbose=True,\n streaming_on=True,\n interactive=True, # Set to False to disable interactive mode\n saved_state_path=\"finance_agent.json\",\n user_name=\"swarms_corp\",\n retry_attempts=3,\n context_length=200000,\n)\n\nswarm = SwarmNetwork(agents=[agent])\nasyncio.run(swarm.async_remove_agent(agent.id))\n
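Because async_remove_agent is a coroutine, several removals can be awaited concurrently with asyncio.gather rather than one at a time. Below is a self-contained sketch of that pattern; StubSwarm and the agent IDs are stand-ins for illustration, not part of the Swarms API:

```python
import asyncio


class StubSwarm:
    """Stand-in exposing the same async_remove_agent signature as SwarmNetwork."""

    def __init__(self, agent_ids):
        self.agents_pool = set(agent_ids)

    async def async_remove_agent(self, agent_id):
        await asyncio.sleep(0)  # simulate a non-blocking removal
        self.agents_pool.discard(agent_id)


async def remove_many(swarm, agent_ids):
    # Schedule every removal at once instead of awaiting them sequentially
    await asyncio.gather(*(swarm.async_remove_agent(i) for i in agent_ids))


swarm = StubSwarm(["a1", "a2", "a3"])
asyncio.run(remove_many(swarm, ["a1", "a3"]))
print(sorted(swarm.agents_pool))  # ['a2']
```

The same gather pattern applies to a real SwarmNetwork instance, substituting actual agent IDs.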
"},{"location":"swarms/structs/swarm_network/#scale_up","title":"scale_up
","text":"def scale_up(self, num_agents: int = 1)\n
"},{"location":"swarms/structs/swarm_network/#description_10","title":"Description","text":"Scales up the agent pool by adding new agents.
"},{"location":"swarms/structs/swarm_network/#parameters_10","title":"Parameters","text":"num_agents
(int, optional): The number of agents to add. Defaults to 1.from swarms.structs.agent import Agent\nfrom swarms.structs.swarm_net import SwarmNetwork\n\n# Initialize the agent\nagent = Agent(\n agent_name=\"ROTH-IRA-AGENT\",\n system_prompt=ESTATE_PLANNING_AGENT_SYS_PROMPT,\n llm=model,\n max_loops=\"auto\",\n autosave=True,\n dashboard=False,\n verbose=True,\n streaming_on=True,\n interactive=True, # Set to False to disable interactive mode\n saved_state_path=\"finance_agent.json\",\n user_name=\"swarms_corp\",\n retry_attempts=3,\n context_length=200000,\n)\n\nswarm = SwarmNetwork(agents=[agent])\nswarm.scale_up(2)\n
"},{"location":"swarms/structs/swarm_network/#scale_down","title":"scale_down
","text":"def scale_down(self, num_agents: int = 1)\n
"},{"location":"swarms/structs/swarm_network/#description_11","title":"Description","text":"Scales down the agent pool by removing agents.
"},{"location":"swarms/structs/swarm_network/#parameters_11","title":"Parameters","text":"num_agents
(int, optional): The number of agents to remove. Defaults to 1.from swarms.structs.agent import Agent\nfrom swarms.structs.swarm_net import SwarmNetwork\n\n# Initialize the agent\nagent = Agent(\n agent_name=\"ROTH-IRA-AGENT\",\n system_prompt=ESTATE_PLANNING_AGENT_SYS_PROMPT,\n llm=model,\n max_loops=\"auto\",\n autosave=True,\n dashboard=False,\n verbose=True,\n streaming_on=True,\n interactive=True, # Set to False to disable interactive mode\n saved_state_path=\"finance_agent.json\",\n user_name=\"swarms_corp\",\n retry_attempts=3,\n context_length=200000,\n)\n\nswarm = SwarmNetwork(agents=[agent])\nswarm.scale_down(1)\n
"},{"location":"swarms/structs/swarm_network/#run","title":"run
","text":""},{"location":"swarms/structs/swarm_network/#description_12","title":"Description","text":"Runs the swarm network, starting the FastAPI application.
"},{"location":"swarms/structs/swarm_network/#example_11","title":"Example","text":"import os\n\nfrom dotenv import load_dotenv\n\n# Import the OpenAIChat model and the Agent struct\nfrom swarms import Agent, OpenAIChat, SwarmNetwork\n\n# Load the environment variables\nload_dotenv()\n\n# Get the API key from the environment\napi_key = os.environ.get(\"OPENAI_API_KEY\")\n\n# Initialize the language model\nllm = OpenAIChat(\n temperature=0.5,\n openai_api_key=api_key,\n)\n\n# Initialize the agents\nagent = Agent(llm=llm, max_loops=1, agent_name=\"Social Media Manager\")\nagent2 = Agent(llm=llm, max_loops=1, agent_name=\"Product Manager\")\nagent3 = Agent(llm=llm, max_loops=1, agent_name=\"SEO Manager\")\n\n\n# Load the swarmnet with the agents\nswarmnet = SwarmNetwork(\n agents=[agent, agent2, agent3],\n)\n\n# List the agents in the swarm network\nout = swarmnet.list_agents()\nprint(out)\n\n# Run the workflow on a task\nout = swarmnet.run_single_agent(\n agent2.id, \"Generate a 10,000 word blog on health and wellness.\"\n)\nprint(out)\n\n\n# Run all the agents in the swarm network on a task\nout = swarmnet.run_many_agents(\"Generate a 10,000 word blog on health and wellness.\")\nprint(out)\n
"},{"location":"swarms/structs/swarm_network/#additional-information-and-tips","title":"Additional Information and Tips","text":"Tune the scaling thresholds (idle_threshold and busy_threshold) based on your specific needs and workload patterns. By following this documentation, users can effectively manage and utilize the SwarmNetwork class to handle dynamic workloads and maintain an efficient pool of agents.
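One way to act on those thresholds is a small scaling policy: compute the fraction of busy agents and call scale_up or scale_down when it crosses a bound. A library-independent sketch of that decision logic (the decide_scaling helper and the default threshold values are illustrative, not part of SwarmNetwork):

```python
def decide_scaling(busy_agents, total_agents, idle_threshold=0.2, busy_threshold=0.8):
    """Return +1 to scale up, -1 to scale down, 0 to hold steady."""
    if total_agents == 0:
        return 1  # nothing to run tasks on yet
    busy_ratio = busy_agents / total_agents
    if busy_ratio > busy_threshold:
        return 1   # pool is saturated -> swarm.scale_up(1)
    if busy_ratio < idle_threshold and total_agents > 1:
        return -1  # pool is mostly idle -> swarm.scale_down(1)
    return 0


print(decide_scaling(9, 10))  # mostly busy
print(decide_scaling(1, 10))  # mostly idle
```

In practice this check would run periodically against live agent status, with the return value mapped onto scale_up and scale_down calls.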
"},{"location":"swarms/structs/swarm_rearrange/","title":"SwarmRearrange","text":"SwarmRearrange is a class for orchestrating multiple swarms in a sequential or parallel flow pattern. It provides thread-safe operations for managing swarm execution, history tracking, and flow validation.
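Thread safety matters here because parallel branches of a flow can finish at the same time. A minimal, self-contained sketch of lock-guarded history tracking of the kind described above (the HistoryLog class is illustrative, not the library's actual implementation):

```python
import threading


class HistoryLog:
    """Append-only run history guarded by a lock."""

    def __init__(self):
        self._lock = threading.Lock()
        self._entries = []

    def record(self, swarm_name, result):
        with self._lock:  # one writer at a time, even across threads
            self._entries.append((swarm_name, result))

    def entries(self):
        with self._lock:  # snapshot so callers never see a half-written list
            return list(self._entries)


log = HistoryLog()
threads = [
    threading.Thread(target=log.record, args=(f"Swarm{i}", "ok")) for i in range(8)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(log.entries()))  # 8 -- every concurrent write is recorded
```

Without the lock, concurrent appends from parallel swarm branches could interleave and lose entries.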
"},{"location":"swarms/structs/swarm_rearrange/#constructor-arguments","title":"Constructor Arguments","text":"Parameter Type Default Description id str UUID Unique identifier for the swarm arrangement name str \"SwarmRearrange\" Name of the swarm arrangement description str \"A swarm of swarms...\" Description of the arrangement swarms List[Any] [] List of swarm objects to be managed flow str None Flow pattern for swarm execution max_loops int 1 Maximum number of execution loops verbose bool True Enable detailed logging human_in_the_loop bool False Enable human intervention custom_human_in_the_loop Callable None Custom function for human interaction return_json bool False Return results in JSON format"},{"location":"swarms/structs/swarm_rearrange/#methods","title":"Methods","text":""},{"location":"swarms/structs/swarm_rearrange/#add_swarmswarm-any","title":"add_swarm(swarm: Any)","text":"Adds a single swarm to the arrangement.
"},{"location":"swarms/structs/swarm_rearrange/#remove_swarmswarm_name-str","title":"remove_swarm(swarm_name: str)","text":"Removes a swarm by name from the arrangement.
"},{"location":"swarms/structs/swarm_rearrange/#add_swarmsswarms-listany","title":"add_swarms(swarms: List[Any])","text":"Adds multiple swarms to the arrangement.
"},{"location":"swarms/structs/swarm_rearrange/#validate_flow","title":"validate_flow()","text":"Validates the flow pattern syntax and swarm names.
"},{"location":"swarms/structs/swarm_rearrange/#runtask-str-none-img-str-none-custom_tasks-dictstr-str-none","title":"run(task: str = None, img: str = None, custom_tasks: Dict[str, str] = None)","text":"Executes the swarm arrangement according to the flow pattern.
"},{"location":"swarms/structs/swarm_rearrange/#flow-pattern-syntax","title":"Flow Pattern Syntax","text":"The flow pattern uses arrow notation (->
) to define execution order. Sequential execution: \"SwarmA -> SwarmB -> SwarmC\". Parallel execution (comma-separated): \"SwarmA, SwarmB -> SwarmC\". Human-in-the-loop: include \"H\"
in the flowfrom swarms.structs.swarm_arange import SwarmRearrange\nimport os\nfrom swarms import Agent, AgentRearrange\nfrom swarm_models import OpenAIChat\n\n# model = Anthropic(anthropic_api_key=os.getenv(\"ANTHROPIC_API_KEY\"))\ncompany = \"TGSC\"\n\n# Get the OpenAI API key from the environment variable\napi_key = os.getenv(\"GROQ_API_KEY\")\n\n# Model\nmodel = OpenAIChat(\n openai_api_base=\"https://api.groq.com/openai/v1\",\n openai_api_key=api_key,\n model_name=\"llama-3.1-70b-versatile\",\n temperature=0.1,\n)\n\n\n# Initialize the Managing Director agent\nmanaging_director = Agent(\n agent_name=\"Managing-Director\",\n system_prompt=f\"\"\"\n As the Managing Director at Blackstone, your role is to oversee the entire investment analysis process for potential acquisitions. \n Your responsibilities include:\n 1. Setting the overall strategy and direction for the analysis\n 2. Coordinating the efforts of the various team members and ensuring a comprehensive evaluation\n 3. Reviewing the findings and recommendations from each team member\n 4. Making the final decision on whether to proceed with the acquisition\n\n For the current potential acquisition of {company}, direct the tasks for the team to thoroughly analyze all aspects of the company, including its financials, industry position, technology, market potential, and regulatory compliance. Provide guidance and feedback as needed to ensure a rigorous and unbiased assessment.\n \"\"\",\n llm=model,\n max_loops=1,\n dashboard=False,\n streaming_on=True,\n verbose=True,\n stopping_token=\"<DONE>\",\n state_save_file_type=\"json\",\n saved_state_path=\"managing-director.json\",\n)\n\n# Initialize the Vice President of Finance\nvp_finance = Agent(\n agent_name=\"VP-Finance\",\n system_prompt=f\"\"\"\n As the Vice President of Finance at Blackstone, your role is to lead the financial analysis of potential acquisitions. \n For the current potential acquisition of {company}, your tasks include:\n 1. 
Conducting a thorough review of {company}' financial statements, including income statements, balance sheets, and cash flow statements\n 2. Analyzing key financial metrics such as revenue growth, profitability margins, liquidity ratios, and debt levels\n 3. Assessing the company's historical financial performance and projecting future performance based on assumptions and market conditions\n 4. Identifying any financial risks or red flags that could impact the acquisition decision\n 5. Providing a detailed report on your findings and recommendations to the Managing Director\n\n Be sure to consider factors such as the sustainability of {company}' business model, the strength of its customer base, and its ability to generate consistent cash flows. Your analysis should be data-driven, objective, and aligned with Blackstone's investment criteria.\n \"\"\",\n llm=model,\n max_loops=1,\n dashboard=False,\n streaming_on=True,\n verbose=True,\n stopping_token=\"<DONE>\",\n state_save_file_type=\"json\",\n saved_state_path=\"vp-finance.json\",\n)\n\n# Initialize the Industry Analyst\nindustry_analyst = Agent(\n agent_name=\"Industry-Analyst\",\n system_prompt=f\"\"\"\n As the Industry Analyst at Blackstone, your role is to provide in-depth research and analysis on the industries and markets relevant to potential acquisitions.\n For the current potential acquisition of {company}, your tasks include:\n 1. Conducting a comprehensive analysis of the industrial robotics and automation solutions industry, including market size, growth rates, key trends, and future prospects\n 2. Identifying the major players in the industry and assessing their market share, competitive strengths and weaknesses, and strategic positioning \n 3. Evaluating {company}' competitive position within the industry, including its market share, differentiation, and competitive advantages\n 4. 
Analyzing the key drivers and restraints for the industry, such as technological advancements, labor costs, regulatory changes, and economic conditions\n 5. Identifying potential risks and opportunities for {company} based on the industry analysis, such as disruptive technologies, emerging markets, or shifts in customer preferences \n\n Your analysis should provide a clear and objective assessment of the attractiveness and future potential of the industrial robotics industry, as well as {company}' positioning within it. Consider both short-term and long-term factors, and provide evidence-based insights to inform the investment decision.\n \"\"\",\n llm=model,\n max_loops=1,\n dashboard=False,\n streaming_on=True,\n verbose=True,\n stopping_token=\"<DONE>\",\n state_save_file_type=\"json\",\n saved_state_path=\"industry-analyst.json\",\n)\n\n# Initialize the Technology Expert\ntech_expert = Agent(\n agent_name=\"Tech-Expert\",\n system_prompt=f\"\"\"\n As the Technology Expert at Blackstone, your role is to assess the technological capabilities, competitive advantages, and potential risks of companies being considered for acquisition.\n For the current potential acquisition of {company}, your tasks include:\n 1. Conducting a deep dive into {company}' proprietary technologies, including its robotics platforms, automation software, and AI capabilities \n 2. Assessing the uniqueness, scalability, and defensibility of {company}' technology stack and intellectual property\n 3. Comparing {company}' technologies to those of its competitors and identifying any key differentiators or technology gaps\n 4. Evaluating {company}' research and development capabilities, including its innovation pipeline, engineering talent, and R&D investments\n 5. 
Identifying any potential technology risks or disruptive threats that could impact {company}' long-term competitiveness, such as emerging technologies or expiring patents\n\n Your analysis should provide a comprehensive assessment of {company}' technological strengths and weaknesses, as well as the sustainability of its competitive advantages. Consider both the current state of its technology and its future potential in light of industry trends and advancements.\n \"\"\",\n llm=model,\n max_loops=1,\n dashboard=False,\n streaming_on=True,\n verbose=True,\n stopping_token=\"<DONE>\",\n state_save_file_type=\"json\",\n saved_state_path=\"tech-expert.json\",\n)\n\n# Initialize the Market Researcher\nmarket_researcher = Agent(\n agent_name=\"Market-Researcher\",\n system_prompt=f\"\"\"\n As the Market Researcher at Blackstone, your role is to analyze the target company's customer base, market share, and growth potential to assess the commercial viability and attractiveness of the potential acquisition.\n For the current potential acquisition of {company}, your tasks include:\n 1. Analyzing {company}' current customer base, including customer segmentation, concentration risk, and retention rates\n 2. Assessing {company}' market share within its target markets and identifying key factors driving its market position\n 3. Conducting a detailed market sizing and segmentation analysis for the industrial robotics and automation markets, including identifying high-growth segments and emerging opportunities\n 4. Evaluating the demand drivers and sales cycles for {company}' products and services, and identifying any potential risks or limitations to adoption\n 5. 
Developing financial projections and estimates for {company}' revenue growth potential based on the market analysis and assumptions around market share and penetration\n\n Your analysis should provide a data-driven assessment of the market opportunity for {company} and the feasibility of achieving our investment return targets. Consider both bottom-up and top-down market perspectives, and identify any key sensitivities or assumptions in your projections.\n \"\"\",\n llm=model,\n max_loops=1,\n dashboard=False,\n streaming_on=True,\n verbose=True,\n stopping_token=\"<DONE>\",\n state_save_file_type=\"json\",\n saved_state_path=\"market-researcher.json\",\n)\n\n# Initialize the Regulatory Specialist\nregulatory_specialist = Agent(\n agent_name=\"Regulatory-Specialist\",\n system_prompt=f\"\"\"\n As the Regulatory Specialist at Blackstone, your role is to identify and assess any regulatory risks, compliance requirements, and potential legal liabilities associated with potential acquisitions.\n For the current potential acquisition of {company}, your tasks include: \n 1. Identifying all relevant regulatory bodies and laws that govern the operations of {company}, including industry-specific regulations, labor laws, and environmental regulations\n 2. Reviewing {company}' current compliance policies, procedures, and track record to identify any potential gaps or areas of non-compliance\n 3. Assessing the potential impact of any pending or proposed changes to relevant regulations that could affect {company}' business or create additional compliance burdens\n 4. Evaluating the potential legal liabilities and risks associated with {company}' products, services, and operations, including product liability, intellectual property, and customer contracts\n 5. 
Providing recommendations on any regulatory or legal due diligence steps that should be taken as part of the acquisition process, as well as any post-acquisition integration considerations\n\n Your analysis should provide a comprehensive assessment of the regulatory and legal landscape surrounding {company}, and identify any material risks or potential deal-breakers. Consider both the current state and future outlook, and provide practical recommendations to mitigate identified risks.\n \"\"\",\n llm=model,\n max_loops=1,\n dashboard=False,\n streaming_on=True,\n verbose=True,\n stopping_token=\"<DONE>\",\n state_save_file_type=\"json\",\n saved_state_path=\"regulatory-specialist.json\",\n)\n\n# Create a list of agents\nagents = [\n managing_director,\n vp_finance,\n industry_analyst,\n tech_expert,\n market_researcher,\n regulatory_specialist,\n]\n\n# Define multiple flow patterns\nflows = [\n \"Industry-Analyst -> Tech-Expert -> Market-Researcher -> Regulatory-Specialist -> Managing-Director -> VP-Finance\",\n \"Managing-Director -> VP-Finance -> Industry-Analyst -> Tech-Expert -> Market-Researcher -> Regulatory-Specialist\",\n \"Tech-Expert -> Market-Researcher -> Regulatory-Specialist -> Industry-Analyst -> Managing-Director -> VP-Finance\",\n]\n\n# Create instances of AgentRearrange for each flow pattern\nblackstone_acquisition_analysis = AgentRearrange(\n name=\"Blackstone-Acquisition-Analysis\",\n description=\"A system for analyzing potential acquisitions\",\n agents=agents,\n flow=flows[0],\n)\n\nblackstone_investment_strategy = AgentRearrange(\n name=\"Blackstone-Investment-Strategy\",\n description=\"A system for evaluating investment opportunities\",\n agents=agents,\n flow=flows[1],\n)\n\nblackstone_market_analysis = AgentRearrange(\n name=\"Blackstone-Market-Analysis\",\n description=\"A system for analyzing market trends and opportunities\",\n agents=agents,\n flow=flows[2],\n)\n\nswarm_arrange = SwarmRearrange(\n swarms=[\n 
blackstone_acquisition_analysis,\n blackstone_investment_strategy,\n blackstone_market_analysis,\n ],\n flow=f\"{blackstone_acquisition_analysis.name} -> {blackstone_investment_strategy.name} -> {blackstone_market_analysis.name}\",\n)\n\nprint(\n swarm_arrange.run(\n \"Analyze swarms, 150k revenue with 45m+ agents build, with 1.4m downloads since march 2024\"\n )\n)\n
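The validate_flow step used by arrangements like the one above can be pictured as parsing the arrow notation into ordered stages, where comma-separated names run in parallel and unknown names are rejected. A simplified, standalone sketch of that parsing (not the library's actual validator):

```python
def parse_flow(flow, known_names):
    """Split 'A -> B, C -> D' into stages; comma-separated names run in parallel."""
    stages = []
    for stage in flow.split("->"):
        names = [n.strip() for n in stage.split(",")]
        for name in names:
            # "H" is the reserved human-in-the-loop marker
            if name != "H" and name not in known_names:
                raise ValueError(f"Unknown swarm in flow: {name}")
        stages.append(names)
    return stages


swarms = {"Collector", "Processor", "Analyzer"}
print(parse_flow("Collector -> Processor, Analyzer", swarms))
# [['Collector'], ['Processor', 'Analyzer']]
```

Validating the flow up front, before any swarm runs, is what lets a bad name fail fast instead of partway through execution.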
"},{"location":"swarms/structs/swarm_rearrange/#human-in-the-loop","title":"Human-in-the-Loop","text":"def custom_human_input(task):\n return input(f\"Review {task} and provide feedback: \")\n\n# Create arrangement with human intervention\narrangement = SwarmRearrange(\n name=\"HumanAugmented\",\n swarms=[swarm1, swarm2],\n flow=\"SwarmA -> H -> SwarmB\",\n human_in_the_loop=True,\n custom_human_in_the_loop=custom_human_input\n)\n\n# Execute with human intervention\nresult = arrangement.run(\"Initial task\")\n
"},{"location":"swarms/structs/swarm_rearrange/#complex-multi-stage-pipeline","title":"Complex Multi-Stage Pipeline","text":"# Define multiple flow patterns\nflows = [\n \"Collector -> Processor -> Analyzer\",\n \"Analyzer -> ML -> Validator\",\n \"Validator -> Reporter\"\n]\n\n# Create arrangements for each flow\npipelines = [\n SwarmRearrange(name=f\"Pipeline{i}\", swarms=swarms, flow=flow)\n for i, flow in enumerate(flows)\n]\n\n# Create master arrangement\nmaster = SwarmRearrange(\n name=\"MasterPipeline\",\n swarms=pipelines,\n flow=\"Pipeline0 -> Pipeline1 -> Pipeline2\"\n)\n\n# Execute complete pipeline\nresult = master.run(\"Start analysis\")\n
"},{"location":"swarms/structs/swarm_rearrange/#best-practices","title":"Best Practices","text":"The class implements comprehensive error handling:
try:\n arrangement = SwarmRearrange(swarms=swarms, flow=flow)\n result = arrangement.run(task)\nexcept ValueError as e:\n logger.error(f\"Flow validation error: {e}\")\nexcept Exception as e:\n logger.error(f\"Execution error: {e}\")\n
"},{"location":"swarms/structs/swarm_router/","title":"SwarmRouter Documentation","text":"The SwarmRouter
class is a flexible routing system designed to manage different types of swarms for task execution. It provides a unified interface to interact with various swarm types, including:
AgentRearrange
Optimizes agent arrangement for task execution MixtureOfAgents
Combines multiple agent types for diverse tasks SpreadSheetSwarm
Uses spreadsheet-like operations for task management SequentialWorkflow
Executes tasks sequentially ConcurrentWorkflow
Executes tasks in parallel GroupChat
Facilitates communication among agents in a group chat format MultiAgentRouter
Routes tasks between multiple agents AutoSwarmBuilder
Automatically builds swarm structure HiearchicalSwarm
Hierarchical organization of agents MajorityVoting
Uses majority voting for decision making MALT
Multi-Agent Language Tasks DeepResearchSwarm
Specialized for deep research tasks CouncilAsAJudge
Council-based judgment system InteractiveGroupChat
Interactive group chat with user participation auto
Automatically selects best swarm type via embedding search"},{"location":"swarms/structs/swarm_router/#classes","title":"Classes","text":""},{"location":"swarms/structs/swarm_router/#document","title":"Document","text":"A Pydantic model for representing document data.
Attribute Type Descriptionfile_path
str Path to the document file. data
str Content of the document."},{"location":"swarms/structs/swarm_router/#swarmlog","title":"SwarmLog","text":"A Pydantic model for capturing log entries.
Attribute Type Descriptionid
str Unique identifier for the log entry. timestamp
datetime Time of log creation. level
str Log level (e.g., \"info\", \"error\"). message
str Log message content. swarm_type
SwarmType Type of swarm associated with the log. task
str Task being performed (optional). metadata
Dict[str, Any] Additional metadata (optional). documents
List[Document] List of documents associated with the log."},{"location":"swarms/structs/swarm_router/#swarmrouterconfig","title":"SwarmRouterConfig","text":"Configuration model for SwarmRouter.
Attribute Type Descriptionname
str Name identifier for the SwarmRouter instance description
str Description of the SwarmRouter's purpose swarm_type
SwarmType Type of swarm to use rearrange_flow
Optional[str] Flow configuration string rules
Optional[str] Rules to inject into every agent multi_agent_collab_prompt
bool Whether to enable multi-agent collaboration prompts task
str The task to be executed by the swarm"},{"location":"swarms/structs/swarm_router/#swarmrouter","title":"SwarmRouter","text":"Main class for routing tasks to different swarm types.
Attribute Type Descriptionname
str Name of the SwarmRouter instance description
str Description of the SwarmRouter's purpose max_loops
int Maximum number of loops to perform agents
List[Union[Agent, Callable]] List of Agent objects or callable functions swarm_type
SwarmType Type of swarm to be used autosave
bool Flag to enable/disable autosave rearrange_flow
str The flow for the AgentRearrange swarm type return_json
bool Flag to enable/disable returning the result in JSON format auto_generate_prompts
bool Flag to enable/disable auto generation of prompts shared_memory_system
Any Shared memory system for agents rules
str Rules to inject into every agent documents
List[str] List of document file paths output_type
OutputType Output format type (e.g., \"string\", \"dict\", \"list\", \"json\", \"yaml\", \"xml\") no_cluster_ops
bool Flag to disable cluster operations speaker_fn
callable Speaker function for GroupChat swarm type load_agents_from_csv
bool Flag to enable/disable loading agents from CSV csv_file_path
str Path to the CSV file for loading agents return_entire_history
bool Flag to enable/disable returning the entire conversation history multi_agent_collab_prompt
bool Whether to enable multi-agent collaboration prompts"},{"location":"swarms/structs/swarm_router/#methods","title":"Methods:","text":"Method Parameters Description __init__
name: str = \"swarm-router\", description: str = \"Routes your task to the desired swarm\", max_loops: int = 1, agents: List[Union[Agent, Callable]] = [], swarm_type: SwarmType = \"SequentialWorkflow\", autosave: bool = False, rearrange_flow: str = None, return_json: bool = False, auto_generate_prompts: bool = False, shared_memory_system: Any = None, rules: str = None, documents: List[str] = [], output_type: OutputType = \"dict\", no_cluster_ops: bool = False, speaker_fn: callable = None, load_agents_from_csv: bool = False, csv_file_path: str = None, return_entire_history: bool = True, multi_agent_collab_prompt: bool = True
Initialize the SwarmRouter setup
None Set up the SwarmRouter by activating APE and handling shared memory and rules activate_shared_memory
None Activate shared memory with all agents handle_rules
None Inject rules to every agent activate_ape
None Activate automatic prompt engineering for agents that support it reliability_check
None Perform reliability checks on the SwarmRouter configuration _create_swarm
task: str = None, *args, **kwargs
Create and return the specified swarm type update_system_prompt_for_agent_in_swarm
None Update system prompts for all agents with collaboration prompts _log
level: str, message: str, task: str = \"\", metadata: Dict[str, Any] = None
Create a log entry _run
task: str, img: Optional[str] = None, model_response: Optional[str] = None, *args, **kwargs
Run the specified task on the selected swarm type run
task: str, img: Optional[str] = None, model_response: Optional[str] = None, *args, **kwargs
Execute a task on the selected swarm type __call__
task: str, *args, **kwargs
Make the SwarmRouter instance callable batch_run
tasks: List[str], *args, **kwargs
Execute multiple tasks in sequence async_run
task: str, *args, **kwargs
Execute a task asynchronously get_logs
None Retrieve all logged entries concurrent_run
task: str, *args, **kwargs
Execute a task using concurrent execution concurrent_batch_run
tasks: List[str], *args, **kwargs
Execute multiple tasks concurrently"},{"location":"swarms/structs/swarm_router/#installation","title":"Installation","text":"To use the SwarmRouter, first install the required dependencies:
pip install swarms swarm_models\n
"},{"location":"swarms/structs/swarm_router/#basic-usage","title":"Basic Usage","text":"import os\nfrom dotenv import load_dotenv\nfrom swarms import Agent, SwarmRouter, SwarmType\nfrom swarm_models import OpenAIChat\n\nload_dotenv()\n\n# Get the OpenAI API key from the environment variable\napi_key = os.getenv(\"GROQ_API_KEY\")\n\n# Model\nmodel = OpenAIChat(\n openai_api_base=\"https://api.groq.com/openai/v1\",\n openai_api_key=api_key,\n model_name=\"llama-3.1-70b-versatile\",\n temperature=0.1,\n)\n\n# Define specialized system prompts for each agent\nDATA_EXTRACTOR_PROMPT = \"\"\"You are a highly specialized private equity agent focused on data extraction from various documents. Your expertise includes:\n1. Extracting key financial metrics (revenue, EBITDA, growth rates, etc.) from financial statements and reports\n2. Identifying and extracting important contract terms from legal documents\n3. Pulling out relevant market data from industry reports and analyses\n4. Extracting operational KPIs from management presentations and internal reports\n5. Identifying and extracting key personnel information from organizational charts and bios\nProvide accurate, structured data extracted from various document types to support investment analysis.\"\"\"\n\nSUMMARIZER_PROMPT = \"\"\"You are an expert private equity agent specializing in summarizing complex documents. Your core competencies include:\n1. Distilling lengthy financial reports into concise executive summaries\n2. Summarizing legal documents, highlighting key terms and potential risks\n3. Condensing industry reports to capture essential market trends and competitive dynamics\n4. Summarizing management presentations to highlight key strategic initiatives and projections\n5. 
Creating brief overviews of technical documents, emphasizing critical points for non-technical stakeholders\nDeliver clear, concise summaries that capture the essence of various documents while highlighting information crucial for investment decisions.\"\"\"\n\n# Initialize specialized agents\ndata_extractor_agent = Agent(\n agent_name=\"Data-Extractor\",\n system_prompt=DATA_EXTRACTOR_PROMPT,\n llm=model,\n max_loops=1,\n autosave=True,\n verbose=True,\n dynamic_temperature_enabled=True,\n saved_state_path=\"data_extractor_agent.json\",\n user_name=\"pe_firm\",\n retry_attempts=1,\n context_length=200000,\n output_type=\"string\",\n)\n\nsummarizer_agent = Agent(\n agent_name=\"Document-Summarizer\",\n system_prompt=SUMMARIZER_PROMPT,\n llm=model,\n max_loops=1,\n autosave=True,\n verbose=True,\n dynamic_temperature_enabled=True,\n saved_state_path=\"summarizer_agent.json\",\n user_name=\"pe_firm\",\n retry_attempts=1,\n context_length=200000,\n output_type=\"string\",\n)\n\n# Initialize the SwarmRouter\nrouter = SwarmRouter(\n name=\"pe-document-analysis-swarm\",\n description=\"Analyze documents for private equity due diligence and investment decision-making\",\n max_loops=1,\n agents=[data_extractor_agent, summarizer_agent],\n swarm_type=\"ConcurrentWorkflow\",\n autosave=True,\n return_json=True,\n)\n\n# Example usage\nif __name__ == \"__main__\":\n # Run a comprehensive private equity document analysis task\n result = router.run(\n \"Where is the best place to find template term sheets for series A startups? Provide links and references\"\n )\n print(result)\n\n # Retrieve and print logs\n for log in router.get_logs():\n print(f\"{log.timestamp} - {log.level}: {log.message}\")\n
"},{"location":"swarms/structs/swarm_router/#advanced-usage","title":"Advanced Usage","text":""},{"location":"swarms/structs/swarm_router/#changing-swarm-types","title":"Changing Swarm Types","text":"You can create multiple SwarmRouter instances with different swarm types:
sequential_router = SwarmRouter(\n name=\"SequentialRouter\",\n agents=[agent1, agent2],\n swarm_type=\"SequentialWorkflow\"\n)\n\nconcurrent_router = SwarmRouter(\n name=\"ConcurrentRouter\",\n agents=[agent1, agent2],\n swarm_type=\"ConcurrentWorkflow\"\n)\n
"},{"location":"swarms/structs/swarm_router/#automatic-swarm-type-selection","title":"Automatic Swarm Type Selection","text":"You can let the SwarmRouter automatically select the best swarm type for a given task:
auto_router = SwarmRouter(\n name=\"AutoRouter\",\n agents=[agent1, agent2],\n swarm_type=\"auto\"\n)\n\nresult = auto_router.run(\"Analyze and summarize the quarterly financial report\")\n
"},{"location":"swarms/structs/swarm_router/#loading-agents-from-csv","title":"Loading Agents from CSV","text":"To load agents from a CSV file:
csv_router = SwarmRouter(\n name=\"CSVAgentRouter\",\n load_agents_from_csv=True,\n csv_file_path=\"agents.csv\",\n swarm_type=\"SequentialWorkflow\"\n)\n\nresult = csv_router.run(\"Process the client data\")\n
"},{"location":"swarms/structs/swarm_router/#using-shared-memory-system","title":"Using Shared Memory System","text":"To enable shared memory across agents:
from swarms.memory import SemanticMemory\n\nmemory_system = SemanticMemory()\n\nmemory_router = SwarmRouter(\n name=\"MemoryRouter\",\n agents=[agent1, agent2],\n shared_memory_system=memory_system,\n swarm_type=\"SequentialWorkflow\"\n)\n\nresult = memory_router.run(\"Analyze historical data and make predictions\")\n
"},{"location":"swarms/structs/swarm_router/#injecting-rules-to-all-agents","title":"Injecting Rules to All Agents","text":"To inject common rules into all agents:
rules = \"\"\"\n1. Always provide sources for your information\n2. Check your calculations twice\n3. Explain your reasoning clearly\n4. Highlight uncertainties and assumptions\n\"\"\"\n\nrules_router = SwarmRouter(\n name=\"RulesRouter\",\n agents=[agent1, agent2],\n rules=rules,\n swarm_type=\"SequentialWorkflow\"\n)\n\nresult = rules_router.run(\"Analyze the investment opportunity\")\n
"},{"location":"swarms/structs/swarm_router/#use-cases","title":"Use Cases","text":""},{"location":"swarms/structs/swarm_router/#agentrearrange","title":"AgentRearrange","text":"Use Case: Optimizing agent order for complex multi-step tasks.
rearrange_router = SwarmRouter(\n name=\"TaskOptimizer\",\n description=\"Optimize agent order for multi-step tasks\",\n max_loops=3,\n agents=[data_extractor, analyzer, summarizer],\n swarm_type=\"AgentRearrange\",\n rearrange_flow=f\"{data_extractor.name} -> {analyzer.name} -> {summarizer.name}\"\n)\n\nresult = rearrange_router.run(\"Analyze and summarize the quarterly financial report\")\n
"},{"location":"swarms/structs/swarm_router/#mixtureofagents","title":"MixtureOfAgents","text":"Use Case: Combining diverse expert agents for comprehensive analysis.
mixture_router = SwarmRouter(\n name=\"ExpertPanel\",\n description=\"Combine insights from various expert agents\",\n max_loops=1,\n agents=[financial_expert, market_analyst, tech_specialist, aggregator],\n swarm_type=\"MixtureOfAgents\"\n)\n\nresult = mixture_router.run(\"Evaluate the potential acquisition of TechStartup Inc.\")\n
"},{"location":"swarms/structs/swarm_router/#spreadsheetswarm","title":"SpreadSheetSwarm","text":"Use Case: Collaborative data processing and analysis.
spreadsheet_router = SwarmRouter(\n name=\"DataProcessor\",\n description=\"Collaborative data processing and analysis\",\n max_loops=1,\n agents=[data_cleaner, statistical_analyzer, visualizer],\n swarm_type=\"SpreadSheetSwarm\"\n)\n\nresult = spreadsheet_router.run(\"Process and visualize customer churn data\")\n
"},{"location":"swarms/structs/swarm_router/#sequentialworkflow","title":"SequentialWorkflow","text":"Use Case: Step-by-step document analysis and report generation.
sequential_router = SwarmRouter(\n name=\"ReportGenerator\",\n description=\"Generate comprehensive reports sequentially\",\n max_loops=1,\n agents=[data_extractor, analyzer, writer, reviewer],\n swarm_type=\"SequentialWorkflow\",\n return_entire_history=True\n)\n\nresult = sequential_router.run(\"Create a due diligence report for Project Alpha\")\n
"},{"location":"swarms/structs/swarm_router/#concurrentworkflow","title":"ConcurrentWorkflow","text":"Use Case: Parallel processing of multiple data sources.
concurrent_router = SwarmRouter(\n name=\"MultiSourceAnalyzer\",\n description=\"Analyze multiple data sources concurrently\",\n max_loops=1,\n agents=[financial_analyst, market_researcher, competitor_analyst],\n swarm_type=\"ConcurrentWorkflow\",\n output_type=\"string\"\n)\n\nresult = concurrent_router.run(\"Conduct a comprehensive market analysis for Product X\")\n
"},{"location":"swarms/structs/swarm_router/#groupchat","title":"GroupChat","text":"Use Case: Simulating a group discussion with multiple agents.
group_chat_router = SwarmRouter(\n name=\"GroupChat\",\n description=\"Simulate a group discussion with multiple agents\",\n max_loops=10,\n agents=[financial_analyst, market_researcher, competitor_analyst],\n swarm_type=\"GroupChat\",\n speaker_fn=custom_speaker_function\n)\n\nresult = group_chat_router.run(\"Discuss the pros and cons of expanding into the Asian market\")\n
"},{"location":"swarms/structs/swarm_router/#multiagentrouter","title":"MultiAgentRouter","text":"Use Case: Routing tasks to the most appropriate agent.
multi_agent_router = SwarmRouter(\n name=\"MultiAgentRouter\",\n description=\"Route tasks to specialized agents\",\n max_loops=1,\n agents=[financial_analyst, market_researcher, competitor_analyst],\n swarm_type=\"MultiAgentRouter\",\n shared_memory_system=memory_system\n)\n\nresult = multi_agent_router.run(\"Analyze the competitive landscape for our new product\")\n
See MultiAgentRouter Minimal Example for a lightweight demonstration.
"},{"location":"swarms/structs/swarm_router/#hierarchicalswarm","title":"HierarchicalSwarm","text":"Use Case: Creating a hierarchical structure of agents with a director.
hierarchical_router = SwarmRouter(\n name=\"HierarchicalSwarm\",\n description=\"Hierarchical organization of agents with a director\",\n max_loops=3,\n agents=[director, analyst1, analyst2, researcher],\n swarm_type=\"HiearchicalSwarm\",\n return_all_history=True\n)\n\nresult = hierarchical_router.run(\"Develop a comprehensive market entry strategy\")\n
"},{"location":"swarms/structs/swarm_router/#majorityvoting","title":"MajorityVoting","text":"Use Case: Using consensus among multiple agents for decision-making.
voting_router = SwarmRouter(\n name=\"MajorityVoting\",\n description=\"Make decisions using consensus among agents\",\n max_loops=1,\n agents=[analyst1, analyst2, analyst3, consensus_agent],\n swarm_type=\"MajorityVoting\"\n)\n\nresult = voting_router.run(\"Should we invest in Company X based on the available data?\")\n
"},{"location":"swarms/structs/swarm_router/#auto-select-experimental","title":"Auto Select (Experimental)","text":"Autonomously selects the most suitable swarm type by running a vector search over your input task, the router's name, its description, or all three.
auto_router = SwarmRouter(\n name=\"MultiSourceAnalyzer\",\n description=\"Analyze multiple data sources concurrently\",\n max_loops=1,\n agents=[financial_analyst, market_researcher, competitor_analyst],\n swarm_type=\"auto\" # Set to \"auto\" to let the router select the swarm type; it matches keywords such as \"concurrently\" and \"multiple\" -> \"ConcurrentWorkflow\"\n)\n\nresult = auto_router.run(\"Conduct a comprehensive market analysis for Product X\")\n
"},{"location":"swarms/structs/swarm_router/#interactivegroupchat","title":"InteractiveGroupChat","text":"Use Case: Interactive group discussions with user participation.
interactive_chat_router = SwarmRouter(\n name=\"InteractiveGroupChat\",\n description=\"Interactive group chat with user participation\",\n max_loops=10,\n agents=[financial_analyst, market_researcher, competitor_analyst],\n swarm_type=\"InteractiveGroupChat\",\n output_type=\"string\"\n)\n\nresult = interactive_chat_router.run(\"Discuss the market trends and provide interactive analysis\")\n
The InteractiveGroupChat allows for dynamic interaction between agents and users, enabling real-time participation in group discussions and decision-making processes. This is particularly useful for scenarios requiring human input or validation during the conversation flow.
"},{"location":"swarms/structs/swarm_router/#advanced-features","title":"Advanced Features","text":""},{"location":"swarms/structs/swarm_router/#processing-documents","title":"Processing Documents","text":"To process documents with the SwarmRouter:
document_router = SwarmRouter(\n name=\"DocumentProcessor\",\n agents=[document_analyzer, summarizer],\n documents=[\"report.pdf\", \"contract.docx\", \"data.csv\"],\n swarm_type=\"SequentialWorkflow\"\n)\n\nresult = document_router.run(\"Extract key information from the provided documents\")\n
"},{"location":"swarms/structs/swarm_router/#batch-processing","title":"Batch Processing","text":"To process multiple tasks in a batch:
tasks = [\"Analyze Q1 report\", \"Summarize competitor landscape\", \"Evaluate market trends\"]\nresults = router.batch_run(tasks)\n
"},{"location":"swarms/structs/swarm_router/#asynchronous-execution","title":"Asynchronous Execution","text":"For asynchronous task execution:
result = await router.async_run(\"Generate financial projections\")\n
"},{"location":"swarms/structs/swarm_router/#concurrent-execution","title":"Concurrent Execution","text":"To run a single task concurrently:
result = router.concurrent_run(\"Analyze multiple data streams\")\n
"},{"location":"swarms/structs/swarm_router/#concurrent-batch-processing","title":"Concurrent Batch Processing","text":"To process multiple tasks concurrently:
tasks = [\"Task 1\", \"Task 2\", \"Task 3\"]\nresults = router.concurrent_batch_run(tasks)\n
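The behavior of a concurrent batch run is essentially a thread-pool map over the task list; a minimal self-contained sketch (the `run_one` worker here is a hypothetical stand-in for `router.run`, not the swarms implementation):

```python
from concurrent.futures import ThreadPoolExecutor
from typing import List


def run_one(task: str) -> str:
    # Hypothetical stand-in for router.run(task)
    return f"result for {task}"


def concurrent_batch(tasks: List[str], max_workers: int = 4) -> List[str]:
    # executor.map preserves input order even though tasks run in parallel
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        return list(executor.map(run_one, tasks))


results = concurrent_batch(["Task 1", "Task 2", "Task 3"])
print(results)
```

Because `executor.map` yields results in input order, the i-th output always corresponds to the i-th task, regardless of which thread finished first.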
"},{"location":"swarms/structs/swarm_router/#using-the-swarmrouter-as-a-callable","title":"Using the SwarmRouter as a Callable","text":"You can use the SwarmRouter instance directly as a callable:
router = SwarmRouter(\n name=\"CallableRouter\",\n agents=[agent1, agent2],\n swarm_type=\"SequentialWorkflow\"\n)\n\nresult = router(\"Analyze the market data\") # Equivalent to router.run(\"Analyze the market data\")\n
"},{"location":"swarms/structs/swarm_router/#using-the-swarm_router-function","title":"Using the swarm_router Function","text":"For quick one-off tasks, you can use the swarm_router function:
from swarms import swarm_router\n\nresult = swarm_router(\n name=\"QuickRouter\",\n agents=[agent1, agent2],\n swarm_type=\"ConcurrentWorkflow\",\n task=\"Analyze the quarterly report\"\n)\n
"},{"location":"swarms/structs/task/","title":"Task Class Documentation","text":"The Task
class is a pivotal component designed for managing tasks in a sequential workflow. This class allows for the execution of tasks using various agents, which can be callable objects or specific instances of the Agent
class. It supports the scheduling of tasks, handling their dependencies, and setting conditions and actions that govern their execution.
Key features of the Task
class include: - Executing tasks with specified agents and handling their results. - Scheduling tasks to run at specified times. - Setting triggers, actions, and conditions for tasks. - Managing task dependencies and priorities. - Providing a history of task executions for tracking purposes.
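The interplay of conditions and dependencies described above can be sketched with a minimal stdlib class; names like `MiniTask` are illustrative only and do not reflect the swarms `Task` API:

```python
from typing import Any, Callable, List, Optional


class MiniTask:
    # Minimal illustration of the condition/dependency execution pattern
    def __init__(self, description: str, agent: Callable[[str], Any]):
        self.description = description
        self.agent = agent
        self.result: Any = None
        self.condition: Optional[Callable[[], bool]] = None
        self.dependencies: List["MiniTask"] = []

    def is_completed(self) -> bool:
        return self.result is not None

    def check_dependency_completion(self) -> bool:
        return all(dep.is_completed() for dep in self.dependencies)

    def run(self) -> Any:
        # Execute only when all dependencies are done and the condition (if any) holds
        if not self.check_dependency_completion():
            return None
        if self.condition is not None and not self.condition():
            return None
        self.result = self.agent(self.description)
        return self.result


first = MiniTask("extract", lambda d: f"done: {d}")
second = MiniTask("summarize", lambda d: f"done: {d}")
second.dependencies.append(first)
second.run()  # returns None: dependency not yet completed
first.run()
print(second.run())  # now executes
```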
The Task
class is defined as follows:
agent
Union[Callable, Agent]
The agent or callable object to run the task. description
str
Description of the task. result
Any
Result of the task. history
List[Any]
History of the task. schedule_time
datetime
Time to schedule the task. scheduler
sched.scheduler
Scheduler to schedule the task. trigger
Callable
Trigger to run the task. action
Callable
Action to run the task. condition
Callable
Condition to run the task. priority
int
Priority of the task. dependencies
List[Task]
List of tasks that need to be completed before this task can be executed. args
List[Any]
Arguments to pass to the agent or callable object. kwargs
Dict[str, Any]
Keyword arguments to pass to the agent or callable object."},{"location":"swarms/structs/task/#methods","title":"Methods","text":""},{"location":"swarms/structs/task/#executeself-args-kwargs","title":"execute(self, *args, **kwargs)
","text":"Executes the task by calling the agent or model with the specified arguments and keyword arguments. If a condition is set, the task will only execute if the condition returns True
.
args
: Arguments to pass to the agent or callable object.kwargs
: Keyword arguments to pass to the agent or callable object.>>> from swarms.structs import Task, Agent\n>>> from swarm_models import OpenAIChat\n>>> agent = Agent(llm=OpenAIChat(openai_api_key=\"\"), max_loops=1, dashboard=False)\n>>> task = Task(description=\"What's the weather in Miami?\", agent=agent)\n>>> task.run()\n>>> task.result\n
"},{"location":"swarms/structs/task/#handle_scheduled_taskself","title":"handle_scheduled_task(self)
","text":"Handles the execution of a scheduled task. If the schedule time is not set or has already passed, the task is executed immediately. Otherwise, the task is scheduled to be executed at the specified schedule time.
"},{"location":"swarms/structs/task/#examples_1","title":"Examples","text":">>> task.schedule_time = datetime.now() + timedelta(seconds=10)\n>>> task.handle_scheduled_task()\n
"},{"location":"swarms/structs/task/#set_triggerself-trigger-callable","title":"set_trigger(self, trigger: Callable)
","text":"Sets the trigger for the task.
"},{"location":"swarms/structs/task/#parameters_1","title":"Parameters","text":"trigger
(Callable
): The trigger to set.>>> def my_trigger():\n>>> print(\"Trigger executed\")\n>>> task.set_trigger(my_trigger)\n
"},{"location":"swarms/structs/task/#set_actionself-action-callable","title":"set_action(self, action: Callable)
","text":"Sets the action for the task.
"},{"location":"swarms/structs/task/#parameters_2","title":"Parameters","text":"action
(Callable
): The action to set.>>> def my_action():\n>>> print(\"Action executed\")\n>>> task.set_action(my_action)\n
"},{"location":"swarms/structs/task/#set_conditionself-condition-callable","title":"set_condition(self, condition: Callable)
","text":"Sets the condition for the task.
"},{"location":"swarms/structs/task/#parameters_3","title":"Parameters","text":"condition
(Callable
): The condition to set.>>> def my_condition():\n>>> print(\"Condition checked\")\n>>> return True\n>>> task.set_condition(my_condition)\n
"},{"location":"swarms/structs/task/#is_completedself","title":"is_completed(self)
","text":"Checks whether the task has been completed.
"},{"location":"swarms/structs/task/#returns","title":"Returns","text":"bool
: True
if the task has been completed, False
otherwise.>>> task.is_completed()\n
"},{"location":"swarms/structs/task/#add_dependencyself-task","title":"add_dependency(self, task)
","text":"Adds a task to the list of dependencies.
"},{"location":"swarms/structs/task/#parameters_4","title":"Parameters","text":"task
(Task
): The task to add as a dependency.>>> dependent_task = Task(description=\"Dependent Task\")\n>>> task.add_dependency(dependent_task)\n
"},{"location":"swarms/structs/task/#set_priorityself-priority-int","title":"set_priority(self, priority: int)
","text":"Sets the priority of the task.
"},{"location":"swarms/structs/task/#parameters_5","title":"Parameters","text":"priority
(int
): The priority to set.>>> task.set_priority(5)\n
"},{"location":"swarms/structs/task/#check_dependency_completionself","title":"check_dependency_completion(self)
","text":"Checks whether all the dependencies have been completed.
"},{"location":"swarms/structs/task/#returns_1","title":"Returns","text":"bool
: True
if all the dependencies have been completed, False
otherwise.>>> task.check_dependency_completion()\n
"},{"location":"swarms/structs/task/#contextself-task-task-none-context-listtask-none-args-kwargs","title":"context(self, task: \"Task\" = None, context: List[\"Task\"] = None, *args, **kwargs)
","text":"Sets the context for the task. For a sequential workflow, it sequentially adds the context of the previous task in the list.
"},{"location":"swarms/structs/task/#parameters_6","title":"Parameters","text":"task
(Task
, optional): The task whose context is to be set.context
(List[Task]
, optional): The list of tasks to set the context.>>> task1 = Task(description=\"Task 1\")\n>>> task2 = Task(description=\"Task 2\")\n>>> task2.context(context=[task1])\n
"},{"location":"swarms/structs/task/#usage-examples","title":"Usage Examples","text":""},{"location":"swarms/structs/task/#basic-usage","title":"Basic Usage","text":"import os\nfrom dotenv import load_dotenv\nfrom swarms import Agent, OpenAIChat, Task\n\n# Load the environment variables\nload_dotenv()\n\n# Define a function to be used as the action\ndef my_action():\n print(\"Action executed\")\n\n# Define a function to be used as the condition\ndef my_condition():\n print(\"Condition checked\")\n return True\n\n# Create an agent\nagent = Agent(\n llm=OpenAIChat(openai_api_key=os.environ[\"OPENAI_API_KEY\"]),\n max_loops=1,\n dashboard=False,\n)\n\n# Create a task\ntask = Task(\n description=\"Generate a report on the top 3 biggest expenses for small businesses and how businesses can save 20%\",\n agent=agent,\n)\n\n# Set the action and condition\ntask.set_action(my_action)\ntask.set_condition(my_condition)\n\n# Execute the task\nprint(\"Executing task...\")\ntask.run()\n\n# Check if the task is completed\nif task.is_completed():\n print(\"Task completed\")\nelse:\n print(\"Task not completed\")\n\n# Output the result of the task\nprint(f\"Task result: {task.result}\")\n
"},{"location":"swarms/structs/task/#scheduled-task-execution","title":"Scheduled Task Execution","text":"from datetime import datetime, timedelta\nimport os\nfrom dotenv import load_dotenv\nfrom swarms import Agent, OpenAIChat, Task\n\n# Load the environment variables\nload_dotenv()\n\n# Create an agent\nagent = Agent(\n llm=OpenAIChat(openai_api_key=os.environ[\"OPENAI_API_KEY\"]),\n max_loops=1,\n dashboard=False,\n)\n\n# Create a task\ntask = Task(\n description=\"Scheduled task example\",\n agent=agent,\n schedule_time=datetime.now() + timedelta(seconds=10)\n)\n\n# Handle scheduled task\ntask.handle_scheduled_task()\n
"},{"location":"swarms/structs/task/#task-with-dependencies","title":"Task with Dependencies","text":"import os\nfrom dotenv import load_dotenv\nfrom swarms import Agent, OpenAIChat, Task\n\n# Load the environment variables\nload_dotenv()\n\n# Create agents\nagent1 = Agent(\n llm=OpenAIChat(openai_api_key=os.environ[\"OPENAI_API_KEY\"]),\n max_loops=1,\n dashboard=False,\n)\nagent2 = Agent(\n llm=OpenAIChat(openai_api_key=os.environ[\"OPENAI_API_KEY\"]),\n max_loops=1,\n dashboard=False,\n)\n\n# Create tasks\ntask1 = Task(description=\"First task\", agent=agent1)\ntask2 = Task(description=\"Second task\", agent=agent2)\n\n# Add dependency\ntask2.add_dependency(task1)\n\n# Execute tasks\nprint(\"Executing first task...\")\ntask1.run()\n\nprint(\"Executing second task...\")\ntask2.run()\n\n# Check if tasks are completed\nprint(f\"Task 1 completed: {task1.is_completed()}\")\nprint(f\"Task 2 completed: {task2.is_completed()}\")\n
"},{"location":"swarms/structs/task/#task-context","title":"Task Context","text":"import os\nfrom dotenv import load_dotenv\nfrom swarms import Agent, OpenAIChat, Task\n\n# Load the environment variables\nload_dotenv()\n\n# Create an agent\nagent = Agent(\n llm=OpenAIChat(openai_api_key=os.environ[\"OPENAI_API_KEY\"]),\n max_loops=1,\n dashboard=False,\n)\n\n# Create tasks\ntask1 = Task(description=\"First task\", agent=agent)\ntask2 = Task(description=\"Second task\", agent=agent)\n\n# Set context for the second task\ntask2.context(context=[task1])\n\n# Execute tasks\nprint(\"Executing first task...\")\ntask1.run()\n\nprint(\"Executing second task...\")\ntask2.run()\n\n# Output the context of the second task\nprint(f\"Task 2 context: {task2.history}\")\n
"},{"location":"swarms/structs/taskqueue_swarm/","title":"TaskQueueSwarm Documentation","text":"The TaskQueueSwarm
class is designed to manage and execute tasks using multiple agents concurrently. This class allows for the orchestration of multiple agents processing tasks from a shared queue, facilitating complex workflows where tasks can be distributed and processed in parallel by different agents.
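The orchestration pattern this describes — several workers draining a shared queue under a lock — can be sketched with the standard library; the worker names and `run_swarm` helper below are illustrative, not the swarms implementation:

```python
import queue
import threading
from typing import List, Tuple


def run_swarm(tasks: List[str], n_workers: int = 2) -> List[Tuple[str, str]]:
    task_queue: "queue.Queue[str]" = queue.Queue()
    for t in tasks:
        task_queue.put(t)

    results: List[Tuple[str, str]] = []
    lock = threading.Lock()  # protects the shared results list

    def worker(name: str) -> None:
        while True:
            try:
                task = task_queue.get_nowait()
            except queue.Empty:
                return  # queue drained, worker exits
            with lock:
                results.append((name, task))  # a real agent would process the task here
            task_queue.task_done()

    threads = [
        threading.Thread(target=worker, args=(f"agent{i}",))
        for i in range(n_workers)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results


out = run_swarm(["Analyze the latest market trends", "Generate a summary report"])
print(sorted(task for _, task in out))
```

Every task is processed exactly once because `Queue.get_nowait` hands each item to a single worker; which agent handles which task depends on thread scheduling.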
agents
List[Agent]
The list of agents in the swarm. task_queue
queue.Queue
A queue to store tasks for processing. lock
threading.Lock
A lock for thread synchronization. autosave_on
bool
Whether to automatically save the swarm metadata. save_file_path
str
The file path for saving swarm metadata. workspace_dir
str
The directory path of the workspace. return_metadata_on
bool
Whether to return the swarm metadata after running. max_loops
int
The maximum number of loops to run the swarm. metadata
SwarmRunMetadata
Metadata about the swarm run."},{"location":"swarms/structs/taskqueue_swarm/#methods","title":"Methods","text":""},{"location":"swarms/structs/taskqueue_swarm/#__init__self-agents-listagent-name-str-task-queue-swarm-description-str-a-swarm-that-processes-tasks-from-a-queue-using-multiple-agents-on-different-threads-autosave_on-bool-true-save_file_path-str-swarm_run_metadatajson-workspace_dir-str-osgetenvworkspace_dir-return_metadata_on-bool-false-max_loops-int-1-args-kwargs","title":"__init__(self, agents: List[Agent], name: str = \"Task-Queue-Swarm\", description: str = \"A swarm that processes tasks from a queue using multiple agents on different threads.\", autosave_on: bool = True, save_file_path: str = \"swarm_run_metadata.json\", workspace_dir: str = os.getenv(\"WORKSPACE_DIR\"), return_metadata_on: bool = False, max_loops: int = 1, *args, **kwargs)
","text":"The constructor initializes the TaskQueueSwarm
object.
agents
(List[Agent]
): The list of agents in the swarm.name
(str
, optional): The name of the swarm. Defaults to \"Task-Queue-Swarm\".description
(str
, optional): The description of the swarm. Defaults to \"A swarm that processes tasks from a queue using multiple agents on different threads.\".autosave_on
(bool
, optional): Whether to automatically save the swarm metadata. Defaults to True.save_file_path
(str
, optional): The file path to save the swarm metadata. Defaults to \"swarm_run_metadata.json\".workspace_dir
(str
, optional): The directory path of the workspace. Defaults to os.getenv(\"WORKSPACE_DIR\").return_metadata_on
(bool
, optional): Whether to return the swarm metadata after running. Defaults to False.max_loops
(int
, optional): The maximum number of loops to run the swarm. Defaults to 1.*args
: Variable length argument list.**kwargs
: Arbitrary keyword arguments.add_task(self, task: str)
","text":"Adds a task to the queue.
task
(str
): The task to be added to the queue.run(self)
","text":"Runs the swarm by having agents pick up tasks from the queue.
str
: JSON string of the swarm run metadata if return_metadata_on
is True.
Usage Example:
from swarms import Agent, TaskQueueSwarm\nfrom swarm_models import OpenAIChat\n\n# Initialize the language model\nllm = OpenAIChat()\n\n# Initialize agents\nagent1 = Agent(agent_name=\"Agent1\", llm=llm)\nagent2 = Agent(agent_name=\"Agent2\", llm=llm)\n\n# Create the TaskQueueSwarm\nswarm = TaskQueueSwarm(agents=[agent1, agent2], max_loops=5)\n\n# Add tasks to the swarm\nswarm.add_task(\"Analyze the latest market trends\")\nswarm.add_task(\"Generate a summary report\")\n\n# Run the swarm\nresult = swarm.run()\nprint(result) # Prints the swarm run metadata\n
This example initializes a TaskQueueSwarm
with two agents, adds tasks to the queue, and runs the swarm.
save_json_to_file(self)
","text":"Saves the swarm run metadata to a JSON file.
"},{"location":"swarms/structs/taskqueue_swarm/#export_metadataself","title":"export_metadata(self)
","text":"Exports the swarm run metadata as a JSON string.
str
: JSON string of the swarm run metadata.TaskQueueSwarm
uses threading to process tasks concurrently, which can significantly improve performance for I/O-bound tasks.reliability_checks
method ensures that the swarm is properly configured before running.This documentation covers the API for running multiple agents concurrently using various execution strategies. The implementation uses asyncio
with uvloop
for enhanced performance and ThreadPoolExecutor
for handling CPU-bound operations.
Primary function for running multiple agents concurrently with optimized performance using both uvloop and ThreadPoolExecutor.
"},{"location":"swarms/structs/various_execution_methods/#arguments","title":"Arguments","text":"Parameter Type Required Default Description agents List[AgentType] Yes - List of Agent instances to run concurrently task str Yes - Task string to execute batch_size int No CPU count Number of agents to run in parallel in each batch max_workers int No CPU count * 2 Maximum number of threads in the executor"},{"location":"swarms/structs/various_execution_methods/#returns","title":"Returns","text":"List[Any]
: List of outputs from each agent
graph TD\n A[Start] --> B[Initialize ThreadPoolExecutor]\n B --> C[Split Agents into Batches]\n C --> D[Process Batch]\n D --> E{More Batches?}\n E -->|Yes| D\n E -->|No| F[Combine Results]\n F --> G[Return Results]\n\n subgraph \"Batch Processing\"\n D --> H[Run Agents Async]\n H --> I[Wait for Completion]\n I --> J[Collect Batch Results]\n end
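The flow above — split the agents into batches, run each batch in parallel, combine the results — can be sketched with a thread pool; the agent callables here are simple stand-ins for real `Agent` instances, and `run_agents_in_batches` is a hypothetical helper, not the library function:

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Any, Callable, List


def run_agents_in_batches(
    agents: List[Callable[[str], Any]],
    task: str,
    batch_size: int = 2,
    max_workers: int = 4,
) -> List[Any]:
    results: List[Any] = []
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        # Process agents batch by batch, as in the flowchart above
        for start in range(0, len(agents), batch_size):
            batch = agents[start:start + batch_size]
            results.extend(executor.map(lambda a: a(task), batch))
    return results


agents = [lambda t, i=i: f"agent{i}: {t}" for i in range(5)]
print(run_agents_in_batches(agents, "analyze"))
```

Output order matches agent order: `executor.map` preserves order within a batch, and batches are appended sequentially.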
"},{"location":"swarms/structs/various_execution_methods/#run_agents_sequentially","title":"run_agents_sequentially()","text":"Runs multiple agents sequentially for baseline comparison or simple use cases.
"},{"location":"swarms/structs/various_execution_methods/#arguments_1","title":"Arguments","text":"Parameter Type Required Default Description agents List[AgentType] Yes - List of Agent instances to run task str Yes - Task string to execute"},{"location":"swarms/structs/various_execution_methods/#returns_1","title":"Returns","text":"List[Any]
: List of outputs from each agent
Runs multiple agents with different tasks concurrently.
"},{"location":"swarms/structs/various_execution_methods/#arguments_2","title":"Arguments","text":"Parameter Type Required Default Description agent_task_pairs List[tuple[AgentType, str]] Yes - List of (agent, task) tuples batch_size int No CPU count Number of agents to run in parallel max_workers int No CPU count * 2 Maximum number of threads"},{"location":"swarms/structs/various_execution_methods/#run_agents_with_timeout","title":"run_agents_with_timeout()","text":"Runs multiple agents concurrently with timeout limits.
"},{"location":"swarms/structs/various_execution_methods/#arguments_3","title":"Arguments","text":"Parameter Type Required Default Description agents List[AgentType] Yes - List of Agent instances task str Yes - Task string to execute timeout float Yes - Timeout in seconds for each agent batch_size int No CPU count Number of agents to run in parallel max_workers int No CPU count * 2 Maximum number of threads"},{"location":"swarms/structs/various_execution_methods/#usage-examples","title":"Usage Examples","text":"from swarms.structs.agent import Agent\nfrom swarms.structs.multi_agent_exec import (\n run_agents_concurrently,\n run_agents_with_timeout,\n run_agents_with_different_tasks\n)\n\n# Initialize agents using only the built-in model_name parameter\nagents = [\n Agent(\n agent_name=f\"Analysis-Agent-{i}\",\n system_prompt=\"You are a financial analysis expert\",\n model_name=\"gpt-4o-mini\",\n max_loops=1\n )\n for i in range(5)\n]\n\n# Basic concurrent execution\ntask = \"Analyze the impact of rising interest rates on tech stocks\"\noutputs = run_agents_concurrently(agents, task)\n\n# Running with timeout\noutputs_with_timeout = run_agents_with_timeout(\n agents=agents,\n task=task,\n timeout=30.0,\n batch_size=2\n)\n\n# Running different tasks\ntask_pairs = [\n (agents[0], \"Analyze tech stocks\"),\n (agents[1], \"Analyze energy stocks\"),\n (agents[2], \"Analyze retail stocks\")\n]\ndifferent_outputs = run_agents_with_different_tasks(task_pairs)\n
"},{"location":"swarms/structs/various_execution_methods/#resource-monitoring","title":"Resource Monitoring","text":""},{"location":"swarms/structs/various_execution_methods/#resourcemetrics","title":"ResourceMetrics","text":"A dataclass for system resource metrics.
"},{"location":"swarms/structs/various_execution_methods/#properties","title":"Properties","text":"Property Type Description cpu_percent float Current CPU usage percentage memory_percent float Current memory usage percentage active_threads int Number of active threads"},{"location":"swarms/structs/various_execution_methods/#run_agents_with_resource_monitoring","title":"run_agents_with_resource_monitoring()","text":"Runs agents with system resource monitoring and adaptive batch sizing.
"},{"location":"swarms/structs/various_execution_methods/#arguments_4","title":"Arguments","text":"Parameter Type Required Default Description agents List[AgentType] Yes - List of Agent instances task str Yes - Task string to execute cpu_threshold float No 90.0 Max CPU usage percentage memory_threshold float No 90.0 Max memory usage percentage check_interval float No 1.0 Resource check interval in seconds"},{"location":"swarms/structs/various_execution_methods/#performance-considerations","title":"Performance Considerations","text":"@profile_func
for performance monitoringuvloop
provides better performance than standard asyncio
The YamlModel
class, derived from BaseModel
in Pydantic, offers a convenient way to work with YAML data in your Python applications. It provides methods for serialization (converting to YAML), deserialization (creating an instance from YAML), and schema generation. This documentation will delve into the functionalities of YamlModel
and guide you through its usage with illustrative examples.
The primary purpose of YamlModel
is to streamline the interaction between your Python code and YAML data. It accomplishes this by:
YamlModel
instance into a YAML string representation using the to_yaml()
method.YamlModel
instance from a provided YAML string using the from_yaml()
class method.json_to_yaml()
static method.YamlModel
instances as YAML files using the save_to_yaml()
method.create_yaml_schema()
class method (not yet implemented but included for future reference) will generate a YAML schema that reflects the structure of the YamlModel
class and its fields.The YamlModel
class inherits from Pydantic's BaseModel
class. You can define your custom YAML models by creating subclasses of YamlModel
and specifying your data fields within the class definition. Here's the breakdown of the YamlModel
class and its methods:
class YamlModel(BaseModel):\n \"\"\"\n A Pydantic model class for working with YAML data.\n \"\"\"\n\n def to_yaml(self):\n \"\"\"\n Serialize the Pydantic model instance to a YAML string.\n \"\"\"\n return yaml.safe_dump(self.dict(), sort_keys=False)\n\n @classmethod\n def from_yaml(cls, yaml_str: str):\n \"\"\"\n Create an instance of the class from a YAML string.\n\n Args:\n yaml_str (str): The YAML string to parse.\n\n Returns:\n cls: An instance of the class with attributes populated from the YAML data.\n Returns None if there was an error loading the YAML data.\n \"\"\"\n # ...\n\n @staticmethod\n def json_to_yaml(json_str: str):\n \"\"\"\n Convert a JSON string to a YAML string.\n \"\"\"\n # ...\n\n def save_to_yaml(self, filename: str):\n \"\"\"\n Save the Pydantic model instance as a YAML file.\n \"\"\"\n # ...\n\n # TODO: Implement a method to create a YAML schema from the model fields\n # @classmethod\n # def create_yaml_schema(cls):\n # # ...\n
Arguments:

- self (implicit): Refers to the current instance of the YamlModel class.
- yaml_str (str): The YAML string used for deserialization in the from_yaml() method.
- json_str (str): The JSON string used for conversion to YAML in the json_to_yaml() method.
- filename (str): The filename (including path) for saving the YAML model instance in the save_to_yaml() method.

1. to_yaml()
This method transforms an instance of the YamlModel
class into a YAML string representation. It utilizes the yaml.safe_dump()
function from the PyYAML
library to ensure secure YAML data generation. The sort_keys=False
argument guarantees that the order of keys in the resulting YAML string remains consistent with the order of fields defined in your YamlModel
subclass.
Example:
class User(YamlModel):\n name: str\n age: int\n is_active: bool\n\nuser = User(name=\"Bob\", age=30, is_active=True)\nyaml_string = user.to_yaml()\nprint(yaml_string)\n
This code will output a YAML string representation of the user
object, resembling:
name: Bob\nage: 30\nis_active: true\n
"},{"location":"swarms/structs/yaml_model/#detailed-method-descriptions_1","title":"Detailed Method Descriptions","text":"2. from_yaml(cls, yaml_str) (Class Method)
The from_yaml()
class method is responsible for constructing a YamlModel
instance from a provided YAML string.
Arguments:

- cls (class): The class representing the desired YAML model (the subclass of YamlModel that matches the structure of the YAML data).
- yaml_str (str): The YAML string containing the data to be parsed and used for creating the model instance.

Returns:

- An instance of the specified class (cls) populated with the data extracted from the YAML string. If an error occurs during parsing, it returns None.

Error Handling:

The from_yaml() method employs yaml.safe_load() for secure YAML parsing. It incorporates a try-except block to handle potential ValueError exceptions that might arise during the parsing process. If an error is encountered, it logs the error message and returns None.
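Based on that description, the method's behavior can be sketched roughly as follows. This is an illustrative approximation of the documented behavior, not the framework's exact source:

```python
import yaml
from pydantic import BaseModel


class YamlModel(BaseModel):
    """Minimal sketch of the from_yaml() behavior described above."""

    @classmethod
    def from_yaml(cls, yaml_str: str):
        try:
            # safe_load parses plain YAML without executing arbitrary tags
            return cls(**yaml.safe_load(yaml_str))
        except ValueError as error:
            # Pydantic's ValidationError subclasses ValueError, so field
            # mismatches in the parsed data are caught here.
            print(f"Error loading YAML: {error}")
            return None
```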
Example:
class User(YamlModel):\n name: str\n age: int\n is_active: bool\n\nyaml_string = \"\"\"\nname: Alice\nage: 25\nis_active: false\n\"\"\"\n\nuser = User.from_yaml(yaml_string)\nprint(user.name) # Output: Alice\n
3. json_to_yaml(json_str) (Static Method)
This static method in the YamlModel
class serves the purpose of converting a JSON string into a YAML string representation.
Arguments:

- json_str (str): The JSON string that needs to be converted to YAML format.

Returns:

- str: The converted YAML string representation of the provided JSON data.

Functionality:
The json_to_yaml()
method leverages the json.loads()
function to parse the JSON string into a Python dictionary. Subsequently, it utilizes yaml.dump()
to generate the corresponding YAML string representation from the parsed dictionary.
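In other words, the conversion amounts to roughly the following sketch (an illustration of the documented behavior, not the exact implementation):

```python
import json

import yaml


def json_to_yaml(json_str: str) -> str:
    """Sketch of the JSON-to-YAML conversion described above."""
    # Parse the JSON text into native Python structures...
    data = json.loads(json_str)
    # ...then dump those structures back out as a YAML string.
    return yaml.dump(data, sort_keys=False)
```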
Example:
json_string = '{\"name\": \"Charlie\", \"age\": 42, \"is_active\": true}'\nyaml_string = YamlModel.json_to_yaml(json_string)\nprint(yaml_string)\n
This code snippet will convert the JSON data to a YAML string, likely resembling:
name: Charlie\nage: 42\nis_active: true\n
4. save_to_yaml(self, filename)
The save_to_yaml()
method facilitates the storage of a YamlModel
instance as a YAML file.
Arguments:

- self (implicit): Refers to the current instance of the YamlModel class that you intend to save.
- filename (str): The desired filename (including path) for the YAML file.

Functionality:
The save_to_yaml()
method employs the previously explained to_yaml()
method to generate a YAML string representation of the self
instance. It then opens the specified file in write mode (\"w\"
) and writes the YAML string content to the file.
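Put together, the save flow looks roughly like this (a sketch for illustration, not the framework's exact source):

```python
import yaml
from pydantic import BaseModel


class YamlModel(BaseModel):
    """Sketch of the serialize-then-write flow described above."""

    def to_yaml(self) -> str:
        return yaml.safe_dump(self.dict(), sort_keys=False)

    def save_to_yaml(self, filename: str) -> None:
        # Serialize the instance first, then write the YAML text in one pass.
        with open(filename, "w") as f:
            f.write(self.to_yaml())
```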
Example:
class Employee(YamlModel):\n name: str\n department: str\n salary: float\n\nemployee = Employee(name=\"David\", department=\"Engineering\", salary=95000.00)\nemployee.save_to_yaml(\"employee.yaml\")\n
This code will create a YAML file named \"employee.yaml\" containing the serialized representation of the employee
object.
class User(YamlModel):\n name: str\n age: int\n is_active: bool\n\n# Create an instance of the User model\nuser = User(name=\"Alice\", age=30, is_active=True)\n\n# Serialize the User instance to YAML and print it\nyaml_string = user.to_yaml()\nprint(yaml_string)\n
This code snippet demonstrates the creation of a User
instance and its subsequent serialization to a YAML string using the to_yaml()
method. The printed output will likely resemble:
name: Alice\nage: 30\nis_active: true\n
"},{"location":"swarms/structs/yaml_model/#converting-json-to-yaml","title":"Converting JSON to YAML","text":"# Convert JSON string to YAML and print\njson_string = '{\"name\": \"Bob\", \"age\": 25, \"is_active\": false}'\nyaml_string = YamlModel.json_to_yaml(json_string)\nprint(yaml_string)\n
This example showcases the conversion of a JSON string containing user data into a YAML string representation using the json_to_yaml()
static method. The resulting YAML string might look like:
name: Bob\nage: 25\nis_active: false\n
"},{"location":"swarms/structs/yaml_model/#saving-user-instance-to-yaml-file","title":"Saving User Instance to YAML File","text":"# Save the User instance to a YAML file\nuser.save_to_yaml(\"user.yaml\")\n
This code demonstrates the utilization of the save_to_yaml()
method to store the user
instance as a YAML file named \"user.yaml\". The contents of the file will mirror the serialized YAML string representation of the user object.
Notes:

- Ensure you have the PyYAML library installed (pip install pyyaml) to leverage the YAML parsing and serialization functionalities within YamlModel.
- The create_yaml_schema() method is not yet implemented but serves as a placeholder for future enhancements.

The YamlModel
class in Pydantic offers a streamlined approach to working with YAML data in your Python projects. By employing the provided methods (to_yaml()
, from_yaml()
, json_to_yaml()
, and save_to_yaml()
), you can efficiently convert between Python objects and YAML representations, facilitating data persistence and exchange. This comprehensive documentation empowers you to effectively utilize YamlModel
for your YAML data processing requirements.
The Board of Directors is a sophisticated multi-agent architecture that implements collective decision-making through democratic processes, voting mechanisms, and role-based leadership. This architecture provides an alternative to single-director patterns by enabling collaborative intelligence through structured governance.
"},{"location":"swarms/structs/board_of_directors/#overview","title":"\ud83c\udfdb\ufe0f Overview","text":"The Board of Directors architecture follows a democratic workflow pattern: the board discusses the task, votes, creates a plan, distributes orders to agents, and evaluates the results, repeating until it is satisfied or the iteration limit is reached (controlled by max_loops
)graph TD\n A[User Task] --> B[Board of Directors]\n B --> C[Board Meeting & Discussion]\n C --> D[Voting & Consensus]\n D --> E[Create Plan & Orders]\n E --> F[Distribute to Agents]\n F --> G[Agent 1]\n F --> H[Agent 2]\n F --> I[Agent N]\n G --> J[Execute Task]\n H --> J\n I --> J\n J --> K[Report Results]\n K --> L[Board Evaluation]\n L --> M{More Loops?}\n M -->|Yes| C\n M -->|No| N[Final Output]
"},{"location":"swarms/structs/board_of_directors/#key-features","title":"\ud83c\udfaf Key Features","text":""},{"location":"swarms/structs/board_of_directors/#democratic-decision-making","title":"Democratic Decision Making","text":"The Board of Directors supports various roles with different responsibilities and voting weights:
- CHAIRMAN (voting weight 1.5): Primary leader responsible for board meetings and final decisions. Responsibilities: leading meetings, facilitating consensus, making final decisions.
- VICE_CHAIRMAN (voting weight 1.2): Secondary leader who supports the chairman. Responsibilities: supporting chairman, coordinating operations.
- SECRETARY (voting weight 1.0): Responsible for documentation and meeting minutes. Responsibilities: documenting meetings, maintaining records.
- TREASURER (voting weight 1.0): Manages financial aspects and resource allocation. Responsibilities: financial oversight, resource management.
- EXECUTIVE_DIRECTOR (voting weight 1.5): Executive-level board member with operational authority. Responsibilities: strategic planning, operational oversight.
- MEMBER:
General board member with specific expertise 1.0 Contributing expertise, participating in decisions"},{"location":"swarms/structs/board_of_directors/#quick-start","title":"\ud83d\ude80 Quick Start","text":""},{"location":"swarms/structs/board_of_directors/#basic-setup","title":"Basic Setup","text":"from swarms import Agent\nfrom swarms.structs.board_of_directors_swarm import (\n BoardOfDirectorsSwarm,\n BoardMember,\n BoardMemberRole\n)\nfrom swarms.config.board_config import enable_board_feature\n\n# Enable the Board of Directors feature\nenable_board_feature()\n\n# Create board members with specific roles\nchairman = Agent(\n agent_name=\"Chairman\",\n agent_description=\"Chairman of the Board responsible for leading meetings\",\n model_name=\"gpt-4o-mini\",\n system_prompt=\"You are the Chairman of the Board...\"\n)\n\nvice_chairman = Agent(\n agent_name=\"Vice-Chairman\",\n agent_description=\"Vice Chairman who supports the Chairman\",\n model_name=\"gpt-4o-mini\",\n system_prompt=\"You are the Vice Chairman...\"\n)\n\n# Create BoardMember objects with roles and expertise\nboard_members = [\n BoardMember(chairman, BoardMemberRole.CHAIRMAN, 1.5, [\"leadership\", \"strategy\"]),\n BoardMember(vice_chairman, BoardMemberRole.VICE_CHAIRMAN, 1.2, [\"operations\", \"coordination\"]),\n]\n\n# Create worker agents\nresearch_agent = Agent(\n agent_name=\"Research-Specialist\",\n agent_description=\"Expert in market research and analysis\",\n model_name=\"gpt-4o\",\n)\n\nfinancial_agent = Agent(\n agent_name=\"Financial-Analyst\",\n agent_description=\"Specialist in financial analysis and valuation\",\n model_name=\"gpt-4o\",\n)\n\n# Initialize the Board of Directors swarm\nboard_swarm = BoardOfDirectorsSwarm(\n name=\"Executive_Board_Swarm\",\n description=\"Executive board with specialized roles for strategic decision-making\",\n board_members=board_members,\n agents=[research_agent, financial_agent],\n max_loops=2,\n verbose=True,\n decision_threshold=0.6,\n 
enable_voting=True,\n enable_consensus=True,\n)\n\n# Execute a complex task with democratic decision-making\nresult = board_swarm.run(task=\"Analyze the market potential for Tesla (TSLA) stock\")\nprint(result)\n
"},{"location":"swarms/structs/board_of_directors/#use-cases","title":"\ud83d\udccb Use Cases","text":""},{"location":"swarms/structs/board_of_directors/#corporate-governance","title":"Corporate Governance","text":"Pre-configured board templates for common use cases:
from swarms.config.board_config import get_default_board_template\n\n# Get a financial analysis board template\nfinancial_board = get_default_board_template(\"financial_analysis\")\n\n# Get a strategic planning board template\nstrategic_board = get_default_board_template(\"strategic_planning\")\n
"},{"location":"swarms/structs/board_of_directors/#dynamic-role-assignment","title":"Dynamic Role Assignment","text":"Automatically assign roles based on task requirements:
# Board members are automatically assigned roles based on expertise\nboard_swarm = BoardOfDirectorsSwarm(\n board_members=board_members,\n auto_assign_roles=True,\n role_mapping={\n \"financial_analysis\": [\"Treasurer\", \"Financial_Member\"],\n \"strategic_planning\": [\"Chairman\", \"Executive_Director\"]\n }\n)\n
"},{"location":"swarms/structs/board_of_directors/#consensus-optimization","title":"Consensus Optimization","text":"Advanced consensus-building mechanisms:
# Enable advanced consensus features\nboard_swarm = BoardOfDirectorsSwarm(\n enable_consensus=True,\n consensus_timeout=300, # 5 minutes\n min_participation_rate=0.5, # 50% minimum participation\n auto_fallback_to_chairman=True\n)\n
"},{"location":"swarms/structs/board_of_directors/#performance-monitoring","title":"\ud83d\udcca Performance Monitoring","text":""},{"location":"swarms/structs/board_of_directors/#decision-metrics","title":"Decision Metrics","text":"A successful Board of Directors implementation should demonstrate efficient decision speed, high participation rates, and outcomes that meet their stated success criteria.
The Board of Directors architecture represents a sophisticated approach to multi-agent collaboration, enabling organizations to leverage collective intelligence through structured governance and democratic decision-making processes.
"},{"location":"swarms/structs/board_of_directors/board_of_directors_decision_making/","title":"Board of Directors Decision Making","text":"The Board of Directors implements sophisticated decision-making processes that combine voting mechanisms, consensus building, and hierarchical authority to ensure effective and fair decision-making.
"},{"location":"swarms/structs/board_of_directors/board_of_directors_decision_making/#decision-making-framework","title":"Decision-Making Framework","text":"graph TD\n A[Task/Issue Presented] --> B[Initial Assessment]\n B --> C[Expertise Assignment]\n C --> D[Individual Analysis]\n D --> E[Group Discussion]\n E --> F[Proposal Development]\n F --> G[Voting Process]\n G --> H{Consensus Reached?}\n H -->|Yes| I[Decision Approved]\n H -->|No| J[Reconciliation Process]\n J --> K[Proposal Refinement]\n K --> G\n I --> L[Implementation Planning]\n L --> M[Execution]\n\n style A fill:#e3f2fd\n style I fill:#c8e6c9\n style H fill:#fff3e0
"},{"location":"swarms/structs/board_of_directors/board_of_directors_decision_making/#voting-mechanisms","title":"Voting Mechanisms","text":""},{"location":"swarms/structs/board_of_directors/board_of_directors_decision_making/#weighted-voting-system","title":"Weighted Voting System","text":"graph LR\n subgraph \"Voting Weights\"\n A[Chairman: 1.5]\n B[Executive Director: 1.5]\n C[Vice Chairman: 1.2]\n D[Secretary: 1.0]\n E[Treasurer: 1.0]\n F[Member: 1.0]\n end\n\n subgraph \"Voting Process\"\n G[Individual Votes]\n H[Weight Calculation]\n I[Threshold Check]\n J[Decision Outcome]\n end\n\n A --> G\n B --> G\n C --> G\n D --> G\n E --> G\n F --> G\n\n G --> H\n H --> I\n I --> J
"},{"location":"swarms/structs/board_of_directors/board_of_directors_decision_making/#voting-power-distribution","title":"Voting Power Distribution","text":"pie title Total Voting Power Distribution\n \"Chairman\" : 1.5\n \"Executive Director\" : 1.5\n \"Vice Chairman\" : 1.2\n \"Secretary\" : 1.0\n \"Treasurer\" : 1.0\n \"Member\" : 1.0
"},{"location":"swarms/structs/board_of_directors/board_of_directors_decision_making/#decision-thresholds","title":"Decision Thresholds","text":"graph TD\n A[Vote Calculation] --> B{Threshold Check}\n B -->|Below 60%| C[Rejection]\n B -->|60-75%| D[Conditional Approval]\n B -->|Above 75%| E[Strong Approval]\n B -->|100%| F[Unanimous Approval]\n\n C --> G[Reconciliation Required]\n D --> H[Implementation with Conditions]\n E --> I[Full Implementation]\n F --> J[Immediate Implementation]\n\n style C fill:#ffcdd2\n style D fill:#fff9c4\n style E fill:#c8e6c9\n style F fill:#4caf50
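The weighted arithmetic behind these thresholds can be illustrated with a small sketch. The helper below is hypothetical (not part of the Swarms API); the weights are taken from the role table earlier in this document:

```python
def tally_votes(votes: dict, weights: dict) -> str:
    """Classify a weighted board vote using the threshold bands shown above.

    votes:   mapping of member name -> True (approve) / False (reject)
    weights: mapping of member name -> voting weight
    """
    total = sum(weights.values())
    approve = sum(weights[m] for m, vote in votes.items() if vote)
    ratio = approve / total
    if ratio == 1.0:
        return "unanimous_approval"
    if ratio > 0.75:
        return "strong_approval"
    if ratio >= 0.60:
        return "conditional_approval"
    return "rejected"


weights = {
    "Chairman": 1.5, "Executive_Director": 1.5, "Vice_Chairman": 1.2,
    "Secretary": 1.0, "Treasurer": 1.0, "Member": 1.0,
}
votes = {member: True for member in weights}
votes["Member"] = False
# 6.2 of 7.2 weighted votes approve (about 0.86), landing in the
# "Above 75%" band from the diagram.
print(tally_votes(votes, weights))
```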
"},{"location":"swarms/structs/board_of_directors/board_of_directors_decision_making/#consensus-building-process","title":"Consensus Building Process","text":""},{"location":"swarms/structs/board_of_directors/board_of_directors_decision_making/#consensus-development-flow","title":"Consensus Development Flow","text":"flowchart TD\n A[Initial Proposal] --> B[Individual Review]\n B --> C[Expert Analysis]\n C --> D[Stakeholder Input]\n D --> E[Proposal Refinement]\n E --> F[Group Discussion]\n F --> G[Alternative Proposals]\n G --> H[Compromise Development]\n H --> I[Consensus Check]\n I -->|No Consensus| J[Mediation Process]\n J --> K[Proposal Modification]\n K --> I\n I -->|Consensus| L[Final Decision]\n\n style A fill:#e3f2fd\n style L fill:#c8e6c9\n style I fill:#fff3e0
"},{"location":"swarms/structs/board_of_directors/board_of_directors_decision_making/#consensus-strategies","title":"Consensus Strategies","text":"graph TD\n A[Consensus Building] --> B[Compromise Solutions]\n A --> C[Expert Mediation]\n A --> D[Alternative Approaches]\n A --> E[Time Extension]\n A --> F[Stakeholder Consultation]\n\n B --> G[Win-Win Solutions]\n B --> H[Partial Agreement]\n B --> I[Phased Implementation]\n\n C --> J[Neutral Mediator]\n C --> K[Expert Opinion]\n C --> L[Third-Party Assessment]\n\n D --> M[Multiple Options]\n D --> N[Pilot Programs]\n D --> O[Experimental Approaches]\n\n E --> P[Additional Analysis]\n E --> Q[Stakeholder Engagement]\n E --> R[Market Research]\n\n F --> S[External Input]\n F --> T[Industry Consultation]\n F --> U[Expert Panels]
"},{"location":"swarms/structs/board_of_directors/board_of_directors_decision_making/#decision-types-and-processes","title":"Decision Types and Processes","text":""},{"location":"swarms/structs/board_of_directors/board_of_directors_decision_making/#strategic-decisions","title":"Strategic Decisions","text":"sequenceDiagram\n participant Chairman\n participant ExecDir\n participant Board\n participant Stakeholders\n\n Chairman->>ExecDir: Request Strategic Analysis\n ExecDir->>Board: Present Strategic Options\n Board->>Stakeholders: Consult Stakeholders\n Stakeholders->>Board: Provide Input\n Board->>ExecDir: Refine Strategic Plan\n ExecDir->>Chairman: Present Final Recommendations\n Chairman->>Board: Call for Strategic Vote\n Board->>Chairman: Cast Weighted Votes\n Chairman->>Board: Announce Strategic Decision
"},{"location":"swarms/structs/board_of_directors/board_of_directors_decision_making/#operational-decisions","title":"Operational Decisions","text":"flowchart TD\n A[Operational Issue] --> B[Vice Chairman Assessment]\n B --> C[Resource Analysis]\n C --> D[Implementation Planning]\n D --> E[Board Review]\n E --> F[Operational Vote]\n F --> G{Approval?}\n G -->|Yes| H[Implementation]\n G -->|No| I[Plan Revision]\n I --> E\n H --> J[Monitoring]\n J --> K[Performance Review]
"},{"location":"swarms/structs/board_of_directors/board_of_directors_decision_making/#financial-decisions","title":"Financial Decisions","text":"sequenceDiagram\n participant Chairman\n participant Treasurer\n participant Board\n participant Financial\n\n Chairman->>Treasurer: Financial Request\n Treasurer->>Financial: Conduct Analysis\n Financial->>Treasurer: Financial Assessment\n Treasurer->>Board: Present Financial Options\n Board->>Treasurer: Request Clarification\n Treasurer->>Board: Provide Additional Data\n Board->>Chairman: Financial Recommendations\n Chairman->>Board: Financial Decision Vote\n Board->>Chairman: Cast Financial Votes\n Chairman->>Treasurer: Approve Financial Plan
"},{"location":"swarms/structs/board_of_directors/board_of_directors_decision_making/#conflict-resolution","title":"Conflict Resolution","text":""},{"location":"swarms/structs/board_of_directors/board_of_directors_decision_making/#dispute-resolution-process","title":"Dispute Resolution Process","text":"graph TD\n A[Conflict Identified] --> B[Initial Assessment]\n B --> C[Stakeholder Identification]\n C --> D[Root Cause Analysis]\n D --> E[Mediation Attempt]\n E --> F{Resolution?}\n F -->|Yes| G[Agreement Reached]\n F -->|No| H[Escalation Process]\n H --> I[Chairman Intervention]\n I --> J[Expert Consultation]\n J --> K[Final Resolution]\n G --> L[Implementation]\n K --> L\n\n style A fill:#ffcdd2\n style G fill:#c8e6c9\n style K fill:#c8e6c9
"},{"location":"swarms/structs/board_of_directors/board_of_directors_decision_making/#mediation-strategies","title":"Mediation Strategies","text":"graph LR\n subgraph \"Mediation Approaches\"\n A[Facilitated Discussion]\n B[Expert Opinion]\n C[Third-Party Mediation]\n D[Stakeholder Consultation]\n end\n\n subgraph \"Resolution Methods\"\n E[Compromise Solutions]\n F[Alternative Approaches]\n G[Phased Implementation]\n H[Pilot Programs]\n end\n\n A --> E\n B --> F\n C --> G\n D --> H
"},{"location":"swarms/structs/board_of_directors/board_of_directors_decision_making/#decision-quality-assurance","title":"Decision Quality Assurance","text":""},{"location":"swarms/structs/board_of_directors/board_of_directors_decision_making/#quality-control-framework","title":"Quality Control Framework","text":"graph TD\n A[Decision Made] --> B[Quality Assessment]\n B --> C[Compliance Check]\n C --> D[Risk Evaluation]\n D --> E[Stakeholder Impact]\n E --> F[Implementation Feasibility]\n F --> G[Final Approval]\n\n subgraph \"Quality Criteria\"\n H[Strategic Alignment]\n I[Financial Viability]\n J[Operational Feasibility]\n K[Risk Acceptability]\n L[Stakeholder Support]\n end\n\n B --> H\n C --> I\n D --> J\n E --> K\n F --> L
"},{"location":"swarms/structs/board_of_directors/board_of_directors_decision_making/#decision-review-process","title":"Decision Review Process","text":"flowchart TD\n A[Decision Implementation] --> B[Progress Monitoring]\n B --> C[Performance Assessment]\n C --> D[Outcome Evaluation]\n D --> E{Success Criteria Met?}\n E -->|Yes| F[Success Confirmed]\n E -->|No| G[Root Cause Analysis]\n G --> H[Corrective Actions]\n H --> I[Plan Adjustment]\n I --> B\n\n F --> J[Lessons Learned]\n J --> K[Process Improvement]\n K --> L[Knowledge Documentation]
"},{"location":"swarms/structs/board_of_directors/board_of_directors_decision_making/#decision-communication","title":"Decision Communication","text":""},{"location":"swarms/structs/board_of_directors/board_of_directors_decision_making/#communication-flow","title":"Communication Flow","text":"sequenceDiagram\n participant Chairman\n participant Secretary\n participant Board\n participant Agents\n participant Stakeholders\n\n Chairman->>Secretary: Record Decision\n Secretary->>Board: Distribute Decision Summary\n Board->>Agents: Communicate Implementation Plan\n Agents->>Board: Confirm Understanding\n Board->>Stakeholders: Announce Decision\n Stakeholders->>Board: Provide Feedback\n Board->>Chairman: Report Stakeholder Response\n Chairman->>Secretary: Update Documentation
"},{"location":"swarms/structs/board_of_directors/board_of_directors_decision_making/#decision-documentation","title":"Decision Documentation","text":"graph TD\n A[Decision Made] --> B[Documentation Creation]\n B --> C[Rationale Recording]\n C --> D[Implementation Plan]\n D --> E[Timeline Definition]\n E --> F[Resource Allocation]\n F --> G[Success Metrics]\n G --> H[Review Schedule]\n\n subgraph \"Documentation Elements\"\n I[Decision Summary]\n J[Voting Results]\n K[Implementation Steps]\n L[Risk Mitigation]\n M[Success Criteria]\n end\n\n B --> I\n C --> J\n D --> K\n E --> L\n F --> M
"},{"location":"swarms/structs/board_of_directors/board_of_directors_decision_making/#performance-metrics","title":"Performance Metrics","text":""},{"location":"swarms/structs/board_of_directors/board_of_directors_decision_making/#decision-quality-metrics","title":"Decision Quality Metrics","text":"graph LR\n subgraph \"Efficiency Metrics\"\n A[Decision Speed]\n B[Consensus Time]\n C[Implementation Time]\n end\n\n subgraph \"Quality Metrics\"\n D[Decision Accuracy]\n E[Stakeholder Satisfaction]\n F[Outcome Success Rate]\n end\n\n subgraph \"Process Metrics\"\n G[Participation Rate]\n H[Conflict Resolution Time]\n I[Documentation Quality]\n end\n\n A --> D\n B --> E\n C --> F\n\n G --> A\n H --> B\n I --> C
"},{"location":"swarms/structs/board_of_directors/board_of_directors_decision_making/#decision-impact-assessment","title":"Decision Impact Assessment","text":"flowchart TD\n A[Decision Implemented] --> B[Short-term Impact]\n B --> C[Medium-term Results]\n C --> D[Long-term Outcomes]\n D --> E[Stakeholder Feedback]\n E --> F[Performance Analysis]\n F --> G[Lessons Learned]\n G --> H[Process Improvement]\n\n subgraph \"Impact Areas\"\n I[Financial Performance]\n J[Operational Efficiency]\n K[Stakeholder Satisfaction]\n L[Strategic Alignment]\n end\n\n B --> I\n C --> J\n D --> K\n E --> L
"},{"location":"swarms/structs/board_of_directors/board_of_directors_decision_making/#best-practices","title":"Best Practices","text":""},{"location":"swarms/structs/board_of_directors/board_of_directors_decision_making/#decision-making-excellence","title":"Decision-Making Excellence","text":"For more information on implementing effective decision-making processes, see the Board of Directors Configuration Guide and Workflow Documentation.
"},{"location":"swarms/structs/board_of_directors/board_of_directors_example/","title":"Board of Directors Example","text":"This example demonstrates how to use the Board of Directors swarm feature for democratic decision-making and collective intelligence in multi-agent systems.
"},{"location":"swarms/structs/board_of_directors/board_of_directors_example/#overview","title":"Overview","text":"The Board of Directors Swarm provides a sophisticated alternative to single-director architectures by implementing collective decision-making through voting, consensus, and role-based leadership. This example shows how to create and configure a board for strategic decision-making scenarios.
"},{"location":"swarms/structs/board_of_directors/board_of_directors_example/#basic-setup","title":"Basic Setup","text":""},{"location":"swarms/structs/board_of_directors/board_of_directors_example/#1-import-required-modules","title":"1. Import Required Modules","text":"from swarms import Agent\nfrom swarms.structs.board_of_directors_swarm import (\n BoardOfDirectorsSwarm,\n BoardMember,\n BoardMemberRole\n)\nfrom swarms.config.board_config import (\n enable_board_feature,\n set_decision_threshold,\n get_default_board_template\n)\n
"},{"location":"swarms/structs/board_of_directors/board_of_directors_example/#2-enable-board-feature","title":"2. Enable Board Feature","text":"# Enable the Board of Directors feature globally\nenable_board_feature()\n\n# Set global decision threshold\nset_decision_threshold(0.7) # 70% majority required\n
"},{"location":"swarms/structs/board_of_directors/board_of_directors_example/#3-create-board-members","title":"3. Create Board Members","text":"# Create Chairman\nchairman = Agent(\n agent_name=\"Chairman\",\n agent_description=\"Chairman of the Board responsible for leading meetings and making final decisions\",\n model_name=\"gpt-4o-mini\",\n system_prompt=\"\"\"You are the Chairman of the Board. Your responsibilities include:\n1. Leading board meetings and discussions\n2. Facilitating consensus among board members\n3. Making final decisions when consensus cannot be reached\n4. Ensuring all board members have an opportunity to contribute\n5. Maintaining focus on the organization's goals and objectives\n\nYou should be diplomatic, fair, and decisive in your leadership.\"\"\"\n)\n\n# Create Vice Chairman\nvice_chairman = Agent(\n agent_name=\"Vice-Chairman\",\n agent_description=\"Vice Chairman who supports the Chairman and coordinates operations\",\n model_name=\"gpt-4o-mini\",\n system_prompt=\"\"\"You are the Vice Chairman of the Board. Your responsibilities include:\n1. Supporting the Chairman in leading board meetings\n2. Coordinating operational activities and implementation\n3. Ensuring effective communication between board members\n4. Managing day-to-day board operations\n5. Stepping in when the Chairman is unavailable\n\nYou should be collaborative, organized, and supportive of the Chairman's leadership.\"\"\"\n)\n\n# Create Secretary\nsecretary = Agent(\n agent_name=\"Secretary\",\n agent_description=\"Secretary responsible for documentation and record keeping\",\n model_name=\"gpt-4o-mini\",\n system_prompt=\"\"\"You are the Secretary of the Board. Your responsibilities include:\n1. Documenting all board meetings and decisions\n2. Maintaining accurate records and meeting minutes\n3. Ensuring proper communication and notifications\n4. Managing board documentation and archives\n5. 
Supporting compliance and governance requirements\n\nYou should be detail-oriented, organized, and thorough in your documentation.\"\"\"\n)\n\n# Create Treasurer\ntreasurer = Agent(\n agent_name=\"Treasurer\",\n agent_description=\"Treasurer responsible for financial oversight and resource management\",\n model_name=\"gpt-4o-mini\",\n system_prompt=\"\"\"You are the Treasurer of the Board. Your responsibilities include:\n1. Overseeing financial planning and budgeting\n2. Monitoring resource allocation and utilization\n3. Ensuring financial compliance and accountability\n4. Providing financial insights for decision-making\n5. Managing financial risk and controls\n\nYou should be financially astute, analytical, and focused on value creation.\"\"\"\n)\n\n# Create Executive Director\nexecutive_director = Agent(\n agent_name=\"Executive-Director\",\n agent_description=\"Executive Director responsible for strategic planning and operational oversight\",\n model_name=\"gpt-4o-mini\",\n system_prompt=\"\"\"You are the Executive Director of the Board. Your responsibilities include:\n1. Developing and implementing strategic plans\n2. Overseeing operational performance and efficiency\n3. Leading innovation and continuous improvement\n4. Managing stakeholder relationships\n5. Ensuring organizational effectiveness\n\nYou should be strategic, results-oriented, and focused on organizational success.\"\"\"\n)\n
"},{"location":"swarms/structs/board_of_directors/board_of_directors_example/#4-create-boardmember-objects","title":"4. Create BoardMember Objects","text":"# Create BoardMember objects with roles, voting weights, and expertise areas\nboard_members = [\n BoardMember(\n agent=chairman,\n role=BoardMemberRole.CHAIRMAN,\n voting_weight=1.5,\n expertise_areas=[\"leadership\", \"strategy\", \"governance\", \"decision_making\"]\n ),\n BoardMember(\n agent=vice_chairman,\n role=BoardMemberRole.VICE_CHAIRMAN,\n voting_weight=1.2,\n expertise_areas=[\"operations\", \"coordination\", \"communication\", \"implementation\"]\n ),\n BoardMember(\n agent=secretary,\n role=BoardMemberRole.SECRETARY,\n voting_weight=1.0,\n expertise_areas=[\"documentation\", \"compliance\", \"record_keeping\", \"communication\"]\n ),\n BoardMember(\n agent=treasurer,\n role=BoardMemberRole.TREASURER,\n voting_weight=1.0,\n expertise_areas=[\"finance\", \"budgeting\", \"risk_management\", \"resource_allocation\"]\n ),\n BoardMember(\n agent=executive_director,\n role=BoardMemberRole.EXECUTIVE_DIRECTOR,\n voting_weight=1.5,\n expertise_areas=[\"strategy\", \"operations\", \"innovation\", \"performance_management\"]\n )\n]\n
"},{"location":"swarms/structs/board_of_directors/board_of_directors_example/#5-create-specialized-worker-agents","title":"5. Create Specialized Worker Agents","text":"# Create specialized worker agents for different types of analysis\nresearch_agent = Agent(\n agent_name=\"Research-Specialist\",\n agent_description=\"Expert in market research, data analysis, and trend identification\",\n model_name=\"gpt-4o\",\n system_prompt=\"\"\"You are a Research Specialist. Your responsibilities include:\n1. Conducting comprehensive market research and analysis\n2. Identifying trends, opportunities, and risks\n3. Gathering and analyzing relevant data\n4. Providing evidence-based insights and recommendations\n5. Supporting strategic decision-making with research findings\n\nYou should be thorough, analytical, and objective in your research.\"\"\"\n)\n\nfinancial_agent = Agent(\n agent_name=\"Financial-Analyst\",\n agent_description=\"Specialist in financial analysis, valuation, and investment assessment\",\n model_name=\"gpt-4o\",\n system_prompt=\"\"\"You are a Financial Analyst. Your responsibilities include:\n1. Conducting financial analysis and valuation\n2. Assessing investment opportunities and risks\n3. Analyzing financial performance and metrics\n4. Providing financial insights and recommendations\n5. Supporting financial decision-making\n\nYou should be financially astute, analytical, and focused on value creation.\"\"\"\n)\n\ntechnical_agent = Agent(\n agent_name=\"Technical-Specialist\",\n agent_description=\"Expert in technical analysis, feasibility assessment, and implementation planning\",\n model_name=\"gpt-4o\",\n system_prompt=\"\"\"You are a Technical Specialist. Your responsibilities include:\n1. Conducting technical feasibility analysis\n2. Assessing implementation requirements and challenges\n3. Providing technical insights and recommendations\n4. Supporting technical decision-making\n5. 
Planning and coordinating technical implementations\n\nYou should be technically proficient, practical, and solution-oriented.\"\"\"\n)\n\nstrategy_agent = Agent(\n agent_name=\"Strategy-Specialist\",\n agent_description=\"Expert in strategic planning, competitive analysis, and business development\",\n model_name=\"gpt-4o\",\n system_prompt=\"\"\"You are a Strategy Specialist. Your responsibilities include:\n1. Developing strategic plans and initiatives\n2. Conducting competitive analysis and market positioning\n3. Identifying strategic opportunities and threats\n4. Providing strategic insights and recommendations\n5. Supporting strategic decision-making\n\nYou should be strategic, forward-thinking, and focused on long-term success.\"\"\"\n)\n
"},{"location":"swarms/structs/board_of_directors/board_of_directors_example/#6-initialize-the-board-of-directors-swarm","title":"6. Initialize the Board of Directors Swarm","text":"# Initialize the Board of Directors swarm with comprehensive configuration\nboard_swarm = BoardOfDirectorsSwarm(\n name=\"Executive_Board_Swarm\",\n description=\"Executive board with specialized roles for strategic decision-making and collective intelligence\",\n board_members=board_members,\n agents=[research_agent, financial_agent, technical_agent, strategy_agent],\n max_loops=3, # Allow multiple iterations for complex analysis\n verbose=True, # Enable detailed logging\n decision_threshold=0.7, # 70% consensus required\n enable_voting=True, # Enable voting mechanisms\n enable_consensus=True, # Enable consensus building\n max_workers=4, # Maximum parallel workers\n output_type=\"dict\" # Return results as dictionary\n)\n
"},{"location":"swarms/structs/board_of_directors/board_of_directors_example/#advanced-configuration","title":"Advanced Configuration","text":""},{"location":"swarms/structs/board_of_directors/board_of_directors_example/#custom-board-templates","title":"Custom Board Templates","text":"You can use pre-configured board templates for common use cases:
# Get a financial analysis board template\nfinancial_board_template = get_default_board_template(\"financial_analysis\")\n\n# Get a strategic planning board template\nstrategic_board_template = get_default_board_template(\"strategic_planning\")\n\n# Get a technology assessment board template\ntech_board_template = get_default_board_template(\"technology_assessment\")\n
"},{"location":"swarms/structs/board_of_directors/board_of_directors_example/#dynamic-role-assignment","title":"Dynamic Role Assignment","text":"Automatically assign roles based on task requirements:
# Board members are automatically assigned roles based on expertise\nboard_swarm = BoardOfDirectorsSwarm(\n board_members=board_members,\n agents=agents,\n auto_assign_roles=True,\n role_mapping={\n \"financial_analysis\": [\"Treasurer\", \"Financial_Member\"],\n \"strategic_planning\": [\"Chairman\", \"Executive_Director\"],\n \"technical_assessment\": [\"Technical_Member\", \"Executive_Director\"],\n \"research_analysis\": [\"Research_Member\", \"Secretary\"]\n }\n)\n
"},{"location":"swarms/structs/board_of_directors/board_of_directors_example/#consensus-optimization","title":"Consensus Optimization","text":"Configure advanced consensus-building mechanisms:
# Enable advanced consensus features\nboard_swarm = BoardOfDirectorsSwarm(\n board_members=board_members,\n agents=agents,\n enable_consensus=True,\n consensus_timeout=300, # 5 minutes timeout\n min_participation_rate=0.8, # 80% minimum participation\n auto_fallback_to_chairman=True, # Chairman can make final decisions\n consensus_rounds=3 # Maximum consensus building rounds\n)\n
"},{"location":"swarms/structs/board_of_directors/board_of_directors_example/#example-use-cases","title":"Example Use Cases","text":""},{"location":"swarms/structs/board_of_directors/board_of_directors_example/#1-strategic-investment-analysis","title":"1. Strategic Investment Analysis","text":"# Execute a complex strategic investment analysis\ninvestment_task = \"\"\"\nAnalyze the strategic investment opportunity for a $50M Series B funding round in a \nfintech startup. Consider market conditions, competitive landscape, financial projections, \ntechnical feasibility, and strategic fit. Provide comprehensive recommendations including:\n1. Investment recommendation (proceed/hold/decline)\n2. Valuation analysis and suggested terms\n3. Risk assessment and mitigation strategies\n4. Strategic value and synergies\n5. Implementation timeline and milestones\n\"\"\"\n\nresult = board_swarm.run(task=investment_task)\nprint(\"Investment Analysis Results:\")\nprint(json.dumps(result, indent=2))\n
"},{"location":"swarms/structs/board_of_directors/board_of_directors_example/#2-technology-strategy-development","title":"2. Technology Strategy Development","text":"# Develop a comprehensive technology strategy\ntech_strategy_task = \"\"\"\nDevelop a comprehensive technology strategy for a mid-size manufacturing company \nlooking to digitize operations and implement Industry 4.0 technologies. Consider:\n1. Current technology assessment and gaps\n2. Technology roadmap and implementation plan\n3. Investment requirements and ROI analysis\n4. Risk assessment and mitigation strategies\n5. Change management and training requirements\n6. Competitive positioning and market advantages\n\"\"\"\n\nresult = board_swarm.run(task=tech_strategy_task)\nprint(\"Technology Strategy Results:\")\nprint(json.dumps(result, indent=2))\n
"},{"location":"swarms/structs/board_of_directors/board_of_directors_example/#3-market-entry-strategy","title":"3. Market Entry Strategy","text":"# Develop a market entry strategy for a new product\nmarket_entry_task = \"\"\"\nDevelop a comprehensive market entry strategy for a new AI-powered productivity \nsoftware targeting the enterprise market. Consider:\n1. Market analysis and opportunity assessment\n2. Competitive landscape and positioning\n3. Go-to-market strategy and channels\n4. Pricing strategy and revenue model\n5. Resource requirements and investment needs\n6. Risk assessment and mitigation strategies\n7. Success metrics and KPIs\n\"\"\"\n\nresult = board_swarm.run(task=market_entry_task)\nprint(\"Market Entry Strategy Results:\")\nprint(json.dumps(result, indent=2))\n
"},{"location":"swarms/structs/board_of_directors/board_of_directors_example/#monitoring-and-analysis","title":"Monitoring and Analysis","text":""},{"location":"swarms/structs/board_of_directors/board_of_directors_example/#board-performance-metrics","title":"Board Performance Metrics","text":"# Get board performance metrics\nboard_summary = board_swarm.get_board_summary()\nprint(\"Board Summary:\")\nprint(f\"Board Name: {board_summary['board_name']}\")\nprint(f\"Total Board Members: {board_summary['total_members']}\")\nprint(f\"Total Worker Agents: {board_summary['total_agents']}\")\nprint(f\"Decision Threshold: {board_summary['decision_threshold']}\")\nprint(f\"Max Loops: {board_summary['max_loops']}\")\n\n# Display board member details\nprint(\"\\nBoard Members:\")\nfor member in board_summary['members']:\n print(f\"- {member['name']} (Role: {member['role']}, Weight: {member['voting_weight']})\")\n print(f\" Expertise: {', '.join(member['expertise_areas'])}\")\n\n# Display worker agent details\nprint(\"\\nWorker Agents:\")\nfor agent in board_summary['agents']:\n print(f\"- {agent['name']}: {agent['description']}\")\n
"},{"location":"swarms/structs/board_of_directors/board_of_directors_example/#decision-analysis","title":"Decision Analysis","text":"# Analyze decision-making patterns\nif isinstance(result, dict):  # output_type=\"dict\" yields a dictionary result\n conversation_history = result.get('conversation_history', [])\n\n print(\"\\nDecision Analysis:\")\n print(f\"Total Messages: {len(conversation_history)}\")\n\n # Count board member contributions\n board_contributions = {}\n for msg in conversation_history:\n if 'Board' in msg.get('role', ''):\n member_name = msg.get('agent_name', 'Unknown')\n board_contributions[member_name] = board_contributions.get(member_name, 0) + 1\n\n print(\"Board Member Contributions:\")\n for member, count in board_contributions.items():\n print(f\"- {member}: {count} contributions\")\n\n # Count agent executions\n agent_executions = {}\n for msg in conversation_history:\n if any(agent.agent_name in msg.get('role', '') for agent in [research_agent, financial_agent, technical_agent, strategy_agent]):\n agent_name = msg.get('agent_name', 'Unknown')\n agent_executions[agent_name] = agent_executions.get(agent_name, 0) + 1\n\n print(\"\\nAgent Executions:\")\n for agent, count in agent_executions.items():\n print(f\"- {agent}: {count} executions\")\n
"},{"location":"swarms/structs/board_of_directors/board_of_directors_example/#best-practices","title":"Best Practices","text":""},{"location":"swarms/structs/board_of_directors/board_of_directors_example/#1-role-definition","title":"1. Role Definition","text":"# Enable detailed logging\nimport logging\nlogging.basicConfig(level=logging.DEBUG)\n\n# Check board configuration\nprint(board_swarm.get_board_summary())\n\n# Test individual components\nfor member in board_members:\n print(f\"Testing {member.agent.agent_name}...\")\n response = member.agent.run(\"Test message\")\n print(f\"Response: {response[:100]}...\")\n
"},{"location":"swarms/structs/board_of_directors/board_of_directors_example/#conclusion","title":"Conclusion","text":"This example demonstrates the comprehensive capabilities of the Board of Directors Swarm for democratic decision-making and collective intelligence. The feature provides a sophisticated alternative to single-director architectures, enabling more robust and well-considered decisions through voting, consensus, and role-based leadership.
For more information, see the Board of Directors Documentation and Configuration Guide.
"},{"location":"swarms/structs/board_of_directors/board_of_directors_roles/","title":"Board of Directors Roles","text":"The Board of Directors system implements a hierarchical structure with clearly defined roles, responsibilities, and voting weights. Each role is designed to contribute specific expertise and authority to the decision-making process.
"},{"location":"swarms/structs/board_of_directors/board_of_directors_roles/#role-hierarchy","title":"Role Hierarchy","text":"graph TD\n A[Chairman<br/>Voting Weight: 1.5<br/>Final Authority] --> B[Vice Chairman<br/>Voting Weight: 1.2<br/>Operational Support]\n A --> C[Executive Director<br/>Voting Weight: 1.5<br/>Strategic Planning]\n A --> D[Secretary<br/>Voting Weight: 1.0<br/>Documentation]\n A --> E[Treasurer<br/>Voting Weight: 1.0<br/>Financial Oversight]\n A --> F[Member<br/>Voting Weight: 1.0<br/>Expertise Contribution]\n\n B --> G[Operational Coordination]\n C --> H[Strategic Initiatives]\n D --> I[Record Keeping]\n E --> J[Resource Management]\n F --> K[Specialized Input]\n\n style A fill:#ffeb3b\n style B fill:#2196f3\n style C fill:#4caf50\n style D fill:#ff9800\n style E fill:#9c27b0\n style F fill:#607d8b
"},{"location":"swarms/structs/board_of_directors/board_of_directors_roles/#chairman-role","title":"Chairman Role","text":""},{"location":"swarms/structs/board_of_directors/board_of_directors_roles/#primary-responsibilities","title":"Primary Responsibilities","text":"graph TD\n A[Chairman] --> B[Meeting Leadership]\n A --> C[Final Decision Authority]\n A --> D[Consensus Facilitation]\n A --> E[Strategic Direction]\n A --> F[Stakeholder Communication]\n\n B --> G[Agenda Setting]\n B --> H[Discussion Management]\n B --> I[Time Management]\n\n C --> J[Approval Authority]\n C --> K[Override Capability]\n C --> L[Final Sign-off]\n\n D --> M[Conflict Resolution]\n D --> N[Mediation]\n D --> O[Compromise Facilitation]\n\n E --> P[Vision Setting]\n E --> Q[Goal Definition]\n E --> R[Priority Establishment]\n\n F --> S[External Relations]\n F --> T[Stakeholder Updates]\n F --> U[Public Communication]
"},{"location":"swarms/structs/board_of_directors/board_of_directors_roles/#chairmans-decision-flow","title":"Chairman's Decision Flow","text":"flowchart TD\n A[Task Received] --> B[Assess Complexity]\n B --> C[Determine Board Composition]\n C --> D[Set Meeting Agenda]\n D --> E[Lead Discussion]\n E --> F[Facilitate Consensus]\n F --> G{Consensus Reached?}\n G -->|Yes| H[Approve Decision]\n G -->|No| I[Exercise Authority]\n I --> J[Make Final Decision]\n H --> K[Oversee Execution]\n J --> K\n K --> L[Monitor Progress]\n L --> M[Review Results]\n M --> N[Final Approval]
"},{"location":"swarms/structs/board_of_directors/board_of_directors_roles/#key-competencies","title":"Key Competencies","text":"graph TD\n A[Vice Chairman] --> B[Operational Support]\n A --> C[Chairman Backup]\n A --> D[Coordination]\n A --> E[Implementation Oversight]\n A --> F[Team Management]\n\n B --> G[Process Optimization]\n B --> H[Efficiency Improvement]\n B --> I[Resource Coordination]\n\n C --> J[Acting Chairman]\n C --> K[Decision Support]\n C --> L[Authority Delegation]\n\n D --> M[Cross-functional Coordination]\n D --> N[Communication Management]\n D --> O[Timeline Management]\n\n E --> P[Execution Monitoring]\n E --> Q[Quality Assurance]\n E --> R[Performance Tracking]\n\n F --> S[Team Development]\n F --> T[Motivation]\n F --> U[Conflict Prevention]
"},{"location":"swarms/structs/board_of_directors/board_of_directors_roles/#vice-chairmans-operational-flow","title":"Vice Chairman's Operational Flow","text":"sequenceDiagram\n participant Chairman\n participant ViceChair\n participant Board\n participant Agents\n\n Chairman->>ViceChair: Delegate Operational Tasks\n ViceChair->>Board: Coordinate Member Activities\n ViceChair->>Agents: Oversee Agent Performance\n Agents->>ViceChair: Report Progress\n ViceChair->>Chairman: Provide Status Updates\n Chairman->>ViceChair: Request Specific Actions\n ViceChair->>Board: Implement Chairman's Directives\n ViceChair->>Agents: Adjust Execution Plans
"},{"location":"swarms/structs/board_of_directors/board_of_directors_roles/#executive-director-role","title":"Executive Director Role","text":""},{"location":"swarms/structs/board_of_directors/board_of_directors_roles/#strategic-responsibilities","title":"Strategic Responsibilities","text":"graph TD\n A[Executive Director] --> B[Strategic Planning]\n A --> C[Operational Oversight]\n A --> D[Performance Management]\n A --> E[Innovation Leadership]\n A --> F[Risk Management]\n\n B --> G[Long-term Vision]\n B --> H[Goal Setting]\n B --> I[Strategy Development]\n\n C --> J[Process Optimization]\n C --> K[Quality Management]\n C --> L[Efficiency Improvement]\n\n D --> M[KPI Definition]\n D --> N[Performance Monitoring]\n D --> O[Improvement Initiatives]\n\n E --> P[Technology Adoption]\n E --> Q[Process Innovation]\n E --> R[Best Practices]\n\n F --> S[Risk Assessment]\n F --> T[Mitigation Planning]\n F --> U[Contingency Preparation]
"},{"location":"swarms/structs/board_of_directors/board_of_directors_roles/#strategic-decision-framework","title":"Strategic Decision Framework","text":"flowchart TD\n A[Strategic Analysis] --> B[Market Assessment]\n B --> C[Competitive Analysis]\n C --> D[Internal Capability Review]\n D --> E[Opportunity Identification]\n E --> F[Risk Evaluation]\n F --> G[Strategic Options]\n G --> H[Recommendation Development]\n H --> I[Board Presentation]\n I --> J[Implementation Planning]\n J --> K[Execution Oversight]
"},{"location":"swarms/structs/board_of_directors/board_of_directors_roles/#secretary-role","title":"Secretary Role","text":""},{"location":"swarms/structs/board_of_directors/board_of_directors_roles/#documentation-responsibilities","title":"Documentation Responsibilities","text":"graph TD\n A[Secretary] --> B[Meeting Documentation]\n A --> C[Record Keeping]\n A --> D[Communication Management]\n A --> E[Compliance Monitoring]\n A --> F[Information Management]\n\n B --> G[Agenda Preparation]\n B --> H[Minutes Recording]\n B --> I[Action Item Tracking]\n\n C --> J[Document Storage]\n C --> K[Version Control]\n C --> L[Access Management]\n\n D --> M[Internal Communications]\n D --> N[External Correspondence]\n D --> O[Notification Systems]\n\n E --> P[Policy Compliance]\n E --> Q[Regulatory Requirements]\n E --> R[Audit Preparation]\n\n F --> S[Information Architecture]\n F --> T[Knowledge Management]\n F --> U[Data Governance]
"},{"location":"swarms/structs/board_of_directors/board_of_directors_roles/#documentation-workflow","title":"Documentation Workflow","text":"sequenceDiagram\n participant Chairman\n participant Secretary\n participant Board\n participant System\n\n Chairman->>Secretary: Request Meeting Setup\n Secretary->>System: Prepare Meeting Materials\n Secretary->>Board: Distribute Agenda\n Board->>Secretary: Submit Pre-meeting Materials\n Secretary->>Chairman: Compile Meeting Package\n Chairman->>Board: Conduct Meeting\n Secretary->>System: Record Meeting Minutes\n Secretary->>Board: Distribute Minutes\n Board->>Secretary: Confirm Action Items\n Secretary->>System: Update Tracking System
"},{"location":"swarms/structs/board_of_directors/board_of_directors_roles/#treasurer-role","title":"Treasurer Role","text":""},{"location":"swarms/structs/board_of_directors/board_of_directors_roles/#financial-responsibilities","title":"Financial Responsibilities","text":"graph TD\n A[Treasurer] --> B[Financial Oversight]\n A --> C[Resource Allocation]\n A --> D[Budget Management]\n A --> E[Financial Reporting]\n A --> F[Risk Assessment]\n\n B --> G[Financial Planning]\n B --> H[Cost Control]\n B --> I[Investment Decisions]\n\n C --> J[Resource Optimization]\n C --> K[Capacity Planning]\n C --> L[Efficiency Analysis]\n\n D --> M[Budget Development]\n D --> N[Expense Monitoring]\n D --> O[Variance Analysis]\n\n E --> P[Financial Statements]\n E --> Q[Performance Metrics]\n E --> R[Stakeholder Reports]\n\n F --> S[Financial Risk]\n F --> T[Market Risk]\n F --> U[Operational Risk]
"},{"location":"swarms/structs/board_of_directors/board_of_directors_roles/#financial-decision-process","title":"Financial Decision Process","text":"flowchart TD\n A[Financial Request] --> B[Resource Assessment]\n B --> C[Budget Analysis]\n C --> D[Cost-Benefit Analysis]\n D --> E[Risk Evaluation]\n E --> F[Alternative Analysis]\n F --> G[Recommendation]\n G --> H[Board Presentation]\n H --> I[Approval Decision]\n I --> J[Implementation]\n J --> K[Monitoring]\n K --> L[Performance Review]
"},{"location":"swarms/structs/board_of_directors/board_of_directors_roles/#member-role","title":"Member Role","text":""},{"location":"swarms/structs/board_of_directors/board_of_directors_roles/#general-responsibilities","title":"General Responsibilities","text":"graph TD\n A[Member] --> B[Expertise Contribution]\n A --> C[Participation]\n A --> D[Specialized Input]\n A --> E[Committee Work]\n A --> F[Stakeholder Representation]\n\n B --> G[Domain Knowledge]\n B --> H[Technical Expertise]\n B --> I[Industry Experience]\n\n C --> J[Active Engagement]\n C --> K[Voting Participation]\n C --> L[Discussion Contribution]\n\n D --> M[Specialized Analysis]\n D --> N[Expert Opinions]\n D --> O[Technical Recommendations]\n\n E --> P[Committee Leadership]\n E --> Q[Task Force Participation]\n E --> R[Working Group Support]\n\n F --> S[Stakeholder Advocacy]\n F --> T[Interest Representation]\n F --> U[Community Engagement]
"},{"location":"swarms/structs/board_of_directors/board_of_directors_roles/#member-contribution-framework","title":"Member Contribution Framework","text":"graph LR\n subgraph \"Expertise Areas\"\n A[Technical Expertise]\n B[Industry Knowledge]\n C[Operational Experience]\n D[Strategic Insight]\n end\n\n subgraph \"Contribution Types\"\n E[Analysis & Research]\n F[Recommendations]\n G[Risk Assessment]\n H[Implementation Support]\n end\n\n subgraph \"Participation Levels\"\n I[Active Voting]\n J[Discussion Leadership]\n K[Committee Work]\n L[Mentoring]\n end\n\n A --> E\n B --> F\n C --> G\n D --> H\n\n E --> I\n F --> J\n G --> K\n H --> L
"},{"location":"swarms/structs/board_of_directors/board_of_directors_roles/#role-interactions","title":"Role Interactions","text":""},{"location":"swarms/structs/board_of_directors/board_of_directors_roles/#collaborative-decision-making","title":"Collaborative Decision Making","text":"sequenceDiagram\n participant Chairman\n participant ViceChair\n participant ExecDir\n participant Secretary\n participant Treasurer\n participant Member\n\n Chairman->>ViceChair: Delegate Operational Tasks\n Chairman->>ExecDir: Request Strategic Analysis\n Chairman->>Secretary: Schedule Meeting\n Chairman->>Treasurer: Request Financial Assessment\n Chairman->>Member: Request Expert Input\n\n ViceChair->>ExecDir: Coordinate Implementation\n ExecDir->>Treasurer: Resource Requirements\n Treasurer->>Secretary: Financial Documentation\n Secretary->>Member: Meeting Materials\n Member->>Chairman: Expert Recommendations\n\n Chairman->>ViceChair: Final Decision\n ViceChair->>ExecDir: Implementation Plan\n ExecDir->>Treasurer: Resource Allocation\n Treasurer->>Secretary: Financial Records\n Secretary->>Member: Action Items
"},{"location":"swarms/structs/board_of_directors/board_of_directors_roles/#voting-power-distribution","title":"Voting Power Distribution","text":"pie title Voting Weight Distribution\n \"Chairman\" : 1.5\n \"Executive Director\" : 1.5\n \"Vice Chairman\" : 1.2\n \"Secretary\" : 1.0\n \"Treasurer\" : 1.0\n \"Member\" : 1.0
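The arithmetic behind these voting weights can be sketched in a few lines. This is a minimal illustration of weighted-majority voting, not the framework's internal implementation; it assumes the documented default decision_threshold of 0.6 and the role weights shown above:

```python
# Minimal sketch of weighted-majority voting using the weights shown above.
# Illustrative only: not BoardOfDirectorsSwarm internals.

VOTING_WEIGHTS = {
    "Chairman": 1.5,
    "Executive Director": 1.5,
    "Vice Chairman": 1.2,
    "Secretary": 1.0,
    "Treasurer": 1.0,
    "Member": 1.0,
}

def weighted_decision(votes: dict, threshold: float = 0.6) -> bool:
    """Return True when the weighted share of 'yes' votes meets the threshold."""
    total = sum(VOTING_WEIGHTS[role] for role in votes)
    yes = sum(VOTING_WEIGHTS[role] for role, approve in votes.items() if approve)
    return yes / total >= threshold

# Chairman, Executive Director, and Vice Chairman approve: 4.2 / 7.2 is about 0.58,
# which falls just short of a 0.6 threshold.
votes = {role: role in {"Chairman", "Executive Director", "Vice Chairman"}
         for role in VOTING_WEIGHTS}
print(weighted_decision(votes))  # prints False
```

Note how the higher Chairman and Executive Director weights let a small leadership bloc come close to, but not automatically reach, the threshold on their own.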
"},{"location":"swarms/structs/board_of_directors/board_of_directors_roles/#best-practices-for-role-management","title":"Best Practices for Role Management","text":""},{"location":"swarms/structs/board_of_directors/board_of_directors_roles/#role-clarity","title":"Role Clarity","text":"For more information on implementing specific roles, see the Board of Directors Configuration Guide and Workflow Documentation.
"},{"location":"swarms/structs/board_of_directors/board_of_directors_swarm/","title":"BoardOfDirectorsSwarm
","text":"The BoardOfDirectorsSwarm
is a sophisticated multi-agent orchestration system that implements a collective decision-making approach as an alternative to the single Director pattern. It consists of a board of directors that convenes to discuss, vote, and reach consensus on task distribution and execution strategies.
The Board of Directors Swarm follows a democratic workflow pattern: the board convenes to discuss the task, votes to reach consensus, creates a plan and orders, distributes them to the worker agents, and then evaluates the reported results, iterating for up to the configured number of feedback loops (max_loops
)graph TD\n A[User Task] --> B[Board of Directors]\n B --> C[Board Meeting & Discussion]\n C --> D[Voting & Consensus]\n D --> E[Create Plan & Orders]\n E --> F[Distribute to Agents]\n F --> G[Agent 1]\n F --> H[Agent 2]\n F --> I[Agent N]\n G --> J[Execute Task]\n H --> J\n I --> J\n J --> K[Report Results]\n K --> L[Board Evaluation]\n L --> M{More Loops?}\n M -->|Yes| C\n M -->|No| N[Final Output]
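The loop in the diagram above reduces to plain control flow. The sketch below is an illustrative skeleton only: the plan and execution steps are hypothetical stand-ins for the board meeting and agent-execution phases, not real Swarms APIs:

```python
# Illustrative skeleton of the feedback loop in the diagram above.
# The plan/execute steps are hypothetical placeholders, not Swarms APIs.

def run(task: str, max_loops: int = 3, approved=lambda results: True) -> list:
    history = []
    for loop in range(max_loops):
        plan = f"plan for '{task}' (loop {loop + 1})"  # board meeting: discuss, vote, plan
        results = [f"executed: {plan}"]                # agents execute the distributed orders
        history.extend(results)                        # board collects the reported results
        if approved(results):                          # board evaluation: stop when satisfied
            break                                      # otherwise loop again, up to max_loops
    return history

print(run("market analysis", max_loops=2))
```

The `approved` callback stands in for the board's review step; in the real swarm that review is itself a weighted board decision.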
"},{"location":"swarms/structs/board_of_directors/board_of_directors_swarm/#detailed-decision-making-process","title":"Detailed Decision-Making Process","text":"flowchart TD\n A[Task Received] --> B[Board Convenes]\n B --> C[Chairman Opens Meeting]\n C --> D[Task Analysis Phase]\n D --> E[Expertise Assignment]\n E --> F[Individual Member Analysis]\n F --> G[Discussion & Debate]\n G --> H[Proposal Generation]\n H --> I[Voting Process]\n I --> J{Consensus Reached?}\n J -->|No| K[Reconciliation Phase]\n K --> G\n J -->|Yes| L[Plan Finalization]\n L --> M[Order Creation]\n M --> N[Agent Assignment]\n N --> O[Execution Phase]\n O --> P[Result Collection]\n P --> Q[Board Review]\n Q --> R{Approval?}\n R -->|No| S[Refinement Loop]\n S --> G\n R -->|Yes| T[Final Delivery]
"},{"location":"swarms/structs/board_of_directors/board_of_directors_swarm/#board-member-interaction-flow","title":"Board Member Interaction Flow","text":"sequenceDiagram\n participant User\n participant Chairman\n participant ViceChair\n participant Secretary\n participant Treasurer\n participant ExecDir\n participant Agents\n\n User->>Chairman: Submit Task\n Chairman->>ViceChair: Notify Board Meeting\n Chairman->>Secretary: Request Meeting Setup\n Chairman->>Treasurer: Resource Assessment\n Chairman->>ExecDir: Strategic Planning\n\n Note over Chairman,ExecDir: Board Discussion Phase\n\n Chairman->>ViceChair: Lead Discussion\n ViceChair->>Secretary: Document Decisions\n Secretary->>Treasurer: Budget Considerations\n Treasurer->>ExecDir: Resource Allocation\n ExecDir->>Chairman: Strategic Recommendations\n\n Note over Chairman,ExecDir: Voting & Consensus\n\n Chairman->>ViceChair: Call for Vote\n ViceChair->>Secretary: Record Votes\n Secretary->>Treasurer: Financial Approval\n Treasurer->>ExecDir: Resource Approval\n ExecDir->>Chairman: Final Decision\n\n Note over Chairman,Agents: Execution Phase\n\n Chairman->>Agents: Distribute Orders\n Agents->>Chairman: Execute Tasks\n Agents->>ViceChair: Progress Reports\n Agents->>Secretary: Documentation\n Agents->>Treasurer: Resource Usage\n Agents->>ExecDir: Strategic Updates\n\n Note over Chairman,ExecDir: Review & Feedback\n\n Chairman->>User: Deliver Results
"},{"location":"swarms/structs/board_of_directors/board_of_directors_swarm/#agent-execution-and-feedback-loop","title":"Agent Execution and Feedback Loop","text":"graph LR\n subgraph \"Board of Directors\"\n A[Chairman]\n B[Vice Chairman]\n C[Secretary]\n D[Treasurer]\n E[Executive Director]\n end\n\n subgraph \"Worker Agents\"\n F[Research Agent]\n G[Analysis Agent]\n H[Technical Agent]\n I[Financial Agent]\n J[Strategy Agent]\n end\n\n subgraph \"Execution Flow\"\n K[Task Distribution]\n L[Parallel Execution]\n M[Result Aggregation]\n N[Quality Assessment]\n O[Feedback Loop]\n end\n\n A --> K\n B --> K\n C --> K\n D --> K\n E --> K\n\n K --> L\n L --> F\n L --> G\n L --> H\n L --> I\n L --> J\n\n F --> M\n G --> M\n H --> M\n I --> M\n J --> M\n\n M --> N\n N --> O\n O --> K
"},{"location":"swarms/structs/board_of_directors/board_of_directors_swarm/#key-features","title":"Key Features","text":"The Board of Directors supports various roles with different responsibilities and voting weights:
Role Description Voting Weight ResponsibilitiesCHAIRMAN
Primary leader responsible for board meetings and final decisions 1.5 Leading meetings, facilitating consensus, making final decisions VICE_CHAIRMAN
Secondary leader who supports the chairman 1.2 Supporting chairman, coordinating operations SECRETARY
Responsible for documentation and meeting minutes 1.0 Documenting meetings, maintaining records TREASURER
Manages financial aspects and resource allocation 1.0 Financial oversight, resource management EXECUTIVE_DIRECTOR
Executive-level board member with operational authority 1.5 Strategic planning, operational oversight MEMBER
General board member with specific expertise 1.0 Contributing expertise, participating in decisions"},{"location":"swarms/structs/board_of_directors/board_of_directors_swarm/#role-hierarchy-and-decision-flow","title":"Role Hierarchy and Decision Flow","text":"graph TD\n subgraph \"Board Hierarchy\"\n A[Chairman<br/>Voting Weight: 1.5<br/>Final Decision Authority]\n B[Vice Chairman<br/>Voting Weight: 1.2<br/>Operational Support]\n C[Executive Director<br/>Voting Weight: 1.5<br/>Strategic Planning]\n D[Secretary<br/>Voting Weight: 1.0<br/>Documentation]\n E[Treasurer<br/>Voting Weight: 1.0<br/>Financial Oversight]\n F[Member<br/>Voting Weight: 1.0<br/>Expertise Contribution]\n end\n\n subgraph \"Decision Process\"\n G[Task Analysis]\n H[Expertise Assignment]\n I[Individual Analysis]\n J[Discussion & Debate]\n K[Voting]\n L[Consensus Check]\n M[Final Decision]\n end\n\n A --> G\n B --> G\n C --> G\n D --> G\n E --> G\n F --> G\n\n G --> H\n H --> I\n I --> J\n J --> K\n K --> L\n L --> M\n\n A -.->|Final Authority| M
"},{"location":"swarms/structs/board_of_directors/board_of_directors_swarm/#boardofdirectorsswarm-constructor","title":"BoardOfDirectorsSwarm
Constructor","text":"Parameter Type Default Description name
str
\"BoardOfDirectorsSwarm\"
The name of the swarm instance description
str
\"Distributed task swarm with collective decision-making\"
Brief description of the swarm's functionality board_members
Optional[List[BoardMember]]
None
List of board members with their roles and expertise agents
List[Union[Agent, Callable, Any]]
None
List of worker agents in the swarm max_loops
int
1
Maximum number of feedback loops between board and agents output_type
OutputType
\"dict-all-except-first\"
Format for output (dict, str, list) board_model_name
str
\"gpt-4o-mini\"
Model name for board member agents verbose
bool
False
Enable detailed logging add_collaboration_prompt
bool
True
Add collaboration prompts to agents board_feedback_on
bool
True
Enable board feedback on agent outputs decision_threshold
float
0.6
Threshold for majority decisions (0.0-1.0) enable_voting
bool
True
Enable voting mechanisms for board decisions enable_consensus
bool
True
Enable consensus-building mechanisms max_workers
Optional[int]
None
Maximum number of workers for parallel execution"},{"location":"swarms/structs/board_of_directors/board_of_directors_swarm/#core-methods","title":"Core Methods","text":""},{"location":"swarms/structs/board_of_directors/board_of_directors_swarm/#run_board_meetingtask-imgnone","title":"run_board_meeting(task, img=None)
","text":"Conducts a board meeting to discuss and decide on the given task. This method orchestrates the complete board meeting process, including discussion, decision-making, and task distribution planning.
"},{"location":"swarms/structs/board_of_directors/board_of_directors_swarm/#parameters","title":"Parameters","text":"Parameter Type Default Descriptiontask
str
Required The task to be discussed and planned by the board img
str
None
Optional image to be used with the task"},{"location":"swarms/structs/board_of_directors/board_of_directors_swarm/#returns","title":"Returns","text":"Type Description BoardSpec
The board's plan and orders"},{"location":"swarms/structs/board_of_directors/board_of_directors_swarm/#example","title":"Example","text":"from swarms import Agent\nfrom swarms.structs.board_of_directors_swarm import (\n BoardOfDirectorsSwarm,\n BoardMember,\n BoardMemberRole\n)\n\n# Create board members\nchairman = Agent(\n agent_name=\"Chairman\",\n agent_description=\"Chairman of the Board responsible for leading meetings\",\n model_name=\"gpt-4o-mini\",\n system_prompt=\"You are the Chairman of the Board...\"\n)\n\nvice_chairman = Agent(\n agent_name=\"Vice-Chairman\",\n agent_description=\"Vice Chairman who supports the Chairman\",\n model_name=\"gpt-4o-mini\",\n system_prompt=\"You are the Vice Chairman...\"\n)\n\n# Create BoardMember objects\nboard_members = [\n BoardMember(chairman, BoardMemberRole.CHAIRMAN, 1.5, [\"leadership\", \"strategy\"]),\n BoardMember(vice_chairman, BoardMemberRole.VICE_CHAIRMAN, 1.2, [\"operations\", \"coordination\"]),\n]\n\n# Create worker agents\nresearch_agent = Agent(\n agent_name=\"Research-Specialist\",\n agent_description=\"Expert in market research and analysis\",\n model_name=\"gpt-4o\",\n)\n\nfinancial_agent = Agent(\n agent_name=\"Financial-Analyst\",\n agent_description=\"Specialist in financial analysis and valuation\",\n model_name=\"gpt-4o\",\n)\n\n# Initialize the Board of Directors swarm\nboard_swarm = BoardOfDirectorsSwarm(\n name=\"Executive_Board_Swarm\",\n description=\"Executive board with specialized roles for strategic decision-making\",\n board_members=board_members,\n agents=[research_agent, financial_agent],\n max_loops=2,\n verbose=True,\n decision_threshold=0.6,\n enable_voting=True,\n enable_consensus=True,\n)\n\n# Run a board meeting\nboard_spec = board_swarm.run_board_meeting(\n task=\"Analyze the market potential for Tesla (TSLA) stock\"\n)\nprint(f\"Board Plan: {board_spec.plan}\")\nprint(f\"Number of Orders: {len(board_spec.orders)}\")\n
"},{"location":"swarms/structs/board_of_directors/board_of_directors_swarm/#steptask-imgnone-args-kwargs","title":"step(task, img=None, *args, **kwargs)
","text":"Executes a single step of the Board of Directors swarm. This method runs one complete cycle of board meeting and task execution, including board discussion, task distribution, and optional feedback.
"},{"location":"swarms/structs/board_of_directors/board_of_directors_swarm/#parameters_1","title":"Parameters","text":"Parameter Type Default Descriptiontask
str
Required The task to be executed img
str
None
Optional image input *args
Any
- Additional positional arguments **kwargs
Any
- Additional keyword arguments"},{"location":"swarms/structs/board_of_directors/board_of_directors_swarm/#returns_1","title":"Returns","text":"Type Description Any
The result of the step execution"},{"location":"swarms/structs/board_of_directors/board_of_directors_swarm/#example_1","title":"Example","text":"# Execute a single step\nresult = board_swarm.step(\n task=\"Analyze the market potential for Tesla (TSLA) stock\"\n)\nprint(result)\n
"},{"location":"swarms/structs/board_of_directors/board_of_directors_swarm/#runtask-imgnone-args-kwargs","title":"run(task, img=None, *args, **kwargs)
","text":"Executes the Board of Directors swarm for a specified number of feedback loops, processing the task through multiple iterations for refinement and improvement.
"},{"location":"swarms/structs/board_of_directors/board_of_directors_swarm/#parameters_2","title":"Parameters","text":"Parameter Type Default Descriptiontask
str
Required The initial task to be processed by the swarm img
str
None
Optional image input for the agents *args
Any
- Additional positional arguments **kwargs
Any
- Additional keyword arguments"},{"location":"swarms/structs/board_of_directors/board_of_directors_swarm/#returns_2","title":"Returns","text":"Type Description Any
The formatted conversation history as output based on output_type
"},{"location":"swarms/structs/board_of_directors/board_of_directors_swarm/#example_2","title":"Example","text":"# Run the complete swarm with multiple loops\nresult = board_swarm.run(\n task=\"Analyze the market potential for Tesla (TSLA) stock\"\n)\nprint(result)\n
"},{"location":"swarms/structs/board_of_directors/board_of_directors_swarm/#board-management-methods","title":"Board Management Methods","text":""},{"location":"swarms/structs/board_of_directors/board_of_directors_swarm/#add_board_memberboard_member","title":"add_board_member(board_member)
","text":"Adds a new member to the Board of Directors.
"},{"location":"swarms/structs/board_of_directors/board_of_directors_swarm/#parameters_3","title":"Parameters","text":"Parameter Type Descriptionboard_member
BoardMember
The board member to add"},{"location":"swarms/structs/board_of_directors/board_of_directors_swarm/#example_3","title":"Example","text":"# Create a new board member\ntreasurer = Agent(\n agent_name=\"Treasurer\",\n agent_description=\"Board Treasurer responsible for financial oversight\",\n model_name=\"gpt-4o-mini\",\n system_prompt=\"You are the Treasurer...\"\n)\n\ntreasurer_member = BoardMember(\n treasurer, \n BoardMemberRole.TREASURER, \n 1.0, \n [\"finance\", \"budgeting\"]\n)\n\n# Add to the board\nboard_swarm.add_board_member(treasurer_member)\n
"},{"location":"swarms/structs/board_of_directors/board_of_directors_swarm/#remove_board_memberagent_name","title":"remove_board_member(agent_name)
","text":"Removes a board member by agent name.
"},{"location":"swarms/structs/board_of_directors/board_of_directors_swarm/#parameters_4","title":"Parameters","text":"Parameter Type Descriptionagent_name
str
The name of the agent to remove from the board"},{"location":"swarms/structs/board_of_directors/board_of_directors_swarm/#example_4","title":"Example","text":"# Remove a board member\nboard_swarm.remove_board_member(\"Treasurer\")\n
"},{"location":"swarms/structs/board_of_directors/board_of_directors_swarm/#get_board_memberagent_name","title":"get_board_member(agent_name)
","text":"Gets a board member by agent name.
"},{"location":"swarms/structs/board_of_directors/board_of_directors_swarm/#parameters_5","title":"Parameters","text":"Parameter Type Descriptionagent_name
str
The name of the agent"},{"location":"swarms/structs/board_of_directors/board_of_directors_swarm/#returns_3","title":"Returns","text":"Type Description Optional[BoardMember]
The board member if found, None otherwise"},{"location":"swarms/structs/board_of_directors/board_of_directors_swarm/#example_5","title":"Example","text":"# Get a specific board member\nchairman_member = board_swarm.get_board_member(\"Chairman\")\nif chairman_member:\n print(f\"Chairman voting weight: {chairman_member.voting_weight}\")\n
"},{"location":"swarms/structs/board_of_directors/board_of_directors_swarm/#get_board_summary","title":"get_board_summary()
","text":"Gets a comprehensive summary of the Board of Directors.
"},{"location":"swarms/structs/board_of_directors/board_of_directors_swarm/#returns_4","title":"Returns","text":"Type DescriptionDict[str, Any]
Summary of the board structure and members"},{"location":"swarms/structs/board_of_directors/board_of_directors_swarm/#example_6","title":"Example","text":"# Get board summary\nsummary = board_swarm.get_board_summary()\nprint(f\"Board Name: {summary['board_name']}\")\nprint(f\"Total Members: {summary['total_members']}\")\nprint(f\"Total Agents: {summary['total_agents']}\")\nprint(f\"Decision Threshold: {summary['decision_threshold']}\")\n\nfor member in summary['members']:\n print(f\"- {member['name']} ({member['role']}): Weight {member['voting_weight']}\")\n
"},{"location":"swarms/structs/board_of_directors/board_of_directors_swarm/#configuration-management","title":"Configuration Management","text":"The Board of Directors feature can be configured through the BoardConfig
class:
from swarms.config.board_config import enable_board_feature, is_board_feature_enabled\n\n# Check if feature is enabled\nif not is_board_feature_enabled():\n # Enable the feature\n enable_board_feature()\n print(\"Board of Directors feature enabled\")\n
"},{"location":"swarms/structs/board_of_directors/board_of_directors_swarm/#configuration-options","title":"Configuration Options","text":"from swarms.config.board_config import (\n set_board_size,\n set_decision_threshold,\n set_board_model,\n enable_verbose_logging\n)\n\n# Configure board settings\nset_board_size(5) # Set default board size to 5 members\nset_decision_threshold(0.7) # Set decision threshold to 70%\nset_board_model(\"gpt-4o\") # Set default model for board members\nenable_verbose_logging() # Enable verbose logging\n
"},{"location":"swarms/structs/board_of_directors/board_of_directors_swarm/#advanced-usage-examples","title":"Advanced Usage Examples","text":""},{"location":"swarms/structs/board_of_directors/board_of_directors_swarm/#custom-board-template","title":"Custom Board Template","text":"from swarms.config.board_config import get_default_board_template\n\n# Get a predefined board template\nexecutive_template = get_default_board_template(\"executive\")\nprint(\"Executive template roles:\", executive_template[\"roles\"])\n
"},{"location":"swarms/structs/board_of_directors/board_of_directors_swarm/#complex-task-orchestration","title":"Complex Task Orchestration","text":"# Create a comprehensive board for financial analysis\nfinancial_board = BoardOfDirectorsSwarm(\n name=\"Financial_Analysis_Board\",\n description=\"Specialized board for financial analysis and investment decisions\",\n board_members=create_financial_board_members(),\n agents=create_financial_agents(),\n max_loops=3,\n verbose=True,\n decision_threshold=0.75, # Higher threshold for financial decisions\n enable_voting=True,\n enable_consensus=True,\n)\n\n# Execute complex financial analysis\nresult = financial_board.run(\n task=\"Conduct comprehensive analysis of Apple Inc. (AAPL) including market position, financial health, and investment recommendation\"\n)\n
"},{"location":"swarms/structs/board_of_directors/board_of_directors_swarm/#board-feedback-and-iteration","title":"Board Feedback and Iteration","text":"# Enable board feedback for iterative improvement\nboard_swarm = BoardOfDirectorsSwarm(\n board_feedback_on=True, # Enable board feedback\n max_loops=3, # Allow multiple iterations\n verbose=True,\n)\n\n# The board will provide feedback after each iteration\nresult = board_swarm.run(task=\"Your complex task here\")\n
"},{"location":"swarms/structs/board_of_directors/board_of_directors_swarm/#best-practices","title":"Best Practices","text":"The Board of Directors swarm includes comprehensive error handling:
try:\n result = board_swarm.run(task=\"Your task\")\nexcept Exception as e:\n print(f\"Board execution failed: {e}\")\n # Handle error appropriately\n
"},{"location":"swarms/structs/board_of_directors/board_of_directors_swarm/#performance-considerations","title":"Performance Considerations","text":"The Board of Directors swarm can be integrated with other swarm architectures:
# Use Board of Directors as a component in larger workflows\nfrom swarms.structs.sequential_workflow import SequentialWorkflow\n\n# Create a workflow that includes board decision-making\nworkflow = SequentialWorkflow([\n board_swarm, # Board makes decisions\n other_swarm, # Other swarm executes based on board decisions\n])\n
For more information on multi-agent architectures and advanced usage patterns, see the Multi-Agent Collaboration Guide.
"},{"location":"swarms/structs/board_of_directors/board_of_directors_workflow/","title":"Board of Directors Workflow","text":"The Board of Directors workflow is a sophisticated multi-stage process that ensures comprehensive task analysis, collaborative decision-making, and effective execution through specialized agents.
"},{"location":"swarms/structs/board_of_directors/board_of_directors_workflow/#workflow-overview","title":"Workflow Overview","text":"graph TD\n A[Task Input] --> B[Initial Assessment]\n B --> C[Board Assembly]\n C --> D[Meeting Phase]\n D --> E[Decision Phase]\n E --> F[Execution Phase]\n F --> G[Review Phase]\n G --> H{Approval?}\n H -->|No| I[Refinement]\n I --> D\n H -->|Yes| J[Final Delivery]\n\n style A fill:#e1f5fe\n style J fill:#c8e6c9\n style H fill:#fff3e0
"},{"location":"swarms/structs/board_of_directors/board_of_directors_workflow/#phase-1-initial-assessment","title":"Phase 1: Initial Assessment","text":""},{"location":"swarms/structs/board_of_directors/board_of_directors_workflow/#task-analysis-and-board-preparation","title":"Task Analysis and Board Preparation","text":"flowchart LR\n A[Task Received] --> B[Complexity Assessment]\n B --> C[Resource Requirements]\n C --> D[Expertise Mapping]\n D --> E[Board Member Selection]\n E --> F[Meeting Scheduling]\n\n subgraph \"Assessment Criteria\"\n G[Task Complexity]\n H[Time Constraints]\n I[Resource Availability]\n J[Expertise Requirements]\n end\n\n B --> G\n B --> H\n C --> I\n D --> J
"},{"location":"swarms/structs/board_of_directors/board_of_directors_workflow/#board-member-activation","title":"Board Member Activation","text":"sequenceDiagram\n participant System\n participant Chairman\n participant Members\n participant Agents\n\n System->>Chairman: Notify New Task\n Chairman->>System: Assess Task Requirements\n System->>Members: Activate Relevant Members\n Members->>Chairman: Confirm Availability\n Chairman->>Agents: Prepare Agent Pool\n Agents->>Chairman: Confirm Readiness\n Chairman->>System: Board Ready for Meeting
"},{"location":"swarms/structs/board_of_directors/board_of_directors_workflow/#phase-2-board-meeting","title":"Phase 2: Board Meeting","text":""},{"location":"swarms/structs/board_of_directors/board_of_directors_workflow/#meeting-structure","title":"Meeting Structure","text":"graph TD\n A[Meeting Opens] --> B[Agenda Review]\n B --> C[Task Presentation]\n C --> D[Expertise Assignment]\n D --> E[Individual Analysis]\n E --> F[Group Discussion]\n F --> G[Proposal Development]\n G --> H[Voting Process]\n H --> I[Consensus Building]\n I --> J[Plan Finalization]\n\n subgraph \"Meeting Components\"\n K[Time Management]\n L[Documentation]\n M[Conflict Resolution]\n N[Decision Recording]\n end\n\n A --> K\n F --> L\n I --> M\n J --> N
"},{"location":"swarms/structs/board_of_directors/board_of_directors_workflow/#discussion-and-debate-process","title":"Discussion and Debate Process","text":"flowchart TD\n A[Topic Introduction] --> B[Expert Analysis]\n B --> C[Cross-Examination]\n C --> D[Alternative Proposals]\n D --> E[Pros/Cons Analysis]\n E --> F[Risk Assessment]\n F --> G[Resource Evaluation]\n G --> H[Consensus Check]\n H -->|No Consensus| I[Mediation]\n I --> B\n H -->|Consensus| J[Proposal Finalization]\n\n style A fill:#e3f2fd\n style J fill:#e8f5e8
"},{"location":"swarms/structs/board_of_directors/board_of_directors_workflow/#phase-3-decision-making","title":"Phase 3: Decision Making","text":""},{"location":"swarms/structs/board_of_directors/board_of_directors_workflow/#voting-mechanism","title":"Voting Mechanism","text":"graph LR\n subgraph \"Voting Process\"\n A[Proposal Presentation]\n B[Individual Voting]\n C[Weight Calculation]\n D[Threshold Check]\n E[Decision Outcome]\n end\n\n subgraph \"Voting Weights\"\n F[Chairman: 1.5]\n G[Vice Chairman: 1.2]\n H[Executive Director: 1.5]\n I[Secretary: 1.0]\n J[Treasurer: 1.0]\n K[Member: 1.0]\n end\n\n A --> B\n B --> C\n C --> D\n D --> E\n\n F --> C\n G --> C\n H --> C\n I --> C\n J --> C\n K --> C
"},{"location":"swarms/structs/board_of_directors/board_of_directors_workflow/#consensus-building","title":"Consensus Building","text":"flowchart TD\n A[Initial Vote] --> B{Consensus Reached?}\n B -->|Yes| C[Decision Approved]\n B -->|No| D[Identify Disagreements]\n D --> E[Reconciliation Discussion]\n E --> F[Proposal Modification]\n F --> G[Re-vote]\n G --> B\n\n subgraph \"Consensus Strategies\"\n H[Compromise Solutions]\n I[Expert Mediation]\n J[Alternative Approaches]\n K[Time Extension]\n end\n\n D --> H\n D --> I\n D --> J\n D --> K
"},{"location":"swarms/structs/board_of_directors/board_of_directors_workflow/#phase-4-execution-planning","title":"Phase 4: Execution Planning","text":""},{"location":"swarms/structs/board_of_directors/board_of_directors_workflow/#task-distribution-strategy","title":"Task Distribution Strategy","text":"graph TD\n A[Approved Plan] --> B[Task Breakdown]\n B --> C[Agent Assignment]\n C --> D[Resource Allocation]\n D --> E[Timeline Creation]\n E --> F[Quality Standards]\n F --> G[Execution Orders]\n\n subgraph \"Assignment Criteria\"\n H[Agent Expertise]\n I[Current Workload]\n J[Resource Requirements]\n K[Time Constraints]\n end\n\n C --> H\n C --> I\n D --> J\n E --> K
"},{"location":"swarms/structs/board_of_directors/board_of_directors_workflow/#order-creation-and-distribution","title":"Order Creation and Distribution","text":"sequenceDiagram\n participant Board\n participant Chairman\n participant Secretary\n participant Agents\n\n Board->>Chairman: Approve Execution Plan\n Chairman->>Secretary: Create Formal Orders\n Secretary->>Chairman: Document Orders\n Chairman->>Agents: Distribute Orders\n Agents->>Chairman: Acknowledge Orders\n Agents->>Secretary: Confirm Understanding\n Secretary->>Board: Order Distribution Complete
"},{"location":"swarms/structs/board_of_directors/board_of_directors_workflow/#phase-5-execution-and-monitoring","title":"Phase 5: Execution and Monitoring","text":""},{"location":"swarms/structs/board_of_directors/board_of_directors_workflow/#parallel-execution-flow","title":"Parallel Execution Flow","text":"graph LR\n subgraph \"Agent Execution\"\n A[Agent 1]\n B[Agent 2]\n C[Agent 3]\n D[Agent N]\n end\n\n subgraph \"Monitoring\"\n E[Progress Tracking]\n F[Quality Control]\n G[Resource Monitoring]\n H[Issue Detection]\n end\n\n subgraph \"Communication\"\n I[Status Reports]\n J[Issue Escalation]\n K[Resource Requests]\n L[Completion Notifications]\n end\n\n A --> E\n B --> E\n C --> E\n D --> E\n\n E --> F\n F --> G\n G --> H\n\n H --> I\n H --> J\n H --> K\n H --> L
"},{"location":"swarms/structs/board_of_directors/board_of_directors_workflow/#real-time-monitoring","title":"Real-time Monitoring","text":"flowchart TD\n A[Execution Start] --> B[Progress Monitoring]\n B --> C[Quality Assessment]\n C --> D[Resource Tracking]\n D --> E{Issues Detected?}\n E -->|Yes| F[Issue Resolution]\n F --> G[Plan Adjustment]\n G --> B\n E -->|No| H[Continue Execution]\n H --> I{All Tasks Complete?}\n I -->|No| B\n I -->|Yes| J[Result Aggregation]
"},{"location":"swarms/structs/board_of_directors/board_of_directors_workflow/#phase-6-review-and-feedback","title":"Phase 6: Review and Feedback","text":""},{"location":"swarms/structs/board_of_directors/board_of_directors_workflow/#result-evaluation","title":"Result Evaluation","text":"graph TD\n A[Results Collected] --> B[Quality Assessment]\n B --> C[Compliance Check]\n C --> D[Performance Analysis]\n D --> E[Stakeholder Review]\n E --> F[Approval Decision]\n\n subgraph \"Evaluation Criteria\"\n G[Quality Standards]\n H[Timeline Adherence]\n I[Resource Efficiency]\n J[Stakeholder Satisfaction]\n end\n\n B --> G\n C --> H\n D --> I\n E --> J
"},{"location":"swarms/structs/board_of_directors/board_of_directors_workflow/#feedback-loop-process","title":"Feedback Loop Process","text":"flowchart TD\n A[Review Results] --> B{Meet Standards?}\n B -->|Yes| C[Approve Results]\n B -->|No| D[Identify Issues]\n D --> E[Root Cause Analysis]\n E --> F[Plan Refinement]\n F --> G[Re-execution]\n G --> A\n\n subgraph \"Feedback Types\"\n H[Quality Issues]\n I[Timeline Delays]\n J[Resource Overruns]\n K[Scope Changes]\n end\n\n D --> H\n D --> I\n D --> J\n D --> K
"},{"location":"swarms/structs/board_of_directors/board_of_directors_workflow/#quality-assurance","title":"Quality Assurance","text":""},{"location":"swarms/structs/board_of_directors/board_of_directors_workflow/#multi-level-quality-control","title":"Multi-Level Quality Control","text":"graph TD\n A[Agent Self-Check] --> B[Peer Review]\n B --> C[Board Review]\n C --> D[Stakeholder Validation]\n D --> E[Final Approval]\n\n subgraph \"Quality Gates\"\n F[Completeness Check]\n G[Accuracy Verification]\n H[Compliance Validation]\n I[Performance Metrics]\n end\n\n A --> F\n B --> G\n C --> H\n D --> I
"},{"location":"swarms/structs/board_of_directors/board_of_directors_workflow/#performance-metrics","title":"Performance Metrics","text":""},{"location":"swarms/structs/board_of_directors/board_of_directors_workflow/#key-performance-indicators","title":"Key Performance Indicators","text":"graph LR\n subgraph \"Efficiency Metrics\"\n A[Execution Time]\n B[Resource Utilization]\n C[Cost Effectiveness]\n end\n\n subgraph \"Quality Metrics\"\n D[Accuracy Rate]\n E[Compliance Score]\n F[Stakeholder Satisfaction]\n end\n\n subgraph \"Process Metrics\"\n G[Decision Speed]\n H[Consensus Time]\n I[Iteration Count]\n end\n\n A --> D\n B --> E\n C --> F\n\n G --> A\n H --> B\n I --> C
"},{"location":"swarms/structs/board_of_directors/board_of_directors_workflow/#best-practices","title":"Best Practices","text":""},{"location":"swarms/structs/board_of_directors/board_of_directors_workflow/#workflow-optimization","title":"Workflow Optimization","text":"For more information on implementing the Board of Directors workflow, see the Board of Directors Configuration Guide and Advanced Examples.
"},{"location":"swarms/tools/base_tool/","title":"BaseTool Class Documentation","text":""},{"location":"swarms/tools/base_tool/#overview","title":"Overview","text":"The BaseTool
class is a comprehensive tool management system for function calling, schema conversion, and execution. It provides a unified interface for converting Python functions to OpenAI function calling schemas, managing Pydantic models, executing tools with proper error handling, and supporting multiple AI provider formats (OpenAI, Anthropic, etc.).
Key Features:
Convert Python functions to OpenAI function calling schemas
Manage Pydantic models and their schemas
Execute tools with proper error handling and validation
Support for parallel and sequential function execution
Schema validation for multiple AI providers
Automatic tool execution from API responses
Caching for improved performance
verbose
Optional[bool]
None
Enable detailed logging output base_models
Optional[List[type[BaseModel]]]
None
List of Pydantic models to manage autocheck
Optional[bool]
None
Enable automatic validation checks auto_execute_tool
Optional[bool]
None
Enable automatic tool execution tools
Optional[List[Callable[..., Any]]]
None
List of callable functions to manage tool_system_prompt
Optional[str]
None
System prompt for tool operations function_map
Optional[Dict[str, Callable]]
None
Mapping of function names to callables list_of_dicts
Optional[List[Dict[str, Any]]]
None
List of dictionary representations"},{"location":"swarms/tools/base_tool/#methods-overview","title":"Methods Overview","text":"Method Description func_to_dict
Convert a callable function to OpenAI function calling schema load_params_from_func_for_pybasemodel
Load function parameters for Pydantic BaseModel integration base_model_to_dict
Convert Pydantic BaseModel to OpenAI schema dictionary multi_base_models_to_dict
Convert multiple Pydantic BaseModels to OpenAI schema dict_to_openai_schema_str
Convert dictionary to OpenAI schema string multi_dict_to_openai_schema_str
Convert multiple dictionaries to OpenAI schema string get_docs_from_callable
Extract documentation from callable items execute_tool
Execute a tool based on response string detect_tool_input_type
Detect the type of tool input dynamic_run
Execute dynamic run with automatic type detection execute_tool_by_name
Search for and execute tool by name execute_tool_from_text
Execute tool from JSON-formatted string check_str_for_functions_valid
Check if output is valid JSON with matching function convert_funcs_into_tools
Convert all functions in tools list to OpenAI format convert_tool_into_openai_schema
Convert tools into OpenAI function calling schema check_func_if_have_docs
Check if function has proper documentation check_func_if_have_type_hints
Check if function has proper type hints find_function_name
Find function by name in tools list function_to_dict
Convert function to dictionary representation multiple_functions_to_dict
Convert multiple functions to dictionary representations execute_function_with_dict
Execute function using dictionary of parameters execute_multiple_functions_with_dict
Execute multiple functions with parameter dictionaries validate_function_schema
Validate function schema for different AI providers get_schema_provider_format
Get detected provider format of schema convert_schema_between_providers
Convert schema between provider formats execute_function_calls_from_api_response
Execute function calls from API responses detect_api_response_format
Detect the format of API response"},{"location":"swarms/tools/base_tool/#detailed-method-documentation","title":"Detailed Method Documentation","text":""},{"location":"swarms/tools/base_tool/#func_to_dict","title":"func_to_dict
","text":"Description: Convert a callable function to OpenAI function calling schema dictionary.
Arguments: - function
(Callable[..., Any], optional): The function to convert
Returns: Dict[str, Any]
- OpenAI function calling schema dictionary
Example:
from swarms.tools.base_tool import BaseTool\n\ndef add_numbers(a: int, b: int) -> int:\n \"\"\"Add two numbers together.\"\"\"\n return a + b\n\n# Create BaseTool instance\ntool = BaseTool(verbose=True)\n\n# Convert function to OpenAI schema\nschema = tool.func_to_dict(add_numbers)\nprint(schema)\n# Output: {'type': 'function', 'function': {'name': 'add_numbers', 'description': 'Add two numbers together.', 'parameters': {...}}}\n
"},{"location":"swarms/tools/base_tool/#load_params_from_func_for_pybasemodel","title":"load_params_from_func_for_pybasemodel
","text":"Description: Load and process function parameters for Pydantic BaseModel integration.
Arguments:
func
(Callable[..., Any]): The function to process
*args
: Additional positional arguments
**kwargs
: Additional keyword arguments
Returns: Callable[..., Any]
- Processed function with loaded parameters
Example:
from swarms.tools.base_tool import BaseTool\n\ndef calculate_area(length: float, width: float) -> float:\n \"\"\"Calculate area of a rectangle.\"\"\"\n return length * width\n\ntool = BaseTool()\nprocessed_func = tool.load_params_from_func_for_pybasemodel(calculate_area)\n
"},{"location":"swarms/tools/base_tool/#base_model_to_dict","title":"base_model_to_dict
","text":"Description: Convert a Pydantic BaseModel to OpenAI function calling schema dictionary.
Arguments:
pydantic_type
(type[BaseModel]): The Pydantic model class to convert
*args
: Additional positional arguments
**kwargs
: Additional keyword arguments
Returns: dict[str, Any]
- OpenAI function calling schema dictionary
Example:
from pydantic import BaseModel\nfrom swarms.tools.base_tool import BaseTool\n\nclass UserInfo(BaseModel):\n name: str\n age: int\n email: str\n\ntool = BaseTool()\nschema = tool.base_model_to_dict(UserInfo)\nprint(schema)\n
"},{"location":"swarms/tools/base_tool/#multi_base_models_to_dict","title":"multi_base_models_to_dict
","text":"Description: Convert multiple Pydantic BaseModels to OpenAI function calling schema.
Arguments: - base_models
(List[BaseModel]): List of Pydantic models to convert
Returns: dict[str, Any]
- Combined OpenAI function calling schema
Example:
from pydantic import BaseModel\nfrom swarms.tools.base_tool import BaseTool\n\nclass User(BaseModel):\n name: str\n age: int\n\nclass Product(BaseModel):\n name: str\n price: float\n\ntool = BaseTool()\nschemas = tool.multi_base_models_to_dict([User, Product])\nprint(schemas)\n
"},{"location":"swarms/tools/base_tool/#dict_to_openai_schema_str","title":"dict_to_openai_schema_str
","text":"Description: Convert a dictionary to OpenAI function calling schema string.
Arguments:
dict
(dict[str, Any]): Dictionary to convert Returns: str
- OpenAI schema string representation
Example:
from swarms.tools.base_tool import BaseTool\n\nmy_dict = {\n \"type\": \"function\",\n \"function\": {\n \"name\": \"get_weather\",\n \"description\": \"Get weather information\",\n \"parameters\": {\"type\": \"object\", \"properties\": {\"city\": {\"type\": \"string\"}}}\n }\n}\n\ntool = BaseTool()\nschema_str = tool.dict_to_openai_schema_str(my_dict)\nprint(schema_str)\n
"},{"location":"swarms/tools/base_tool/#multi_dict_to_openai_schema_str","title":"multi_dict_to_openai_schema_str
","text":"Description: Convert multiple dictionaries to OpenAI function calling schema string.
Arguments:
dicts
(list[dict[str, Any]]): List of dictionaries to convert Returns: str
- Combined OpenAI schema string representation
Example:
from swarms.tools.base_tool import BaseTool\n\ndict1 = {\"type\": \"function\", \"function\": {\"name\": \"func1\", \"description\": \"Function 1\"}}\ndict2 = {\"type\": \"function\", \"function\": {\"name\": \"func2\", \"description\": \"Function 2\"}}\n\ntool = BaseTool()\nschema_str = tool.multi_dict_to_openai_schema_str([dict1, dict2])\nprint(schema_str)\n
"},{"location":"swarms/tools/base_tool/#get_docs_from_callable","title":"get_docs_from_callable
","text":"Description: Extract documentation from a callable item.
Arguments:
item
: The callable item to extract documentation from Returns: Processed documentation
Example:
from swarms.tools.base_tool import BaseTool\n\ndef example_function():\n \"\"\"This is an example function with documentation.\"\"\"\n pass\n\ntool = BaseTool()\ndocs = tool.get_docs_from_callable(example_function)\nprint(docs)\n
"},{"location":"swarms/tools/base_tool/#execute_tool","title":"execute_tool
","text":"Description: Execute a tool based on a response string.
Arguments: - response
(str): JSON response string containing tool execution details
*args
: Additional positional arguments
**kwargs
: Additional keyword arguments
Returns: Callable
- Result of the tool execution
Example:
from swarms.tools.base_tool import BaseTool\n\ndef greet(name: str) -> str:\n \"\"\"Greet a person by name.\"\"\"\n return f\"Hello, {name}!\"\n\ntool = BaseTool(tools=[greet])\nresponse = '{\"name\": \"greet\", \"parameters\": {\"name\": \"Alice\"}}'\nresult = tool.execute_tool(response)\nprint(result) # Output: \"Hello, Alice!\"\n
"},{"location":"swarms/tools/base_tool/#detect_tool_input_type","title":"detect_tool_input_type
","text":"Description: Detect the type of tool input for appropriate processing.
Arguments:
input
(ToolType): The input to analyze Returns: str
- Type of the input (\"Pydantic\", \"Dictionary\", \"Function\", or \"Unknown\")
Example:
from swarms.tools.base_tool import BaseTool\nfrom pydantic import BaseModel\n\nclass MyModel(BaseModel):\n value: int\n\ndef my_function():\n pass\n\ntool = BaseTool()\nprint(tool.detect_tool_input_type(MyModel)) # \"Pydantic\"\nprint(tool.detect_tool_input_type(my_function)) # \"Function\"\nprint(tool.detect_tool_input_type({\"key\": \"value\"})) # \"Dictionary\"\n
"},{"location":"swarms/tools/base_tool/#dynamic_run","title":"dynamic_run
","text":"Description: Execute a dynamic run based on the input type with automatic type detection.
Arguments: - input
(Any): The input to be processed (Pydantic model, dict, or function)
Returns: str
- The result of the dynamic run (schema string or execution result)
Example:
from swarms.tools.base_tool import BaseTool\n\ndef multiply(x: int, y: int) -> int:\n \"\"\"Multiply two numbers.\"\"\"\n return x * y\n\ntool = BaseTool(auto_execute_tool=False)\nresult = tool.dynamic_run(multiply)\nprint(result) # Returns OpenAI schema string\n
"},{"location":"swarms/tools/base_tool/#execute_tool_by_name","title":"execute_tool_by_name
","text":"Description: Search for a tool by name and execute it with the provided response.
Arguments: - tool_name
(str): The name of the tool to execute
response
(str): JSON response string containing execution parameters Returns: Any
- The result of executing the tool
Example:
from swarms.tools.base_tool import BaseTool\n\ndef calculate_sum(a: int, b: int) -> int:\n \"\"\"Calculate sum of two numbers.\"\"\"\n return a + b\n\ntool = BaseTool(function_map={\"calculate_sum\": calculate_sum})\nresult = tool.execute_tool_by_name(\"calculate_sum\", '{\"a\": 5, \"b\": 3}')\nprint(result) # Output: 8\n
"},{"location":"swarms/tools/base_tool/#execute_tool_from_text","title":"execute_tool_from_text
","text":"Description: Convert a JSON-formatted string into a tool dictionary and execute the tool.
Arguments: - text
(str): A JSON-formatted string representing a tool call with 'name' and 'parameters' keys
Returns: Any
- The result of executing the tool
Example:
from swarms.tools.base_tool import BaseTool\n\ndef divide(x: float, y: float) -> float:\n \"\"\"Divide x by y.\"\"\"\n return x / y\n\ntool = BaseTool(function_map={\"divide\": divide})\ntext = '{\"name\": \"divide\", \"parameters\": {\"x\": 10, \"y\": 2}}'\nresult = tool.execute_tool_from_text(text)\nprint(result) # Output: 5.0\n
"},{"location":"swarms/tools/base_tool/#check_str_for_functions_valid","title":"check_str_for_functions_valid
","text":"Description: Check if the output is a valid JSON string with a function name that matches the function map.
Arguments: - output
(str): The output string to validate
Returns: bool
- True if the output is valid and the function name matches, False otherwise
Example:
from swarms.tools.base_tool import BaseTool\n\ndef test_func():\n pass\n\ntool = BaseTool(function_map={\"test_func\": test_func})\nvalid_output = '{\"type\": \"function\", \"function\": {\"name\": \"test_func\"}}'\nis_valid = tool.check_str_for_functions_valid(valid_output)\nprint(is_valid) # Output: True\n
"},{"location":"swarms/tools/base_tool/#convert_funcs_into_tools","title":"convert_funcs_into_tools
","text":"Description: Convert all functions in the tools list into OpenAI function calling format.
Arguments: None
Returns: None (modifies internal state)
Example:
from swarms.tools.base_tool import BaseTool\n\ndef func1(x: int) -> int:\n \"\"\"Function 1.\"\"\"\n return x * 2\n\ndef func2(y: str) -> str:\n \"\"\"Function 2.\"\"\"\n return y.upper()\n\ntool = BaseTool(tools=[func1, func2])\ntool.convert_funcs_into_tools()\nprint(tool.function_map) # {'func1': <function func1>, 'func2': <function func2>}\n
"},{"location":"swarms/tools/base_tool/#convert_tool_into_openai_schema","title":"convert_tool_into_openai_schema
","text":"Description: Convert tools into OpenAI function calling schema format.
Arguments: None
Returns: dict[str, Any]
- Combined OpenAI function calling schema
Example:
from swarms.tools.base_tool import BaseTool\n\ndef add(a: int, b: int) -> int:\n \"\"\"Add two numbers.\"\"\"\n return a + b\n\ndef subtract(a: int, b: int) -> int:\n \"\"\"Subtract b from a.\"\"\"\n return a - b\n\ntool = BaseTool(tools=[add, subtract])\nschema = tool.convert_tool_into_openai_schema()\nprint(schema)\n
"},{"location":"swarms/tools/base_tool/#check_func_if_have_docs","title":"check_func_if_have_docs
","text":"Description: Check if a function has proper documentation.
Arguments:
func
(callable): The function to check
Returns: bool
- True if function has documentation
Example:
from swarms.tools.base_tool import BaseTool\n\ndef documented_func():\n \"\"\"This function has documentation.\"\"\"\n pass\n\ndef undocumented_func():\n pass\n\ntool = BaseTool()\nprint(tool.check_func_if_have_docs(documented_func)) # True\n# tool.check_func_if_have_docs(undocumented_func) # Raises ToolDocumentationError\n
"},{"location":"swarms/tools/base_tool/#check_func_if_have_type_hints","title":"check_func_if_have_type_hints
","text":"Description: Check if a function has proper type hints.
Arguments:
func
(callable): The function to check
Returns: bool
- True if function has type hints
Example:
from swarms.tools.base_tool import BaseTool\n\ndef typed_func(x: int) -> str:\n \"\"\"A typed function.\"\"\"\n return str(x)\n\ndef untyped_func(x):\n \"\"\"An untyped function.\"\"\"\n return str(x)\n\ntool = BaseTool()\nprint(tool.check_func_if_have_type_hints(typed_func)) # True\n# tool.check_func_if_have_type_hints(untyped_func) # Raises ToolTypeHintError\n
"},{"location":"swarms/tools/base_tool/#find_function_name","title":"find_function_name
","text":"Description: Find a function by name in the tools list.
Arguments: - func_name
(str): The name of the function to find
Returns: Optional[callable]
- The function if found, None otherwise
Example:
from swarms.tools.base_tool import BaseTool\n\ndef my_function():\n \"\"\"My function.\"\"\"\n pass\n\ntool = BaseTool(tools=[my_function])\nfound_func = tool.find_function_name(\"my_function\")\nprint(found_func) # <function my_function at ...>\n
"},{"location":"swarms/tools/base_tool/#function_to_dict","title":"function_to_dict
","text":"Description: Convert a function to dictionary representation.
Arguments: - func
(callable): The function to convert
Returns: dict
- Dictionary representation of the function
Example:
from swarms.tools.base_tool import BaseTool\n\ndef example_func(param: str) -> str:\n \"\"\"Example function.\"\"\"\n return param\n\ntool = BaseTool()\nfunc_dict = tool.function_to_dict(example_func)\nprint(func_dict)\n
"},{"location":"swarms/tools/base_tool/#multiple_functions_to_dict","title":"multiple_functions_to_dict
","text":"Description: Convert multiple functions to dictionary representations.
Arguments:
funcs
(list[callable]): List of functions to convert
Returns: list[dict]
- List of dictionary representations
Example:
from swarms.tools.base_tool import BaseTool\n\ndef func1(x: int) -> int:\n \"\"\"Function 1.\"\"\"\n return x\n\ndef func2(y: str) -> str:\n \"\"\"Function 2.\"\"\"\n return y\n\ntool = BaseTool()\nfunc_dicts = tool.multiple_functions_to_dict([func1, func2])\nprint(func_dicts)\n
"},{"location":"swarms/tools/base_tool/#execute_function_with_dict","title":"execute_function_with_dict
","text":"Description: Execute a function using a dictionary of parameters.
Arguments:
func_dict
(dict): Dictionary containing function parameters
func_name
(Optional[str]): Name of function to execute (if not in dict)
Returns: Any
- Result of function execution
Example:
from swarms.tools.base_tool import BaseTool\n\ndef power(base: int, exponent: int) -> int:\n \"\"\"Calculate base to the power of exponent.\"\"\"\n return base ** exponent\n\ntool = BaseTool(tools=[power])\nresult = tool.execute_function_with_dict({\"base\": 2, \"exponent\": 3}, \"power\")\nprint(result) # Output: 8\n
"},{"location":"swarms/tools/base_tool/#execute_multiple_functions_with_dict","title":"execute_multiple_functions_with_dict
","text":"Description: Execute multiple functions using dictionaries of parameters.
Arguments:
func_dicts
(list[dict]): List of dictionaries containing function parameters
func_names
(Optional[list[str]]): Optional list of function names
Returns: list[Any]
- List of results from function executions
Example:
from swarms.tools.base_tool import BaseTool\n\ndef add(a: int, b: int) -> int:\n \"\"\"Add two numbers.\"\"\"\n return a + b\n\ndef multiply(a: int, b: int) -> int:\n \"\"\"Multiply two numbers.\"\"\"\n return a * b\n\ntool = BaseTool(tools=[add, multiply])\nresults = tool.execute_multiple_functions_with_dict(\n [{\"a\": 1, \"b\": 2}, {\"a\": 3, \"b\": 4}], \n [\"add\", \"multiply\"]\n)\nprint(results) # [3, 12]\n
"},{"location":"swarms/tools/base_tool/#validate_function_schema","title":"validate_function_schema
","text":"Description: Validate the schema of a function for different AI providers.
Arguments:
schema
(Optional[Union[List[Dict[str, Any]], Dict[str, Any]]]): Function schema(s) to validate
provider
(str): Target provider format (\"openai\", \"anthropic\", \"generic\", \"auto\")
Returns: bool
- True if schema(s) are valid, False otherwise
Example:
from swarms.tools.base_tool import BaseTool\n\nopenai_schema = {\n \"type\": \"function\",\n \"function\": {\n \"name\": \"add_numbers\",\n \"description\": \"Add two numbers\",\n \"parameters\": {\n \"type\": \"object\",\n \"properties\": {\n \"a\": {\"type\": \"integer\"},\n \"b\": {\"type\": \"integer\"}\n },\n \"required\": [\"a\", \"b\"]\n }\n }\n}\n\ntool = BaseTool()\nis_valid = tool.validate_function_schema(openai_schema, \"openai\")\nprint(is_valid) # True\n
"},{"location":"swarms/tools/base_tool/#get_schema_provider_format","title":"get_schema_provider_format
","text":"Description: Get the detected provider format of a schema.
Arguments:
schema
(Dict[str, Any]): Function schema dictionary
Returns: str
- Provider format (\"openai\", \"anthropic\", \"generic\", \"unknown\")
Example:
from swarms.tools.base_tool import BaseTool\n\nopenai_schema = {\n \"type\": \"function\",\n \"function\": {\"name\": \"test\", \"description\": \"Test function\"}\n}\n\ntool = BaseTool()\nprovider = tool.get_schema_provider_format(openai_schema)\nprint(provider) # \"openai\"\n
"},{"location":"swarms/tools/base_tool/#convert_schema_between_providers","title":"convert_schema_between_providers
","text":"Description: Convert a function schema between different provider formats.
Arguments:
schema
(Dict[str, Any]): Source function schema
target_provider
(str): Target provider format (\"openai\", \"anthropic\", \"generic\")
Returns: Dict[str, Any]
- Converted schema
Example:
from swarms.tools.base_tool import BaseTool\n\nopenai_schema = {\n \"type\": \"function\",\n \"function\": {\n \"name\": \"test_func\",\n \"description\": \"Test function\",\n \"parameters\": {\"type\": \"object\", \"properties\": {}}\n }\n}\n\ntool = BaseTool()\nanthropic_schema = tool.convert_schema_between_providers(openai_schema, \"anthropic\")\nprint(anthropic_schema)\n# Output: {\"name\": \"test_func\", \"description\": \"Test function\", \"input_schema\": {...}}\n
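To make the documented output shape concrete, here is an illustrative, dependency-free sketch of the OpenAI-to-Anthropic mapping that convert_schema_between_providers performs. This is not the library implementation; it only reproduces the transformation implied by the example above (Anthropic uses "input_schema" where OpenAI uses "parameters").

```python
def openai_to_anthropic(schema: dict) -> dict:
    """Sketch of the OpenAI -> Anthropic tool schema mapping."""
    fn = schema["function"]
    return {
        "name": fn["name"],
        "description": fn.get("description", ""),
        # Anthropic nests the JSON Schema under "input_schema"
        "input_schema": fn.get("parameters", {"type": "object", "properties": {}}),
    }

openai_schema = {
    "type": "function",
    "function": {
        "name": "test_func",
        "description": "Test function",
        "parameters": {"type": "object", "properties": {}},
    },
}

anthropic_schema = openai_to_anthropic(openai_schema)
print(anthropic_schema["name"])  # test_func
```
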
"},{"location":"swarms/tools/base_tool/#execute_function_calls_from_api_response","title":"execute_function_calls_from_api_response
","text":"Description: Automatically detect and execute function calls from OpenAI or Anthropic API responses.
Arguments:
api_response
(Union[Dict[str, Any], str, List[Any]]): The API response containing function calls
sequential
(bool): If True, execute functions sequentially. If False, execute in parallel
max_workers
(int): Maximum number of worker threads for parallel execution
return_as_string
(bool): If True, return results as formatted strings
Returns: Union[List[Any], List[str]]
- List of results from executed functions
Example:
from swarms.tools.base_tool import BaseTool\n\ndef get_weather(city: str) -> str:\n \"\"\"Get weather for a city.\"\"\"\n return f\"Weather in {city}: Sunny, 25\u00b0C\"\n\n# Simulated OpenAI API response\nopenai_response = {\n \"choices\": [{\n \"message\": {\n \"tool_calls\": [{\n \"type\": \"function\",\n \"function\": {\n \"name\": \"get_weather\",\n \"arguments\": '{\"city\": \"New York\"}'\n },\n \"id\": \"call_123\"\n }]\n }\n }]\n}\n\ntool = BaseTool(tools=[get_weather])\nresults = tool.execute_function_calls_from_api_response(openai_response)\nprint(results) # [\"Function 'get_weather' result:\\nWeather in New York: Sunny, 25\u00b0C\"]\n
"},{"location":"swarms/tools/base_tool/#detect_api_response_format","title":"detect_api_response_format
","text":"Description: Detect the format of an API response.
Arguments:
response
(Union[Dict[str, Any], str, BaseModel]): API response to analyze
Returns: str
- Detected format (\"openai\", \"anthropic\", \"generic\", \"unknown\")
Example:
from swarms.tools.base_tool import BaseTool\n\nopenai_response = {\n \"choices\": [{\"message\": {\"tool_calls\": []}}]\n}\n\nanthropic_response = {\n \"content\": [{\"type\": \"tool_use\", \"name\": \"test\", \"input\": {}}]\n}\n\ntool = BaseTool()\nprint(tool.detect_api_response_format(openai_response)) # \"openai\"\nprint(tool.detect_api_response_format(anthropic_response)) # \"anthropic\"\n
"},{"location":"swarms/tools/base_tool/#exception-classes","title":"Exception Classes","text":"The BaseTool class defines several custom exception classes for better error handling:
BaseToolError
: Base exception class for all BaseTool related errors
ToolValidationError
: Raised when tool validation fails
ToolExecutionError
: Raised when tool execution fails
ToolNotFoundError
: Raised when a requested tool is not found
FunctionSchemaError
: Raised when function schema conversion fails
ToolDocumentationError
: Raised when tool documentation is missing or invalid
ToolTypeHintError
: Raised when tool type hints are missing or invalid
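The documented names suggest a hierarchy rooted at BaseToolError, which lets callers catch every tool-related failure with a single handler. The sketch below uses stand-in classes (not imported from swarms) purely to illustrate that pattern; the run_tool helper is hypothetical.

```python
# Stand-in exception classes mirroring the documented names (assumption:
# all of them subclass BaseToolError, as the naming implies).
class BaseToolError(Exception):
    """Base exception for tool-related errors."""

class ToolValidationError(BaseToolError):
    pass

class ToolExecutionError(BaseToolError):
    pass

class ToolNotFoundError(BaseToolError):
    pass

def run_tool(name: str, registry: dict):
    """Hypothetical dispatcher that raises ToolNotFoundError for unknown names."""
    if name not in registry:
        raise ToolNotFoundError(f"No tool named {name!r}")
    return registry[name]()

try:
    run_tool("missing", {})
except BaseToolError as e:  # one handler catches any subclass
    caught = type(e).__name__

print(caught)  # ToolNotFoundError
```
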
A tool is a Python function designed to perform specific tasks with clear type annotations and comprehensive docstrings. Below are examples of financial tools to help you get started.
"},{"location":"swarms/tools/build_tool/#rules","title":"Rules","text":"To create a tool in the Swarms environment, follow these rules:
Functionality: The function should perform a specific task and be named appropriately.
Type Annotations:
Both input and output types must be strings (str
).
Docstrings:
Each function must include a comprehensive docstring that adheres to PEP 257 standards. The docstring should explain the function's purpose, its arguments, its return value, and any exceptions it raises.
Input and Output Types: tool inputs and outputs must both be of type str.
"},{"location":"swarms/tools/build_tool/#example-1-fetch-stock-price-from-yahoo-finance","title":"Example 1: Fetch Stock Price from Yahoo Finance","text":"
import yfinance as yf\n\ndef get_stock_price(symbol: str) -> str:\n \"\"\"\n Fetches the current stock price from Yahoo Finance.\n\n Args:\n symbol (str): The stock symbol (e.g., \"AAPL\", \"TSLA\", \"NVDA\").\n\n Returns:\n str: A formatted string containing the current stock price and basic information.\n\n Raises:\n ValueError: If the stock symbol is invalid or data cannot be retrieved.\n Exception: If there is an error with the API request.\n \"\"\"\n try:\n # Remove any whitespace and convert to uppercase\n symbol = symbol.strip().upper()\n\n if not symbol:\n raise ValueError(\"Stock symbol cannot be empty.\")\n\n # Fetch stock data\n stock = yf.Ticker(symbol)\n info = stock.info\n\n if not info or 'regularMarketPrice' not in info:\n raise ValueError(f\"Unable to fetch data for symbol: {symbol}\")\n\n current_price = info.get('regularMarketPrice', 'N/A')\n previous_close = info.get('regularMarketPreviousClose', 'N/A')\n market_cap = info.get('marketCap', 'N/A')\n company_name = info.get('longName', symbol)\n\n # Format market cap for readability\n if isinstance(market_cap, (int, float)) and market_cap > 0:\n if market_cap >= 1e12:\n market_cap_str = f\"${market_cap/1e12:.2f}T\"\n elif market_cap >= 1e9:\n market_cap_str = f\"${market_cap/1e9:.2f}B\"\n elif market_cap >= 1e6:\n market_cap_str = f\"${market_cap/1e6:.2f}M\"\n else:\n market_cap_str = f\"${market_cap:,.0f}\"\n else:\n market_cap_str = \"N/A\"\n\n # Calculate price change\n if isinstance(current_price, (int, float)) and isinstance(previous_close, (int, float)):\n price_change = current_price - previous_close\n price_change_percent = (price_change / previous_close) * 100\n change_str = f\"{price_change:+.2f} ({price_change_percent:+.2f}%)\"\n else:\n change_str = \"N/A\"\n\n result = f\"\"\"\nStock: {company_name} ({symbol})\nCurrent Price: ${current_price}\nPrevious Close: ${previous_close}\nChange: {change_str}\nMarket Cap: {market_cap_str}\n \"\"\".strip()\n\n return result\n\n except ValueError 
as e:\n print(f\"Value Error: {e}\")\n raise\n except Exception as e:\n print(f\"Error fetching stock data: {e}\")\n raise\n
"},{"location":"swarms/tools/build_tool/#example-2-fetch-cryptocurrency-price-from-coingecko","title":"Example 2: Fetch Cryptocurrency Price from CoinGecko","text":"import requests\n\ndef get_crypto_price(coin_id: str) -> str:\n \"\"\"\n Fetches the current cryptocurrency price from CoinGecko API.\n\n Args:\n coin_id (str): The cryptocurrency ID (e.g., \"bitcoin\", \"ethereum\", \"cardano\").\n\n Returns:\n str: A formatted string containing the current crypto price and market data.\n\n Raises:\n ValueError: If the coin ID is invalid or data cannot be retrieved.\n requests.exceptions.RequestException: If there is an error with the API request.\n \"\"\"\n try:\n # Remove any whitespace and convert to lowercase\n coin_id = coin_id.strip().lower()\n\n if not coin_id:\n raise ValueError(\"Coin ID cannot be empty.\")\n\n url = f\"https://api.coingecko.com/api/v3/simple/price\"\n params = {\n \"ids\": coin_id,\n \"vs_currencies\": \"usd\",\n \"include_market_cap\": \"true\",\n \"include_24hr_vol\": \"true\",\n \"include_24hr_change\": \"true\",\n \"include_last_updated_at\": \"true\"\n }\n\n response = requests.get(url, params=params, timeout=10)\n response.raise_for_status()\n data = response.json()\n\n if coin_id not in data:\n raise ValueError(f\"Coin ID '{coin_id}' not found. 
Please check the spelling.\")\n\n        coin_data = data[coin_id]\n\n        if not coin_data:\n            raise ValueError(f\"No data available for coin ID: {coin_id}\")\n\n        usd_price = coin_data.get('usd', 'N/A')\n        market_cap = coin_data.get('usd_market_cap', 'N/A')\n        volume_24h = coin_data.get('usd_24h_vol', 'N/A')\n        change_24h = coin_data.get('usd_24h_change', 'N/A')\n        last_updated = coin_data.get('last_updated_at', 'N/A')\n\n        # Format large numbers for readability\n        def format_number(value):\n            if isinstance(value, (int, float)) and value > 0:\n                if value >= 1e12:\n                    return f\"${value/1e12:.2f}T\"\n                elif value >= 1e9:\n                    return f\"${value/1e9:.2f}B\"\n                elif value >= 1e6:\n                    return f\"${value/1e6:.2f}M\"\n                elif value >= 1e3:\n                    return f\"${value/1e3:.2f}K\"\n                else:\n                    return f\"${value:,.2f}\"\n            return \"N/A\"\n\n        # Format conditional fields before building the result string\n        price_str = f\"${usd_price:,.8f}\" if isinstance(usd_price, (int, float)) else str(usd_price)\n        change_str = f\"{change_24h:+.2f}%\" if isinstance(change_24h, (int, float)) else str(change_24h)\n\n        # Format the result\n        result = f\"\"\"\nCryptocurrency: {coin_id.title()}\nCurrent Price: {price_str}\nMarket Cap: {format_number(market_cap)}\n24h Volume: {format_number(volume_24h)}\n24h Change: {change_str}\nLast Updated: {last_updated}\n        \"\"\".strip()\n\n        return result\n\n    except requests.exceptions.RequestException as e:\n        print(f\"Request Error: {e}\")\n        raise\n    except ValueError as e:\n        print(f\"Value Error: {e}\")\n        raise\n    except Exception as e:\n        print(f\"Error fetching crypto data: {e}\")\n        raise\n
"},{"location":"swarms/tools/build_tool/#example-3-calculate-portfolio-performance","title":"Example 3: Calculate Portfolio Performance","text":"def calculate_portfolio_performance(initial_investment_str: str, current_value_str: str, time_period_str: str) -> str:\n \"\"\"\n Calculates portfolio performance metrics including return percentage and annualized return.\n\n Args:\n initial_investment_str (str): The initial investment amount as a string.\n current_value_str (str): The current portfolio value as a string.\n time_period_str (str): The time period in years as a string.\n\n Returns:\n str: A formatted string containing portfolio performance metrics.\n\n Raises:\n ValueError: If any of the inputs cannot be converted to the appropriate type or are negative.\n \"\"\"\n try:\n initial_investment = float(initial_investment_str)\n current_value = float(current_value_str)\n time_period = float(time_period_str)\n\n if initial_investment <= 0 or current_value < 0 or time_period <= 0:\n raise ValueError(\"Initial investment and time period must be positive, current value must be non-negative.\")\n\n # Calculate total return\n total_return = current_value - initial_investment\n total_return_percentage = (total_return / initial_investment) * 100\n\n # Calculate annualized return\n if time_period > 0:\n annualized_return = ((current_value / initial_investment) ** (1 / time_period) - 1) * 100\n else:\n annualized_return = 0\n\n # Determine performance status\n if total_return > 0:\n status = \"Profitable\"\n elif total_return < 0:\n status = \"Loss\"\n else:\n status = \"Break-even\"\n\n result = f\"\"\"\nPortfolio Performance Analysis:\nInitial Investment: ${initial_investment:,.2f}\nCurrent Value: ${current_value:,.2f}\nTime Period: {time_period:.1f} years\n\nTotal Return: ${total_return:+,.2f} ({total_return_percentage:+.2f}%)\nAnnualized Return: {annualized_return:+.2f}%\nStatus: {status}\n \"\"\".strip()\n\n return result\n\n except ValueError as e:\n print(f\"Value 
Error: {e}\")\n raise\n except Exception as e:\n print(f\"Error calculating portfolio performance: {e}\")\n raise\n
"},{"location":"swarms/tools/build_tool/#example-4-calculate-compound-interest","title":"Example 4: Calculate Compound Interest","text":"def calculate_compound_interest(principal_str: str, rate_str: str, time_str: str, compounding_frequency_str: str) -> str:\n \"\"\"\n Calculates compound interest for investment planning.\n\n Args:\n principal_str (str): The initial investment amount as a string.\n rate_str (str): The annual interest rate (as decimal) as a string.\n time_str (str): The investment time period in years as a string.\n compounding_frequency_str (str): The number of times interest is compounded per year as a string.\n\n Returns:\n str: A formatted string containing the compound interest calculation results.\n\n Raises:\n ValueError: If any of the inputs cannot be converted to the appropriate type or are negative.\n \"\"\"\n try:\n principal = float(principal_str)\n rate = float(rate_str)\n time = float(time_str)\n n = int(compounding_frequency_str)\n\n if principal <= 0 or rate < 0 or time <= 0 or n <= 0:\n raise ValueError(\"Principal, time, and compounding frequency must be positive. Rate must be non-negative.\")\n\n # Calculate compound interest\n amount = principal * (1 + rate / n) ** (n * time)\n interest_earned = amount - principal\n\n # Calculate effective annual rate\n effective_rate = ((1 + rate / n) ** n - 1) * 100\n\n result = f\"\"\"\nCompound Interest Calculation:\nPrincipal: ${principal:,.2f}\nAnnual Rate: {rate*100:.2f}%\nTime Period: {time:.1f} years\nCompounding Frequency: {n} times per year\n\nFinal Amount: ${amount:,.2f}\nInterest Earned: ${interest_earned:,.2f}\nEffective Annual Rate: {effective_rate:.2f}%\n \"\"\".strip()\n\n return result\n\n except ValueError as e:\n print(f\"Value Error: {e}\")\n raise\n except Exception as e:\n print(f\"Error calculating compound interest: {e}\")\n raise\n
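The compound-interest formula used in Example 4 can be sanity-checked directly with plain Python, independent of the tool wrapper. The figures below are for $1,000 at 5% APR, compounded monthly for 10 years.

```python
# Sanity check of the compound-interest math from Example 4.
principal, rate, time, n = 1000.0, 0.05, 10.0, 12

# A = P * (1 + r/n)^(n*t)
amount = principal * (1 + rate / n) ** (n * time)
interest_earned = amount - principal

# Effective annual rate accounts for intra-year compounding
effective_rate = ((1 + rate / n) ** n - 1) * 100

print(f"${amount:,.2f}")  # $1,647.01
```
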
"},{"location":"swarms/tools/build_tool/#integrating-tools-into-an-agent","title":"Integrating Tools into an Agent","text":"To integrate tools into an agent, simply pass callable functions with proper type annotations and documentation into the agent class.
from swarms import Agent\n\n# Initialize the financial analysis agent\nagent = Agent(\n agent_name=\"Financial-Analysis-Agent\",\n system_prompt=(\n \"You are a professional financial analyst agent. Use the provided tools to \"\n \"analyze stocks, cryptocurrencies, and investment performance. Provide \"\n \"clear, accurate financial insights and recommendations. Always format \"\n \"responses in markdown for better readability.\"\n ),\n model_name=\"gpt-4o\",\n max_loops=3,\n autosave=True,\n dashboard=False,\n verbose=True,\n streaming_on=True,\n dynamic_temperature_enabled=True,\n saved_state_path=\"financial_agent.json\",\n tools=[get_stock_price, get_crypto_price, calculate_portfolio_performance],\n user_name=\"financial_analyst\",\n retry_attempts=3,\n context_length=200000,\n)\n\n# Run the agent\nresponse = agent(\"Analyze the current price of Apple stock and Bitcoin, then calculate the performance of a $10,000 investment in each over the past 2 years.\")\nprint(response)\n
"},{"location":"swarms/tools/build_tool/#complete-financial-analysis-example","title":"Complete Financial Analysis Example","text":"import yfinance as yf\nimport requests\nfrom swarms import Agent\n\ndef get_stock_price(symbol: str) -> str:\n \"\"\"\n Fetches the current stock price from Yahoo Finance.\n\n Args:\n symbol (str): The stock symbol (e.g., \"AAPL\", \"TSLA\", \"NVDA\").\n\n Returns:\n str: A formatted string containing the current stock price and basic information.\n\n Raises:\n ValueError: If the stock symbol is invalid or data cannot be retrieved.\n Exception: If there is an error with the API request.\n \"\"\"\n try:\n symbol = symbol.strip().upper()\n\n if not symbol:\n raise ValueError(\"Stock symbol cannot be empty.\")\n\n stock = yf.Ticker(symbol)\n info = stock.info\n\n if not info or 'regularMarketPrice' not in info:\n raise ValueError(f\"Unable to fetch data for symbol: {symbol}\")\n\n current_price = info.get('regularMarketPrice', 'N/A')\n previous_close = info.get('regularMarketPreviousClose', 'N/A')\n market_cap = info.get('marketCap', 'N/A')\n company_name = info.get('longName', symbol)\n\n if isinstance(market_cap, (int, float)) and market_cap > 0:\n if market_cap >= 1e12:\n market_cap_str = f\"${market_cap/1e12:.2f}T\"\n elif market_cap >= 1e9:\n market_cap_str = f\"${market_cap/1e9:.2f}B\"\n elif market_cap >= 1e6:\n market_cap_str = f\"${market_cap/1e6:.2f}M\"\n else:\n market_cap_str = f\"${market_cap:,.0f}\"\n else:\n market_cap_str = \"N/A\"\n\n if isinstance(current_price, (int, float)) and isinstance(previous_close, (int, float)):\n price_change = current_price - previous_close\n price_change_percent = (price_change / previous_close) * 100\n change_str = f\"{price_change:+.2f} ({price_change_percent:+.2f}%)\"\n else:\n change_str = \"N/A\"\n\n result = f\"\"\"\nStock: {company_name} ({symbol})\nCurrent Price: ${current_price}\nPrevious Close: ${previous_close}\nChange: {change_str}\nMarket Cap: {market_cap_str}\n 
\"\"\".strip()\n\n return result\n\n except ValueError as e:\n print(f\"Value Error: {e}\")\n raise\n except Exception as e:\n print(f\"Error fetching stock data: {e}\")\n raise\n\ndef get_crypto_price(coin_id: str) -> str:\n \"\"\"\n Fetches the current cryptocurrency price from CoinGecko API.\n\n Args:\n coin_id (str): The cryptocurrency ID (e.g., \"bitcoin\", \"ethereum\", \"cardano\").\n\n Returns:\n str: A formatted string containing the current crypto price and market data.\n\n Raises:\n ValueError: If the coin ID is invalid or data cannot be retrieved.\n requests.exceptions.RequestException: If there is an error with the API request.\n \"\"\"\n try:\n coin_id = coin_id.strip().lower()\n\n if not coin_id:\n raise ValueError(\"Coin ID cannot be empty.\")\n\n url = f\"https://api.coingecko.com/api/v3/simple/price\"\n params = {\n \"ids\": coin_id,\n \"vs_currencies\": \"usd\",\n \"include_market_cap\": \"true\",\n \"include_24hr_vol\": \"true\",\n \"include_24hr_change\": \"true\",\n \"include_last_updated_at\": \"true\"\n }\n\n response = requests.get(url, params=params, timeout=10)\n response.raise_for_status()\n data = response.json()\n\n if coin_id not in data:\n raise ValueError(f\"Coin ID '{coin_id}' not found. 
Please check the spelling.\")\n\n        coin_data = data[coin_id]\n\n        if not coin_data:\n            raise ValueError(f\"No data available for coin ID: {coin_id}\")\n\n        usd_price = coin_data.get('usd', 'N/A')\n        market_cap = coin_data.get('usd_market_cap', 'N/A')\n        volume_24h = coin_data.get('usd_24h_vol', 'N/A')\n        change_24h = coin_data.get('usd_24h_change', 'N/A')\n        last_updated = coin_data.get('last_updated_at', 'N/A')\n\n        def format_number(value):\n            if isinstance(value, (int, float)) and value > 0:\n                if value >= 1e12:\n                    return f\"${value/1e12:.2f}T\"\n                elif value >= 1e9:\n                    return f\"${value/1e9:.2f}B\"\n                elif value >= 1e6:\n                    return f\"${value/1e6:.2f}M\"\n                elif value >= 1e3:\n                    return f\"${value/1e3:.2f}K\"\n                else:\n                    return f\"${value:,.2f}\"\n            return \"N/A\"\n\n        price_str = f\"${usd_price:,.8f}\" if isinstance(usd_price, (int, float)) else str(usd_price)\n        change_str = f\"{change_24h:+.2f}%\" if isinstance(change_24h, (int, float)) else str(change_24h)\n\n        result = f\"\"\"\nCryptocurrency: {coin_id.title()}\nCurrent Price: {price_str}\nMarket Cap: {format_number(market_cap)}\n24h Volume: {format_number(volume_24h)}\n24h Change: {change_str}\nLast Updated: {last_updated}\n        \"\"\".strip()\n\n        return result\n\n    except requests.exceptions.RequestException as e:\n        print(f\"Request Error: {e}\")\n        raise\n    except ValueError as e:\n        print(f\"Value Error: {e}\")\n        raise\n    except Exception as e:\n        print(f\"Error fetching crypto data: {e}\")\n        raise\n\n# Initialize the financial analysis agent\nagent = Agent(\n    agent_name=\"Financial-Analysis-Agent\",\n    system_prompt=(\n        \"You are a professional financial analyst agent specializing in stock and \"\n        \"cryptocurrency analysis. Use the provided tools to fetch real-time market \"\n        \"data and provide comprehensive financial insights. 
Always present data \"\n \"in a clear, professional format with actionable recommendations.\"\n ),\n model_name=\"gpt-4o\",\n max_loops=3,\n autosave=True,\n dashboard=False,\n verbose=True,\n streaming_on=True,\n dynamic_temperature_enabled=True,\n saved_state_path=\"financial_agent.json\",\n tools=[get_stock_price, get_crypto_price],\n user_name=\"financial_analyst\",\n retry_attempts=3,\n context_length=200000,\n)\n\n# Run the agent\nresponse = agent(\"What are the current prices and market data for Apple stock and Bitcoin? Provide a brief analysis of their performance.\")\nprint(response)\n
"},{"location":"swarms/tools/main/","title":"The Swarms Tool System: Functions, Pydantic BaseModels as Tools, and Radical Customization","text":"This guide provides an in-depth look at the Swarms Tool System, focusing on its functions, the use of Pydantic BaseModels as tools, and the extensive customization options available. Aimed at developers, this documentation highlights how the Swarms framework works and offers detailed examples of creating and customizing tools and agents, specifically for accounting tasks.
The Swarms Tool System is a flexible and extensible component of the Swarms framework that allows for the creation, registration, and utilization of various tools. These tools can perform a wide range of tasks and are integrated into agents to provide specific functionalities. The system supports multiple ways to define tools, including using Pydantic BaseModels, functions, and dictionaries.
"},{"location":"swarms/tools/main/#architecture","title":"Architecture","text":"The architecture of the Swarms Tool System is designed to be highly modular. It consists of two main components, described below: tools and agents.
Tools are the core functional units within the Swarms framework. They can be defined in various ways:
Agents utilize tools to perform tasks. They are configured with a set of tools and schemas, and they execute the tools based on the input they receive.
"},{"location":"swarms/tools/main/#detailed-documentation","title":"Detailed Documentation","text":""},{"location":"swarms/tools/main/#tool-definition","title":"Tool Definition","text":""},{"location":"swarms/tools/main/#using-pydantic-basemodels","title":"Using Pydantic BaseModels","text":"Pydantic BaseModels provide a structured way to define tool inputs and outputs. They ensure data validation and serialization, making them ideal for complex data handling.
Example:
Define Pydantic BaseModels for accounting tasks:
from pydantic import BaseModel\n\nclass CalculateTax(BaseModel):\n income: float\n\nclass GenerateInvoice(BaseModel):\n client_name: str\n amount: float\n date: str\n\nclass SummarizeExpenses(BaseModel):\n expenses: list[dict]\n
Define tool functions using these models:
def calculate_tax(data: CalculateTax) -> dict:\n tax_rate = 0.3 # Example tax rate\n tax = data.income * tax_rate\n return {\"income\": data.income, \"tax\": tax}\n\ndef generate_invoice(data: GenerateInvoice) -> dict:\n invoice = {\n \"client_name\": data.client_name,\n \"amount\": data.amount,\n \"date\": data.date,\n \"invoice_id\": \"INV12345\"\n }\n return invoice\n\ndef summarize_expenses(data: SummarizeExpenses) -> dict:\n total_expenses = sum(expense['amount'] for expense in data.expenses)\n return {\"total_expenses\": total_expenses}\n
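Before registering these with an agent, the tool functions can be exercised directly; Pydantic validates the input on model construction, so malformed arguments fail before the tool logic runs. This sketch repeats the definitions above so it is self-contained.

```python
from pydantic import BaseModel, ValidationError

class CalculateTax(BaseModel):
    income: float

def calculate_tax(data: CalculateTax) -> dict:
    tax_rate = 0.3  # Example tax rate
    return {"income": data.income, "tax": data.income * tax_rate}

# Valid input passes validation and reaches the tool logic
result = calculate_tax(CalculateTax(income=100000.0))
# result["tax"] is approximately 30000.0

# Invalid input is rejected by Pydantic before the tool runs
try:
    CalculateTax(income="not a number")
    rejected = False
except ValidationError:
    rejected = True

print(rejected)  # True
```
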
"},{"location":"swarms/tools/main/#using-functions-directly","title":"Using Functions Directly","text":"Tools can also be defined directly as functions without using Pydantic models. This approach is suitable for simpler tasks where complex validation is not required.
Example:
def basic_tax_calculation(income: float) -> dict:\n tax_rate = 0.25\n tax = income * tax_rate\n return {\"income\": income, \"tax\": tax}\n
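A plain-function tool like this is just a callable, so it can be invoked directly for a quick check before handing it to an agent:

```python
# Same function as above, repeated so the snippet is self-contained.
def basic_tax_calculation(income: float) -> dict:
    tax_rate = 0.25
    tax = income * tax_rate
    return {"income": income, "tax": tax}

print(basic_tax_calculation(1000.0))  # {'income': 1000.0, 'tax': 250.0}
```
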
"},{"location":"swarms/tools/main/#using-dictionaries","title":"Using Dictionaries","text":"Tools can be represented as dictionaries, providing maximum flexibility. This method is useful when the tool's functionality is more dynamic or when integrating with external systems.
Example:
basic_tool_schema = {\n \"name\": \"basic_tax_tool\",\n \"description\": \"A basic tax calculation tool\",\n \"parameters\": {\n \"type\": \"object\",\n \"properties\": {\n \"income\": {\"type\": \"number\", \"description\": \"Income amount\"}\n },\n \"required\": [\"income\"]\n }\n}\n\ndef basic_tax_tool(income: float) -> dict:\n tax_rate = 0.2\n tax = income * tax_rate\n return {\"income\": income, \"tax\": tax}\n
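A dict-defined tool pairs a JSON schema with a plain function. The minimal dispatcher below is hypothetical (not part of the Swarms API) and only illustrates how the schema's "name" field can route a model-produced tool call to the matching function:

```python
basic_tool_schema = {
    "name": "basic_tax_tool",
    "description": "A basic tax calculation tool",
    "parameters": {
        "type": "object",
        "properties": {
            "income": {"type": "number", "description": "Income amount"}
        },
        "required": ["income"],
    },
}

def basic_tax_tool(income: float) -> dict:
    tax_rate = 0.2
    return {"income": income, "tax": income * tax_rate}

# Hypothetical registry keyed by the schema's "name"
registry = {basic_tool_schema["name"]: basic_tax_tool}

# A tool call as a model might emit it: a name plus keyword arguments
tool_call = {"name": "basic_tax_tool", "arguments": {"income": 1000.0}}
result = registry[tool_call["name"]](**tool_call["arguments"])
print(result)  # tax is approximately 200.0
```
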
"},{"location":"swarms/tools/main/#tool-registration","title":"Tool Registration","text":"Tools need to be registered with the agent for it to utilize them. This can be done by specifying the tools in the tools
parameter during agent initialization.
Example:
from swarms import Agent\nfrom llama_hosted import llama3Hosted\n\n# Define Pydantic BaseModels for accounting tasks\nclass CalculateTax(BaseModel):\n income: float\n\nclass GenerateInvoice(BaseModel):\n client_name: str\n amount: float\n date: str\n\nclass SummarizeExpenses(BaseModel):\n expenses: list[dict]\n\n# Define tool functions using these models\ndef calculate_tax(data: CalculateTax) -> dict:\n tax_rate = 0.3\n tax = data.income * tax_rate\n return {\"income\": data.income, \"tax\": tax}\n\ndef generate_invoice(data: GenerateInvoice) -> dict:\n invoice = {\n \"client_name\": data.client_name,\n \"amount\": data.amount,\n \"date\": data.date,\n \"invoice_id\": \"INV12345\"\n }\n return invoice\n\ndef summarize_expenses(data: SummarizeExpenses) -> dict:\n total_expenses = sum(expense['amount'] for expense in data.expenses)\n return {\"total_expenses\": total_expenses}\n\n# Function to generate a tool schema for demonstration purposes\ndef create_tool_schema():\n return {\n \"name\": \"execute\",\n \"description\": \"Executes code on the user's machine\",\n \"parameters\": {\n \"type\": \"object\",\n \"properties\": {\n \"language\": {\n \"type\": \"string\",\n \"description\": \"Programming language\",\n \"enum\": [\"python\", \"java\"]\n },\n \"code\": {\"type\": \"string\", \"description\": \"Code to execute\"}\n },\n \"required\": [\"language\", \"code\"]\n }\n }\n\n# Initialize the agent with the tools\nagent = Agent(\n agent_name=\"Accounting Agent\",\n system_prompt=\"This agent assists with various accounting tasks.\",\n sop_list=[\"Provide accurate and timely accounting services.\"],\n llm=llama3Hosted(),\n max_loops=\"auto\",\n interactive=True,\n verbose=True,\n tool_schema=BaseModel,\n list_base_models=[\n CalculateTax,\n GenerateInvoice,\n SummarizeExpenses\n ],\n output_type=str,\n metadata_output_type=\"json\",\n function_calling_format_type=\"OpenAI\",\n function_calling_type=\"json\",\n tools=[\n calculate_tax,\n generate_invoice,\n 
summarize_expenses\n ],\n list_base_models_json=create_tool_schema(),\n)\n
"},{"location":"swarms/tools/main/#running-the-agent","title":"Running the Agent","text":"The agent can execute tasks using the run
method. This method takes a prompt and determines the appropriate tool to use based on the input.
Example:
# Example task: Calculate tax for an income\nresult = agent.run(\"Calculate the tax for an income of $50,000.\")\nprint(f\"Result: {result}\")\n\n# Example task: Generate an invoice\ninvoice_data = agent.run(\"Generate an invoice for John Doe for $1500 on 2024-06-01.\")\nprint(f\"Invoice Data: {invoice_data}\")\n\n# Example task: Summarize expenses\nexpenses = [\n {\"amount\": 200.0, \"description\": \"Office supplies\"},\n {\"amount\": 1500.0, \"description\": \"Software licenses\"},\n {\"amount\": 300.0, \"description\": \"Travel expenses\"}\n]\nsummary = agent.run(\"Summarize these expenses: \" + str(expenses))\nprint(f\"Expenses Summary: {summary}\")\n
"},{"location":"swarms/tools/main/#customizing-tools","title":"Customizing Tools","text":"Custom tools can be created to extend the functionality of the Swarms framework. This can include integrating external APIs, performing complex calculations, or handling specialized data formats.
Example: Custom Accounting Tool
from pydantic import BaseModel\n\nclass CustomAccountingTool(BaseModel):\n data: dict\n\ndef custom_accounting_tool(data: CustomAccountingTool) -> dict:\n # Custom logic for the accounting tool\n result = {\n \"status\": \"success\",\n \"data_processed\": len(data.data)\n }\n return result\n\n# Register the custom tool with the agent\nagent = Agent(\n agent_name=\"Accounting Agent\",\n system_prompt=\"This agent assists with various accounting tasks.\",\n sop_list=[\"Provide accurate and timely accounting services.\"],\n llm=llama3Hosted(),\n max_loops=\"auto\",\n interactive=True,\n verbose=True,\n tool_schema=BaseModel,\n list_base_models=[\n CalculateTax,\n GenerateInvoice,\n SummarizeExpenses,\n CustomAccountingTool\n ],\n output_type=str,\n metadata_output_type=\"json\",\n function_calling_format_type=\"OpenAI\",\n function_calling_type=\"json\",\n tools=[\n calculate_tax,\n generate_invoice,\n summarize_expenses,\n custom_accounting_tool\n ],\n list_base_models_json=create_tool_schema(),\n)\n
"},{"location":"swarms/tools/main/#advanced-customization","title":"Advanced Customization","text":"Advanced customization involves modifying the core components of the Swarms framework. This includes extending existing classes, adding new methods, or integrating third-party libraries.
Example: Extending the Agent Class
from swarms import Agent\n\nclass AdvancedAccountingAgent(Agent):\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n\n def custom_behavior(self):\n print(\"Executing custom behavior\")\n\n def another_custom_method(self):\n print(\"Another custom method\")\n\n# Initialize the advanced agent\nadvanced_agent = AdvancedAccountingAgent(\n agent_name=\"Advanced Accounting Agent\",\n system_prompt=\"This agent performs advanced accounting tasks.\",\n sop_list=[\"Provide advanced accounting services.\"],\n llm=llama3Hosted(),\n max_loops=\"auto\",\n interactive=True,\n verbose=True,\n tool_schema=BaseModel,\n list_base_models=[\n CalculateTax,\n GenerateInvoice,\n SummarizeExpenses,\n CustomAccountingTool\n ],\n output_type=str,\n metadata_output_type=\"json\",\n function_calling_format_type=\"OpenAI\",\n function_calling_type=\"json\",\n tools=[\n calculate_tax,\n generate_invoice,\n summarize_expenses,\n custom_accounting_tool\n ],\n list_base_models_json=create_tool_schema(),\n)\n\n# Call custom methods\nadvanced_agent.custom_behavior()\nadvanced_agent.another_custom_method()\n
"},{"location":"swarms/tools/main/#integrating-external-libraries","title":"Integrating External Libraries","text":"You can integrate external libraries to extend the functionality of your tools. This is useful for adding new capabilities or leveraging existing libraries for complex tasks.
Example: Integrating Pandas for Data Processing
import pandas as pd\nfrom pydantic import BaseModel\n\nclass DataFrameTool(BaseModel):\n data: list[dict]\n\ndef process_data_frame(data: DataFrameTool) -> dict:\n df = pd.DataFrame(data.data)\n summary = df.describe().to_dict()\n return {\"summary\": summary}\n\n# Register the tool with the agent\nagent = Agent(\n agent_name=\"Data Processing Agent\",\n system_prompt=\"This agent processes data frames.\",\n sop_list=[\"Provide data processing services.\"],\n llm=llama3Hosted(),\n max_loops=\"auto\",\n interactive=True,\n verbose=True,\n tool_schema=BaseModel,\n list_base_models=[DataFrameTool],\n output_type=str,\n metadata_output_type=\"json\",\n function_calling_format_type=\"OpenAI\",\n function_calling_type=\"json\",\n tools=[process_data_frame],\n list_base_models_json=create_tool_schema(),\n)\n\n# Example task: Process a data frame\ndata = [\n {\"col1\": 1, \"col2\": 2},\n {\"col1\": 3, \"col2\": 4},\n {\"col1\": 5, \"col2\": 6}\n]\nresult = agent.run(\"Process this data frame: \" + str(data))\nprint(f\"Data Frame Summary: {result}\")\n
"},{"location":"swarms/tools/main/#conclusion","title":"Conclusion","text":"The Swarms Tool System provides a robust and flexible framework for defining and utilizing tools within agents. By leveraging Pydantic BaseModels, functions, and dictionaries, developers can create highly customized tools to perform a wide range of tasks. The extensive customization options allow for the integration of external libraries and the extension of core components, making the Swarms framework suitable for diverse applications.
This guide has covered the fundamental concepts and provided detailed examples to help you get started with the Swarms Tool System. With this foundation, you can explore and implement advanced features to build powerful multi-agent applications.
"},{"location":"swarms/tools/mcp_client_call/","title":"MCP Client Call Reference Documentation","text":"This document provides a comprehensive reference for the MCP (Model Context Protocol) client call functions, including detailed parameter descriptions, return types, and usage examples.
"},{"location":"swarms/tools/mcp_client_call/#table-of-contents","title":"Table of Contents","text":"aget_mcp_tools
get_mcp_tools_sync
get_tools_for_multiple_mcp_servers
execute_tool_call_simple
Asynchronously fetches available MCP tools from the server with retry logic.
"},{"location":"swarms/tools/mcp_client_call/#parameters","title":"Parameters","text":"Parameter Type Required Description server_path Optional[str] No Path to the MCP server script format str No Format of the returned tools (default: \"openai\") connection Optional[MCPConnection] No MCP connection object *args Any No Additional positional arguments **kwargs Any No Additional keyword arguments"},{"location":"swarms/tools/mcp_client_call/#returns","title":"Returns","text":"List[Dict[str, Any]]
: List of available MCP tools in OpenAI format
MCPValidationError
: If server_path is invalid
MCPConnectionError
: If connection to server fails
import asyncio\nfrom swarms.tools.mcp_client_call import aget_mcp_tools\nfrom swarms.tools.mcp_connection import MCPConnection\n\nasync def main():\n # Using server path\n tools = await aget_mcp_tools(server_path=\"http://localhost:8000\")\n\n # Using connection object\n connection = MCPConnection(\n host=\"localhost\",\n port=8000,\n headers={\"Authorization\": \"Bearer token\"}\n )\n tools = await aget_mcp_tools(connection=connection)\n\n print(f\"Found {len(tools)} tools\")\n\nif __name__ == \"__main__\":\n asyncio.run(main())\n
"},{"location":"swarms/tools/mcp_client_call/#get_mcp_tools_sync","title":"get_mcp_tools_sync","text":"Synchronous version of get_mcp_tools that handles event loop management.
"},{"location":"swarms/tools/mcp_client_call/#parameters_1","title":"Parameters","text":"Parameter Type Required Description server_path Optional[str] No Path to the MCP server script format str No Format of the returned tools (default: \"openai\") connection Optional[MCPConnection] No MCP connection object *args Any No Additional positional arguments **kwargs Any No Additional keyword arguments"},{"location":"swarms/tools/mcp_client_call/#returns_1","title":"Returns","text":"List[Dict[str, Any]]
: List of available MCP tools in OpenAI format
MCPValidationError
: If server_path is invalid
MCPConnectionError
: If connection to server fails
MCPExecutionError
: If event loop management fails
from swarms.tools.mcp_client_call import get_mcp_tools_sync\nfrom swarms.tools.mcp_connection import MCPConnection\n\n# Using server path\ntools = get_mcp_tools_sync(server_path=\"http://localhost:8000\")\n\n# Using connection object\nconnection = MCPConnection(\n host=\"localhost\",\n port=8000,\n headers={\"Authorization\": \"Bearer token\"}\n)\ntools = get_mcp_tools_sync(connection=connection)\n\nprint(f\"Found {len(tools)} tools\")\n
"},{"location":"swarms/tools/mcp_client_call/#get_tools_for_multiple_mcp_servers","title":"get_tools_for_multiple_mcp_servers","text":"Get tools for multiple MCP servers concurrently using ThreadPoolExecutor.
"},{"location":"swarms/tools/mcp_client_call/#parameters_2","title":"Parameters","text":"Parameter Type Required Description urls List[str] Yes List of server URLs to fetch tools from connections List[MCPConnection] No Optional list of MCPConnection objects format str No Format to return tools in (default: \"openai\") output_type Literal[\"json\", \"dict\", \"str\"] No Type of output format (default: \"str\") max_workers Optional[int] No Maximum number of worker threads"},{"location":"swarms/tools/mcp_client_call/#returns_2","title":"Returns","text":"List[Dict[str, Any]]
: Combined list of tools from all servers
MCPExecutionError
: If fetching tools from any server fails
from swarms.tools.mcp_client_call import get_tools_for_multiple_mcp_servers\nfrom swarms.tools.mcp_connection import MCPConnection\n\n# Define server URLs\nurls = [\n \"http://server1:8000\",\n \"http://server2:8000\"\n]\n\n# Optional: Define connections\nconnections = [\n MCPConnection(host=\"server1\", port=8000),\n MCPConnection(host=\"server2\", port=8000)\n]\n\n# Get tools from all servers\ntools = get_tools_for_multiple_mcp_servers(\n urls=urls,\n connections=connections,\n format=\"openai\",\n output_type=\"dict\",\n max_workers=4\n)\n\nprint(f\"Found {len(tools)} tools across all servers\")\n
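The concurrent fan-out that `get_tools_for_multiple_mcp_servers` performs can be sketched with the standard library. This is illustrative only: `fetch_tools` below is a stand-in stub, not the real MCP call.

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_tools(url):
    # Stand-in for a per-server fetch; the real function talks to an MCP server.
    return [{"server": url, "name": "example_tool"}]

def tools_for_servers(urls, max_workers=4):
    """Fetch each server's tool list on a worker thread and merge the results."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        per_server = list(pool.map(fetch_tools, urls))  # map preserves input order
    # Flatten the per-server lists into one combined list
    return [tool for tools in per_server for tool in tools]

tools = tools_for_servers(["http://server1:8000", "http://server2:8000"])
print(len(tools))  # 2
```

Because `ThreadPoolExecutor.map` yields results in input order, the combined list is deterministic even though the fetches run concurrently.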
"},{"location":"swarms/tools/mcp_client_call/#execute_tool_call_simple","title":"execute_tool_call_simple","text":"Execute a tool call using the MCP client.
"},{"location":"swarms/tools/mcp_client_call/#parameters_3","title":"Parameters","text":"Parameter Type Required Description response Any No Tool call response object server_path str No Path to the MCP server connection Optional[MCPConnection] No MCP connection object output_type Literal[\"json\", \"dict\", \"str\", \"formatted\"] No Type of output format (default: \"str\") *args Any No Additional positional arguments **kwargs Any No Additional keyword arguments"},{"location":"swarms/tools/mcp_client_call/#returns_3","title":"Returns","text":"List[Dict[str, Any]]
: Result of the tool execution
MCPConnectionError
: If connection to server fails
MCPExecutionError
: If tool execution fails
import asyncio\nfrom swarms.tools.mcp_client_call import execute_tool_call_simple\nfrom swarms.tools.mcp_connection import MCPConnection\n\nasync def main():\n # Example tool call response\n response = {\n \"name\": \"example_tool\",\n \"parameters\": {\"param1\": \"value1\"}\n }\n\n # Using server path\n result = await execute_tool_call_simple(\n response=response,\n server_path=\"http://localhost:8000\",\n output_type=\"json\"\n )\n\n # Using connection object\n connection = MCPConnection(\n host=\"localhost\",\n port=8000,\n headers={\"Authorization\": \"Bearer token\"}\n )\n result = await execute_tool_call_simple(\n response=response,\n connection=connection,\n output_type=\"dict\"\n )\n\n print(f\"Tool execution result: {result}\")\n\nif __name__ == \"__main__\":\n asyncio.run(main())\n
"},{"location":"swarms/tools/mcp_client_call/#error-handling","title":"Error Handling","text":"The MCP client functions use a retry mechanism with exponential backoff for failed requests. The following error types may be raised:
MCPValidationError
: Raised when input validation fails
MCPConnectionError
: Raised when connection to the MCP server fails
MCPExecutionError
: Raised when tool execution fails
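The retry mechanism with exponential backoff described above can be sketched in plain Python. This is a minimal illustration of the pattern, not the framework's actual implementation; the attempt counts and delays are made-up defaults.

```python
import time

def with_retries(fn, max_attempts=3, base_delay=0.01):
    """Call fn, retrying failed attempts with exponentially growing delays."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # attempts exhausted: surface the error (e.g. as MCPConnectionError)
            time.sleep(base_delay * (2 ** attempt))  # delay doubles each retry

# A deliberately flaky call that succeeds on the third attempt
calls = {"count": 0}
def flaky_fetch():
    calls["count"] += 1
    if calls["count"] < 3:
        raise ConnectionError("transient failure")
    return "tools"

print(with_retries(flaky_fetch))  # -> tools (after two retries)
```

Backoff keeps a briefly unavailable server from being hammered: each retry waits twice as long as the previous one before the error is finally propagated.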
The ToolStorage
module provides a structured and efficient way to manage and utilize various tool functions. It is designed to store tool functions, manage settings, and ensure smooth registration and retrieval of tools. This module is particularly useful in applications that require dynamic management of a collection of functions, such as plugin systems, modular software, or any application where functions need to be registered and called dynamically.
The ToolStorage
class is the core component of the module. It provides functionalities to add, retrieve, and list tool functions as well as manage settings.
verbose (bool): A flag to enable verbose logging.
tools (List[Callable]): A list of tool functions.
_tools (Dict[str, Callable]): A dictionary that stores the tools, where the key is the tool name and the value is the tool function.
_settings (Dict[str, Any]):
A dictionary that stores the settings, where the key is the setting name and the value is the setting value."},{"location":"swarms/tools/tool_storage/#methods","title":"Methods","text":""},{"location":"swarms/tools/tool_storage/#__init__","title":"__init__
","text":"Initializes the ToolStorage
instance.
verbose (bool, default None): A flag to enable verbose logging.
tools (List[Callable], default None): A list of tool functions to initialize the storage with.
*args (tuple): Additional positional arguments.
**kwargs (dict):
Additional keyword arguments."},{"location":"swarms/tools/tool_storage/#add_tool","title":"add_tool
","text":"Adds a tool to the storage.
func (Callable): The tool function to be added.
Raises: - ValueError
: If a tool with the same name already exists.
get_tool
","text":"Retrieves a tool by its name.
name (str): The name of the tool to retrieve.
Returns: - Callable
: The tool function.
Raises: - ValueError
: If no tool with the given name is found.
set_setting
","text":"Sets a setting in the storage.
key (str): The key for the setting.
value (Any):
The value for the setting."},{"location":"swarms/tools/tool_storage/#get_setting","title":"get_setting
","text":"Gets a setting from the storage.
key (str): The key for the setting.
Returns: - Any
: The value of the setting.
Raises: - KeyError
: If the setting is not found.
list_tools
","text":"Lists all registered tools.
Returns: - List[str]
: A list of tool names.
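The methods described above can be summarized with a toy re-implementation. This sketch is illustrative only and is not the actual Swarms source; it simply mirrors the documented behavior (duplicate names rejected, missing tools raising `ValueError`, missing settings raising `KeyError`).

```python
from typing import Any, Callable, Dict, List

class MiniToolStorage:
    """Toy version of the ToolStorage behavior documented above."""

    def __init__(self) -> None:
        self._tools: Dict[str, Callable] = {}
        self._settings: Dict[str, Any] = {}

    def add_tool(self, func: Callable) -> None:
        if func.__name__ in self._tools:
            raise ValueError(f"Tool with name {func.__name__} already exists.")
        self._tools[func.__name__] = func

    def get_tool(self, name: str) -> Callable:
        if name not in self._tools:
            raise ValueError(f"No tool found with name: {name}")
        return self._tools[name]

    def set_setting(self, key: str, value: Any) -> None:
        self._settings[key] = value

    def get_setting(self, key: str) -> Any:
        return self._settings[key]  # raises KeyError when the key is missing

    def list_tools(self) -> List[str]:
        return list(self._tools)

storage = MiniToolStorage()

def add_numbers(x: int, y: int) -> int:
    return x + y

storage.add_tool(add_numbers)
print(storage.list_tools())                   # ['add_numbers']
print(storage.get_tool("add_numbers")(2, 3))  # 5
```

Note that `get_tool` returns the callable itself; you invoke it afterwards, which is why the registry can serve as the backbone of a plugin system.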
The tool_registry
decorator registers a function as a tool in the storage.
storage (ToolStorage):
The storage instance to register the tool in. Returns: - Callable
: The decorator function.
from swarms import ToolStorage, tool_registry\n\nstorage = ToolStorage()\n\n\n# Example usage\n@tool_registry(storage)\ndef example_tool(x: int, y: int) -> int:\n \"\"\"\n Example tool function that adds two numbers.\n\n Args:\n x (int): The first number.\n y (int): The second number.\n\n Returns:\n int: The sum of the two numbers.\n \"\"\"\n return x + y\n\n\n# Query all the tools and get the example tool\nprint(storage.list_tools()) # Should print ['example_tool']\n# print(storage.get_tool('example_tool')) # Should print <function example_tool at 0x...>\n\n# Find the tool by name and call it\nprint(storage.get_tool(\"example_tool\")(2, 3)) # Should print 5\n\n\n# Test the storage and querying\nif __name__ == \"__main__\":\n print(storage.list_tools()) # Should print ['example_tool']\n print(storage.get_tool(\"example_tool\")(2, 3)) # Should print 5\n storage.set_setting(\"example_setting\", 42)\n print(storage.get_setting(\"example_setting\")) # Should print 42\n
"},{"location":"swarms/tools/tool_storage/#basic-usage","title":"Basic Usage","text":""},{"location":"swarms/tools/tool_storage/#example-1-initializing-toolstorage-and-adding-a-tool","title":"Example 1: Initializing ToolStorage and Adding a Tool","text":"from swarms.tools.tool_registry import ToolStorage, tool_registry\n\n# Initialize ToolStorage\nstorage = ToolStorage()\n\n# Define a tool function\n@tool_registry(storage)\ndef add_numbers(x: int, y: int) -> int:\n return x + y\n\n# List tools\nprint(storage.list_tools()) # Output: ['add_numbers']\n\n# Retrieve and use the tool\nadd_tool = storage.get_tool('add_numbers')\nprint(add_tool(5, 3)) # Output: 8\n
"},{"location":"swarms/tools/tool_storage/#advanced-usage","title":"Advanced Usage","text":""},{"location":"swarms/tools/tool_storage/#example-2-managing-settings","title":"Example 2: Managing Settings","text":"# Set a setting\nstorage.set_setting('max_retries', 5)\n\n# Get a setting\nmax_retries = storage.get_setting('max_retries')\nprint(max_retries) # Output: 5\n
"},{"location":"swarms/tools/tool_storage/#error-handling","title":"Error Handling","text":""},{"location":"swarms/tools/tool_storage/#example-3-handling-errors-in-tool-retrieval","title":"Example 3: Handling Errors in Tool Retrieval","text":"try:\n non_existent_tool = storage.get_tool('non_existent')\nexcept ValueError as e:\n print(e) # Output: No tool found with name: non_existent\n
"},{"location":"swarms/tools/tool_storage/#example-4-handling-duplicate-tool-addition","title":"Example 4: Handling Duplicate Tool Addition","text":"try:\n @tool_registry(storage)\n def add_numbers(x: int, y: int) -> int:\n return x + y\nexcept ValueError as e:\n print(e) # Output: Tool with name add_numbers already exists.\n
"},{"location":"swarms/tools/tool_storage/#conclusion","title":"Conclusion","text":"The ToolStorage
module provides a robust solution for managing tool functions and settings. Its design allows for easy registration, retrieval, and management of tools, making it a valuable asset in various applications requiring dynamic function handling. The inclusion of detailed logging ensures that the operations are transparent and any issues can be quickly identified and resolved.
Swarms provides a comprehensive toolkit for integrating various types of tools into your AI agents. This guide covers all available tool options including callable functions, MCP servers, schemas, and more.
"},{"location":"swarms/tools/tools_examples/#installation","title":"Installation","text":"pip install swarms\n
"},{"location":"swarms/tools/tools_examples/#overview","title":"Overview","text":"Swarms provides a comprehensive suite of tool integration methods to enhance your AI agents' capabilities:
Tool Type Description Callable Functions Direct integration of Python functions with proper type hints and comprehensive docstrings for immediate tool functionality MCP Servers Model Context Protocol servers enabling distributed tool functionality across multiple services and environments Tool Schemas Structured tool definitions that provide standardized interfaces and validation for tool integration Tool Collections Pre-built tool packages offering ready-to-use functionality for common use cases"},{"location":"swarms/tools/tools_examples/#method-1-callable-functions","title":"Method 1: Callable Functions","text":"Callable functions are the simplest way to add tools to your Swarms agents. They are regular Python functions with type hints and comprehensive docstrings.
"},{"location":"swarms/tools/tools_examples/#step-1-define-your-tool-functions","title":"Step 1: Define Your Tool Functions","text":"Create functions with the following requirements:
Type hints for all parameters and return values
Comprehensive docstrings with Args, Returns, Raises, and Examples sections
Error handling for robust operation
import json\nimport requests\nfrom swarms import Agent\n\n\ndef get_coin_price(coin_id: str, vs_currency: str = \"usd\") -> str:\n \"\"\"\n Get the current price of a specific cryptocurrency.\n\n Args:\n coin_id (str): The CoinGecko ID of the cryptocurrency \n Examples: 'bitcoin', 'ethereum', 'cardano'\n vs_currency (str, optional): The target currency for price conversion.\n Supported: 'usd', 'eur', 'gbp', 'jpy', etc.\n Defaults to \"usd\".\n\n Returns:\n str: JSON formatted string containing the coin's current price and market data\n including market cap, 24h volume, and price changes\n\n Raises:\n requests.RequestException: If the API request fails due to network issues\n ValueError: If coin_id is empty or invalid\n TimeoutError: If the request takes longer than 10 seconds\n\n Example:\n >>> result = get_coin_price(\"bitcoin\", \"usd\")\n >>> print(result)\n {\"bitcoin\": {\"usd\": 45000, \"usd_market_cap\": 850000000000, ...}}\n\n >>> result = get_coin_price(\"ethereum\", \"eur\")\n >>> print(result)\n {\"ethereum\": {\"eur\": 3200, \"eur_market_cap\": 384000000000, ...}}\n \"\"\"\n try:\n # Validate input parameters\n if not coin_id or not coin_id.strip():\n raise ValueError(\"coin_id cannot be empty\")\n\n url = \"https://api.coingecko.com/api/v3/simple/price\"\n params = {\n \"ids\": coin_id.lower().strip(),\n \"vs_currencies\": vs_currency.lower(),\n \"include_market_cap\": True,\n \"include_24hr_vol\": True,\n \"include_24hr_change\": True,\n \"include_last_updated_at\": True,\n }\n\n response = requests.get(url, params=params, timeout=10)\n response.raise_for_status()\n\n data = response.json()\n\n # Check if the coin was found\n if not data:\n return json.dumps({\n \"error\": f\"Cryptocurrency '{coin_id}' not found. 
Please check the coin ID.\"\n })\n\n return json.dumps(data, indent=2)\n\n except requests.RequestException as e:\n return json.dumps({\n \"error\": f\"Failed to fetch price for {coin_id}: {str(e)}\",\n \"suggestion\": \"Check your internet connection and try again\"\n })\n except ValueError as e:\n return json.dumps({\"error\": str(e)})\n except Exception as e:\n return json.dumps({\"error\": f\"Unexpected error: {str(e)}\"})\n\n\ndef get_top_cryptocurrencies(limit: int = 10, vs_currency: str = \"usd\") -> str:\n \"\"\"\n Fetch the top cryptocurrencies by market capitalization.\n\n Args:\n limit (int, optional): Number of coins to retrieve. \n Range: 1-250 coins\n Defaults to 10.\n vs_currency (str, optional): The target currency for price conversion.\n Supported: 'usd', 'eur', 'gbp', 'jpy', etc.\n Defaults to \"usd\".\n\n Returns:\n str: JSON formatted string containing top cryptocurrencies with detailed market data\n including: id, symbol, name, current_price, market_cap, market_cap_rank,\n total_volume, price_change_24h, price_change_7d, last_updated\n\n Raises:\n requests.RequestException: If the API request fails\n ValueError: If limit is not between 1 and 250\n\n Example:\n >>> result = get_top_cryptocurrencies(5, \"usd\")\n >>> print(result)\n [{\"id\": \"bitcoin\", \"name\": \"Bitcoin\", \"current_price\": 45000, ...}]\n\n >>> result = get_top_cryptocurrencies(limit=3, vs_currency=\"eur\")\n >>> print(result)\n [{\"id\": \"bitcoin\", \"name\": \"Bitcoin\", \"current_price\": 38000, ...}]\n \"\"\"\n try:\n # Validate parameters\n if not isinstance(limit, int) or not 1 <= limit <= 250:\n raise ValueError(\"Limit must be an integer between 1 and 250\")\n\n url = \"https://api.coingecko.com/api/v3/coins/markets\"\n params = {\n \"vs_currency\": vs_currency.lower(),\n \"order\": \"market_cap_desc\",\n \"per_page\": limit,\n \"page\": 1,\n \"sparkline\": False,\n \"price_change_percentage\": \"24h,7d\",\n }\n\n response = requests.get(url, params=params, 
timeout=10)\n response.raise_for_status()\n\n data = response.json()\n\n # Simplify and structure the data for better readability\n simplified_data = []\n for coin in data:\n simplified_data.append({\n \"id\": coin.get(\"id\"),\n \"symbol\": coin.get(\"symbol\", \"\").upper(),\n \"name\": coin.get(\"name\"),\n \"current_price\": coin.get(\"current_price\"),\n \"market_cap\": coin.get(\"market_cap\"),\n \"market_cap_rank\": coin.get(\"market_cap_rank\"),\n \"total_volume\": coin.get(\"total_volume\"),\n \"price_change_24h\": round(coin.get(\"price_change_percentage_24h\", 0), 2),\n \"price_change_7d\": round(coin.get(\"price_change_percentage_7d_in_currency\", 0), 2),\n \"last_updated\": coin.get(\"last_updated\"),\n })\n\n return json.dumps(simplified_data, indent=2)\n\n except (requests.RequestException, ValueError) as e:\n return json.dumps({\n \"error\": f\"Failed to fetch top cryptocurrencies: {str(e)}\"\n })\n except Exception as e:\n return json.dumps({\"error\": f\"Unexpected error: {str(e)}\"})\n\n\ndef search_cryptocurrencies(query: str) -> str:\n \"\"\"\n Search for cryptocurrencies by name or symbol.\n\n Args:\n query (str): The search term (coin name or symbol)\n Examples: 'bitcoin', 'btc', 'ethereum', 'eth'\n Case-insensitive search\n\n Returns:\n str: JSON formatted string containing search results with coin details\n including: id, name, symbol, market_cap_rank, thumb (icon URL)\n Limited to top 10 results for performance\n\n Raises:\n requests.RequestException: If the API request fails\n ValueError: If query is empty\n\n Example:\n >>> result = search_cryptocurrencies(\"ethereum\")\n >>> print(result)\n {\"coins\": [{\"id\": \"ethereum\", \"name\": \"Ethereum\", \"symbol\": \"eth\", ...}]}\n\n >>> result = search_cryptocurrencies(\"btc\")\n >>> print(result)\n {\"coins\": [{\"id\": \"bitcoin\", \"name\": \"Bitcoin\", \"symbol\": \"btc\", ...}]}\n \"\"\"\n try:\n # Validate input\n if not query or not query.strip():\n raise ValueError(\"Search query 
cannot be empty\")\n\n url = \"https://api.coingecko.com/api/v3/search\"\n params = {\"query\": query.strip()}\n\n response = requests.get(url, params=params, timeout=10)\n response.raise_for_status()\n\n data = response.json()\n\n # Extract and format the results\n coins = data.get(\"coins\", [])[:10] # Limit to top 10 results\n\n result = {\n \"coins\": coins,\n \"query\": query,\n \"total_results\": len(data.get(\"coins\", [])),\n \"showing\": min(len(coins), 10)\n }\n\n return json.dumps(result, indent=2)\n\n except requests.RequestException as e:\n return json.dumps({\n \"error\": f'Failed to search for \"{query}\": {str(e)}'\n })\n except ValueError as e:\n return json.dumps({\"error\": str(e)})\n except Exception as e:\n return json.dumps({\"error\": f\"Unexpected error: {str(e)}\"})\n
"},{"location":"swarms/tools/tools_examples/#step-2-configure-your-agent","title":"Step 2: Configure Your Agent","text":"Create an agent with the following key parameters:
# Initialize the agent with cryptocurrency tools\nagent = Agent(\n agent_name=\"Financial-Analysis-Agent\", # Unique identifier for your agent\n agent_description=\"Personal finance advisor agent with cryptocurrency market analysis capabilities\",\n system_prompt=\"\"\"You are a personal finance advisor agent with access to real-time \n cryptocurrency data from CoinGecko. You can help users analyze market trends, check \n coin prices, find trending cryptocurrencies, and search for specific coins. Always \n provide accurate, up-to-date information and explain market data in an easy-to-understand way.\"\"\",\n max_loops=1, # Number of reasoning loops\n max_tokens=4096, # Maximum response length\n model_name=\"anthropic/claude-3-opus-20240229\", # LLM model to use\n dynamic_temperature_enabled=True, # Enable adaptive creativity\n output_type=\"all\", # Return complete response\n tools=[ # List of callable functions\n get_coin_price,\n get_top_cryptocurrencies,\n search_cryptocurrencies,\n ],\n)\n
"},{"location":"swarms/tools/tools_examples/#step-3-use-your-agent","title":"Step 3: Use Your Agent","text":"# Example usage with different queries\nresponse = agent.run(\"What are the top 5 cryptocurrencies by market cap?\")\nprint(response)\n\n# Query with specific parameters\nresponse = agent.run(\"Get the current price of Bitcoin and Ethereum in EUR\")\nprint(response)\n\n# Search functionality\nresponse = agent.run(\"Search for cryptocurrencies related to 'cardano'\")\nprint(response)\n
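Under the hood, a callable tool's type hints and docstring are turned into a function-calling schema the model can see, which is why Step 1 insists on both. A rough sketch of that conversion follows; `function_to_schema` is a hypothetical helper for illustration, not the Swarms API, and a real converter handles many more annotation types.

```python
import inspect

def function_to_schema(func):
    """Derive a minimal OpenAI-style function schema from type hints.
    Hypothetical helper for illustration only."""
    type_map = {int: "integer", float: "number", str: "string", bool: "boolean"}
    properties, required = {}, []
    for name, param in inspect.signature(func).parameters.items():
        properties[name] = {"type": type_map.get(param.annotation, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # parameters without defaults are required
    return {
        "name": func.__name__,
        "description": (func.__doc__ or "").strip(),
        "parameters": {"type": "object", "properties": properties, "required": required},
    }

def get_coin_price(coin_id: str, vs_currency: str = "usd") -> str:
    """Get the current price of a specific cryptocurrency."""

schema = function_to_schema(get_coin_price)
print(schema["parameters"]["required"])  # ['coin_id']
```

The default value on `vs_currency` is what marks it optional in the generated schema, so omitting type hints or defaults degrades what the model learns about your tool.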
"},{"location":"swarms/tools/tools_examples/#method-2-mcp-model-context-protocol-servers","title":"Method 2: MCP (Model Context Protocol) Servers","text":"MCP servers provide a standardized way to create distributed tool functionality. They're ideal for:
Reusable tools across multiple agents
Complex tool logic that needs isolation
Third-party tool integration
Scalable architectures
from mcp.server.fastmcp import FastMCP\nimport requests\n\n# Initialize the MCP server with configuration\nmcp = FastMCP(\"OKXCryptoPrice\") # Server name for identification\nmcp.settings.port = 8001 # Port for server communication\n
"},{"location":"swarms/tools/tools_examples/#step-2-define-mcp-tools","title":"Step 2: Define MCP Tools","text":"Each MCP tool requires the @mcp.tool
decorator with specific parameters:
@mcp.tool(\n name=\"get_okx_crypto_price\", # Tool identifier (must be unique)\n description=\"Get the current price and basic information for a given cryptocurrency from OKX exchange.\",\n)\ndef get_okx_crypto_price(symbol: str) -> str:\n \"\"\"\n Get the current price and basic information for a given cryptocurrency using OKX API.\n\n Args:\n symbol (str): The cryptocurrency trading pair\n Format: 'BASE-QUOTE' (e.g., 'BTC-USDT', 'ETH-USDT')\n If only base currency provided, '-USDT' will be appended\n Case-insensitive input\n\n Returns:\n str: A formatted string containing:\n - Current price in USDT\n - 24-hour price change percentage\n - Formatted for human readability\n\n Raises:\n requests.RequestException: If the OKX API request fails\n ValueError: If symbol format is invalid\n ConnectionError: If unable to connect to OKX servers\n\n Example:\n >>> get_okx_crypto_price('BTC-USDT')\n 'Current price of BTC/USDT: $45,000.00\\n24h Change: +2.34%'\n\n >>> get_okx_crypto_price('eth') # Automatically converts to ETH-USDT\n 'Current price of ETH/USDT: $3,200.50\\n24h Change: -1.23%'\n \"\"\"\n try:\n # Input validation and formatting\n if not symbol or not symbol.strip():\n return \"Error: Please provide a valid trading pair (e.g., 'BTC-USDT')\"\n\n # Normalize symbol format\n symbol = symbol.upper().strip()\n if not symbol.endswith(\"-USDT\"):\n symbol = f\"{symbol}-USDT\"\n\n # OKX API endpoint for ticker information\n url = f\"https://www.okx.com/api/v5/market/ticker?instId={symbol}\"\n\n # Make the API request with timeout\n response = requests.get(url, timeout=10)\n response.raise_for_status()\n\n data = response.json()\n\n # Check API response status\n if data.get(\"code\") != \"0\":\n return f\"Error: {data.get('msg', 'Unknown error from OKX API')}\"\n\n # Extract ticker data\n ticker_data = data.get(\"data\", [{}])[0]\n if not ticker_data:\n return f\"Error: Could not find data for {symbol}. Please verify the trading pair exists.\"\n\n # Parse numerical data (OKX v5 tickers expose 'last' and 'open24h'; derive the 24h change from them)\n price = float(ticker_data.get(\"last\", 0))\n open_24h = float(ticker_data.get(\"open24h\", 0))\n change_percent = ((price - open_24h) / open_24h * 100) if open_24h else 0.0\n\n # Format response\n base_currency = symbol.split(\"-\")[0]\n change_symbol = \"+\" if change_percent >= 0 else \"\"\n\n return (f\"Current price of {base_currency}/USDT: ${price:,.2f}\\n\"\n f\"24h Change: {change_symbol}{change_percent:.2f}%\")\n\n except requests.exceptions.Timeout:\n return \"Error: Request timed out. OKX servers may be slow.\"\n except requests.exceptions.RequestException as e:\n return f\"Error fetching OKX data: {str(e)}\"\n except (ValueError, KeyError) as e:\n return f\"Error parsing OKX response: {str(e)}\"\n except Exception as e:\n return f\"Unexpected error: {str(e)}\"\n\n\n@mcp.tool(\n name=\"get_okx_crypto_volume\", # Second tool with different functionality\n description=\"Get the 24-hour trading volume for a given cryptocurrency from OKX exchange.\",\n)\ndef get_okx_crypto_volume(symbol: str) -> str:\n \"\"\"\n Get the 24-hour trading volume for a given cryptocurrency using OKX API.\n\n Args:\n symbol (str): The cryptocurrency trading pair\n Format: 'BASE-QUOTE' (e.g., 'BTC-USDT', 'ETH-USDT')\n If only base currency provided, '-USDT' will be appended\n Case-insensitive input\n\n Returns:\n str: A formatted string containing:\n - 24-hour trading volume in the base currency\n - Volume formatted with thousand separators\n - Currency symbol for clarity\n\n Raises:\n requests.RequestException: If the OKX API request fails\n ValueError: If symbol format is invalid\n\n Example:\n >>> get_okx_crypto_volume('BTC-USDT')\n '24h Trading Volume for BTC/USDT: 12,345.67 BTC'\n\n >>> get_okx_crypto_volume('ethereum') # Converts to ETH-USDT\n '24h Trading Volume for ETH/USDT: 98,765.43 ETH'\n \"\"\"\n try:\n # Input validation and formatting\n if not symbol or not symbol.strip():\n return \"Error: Please provide a valid trading pair (e.g., 'BTC-USDT')\"\n\n # Normalize symbol format\n symbol = symbol.upper().strip()\n if not symbol.endswith(\"-USDT\"):\n symbol = f\"{symbol}-USDT\"\n\n # OKX API endpoint\n url = f\"https://www.okx.com/api/v5/market/ticker?instId={symbol}\"\n\n # Make API request\n response = requests.get(url, timeout=10)\n response.raise_for_status()\n\n data = response.json()\n\n # Validate API response\n if data.get(\"code\") != \"0\":\n return f\"Error: {data.get('msg', 'Unknown error from OKX API')}\"\n\n ticker_data = data.get(\"data\", [{}])[0]\n if not ticker_data:\n return f\"Error: Could not find data for {symbol}. Please verify the trading pair.\"\n\n # Extract volume data\n volume_24h = float(ticker_data.get(\"vol24h\", 0))\n base_currency = symbol.split(\"-\")[0]\n\n return f\"24h Trading Volume for {base_currency}/USDT: {volume_24h:,.2f} {base_currency}\"\n\n except requests.exceptions.RequestException as e:\n return f\"Error fetching OKX data: {str(e)}\"\n except Exception as e:\n return f\"Error: {str(e)}\"\n
"},{"location":"swarms/tools/tools_examples/#step-3-start-your-mcp-server","title":"Step 3: Start Your MCP Server","text":"if __name__ == \"__main__\":\n # Run the MCP server with SSE (Server-Sent Events) transport\n # Server will be available at http://localhost:8001/sse\n mcp.run(transport=\"sse\")\n
"},{"location":"swarms/tools/tools_examples/#step-4-connect-agent-to-mcp-server","title":"Step 4: Connect Agent to MCP Server","text":"from swarms import Agent\n\n# Method 2: Using direct URL (simpler for development)\nmcp_url = \"http://0.0.0.0:8001/sse\"\n\n# Initialize agent with MCP tools\nagent = Agent(\n agent_name=\"Financial-Analysis-Agent\", # Agent identifier\n agent_description=\"Personal finance advisor with OKX exchange data access\",\n system_prompt=\"\"\"You are a financial analysis agent with access to real-time \n cryptocurrency data from OKX exchange. You can check prices, analyze trading volumes, \n and provide market insights. Always format numerical data clearly and explain \n market movements in context.\"\"\",\n max_loops=1, # Processing loops\n mcp_url=mcp_url, # MCP server connection\n output_type=\"all\", # Complete response format\n # Note: tools are automatically loaded from MCP server\n)\n
"},{"location":"swarms/tools/tools_examples/#step-5-use-your-mcp-enabled-agent","title":"Step 5: Use Your MCP-Enabled Agent","text":"# The agent automatically discovers and uses tools from the MCP server\nresponse = agent.run(\n \"Fetch the price for Bitcoin using the OKX exchange and also get its trading volume\"\n)\nprint(response)\n\n# Multiple tool usage\nresponse = agent.run(\n \"Compare the prices of BTC, ETH, and ADA on OKX, and show their trading volumes\"\n)\nprint(response)\n
"},{"location":"swarms/tools/tools_examples/#best-practices","title":"Best Practices","text":""},{"location":"swarms/tools/tools_examples/#function-design","title":"Function Design","text":"Practice Description Type Hints Always use type hints for all parameters and return values Docstrings Write comprehensive docstrings with Args, Returns, Raises, and Examples Error Handling Implement proper error handling with specific exception types Input Validation Validate input parameters before processing Data Structure Return structured data (preferably JSON) for consistency"},{"location":"swarms/tools/tools_examples/#mcp-server-development","title":"MCP Server Development","text":"Practice Description Tool Naming Use descriptive tool names that clearly indicate functionality Timeouts Set appropriate timeouts for external API calls Error Handling Implement graceful error handling for network issues Configuration Use environment variables for sensitive configuration Testing Test tools independently before integration"},{"location":"swarms/tools/tools_examples/#agent-configuration","title":"Agent Configuration","text":"Practice Description Loop Control Choose appropriate max_loops based on task complexity Token Management Set reasonable token limits to control response length System Prompts Write clear system prompts that explain tool capabilities Agent Naming Use meaningful agent names for debugging and logging Tool Integration Consider tool combinations for comprehensive functionality"},{"location":"swarms/tools/tools_examples/#performance-optimization","title":"Performance Optimization","text":"Practice Description Data Caching Cache frequently requested data when possible Connection Management Use connection pooling for multiple API calls Rate Control Implement rate limiting to respect API constraints Performance Monitoring Monitor tool execution times and optimize slow operations Async Operations Use async operations for concurrent tool execution when supported"},{"location":"swarms/tools/tools_examples/#troubleshooting","title":"Troubleshooting","text":""},{"location":"swarms/tools/tools_examples/#common-issues","title":"Common Issues","text":""},{"location":"swarms/tools/tools_examples/#tool-not-found","title":"Tool Not Found","text":"# Ensure function is in tools list\nagent = Agent(\n # ... other config ...\n tools=[your_function_name], # Function object, not string\n)\n
"},{"location":"swarms/tools/tools_examples/#mcp-connection-failed","title":"MCP Connection Failed","text":"# Check server status and URL (assumes your server exposes a health endpoint)\nimport requests\n\nresponse = requests.get(\"http://localhost:8001/health\", timeout=5) # Health check endpoint\nprint(response.status_code)\n
"},{"location":"swarms/tools/tools_examples/#type-hint-errors","title":"Type Hint Errors","text":"# Always specify return types\ndef my_tool(param: str) -> str: # Not just -> None\n return \"result\"\n
"},{"location":"swarms/tools/tools_examples/#json-parsing-issues","title":"JSON Parsing Issues","text":"# Always return valid JSON strings\nimport json\n\ndef my_tool(data: dict) -> str:\n return json.dumps({\"result\": data}, indent=2)\n
"},{"location":"swarms/ui/main/","title":"Swarms Chat UI Documentation","text":"The Swarms Chat interface provides a customizable, multi-agent chat experience using Gradio. It supports various specialized AI agents\u2014from finance to healthcare and news analysis\u2014by leveraging Swarms models.
"},{"location":"swarms/ui/main/#table-of-contents","title":"Table of Contents","text":"Make sure you have Python 3.7+ installed, then install the required packages using pip:
pip install gradio ai-gradio swarms\n
"},{"location":"swarms/ui/main/#quick-start","title":"Quick Start","text":"Below is a minimal example to get the Swarms Chat interface up and running. Customize the agent, title, and description as needed.
import gradio as gr\nimport ai_gradio\n\n# Create and launch a Swarms Chat interface\ngr.load(\n name='swarms:gpt-4-turbo', # Model identifier (supports OpenAI and others)\n src=ai_gradio.registry, # Source module for model configurations\n agent_name=\"Stock-Analysis-Agent\", # Example agent from Finance category\n title='Swarms Chat',\n description='Chat with an AI agent powered by Swarms'\n).launch()\n
"},{"location":"swarms/ui/main/#parameters-overview","title":"Parameters Overview","text":"When configuring your interface, consider the following parameters:
name (str): Model identifier (e.g., 'swarms:gpt-4-turbo') that specifies which Swarms model to use.
src (module): The source module (typically ai_gradio.registry) that contains model configurations.
agent_name (str): The name of the specialized agent you wish to use (e.g., \"Stock-Analysis-Agent\").
title (str): The title that appears at the top of the web interface.
description (str): A short summary describing the functionality of the chat interface.
Swarms Chat supports multiple specialized agents designed for different domains. Below is an overview of available agent types.
"},{"location":"swarms/ui/main/#finance-agents","title":"Finance Agents","text":"Capabilities:
Tax Planning Agent
Capabilities:
Healthcare Management Agent
Capabilities:
Research Assistant
Below are detailed examples for each type of specialized agent.
"},{"location":"swarms/ui/main/#finance-agent-example","title":"Finance Agent Example","text":"This example configures a chat interface for stock analysis:
import gradio as gr\nimport ai_gradio\n\nfinance_interface = gr.load(\n name='swarms:gpt-4-turbo',\n src=ai_gradio.registry,\n agent_name=\"Stock-Analysis-Agent\",\n title='Finance Assistant',\n description='Expert financial analysis and advice tailored to your investment needs.'\n)\nfinance_interface.launch()\n
"},{"location":"swarms/ui/main/#healthcare-agent-example","title":"Healthcare Agent Example","text":"This example sets up a chat interface for healthcare assistance:
import gradio as gr\nimport ai_gradio\n\nhealthcare_interface = gr.load(\n name='swarms:gpt-4-turbo',\n src=ai_gradio.registry,\n agent_name=\"Medical-Assistant-Agent\",\n title='Healthcare Assistant',\n description='Access medical information, symptom analysis, and treatment recommendations.'\n)\nhealthcare_interface.launch()\n
"},{"location":"swarms/ui/main/#news-analysis-agent-example","title":"News Analysis Agent Example","text":"This example creates an interface for real-time news analysis:
import gradio as gr\nimport ai_gradio\n\nnews_interface = gr.load(\n name='swarms:gpt-4-turbo',\n src=ai_gradio.registry,\n agent_name=\"News-Analysis-Agent\",\n title='News Analyzer',\n description='Get real-time insights and analysis of trending news topics.'\n)\nnews_interface.launch()\n
"},{"location":"swarms/ui/main/#setup-and-deployment","title":"Setup and Deployment","text":"pip install gradio ai-gradio swarms\n
import gradio as gr\nimport ai_gradio\n
interface = gr.load(\n name='swarms:gpt-4-turbo',\n src=ai_gradio.registry,\n agent_name=\"Your-Desired-Agent\",\n title='Your Interface Title',\n description='A brief description of your interface.'\n)\ninterface.launch()\n
Select the Right Agent: Use the agent that best suits your specific domain needs.
Model Configuration: Adjust model parameters based on your computational resources to balance performance and cost.
Error Handling: Implement error handling to manage unexpected inputs or API failures gracefully.
Resource Monitoring: Keep an eye on system performance, especially during high-concurrency sessions.
Regular Updates: Keep your Swarms and Gradio packages updated to ensure compatibility with new features and security patches.
Local vs. Remote: The interface runs locally by default but can be deployed on remote servers for wider accessibility.
Customization: You can configure custom model parameters and integrate additional APIs as needed.
Session Management: Built-in session handling ensures that users can interact concurrently without interfering with each other's sessions.
Error Handling & Rate Limiting: The system includes basic error handling and rate limiting to maintain performance under load.
This documentation is designed to provide clarity, reliability, and comprehensive guidance for integrating and using the Swarms Chat UI. For further customization or troubleshooting, consult the respective package documentation and community forums.
"},{"location":"swarms_cloud/add_agent/","title":"Publishing an Agent to Agent Marketplace","text":""},{"location":"swarms_cloud/add_agent/#requirements","title":"Requirements","text":"The swarms-cloud package, installed with pip3 install -U swarms-cloud
Completion of the onboarding process via swarms-cloud onboarding
A Dockerfile containing the API of your agent code, built with FastAPI
A YAML configuration file, agent.yaml
# Agent metadata and description\nagent_name: \"example-agent\" # The name of the agent\ndescription: \"This agent performs financial data analysis.\" # A brief description of the agent's purpose\nversion: \"v1.0\" # The version number of the agent\nauthor: \"Agent Creator Name\" # The name of the person or entity that created the agent\ncontact_email: \"creator@example.com\" # The email address for contacting the agent's creator\ntags:\n - \"financial\" # Tag indicating the agent is related to finance\n - \"data-analysis\" # Tag indicating the agent performs data analysis\n - \"agent\" # Tag indicating this is an agent\n\n\n# Deployment configuration\ndeployment_config:\n # Dockerfile configuration\n dockerfile_path: \"./Dockerfile\" # The path to the Dockerfile for building the agent's image\n dockerfile_port: 8080 # The port number the agent will listen on\n\n # Resource allocation for the agent\n resources:\n cpu: 2 # Number of CPUs allocated to the agent\n memory: \"2Gi\" # Memory allocation for the agent in gigabytes\n max_instances: 5 # Maximum number of instances to scale up to\n min_instances: 1 # Minimum number of instances to keep running\n timeout: 300s # Request timeout setting in seconds\n\n # Autoscaling configuration\n autoscaling:\n max_concurrency: 80 # Maximum number of requests the agent can handle concurrently\n target_utilization: 0.6 # CPU utilization target for auto-scaling\n\n # Environment variables for the agent\n environment_variables:\n DATABASE_URL: \"postgres://user:password@db-url\" # URL for the database connection\n API_KEY: \"your-secret-api-key\" # API key for authentication\n LOG_LEVEL: \"info\" # Log level for the agent\n\n # Secrets configuration\n secrets:\n SECRET_NAME_1: \"projects/my-project/secrets/my-secret/versions/latest\" # Path to a secret\n
"},{"location":"swarms_cloud/agent_api/","title":"Agent API","text":"The Swarms.ai Agent API provides powerful endpoints for running individual AI agents and batch agent operations. This documentation explains how to use these endpoints for effective agent-based task execution.
"},{"location":"swarms_cloud/agent_api/#getting-started","title":"Getting Started","text":"To use the Agent API, you'll need a Swarms.ai API key:
import os\nimport requests\nfrom dotenv import load_dotenv\n\n# Load API key from environment\nload_dotenv()\nAPI_KEY = os.getenv(\"SWARMS_API_KEY\")\nBASE_URL = \"https://api.swarms.world\"\n\n# Configure headers with your API key\nheaders = {\n \"x-api-key\": API_KEY,\n \"Content-Type\": \"application/json\"\n}\n
"},{"location":"swarms_cloud/agent_api/#individual-agent-api","title":"Individual Agent API","text":"The Individual Agent API allows you to run a single agent with a specific configuration and task.
"},{"location":"swarms_cloud/agent_api/#agent-configuration-agentspec","title":"Agent Configuration (AgentSpec)","text":"The AgentSpec class defines the configuration for an individual agent.
agent_name string Required Unique name identifying the agent and its functionality
description string None Detailed explanation of the agent's purpose and capabilities
system_prompt string None Initial instructions guiding the agent's behavior and responses
model_name string \"gpt-4o-mini\" The AI model used by the agent (e.g., gpt-4o, gpt-4o-mini, openai/o3-mini)
auto_generate_prompt boolean false Whether the agent should automatically create prompts based on task requirements
max_tokens integer 8192 Maximum number of tokens the agent can generate in its responses
temperature float 0.5 Controls output randomness (lower values = more deterministic responses)
role string \"worker\" The agent's role within a swarm, influencing its behavior and interactions
max_loops integer 1 Maximum number of times the agent can repeat its task for iterative processing
tools_dictionary array None Dictionary of tools the agent can use to complete its task
mcp_url string None URL for the MCP server that the agent can connect to"},{"location":"swarms_cloud/agent_api/#agent-completion","title":"Agent Completion","text":"The AgentCompletion class combines an agent configuration with a specific task.
agent_config AgentSpec Configuration of the agent to be completed
task string The task to be completed by the agent
history Optional[Union[Dict[Any, Any], List[Dict[str, str]]]] The history of the agent's previous tasks and responses. Can be either a dictionary or a list of message objects."},{"location":"swarms_cloud/agent_api/#single-agent-endpoint","title":"Single Agent Endpoint","text":"Endpoint: POST /v1/agent/completions
Run a single agent with a specific configuration and task.
"},{"location":"swarms_cloud/agent_api/#request","title":"Request","text":"def run_single_agent(agent_config, task):\n \"\"\"\n Run a single agent with the AgentCompletion format.\n\n Args:\n agent_config: Dictionary containing agent configuration\n task: String describing the task for the agent\n\n Returns:\n Dictionary containing the agent's response\n \"\"\"\n payload = {\n \"agent_config\": agent_config,\n \"task\": task\n }\n\n try:\n response = requests.post(\n f\"{BASE_URL}/v1/agent/completions\", \n headers=headers, \n json=payload\n )\n response.raise_for_status()\n return response.json()\n except requests.exceptions.RequestException as e:\n print(f\"Error making request: {e}\")\n return None\n
"},{"location":"swarms_cloud/agent_api/#example-usage","title":"Example Usage","text":"agent_config = {\n \"agent_name\": \"Research Analyst\",\n \"description\": \"An expert in analyzing and synthesizing research data\",\n \"system_prompt\": (\n \"You are a Research Analyst with expertise in data analysis and synthesis. \"\n \"Your role is to analyze provided information, identify key insights, \"\n \"and present findings in a clear, structured format. \"\n \"Focus on accuracy, clarity, and actionable recommendations.\"\n ),\n \"model_name\": \"gpt-4o\",\n \"role\": \"worker\",\n \"max_loops\": 2,\n \"max_tokens\": 8192,\n \"temperature\": 0.5,\n \"auto_generate_prompt\": False,\n}\n\ntask = \"Analyze the impact of artificial intelligence on healthcare delivery and provide a comprehensive report with key findings and recommendations.\"\n\nresult = run_single_agent(agent_config, task)\nprint(result)\n
"},{"location":"swarms_cloud/agent_api/#response-structure","title":"Response Structure","text":"{\n \"id\": \"agent-6a8b9c0d1e2f3g4h5i6j7k8l9m0n\",\n \"success\": true,\n \"name\": \"Research Analyst\",\n \"description\": \"An expert in analyzing and synthesizing research data\",\n \"temperature\": 0.5,\n \"outputs\": {\n \"content\": \"# Impact of Artificial Intelligence on Healthcare Delivery\\n\\n## Executive Summary\\n...\",\n \"role\": \"assistant\"\n },\n \"usage\": {\n \"input_tokens\": 1250,\n \"output_tokens\": 3822,\n \"total_tokens\": 5072\n },\n \"timestamp\": \"2025-05-10T18:35:29.421Z\"\n}\n
"},{"location":"swarms_cloud/agent_api/#batch-agent-api","title":"Batch Agent API","text":"The Batch Agent API allows you to run multiple agents in parallel, each with different configurations and tasks.
"},{"location":"swarms_cloud/agent_api/#batch-agent-endpoint","title":"Batch Agent Endpoint","text":"Endpoint: POST /v1/agent/batch/completions
Run multiple agents with different configurations and tasks in a single API call.
"},{"location":"swarms_cloud/agent_api/#request_1","title":"Request","text":"def run_batch_agents(agent_completions):\n \"\"\"\n Run multiple agents in batch.\n\n Args:\n agent_completions: List of dictionaries, each containing agent_config and task\n\n Returns:\n List of agent responses\n \"\"\"\n try:\n response = requests.post(\n f\"{BASE_URL}/v1/agent/batch/completions\",\n headers=headers,\n json=agent_completions\n )\n response.raise_for_status()\n return response.json()\n except requests.exceptions.RequestException as e:\n print(f\"Error making batch request: {e}\")\n return None\n
"},{"location":"swarms_cloud/agent_api/#example-usage_1","title":"Example Usage","text":"batch_completions = [\n {\n \"agent_config\": {\n \"agent_name\": \"Research Analyst\",\n \"description\": \"An expert in analyzing research data\",\n \"system_prompt\": \"You are a Research Analyst...\",\n \"model_name\": \"gpt-4o\",\n \"max_loops\": 2\n },\n \"task\": \"Analyze the impact of AI on healthcare delivery.\"\n },\n {\n \"agent_config\": {\n \"agent_name\": \"Market Analyst\",\n \"description\": \"An expert in market analysis\",\n \"system_prompt\": \"You are a Market Analyst...\",\n \"model_name\": \"gpt-4o\",\n \"max_loops\": 1\n },\n \"task\": \"Analyze the AI startup landscape in 2025.\"\n }\n]\n\nbatch_results = run_batch_agents(batch_completions)\nprint(batch_results)\n
"},{"location":"swarms_cloud/agent_api/#response-structure_1","title":"Response Structure","text":"[\n {\n \"id\": \"agent-1a2b3c4d5e6f7g8h9i0j\",\n \"success\": true,\n \"name\": \"Research Analyst\",\n \"description\": \"An expert in analyzing research data\",\n \"temperature\": 0.5,\n \"outputs\": {\n \"content\": \"# Impact of AI on Healthcare Delivery\\n...\",\n \"role\": \"assistant\"\n },\n \"usage\": {\n \"input_tokens\": 1250,\n \"output_tokens\": 3822,\n \"total_tokens\": 5072\n },\n \"timestamp\": \"2025-05-10T18:35:29.421Z\"\n },\n {\n \"id\": \"agent-9i8h7g6f5e4d3c2b1a0\",\n \"success\": true,\n \"name\": \"Market Analyst\",\n \"description\": \"An expert in market analysis\",\n \"temperature\": 0.5,\n \"outputs\": {\n \"content\": \"# AI Startup Landscape 2025\\n...\",\n \"role\": \"assistant\"\n },\n \"usage\": {\n \"input_tokens\": 980,\n \"output_tokens\": 4120,\n \"total_tokens\": 5100\n },\n \"timestamp\": \"2025-05-10T18:35:31.842Z\"\n }\n]\n
"},{"location":"swarms_cloud/agent_api/#error-handling","title":"Error Handling","text":"The API uses standard HTTP status codes to indicate success or failure:
Status Code Meaning 200 Success 400 Bad Request - Check your request parameters 401 Unauthorized - Invalid or missing API key 403 Forbidden - Insufficient permissions 429 Too Many Requests - Rate limit exceeded 500 Server Error - Something went wrong on the serverWhen an error occurs, the response body will contain additional information:
{\n \"detail\": \"Error message explaining what went wrong\"\n}\n
"},{"location":"swarms_cloud/agent_api/#common-errors-and-solutions","title":"Common Errors and Solutions","text":"Error Possible Solution \"Invalid API Key\" Verify your API key is correct and properly included in the request headers \"Rate limit exceeded\" Reduce the number of requests or contact support to increase your rate limit \"Invalid agent configuration\" Check your agent_config parameters for any missing or invalid values \"Failed to create agent\" Ensure your system_prompt and model_name are valid \"Insufficient credits\" Add credits to your account at https://swarms.world/platform/account"},{"location":"swarms_cloud/agent_api/#advanced-usage","title":"Advanced Usage","text":""},{"location":"swarms_cloud/agent_api/#setting-dynamic-temperature","title":"Setting Dynamic Temperature","text":"The agent can dynamically adjust its temperature for optimal outputs:
agent_config = {\n # Other config options...\n \"temperature\": 0.7,\n \"dynamic_temperature_enabled\": True\n}\n
"},{"location":"swarms_cloud/agent_api/#using-agent-tools","title":"Using Agent Tools","text":"Agents can utilize various tools to enhance their capabilities:
agent_config = {\n # Other config options...\n \"tools_dictionary\": [\n {\n \"name\": \"web_search\",\n \"description\": \"Search the web for information\",\n \"parameters\": {\n \"query\": \"string\"\n }\n },\n {\n \"name\": \"calculator\",\n \"description\": \"Perform mathematical calculations\",\n \"parameters\": {\n \"expression\": \"string\"\n }\n }\n ]\n}\n
"},{"location":"swarms_cloud/agent_api/#best-practices","title":"Best Practices","text":"API Key Security
Store API keys in environment variables or secure vaults, never in code repositories.
# DON'T do this\napi_key = \"sk-123456789abcdef\"\n\n# DO this instead\nimport os\nfrom dotenv import load_dotenv\nload_dotenv()\napi_key = os.getenv(\"SWARMS_API_KEY\")\n
Agent Naming Conventions
Use a consistent naming pattern for your agents to make your code more maintainable.
# Good naming convention\nagent_configs = {\n \"market_analyst\": {...},\n \"research_specialist\": {...},\n \"code_reviewer\": {...}\n}\n
Crafting Effective System Prompts
A well-crafted system prompt acts as your agent's personality and instruction set.
Basic Prompt: You are a research analyst. Analyze the data and provide insights.\n
Enhanced Prompt: You are a Research Analyst with 15+ years of experience in biotech market analysis.\n\nYour task is to:\n1. Analyze the provided market data methodically\n2. Identify key trends and emerging patterns\n3. Highlight potential investment opportunities\n4. Assess risks and regulatory considerations\n5. Provide actionable recommendations supported by the data\n\nFormat your response as a professional report with clear sections,\nfocusing on data-driven insights rather than generalities.\n
Token Management
Manage your token usage carefully to control costs.
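One way to keep usage in check is to budget tokens on the client side. The sketch below is an assumption-laden heuristic: the ~4-characters-per-token estimate is rough, and real spend should be read back from the `usage` field in API responses.

```python
# Hedged sketch of client-side token budgeting. The 4-chars-per-token
# heuristic is an assumption; use your model's tokenizer for accuracy.

def rough_token_count(text: str) -> int:
    """Rough heuristic: about 4 characters per token for English text."""
    return max(1, len(text) // 4)

def apply_token_budget(agent_config: dict, max_output_tokens: int) -> dict:
    """Return a copy of agent_config with an explicit max_tokens cap."""
    capped = dict(agent_config)
    capped["max_tokens"] = max_output_tokens
    return capped

# Example: cap a config at 2048 output tokens before sending the request.
config = apply_token_budget({"agent_name": "Research Analyst"}, 2048)
```

Pair this with the `usage` object returned by the API to compare estimated and actual consumption over time.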
Error Handling
Implement comprehensive error handling to make your application resilient.
try:\n response = requests.post(\n f\"{BASE_URL}/v1/agent/completions\",\n headers=headers,\n json=payload,\n timeout=30 # Add timeout to prevent hanging requests\n )\n response.raise_for_status()\n return response.json()\nexcept requests.exceptions.HTTPError as e:\n if e.response.status_code == 429:\n # Implement exponential backoff for rate limiting\n retry_after = int(e.response.headers.get('Retry-After', 5))\n time.sleep(retry_after)\n return run_agent(payload) # Retry the request\n elif e.response.status_code == 401:\n logger.error(\"Authentication failed. Check your API key.\")\n else:\n logger.error(f\"HTTP Error: {e.response.status_code} - {e.response.text}\")\n return {\"error\": e.response.text}\nexcept requests.exceptions.Timeout:\n logger.error(\"Request timed out. The server might be busy.\")\n return {\"error\": \"Request timed out\"}\nexcept requests.exceptions.RequestException as e:\n logger.error(f\"Request Error: {e}\")\n return {\"error\": str(e)}\n
Implementing Caching
Cache identical requests to improve performance and reduce costs.
import hashlib\nimport json\nfrom functools import lru_cache\n\ndef generate_cache_key(agent_config, task):\n \"\"\"Generate a unique cache key for an agent request.\"\"\"\n cache_data = json.dumps({\"agent_config\": agent_config, \"task\": task}, sort_keys=True)\n return hashlib.md5(cache_data.encode()).hexdigest()\n\n@lru_cache(maxsize=100)\ndef cached_agent_run(cache_key, agent_config, task):\n \"\"\"Run agent with caching based on config and task.\"\"\"\n # Convert agent_config back to a dictionary if it's a string representation\n if isinstance(agent_config, str):\n agent_config = json.loads(agent_config)\n\n payload = {\n \"agent_config\": agent_config,\n \"task\": task\n }\n\n try:\n response = requests.post(\n f\"{BASE_URL}/v1/agent/completions\",\n headers=headers,\n json=payload\n )\n response.raise_for_status()\n return response.json()\n except Exception as e:\n return {\"error\": str(e)}\n\ndef run_agent_with_cache(agent_config, task):\n \"\"\"Wrapper function to run agent with caching.\"\"\"\n # Generate a cache key\n cache_key = generate_cache_key(agent_config, task)\n\n # Convert agent_config to a hashable type for lru_cache\n hashable_config = json.dumps(agent_config, sort_keys=True)\n\n # Call the cached function\n return cached_agent_run(cache_key, hashable_config, task)\n
Usage & Cost Monitoring
Set up a monitoring system to track your API usage and costs.
def log_api_usage(api_call_type, tokens_used, cost_estimate):\n \"\"\"Log API usage for monitoring.\"\"\"\n with open(\"api_usage_log.csv\", \"a\") as f:\n timestamp = datetime.now().isoformat()\n f.write(f\"{timestamp},{api_call_type},{tokens_used},{cost_estimate}\\n\")\n\ndef estimate_cost(tokens):\n \"\"\"Estimate cost based on token usage.\"\"\"\n # Example pricing: $0.002 per 1K tokens (adjust according to current pricing)\n return (tokens / 1000) * 0.002\n\ndef run_agent_with_logging(agent_config, task):\n \"\"\"Run agent and log usage.\"\"\"\n result = run_single_agent(agent_config, task)\n\n if \"usage\" in result:\n total_tokens = result[\"usage\"][\"total_tokens\"]\n cost = estimate_cost(total_tokens)\n log_api_usage(\"single_agent\", total_tokens, cost)\n\n return result\n
"},{"location":"swarms_cloud/agent_api/#faq","title":"FAQ","text":"What's the difference between Single Agent and Batch Agent APIs? The Single Agent API (/v1/agent/completions) runs one agent with one task, while the Batch Agent API (/v1/agent/batch/completions) allows running multiple agents with different configurations and tasks in parallel. Use Batch Agent when you need to process multiple independent tasks efficiently.
Model selection depends on your task complexity, performance requirements, and budget:
Model Best For Characteristics gpt-4o Complex analysis, creative tasks Highest quality, most expensive gpt-4o-mini General purpose tasks Good balance of quality and cost openai/o3-mini Simple, factual tasks Fast, economicalFor exploratory work, start with gpt-4o-mini and adjust based on results.
What should I include in my system prompt?A good system prompt should include:
Keep prompts focused and avoid contradictory instructions.
How can I optimize costs when using the Agent API? Cost optimization strategies include setting max_loops: 1 unless you specifically need iterative refinement.
While there's no hard limit specified, we recommend keeping batch sizes under 20 agents for optimal performance. For very large batches, consider splitting them into multiple calls or contacting support for guidance on handling high-volume processing.
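The cost levers discussed here (a cheaper model, a single loop, bounded output) can be combined in one configuration. A hedged sketch; the specific values are examples, not recommendations:

```python
# Illustrative cost-conscious agent_config. Values are examples only;
# tune them to your task and current model pricing.
cost_optimized_config = {
    "agent_name": "Summarizer",
    "model_name": "gpt-4o-mini",  # cheaper model for routine tasks
    "max_loops": 1,               # skip iterative refinement
    "max_tokens": 2048,           # bound response length
    "temperature": 0.3,           # more deterministic output
}
```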
How do I handle rate limiting?Implement exponential backoff in your error handling:
import time\n\ndef run_with_backoff(func, max_retries=5, initial_delay=1):\n \"\"\"Run a function with exponential backoff retry logic.\"\"\"\n retries = 0\n delay = initial_delay\n\n while retries < max_retries:\n try:\n return func()\n except requests.exceptions.HTTPError as e:\n if e.response.status_code == 429: # Too Many Requests\n retry_after = int(e.response.headers.get('Retry-After', delay))\n print(f\"Rate limited. Retrying after {retry_after} seconds...\")\n time.sleep(retry_after)\n retries += 1\n delay *= 2 # Exponential backoff\n else:\n raise\n except Exception as e:\n raise\n\n raise Exception(f\"Failed after {max_retries} retries\")\n
Can I use tools with my agents? Yes, you can enable tools through the tools_dictionary parameter in your agent configuration. This allows agents to access external functionality like web searches, calculations, or custom tools.
agent_config = {\n # Other configuration...\n \"tools_dictionary\": [\n {\n \"name\": \"web_search\",\n \"description\": \"Search the web for current information\",\n \"parameters\": {\n \"query\": {\n \"type\": \"string\",\n \"description\": \"The search query\"\n }\n }\n }\n ]\n}\n
How do I debug agent performance issues? Recommended debugging steps:
The Agent API uses a token-based pricing model:
Pricing varies by model and is calculated per 1,000 tokens. Check the pricing page for current rates.
The API also offers a \"flex\" tier for lower-priority, cost-effective processing.
"},{"location":"swarms_cloud/agent_api/#further-resources","title":"Further Resources","text":"Swarms.ai Documentation Swarms.ai Platform API Key Management Swarms.ai Community
"},{"location":"swarms_cloud/api_clients/","title":"Swarms API Clients","text":"Production-Ready Client Libraries for Every Programming Language
"},{"location":"swarms_cloud/api_clients/#overview","title":"Overview","text":"The Swarms API provides official client libraries across multiple programming languages, enabling developers to integrate powerful multi-agent AI capabilities into their applications with ease. Our clients are designed for production use, featuring robust error handling, comprehensive documentation, and seamless integration with existing codebases.
Whether you're building enterprise applications, research prototypes, or innovative AI products, our client libraries provide the tools you need to harness the full power of the Swarms platform.
"},{"location":"swarms_cloud/api_clients/#available-clients","title":"Available Clients","text":"Language Status Repository Documentation Description Python \u2705 Available swarms-sdk Docs Production-grade Python client with comprehensive error handling, retry logic, and extensive examples TypeScript/Node.js \u2705 Available swarms-ts \ud83d\udcda Coming Soon Modern TypeScript client with full type safety, Promise-based API, and Node.js compatibility Go \u2705 Available swarms-client-go \ud83d\udcda Coming Soon High-performance Go client optimized for concurrent operations and microservices Java \u2705 Available swarms-java \ud83d\udcda Coming Soon Enterprise Java client with Spring Boot integration and comprehensive SDK features Kotlin \ud83d\udea7 Coming Soon In Development \ud83d\udcda Coming Soon Modern Kotlin client with coroutines support and Android compatibility Ruby \ud83d\udea7 Coming Soon In Development \ud83d\udcda Coming Soon Elegant Ruby client with Rails integration and gem packaging Rust \ud83d\udea7 Coming Soon In Development \ud83d\udcda Coming Soon Ultra-fast Rust client with memory safety and zero-cost abstractions C#/.NET \ud83d\udea7 Coming Soon In Development \ud83d\udcda Coming Soon .NET client with async/await support and NuGet packaging"},{"location":"swarms_cloud/api_clients/#client-features","title":"Client Features","text":"All Swarms API clients are built with the following enterprise-grade features:
"},{"location":"swarms_cloud/api_clients/#core-functionality","title":"\ud83d\udd27 Core Functionality","text":"Feature Description Full API Coverage Complete access to all Swarms API endpoints Type Safety Strongly-typed interfaces for all request/response objects Error Handling Comprehensive error handling with detailed error messages Retry Logic Automatic retries with exponential backoff for transient failures"},{"location":"swarms_cloud/api_clients/#performance-reliability","title":"\ud83d\ude80 Performance & Reliability","text":"Feature Description Connection Pooling Efficient HTTP connection management Rate Limiting Built-in rate limit handling and backoff strategies Timeout Configuration Configurable timeouts for different operation types Streaming Support Real-time streaming for long-running operations"},{"location":"swarms_cloud/api_clients/#security-authentication","title":"\ud83d\udee1\ufe0f Security & Authentication","text":"Feature Description API Key Management Secure API key handling and rotation TLS/SSL End-to-end encryption for all communications Request Signing Optional request signing for enhanced security Environment Configuration Secure environment-based configuration"},{"location":"swarms_cloud/api_clients/#monitoring-debugging","title":"\ud83d\udcca Monitoring & Debugging","text":"Feature Description Comprehensive Logging Detailed logging for debugging and monitoring Request/Response Tracing Full request/response tracing capabilities Metrics Integration Built-in metrics for monitoring client performance Debug Mode Enhanced debugging features for development"},{"location":"swarms_cloud/api_clients/#client-specific-features","title":"Client-Specific Features","text":""},{"location":"swarms_cloud/api_clients/#python-client","title":"Python Client","text":"Feature Description Async Support Full async/await support withasyncio
Pydantic Integration Type-safe request/response models Context Managers Resource management with context managers Rich Logging Integration with Python's logging
module"},{"location":"swarms_cloud/api_clients/#typescriptnodejs-client","title":"TypeScript/Node.js Client","text":"Feature Description TypeScript First Built with TypeScript for maximum type safety Promise-Based Modern Promise-based API with async/await Browser Compatible Works in both Node.js and modern browsers Zero Dependencies Minimal dependency footprint"},{"location":"swarms_cloud/api_clients/#go-client","title":"Go Client","text":"Feature Description Context Support Full context.Context support for cancellation Structured Logging Integration with structured logging libraries Concurrency Safe Thread-safe design for concurrent operations Minimal Allocation Optimized for minimal memory allocation"},{"location":"swarms_cloud/api_clients/#java-client","title":"Java Client","text":"Feature Description Spring Boot Ready Built-in Spring Boot auto-configuration Reactive Support Optional reactive streams support Enterprise Features JMX metrics, health checks, and more Maven & Gradle Available on Maven Central"},{"location":"swarms_cloud/api_clients/#advanced-configuration","title":"Advanced Configuration","text":""},{"location":"swarms_cloud/api_clients/#environment-variables","title":"Environment Variables","text":"All clients support standard environment variables for configuration:
# API Configuration\nSWARMS_API_KEY=your_api_key_here\nSWARMS_BASE_URL=https://api.swarms.world\n\n# Client Configuration\nSWARMS_TIMEOUT=60\nSWARMS_MAX_RETRIES=3\nSWARMS_LOG_LEVEL=INFO\n
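As a minimal sketch, these variables can be read into a plain settings object with the standard library. The `SwarmsConfig` dataclass and `load_config` helper below are illustrative, not part of any official SDK; defaults mirror the values shown above:

```python
import os
from dataclasses import dataclass

@dataclass
class SwarmsConfig:
    """Illustrative container for client settings read from the environment."""
    api_key: str
    base_url: str
    timeout: int
    max_retries: int
    log_level: str

def load_config() -> SwarmsConfig:
    # Fall back to the documented defaults when a variable is unset.
    return SwarmsConfig(
        api_key=os.environ["SWARMS_API_KEY"],  # required; raises KeyError if missing
        base_url=os.environ.get("SWARMS_BASE_URL", "https://api.swarms.world"),
        timeout=int(os.environ.get("SWARMS_TIMEOUT", "60")),
        max_retries=int(os.environ.get("SWARMS_MAX_RETRIES", "3")),
        log_level=os.environ.get("SWARMS_LOG_LEVEL", "INFO"),
    )
```

Reading configuration from the environment keeps API keys out of source control and lets the same code run unchanged across development and production.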
"},{"location":"swarms_cloud/api_clients/#community-support","title":"Community & Support","text":""},{"location":"swarms_cloud/api_clients/#documentation-resources","title":"\ud83d\udcda Documentation & Resources","text":"Resource Link Complete API Documentation View Docs Python Client Docs View Docs API Examples & Tutorials View Examples"},{"location":"swarms_cloud/api_clients/#community-support_1","title":"\ud83d\udcac Community Support","text":"Community Channel Description Link Discord Community Join our active developer community for real-time support and discussions Join Discord GitHub Discussions Ask questions and share ideas GitHub Discussions Twitter/X Follow for updates and announcements Twitter/X"},{"location":"swarms_cloud/api_clients/#issue-reporting-contributions","title":"\ud83d\udc1b Issue Reporting & Contributions","text":"Contribution Area Description Link Report Bugs Help us improve by reporting issues Report Bugs Feature Requests Suggest new features and improvements Feature Requests Contributing Guide Learn how to contribute to the project Contributing Guide"},{"location":"swarms_cloud/api_clients/#direct-support","title":"\ud83d\udce7 Direct Support","text":"Support Type Contact Information Support Call Book a call Enterprise Support Contact us for dedicated enterprise support options"},{"location":"swarms_cloud/api_clients/#contributing-to-client-development","title":"Contributing to Client Development","text":"We welcome contributions to all our client libraries! Here's how you can help:
"},{"location":"swarms_cloud/api_clients/#development","title":"\ud83d\udee0\ufe0f Development","text":"Task Description Implement new features and endpoints Add new API features and expand client coverage Improve error handling and retry logic Enhance robustness and reliability Add comprehensive test coverage Ensure code quality and prevent regressions Optimize performance and memory usage Improve speed and reduce resource consumption"},{"location":"swarms_cloud/api_clients/#documentation","title":"\ud83d\udcdd Documentation","text":"Task Description Write tutorials and examples Create guides and sample code for users Improve API documentation Clarify and expand reference docs Create integration guides Help users connect clients to their applications Translate documentation Make docs accessible in multiple languages"},{"location":"swarms_cloud/api_clients/#testing","title":"\ud83e\uddea Testing","text":"Task Description Add unit and integration tests Test individual components and end-to-end flows Test with different language versions Ensure compatibility across environments Performance benchmarking Measure and optimize speed and efficiency Security testing Identify and fix vulnerabilities"},{"location":"swarms_cloud/api_clients/#packaging","title":"\ud83d\udce6 Packaging","text":"Task Description Package managers (npm, pip, Maven, etc.) Publish to popular package repositories Distribution optimization Streamline builds and reduce package size Version management Maintain clear versioning and changelogs Release automation Automate build, test, and deployment pipelines"},{"location":"swarms_cloud/api_clients/#enterprise-features","title":"Enterprise Features","text":"For enterprise customers, we offer additional features and support:
"},{"location":"swarms_cloud/api_clients/#enterprise-client-features","title":"\ud83c\udfe2 Enterprise Client Features","text":"Feature Description Priority Support Dedicated support team with SLA guarantees Custom Integrations Tailored integrations for your specific needs On-Premises Deployment Support for on-premises or private cloud deployments Advanced Security Enhanced security features and compliance support Training & Onboarding Comprehensive training for your development team"},{"location":"swarms_cloud/api_clients/#contact-enterprise-sales","title":"\ud83d\udcde Contact Enterprise Sales","text":"Contact Type Details Sales kye@swarms.world Schedule Demo Book a Demo Partnership kye@swarms.world Ready to build the future with AI agents? Start with any of our client libraries and join our growing community of developers building the next generation of intelligent applications.
"},{"location":"swarms_cloud/api_pricing/","title":"Swarm Agent API Pricing","text":"\ud83c\udf89 Get Started with $20 Free Credits!
New users receive $20 in free credits when they sign up! Create your account now to start building with our powerful multi-agent platform.
Overview
The Swarm Agent API provides a powerful platform for managing multi-agent collaboration at scale and orchestrating swarms of LLM agents in the cloud. Our pricing model is designed to be transparent and cost-effective, enabling you to harness the full potential of your agents with ease.
"},{"location":"swarms_cloud/api_pricing/#credit-system","title":"Credit System","text":"The Swarm API operates on a credit-based system with the following characteristics:
Credits are the currency used within the platform
1 credit = $1 USD
Credits can be purchased with USD or $swarms Solana tokens
Off-Peak Hours Discount
To encourage efficient resource usage during off-peak hours, we offer significant discounts for operations performed during California night-time hours:
Time Period (Pacific Time) Discount 8:00 PM to 6:00 AM 75% off token costs"},{"location":"swarms_cloud/api_pricing/#cost-calculation","title":"Cost Calculation","text":""},{"location":"swarms_cloud/api_pricing/#formula","title":"Formula","text":"The total cost for a swarm execution is calculated as follows:
Total Cost = (Number of Agents \u00d7 $0.01) + \n (Total Input Tokens / 1M \u00d7 $2.00 \u00d7 Number of Agents) +\n (Total Output Tokens / 1M \u00d7 $4.50 \u00d7 Number of Agents)\n
With night-time discount applied:
Input Token Cost = Input Token Cost \u00d7 0.25\nOutput Token Cost = Output Token Cost \u00d7 0.25\n
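The formula above can be expressed as a small helper; a sketch with the rates taken directly from this page (the function name and signature are illustrative):

```python
AGENT_COST = 0.01      # USD per agent
INPUT_RATE = 2.00      # USD per 1M input tokens
OUTPUT_RATE = 4.50     # USD per 1M output tokens
NIGHT_DISCOUNT = 0.25  # token costs are multiplied by 0.25 (75% off)

def swarm_cost(num_agents: int, input_tokens: int, output_tokens: int,
               night_time: bool = False) -> float:
    """Compute the total USD cost of one swarm execution."""
    factor = NIGHT_DISCOUNT if night_time else 1.0
    agent_cost = num_agents * AGENT_COST  # flat per-agent charge, never discounted
    input_cost = input_tokens / 1_000_000 * INPUT_RATE * num_agents * factor
    output_cost = output_tokens / 1_000_000 * OUTPUT_RATE * num_agents * factor
    return agent_cost + input_cost + output_cost
```

Under these rates, `swarm_cost(3, 10_000, 25_000)` matches the $0.4275 day-time example and `swarm_cost(5, 50_000, 125_000, night_time=True)` the $0.878125 discounted one.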
"},{"location":"swarms_cloud/api_pricing/#example-scenarios","title":"Example Scenarios","text":""},{"location":"swarms_cloud/api_pricing/#scenario-1-basic-workflow-day-time","title":"Scenario 1: Basic Workflow (Day-time)","text":"Basic Workflow Example
Parameters:
3 agents
10,000 input tokens total
25,000 output tokens total
Calculation:
Agent cost: 3 \u00d7 $0.01 = $0.03
Input token cost: (10,000 / 1,000,000) \u00d7 $2.00 \u00d7 3 = $0.06
Output token cost: (25,000 / 1,000,000) \u00d7 $4.50 \u00d7 3 = $0.3375
Total cost: $0.4275
Complex Workflow Example
Parameters:
5 agents
50,000 input tokens total
125,000 output tokens total
Calculation:
Agent cost: 5 \u00d7 $0.01 = $0.05
Input token cost: (50,000 / 1,000,000) \u00d7 $2.00 \u00d7 5 \u00d7 0.25 = $0.125
Output token cost: (125,000 / 1,000,000) \u00d7 $4.50 \u00d7 5 \u00d7 0.25 = $0.703125
Total cost: $0.878125
Credits can be purchased through our platform in two ways:
"},{"location":"swarms_cloud/api_pricing/#usd-payment","title":"USD Payment","text":"Available through our account page
Secure payment processing
Minimum purchase: $10
Use Solana-based $swarms tokens
Tokens can be purchased on supported exchanges
Connect your Solana wallet on our account page
Free Credit Program
We occasionally offer free credits to:
New users (welcome bonus)
During promotional periods
For educational and research purposes
Important Notes:
Used before standard credits
May have expiration dates
May have usage restrictions
Track your credit usage through our comprehensive logging and reporting features:
"},{"location":"swarms_cloud/api_pricing/#api-logs","title":"API Logs","text":"Access detailed logs via the /v1/swarm/logs
endpoint
View cost breakdowns for each execution
Real-time credit balance display
Historical usage graphs
Detailed cost analysis
Available at https://swarms.world/platform/dashboard
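The logs endpoint above can also be queried programmatically; a sketch using only the standard library. The `x-api-key` header name is an assumption here — confirm the exact authentication scheme in the API reference:

```python
import json
import os
import urllib.request

BASE_URL = "https://api.swarms.world"

def build_logs_request() -> urllib.request.Request:
    """Build an authenticated request for the /v1/swarm/logs endpoint."""
    # The x-api-key header name is an assumption; consult the API reference.
    return urllib.request.Request(
        f"{BASE_URL}/v1/swarm/logs",
        headers={"x-api-key": os.environ["SWARMS_API_KEY"]},
    )

def fetch_logs() -> dict:
    """Fetch and decode the JSON log listing."""
    with urllib.request.urlopen(build_logs_request(), timeout=30) as resp:
        return json.load(resp)
```

Official clients wrap this in typed methods; the point is only that the endpoint is plain authenticated HTTPS returning JSON.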
Is there a minimum credit purchase? Yes, the minimum credit purchase is $10 USD equivalent.
Do credits expire? Standard credits do not expire. Free promotional credits may have expiration dates.
How is the night-time discount applied? The system automatically detects the execution time based on Pacific Time (America/Los_Angeles) and applies a 75% discount to token costs for executions between 8:00 PM and 6:00 AM.
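For planning batch jobs around this window, the same check can be sketched client-side with the standard library `zoneinfo`. The server's detection is authoritative; treating the 6:00 AM boundary as exclusive is an assumption, since the page does not specify it:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

PACIFIC = ZoneInfo("America/Los_Angeles")

def in_discount_window(moment: datetime) -> bool:
    """True if `moment` falls in the 8:00 PM - 6:00 AM Pacific discount window."""
    local = moment.astimezone(PACIFIC)
    # The window wraps midnight: 20:00-23:59 on one day, 00:00-05:59 the next.
    return local.hour >= 20 or local.hour < 6
```

Using an aware datetime and converting to America/Los_Angeles also handles daylight-saving transitions automatically.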
What happens if I run out of credits during execution? Executions will fail with a 402 Payment Required error if sufficient credits are not available. We recommend maintaining a credit balance appropriate for your usage patterns.
Can I get a refund for unused credits? Please contact our support team for refund requests for unused credits.
Are there volume discounts available? Yes, please contact our sales team for enterprise pricing and volume discounts.
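The clients' documented retry behavior — automatic retries with exponential backoff for transient failures, while errors like 402 Payment Required surface immediately — can be sketched as follows. Reading the status code from a `status` attribute on the exception is an illustrative convention, not any specific SDK's API:

```python
import random
import time

RETRYABLE = {429, 500, 503, 504}  # transient HTTP statuses worth retrying

def with_backoff(call, max_retries: int = 3, base_delay: float = 1.0):
    """Retry `call` on transient HTTP errors with exponential backoff and jitter."""
    for attempt in range(max_retries + 1):
        try:
            return call()
        except Exception as exc:
            status = getattr(exc, "status", None)
            if status not in RETRYABLE or attempt == max_retries:
                raise  # 400/401/402 and exhausted retries are not retried
            # Delays grow 1x, 2x, 4x, ... with jitter to avoid thundering herds.
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, base_delay))
```

A small jitter on each delay spreads out retries from many concurrent callers, which matters once rate limits (429) are involved.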
"},{"location":"swarms_cloud/api_pricing/#references","title":"References","text":"Need Help?
For additional questions or custom pricing options, please contact our support team at kye@swarms.world.
"},{"location":"swarms_cloud/architecture/","title":"Under The Hood: The Swarm Cloud Serving Infrastructure","text":"This blog post delves into the intricate workings of our serving model infrastructure, providing a comprehensive understanding for both users and infrastructure engineers. We'll embark on a journey that starts with an API request and culminates in a response generated by your chosen model, all orchestrated within a multi-cloud environment.
"},{"location":"swarms_cloud/architecture/#the-journey-of-an-api-request","title":"The Journey of an API Request","text":"The Gateway: Your API request first arrives at an EC2 instance running SkyPilot, a lightweight controller.
Intelligent Routing: SkyPilot, wielding its decision-making prowess, analyzes the request and identifies the most suitable GPU in our multi-cloud setup. Factors like resource availability, latency, and cost might influence this choice.
Multi-Cloud Agility: Based on the chosen cloud provider (AWS or Azure), SkyPilot seamlessly directs the request to the appropriate containerized model residing in a sky clusters cluster. Here's where the magic of cloud-agnostic deployments comes into play.
Let's dissect the technical architecture behind this process:
SkyPilot (EC2 Instance): This lightweight controller, deployed on an EC2 instance, acts as the central hub for orchestrating requests and routing them to suitable model instances.
Swarm Cloud Repositories: Each model resides within its own dedicated folder on the Swarms Cloud GitHub repository (https://github.com/kyegomez/swarms-cloud). Here, you'll find a folder structure like this:
servers/\n <model_name_1>/\n sky-serve.yaml # Deployment configuration file\n <model_name_2>/\n sky-serve.yaml\n ...\n
sky-serve.yaml
file that dictates the deployment configuration. Here's a breakdown of the sky serve
command and its subcommands:
sky serve -h
: Displays the help message for the sky serve
CLI tool.Commands:
sky serve up yaml.yaml -n --cloud aws/azure
: This command deploys a SkyServe service based on the provided yaml.yaml
configuration file. The -n
flag sets the service name, and the --cloud
flag specifies the target cloud platform (AWS or Azure).Additional Commands:
sky serve update
: Updates a running SkyServe service.
sky serve status
: Shows the status of deployed SkyServe services.
sky serve down
: Tears down (stops and removes) a SkyServe service.
sky serve logs
: Tails the logs of a running SkyServe service, providing valuable insights into its operation.
By leveraging these commands, infrastructure engineers can efficiently manage the deployment and lifecycle of models within the multi-cloud environment.
Building the Cluster and Accessing the Model:
When you deploy a model using sky serve up
, SkyServe triggers the building of a sky clusters cluster, if one doesn't already exist. Once the deployment is complete, SkyServe provides you with an endpoint URL for interacting with the model. This URL allows you to send requests to the deployed model and receive its predictions.
sky-serve.yaml
Configuration","text":"The sky-serve.yaml
file plays a crucial role in defining the deployment parameters for your model. This file typically includes properties such as:
Image: Specifies the Docker image containing your model code and dependencies.
Replicas: Defines the number of model replicas to be deployed in the Swarm cluster. This allows for load balancing and fault tolerance.
Resources: Sets memory and CPU resource constraints for the deployed model containers.
Networking: Configures network settings for communication within the sky clusters and with the outside world.
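A hypothetical sky-serve.yaml putting these properties together. The key names follow SkyPilot's service YAML conventions but are illustrative — check the schema for your SkyPilot/SkyServe version before deploying:

```yaml
# Illustrative sky-serve.yaml; exact keys depend on the SkyPilot/SkyServe release
service:
  replicas: 2                # model replicas for load balancing and fault tolerance
  readiness_probe: /health   # endpoint polled to decide replica health

resources:
  cloud: aws                               # or azure
  image_id: docker:your-org/model:latest   # Docker image with model code and deps
  accelerators: A100:1                     # GPU type and count for the model
  memory: 32+                              # minimum memory (GB)
  ports: 8080                              # port exposed for model traffic

run: python -m model_server  # command that starts the containerized model
```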
Benefits of Our Infrastructure:
Multi-Cloud Flexibility: Deploy models seamlessly across AWS and Azure, taking advantage of whichever platform best suits your needs.
Scalability: Easily scale model deployments up or down based on traffic demands.
Cost Optimization: The intelligent routing by SkyPilot helps optimize costs by utilizing the most cost-effective cloud resources.
Simplified Management: Manage models across clouds with a single set of commands using sky serve
.
Cloud Considerations:
Our multi-cloud architecture offers several advantages, but it also introduces complexities that need to be addressed. Here's a closer look at some key considerations:
Cloud Provider APIs and SDKs: SkyPilot interacts with the APIs and SDKs of the chosen cloud provider (AWS or Azure) to manage resources like virtual machines, storage, and networking. Infrastructure engineers need to be familiar with the specific APIs and SDKs for each cloud platform to ensure smooth operation and troubleshooting.
Security: Maintaining consistent security across different cloud environments is crucial. This involves aspects like IAM (Identity and Access Management) configuration, network segmentation, and encryption of sensitive data at rest and in transit. Infrastructure engineers need to implement robust security measures tailored to each cloud provider's offerings.
Network Connectivity: Establishing secure and reliable network connectivity between SkyPilot (running on EC2), sky clusters clusters (deployed on cloud VMs), and your client applications is essential. This might involve setting up VPN tunnels or utilizing cloud-native networking solutions offered by each provider.
Monitoring and Logging: Monitoring the health and performance of SkyPilot, sky clusters clusters, and deployed models across clouds is critical for proactive issue identification and resolution. Infrastructure engineers can leverage cloud provider-specific monitoring tools alongside centralized logging solutions for comprehensive oversight.
sky clusters Clusters
sky clusters is a container orchestration platform that facilitates the deployment and management of containerized applications, including your machine learning models. When you deploy a model with sky serve up
, SkyPilot launches a node and performs the following steps:
Provision Resources: SkyPilot requests resources from the chosen cloud provider (e.g., VMs with GPUs) to create a sky clusters cluster if one doesn't already exist.
Deploy Containerized Models: SkyPilot leverages the sky-serve.yaml
configuration to build Docker images containing your model code and dependencies. These images are then pushed to a container registry (e.g., Docker Hub) and deployed as containers within the Swarm cluster.
Load Balancing and Service Discovery: sky clusters provides built-in load balancing capabilities to distribute incoming requests across multiple model replicas, ensuring high availability and performance. Additionally, service discovery mechanisms allow models to find each other and communicate within the cluster.
SkyPilot - The Orchestrator
SkyPilot, the lightweight controller running on an EC2 instance, plays a central role in this infrastructure. Here's a deeper look at its functionalities:
API Gateway Integration: SkyPilot can be integrated with your API gateway or service mesh to receive incoming requests for model predictions.
Request Routing: SkyPilot analyzes the incoming request, considering factors like model compatibility, resource availability, and latency. Based on this analysis, SkyPilot selects the most suitable model instance within the appropriate sky clusters cluster.
Cloud Provider Interaction: SkyPilot interacts with the chosen cloud provider's APIs to manage resources required for the sky clusters cluster and model deployment.
Model Health Monitoring: SkyPilot can be configured to monitor the health and performance of deployed models. This might involve collecting metrics like model response times, resource utilization, and error rates.
Scalability Management: Based on pre-defined policies or real-time traffic patterns, SkyPilot can trigger the scaling of model deployments (adding or removing replicas) within the sky clusters cluster.
Advanced Considerations
This blog post has provided a foundational understanding of our serving model infrastructure. For infrastructure engineers seeking a deeper dive, here are some additional considerations:
Container Security: Explore container image scanning for vulnerabilities, enforcing least privilege principles within container runtime environments, and utilizing secrets management solutions for secure access to sensitive data.
Model Versioning and Rollbacks: Implement a model versioning strategy to track changes and facilitate rollbacks to previous versions if necessary.
A/B Testing: Integrate A/B testing frameworks to evaluate the performance of different model versions and configurations before full-scale deployment.
Auto-Scaling with Cloud Monitoring: Utilize cloud provider-specific monitoring services like Amazon CloudWatch or Azure Monitor to trigger auto-scaling of sky clusters clusters based on predefined metrics.
By understanding these technical aspects and considerations, infrastructure engineers can effectively manage and optimize our multi-cloud serving model infrastructure.
"},{"location":"swarms_cloud/architecture/#conclusion","title":"Conclusion","text":"This comprehensive exploration has shed light on the intricate workings of our serving model infrastructure. We've covered the journey of an API request, delved into the technical architecture with a focus on cloud considerations, sky clusters clusters, and SkyPilot's role as the orchestrator. We've also explored advanced considerations for infrastructure engineers seeking to further optimize and secure this multi-cloud environment.
This understanding empowers both users and infrastructure engineers to leverage this technology effectively for deploying and managing your machine learning models at scale.
"},{"location":"swarms_cloud/best_practices/","title":"Swarms API Best Practices Guide","text":"This comprehensive guide outlines production-grade best practices for using the Swarms API effectively. Learn how to choose the right swarm architecture, optimize costs, and implement robust error handling.
"},{"location":"swarms_cloud/best_practices/#quick-reference-cards","title":"Quick Reference Cards","text":"Swarm TypesApplication PatternsCost OptimizationService TiersIndustry SolutionsError HandlingAvailable Swarm Architectures
Swarm Type Best For Use CasesAgentRearrange
Dynamic workflows - Complex task decomposition- Adaptive processing- Multi-stage analysis- Dynamic resource allocation MixtureOfAgents
Diverse expertise - Cross-domain problems- Comprehensive analysis- Multi-perspective tasks- Research synthesis SpreadSheetSwarm
Data processing - Financial analysis- Data transformation- Batch calculations- Report generation SequentialWorkflow
Linear processes - Document processing- Step-by-step analysis- Quality control- Content pipeline ConcurrentWorkflow
Parallel tasks - Batch processing- Independent analyses- High-throughput needs- Multi-market analysis GroupChat
Collaborative solving - Brainstorming- Decision making- Problem solving- Strategy development MultiAgentRouter
Task distribution - Load balancing- Specialized processing- Resource optimization- Service routing AutoSwarmBuilder
Automated setup - Quick prototyping- Simple tasks- Testing- MVP development HiearchicalSwarm
Complex organization - Project management- Research analysis- Enterprise workflows- Team automation MajorityVoting
Consensus needs - Quality assurance- Decision validation- Risk assessment- Content moderation Specialized Application Configurations
Application Recommended Swarm Benefits Team AutomationHiearchicalSwarm
- Automated team coordination- Clear responsibility chain- Scalable team structure Research Pipeline SequentialWorkflow
- Structured research process- Quality control at each stage- Comprehensive output Trading System ConcurrentWorkflow
- Multi-market coverage- Real-time analysis- Risk distribution Content Factory MixtureOfAgents
- Automated content creation- Consistent quality- High throughput Advanced Cost Management Strategies
Strategy Implementation Impact Batch Processing Group related tasks 20-30% cost reduction Off-peak Usage Schedule for 8 PM - 6 AM PT 15-25% cost reduction Token Optimization Precise prompts, focused tasks 10-20% cost reduction Caching Store reusable results 30-40% cost reduction Agent Optimization Use minimum required agents 15-25% cost reduction Smart Routing Route to specialized agents 10-15% cost reduction Prompt Engineering Optimize input tokens 15-20% cost reduction Flex Processing Use flex tier for non-urgent tasks 75% cost reductionChoosing the Right Service Tier
Tier Best For Benefits Considerations Standard - Real-time processing- Time-sensitive tasks- Critical workflows - Immediate execution- Higher priority- Predictable timing - Higher cost- 5-min timeout Flex - Batch processing- Non-urgent tasks- Cost-sensitive workloads - 75% cost reduction- Longer timeouts- Auto-retries - Variable timing- Resource contentionIndustry-Specific Swarm Patterns
Industry Use Case Applications Finance Automated trading desk - Portfolio management- Risk assessment- Market analysis- Trading execution Healthcare Clinical workflow automation - Patient analysis- Diagnostic support- Treatment planning- Follow-up care Legal Legal document processing - Document review- Case analysis- Contract review- Compliance checks E-commerce E-commerce operations - Product management- Pricing optimization- Customer support- Inventory managementAdvanced Error Management Strategies
Error Code Strategy Recovery Pattern 400 Input Validation Pre-request validation with fallback 401 Auth Management Secure key rotation and storage 429 Rate Limiting Exponential backoff with queuing 500 Resilience Retry with circuit breaking 503 High Availability Multi-region redundancy 504 Timeout Handling Adaptive timeouts with partial results"},{"location":"swarms_cloud/best_practices/#choosing-the-right-swarm-architecture","title":"Choosing the Right Swarm Architecture","text":""},{"location":"swarms_cloud/best_practices/#decision-framework","title":"Decision Framework","text":"Use this framework to select the optimal swarm architecture for your use case:
Task Complexity Analysis
AutoSwarmBuilder
HiearchicalSwarm
or MultiAgentRouter
AgentRearrange
Workflow Pattern
SequentialWorkflow
ConcurrentWorkflow
GroupChat
Domain Requirements
MixtureOfAgents
SpreadSheetSwarm
MajorityVoting
Financial Applications
HiearchicalSwarm
MixtureOfAgents
ConcurrentWorkflow
SpreadSheetSwarm
Healthcare Applications
SequentialWorkflow
MajorityVoting
GroupChat
MultiAgentRouter
Legal Applications
SequentialWorkflow
MixtureOfAgents
HiearchicalSwarm
ConcurrentWorkflow
Recommended Patterns
Anti-patterns to Avoid
Typical Performance Metrics
Metric Target Range Warning Threshold Response Time < 2s (standard)< 15s (flex) > 5s (standard)> 30s (flex) Success Rate > 99% < 95% Cost per Task < $0.05 (standard)< $0.0125 (flex) > $0.10 (standard)> $0.025 (flex) Cache Hit Rate > 80% < 60% Error Rate < 1% > 5% Retry Rate (flex) < 10% > 30%"},{"location":"swarms_cloud/best_practices/#additional-resources","title":"Additional Resources","text":"Useful Links
The Swarm Agent API provides a powerful platform for managing multi-agent collaboration at scale and orchestrating swarms of LLM agents in the cloud. This document outlines the pricing model, how costs are calculated, and how to purchase and manage your credits.
Our pricing is designed to be transparent and cost-effective, enabling you to easily harness the full potential of your agents. Costs are based on:
Number of agents used
Input and output token usage
Execution time
The Swarm API operates on a credit-based system:
**Credits** are the currency used within the platform
1 credit = $1 USD
Credits can be purchased with USD or $swarms Solana tokens
Two types of credits:
Standard credits: purchased credits that never expire
Free credits: promotional credits that may have usage restrictions
To encourage efficient resource usage during off-peak hours, we offer significant discounts during California night-time hours:
Time Period (Pacific Time) Discount 8:00 PM to 6:00 AM 75% off token costs"},{"location":"swarms_cloud/chinese_api_pricing/#_6","title":"Cost Calculation","text":""},{"location":"swarms_cloud/chinese_api_pricing/#_7","title":"Formula","text":"The total cost for a swarm execution is calculated as follows:
Total Cost = (Number of Agents \u00d7 $0.01) + \n (Total Input Tokens / 1M \u00d7 $2.00 \u00d7 Number of Agents) +\n (Total Output Tokens / 1M \u00d7 $4.50 \u00d7 Number of Agents)\n
With the night-time discount applied:
Input Token Cost = Input Token Cost \u00d7 0.25\nOutput Token Cost = Output Token Cost \u00d7 0.25\n
"},{"location":"swarms_cloud/chinese_api_pricing/#_8","title":"Example Scenarios","text":""},{"location":"swarms_cloud/chinese_api_pricing/#1","title":"Scenario 1: Basic Workflow (Day-time)","text":"3 agents
\u603b\u517110,000\u4e2a\u8f93\u5165\u4ee4\u724c
\u603b\u517125,000\u4e2a\u8f93\u51fa\u4ee4\u724c
\u8ba1\u7b97\uff1a
\u4ee3\u7406\u6210\u672c\uff1a3 \u00d7 $0.01 = $0.03
\u8f93\u5165\u4ee4\u724c\u6210\u672c\uff1a(10,000 / 1,000,000) \u00d7 $2.00 \u00d7 3 = $0.06
\u8f93\u51fa\u4ee4\u724c\u6210\u672c\uff1a(25,000 / 1,000,000) \u00d7 $4.50 \u00d7 3 = $0.3375
\u603b\u6210\u672c\uff1a$0.4275
5\u4e2a\u4ee3\u7406
\u603b\u517150,000\u4e2a\u8f93\u5165\u4ee4\u724c
\u603b\u5171125,000\u4e2a\u8f93\u51fa\u4ee4\u724c
\u8ba1\u7b97\uff1a
\u4ee3\u7406\u6210\u672c\uff1a5 \u00d7 $0.01 = $0.05
\u8f93\u5165\u4ee4\u724c\u6210\u672c\uff1a(50,000 / 1,000,000) \u00d7 $2.00 \u00d7 5 \u00d7 0.25 = $0.125
\u8f93\u51fa\u4ee4\u724c\u6210\u672c\uff1a(125,000 / 1,000,000) \u00d7 $4.50 \u00d7 5 \u00d7 0.25 = $0.703125
\u603b\u6210\u672c\uff1a$0.878125
\u53ef\u4ee5\u901a\u8fc7\u6211\u4eec\u7684\u5e73\u53f0\u4ee5\u4e24\u79cd\u65b9\u5f0f\u8d2d\u4e70\u79ef\u5206\uff1a
\u6700\u4f4e\u8d2d\u4e70\u989d\uff1a$10
$swarms \u4ee3\u5e01\u652f\u4ed8
\u6211\u4eec\u5076\u5c14\u4f1a\u5411\u4ee5\u4e0b\u5bf9\u8c61\u63d0\u4f9b\u514d\u8d39\u79ef\u5206\uff1a
\u65b0\u7528\u6237\uff08\u6b22\u8fce\u5956\u52b1\uff09
\u4fc3\u9500\u671f\u95f4
\u6559\u80b2\u548c\u7814\u7a76\u76ee\u7684
\u5173\u4e8e\u514d\u8d39\u79ef\u5206\u7684\u8bf4\u660e\uff1a
\u5728\u6807\u51c6\u79ef\u5206\u4e4b\u524d\u4f7f\u7528
\u53ef\u80fd\u6709\u8fc7\u671f\u65e5\u671f
\u53ef\u80fd\u6709\u4f7f\u7528\u9650\u5236
\u901a\u8fc7\u6211\u4eec\u5168\u9762\u7684\u65e5\u5fd7\u548c\u62a5\u544a\u529f\u80fd\u8ddf\u8e2a\u60a8\u7684\u79ef\u5206\u4f7f\u7528\u60c5\u51b5\uff1a
API\u65e5\u5fd7
\u901a\u8fc7/v1/swarm/logs
\u7aef\u70b9\u8bbf\u95ee\u8be6\u7ec6\u65e5\u5fd7
\u67e5\u770b\u6bcf\u6b21\u6267\u884c\u7684\u6210\u672c\u660e\u7ec6
\u4eea\u8868\u677f
\u5b9e\u65f6\u79ef\u5206\u4f59\u989d\u663e\u793a
\u5386\u53f2\u4f7f\u7528\u56fe\u8868
\u8be6\u7ec6\u6210\u672c\u5206\u6790
\u53ef\u5728https://swarms.world/platform/dashboard\u8bbf\u95ee
\u95ee\uff1a\u662f\u5426\u6709\u6700\u4f4e\u79ef\u5206\u8d2d\u4e70\u8981\u6c42\uff1f \u7b54\uff1a\u662f\u7684\uff0c\u6700\u4f4e\u79ef\u5206\u8d2d\u4e70\u989d\u4e3a10\u7f8e\u5143\u7b49\u503c\u3002
\u95ee\uff1a\u79ef\u5206\u4f1a\u8fc7\u671f\u5417\uff1f \u7b54\uff1a\u6807\u51c6\u79ef\u5206\u4e0d\u4f1a\u8fc7\u671f\u3002\u514d\u8d39\u4fc3\u9500\u79ef\u5206\u53ef\u80fd\u6709\u8fc7\u671f\u65e5\u671f\u3002
\u95ee\uff1a\u591c\u95f4\u6298\u6263\u5982\u4f55\u5e94\u7528\uff1f \u7b54\uff1a\u7cfb\u7edf\u4f1a\u6839\u636e\u592a\u5e73\u6d0b\u65f6\u95f4\uff08America/Los_Angeles\uff09\u81ea\u52a8\u68c0\u6d4b\u6267\u884c\u65f6\u95f4\uff0c\u5e76\u5728\u665a\u4e0a8:00\u81f3\u65e9\u4e0a6:00\u4e4b\u95f4\u7684\u6267\u884c\u5e94\u752875%\u7684\u4ee4\u724c\u6210\u672c\u6298\u6263\u3002
\u95ee\uff1a\u5982\u679c\u6211\u5728\u6267\u884c\u8fc7\u7a0b\u4e2d\u79ef\u5206\u7528\u5b8c\u4e86\u4f1a\u600e\u6837\uff1f \u7b54\uff1a\u5982\u679c\u6ca1\u6709\u8db3\u591f\u7684\u79ef\u5206\uff0c\u6267\u884c\u5c06\u5931\u8d25\u5e76\u663e\u793a402 Payment Required\u9519\u8bef\u3002\u6211\u4eec\u5efa\u8bae\u7ef4\u6301\u9002\u5408\u60a8\u4f7f\u7528\u6a21\u5f0f\u7684\u79ef\u5206\u4f59\u989d\u3002
\u95ee\uff1a\u6211\u53ef\u4ee5\u83b7\u5f97\u672a\u4f7f\u7528\u79ef\u5206\u7684\u9000\u6b3e\u5417\uff1f \u7b54\uff1a\u8bf7\u8054\u7cfb\u6211\u4eec\u7684\u652f\u6301\u56e2\u961f\u5904\u7406\u672a\u4f7f\u7528\u79ef\u5206\u7684\u9000\u6b3e\u8bf7\u6c42\u3002
\u95ee\uff1a\u662f\u5426\u6709\u6279\u91cf\u6298\u6263\uff1f \u7b54\uff1a\u662f\u7684\uff0c\u8bf7\u8054\u7cfb\u6211\u4eec\u7684\u9500\u552e\u56e2\u961f\u4e86\u89e3\u4f01\u4e1a\u5b9a\u4ef7\u548c\u6279\u91cf\u6298\u6263\u3002
"},{"location":"swarms_cloud/chinese_api_pricing/#_13","title":"References","text":"Swarm API Documentation
Account Management Portal
Swarm Types Reference
Token Usage Guide
API Reference
For additional questions or custom pricing options, contact our support team at kye@swarms.world
"},{"location":"swarms_cloud/cloud_run/","title":"Hosting Agents on Google Cloud Run","text":"This documentation provides a detailed, step-by-step guide to hosting your agents on Google Cloud Run. It uses a well-structured project setup: a Dockerfile at the root level, a folder dedicated to your API file, and a requirements.txt
file to manage all dependencies. Following this guide ensures your deployment is scalable, efficient, and easy to maintain.
Your project directory should adhere to the following structure to ensure compatibility and ease of deployment:
.\n\u251c\u2500\u2500 Dockerfile\n\u251c\u2500\u2500 requirements.txt\n\u2514\u2500\u2500 api/\n \u2514\u2500\u2500 api.py\n
Each component serves a specific purpose in the deployment pipeline, ensuring modularity and maintainability.
"},{"location":"swarms_cloud/cloud_run/#step-1-prerequisites","title":"Step 1: Prerequisites","text":"Before you begin, make sure to satisfy the following prerequisites to avoid issues during deployment:
Enable billing for your project. Billing is necessary for accessing Cloud Run services.
Install Google Cloud SDK:
Follow the installation guide to set up the Google Cloud SDK on your local machine.
Install Docker:
Download and install Docker by following the official Docker installation guide. Docker is crucial for containerizing your application.
Create a Google Cloud Project:
Navigate to the Google Cloud Console and create a new project. Assign it a meaningful name and note the Project ID, as it will be used throughout this guide.
Enable Required APIs: In the Cloud Console, enable the Cloud Run and Artifact Registry APIs for your project, as both services are used in the deployment steps below.
api/api.py
","text":"This is the main Python script where you define your Swarms agents and expose an API endpoint for interacting with them. Here\u2019s an example:
from flask import Flask, request, jsonify\nfrom swarms import Agent # Assuming `swarms` is the framework you're using\n\napp = Flask(__name__)\n\n# Example Swarm agent\nagent = Agent(\n agent_name=\"Stock-Analysis-Agent\",\n model_name=\"gpt-4o-mini\",\n max_loops=\"auto\",\n interactive=True,\n streaming_on=True,\n)\n\n@app.route('/run-agent', methods=['POST'])\ndef run_agent():\n data = request.json\n task = data.get('task', '')\n result = agent.run(task)\n return jsonify({\"result\": result})\n\nif __name__ == '__main__':\n app.run(host='0.0.0.0', port=8080)\n
This example sets up a basic API that listens for POST requests, processes a task using a Swarm agent, and returns the result as a JSON response. Customize it based on your agent\u2019s functionality.
"},{"location":"swarms_cloud/cloud_run/#2-requirementstxt","title":"2. requirements.txt
","text":"This file lists all Python dependencies required for your project. Example:
flask\nswarms\n# add any other dependencies here\n
Be sure to include any additional libraries your agents rely on. Keeping this file up to date ensures smooth dependency management during deployment.
"},{"location":"swarms_cloud/cloud_run/#3-dockerfile","title":"3. Dockerfile
","text":"The Dockerfile specifies how your application is containerized. Below is a sample Dockerfile for your setup:
# Use an official Python runtime as the base image\nFROM python:3.10-slim\n\n# Set the working directory\nWORKDIR /app\n\n# Copy requirements.txt and install dependencies\nCOPY requirements.txt .\nRUN pip install --no-cache-dir -r requirements.txt\n\n# Copy the application code\nCOPY api/ ./api/\n\n# Expose port 8080 (Cloud Run default port)\nEXPOSE 8080\n\n# Run the application\nCMD [\"python\", \"api/api.py\"]\n
This Dockerfile ensures your application is containerized with minimal overhead, focusing on slim images for efficiency.
"},{"location":"swarms_cloud/cloud_run/#step-3-deploying-to-google-cloud-run","title":"Step 3: Deploying to Google Cloud Run","text":""},{"location":"swarms_cloud/cloud_run/#1-authenticate-with-google-cloud","title":"1. Authenticate with Google Cloud","text":"Log in to your Google Cloud account by running:
gcloud auth login\n
Set the active project to match your deployment target:
gcloud config set project [PROJECT_ID]\n
Replace [PROJECT_ID]
with your actual Project ID.
Use Google Cloud's Artifact Registry to store and manage your Docker image. Follow these steps:
gcloud artifacts repositories create my-repo --repository-format=Docker --location=us-central1\n
gcloud auth configure-docker us-central1-docker.pkg.dev\n
docker build -t us-central1-docker.pkg.dev/[PROJECT_ID]/my-repo/my-image .\n
docker push us-central1-docker.pkg.dev/[PROJECT_ID]/my-repo/my-image\n
"},{"location":"swarms_cloud/cloud_run/#3-deploy-to-cloud-run","title":"3. Deploy to Cloud Run","text":"Deploy the application to Cloud Run with the following command:
gcloud run deploy my-agent-service \\\n --image us-central1-docker.pkg.dev/[PROJECT_ID]/my-repo/my-image \\\n --platform managed \\\n --region us-central1 \\\n --allow-unauthenticated\n
Key points: - Replace [PROJECT_ID]
with your actual Project ID. - The --allow-unauthenticated
flag makes the service publicly accessible. Exclude it to restrict access.
Once the deployment is complete, test the service:
Use curl or Postman to send a request. Example:curl -X POST [CLOUD_RUN_URL]/run-agent \\\n -H \"Content-Type: application/json\" \\\n -d '{\"task\": \"example task\"}'\n
This tests whether your agent processes the task correctly and returns the expected output.
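The same smoke test can be scripted in Python; this is a hypothetical helper (substitute your actual Cloud Run URL), targeting the /run-agent route defined in api/api.py above:

```python
import requests

def run_agent_task(base_url: str, task: str) -> dict:
    """POST a task to the /run-agent endpoint and return the JSON result."""
    response = requests.post(f"{base_url}/run-agent", json={"task": task})
    response.raise_for_status()
    return response.json()

# Example (replace with your deployed Cloud Run URL):
# result = run_agent_task("https://my-agent-service-xyz-uc.a.run.app", "example task")
```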
"},{"location":"swarms_cloud/cloud_run/#step-5-updating-the-service","title":"Step 5: Updating the Service","text":"To apply changes to your application:
docker build -t us-central1-docker.pkg.dev/[PROJECT_ID]/my-repo/my-image .\ndocker push us-central1-docker.pkg.dev/[PROJECT_ID]/my-repo/my-image\n
gcloud run deploy my-agent-service \\\n --image us-central1-docker.pkg.dev/[PROJECT_ID]/my-repo/my-image\n
This ensures the latest version of your application is live.
"},{"location":"swarms_cloud/cloud_run/#troubleshooting","title":"Troubleshooting","text":"If the deployment fails or the service misbehaves, inspect the logs:gcloud logging read --project=[PROJECT_ID]\n
"},{"location":"swarms_cloud/cloud_run/#conclusion","title":"Conclusion","text":"By following this comprehensive guide, you can deploy your agents on Google Cloud Run with ease. This method leverages Docker for containerization and Google Cloud services for seamless scalability and management. With a robust setup like this, you can focus on enhancing your agents\u2019 capabilities rather than worrying about deployment challenges.
"},{"location":"swarms_cloud/launch/","title":"Swarms Cloud API Client Documentation","text":""},{"location":"swarms_cloud/launch/#overview","title":"Overview","text":"The Swarms Cloud API Client is a production-grade Python library for interacting with the Swarms Cloud Agent API. It provides a comprehensive interface for managing, executing, and monitoring cloud-based agents.
"},{"location":"swarms_cloud/launch/#installation","title":"Installation","text":"pip install swarms-cloud\n
"},{"location":"swarms_cloud/launch/#quick-start","title":"Quick Start","text":"from swarms_cloud import SwarmCloudAPI, AgentCreate\n\n# Initialize the client\nclient = SwarmCloudAPI(\n base_url=\"https://swarmcloud-285321057562.us-central1.run.app\",\n api_key=\"your_api_key_here\"\n)\n\n# Create an agent\nagent_data = AgentCreate(\n name=\"TranslateAgent\",\n description=\"Translates text between languages\",\n code=\"\"\"\n def main(request, store):\n text = request.payload.get('text', '')\n return f'Translated: {text}'\n \"\"\",\n requirements=\"requests==2.25.1\",\n envs=\"DEBUG=True\"\n)\n\nnew_agent = client.create_agent(agent_data)\nprint(f\"Created agent with ID: {new_agent.id}\")\n
"},{"location":"swarms_cloud/launch/#client-configuration","title":"Client Configuration","text":""},{"location":"swarms_cloud/launch/#constructor-parameters","title":"Constructor Parameters","text":"Parameter Type Required Default Description base_url str No https://swarmcloud-285321057562.us-central1.run.app The base URL of the SwarmCloud API api_key str Yes None Your SwarmCloud API key timeout float No 10.0 Request timeout in seconds"},{"location":"swarms_cloud/launch/#data-models","title":"Data Models","text":""},{"location":"swarms_cloud/launch/#agentcreate","title":"AgentCreate","text":"Model for creating new agents.
Field Type Required Default Description name str Yes - Name of the agent description str No None Description of the agent's purpose code str Yes - Python code that defines the agent's behavior requirements str No None Python package requirements (pip format) envs str No None Environment variables for the agent autoscaling bool No False Enable/disable concurrent execution scaling"},{"location":"swarms_cloud/launch/#agentupdate","title":"AgentUpdate","text":"Model for updating existing agents.
Field Type Required Default Description name str No None Updated name of the agent description str No None Updated description code str No None Updated Python code requirements str No None Updated package requirements autoscaling bool No None Updated autoscaling setting"},{"location":"swarms_cloud/launch/#api-methods","title":"API Methods","text":""},{"location":"swarms_cloud/launch/#list-agents","title":"List Agents","text":"Retrieve all available agents.
agents = client.list_agents()\nfor agent in agents:\n print(f\"Agent: {agent.name} (ID: {agent.id})\")\n
Returns: List[AgentOut]
"},{"location":"swarms_cloud/launch/#create-agent","title":"Create Agent","text":"Create a new agent with the specified configuration.
agent_data = AgentCreate(\n name=\"DataProcessor\",\n description=\"Processes incoming data streams\",\n code=\"\"\"\n def main(request, store):\n data = request.payload.get('data', [])\n return {'processed': len(data)}\n \"\"\",\n requirements=\"pandas==1.4.0\\nnumpy==1.21.0\",\n envs=\"PROCESSING_MODE=fast\",\n autoscaling=True\n)\n\nnew_agent = client.create_agent(agent_data)\n
Returns: AgentOut
"},{"location":"swarms_cloud/launch/#get-agent","title":"Get Agent","text":"Retrieve details of a specific agent.
agent = client.get_agent(\"agent_id_here\")\nprint(f\"Agent details: {agent}\")\n
Parameters: - agent_id (str): The unique identifier of the agent
Returns: AgentOut
"},{"location":"swarms_cloud/launch/#update-agent","title":"Update Agent","text":"Update an existing agent's configuration.
update_data = AgentUpdate(\n name=\"UpdatedProcessor\",\n description=\"Enhanced data processing capabilities\",\n code=\"def main(request, store):\\n return {'status': 'updated'}\"\n)\n\nupdated_agent = client.update_agent(\"agent_id_here\", update_data)\n
Parameters: - agent_id (str): The unique identifier of the agent - update (AgentUpdate): The update data
Returns: AgentOut
"},{"location":"swarms_cloud/launch/#execute-agent","title":"Execute Agent","text":"Manually execute an agent with optional payload data.
# Execute with payload\nresult = client.execute_agent(\n \"agent_id_here\",\n payload={\"text\": \"Hello, World!\"}\n)\n\n# Execute without payload\nresult = client.execute_agent(\"agent_id_here\")\n
Parameters: - agent_id (str): The unique identifier of the agent - payload (Optional[Dict[str, Any]]): Execution payload data
Returns: Dict[str, Any]
"},{"location":"swarms_cloud/launch/#get-agent-history","title":"Get Agent History","text":"Retrieve the execution history and logs for an agent.
history = client.get_agent_history(\"agent_id_here\")\nfor execution in history.executions:\n print(f\"[{execution.timestamp}] {execution.log}\")\n
Parameters: - agent_id (str): The unique identifier of the agent
Returns: AgentExecutionHistory
"},{"location":"swarms_cloud/launch/#batch-execute-agents","title":"Batch Execute Agents","text":"Execute multiple agents simultaneously with the same payload.
# Get list of agents\nagents = client.list_agents()\n\n# Execute batch with payload\nresults = client.batch_execute_agents(\n agents=agents[:3], # Execute first three agents\n payload={\"data\": \"test\"}\n)\n\nprint(f\"Batch execution results: {results}\")\n
Parameters: - agents (List[AgentOut]): List of agents to execute - payload (Optional[Dict[str, Any]]): Shared execution payload
Returns: List[Any]
"},{"location":"swarms_cloud/launch/#health-check","title":"Health Check","text":"Check the API's health status.
status = client.health()\nprint(f\"API Status: {status}\")\n
Returns: Dict[str, Any]
"},{"location":"swarms_cloud/launch/#error-handling","title":"Error Handling","text":"The client uses exception handling to manage various error scenarios:
from swarms_cloud import SwarmCloudAPI\nimport httpx\n\ntry:\n client = SwarmCloudAPI(api_key=\"your_api_key_here\")\n agents = client.list_agents()\nexcept httpx.HTTPError as http_err:\n print(f\"HTTP error occurred: {http_err}\")\nexcept Exception as err:\n print(f\"An unexpected error occurred: {err}\")\nfinally:\n client.close()\n
"},{"location":"swarms_cloud/launch/#context-manager-support","title":"Context Manager Support","text":"The client can be used with Python's context manager:
with SwarmCloudAPI(api_key=\"your_api_key_here\") as client:\n status = client.health()\n print(f\"API Status: {status}\")\n # Client automatically closes after the with block\n
"},{"location":"swarms_cloud/launch/#best-practices","title":"Best Practices","text":"Always close the client when finished:
client = SwarmCloudAPI(api_key=\"your_api_key_here\")\ntry:\n # Your code here\n ...\nfinally:\n client.close()\n
Use context managers for automatic cleanup:
with SwarmCloudAPI(api_key=\"your_api_key_here\") as client:\n # Your code here\n ...\n
Handle errors appropriately:
try:\n result = client.execute_agent(\"agent_id\", payload={\"data\": \"test\"})\nexcept httpx.HTTPError as e:\n logger.error(f\"HTTP error: {e}\")\n # Handle error appropriately\n
Set appropriate timeouts for your use case:
client = SwarmCloudAPI(\n api_key=\"your_api_key_here\",\n timeout=30.0 # Longer timeout for complex operations\n)\n
Here's a complete example showcasing various features of the client:
from swarms_cloud import SwarmCloudAPI, AgentCreate, AgentUpdate\nimport httpx\n\ndef main():\n with SwarmCloudAPI(api_key=\"your_api_key_here\") as client:\n # Create an agent\n agent_data = AgentCreate(\n name=\"DataAnalyzer\",\n description=\"Analyzes incoming data streams\",\n code=\"\"\"\n def main(request, store):\n data = request.payload.get('data', [])\n return {\n 'count': len(data),\n 'summary': 'Data processed successfully'\n }\n \"\"\",\n requirements=\"pandas==1.4.0\",\n autoscaling=True\n )\n\n try:\n # Create the agent\n new_agent = client.create_agent(agent_data)\n print(f\"Created agent: {new_agent.name} (ID: {new_agent.id})\")\n\n # Execute the agent\n result = client.execute_agent(\n new_agent.id,\n payload={\"data\": [1, 2, 3, 4, 5]}\n )\n print(f\"Execution result: {result}\")\n\n # Update the agent\n update_data = AgentUpdate(\n description=\"Enhanced data analysis capabilities\"\n )\n updated_agent = client.update_agent(new_agent.id, update_data)\n print(f\"Updated agent: {updated_agent.name}\")\n\n # Get execution history\n history = client.get_agent_history(new_agent.id)\n print(f\"Execution history: {history}\")\n\n except httpx.HTTPError as e:\n print(f\"HTTP error occurred: {e}\")\n except Exception as e:\n print(f\"Unexpected error: {e}\")\n\nif __name__ == \"__main__\":\n main()\n
"},{"location":"swarms_cloud/launch/#logging","title":"Logging","text":"The client uses the loguru
library for logging. You can configure the logging level and format:
from loguru import logger\n\n# Configure logging\nlogger.add(\"swarmcloud.log\", rotation=\"500 MB\")\n\nclient = SwarmCloudAPI(api_key=\"your_api_key_here\")\n
"},{"location":"swarms_cloud/launch/#performance-considerations","title":"Performance Considerations","text":"Connection Reuse: The client reuses HTTP connections by default, improving performance for multiple requests.
Timeout Configuration: Set appropriate timeouts based on your use case:
client = SwarmCloudAPI(\n api_key=\"your_api_key_here\",\n timeout=5.0 # Shorter timeout for time-sensitive operations\n)\n
Batch Operations: Use batch_execute_agents for multiple agent executions:
results = client.batch_execute_agents(\n agents=agents,\n payload=shared_payload\n)\n
The client respects API rate limits but does not implement retry logic. Implement your own retry mechanism if needed:
from tenacity import retry, stop_after_attempt, wait_exponential\n\n@retry(stop=stop_after_attempt(3), wait=wait_exponential(multiplier=1, min=4, max=10))\ndef execute_with_retry(client, agent_id, payload):\n return client.execute_agent(agent_id, payload)\n
"},{"location":"swarms_cloud/launch/#thread-safety","title":"Thread Safety","text":"The client is not thread-safe by default. For concurrent usage, create separate client instances for each thread or implement appropriate synchronization mechanisms.
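One way to keep separate per-thread instances is threading.local; this is a generic sketch, where `make_client` stands in for a factory such as `lambda: SwarmCloudAPI(api_key=...)`:

```python
import threading

# Thread-local storage: each thread sees its own `client` attribute.
_local = threading.local()

def get_client(make_client):
    """Return a client private to the calling thread, creating it on first use.

    `make_client` is a zero-argument factory (hypothetical name), e.g.
    lambda: SwarmCloudAPI(api_key="your_api_key_here").
    """
    if not hasattr(_local, "client"):
        _local.client = make_client()
    return _local.client
```

Each thread that calls `get_client` gets its own instance, so no cross-thread synchronization is needed.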
"},{"location":"swarms_cloud/mcp/","title":"Swarms API as MCP","text":"Set SWARMS_API_KEY
in .env
# server.py\nfrom datetime import datetime\nimport os\nfrom typing import Any, Dict, List, Optional\n\nimport requests\nimport httpx\nfrom fastmcp import FastMCP\nfrom pydantic import BaseModel, Field\nfrom swarms import SwarmType\nfrom dotenv import load_dotenv\n\nload_dotenv()\n\nclass AgentSpec(BaseModel):\n agent_name: Optional[str] = Field(\n description=\"The unique name assigned to the agent, which identifies its role and functionality within the swarm.\",\n )\n description: Optional[str] = Field(\n description=\"A detailed explanation of the agent's purpose, capabilities, and any specific tasks it is designed to perform.\",\n )\n system_prompt: Optional[str] = Field(\n description=\"The initial instruction or context provided to the agent, guiding its behavior and responses during execution.\",\n )\n model_name: Optional[str] = Field(\n default=\"gpt-4o-mini\",\n description=\"The name of the AI model that the agent will utilize for processing tasks and generating outputs. For example: gpt-4o, gpt-4o-mini, openai/o3-mini\",\n )\n auto_generate_prompt: Optional[bool] = Field(\n default=False,\n description=\"A flag indicating whether the agent should automatically create prompts based on the task requirements.\",\n )\n max_tokens: Optional[int] = Field(\n default=8192,\n description=\"The maximum number of tokens that the agent is allowed to generate in its responses, limiting output length.\",\n )\n temperature: Optional[float] = Field(\n default=0.5,\n description=\"A parameter that controls the randomness of the agent's output; lower values result in more deterministic responses.\",\n )\n role: Optional[str] = Field(\n default=\"worker\",\n description=\"The designated role of the agent within the swarm, which influences its behavior and interaction with other agents.\",\n )\n max_loops: Optional[int] = Field(\n default=1,\n description=\"The maximum number of times the agent is allowed to repeat its task, enabling iterative processing if necessary.\",\n 
)\n # New fields for RAG functionality\n rag_collection: Optional[str] = Field(\n None,\n description=\"The Qdrant collection name for RAG functionality. If provided, this agent will perform RAG queries.\",\n )\n rag_documents: Optional[List[str]] = Field(\n None,\n description=\"Documents to ingest into the Qdrant collection for RAG. (List of text strings)\",\n )\n tools: Optional[List[Dict[str, Any]]] = Field(\n None,\n description=\"A dictionary of tools that the agent can use to complete its task.\",\n )\n\n\nclass AgentCompletion(BaseModel):\n \"\"\"\n Configuration for a single agent that works together as a swarm to accomplish tasks.\n \"\"\"\n\n agent: AgentSpec = Field(\n ...,\n description=\"The agent to run.\",\n )\n task: Optional[str] = Field(\n ...,\n description=\"The task to run.\",\n )\n img: Optional[str] = Field(\n None,\n description=\"An optional image URL that may be associated with the swarm's task or representation.\",\n )\n output_type: Optional[str] = Field(\n \"list\",\n description=\"The type of output to return.\",\n )\n\n\nclass AgentCompletionResponse(BaseModel):\n \"\"\"\n Response from an agent completion.\n \"\"\"\n\n agent_id: str = Field(\n ...,\n description=\"The unique identifier for the agent that completed the task.\",\n )\n agent_name: str = Field(\n ...,\n description=\"The name of the agent that completed the task.\",\n )\n agent_description: str = Field(\n ...,\n description=\"The description of the agent that completed the task.\",\n )\n messages: Any = Field(\n ...,\n description=\"The messages from the agent completion.\",\n )\n\n cost: Dict[str, Any] = Field(\n ...,\n description=\"The cost of the agent completion.\",\n )\n\n\nclass Agents(BaseModel):\n \"\"\"Configuration for a collection of agents that work together as a swarm to accomplish tasks.\"\"\"\n\n agents: List[AgentSpec] = Field(\n description=\"A list containing the specifications of each agent that will participate in the swarm, detailing their roles 
and functionalities.\"\n )\n\n\nclass ScheduleSpec(BaseModel):\n scheduled_time: datetime = Field(\n ...,\n description=\"The exact date and time (in UTC) when the swarm is scheduled to execute its tasks.\",\n )\n timezone: Optional[str] = Field(\n \"UTC\",\n description=\"The timezone in which the scheduled time is defined, allowing for proper scheduling across different regions.\",\n )\n\n\nclass SwarmSpec(BaseModel):\n name: Optional[str] = Field(\n None,\n description=\"The name of the swarm, which serves as an identifier for the group of agents and their collective task.\",\n max_length=100,\n )\n description: Optional[str] = Field(\n None,\n description=\"A comprehensive description of the swarm's objectives, capabilities, and intended outcomes.\",\n )\n agents: Optional[List[AgentSpec]] = Field(\n None,\n description=\"A list of agents or specifications that define the agents participating in the swarm.\",\n )\n max_loops: Optional[int] = Field(\n default=1,\n description=\"The maximum number of execution loops allowed for the swarm, enabling repeated processing if needed.\",\n )\n swarm_type: Optional[SwarmType] = Field(\n None,\n description=\"The classification of the swarm, indicating its operational style and methodology.\",\n )\n rearrange_flow: Optional[str] = Field(\n None,\n description=\"Instructions on how to rearrange the flow of tasks among agents, if applicable.\",\n )\n task: Optional[str] = Field(\n None,\n description=\"The specific task or objective that the swarm is designed to accomplish.\",\n )\n img: Optional[str] = Field(\n None,\n description=\"An optional image URL that may be associated with the swarm's task or representation.\",\n )\n return_history: Optional[bool] = Field(\n True,\n description=\"A flag indicating whether the swarm should return its execution history along with the final output.\",\n )\n rules: Optional[str] = Field(\n None,\n description=\"Guidelines or constraints that govern the behavior and interactions of the 
agents within the swarm.\",\n )\n schedule: Optional[ScheduleSpec] = Field(\n None,\n description=\"Details regarding the scheduling of the swarm's execution, including timing and timezone information.\",\n )\n tasks: Optional[List[str]] = Field(\n None,\n description=\"A list of tasks that the swarm should complete.\",\n )\n messages: Optional[List[Dict[str, Any]]] = Field(\n None,\n description=\"A list of messages that the swarm should complete.\",\n )\n # rag_on: Optional[bool] = Field(\n # None,\n # description=\"A flag indicating whether the swarm should use RAG.\",\n # )\n # collection_name: Optional[str] = Field(\n # None,\n # description=\"The name of the collection to use for RAG.\",\n # )\n stream: Optional[bool] = Field(\n False,\n description=\"A flag indicating whether the swarm should stream its output.\",\n )\n\n\nclass SwarmCompletionResponse(BaseModel):\n \"\"\"\n Response from a swarm completion.\n \"\"\"\n\n status: str = Field(..., description=\"The status of the swarm completion.\")\n swarm_name: str = Field(..., description=\"The name of the swarm.\")\n description: str = Field(..., description=\"Description of the swarm.\")\n swarm_type: str = Field(..., description=\"The type of the swarm.\")\n task: str = Field(\n ..., description=\"The task that the swarm is designed to accomplish.\"\n )\n output: List[Dict[str, Any]] = Field(\n ..., description=\"The output generated by the swarm.\"\n )\n number_of_agents: int = Field(\n ..., description=\"The number of agents involved in the swarm.\"\n )\n # \"input_config\": Optional[Dict[str, Any]] = Field(None, description=\"The input configuration for the swarm.\")\n\n\nBASE_URL = \"https://swarms-api-285321057562.us-east1.run.app\"\n\n\n# Create an MCP server\nmcp = FastMCP(\"swarms-api\")\n\n\n# Add an addition tool\n@mcp.tool(name=\"swarm_completion\", description=\"Run a swarm completion.\")\ndef swarm_completion(swarm: SwarmSpec) -> Dict[str, Any]:\n api_key = os.getenv(\"SWARMS_API_KEY\")\n 
headers = {\"x-api-key\": api_key, \"Content-Type\": \"application/json\"}\n\n payload = swarm.model_dump()\n\n response = requests.post(f\"{BASE_URL}/v1/swarm/completions\", json=payload, headers=headers)\n\n return response.json()\n\n@mcp.tool(name=\"swarms_available\", description=\"Get the list of available swarms.\")\nasync def swarms_available() -> Any:\n \"\"\"\n Get the list of available swarms.\n \"\"\"\n headers = {\"Content-Type\": \"application/json\"}\n\n async with httpx.AsyncClient() as client:\n response = await client.get(f\"{BASE_URL}/v1/models/available\", headers=headers)\n response.raise_for_status() # Raise an error for bad responses\n return response.json()\n\n\nif __name__ == \"__main__\":\n mcp.run(transport=\"sse\")\n
"},{"location":"swarms_cloud/mcp/#client-side","title":"Client side","text":"import asyncio\nfrom fastmcp import Client\n\nswarm_config = {\n \"name\": \"Simple Financial Analysis\",\n \"description\": \"A swarm to analyze financial data\",\n \"agents\": [\n {\n \"agent_name\": \"Data Analyzer\",\n \"description\": \"Looks at financial data\",\n \"system_prompt\": \"Analyze the data.\",\n \"model_name\": \"gpt-4o\",\n \"role\": \"worker\",\n \"max_loops\": 1,\n \"max_tokens\": 1000,\n \"temperature\": 0.5,\n \"auto_generate_prompt\": False,\n },\n {\n \"agent_name\": \"Risk Analyst\",\n \"description\": \"Checks risk levels\",\n \"system_prompt\": \"Evaluate the risks.\",\n \"model_name\": \"gpt-4o\",\n \"role\": \"worker\",\n \"max_loops\": 1,\n \"max_tokens\": 1000,\n \"temperature\": 0.5,\n \"auto_generate_prompt\": False,\n },\n {\n \"agent_name\": \"Strategy Checker\",\n \"description\": \"Validates strategies\",\n \"system_prompt\": \"Review the strategy.\",\n \"model_name\": \"gpt-4o\",\n \"role\": \"worker\",\n \"max_loops\": 1,\n \"max_tokens\": 1000,\n \"temperature\": 0.5,\n \"auto_generate_prompt\": False,\n },\n ],\n \"max_loops\": 1,\n \"swarm_type\": \"SequentialWorkflow\",\n \"task\": \"Analyze the financial data and provide insights.\",\n \"return_history\": False, # Added required field\n \"stream\": False, # Added required field\n \"rules\": None, # Added optional field\n \"img\": None, # Added optional field\n}\n\n\nasync def swarm_completion():\n \"\"\"Connect to a server over SSE and fetch available swarms.\"\"\"\n\n async with Client(\n transport=\"http://localhost:8000/sse\"\n ) as client:\n # Basic connectivity testing\n # print(\"Ping check:\", await client.ping())\n # print(\"Available tools:\", await client.list_tools())\n # print(\"Swarms available:\", await client.call_tool(\"swarms_available\", None))\n result = await client.call_tool(\"swarm_completion\", {\"swarm\": swarm_config})\n print(\"Swarm completion:\", result)\n\n\n# 
Execute the function\nif __name__ == \"__main__\":\n asyncio.run(swarm_completion())\n
"},{"location":"swarms_cloud/mcs_api/","title":"Medical Coder Swarm API Documentation","text":"Base URL: https://mcs-285321057562.us-central1.run.app
Authentication details will be provided by the MCS team. Contact support for API credentials.
"},{"location":"swarms_cloud/mcs_api/#rate-limits","title":"Rate Limits","text":"Endpoint GET Rate Limit StatusGET /rate-limits
Returns current rate limit status for your IP address"},{"location":"swarms_cloud/mcs_api/#endpoints","title":"Endpoints","text":""},{"location":"swarms_cloud/mcs_api/#health-check","title":"Health Check","text":"Check if the API is operational.
Method Endpoint DescriptionGET
/health
Returns 200 OK if service is running"},{"location":"swarms_cloud/mcs_api/#run-medical-coder","title":"Run Medical Coder","text":"Process a single patient case through the Medical Coder Swarm.
Method Endpoint DescriptionPOST
/v1/medical-coder/run
Process a single patient case Request Body Parameters:
Parameter Type Required Description patient_id string Yes Unique identifier for the patient case_description string Yes Medical case details to be processedResponse Schema:
Field Type Description patient_id string Patient identifier case_data string Processed case data"},{"location":"swarms_cloud/mcs_api/#run-batch-medical-coder","title":"Run Batch Medical Coder","text":"Process multiple patient cases in a single request.
Method Endpoint DescriptionPOST
/v1/medical-coder/run-batch
Process multiple patient cases Request Body Parameters:
Parameter Type Required Description cases array Yes Array of PatientCase objects"},{"location":"swarms_cloud/mcs_api/#get-patient-data","title":"Get Patient Data","text":"Retrieve data for a specific patient.
Method Endpoint DescriptionGET
/v1/medical-coder/patient/{patient_id}
Get patient data by ID Path Parameters:
Parameter Type Required Description patient_id string Yes Patient identifier"},{"location":"swarms_cloud/mcs_api/#get-all-patients","title":"Get All Patients","text":"Retrieve data for all patients.
Method Endpoint DescriptionGET
/v1/medical-coder/patients
Get all patient data"},{"location":"swarms_cloud/mcs_api/#code-examples","title":"Code Examples","text":""},{"location":"swarms_cloud/mcs_api/#python","title":"Python","text":"import requests\nimport json\n\nclass MCSClient:\n def __init__(self, base_url=\"https://mcs.swarms.ai\", api_key=None):\n self.base_url = base_url\n self.headers = {\n \"Content-Type\": \"application/json\",\n \"Authorization\": f\"Bearer {api_key}\" if api_key else None\n }\n\n def run_medical_coder(self, patient_id, case_description):\n endpoint = f\"{self.base_url}/v1/medical-coder/run\"\n payload = {\n \"patient_id\": patient_id,\n \"case_description\": case_description\n }\n response = requests.post(endpoint, json=payload, headers=self.headers)\n return response.json()\n\n def run_batch(self, cases):\n endpoint = f\"{self.base_url}/v1/medical-coder/run-batch\"\n payload = {\"cases\": cases}\n response = requests.post(endpoint, json=payload, headers=self.headers)\n return response.json()\n\n# Usage example\nclient = MCSClient(api_key=\"your_api_key\")\nresult = client.run_medical_coder(\"P123\", \"Patient presents with...\")\n
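The MCSClient example above omits the GET endpoints; the patient-retrieval routes from the tables can be sketched as standalone helpers (a hedged addition, reusing the same base URL and headers as the client):

```python
import requests

def get_patient(base_url: str, patient_id: str, headers: dict) -> dict:
    """GET /v1/medical-coder/patient/{patient_id} -- fetch one patient's data."""
    endpoint = f"{base_url}/v1/medical-coder/patient/{patient_id}"
    response = requests.get(endpoint, headers=headers)
    return response.json()

def get_all_patients(base_url: str, headers: dict) -> dict:
    """GET /v1/medical-coder/patients -- fetch data for all patients."""
    endpoint = f"{base_url}/v1/medical-coder/patients"
    response = requests.get(endpoint, headers=headers)
    return response.json()
```

These could equally be added as methods on the MCSClient class shown above.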
"},{"location":"swarms_cloud/mcs_api/#nextjs-typescript","title":"Next.js (TypeScript)","text":"// types.ts\ninterface PatientCase {\n patient_id: string;\n case_description: string;\n}\n\ninterface QueryResponse {\n patient_id: string;\n case_data: string;\n}\n\n// api.ts\nexport class MCSApi {\n private baseUrl: string;\n private apiKey: string;\n\n constructor(apiKey: string, baseUrl = 'https://mcs.swarms.ai') {\n this.baseUrl = baseUrl;\n this.apiKey = apiKey;\n }\n\n private async fetchWithAuth(endpoint: string, options: RequestInit = {}) {\n const response = await fetch(`${this.baseUrl}${endpoint}`, {\n ...options,\n headers: {\n 'Content-Type': 'application/json',\n 'Authorization': `Bearer ${this.apiKey}`,\n ...options.headers,\n },\n });\n if (!response.ok) {\n throw new Error(`MCS API request failed: ${response.status}`);\n }\n return response.json();\n }\n\n async runMedicalCoder(patientCase: PatientCase): Promise<QueryResponse> {\n return this.fetchWithAuth('/v1/medical-coder/run', {\n method: 'POST',\n body: JSON.stringify(patientCase),\n });\n }\n\n async getPatientData(patientId: string): Promise<QueryResponse> {\n return this.fetchWithAuth(`/v1/medical-coder/patient/${patientId}`);\n }\n}\n\n// Usage in component (server-side; the env variable may be undefined, so default it)\nconst mcsApi = new MCSApi(process.env.MCS_API_KEY ?? '');\n\nexport async function ProcessPatientCase({ patientId, caseDescription }: { patientId: string; caseDescription: string }) {\n const result = await mcsApi.runMedicalCoder({\n patient_id: patientId,\n case_description: caseDescription,\n });\n return result;\n}\n
"},{"location":"swarms_cloud/mcs_api/#go","title":"Go","text":"package mcs\n\nimport (\n \"bytes\"\n \"encoding/json\"\n \"fmt\"\n \"net/http\"\n)\n\ntype MCSClient struct {\n BaseURL string\n APIKey string\n Client *http.Client\n}\n\ntype PatientCase struct {\n PatientID string `json:\"patient_id\"`\n CaseDescription string `json:\"case_description\"`\n}\n\ntype QueryResponse struct {\n PatientID string `json:\"patient_id\"`\n CaseData string `json:\"case_data\"`\n}\n\nfunc NewMCSClient(apiKey string) *MCSClient {\n return &MCSClient{\n BaseURL: \"https://mcs.swarms.ai\",\n APIKey: apiKey,\n Client: &http.Client{},\n }\n}\n\nfunc (c *MCSClient) RunMedicalCoder(patientCase PatientCase) (*QueryResponse, error) {\n payload, err := json.Marshal(patientCase)\n if err != nil {\n return nil, err\n }\n\n req, err := http.NewRequest(\"POST\", \n fmt.Sprintf(\"%s/v1/medical-coder/run\", c.BaseURL),\n bytes.NewBuffer(payload))\n if err != nil {\n return nil, err\n }\n\n req.Header.Set(\"Content-Type\", \"application/json\")\n req.Header.Set(\"Authorization\", fmt.Sprintf(\"Bearer %s\", c.APIKey))\n\n resp, err := c.Client.Do(req)\n if err != nil {\n return nil, err\n }\n defer resp.Body.Close()\n\n var result QueryResponse\n if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {\n return nil, err\n }\n\n return &result, nil\n}\n\n// Usage example\nfunc main() {\n client := NewMCSClient(\"your_api_key\")\n\n result, err := client.RunMedicalCoder(PatientCase{\n PatientID: \"P123\",\n CaseDescription: \"Patient presents with...\",\n })\n if err != nil {\n panic(err)\n }\n\n fmt.Printf(\"Result: %+v\\n\", result)\n}\n
"},{"location":"swarms_cloud/mcs_api/#c-sharp","title":"C Sharp","text":"using System;\nusing System.Net.Http;\nusing System.Text;\nusing System.Text.Json;\nusing System.Text.Json.Serialization;\nusing System.Threading.Tasks;\n\nnamespace MedicalCoderSwarm\n{\n public class PatientCase\n {\n [JsonPropertyName(\"patient_id\")]\n public string PatientId { get; set; }\n\n [JsonPropertyName(\"case_description\")]\n public string CaseDescription { get; set; }\n }\n\n public class QueryResponse\n {\n [JsonPropertyName(\"patient_id\")]\n public string PatientId { get; set; }\n\n [JsonPropertyName(\"case_data\")]\n public string CaseData { get; set; }\n }\n\n public class MCSClient : IDisposable\n {\n private readonly HttpClient _httpClient;\n private readonly string _baseUrl;\n\n public MCSClient(string apiKey, string baseUrl = \"https://mcs.swarms.ai\")\n {\n _baseUrl = baseUrl;\n _httpClient = new HttpClient();\n // Content-Type is a content header set by StringContent per request;\n // adding it to DefaultRequestHeaders would throw at runtime.\n _httpClient.DefaultRequestHeaders.Add(\"Authorization\", $\"Bearer {apiKey}\");\n }\n\n public async Task<QueryResponse> RunMedicalCoderAsync(string patientId, string caseDescription)\n {\n var payload = new PatientCase\n {\n PatientId = patientId,\n CaseDescription = caseDescription\n };\n\n var content = new StringContent(\n JsonSerializer.Serialize(payload),\n Encoding.UTF8,\n \"application/json\"\n );\n\n var response = await _httpClient.PostAsync(\n $\"{_baseUrl}/v1/medical-coder/run\",\n content\n );\n\n response.EnsureSuccessStatusCode();\n\n var responseContent = await response.Content.ReadAsStringAsync();\n return JsonSerializer.Deserialize<QueryResponse>(responseContent);\n }\n\n public async Task<QueryResponse> GetPatientDataAsync(string patientId)\n {\n var response = await _httpClient.GetAsync(\n $\"{_baseUrl}/v1/medical-coder/patient/{patientId}\"\n );\n\n response.EnsureSuccessStatusCode();\n\n var responseContent = await response.Content.ReadAsStringAsync();\n return JsonSerializer.Deserialize<QueryResponse>(responseContent);\n }\n\n public async Task<bool> HealthCheckAsync()\n {\n var response = await 
_httpClient.GetAsync($\"{_baseUrl}/health\");\n return response.IsSuccessStatusCode;\n }\n\n public void Dispose()\n {\n _httpClient?.Dispose();\n }\n }\n\n // Example usage\n public class Program\n {\n public static async Task Main()\n {\n try\n {\n using var client = new MCSClient(\"your_api_key\");\n\n // Check API health\n var isHealthy = await client.HealthCheckAsync();\n Console.WriteLine($\"API Health: {(isHealthy ? \"Healthy\" : \"Unhealthy\")}\");\n\n // Process a single case\n var result = await client.RunMedicalCoderAsync(\n \"P123\",\n \"Patient presents with acute respiratory symptoms...\"\n );\n Console.WriteLine($\"Processed case for patient {result.PatientId}\");\n Console.WriteLine($\"Case data: {result.CaseData}\");\n\n // Get patient data\n var patientData = await client.GetPatientDataAsync(\"P123\");\n Console.WriteLine($\"Retrieved data for patient {patientData.PatientId}\");\n }\n catch (HttpRequestException ex)\n {\n Console.WriteLine($\"API request failed: {ex.Message}\");\n }\n catch (Exception ex)\n {\n Console.WriteLine($\"An error occurred: {ex.Message}\");\n }\n }\n }\n}\n
"},{"location":"swarms_cloud/mcs_api/#error-handling","title":"Error Handling","text":"The API uses standard HTTP status codes and returns detailed error messages in JSON format.
Common Status Codes:
Status Code Description 200 Success 400 Bad Request - Invalid input 401 Unauthorized - Invalid or missing API key 422 Validation Error - Request validation failed 429 Too Many Requests - Rate limit exceeded 500 Internal Server Error
Error Response Format:
{\n \"detail\": [\n {\n \"loc\": [\"body\", \"patient_id\"],\n \"msg\": \"field required\",\n \"type\": \"value_error.missing\"\n }\n ]\n}\n
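For callers hitting the REST API directly (rather than through the Python client documented below), the status codes above can be mapped to typed exceptions so each failure mode is handled explicitly. This is an illustrative sketch only; the exception names here are hypothetical and are not part of the MCS client:

```python
# Unofficial sketch: translate the documented MCS status codes into typed
# exceptions. Exception names are hypothetical, not the MCS client's own.
class MCSError(Exception):
    """Base error carrying the HTTP status and the API's `detail` payload."""
    def __init__(self, status, detail):
        super().__init__(f"HTTP {status}: {detail}")
        self.status = status
        self.detail = detail

class AuthError(MCSError): pass          # 401
class ValidationFailed(MCSError): pass   # 422
class RateLimited(MCSError): pass        # 429

_STATUS_MAP = {401: AuthError, 422: ValidationFailed, 429: RateLimited}

def check_response(status_code, body):
    """Return the parsed body on 200, raise a typed error otherwise."""
    if status_code == 200:
        return body
    raise _STATUS_MAP.get(status_code, MCSError)(
        status_code, body.get("detail", "unknown error")
    )
```

Callers can then catch `RateLimited` for backoff logic while letting other `MCSError` subclasses surface normally.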
"},{"location":"swarms_cloud/mcs_api/#mcs-python-client-documentation","title":"MCS Python Client Documentation","text":""},{"location":"swarms_cloud/mcs_api/#installation","title":"Installation","text":"pip install mcs\n
"},{"location":"swarms_cloud/mcs_api/#quick-start","title":"Quick Start","text":"from mcs import MCSClient, PatientCase\n\n# Using context manager (recommended)\nwith MCSClient() as client:\n # Process a single case\n response = client.run_medical_coder(\n patient_id=\"P123\",\n case_description=\"Patient presents with acute respiratory symptoms...\"\n )\n print(f\"Processed case: {response.case_data}\")\n\n # Process multiple cases\n cases = [\n PatientCase(\"P124\", \"Case 1 description...\"),\n PatientCase(\"P125\", \"Case 2 description...\")\n ]\n batch_response = client.run_batch(cases)\n
"},{"location":"swarms_cloud/mcs_api/#client-configuration","title":"Client Configuration","text":""},{"location":"swarms_cloud/mcs_api/#constructor-arguments","title":"Constructor Arguments","text":"Parameter Type Required Default Description api_key str Yes - Authentication API key base_url str No \"https://mcs.swarms.ai\" API base URL timeout int No 30 Request timeout in seconds max_retries int No 3 Maximum retry attempts logger_name str No \"mcs\" Name for the logger instance"},{"location":"swarms_cloud/mcs_api/#example-configuration","title":"Example Configuration","text":"client = MCSClient(\n base_url=\"https://custom-url.example.com\",\n timeout=45,\n max_retries=5,\n logger_name=\"custom_logger\"\n)\n
"},{"location":"swarms_cloud/mcs_api/#data-models","title":"Data Models","text":""},{"location":"swarms_cloud/mcs_api/#patientcase","title":"PatientCase","text":"Field Type Required Description patient_id str Yes Unique identifier for the patient case_description str Yes Medical case details"},{"location":"swarms_cloud/mcs_api/#queryresponse","title":"QueryResponse","text":"Field Type Description patient_id str Patient identifier case_data str Processed case data"},{"location":"swarms_cloud/mcs_api/#methods","title":"Methods","text":""},{"location":"swarms_cloud/mcs_api/#run_medical_coder","title":"run_medical_coder","text":"Process a single patient case.
def run_medical_coder(\n self,\n patient_id: str,\n case_description: str\n) -> QueryResponse:\n
Arguments:
Parameter Type Required Description patient_id str Yes Patient identifier case_description str Yes Case detailsExample:
response = client.run_medical_coder(\n patient_id=\"P123\",\n case_description=\"Patient presents with...\"\n)\nprint(response.case_data)\n
"},{"location":"swarms_cloud/mcs_api/#run_batch","title":"run_batch","text":"Process multiple patient cases in batch.
def run_batch(\n self,\n cases: List[PatientCase]\n) -> List[QueryResponse]:\n
Arguments:
Parameter Type Required Description cases List[PatientCase] Yes List of patient casesExample:
cases = [\n PatientCase(\"P124\", \"Case 1 description...\"),\n PatientCase(\"P125\", \"Case 2 description...\")\n]\nresponses = client.run_batch(cases)\nfor response in responses:\n print(f\"Patient {response.patient_id}: {response.case_data}\")\n
"},{"location":"swarms_cloud/mcs_api/#get_patient_data","title":"get_patient_data","text":"Retrieve data for a specific patient.
def get_patient_data(\n self,\n patient_id: str\n) -> QueryResponse:\n
Example:
patient_data = client.get_patient_data(\"P123\")\nprint(f\"Patient data: {patient_data.case_data}\")\n
"},{"location":"swarms_cloud/mcs_api/#get_all_patients","title":"get_all_patients","text":"Retrieve data for all patients.
def get_all_patients(self) -> List[QueryResponse]:\n
Example:
all_patients = client.get_all_patients()\nfor patient in all_patients:\n print(f\"Patient {patient.patient_id}: {patient.case_data}\")\n
"},{"location":"swarms_cloud/mcs_api/#get_rate_limits","title":"get_rate_limits","text":"Get current rate limit status.
def get_rate_limits(self) -> Dict[str, Any]:\n
Example:
rate_limits = client.get_rate_limits()\nprint(f\"Rate limit status: {rate_limits}\")\n
"},{"location":"swarms_cloud/mcs_api/#health_check","title":"health_check","text":"Check if the API is operational.
def health_check(self) -> bool:\n
Example:
is_healthy = client.health_check()\nprint(f\"API health: {'Healthy' if is_healthy else 'Unhealthy'}\")\n
"},{"location":"swarms_cloud/mcs_api/#error-handling_1","title":"Error Handling","text":""},{"location":"swarms_cloud/mcs_api/#exception-hierarchy","title":"Exception Hierarchy","text":"Exception Description MCSClientError Base exception for all client errors RateLimitError Raised when API rate limit is exceeded AuthenticationError Raised when API authentication fails ValidationError Raised when request validation fails"},{"location":"swarms_cloud/mcs_api/#example-error-handling","title":"Example Error Handling","text":"from mcs import MCSClient, MCSClientError, RateLimitError\n\nwith MCSClient() as client:\n try:\n response = client.run_medical_coder(\"P123\", \"Case description...\")\n except RateLimitError:\n print(\"Rate limit exceeded. Please wait before retrying.\")\n except MCSClientError as e:\n print(f\"An error occurred: {str(e)}\")\n
"},{"location":"swarms_cloud/mcs_api/#advanced-usage","title":"Advanced Usage","text":""},{"location":"swarms_cloud/mcs_api/#retry-configuration","title":"Retry Configuration","text":"The client implements two levels of retry logic:
Connection-level retries (using HTTPAdapter):
client = MCSClient(\n max_retries=5 # Adjusts connection-level retries\n)\n
Application-level retries (using tenacity):
from tenacity import retry, stop_after_attempt\n\n@retry(stop=stop_after_attempt(5))\ndef process_with_custom_retries():\n with MCSClient() as client:\n return client.run_medical_coder(\"P123\", \"Case description...\")\n
Batch processing with progress tracking:
from tqdm import tqdm\n\nwith MCSClient() as client:\n cases = [\n PatientCase(f\"P{i}\", f\"Case description {i}\")\n for i in range(100)\n ]\n\n # Process in smaller batches\n batch_size = 10\n results = []\n\n for i in tqdm(range(0, len(cases), batch_size)):\n batch = cases[i:i + batch_size]\n batch_results = client.run_batch(batch)\n results.extend(batch_results)\n
"},{"location":"swarms_cloud/mcs_api/#best-practices","title":"Best Practices","text":"Always use context managers:
with MCSClient() as client:\n # Your code here\n pass\n
Handle rate limits appropriately:
from time import sleep\n\ndef process_with_rate_limit_handling():\n with MCSClient() as client:\n try:\n return client.run_medical_coder(\"P123\", \"Case...\")\n except RateLimitError:\n sleep(60) # Wait before retry\n return client.run_medical_coder(\"P123\", \"Case...\")\n
Implement proper logging:
from loguru import logger\n\nlogger.add(\"mcs.log\", rotation=\"500 MB\")\n\nwith MCSClient() as client:\n try:\n response = client.run_medical_coder(\"P123\", \"Case...\")\n except Exception as e:\n logger.exception(f\"Error processing case: {str(e)}\")\n
Monitor API health:
def ensure_healthy_api():\n with MCSClient() as client:\n if not client.health_check():\n raise SystemExit(\"API is not healthy\")\n
This guide will walk you through deploying your project to Phala's Trusted Execution Environment (TEE).
"},{"location":"swarms_cloud/phala_deploy/#prerequisites","title":"\ud83d\udccb Prerequisites","text":"For detailed instructions about Trusted Execution Environment setup, please refer to our TEE Documentation.
"},{"location":"swarms_cloud/phala_deploy/#deployment-steps","title":"\ud83d\ude80 Deployment Steps","text":""},{"location":"swarms_cloud/phala_deploy/#1-build-and-publish-docker-image","title":"1. Build and Publish Docker Image","text":"# Build the Docker image\ndocker compose build -t <your-dockerhub-username>/swarm-agent-node:latest\n\n# Push to DockerHub\ndocker push <your-dockerhub-username>/swarm-agent-node:latest\n
"},{"location":"swarms_cloud/phala_deploy/#2-deploy-to-phala-cloud","title":"2. Deploy to Phala Cloud","text":"Choose one of these deployment methods: - Use tee-cloud-cli (Recommended) - Deploy manually via the Phala Cloud Dashboard
"},{"location":"swarms_cloud/phala_deploy/#3-verify-tee-attestation","title":"3. Verify TEE Attestation","text":"Visit the TEE Attestation Explorer to check and verify your agent's TEE proof.
"},{"location":"swarms_cloud/phala_deploy/#docker-configuration","title":"\ud83d\udcdd Docker Configuration","text":"Below is a sample Docker Compose configuration for your Swarms agent:
services:\n swarms-agent-server:\n image: swarms-agent-node:latest\n platform: linux/amd64\n volumes:\n - /var/run/tappd.sock:/var/run/tappd.sock\n - swarms:/app\n restart: always\n ports:\n - 8000:8000\n command: # Sample MCP Server\n - /bin/sh\n - -c\n - |\n cd /app/mcp_example\n python mcp_test.py\nvolumes:\n swarms:\n
"},{"location":"swarms_cloud/phala_deploy/#additional-resources","title":"\ud83d\udcda Additional Resources","text":"For more comprehensive documentation and examples, visit our Official Documentation.
Note: Make sure to replace <your-dockerhub-username> with your actual DockerHub username when building and pushing the image.
As large language models (LLMs) continue to advance and enable a wide range of powerful applications, enterprises are increasingly exploring multi-agent architectures to leverage the collective capabilities of multiple LLMs. However, coordinating and optimizing the performance of these complex multi-agent systems presents significant challenges.
This comprehensive guide provides enterprise architects, engineering leaders, and technical decision-makers with a strategic framework for maximizing performance across multi-agent LLM deployments. Developed through extensive research and collaboration with industry partners, this guide distills best practices, proven techniques, and cutting-edge methodologies into seven core principles.
By implementing the recommendations outlined in this guide, organizations can achieve superior latency, throughput, and resource utilization while ensuring scalability, cost-effectiveness, and optimal user experiences. Whether powering customer-facing conversational agents, driving internal knowledge management systems, or fueling mission-critical decision support tools, high-performance multi-agent LLM deployments will be pivotal to unlocking the full potential of this transformative technology.
"},{"location":"swarms_cloud/production_deployment/#introduction","title":"Introduction","text":"The rise of large language models (LLMs) has ushered in a new era of human-machine interaction, enabling enterprises to develop sophisticated natural language processing (NLP) applications that can understand, generate, and reason with human-like text. However, as the complexity and scale of LLM deployments grow, traditional monolithic architectures are increasingly challenged to meet the stringent performance, scalability, and cost requirements of enterprise environments.
Multi-agent architectures, which coordinate the collective capabilities of multiple specialized LLMs, have emerged as a powerful paradigm for addressing these challenges. By distributing workloads across a cohort of agents, each optimized for specific tasks or domains, multi-agent systems can deliver superior performance, resilience, and adaptability compared to single-model solutions.
However, realizing the full potential of multi-agent LLM deployments requires a strategic approach to system design, optimization, and ongoing management. This guide presents a comprehensive framework for maximizing performance across seven core principles, each underpinned by a range of proven techniques and methodologies.
Whether you are architecting a customer-facing conversational agent, building an internal knowledge management platform, or developing a mission-critical decision support system, this guide will equip you with the insights and best practices necessary to unlock the full potential of multi-agent LLM deployments within your enterprise.
"},{"location":"swarms_cloud/production_deployment/#principle-1-distribute-token-processing","title":"Principle 1: Distribute Token Processing","text":"At the heart of every LLM deployment lies the fundamental challenge of optimizing token processing -- the rate at which the model consumes and generates text inputs and outputs. In multi-agent architectures, distributing and parallelizing token processing across multiple agents is a critical performance optimization strategy.
"},{"location":"swarms_cloud/production_deployment/#agent-specialization","title":"Agent Specialization","text":"One of the key advantages of multi-agent architectures is the ability to dedicate specific agents to specialized tasks or domains. By carefully matching agents to the workloads they are optimized for, enterprises can maximize overall throughput and minimize latency.
For example, in a conversational agent deployment, one agent may be optimized for intent recognition and query understanding, while another is fine-tuned for generating coherent, context-aware responses. In a document processing pipeline, separate agents could be dedicated to tasks such as named entity recognition, sentiment analysis, and summarization.
To effectively leverage agent specialization, enterprises should:
"},{"location":"swarms_cloud/production_deployment/#load-balancing","title":"Load Balancing","text":"Even with a well-designed allocation of tasks across specialized agents, fluctuations in workload and demand can create bottlenecks and performance degradation. Effective load balancing strategies are essential to ensure that token processing capacity is dynamically distributed across available agents based on real-time conditions.
Load balancing in multi-agent LLM deployments can be accomplished through a combination of techniques, including:
Implementing effective load balancing requires careful consideration of the specific characteristics and requirements of your multi-agent deployment, as well as the integration of robust monitoring and analytics capabilities to inform dynamic routing decisions.
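The dynamic routing idea above can be sketched as a least-loaded router. This is an illustrative sketch, not a Swarms API; the agent names and the in-flight counter used as a load signal are assumptions:

```python
# Illustrative least-loaded router; agent names and the in-flight counter
# are assumptions for this sketch, not part of the Swarms framework.
class AgentPool:
    def __init__(self, agent_names):
        # In-flight request count per agent serves as a crude load signal;
        # production systems would use richer telemetry (latency, queue depth).
        self.in_flight = {name: 0 for name in agent_names}

    def acquire(self):
        """Route to the agent with the fewest in-flight requests."""
        name = min(self.in_flight, key=self.in_flight.get)
        self.in_flight[name] += 1
        return name

    def release(self, name):
        self.in_flight[name] -= 1

pool = AgentPool(["intent-agent-1", "intent-agent-2", "intent-agent-3"])
first = pool.acquire()   # least loaded at this moment
second = pool.acquire()  # routed away from the now-busy agent
```

Releasing an agent when its request completes restores its capacity, so the router naturally balances sustained traffic across the pool.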
"},{"location":"swarms_cloud/production_deployment/#horizontal-scaling","title":"Horizontal Scaling","text":"While load balancing optimizes the utilization of existing agent resources, horizontal scaling strategies enable organizations to dynamically provision additional token processing capacity to meet demand spikes or handle larger overall workloads.
In multi-agent LLM deployments, horizontal scaling can be achieved through:
Effective horizontal scaling requires robust orchestration and management capabilities, as well as seamless integration with load balancing mechanisms to ensure that incoming workloads are efficiently distributed across the dynamically scaled agent pool.
"},{"location":"swarms_cloud/production_deployment/#principle-2-optimize-agent-communication","title":"Principle 2: Optimize Agent Communication","text":"In multi-agent LLM deployments, efficient inter-agent communication is crucial for coordinating tasks, exchanging context and intermediate results, and maintaining overall system coherence. However, communication overhead can quickly become a performance bottleneck if not carefully managed.
"},{"location":"swarms_cloud/production_deployment/#minimizing-overhead","title":"Minimizing Overhead","text":"Reducing the volume and complexity of information exchanged between agents is a key strategy for optimizing communication performance. Techniques for minimizing overhead include:
Implementing these techniques requires careful analysis of the specific data exchange patterns and communication requirements within your multi-agent deployment, as well as the integration of appropriate compression, summarization, and differential update algorithms.
"},{"location":"swarms_cloud/production_deployment/#prioritizing-critical-information","title":"Prioritizing Critical Information","text":"In scenarios where communication bandwidth or latency constraints cannot be fully alleviated through overhead reduction techniques, enterprises can prioritize the exchange of critical information over non-essential data.
This can be achieved through:
Effective prioritization requires a deep understanding of the interdependencies and information flow within your multi-agent system, as well as the ability to dynamically assess and prioritize data based on its criticality and urgency.
"},{"location":"swarms_cloud/production_deployment/#caching-and-reusing-context","title":"Caching and Reusing Context","text":"In many multi-agent LLM deployments, agents frequently exchange or operate on shared context, such as user profiles, conversation histories, or domain-specific knowledge bases. Caching and reusing this context information can significantly reduce redundant communication and processing overhead.
Strategies for optimizing context caching and reuse include:
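One such strategy can be sketched as a TTL cache for shared context. The key scheme and TTL here are assumptions for illustration, not part of any Swarms API:

```python
import time

# Illustrative TTL cache for shared agent context (user profiles,
# conversation histories, etc.); key scheme and TTL are assumptions.
class ContextCache:
    def __init__(self, ttl_seconds=300.0):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expires_at, value)

    def put(self, key, value):
        self._store[key] = (time.monotonic() + self.ttl, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # evict stale context on access
            return None
        return value

cache = ContextCache(ttl_seconds=60)
cache.put("patient:P123:history", ["visit 2024-01-02", "visit 2024-03-15"])
```

Agents that share such a cache avoid re-fetching or re-transmitting the same context on every hop through the pipeline.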
"},{"location":"swarms_cloud/production_deployment/#principle-3-leverage-agent-specialization","title":"Principle 3: Leverage Agent Specialization","text":"One of the key advantages of multi-agent architectures is the ability to optimize individual agents for specific tasks, domains, or capabilities. By leveraging agent specialization, enterprises can ensure that each component of their LLM system is finely tuned for maximum performance and quality.
"},{"location":"swarms_cloud/production_deployment/#task-specific-optimization","title":"Task-Specific Optimization","text":"Within a multi-agent LLM deployment, different agents may be responsible for distinct tasks such as language understanding, knowledge retrieval, response generation, or post-processing. Optimizing each agent for its designated task can yield significant performance gains and quality improvements.
Techniques for task-specific optimization include:
Implementing these optimization techniques requires a deep understanding of the capabilities and requirements of each task within your multi-agent system, as well as access to relevant training data and computational resources for fine-tuning and distillation processes.
"},{"location":"swarms_cloud/production_deployment/#domain-adaptation","title":"Domain Adaptation","text":"Many enterprise applications operate within specific domains or verticals, such as finance, healthcare, or legal. Adapting agents to these specialized domains can significantly improve their performance, accuracy, and compliance within the target domain.
Strategies for domain adaptation include:
Effective domain adaptation requires access to high-quality, domain-specific training data, as well as close collaboration with subject matter experts to ensure that agents are properly calibrated to meet the unique demands of the target domain.
"},{"location":"swarms_cloud/production_deployment/#ensemble-techniques","title":"Ensemble Techniques","text":"In complex multi-agent deployments, individual agents may excel at specific subtasks or aspects of the overall workflow. Ensemble techniques that combine the outputs or predictions of multiple specialized agents can often outperform any single agent, leveraging the collective strengths of the ensemble.
Common ensemble techniques for multi-agent LLM systems include:
Implementing effective ensemble techniques requires careful analysis of the strengths, weaknesses, and complementary capabilities of individual agents, as well as the development of robust combination strategies that can optimally leverage the ensemble's collective intelligence.
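The simplest of these combination strategies, majority voting, can be sketched as follows. The agent outputs shown are illustrative stand-ins, not a Swarms API:

```python
from collections import Counter

# Illustrative majority-vote ensemble over agent outputs.
def majority_vote(predictions):
    """Return the most common prediction; ties go to the earliest seen."""
    return Counter(predictions).most_common(1)[0][0]

# e.g. three specialized coder agents each proposing an ICD-10 code
# (values are illustrative, not real model output)
agent_outputs = ["J45.909", "J45.909", "J44.1"]
consensus = majority_vote(agent_outputs)
```

Weighted voting or stacking would replace the flat count with per-agent reliability weights, but the combination step has the same shape.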
"},{"location":"swarms_cloud/production_deployment/#principle-4-implement-dynamic-scaling","title":"Principle 4: Implement Dynamic Scaling","text":"The demand and workload patterns of enterprise LLM deployments can be highly dynamic, with significant fluctuations driven by factors such as user activity, data ingestion schedules, or periodic batch processing. Implementing dynamic scaling strategies allows organizations to optimally provision and allocate resources in response to these fluctuations, ensuring consistent performance while minimizing unnecessary costs.
"},{"location":"swarms_cloud/production_deployment/#autoscaling","title":"Autoscaling","text":"Autoscaling is a core capability that enables the automatic adjustment of compute resources (e.g., CPU, GPU, memory) and agent instances based on real-time demand patterns and workload metrics. By dynamically scaling resources up or down, enterprises can maintain optimal performance and resource utilization, avoiding both over-provisioning and under-provisioning scenarios.
Effective autoscaling in multi-agent LLM deployments requires:
By automating the scaling process, enterprises can respond rapidly to workload fluctuations, ensuring consistent performance and optimal resource utilization without the need for manual intervention.
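As a toy illustration of such an autoscaling policy (the thresholds and bounds are assumptions, and real deployments would drive this from monitoring metrics rather than a single number):

```python
# Toy autoscaling policy: derive a replica count for an agent pool from a
# utilization signal, within fixed bounds. Thresholds are illustrative.
def desired_replicas(current, utilization, min_replicas=1, max_replicas=10,
                     scale_up_at=0.8, scale_down_at=0.3):
    """Return the replica count implied by the current utilization."""
    if utilization > scale_up_at:
        target = current + 1          # add capacity under pressure
    elif utilization < scale_down_at and current > min_replicas:
        target = current - 1          # shed idle capacity
    else:
        target = current              # within the comfort band; hold steady
    return max(min_replicas, min(max_replicas, target))
```

The hysteresis band between the two thresholds prevents the pool from oscillating when utilization hovers near a single cutoff.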
"},{"location":"swarms_cloud/production_deployment/#spot-instance-utilization","title":"Spot Instance Utilization","text":"Many cloud providers offer spot instances or preemptible resources at significantly discounted prices compared to on-demand or reserved instances. While these resources may be reclaimed with little notice, they can be leveraged judiciously within multi-agent LLM deployments to reduce operational costs.
Strategies for leveraging spot instances include:
Effective spot instance utilization requires careful architectural considerations to ensure fault tolerance and minimize the impact of potential disruptions, as well as robust monitoring and automation capabilities to seamlessly replace or migrate workloads in response to instance preemption events.
"},{"location":"swarms_cloud/production_deployment/#serverless-deployments","title":"Serverless Deployments","text":"Serverless computing platforms, such as AWS Lambda, Google Cloud Functions, or Azure Functions, offer a compelling alternative to traditional server-based deployments. By automatically scaling compute resources based on real-time demand and charging only for the resources consumed, serverless architectures can provide significant cost savings and operational simplicity.
Leveraging serverless deployments for multi-agent LLM systems can be achieved through:
Adopting serverless architectures requires careful consideration of factors such as execution duration limits, cold start latencies, and integration with other components of the multi-agent deployment. However, when implemented effectively, serverless deployments can provide unparalleled scalability, cost-efficiency, and operational simplicity for dynamic, event-driven workloads.
"},{"location":"swarms_cloud/production_deployment/#principle-5-employ-selective-execution","title":"Principle 5: Employ Selective Execution","text":"Not every input or request within a multi-agent LLM deployment requires the full execution of all agents or the complete processing pipeline. Selectively invoking agents or tasks based on input characteristics or intermediate results can significantly optimize performance by avoiding unnecessary computation and resource consumption.
"},{"location":"swarms_cloud/production_deployment/#input-filtering","title":"Input Filtering","text":"Implementing input filtering mechanisms allows enterprises to reject or bypass certain inputs before they are processed by the multi-agent system. This can be achieved through techniques such as:
Effective input filtering requires careful consideration of the specific requirements, constraints, and objectives of your multi-agent deployment, as well as ongoing monitoring and adjustment of filtering rules and thresholds to maintain optimal performance and accuracy.
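A minimal input filter along these lines might look like the following. The length bound and blocklist are illustrative thresholds, not recommendations:

```python
# Minimal input-filter sketch: cheap checks run before any agent is invoked.
# The length bound and blocklist below are illustrative assumptions.
MAX_LEN = 4000
BLOCKED_TERMS = {"lorem ipsum"}  # e.g. obvious test/noise inputs

def should_process(text):
    """Return (accept, reason); rejected inputs never reach the agents."""
    stripped = text.strip()
    if not stripped:
        return False, "empty input"
    if len(stripped) > MAX_LEN:
        return False, "input too long"
    lowered = stripped.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return False, "blocked content"
    return True, "ok"
```

Because these checks cost microseconds while an agent invocation costs seconds, even a modest rejection rate yields a large saving.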
"},{"location":"swarms_cloud/production_deployment/#early-stopping","title":"Early Stopping","text":"In many multi-agent LLM deployments, intermediate results or predictions generated by early-stage agents can be used to determine whether further processing is required or valuable. Early stopping mechanisms allow enterprises to terminate execution pipelines when specific conditions or thresholds are met, avoiding unnecessary downstream processing.
Techniques for implementing early stopping include:
Effective early stopping requires a deep understanding of the interdependencies and decision points within your multi-agent workflow, as well as careful tuning and monitoring to ensure that stopping conditions are calibrated to maintain an optimal balance between performance and accuracy.
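A confidence-threshold version of early stopping can be sketched as below. The stages and their confidence scores are stand-ins for real agents, and the 0.9 threshold is an assumption:

```python
# Early-stopping sketch: stop the pipeline once a stage's confidence clears
# a threshold. Stages and scores here are illustrative stand-ins.
def run_pipeline(stages, text, confidence_threshold=0.9):
    """Run stages in order; return early when confidence is high enough."""
    result, confidence = None, 0.0
    for stage in stages:
        result, confidence = stage(text)
        if confidence >= confidence_threshold:
            return result, confidence  # skip remaining, costlier stages
    return result, confidence

def fast_classifier(text):
    # Pretend a cheap model is very confident on short inputs.
    return ("routine", 0.95) if len(text) < 50 else ("unknown", 0.4)

def heavy_classifier(text):
    # Stand-in for an expensive downstream agent.
    return ("complex", 0.8)

label, conf = run_pipeline([fast_classifier, heavy_classifier], "short note")
```

Ordering stages from cheapest to most expensive maximizes the saving when the early exit fires.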
"},{"location":"swarms_cloud/production_deployment/#conditional-branching","title":"Conditional Branching","text":"Rather than executing a linear, fixed pipeline of agents, conditional branching allows multi-agent systems to dynamically invoke different agents or execution paths based on input characteristics or intermediate results. This can significantly optimize resource utilization by ensuring that only the necessary agents and processes are executed for a given input or scenario.
Implementing conditional branching involves:
Conditional branching can be particularly effective in scenarios where inputs or workloads exhibit distinct characteristics or require significantly different processing pipelines, allowing enterprises to optimize resource allocation and minimize unnecessary computation.
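The routing step above can be sketched as a cheap classifier in front of a route table. The categories, keywords, and pipeline callables here are hypothetical placeholders:

```python
# Conditional-branching sketch: route each input to a different agent chain
# based on a cheap classification. Categories and keywords are illustrative.
def classify(text):
    lowered = text.lower()
    if "invoice" in lowered or "billing" in lowered:
        return "billing"
    if "diagnos" in lowered or "symptom" in lowered:
        return "clinical"
    return "general"

# Each route would invoke a different agent pipeline; stubs stand in here.
ROUTES = {
    "billing": lambda t: f"billing-pipeline({t})",
    "clinical": lambda t: f"clinical-pipeline({t})",
    "general": lambda t: f"general-pipeline({t})",
}

def route(text):
    """Invoke only the pipeline matching the input's category."""
    return ROUTES[classify(text)](text)
```

In practice the classifier itself might be a small model, but the branching structure is the same: only one downstream pipeline runs per input.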
"},{"location":"swarms_cloud/production_deployment/#principle-6-optimize-user-experience","title":"Principle 6: Optimize User Experience","text":"While many of the principles outlined in this guide focus on optimizing backend performance and resource utilization, delivering an exceptional user experience is also a critical consideration for enterprise multi-agent LLM deployments. By minimizing perceived wait times and providing real-time progress updates, organizations can ensure that users remain engaged and satisfied, even during periods of high workload or resource constraints.
"},{"location":"swarms_cloud/production_deployment/#streaming-responses","title":"Streaming Responses","text":"One of the most effective techniques for minimizing perceived wait times is to stream responses or outputs to users as they are generated, rather than waiting for the entire response to be completed before delivering it. This approach is particularly valuable in conversational agents, document summarization, or other scenarios where outputs can be naturally segmented and delivered incrementally.
Implementing streaming responses requires:
By delivering outputs as they are generated, streaming responses can significantly improve the perceived responsiveness and interactivity of multi-agent LLM deployments, even in scenarios where the overall processing time remains unchanged.
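The mechanics can be sketched with a plain Python generator. This is an illustrative stand-in only: `generate_tokens` mimics an agent producing output incrementally, and the caller renders each chunk the moment it arrives.

```python
# Hypothetical sketch of streaming: forward chunks to the consumer as
# they are produced instead of buffering the complete response.
from typing import Iterator


def generate_tokens(task: str) -> Iterator[str]:
    """Stand-in for an agent emitting its answer word by word."""
    for word in f"Answer to: {task}".split():
        yield word + " "


def stream_response(task: str) -> Iterator[str]:
    """Forward chunks immediately; the caller renders them incrementally."""
    for chunk in generate_tokens(task):
        yield chunk  # delivered as soon as it exists, not at the end
```

Total latency is unchanged, but time-to-first-token drops to the cost of producing the first chunk, which is what users actually perceive.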
"},{"location":"swarms_cloud/production_deployment/#progress-indicators","title":"Progress Indicators","text":"In cases where streaming responses may not be feasible or appropriate, providing visual or textual indicators of ongoing processing and progress can help manage user expectations and improve the overall experience. Progress indicators can be implemented through techniques such as:
Effective progress indicators require careful integration with monitoring and telemetry capabilities to accurately track and communicate the progress of multi-agent workflows, as well as thoughtful user experience design to ensure that indicators are clear, unobtrusive, and aligned with user expectations.
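One simple way to wire progress reporting into a workflow, sketched with hypothetical names: the runner invokes a UI-supplied callback as each agent stage completes, which the frontend can render as a progress bar or status line.

```python
# Hypothetical sketch: surface workflow progress to the UI layer via a
# callback fired after each agent stage finishes.
from typing import Callable, List, Tuple


def run_with_progress(
    stages: List[Tuple[str, Callable[[str], str]]],
    task: str,
    on_progress: Callable[[str, int, int], None],
) -> str:
    """Run named stages sequentially, reporting (name, done, total)."""
    total = len(stages)
    output = task
    for i, (name, stage) in enumerate(stages, start=1):
        output = stage(output)
        on_progress(name, i, total)  # e.g. render "Researcher finished (1/3)"
    return output
```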
"},{"location":"swarms_cloud/production_deployment/#chunked-delivery","title":"Chunked Delivery","text":"In scenarios where outputs or responses cannot be effectively streamed or rendered incrementally, chunked delivery can provide a middle ground between delivering the entire output at once and streaming individual tokens or characters. By breaking larger outputs into smaller, more manageable chunks and delivering them individually, enterprises can improve perceived responsiveness and provide a more engaging user experience.
Implementing chunked delivery involves:
Chunked delivery can be particularly effective in scenarios where outputs are inherently structured or segmented, such as document generation, report creation, or multi-step instructions or workflows.
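A minimal sketch of the chunking step itself (illustrative only): break a larger output on paragraph boundaries while keeping each chunk under a size budget, then deliver the chunks one at a time.

```python
# Hypothetical sketch of chunked delivery: split a completed output into
# paragraph-aligned chunks under a character budget.
from typing import Iterator


def chunk_output(text: str, max_chars: int = 200) -> Iterator[str]:
    """Break text on paragraph boundaries, keeping chunks under max_chars."""
    chunk = ""
    for para in text.split("\n\n"):
        if chunk and len(chunk) + len(para) + 2 > max_chars:
            yield chunk  # budget exceeded: emit what we have so far
            chunk = para
        else:
            chunk = f"{chunk}\n\n{para}" if chunk else para
    if chunk:
        yield chunk
```

Because chunks align with natural segment boundaries (paragraphs here; sections or steps in a real system), each one is independently readable while it is being delivered.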
"},{"location":"swarms_cloud/production_deployment/#principle-7-leverage-hybrid-approaches","title":"Principle 7: Leverage Hybrid Approaches","text":"While multi-agent LLM architectures offer numerous advantages, they should not be viewed as a one-size-fits-all solution. In many cases, combining LLM agents with traditional techniques, optimized components, or external services can yield superior performance, cost-effectiveness, and resource utilization compared to a pure LLM-based approach.
"},{"location":"swarms_cloud/production_deployment/#task-offloading","title":"Task Offloading","text":"Certain tasks or subtasks within a larger multi-agent workflow may be more efficiently handled by dedicated, optimized components or external services, rather than relying solely on LLM agents. Task offloading involves identifying these opportunities and integrating the appropriate components or services into the overall architecture.
Examples of task offloading in multi-agent LLM deployments include:
Effective task offloading requires a thorough understanding of the strengths and limitations of both LLM agents and traditional components, as well as careful consideration of integration points, data flows, and performance trade-offs within the overall multi-agent architecture.
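A toy example of the integration point (all names hypothetical): a deterministic component handles the subtasks it is reliably good at, here exact arithmetic, and everything else falls through to the LLM agent.

```python
# Hypothetical sketch of task offloading: answer simple arithmetic with
# a deterministic component and reserve the LLM agent for everything else.
import re
from typing import Callable, Optional


def offload_arithmetic(task: str) -> Optional[str]:
    """Handle simple 'a + b' style questions without an LLM call."""
    m = re.fullmatch(r"\s*(\d+)\s*([+\-*])\s*(\d+)\s*", task)
    if not m:
        return None  # not offloadable; fall through to the LLM agent
    a, op, b = int(m.group(1)), m.group(2), int(m.group(3))
    return str({"+": a + b, "-": a - b, "*": a * b}[op])


def handle(task: str, llm_agent: Callable[[str], str]) -> str:
    """Try the cheap, exact path first; only call the agent on a miss."""
    return offload_arithmetic(task) or llm_agent(task)
```

The same shape applies to the other offloading targets mentioned above: a search service, a rules engine, or a database query sits where `offload_arithmetic` does.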
"},{"location":"swarms_cloud/production_deployment/#caching-and-indexing","title":"Caching and Indexing","text":"While LLMs excel at generating dynamic, context-aware outputs, they can be less efficient when dealing with static or frequently accessed information or knowledge. Caching and indexing strategies can help mitigate this limitation by minimizing redundant LLM processing and enabling faster retrieval of commonly accessed data.
Techniques for leveraging caching and indexing in multi-agent LLM deployments include:
Output Caching: Caching the outputs or responses generated by LLM agents, allowing for rapid retrieval and reuse in cases where the same or similar input is encountered in the future.
Knowledge Base Indexing: Indexing domain-specific knowledge bases, data repositories, or other static information sources using traditional search and information retrieval techniques. This allows LLM agents to efficiently query and incorporate relevant information into their outputs, without needing to process or generate this content from scratch.
Contextual Caching: Caching not only outputs but also the contextual information and intermediate results generated during multi-agent workflows. This enables more efficient reuse and continuation of previous processing in scenarios where contexts are long-lived or recurring.
Implementing effective caching and indexing strategies requires careful consideration of data freshness, consistency, and invalidation mechanisms, as well as seamless integration with LLM agents and multi-agent workflows to ensure that cached or indexed data is appropriately leveraged and updated.
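The output-caching technique above can be sketched as follows. This is a minimal in-process illustration with hypothetical names: inputs are normalized and hashed into a cache key, and entries expire after a TTL, which is one simple answer to the freshness and invalidation concerns just mentioned.

```python
# Hypothetical sketch of output caching: reuse a previous agent response
# when the same (normalized) input recurs within a TTL window.
import hashlib
import time
from typing import Callable, Dict, Tuple


class OutputCache:
    def __init__(self, ttl_seconds: float = 300.0):
        self.ttl = ttl_seconds
        self._store: Dict[str, Tuple[float, str]] = {}

    def _key(self, task: str) -> str:
        # Normalize case and whitespace so trivially different phrasings
        # of the same input share one cache entry.
        normalized = " ".join(task.lower().split())
        return hashlib.sha256(normalized.encode()).hexdigest()

    def get_or_compute(self, task: str, agent: Callable[[str], str]) -> str:
        key = self._key(task)
        hit = self._store.get(key)
        if hit and time.monotonic() - hit[0] < self.ttl:
            return hit[1]  # cache hit: no LLM call at all
        output = agent(task)
        self._store[key] = (time.monotonic(), output)
        return output
```

A production variant would likely live in a shared store such as Redis rather than process memory, but the key-normalize-expire shape is the same.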
"},{"location":"swarms_cloud/production_deployment/#pre-computation-and-lookup","title":"Pre-computation and Lookup","text":"In certain scenarios, especially those involving constrained or well-defined inputs, pre-computing and lookup strategies can be leveraged to minimize or entirely avoid the need for real-time LLM processing. By generating and storing potential outputs or responses in advance, enterprises can significantly improve performance and reduce resource consumption.
Approaches for pre-computation and lookup include:
Output Pre-generation: For inputs or scenarios with a limited set of potential outputs, pre-generating and storing all possible responses, allowing for rapid retrieval and delivery without the need for real-time LLM execution.
Retrieval-Based Responses: Developing retrieval models or techniques that can identify and surface pre-computed or curated responses based on input characteristics, leveraging techniques such as nearest neighbor search, embedding-based retrieval, or example-based generation.
Hybrid Approaches: Combining pre-computed or retrieved responses with real-time LLM processing, allowing for the generation of dynamic, context-aware content while still leveraging pre-computed components to optimize performance and resource utilization.
Effective implementation of pre-computation and lookup strategies requires careful analysis of input patterns, output distributions, and potential performance gains, as well as robust mechanisms for managing and updating pre-computed data as application requirements or domain knowledge evolves.
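A retrieval-based lookup can be sketched as below. This is deliberately simplified and all names are hypothetical: it scores candidate questions by token overlap, where a real system would more likely use embedding-based nearest-neighbor search, and falls back to real-time LLM processing when no confident match exists.

```python
# Hypothetical sketch of retrieval-based responses: match an incoming
# question against pre-generated answers before invoking any LLM.
from typing import Dict, Optional

PRECOMPUTED: Dict[str, str] = {
    "what are your support hours": "Support is available 9am-5pm ET.",
    "how do i reset my password": "Use the 'Forgot password' link.",
}


def lookup(question: str, min_overlap: float = 0.6) -> Optional[str]:
    """Return a pre-computed answer if one matches well enough."""
    q_tokens = set(question.lower().strip(" ?").split())
    best_key, best_score = None, 0.0
    for key in PRECOMPUTED:
        k_tokens = set(key.split())
        score = len(q_tokens & k_tokens) / max(len(k_tokens), 1)
        if score > best_score:
            best_key, best_score = key, score
    if best_score >= min_overlap:
        return PRECOMPUTED[best_key]
    return None  # no confident match: fall back to real-time LLM processing
```

The hybrid approach described above corresponds to using a retrieved answer as context or scaffolding for an LLM call rather than returning it verbatim.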
"},{"location":"swarms_cloud/production_deployment/#conclusion","title":"Conclusion","text":"As enterprises increasingly embrace the transformative potential of large language models, optimizing the performance, scalability, and cost-effectiveness of these deployments has become a critical imperative. Multi-agent architectures, which coordinate the collective capabilities of multiple specialized LLM agents, offer a powerful paradigm for addressing these challenges.
By implementing the seven principles outlined in this guide -- distributing token processing, optimizing agent communication, leveraging agent specialization, implementing dynamic scaling, employing selective execution, optimizing user experience, and leveraging hybrid approaches -- organizations can unlock the full potential of multi-agent LLM deployments.
However, realizing these benefits requires a strategic and holistic approach that accounts for the unique requirements, constraints, and objectives of each enterprise. From task-specific optimizations and domain adaptation to dynamic scaling and user experience considerations, maximizing the performance of multi-agent LLM systems demands a deep understanding of the underlying technologies, as well as the ability to navigate the inherent complexities of these sophisticated architectures.
To learn more about how Swarm Corporation can assist your organization in architecting, deploying, and optimizing high-performance multi-agent LLM solutions, we invite you to book a consultation with one of our agent specialists. Visit https://calendly.com/swarm-corp/30min to schedule a 30-minute call and explore how our expertise and cutting-edge technologies can drive transformative outcomes for your business.
In the rapidly evolving landscape of artificial intelligence and natural language processing, staying ahead of the curve is essential. Partner with Swarm Corporation and unlock the full potential of multi-agent LLM deployments today.
Book a call with us now:
"},{"location":"swarms_cloud/python_client/","title":"Swarms Cloud API Client Documentation","text":""},{"location":"swarms_cloud/python_client/#introduction","title":"Introduction","text":"The Swarms Cloud API client is a production-grade Python package for interacting with the Swarms API. It provides both synchronous and asynchronous interfaces, making it suitable for a wide range of applications from simple scripts to high-performance, scalable services.
Key features include:

- Connection pooling and efficient session management
- Automatic retries with exponential backoff
- Circuit breaker pattern for improved reliability
- In-memory caching for frequently accessed resources
- Comprehensive error handling with detailed exceptions
- Full support for asynchronous operations
- Type checking with Pydantic
This documentation covers all available client methods with detailed descriptions, parameter references, and usage examples.
"},{"location":"swarms_cloud/python_client/#installation","title":"Installation","text":"pip install swarms-client\n
"},{"location":"swarms_cloud/python_client/#authentication","title":"Authentication","text":"To use the Swarms API, you need an API key. You can obtain your API key from the Swarms Platform API Keys page.
"},{"location":"swarms_cloud/python_client/#client-initialization","title":"Client Initialization","text":"The SwarmsClient
is the main entry point for interacting with the Swarms API. It can be initialized with various configuration options to customize its behavior.
from swarms_client import SwarmsClient\n\n# Initialize with default settings\nclient = SwarmsClient(api_key=\"your-api-key\")\n\n# Or with custom settings\nclient = SwarmsClient(\n api_key=\"your-api-key\",\n base_url=\"https://swarms-api-285321057562.us-east1.run.app\",\n timeout=60,\n max_retries=3,\n retry_delay=1,\n log_level=\"INFO\",\n pool_connections=100,\n pool_maxsize=100,\n keep_alive_timeout=5,\n max_concurrent_requests=100,\n circuit_breaker_threshold=5,\n circuit_breaker_timeout=60,\n enable_cache=True\n)\n
"},{"location":"swarms_cloud/python_client/#parameters","title":"Parameters","text":"Parameter Type Default Description api_key
str
Environment variable SWARMS_API_KEY
API key for authentication base_url
str
\"https://swarms-api-285321057562.us-east1.run.app\"
Base URL for the API timeout
int
60
Timeout for API requests in seconds max_retries
int
3
Maximum number of retry attempts for failed requests retry_delay
int
1
Initial delay between retries in seconds (uses exponential backoff) log_level
str
\"INFO\"
Logging level (DEBUG, INFO, WARNING, ERROR, CRITICAL) pool_connections
int
100
Number of connection pools to cache pool_maxsize
int
100
Maximum number of connections to save in the pool keep_alive_timeout
int
5
Keep-alive timeout for connections in seconds max_concurrent_requests
int
100
Maximum number of concurrent requests circuit_breaker_threshold
int
5
Failure threshold for the circuit breaker circuit_breaker_timeout
int
60
Reset timeout for the circuit breaker in seconds enable_cache
bool
True
Whether to enable in-memory caching"},{"location":"swarms_cloud/python_client/#client-methods","title":"Client Methods","text":""},{"location":"swarms_cloud/python_client/#clear_cache","title":"clear_cache","text":"Clears the in-memory cache used for caching API responses.
client.clear_cache()\n
"},{"location":"swarms_cloud/python_client/#agent-resource","title":"Agent Resource","text":"The Agent resource provides methods for creating and managing agent completions.
"},{"location":"swarms_cloud/python_client/#create","title":"create","text":"Creates an agent completion.
response = client.agent.create(\n agent_config={\n \"agent_name\": \"Researcher\",\n \"description\": \"Conducts in-depth research on topics\",\n \"model_name\": \"gpt-4o\",\n \"temperature\": 0.7\n },\n task=\"Research the latest advancements in quantum computing and summarize the key findings\"\n)\n\nprint(f\"Agent ID: {response.id}\")\nprint(f\"Output: {response.outputs}\")\n
"},{"location":"swarms_cloud/python_client/#parameters_1","title":"Parameters","text":"Parameter Type Required Description agent_config
dict
or AgentSpec
Yes Configuration for the agent task
str
Yes The task for the agent to complete history
dict
No Optional conversation history The agent_config
parameter can include the following fields:
agent_name
str
Required Name of the agent description
str
None
Description of the agent's purpose system_prompt
str
None
System prompt to guide the agent's behavior model_name
str
\"gpt-4o-mini\"
Name of the model to use auto_generate_prompt
bool
False
Whether to automatically generate a prompt max_tokens
int
8192
Maximum tokens in the response temperature
float
0.5
Temperature for sampling (0-1) role
str
None
Role of the agent max_loops
int
1
Maximum number of reasoning loops tools_dictionary
List[Dict]
None
Tools available to the agent"},{"location":"swarms_cloud/python_client/#returns","title":"Returns","text":"AgentCompletionResponse
object with the following properties:
- id: Unique identifier for the completion
- success: Whether the completion was successful
- name: Name of the agent
- description: Description of the agent
- temperature: Temperature used for the completion
- outputs: Output from the agent
- usage: Token usage information
- timestamp: Timestamp of the completion

Creates multiple agent completions in batch.
responses = client.agent.create_batch([\n {\n \"agent_config\": {\n \"agent_name\": \"Researcher\",\n \"model_name\": \"gpt-4o-mini\",\n \"temperature\": 0.5\n },\n \"task\": \"Summarize the latest quantum computing research\"\n },\n {\n \"agent_config\": {\n \"agent_name\": \"Writer\",\n \"model_name\": \"gpt-4o\",\n \"temperature\": 0.7\n },\n \"task\": \"Write a blog post about AI safety\"\n }\n])\n\nfor i, response in enumerate(responses):\n print(f\"Agent {i+1} ID: {response.id}\")\n print(f\"Output: {response.outputs}\")\n print(\"---\")\n
"},{"location":"swarms_cloud/python_client/#parameters_2","title":"Parameters","text":"Parameter Type Required Description completions
List[Dict or AgentCompletion]
Yes List of agent completion requests Each item in the completions
list should have the same structure as the parameters for the create
method.
List of AgentCompletionResponse
objects with the same properties as the return value of the create
method.
Creates an agent completion asynchronously.
import asyncio\nfrom swarms_client import SwarmsClient\n\nasync def main():\n async with SwarmsClient(api_key=\"your-api-key\") as client:\n response = await client.agent.acreate(\n agent_config={\n \"agent_name\": \"Researcher\",\n \"description\": \"Conducts in-depth research\",\n \"model_name\": \"gpt-4o\"\n },\n task=\"Research the impact of quantum computing on cryptography\"\n )\n\n print(f\"Agent ID: {response.id}\")\n print(f\"Output: {response.outputs}\")\n\nasyncio.run(main())\n
"},{"location":"swarms_cloud/python_client/#parameters_3","title":"Parameters","text":"Same as the create
method.
Same as the create
method.
Creates multiple agent completions in batch asynchronously.
import asyncio\nfrom swarms_client import SwarmsClient\n\nasync def main():\n async with SwarmsClient(api_key=\"your-api-key\") as client:\n responses = await client.agent.acreate_batch([\n {\n \"agent_config\": {\n \"agent_name\": \"Researcher\",\n \"model_name\": \"gpt-4o-mini\"\n },\n \"task\": \"Summarize the latest quantum computing research\"\n },\n {\n \"agent_config\": {\n \"agent_name\": \"Writer\",\n \"model_name\": \"gpt-4o\"\n },\n \"task\": \"Write a blog post about AI safety\"\n }\n ])\n\n for i, response in enumerate(responses):\n print(f\"Agent {i+1} ID: {response.id}\")\n print(f\"Output: {response.outputs}\")\n print(\"---\")\n\nasyncio.run(main())\n
"},{"location":"swarms_cloud/python_client/#parameters_4","title":"Parameters","text":"Same as the create_batch
method.
Same as the create_batch
method.
The Swarm resource provides methods for creating and managing swarm completions.
"},{"location":"swarms_cloud/python_client/#create_1","title":"create","text":"Creates a swarm completion.
response = client.swarm.create(\n name=\"Research Swarm\",\n description=\"A swarm for research tasks\",\n swarm_type=\"SequentialWorkflow\",\n task=\"Research quantum computing advances in 2024 and summarize the key findings\",\n agents=[\n {\n \"agent_name\": \"Researcher\",\n \"description\": \"Conducts in-depth research\",\n \"model_name\": \"gpt-4o\",\n \"temperature\": 0.5\n },\n {\n \"agent_name\": \"Critic\",\n \"description\": \"Evaluates arguments for flaws\",\n \"model_name\": \"gpt-4o-mini\",\n \"temperature\": 0.3\n }\n ],\n max_loops=3,\n return_history=True\n)\n\nprint(f\"Job ID: {response.job_id}\")\nprint(f\"Status: {response.status}\")\nprint(f\"Output: {response.output}\")\n
"},{"location":"swarms_cloud/python_client/#parameters_5","title":"Parameters","text":"Parameter Type Required Description name
str
No Name of the swarm description
str
No Description of the swarm agents
List[Dict or AgentSpec]
No List of agent specifications max_loops
int
No Maximum number of loops (default: 1) swarm_type
str
No Type of swarm (see available types) task
str
Conditional The task to complete (required if tasks and messages are not provided) tasks
List[str]
Conditional List of tasks for batch processing (required if task and messages are not provided) messages
List[Dict]
Conditional List of messages to process (required if task and tasks are not provided) return_history
bool
No Whether to return the execution history (default: True) rules
str
No Rules for the swarm schedule
Dict
No Schedule specification for delayed execution stream
bool
No Whether to stream the response (default: False) service_tier
str
No Service tier ('standard' or 'flex', default: 'standard')"},{"location":"swarms_cloud/python_client/#returns_4","title":"Returns","text":"SwarmCompletionResponse
object with the following properties:
- job_id: Unique identifier for the job
- status: Status of the job
- swarm_name: Name of the swarm
- description: Description of the swarm
- swarm_type: Type of swarm used
- output: Output from the swarm
- number_of_agents: Number of agents in the swarm
- service_tier: Service tier used
- tasks: List of tasks processed (if applicable)
- messages: List of messages processed (if applicable)

Creates multiple swarm completions in batch.
responses = client.swarm.create_batch([\n {\n \"name\": \"Research Swarm\",\n \"swarm_type\": \"auto\",\n \"task\": \"Research quantum computing advances\",\n \"agents\": [\n {\"agent_name\": \"Researcher\", \"model_name\": \"gpt-4o\"}\n ]\n },\n {\n \"name\": \"Writing Swarm\",\n \"swarm_type\": \"SequentialWorkflow\",\n \"task\": \"Write a blog post about AI safety\",\n \"agents\": [\n {\"agent_name\": \"Writer\", \"model_name\": \"gpt-4o\"},\n {\"agent_name\": \"Editor\", \"model_name\": \"gpt-4o-mini\"}\n ]\n }\n])\n\nfor i, response in enumerate(responses):\n print(f\"Swarm {i+1} Job ID: {response.job_id}\")\n print(f\"Status: {response.status}\")\n print(f\"Output: {response.output}\")\n print(\"---\")\n
"},{"location":"swarms_cloud/python_client/#parameters_6","title":"Parameters","text":"Parameter Type Required Description swarms
List[Dict or SwarmSpec]
Yes List of swarm specifications Each item in the swarms
list should have the same structure as the parameters for the create
method.
List of SwarmCompletionResponse
objects with the same properties as the return value of the create
method.
Lists available swarm types.
response = client.swarm.list_types()\n\nprint(f\"Available swarm types:\")\nfor swarm_type in response.swarm_types:\n print(f\"- {swarm_type}\")\n
"},{"location":"swarms_cloud/python_client/#returns_6","title":"Returns","text":"SwarmTypesResponse
object with the following properties:
- success: Whether the request was successful
- swarm_types: List of available swarm types

Lists available swarm types asynchronously.
import asyncio\nfrom swarms_client import SwarmsClient\n\nasync def main():\n async with SwarmsClient(api_key=\"your-api-key\") as client:\n response = await client.swarm.alist_types()\n\n print(f\"Available swarm types:\")\n for swarm_type in response.swarm_types:\n print(f\"- {swarm_type}\")\n\nasyncio.run(main())\n
"},{"location":"swarms_cloud/python_client/#returns_7","title":"Returns","text":"Same as the list_types
method.
Creates a swarm completion asynchronously.
import asyncio\nfrom swarms_client import SwarmsClient\n\nasync def main():\n async with SwarmsClient(api_key=\"your-api-key\") as client:\n response = await client.swarm.acreate(\n name=\"Research Swarm\",\n swarm_type=\"SequentialWorkflow\",\n task=\"Research quantum computing advances in 2024\",\n agents=[\n {\n \"agent_name\": \"Researcher\",\n \"description\": \"Conducts in-depth research\",\n \"model_name\": \"gpt-4o\"\n },\n {\n \"agent_name\": \"Critic\",\n \"description\": \"Evaluates arguments for flaws\",\n \"model_name\": \"gpt-4o-mini\"\n }\n ]\n )\n\n print(f\"Job ID: {response.job_id}\")\n print(f\"Status: {response.status}\")\n print(f\"Output: {response.output}\")\n\nasyncio.run(main())\n
"},{"location":"swarms_cloud/python_client/#parameters_7","title":"Parameters","text":"Same as the create
method.
Same as the create
method.
Creates multiple swarm completions in batch asynchronously.
import asyncio\nfrom swarms_client import SwarmsClient\n\nasync def main():\n async with SwarmsClient(api_key=\"your-api-key\") as client:\n responses = await client.swarm.acreate_batch([\n {\n \"name\": \"Research Swarm\",\n \"swarm_type\": \"auto\",\n \"task\": \"Research quantum computing\",\n \"agents\": [\n {\"agent_name\": \"Researcher\", \"model_name\": \"gpt-4o\"}\n ]\n },\n {\n \"name\": \"Writing Swarm\",\n \"swarm_type\": \"SequentialWorkflow\",\n \"task\": \"Write a blog post about AI safety\",\n \"agents\": [\n {\"agent_name\": \"Writer\", \"model_name\": \"gpt-4o\"}\n ]\n }\n ])\n\n for i, response in enumerate(responses):\n print(f\"Swarm {i+1} Job ID: {response.job_id}\")\n print(f\"Status: {response.status}\")\n print(f\"Output: {response.output}\")\n print(\"---\")\n\nasyncio.run(main())\n
"},{"location":"swarms_cloud/python_client/#parameters_8","title":"Parameters","text":"Same as the create_batch
method.
Same as the create_batch
method.
The Models resource provides methods for retrieving information about available models.
"},{"location":"swarms_cloud/python_client/#list","title":"list","text":"Lists available models.
response = client.models.list()\n\nprint(f\"Available models:\")\nfor model in response.models:\n print(f\"- {model}\")\n
"},{"location":"swarms_cloud/python_client/#returns_10","title":"Returns","text":"ModelsResponse
object with the following properties:
- success: Whether the request was successful
- models: List of available model names

Lists available models asynchronously.
import asyncio\nfrom swarms_client import SwarmsClient\n\nasync def main():\n async with SwarmsClient(api_key=\"your-api-key\") as client:\n response = await client.models.alist()\n\n print(f\"Available models:\")\n for model in response.models:\n print(f\"- {model}\")\n\nasyncio.run(main())\n
"},{"location":"swarms_cloud/python_client/#returns_11","title":"Returns","text":"Same as the list
method.
The Logs resource provides methods for retrieving API request logs.
"},{"location":"swarms_cloud/python_client/#list_1","title":"list","text":"Lists API request logs.
response = client.logs.list()\n\nprint(f\"Found {response.count} logs:\")\nfor log in response.logs:\n print(f\"- ID: {log.id}, Created at: {log.created_at}\")\n print(f\" Data: {log.data}\")\n
"},{"location":"swarms_cloud/python_client/#returns_12","title":"Returns","text":"LogsResponse
object with the following properties:
- status: Status of the request
- count: Number of logs
- logs: List of log entries
- timestamp: Timestamp of the request

Each log entry is a LogEntry object with the following properties:

- id: Unique identifier for the log entry
- api_key: API key used for the request
- data: Request data
- created_at: Timestamp when the log entry was created

Lists API request logs asynchronously.
import asyncio\nfrom swarms_client import SwarmsClient\n\nasync def main():\n async with SwarmsClient() as client:\n response = await client.logs.alist()\n\n print(f\"Found {response.count} logs:\")\n for log in response.logs:\n print(f\"- ID: {log.id}, Created at: {log.created_at}\")\n print(f\" Data: {log.data}\")\n\nasyncio.run(main())\n
"},{"location":"swarms_cloud/python_client/#returns_13","title":"Returns","text":"Same as the list
method.
The Swarms API client provides detailed error handling with specific exception types for different error scenarios. All exceptions inherit from the base SwarmsError
class.
from swarms_client import SwarmsClient, SwarmsError, AuthenticationError, RateLimitError, APIError\n\ntry:\n client = SwarmsClient(api_key=\"invalid-api-key\")\n response = client.agent.create(\n agent_config={\"agent_name\": \"Researcher\", \"model_name\": \"gpt-4o\"},\n task=\"Research quantum computing\"\n )\nexcept AuthenticationError as e:\n print(f\"Authentication error: {e}\")\nexcept RateLimitError as e:\n print(f\"Rate limit exceeded: {e}\")\nexcept APIError as e:\n print(f\"API error: {e}\")\nexcept SwarmsError as e:\n print(f\"Swarms error: {e}\")\n
"},{"location":"swarms_cloud/python_client/#exception-types","title":"Exception Types","text":"Exception Description SwarmsError
Base exception for all Swarms API errors AuthenticationError
Raised when there's an issue with authentication RateLimitError
Raised when the rate limit is exceeded APIError
Raised when the API returns an error InvalidRequestError
Raised when the request is invalid InsufficientCreditsError
Raised when the user doesn't have enough credits TimeoutError
Raised when a request times out NetworkError
Raised when there's a network issue"},{"location":"swarms_cloud/python_client/#advanced-features","title":"Advanced Features","text":""},{"location":"swarms_cloud/python_client/#connection-pooling","title":"Connection Pooling","text":"The Swarms API client uses connection pooling to efficiently manage HTTP connections, which can significantly improve performance when making multiple requests.
client = SwarmsClient(\n api_key=\"your-api-key\",\n pool_connections=100, # Number of connection pools to cache\n pool_maxsize=100, # Maximum number of connections to save in the pool\n keep_alive_timeout=5 # Keep-alive timeout for connections in seconds\n)\n
"},{"location":"swarms_cloud/python_client/#circuit-breaker-pattern","title":"Circuit Breaker Pattern","text":"The client implements the circuit breaker pattern to prevent cascading failures when the API is experiencing issues.
client = SwarmsClient(\n api_key=\"your-api-key\",\n circuit_breaker_threshold=5, # Number of failures before the circuit opens\n circuit_breaker_timeout=60 # Time in seconds before attempting to close the circuit\n)\n
"},{"location":"swarms_cloud/python_client/#caching","title":"Caching","text":"The client includes in-memory caching for frequently accessed resources to reduce API calls and improve performance.
client = SwarmsClient(\n api_key=\"your-api-key\",\n enable_cache=True # Enable in-memory caching\n)\n\n# Clear the cache manually if needed\nclient.clear_cache()\n
"},{"location":"swarms_cloud/python_client/#complete-example","title":"Complete Example","text":"Here's a complete example that demonstrates how to use the Swarms API client to create a research swarm and process its output:
import os\nfrom swarms_client import SwarmsClient\nfrom dotenv import load_dotenv\n\n# Load API key from environment\nload_dotenv()\napi_key = os.getenv(\"SWARMS_API_KEY\")\n\n# Initialize client\nclient = SwarmsClient(api_key=api_key)\n\n# Create a research swarm\ntry:\n # Define the agents\n researcher = {\n \"agent_name\": \"Researcher\",\n \"description\": \"Conducts thorough research on specified topics\",\n \"model_name\": \"gpt-4o\",\n \"temperature\": 0.5,\n \"system_prompt\": \"You are a diligent researcher focused on finding accurate and comprehensive information.\"\n }\n\n analyst = {\n \"agent_name\": \"Analyst\",\n \"description\": \"Analyzes research findings and identifies key insights\",\n \"model_name\": \"gpt-4o\",\n \"temperature\": 0.3,\n \"system_prompt\": \"You are an insightful analyst who can identify patterns and extract meaningful insights from research data.\"\n }\n\n summarizer = {\n \"agent_name\": \"Summarizer\",\n \"description\": \"Creates concise summaries of complex information\",\n \"model_name\": \"gpt-4o-mini\",\n \"temperature\": 0.4,\n \"system_prompt\": \"You specialize in distilling complex information into clear, concise summaries.\"\n }\n\n # Create the swarm\n response = client.swarm.create(\n name=\"Quantum Computing Research Swarm\",\n description=\"A swarm for researching and analyzing quantum computing advancements\",\n swarm_type=\"SequentialWorkflow\",\n task=\"Research the latest advancements in quantum computing in 2024, analyze their potential impact on cryptography and data security, and provide a concise summary of the findings.\",\n agents=[researcher, analyst, summarizer],\n max_loops=2,\n return_history=True\n )\n\n # Process the response\n print(f\"Job ID: {response.job_id}\")\n print(f\"Status: {response.status}\")\n print(f\"Number of agents: {response.number_of_agents}\")\n print(f\"Swarm type: {response.swarm_type}\")\n\n # Print the output\n if \"final_output\" in response.output:\n print(\"\\nFinal 
Output:\")\n print(response.output[\"final_output\"])\n else:\n print(\"\\nOutput:\")\n print(response.output)\n\n # Access agent-specific outputs if available\n if \"agent_outputs\" in response.output:\n print(\"\\nAgent Outputs:\")\n for agent, output in response.output[\"agent_outputs\"].items():\n print(f\"\\n{agent}:\")\n print(output)\n\nexcept Exception as e:\n print(f\"Error: {e}\")\n
This example creates a sequential workflow swarm with three agents to research quantum computing, analyze the findings, and create a summary of the results.
"},{"location":"swarms_cloud/quickstart/","title":"Swarms Quickstart Guide","text":"This guide will help you get started with both single agent and multi-agent functionalities in Swarms API.
"},{"location":"swarms_cloud/quickstart/#prerequisites","title":"Prerequisites","text":"Requirements
- requests library for Python
- axios for TypeScript/JavaScript
- curl for shell commands

pip install requests python-dotenv\n
npm install axios dotenv\n
"},{"location":"swarms_cloud/quickstart/#authentication","title":"Authentication","text":"API Key Security
Never hardcode your API key in your code. Always use environment variables or secure configuration management.
The API is accessible through two base URLs:
https://api.swarms.world
https://swarms-api-285321057562.us-east1.run.app
import os\nimport requests\nfrom dotenv import load_dotenv\n\nload_dotenv()\nAPI_KEY = os.getenv(\"SWARMS_API_KEY\")\nBASE_URL = \"https://api.swarms.world\"\n\nheaders = {\n \"x-api-key\": API_KEY,\n \"Content-Type\": \"application/json\"\n}\n\nresponse = requests.get(f\"{BASE_URL}/health\", headers=headers)\nprint(response.json())\n
health_check.shcurl -X GET \"https://api.swarms.world/health\" \\\n -H \"x-api-key: $SWARMS_API_KEY\" \\\n -H \"Content-Type: application/json\"\n
health_check.tsimport axios from 'axios';\nimport * as dotenv from 'dotenv';\n\ndotenv.config();\nconst API_KEY = process.env.SWARMS_API_KEY;\nconst BASE_URL = 'https://api.swarms.world';\n\nasync function checkHealth() {\n try {\n const response = await axios.get(`${BASE_URL}/health`, {\n headers: {\n 'x-api-key': API_KEY,\n 'Content-Type': 'application/json'\n }\n });\n console.log(response.data);\n } catch (error) {\n console.error('Error:', error);\n }\n}\n\ncheckHealth();\n
"},{"location":"swarms_cloud/quickstart/#basic-agent","title":"Basic Agent","text":"PythoncURLTypeScript single_agent.pyimport os\nimport requests\nfrom dotenv import load_dotenv\n\nload_dotenv()\n\nAPI_KEY = os.getenv(\"SWARMS_API_KEY\") # (1)\nBASE_URL = \"https://api.swarms.world\"\n\nheaders = {\n \"x-api-key\": API_KEY,\n \"Content-Type\": \"application/json\"\n}\n\ndef run_single_agent():\n \"\"\"Run a single agent with the AgentCompletion format\"\"\"\n payload = {\n \"agent_config\": {\n \"agent_name\": \"Research Analyst\", # (2)\n \"description\": \"An expert in analyzing and synthesizing research data\",\n \"system_prompt\": ( # (3)\n \"You are a Research Analyst with expertise in data analysis and synthesis. \"\n \"Your role is to analyze provided information, identify key insights, \"\n \"and present findings in a clear, structured format.\"\n ),\n \"model_name\": \"claude-3-5-sonnet-20240620\", # (4)\n \"role\": \"worker\",\n \"max_loops\": 1,\n \"max_tokens\": 8192,\n \"temperature\": 1,\n \"auto_generate_prompt\": False,\n \"tools_list_dictionary\": None,\n },\n \"task\": \"What are the key trends in renewable energy adoption?\", # (5)\n }\n\n response = requests.post(\n f\"{BASE_URL}/v1/agent/completions\",\n headers=headers,\n json=payload\n )\n return response.json()\n\n# Run the agent\nresult = run_single_agent()\nprint(result)\n
curl -X POST \"https://api.swarms.world/v1/agent/completions\" \\\n -H \"x-api-key: $SWARMS_API_KEY\" \\\n -H \"Content-Type: application/json\" \\\n -d '{\n \"agent_config\": {\n \"agent_name\": \"Research Analyst\",\n \"description\": \"An expert in analyzing and synthesizing research data\",\n \"system_prompt\": \"You are a Research Analyst with expertise in data analysis and synthesis. Your role is to analyze provided information, identify key insights, and present findings in a clear, structured format.\",\n \"model_name\": \"claude-3-5-sonnet-20240620\",\n \"role\": \"worker\",\n \"max_loops\": 1,\n \"max_tokens\": 8192,\n \"temperature\": 1,\n \"auto_generate_prompt\": false,\n \"tools_list_dictionary\": null\n },\n \"task\": \"What are the key trends in renewable energy adoption?\"\n }'\n
single_agent.tsimport axios from 'axios';\nimport * as dotenv from 'dotenv';\n\ndotenv.config();\n\nconst API_KEY = process.env.SWARMS_API_KEY;\nconst BASE_URL = 'https://api.swarms.world';\n\ninterface AgentConfig {\n agent_name: string;\n description: string;\n system_prompt: string;\n model_name: string;\n role: string;\n max_loops: number;\n max_tokens: number;\n temperature: number;\n auto_generate_prompt: boolean;\n tools_list_dictionary: null | object[];\n}\n\ninterface AgentPayload {\n agent_config: AgentConfig;\n task: string;\n}\n\nasync function runSingleAgent() {\n const payload: AgentPayload = {\n agent_config: {\n agent_name: \"Research Analyst\",\n description: \"An expert in analyzing and synthesizing research data\",\n system_prompt: \"You are a Research Analyst with expertise in data analysis and synthesis.\",\n model_name: \"claude-3-5-sonnet-20240620\",\n role: \"worker\",\n max_loops: 1,\n max_tokens: 8192,\n temperature: 1,\n auto_generate_prompt: false,\n tools_list_dictionary: null\n },\n task: \"What are the key trends in renewable energy adoption?\"\n };\n\n try {\n const response = await axios.post(\n `${BASE_URL}/v1/agent/completions`,\n payload,\n {\n headers: {\n 'x-api-key': API_KEY,\n 'Content-Type': 'application/json'\n }\n }\n );\n return response.data;\n } catch (error) {\n console.error('Error:', error);\n throw error;\n }\n}\n\n// Run the agent\nrunSingleAgent()\n .then(result => console.log(result))\n .catch(error => console.error(error));\n
"},{"location":"swarms_cloud/quickstart/#agent-with-history","title":"Agent with History","text":"PythoncURLTypeScript agent_with_history.pydef run_agent_with_history():\n payload = {\n \"agent_config\": {\n \"agent_name\": \"Conversation Agent\",\n \"description\": \"An agent that maintains conversation context\",\n \"system_prompt\": \"You are a helpful assistant that maintains context.\",\n \"model_name\": \"claude-3-5-sonnet-20240620\",\n \"role\": \"worker\",\n \"max_loops\": 1,\n \"max_tokens\": 8192,\n \"temperature\": 0.7,\n \"auto_generate_prompt\": False,\n },\n \"task\": \"What's the weather like?\",\n \"history\": [ # (1)\n {\n \"role\": \"user\",\n \"content\": \"I'm planning a trip to New York.\"\n },\n {\n \"role\": \"assistant\",\n \"content\": \"That's great! When are you planning to visit?\"\n },\n {\n \"role\": \"user\",\n \"content\": \"Next week.\"\n }\n ]\n }\n\n response = requests.post(\n f\"{BASE_URL}/v1/agent/completions\",\n headers=headers,\n json=payload\n )\n return response.json()\n
curl -X POST \"https://api.swarms.world/v1/agent/completions\" \\\n -H \"x-api-key: $SWARMS_API_KEY\" \\\n -H \"Content-Type: application/json\" \\\n -d '{\n \"agent_config\": {\n \"agent_name\": \"Conversation Agent\",\n \"description\": \"An agent that maintains conversation context\",\n \"system_prompt\": \"You are a helpful assistant that maintains context.\",\n \"model_name\": \"claude-3-5-sonnet-20240620\",\n \"role\": \"worker\",\n \"max_loops\": 1,\n \"max_tokens\": 8192,\n \"temperature\": 0.7,\n \"auto_generate_prompt\": false\n },\n \"task\": \"What'\\''s the weather like?\",\n \"history\": [\n {\n \"role\": \"user\",\n \"content\": \"I'\\''m planning a trip to New York.\"\n },\n {\n \"role\": \"assistant\",\n \"content\": \"That'\\''s great! When are you planning to visit?\"\n },\n {\n \"role\": \"user\",\n \"content\": \"Next week.\"\n }\n ]\n }'\n
agent_with_history.tsinterface Message {\n role: 'user' | 'assistant';\n content: string;\n}\n\ninterface AgentWithHistoryPayload extends AgentPayload {\n history: Message[];\n}\n\nasync function runAgentWithHistory() {\n const payload: AgentWithHistoryPayload = {\n agent_config: {\n agent_name: \"Conversation Agent\",\n description: \"An agent that maintains conversation context\",\n system_prompt: \"You are a helpful assistant that maintains context.\",\n model_name: \"claude-3-5-sonnet-20240620\",\n role: \"worker\",\n max_loops: 1,\n max_tokens: 8192,\n temperature: 0.7,\n auto_generate_prompt: false,\n tools_list_dictionary: null\n },\n task: \"What's the weather like?\",\n history: [\n {\n role: \"user\",\n content: \"I'm planning a trip to New York.\"\n },\n {\n role: \"assistant\",\n content: \"That's great! When are you planning to visit?\"\n },\n {\n role: \"user\",\n content: \"Next week.\"\n }\n ]\n };\n\n try {\n const response = await axios.post(\n `${BASE_URL}/v1/agent/completions`,\n payload,\n {\n headers: {\n 'x-api-key': API_KEY,\n 'Content-Type': 'application/json'\n }\n }\n );\n return response.data;\n } catch (error) {\n console.error('Error:', error);\n throw error;\n }\n}\n
"},{"location":"swarms_cloud/quickstart/#multi-agent-swarms","title":"Multi-Agent Swarms","text":"Swarm Types
The Swarms API supports two types of agent workflows:
SequentialWorkflow: Agents work in sequence, each building on the previous agent's output. ConcurrentWorkflow
: Agents work in parallel on the same taskdef run_sequential_swarm():\n payload = {\n \"name\": \"Financial Analysis Swarm\",\n \"description\": \"Market analysis swarm\",\n \"agents\": [\n {\n \"agent_name\": \"Market Analyst\", # (1)\n \"description\": \"Analyzes market trends\",\n \"system_prompt\": \"You are a financial analyst expert.\",\n \"model_name\": \"gpt-4o\",\n \"role\": \"worker\",\n \"max_loops\": 1,\n \"max_tokens\": 8192,\n \"temperature\": 0.5,\n \"auto_generate_prompt\": False\n },\n {\n \"agent_name\": \"Economic Forecaster\", # (2)\n \"description\": \"Predicts economic trends\",\n \"system_prompt\": \"You are an expert in economic forecasting.\",\n \"model_name\": \"gpt-4o\",\n \"role\": \"worker\",\n \"max_loops\": 1,\n \"max_tokens\": 8192,\n \"temperature\": 0.5,\n \"auto_generate_prompt\": False\n }\n ],\n \"max_loops\": 1,\n \"swarm_type\": \"SequentialWorkflow\", # (3)\n \"task\": \"Analyze the current market conditions and provide economic forecasts.\"\n }\n\n response = requests.post(\n f\"{BASE_URL}/v1/swarm/completions\",\n headers=headers,\n json=payload\n )\n return response.json()\n
curl -X POST \"https://api.swarms.world/v1/swarm/completions\" \\\n -H \"x-api-key: $SWARMS_API_KEY\" \\\n -H \"Content-Type: application/json\" \\\n -d '{\n \"name\": \"Financial Analysis Swarm\",\n \"description\": \"Market analysis swarm\",\n \"agents\": [\n {\n \"agent_name\": \"Market Analyst\",\n \"description\": \"Analyzes market trends\",\n \"system_prompt\": \"You are a financial analyst expert.\",\n \"model_name\": \"gpt-4o\",\n \"role\": \"worker\",\n \"max_loops\": 1,\n \"max_tokens\": 8192,\n \"temperature\": 0.5,\n \"auto_generate_prompt\": false\n },\n {\n \"agent_name\": \"Economic Forecaster\",\n \"description\": \"Predicts economic trends\",\n \"system_prompt\": \"You are an expert in economic forecasting.\",\n \"model_name\": \"gpt-4o\",\n \"role\": \"worker\",\n \"max_loops\": 1,\n \"max_tokens\": 8192,\n \"temperature\": 0.5,\n \"auto_generate_prompt\": false\n }\n ],\n \"max_loops\": 1,\n \"swarm_type\": \"SequentialWorkflow\",\n \"task\": \"Analyze the current market conditions and provide economic forecasts.\"\n }'\n
sequential_swarm.tsinterface SwarmAgent {\n agent_name: string;\n description: string;\n system_prompt: string;\n model_name: string;\n role: string;\n max_loops: number;\n max_tokens: number;\n temperature: number;\n auto_generate_prompt: boolean;\n}\n\ninterface SwarmPayload {\n name: string;\n description: string;\n agents: SwarmAgent[];\n max_loops: number;\n swarm_type: 'SequentialWorkflow' | 'ConcurrentWorkflow';\n task: string;\n}\n\nasync function runSequentialSwarm() {\n const payload: SwarmPayload = {\n name: \"Financial Analysis Swarm\",\n description: \"Market analysis swarm\",\n agents: [\n {\n agent_name: \"Market Analyst\",\n description: \"Analyzes market trends\",\n system_prompt: \"You are a financial analyst expert.\",\n model_name: \"gpt-4o\",\n role: \"worker\",\n max_loops: 1,\n max_tokens: 8192,\n temperature: 0.5,\n auto_generate_prompt: false\n },\n {\n agent_name: \"Economic Forecaster\",\n description: \"Predicts economic trends\",\n system_prompt: \"You are an expert in economic forecasting.\",\n model_name: \"gpt-4o\",\n role: \"worker\",\n max_loops: 1,\n max_tokens: 8192,\n temperature: 0.5,\n auto_generate_prompt: false\n }\n ],\n max_loops: 1,\n swarm_type: \"SequentialWorkflow\",\n task: \"Analyze the current market conditions and provide economic forecasts.\"\n };\n\n try {\n const response = await axios.post(\n `${BASE_URL}/v1/swarm/completions`,\n payload,\n {\n headers: {\n 'x-api-key': API_KEY,\n 'Content-Type': 'application/json'\n }\n }\n );\n return response.data;\n } catch (error) {\n console.error('Error:', error);\n throw error;\n }\n}\n
"},{"location":"swarms_cloud/quickstart/#concurrent-workflow","title":"Concurrent Workflow","text":"PythoncURLTypeScript concurrent_swarm.pydef run_concurrent_swarm():\n payload = {\n \"name\": \"Medical Analysis Swarm\",\n \"description\": \"Analyzes medical data concurrently\",\n \"agents\": [\n {\n \"agent_name\": \"Lab Data Analyzer\", # (1)\n \"description\": \"Analyzes lab report data\",\n \"system_prompt\": \"You are a medical data analyst specializing in lab results.\",\n \"model_name\": \"claude-3-5-sonnet-20240620\",\n \"role\": \"worker\",\n \"max_loops\": 1,\n \"max_tokens\": 8192,\n \"temperature\": 0.5,\n \"auto_generate_prompt\": False\n },\n {\n \"agent_name\": \"Clinical Specialist\", # (2)\n \"description\": \"Provides clinical interpretations\",\n \"system_prompt\": \"You are an expert in clinical diagnosis.\",\n \"model_name\": \"claude-3-5-sonnet-20240620\",\n \"role\": \"worker\",\n \"max_loops\": 1,\n \"max_tokens\": 8192,\n \"temperature\": 0.5,\n \"auto_generate_prompt\": False\n }\n ],\n \"max_loops\": 1,\n \"swarm_type\": \"ConcurrentWorkflow\", # (3)\n \"task\": \"Analyze these lab results and provide clinical interpretations.\"\n }\n\n response = requests.post(\n f\"{BASE_URL}/v1/swarm/completions\",\n headers=headers,\n json=payload\n )\n return response.json()\n
curl -X POST \"https://api.swarms.world/v1/swarm/completions\" \\\n -H \"x-api-key: $SWARMS_API_KEY\" \\\n -H \"Content-Type: application/json\" \\\n -d '{\n \"name\": \"Medical Analysis Swarm\",\n \"description\": \"Analyzes medical data concurrently\",\n \"agents\": [\n {\n \"agent_name\": \"Lab Data Analyzer\",\n \"description\": \"Analyzes lab report data\",\n \"system_prompt\": \"You are a medical data analyst specializing in lab results.\",\n \"model_name\": \"claude-3-5-sonnet-20240620\",\n \"role\": \"worker\",\n \"max_loops\": 1,\n \"max_tokens\": 8192,\n \"temperature\": 0.5,\n \"auto_generate_prompt\": false\n },\n {\n \"agent_name\": \"Clinical Specialist\",\n \"description\": \"Provides clinical interpretations\",\n \"system_prompt\": \"You are an expert in clinical diagnosis.\",\n \"model_name\": \"claude-3-5-sonnet-20240620\",\n \"role\": \"worker\",\n \"max_loops\": 1,\n \"max_tokens\": 8192,\n \"temperature\": 0.5,\n \"auto_generate_prompt\": false\n }\n ],\n \"max_loops\": 1,\n \"swarm_type\": \"ConcurrentWorkflow\",\n \"task\": \"Analyze these lab results and provide clinical interpretations.\"\n }'\n
concurrent_swarm.tsasync function runConcurrentSwarm() {\n const payload: SwarmPayload = {\n name: \"Medical Analysis Swarm\",\n description: \"Analyzes medical data concurrently\",\n agents: [\n {\n agent_name: \"Lab Data Analyzer\",\n description: \"Analyzes lab report data\",\n system_prompt: \"You are a medical data analyst specializing in lab results.\",\n model_name: \"claude-3-5-sonnet-20240620\",\n role: \"worker\",\n max_loops: 1,\n max_tokens: 8192,\n temperature: 0.5,\n auto_generate_prompt: false\n },\n {\n agent_name: \"Clinical Specialist\",\n description: \"Provides clinical interpretations\",\n system_prompt: \"You are an expert in clinical diagnosis.\",\n model_name: \"claude-3-5-sonnet-20240620\",\n role: \"worker\",\n max_loops: 1,\n max_tokens: 8192,\n temperature: 0.5,\n auto_generate_prompt: false\n }\n ],\n max_loops: 1,\n swarm_type: \"ConcurrentWorkflow\",\n task: \"Analyze these lab results and provide clinical interpretations.\"\n };\n\n try {\n const response = await axios.post(\n `${BASE_URL}/v1/swarm/completions`,\n payload,\n {\n headers: {\n 'x-api-key': API_KEY,\n 'Content-Type': 'application/json'\n }\n }\n );\n return response.data;\n } catch (error) {\n console.error('Error:', error);\n throw error;\n }\n}\n
"},{"location":"swarms_cloud/quickstart/#batch-processing","title":"Batch Processing","text":"Batch Processing
Process multiple swarms in a single request for improved efficiency.
PythoncURLTypeScript batch_swarms.pydef run_batch_swarms():\n payload = [\n {\n \"name\": \"Batch Swarm 1\",\n \"description\": \"First swarm in batch\",\n \"agents\": [\n {\n \"agent_name\": \"Research Agent\",\n \"description\": \"Conducts research\",\n \"system_prompt\": \"You are a research assistant.\",\n \"model_name\": \"gpt-4\",\n \"role\": \"worker\",\n \"max_loops\": 1\n },\n {\n \"agent_name\": \"Analysis Agent\",\n \"description\": \"Analyzes data\",\n \"system_prompt\": \"You are a data analyst.\",\n \"model_name\": \"gpt-4\",\n \"role\": \"worker\",\n \"max_loops\": 1\n }\n ],\n \"max_loops\": 1,\n \"swarm_type\": \"SequentialWorkflow\",\n \"task\": \"Research AI advancements.\"\n }\n ]\n\n response = requests.post(\n f\"{BASE_URL}/v1/swarm/batch/completions\",\n headers=headers,\n json=payload\n )\n return response.json()\n
batch_swarms.shcurl -X POST \"https://api.swarms.world/v1/swarm/batch/completions\" \\\n -H \"x-api-key: $SWARMS_API_KEY\" \\\n -H \"Content-Type: application/json\" \\\n -d '[\n {\n \"name\": \"Batch Swarm 1\",\n \"description\": \"First swarm in batch\",\n \"agents\": [\n {\n \"agent_name\": \"Research Agent\",\n \"description\": \"Conducts research\",\n \"system_prompt\": \"You are a research assistant.\",\n \"model_name\": \"gpt-4\",\n \"role\": \"worker\",\n \"max_loops\": 1\n },\n {\n \"agent_name\": \"Analysis Agent\",\n \"description\": \"Analyzes data\",\n \"system_prompt\": \"You are a data analyst.\",\n \"model_name\": \"gpt-4\",\n \"role\": \"worker\",\n \"max_loops\": 1\n }\n ],\n \"max_loops\": 1,\n \"swarm_type\": \"SequentialWorkflow\",\n \"task\": \"Research AI advancements.\"\n }\n ]'\n
batch_swarms.tsasync function runBatchSwarms() {\n const payload: SwarmPayload[] = [\n {\n name: \"Batch Swarm 1\",\n description: \"First swarm in batch\",\n agents: [\n {\n agent_name: \"Research Agent\",\n description: \"Conducts research\",\n system_prompt: \"You are a research assistant.\",\n model_name: \"gpt-4\",\n role: \"worker\",\n max_loops: 1,\n max_tokens: 8192,\n temperature: 0.7,\n auto_generate_prompt: false\n },\n {\n agent_name: \"Analysis Agent\",\n description: \"Analyzes data\",\n system_prompt: \"You are a data analyst.\",\n model_name: \"gpt-4\",\n role: \"worker\",\n max_loops: 1,\n max_tokens: 8192,\n temperature: 0.7,\n auto_generate_prompt: false\n }\n ],\n max_loops: 1,\n swarm_type: \"SequentialWorkflow\",\n task: \"Research AI advancements.\"\n }\n ];\n\n try {\n const response = await axios.post(\n `${BASE_URL}/v1/swarm/batch/completions`,\n payload,\n {\n headers: {\n 'x-api-key': API_KEY,\n 'Content-Type': 'application/json'\n }\n }\n );\n return response.data;\n } catch (error) {\n console.error('Error:', error);\n throw error;\n }\n}\n
"},{"location":"swarms_cloud/quickstart/#advanced-features","title":"Advanced Features","text":""},{"location":"swarms_cloud/quickstart/#tools-integration","title":"Tools Integration","text":"Tools
Enhance agent capabilities by providing them with specialized tools.
PythoncURLTypeScript tools_example.pydef run_agent_with_tools():\n tools_dictionary = [\n {\n \"type\": \"function\",\n \"function\": {\n \"name\": \"search_topic\",\n \"description\": \"Conduct an in-depth search on a topic\",\n \"parameters\": {\n \"type\": \"object\",\n \"properties\": {\n \"depth\": {\n \"type\": \"integer\",\n \"description\": \"Search depth (1-3)\"\n },\n \"detailed_queries\": {\n \"type\": \"array\",\n \"description\": \"Specific search queries\",\n \"items\": {\n \"type\": \"string\"\n }\n }\n },\n \"required\": [\"depth\", \"detailed_queries\"]\n }\n }\n }\n ]\n\n payload = {\n \"agent_config\": {\n \"agent_name\": \"Research Assistant\",\n \"description\": \"Expert in research with search capabilities\",\n \"system_prompt\": \"You are a research assistant with search capabilities.\",\n \"model_name\": \"gpt-4\",\n \"role\": \"worker\",\n \"max_loops\": 1,\n \"max_tokens\": 8192,\n \"temperature\": 0.7,\n \"auto_generate_prompt\": False,\n \"tools_dictionary\": tools_dictionary\n },\n \"task\": \"Research the latest developments in quantum computing.\"\n }\n\n response = requests.post(\n f\"{BASE_URL}/v1/agent/completions\",\n headers=headers,\n json=payload\n )\n return response.json()\n
tools_example.shcurl -X POST \"https://api.swarms.world/v1/agent/completions\" \\\n -H \"x-api-key: $SWARMS_API_KEY\" \\\n -H \"Content-Type: application/json\" \\\n -d '{\n \"agent_config\": {\n \"agent_name\": \"Research Assistant\",\n \"description\": \"Expert in research with search capabilities\",\n \"system_prompt\": \"You are a research assistant with search capabilities.\",\n \"model_name\": \"gpt-4\",\n \"role\": \"worker\",\n \"max_loops\": 1,\n \"max_tokens\": 8192,\n \"temperature\": 0.7,\n \"auto_generate_prompt\": false,\n \"tools_dictionary\": [\n {\n \"type\": \"function\",\n \"function\": {\n \"name\": \"search_topic\",\n \"description\": \"Conduct an in-depth search on a topic\",\n \"parameters\": {\n \"type\": \"object\",\n \"properties\": {\n \"depth\": {\n \"type\": \"integer\",\n \"description\": \"Search depth (1-3)\"\n },\n \"detailed_queries\": {\n \"type\": \"array\",\n \"description\": \"Specific search queries\",\n \"items\": {\n \"type\": \"string\"\n }\n }\n },\n \"required\": [\"depth\", \"detailed_queries\"]\n }\n }\n }\n ]\n },\n \"task\": \"Research the latest developments in quantum computing.\"\n }'\n
tools_example.tsinterface ToolFunction {\n name: string;\n description: string;\n parameters: {\n type: string;\n properties: {\n [key: string]: {\n type: string;\n description: string;\n items?: {\n type: string;\n };\n };\n };\n required: string[];\n };\n}\n\ninterface Tool {\n type: string;\n function: ToolFunction;\n}\n\ninterface AgentWithToolsConfig extends AgentConfig {\n tools_dictionary: Tool[];\n}\n\ninterface AgentWithToolsPayload {\n agent_config: AgentWithToolsConfig;\n task: string;\n}\n\nasync function runAgentWithTools() {\n const toolsDictionary: Tool[] = [\n {\n type: \"function\",\n function: {\n name: \"search_topic\",\n description: \"Conduct an in-depth search on a topic\",\n parameters: {\n type: \"object\",\n properties: {\n depth: {\n type: \"integer\",\n description: \"Search depth (1-3)\"\n },\n detailed_queries: {\n type: \"array\",\n description: \"Specific search queries\",\n items: {\n type: \"string\"\n }\n }\n },\n required: [\"depth\", \"detailed_queries\"]\n }\n }\n }\n ];\n\n const payload: AgentWithToolsPayload = {\n agent_config: {\n agent_name: \"Research Assistant\",\n description: \"Expert in research with search capabilities\",\n system_prompt: \"You are a research assistant with search capabilities.\",\n model_name: \"gpt-4\",\n role: \"worker\",\n max_loops: 1,\n max_tokens: 8192,\n temperature: 0.7,\n auto_generate_prompt: false,\n tools_dictionary: toolsDictionary\n },\n task: \"Research the latest developments in quantum computing.\"\n };\n\n try {\n const response = await axios.post(\n `${BASE_URL}/v1/agent/completions`,\n payload,\n {\n headers: {\n 'x-api-key': API_KEY,\n 'Content-Type': 'application/json'\n }\n }\n );\n return response.data;\n } catch (error) {\n console.error('Error:', error);\n throw error;\n }\n}\n
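A malformed tool schema (for example, a `required` name with no matching property) typically only surfaces as a server-side error. A small client-side pre-check can catch this earlier; the `validate_tool` helper below is our own sketch, not part of the API:

```python
def validate_tool(tool: dict) -> None:
    # Check a function-tool entry has the shape used in the examples above.
    if tool.get("type") != "function":
        raise ValueError("tool type must be 'function'")
    fn = tool.get("function", {})
    for field in ("name", "description", "parameters"):
        if field not in fn:
            raise ValueError(f"missing function.{field}")
    params = fn["parameters"]
    # Every name listed in `required` must also be defined in `properties`.
    for req in params.get("required", []):
        if req not in params.get("properties", {}):
            raise ValueError(f"required parameter {req!r} is not defined")
```

Run it over each entry of `tools_dictionary` before building the payload.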
"},{"location":"swarms_cloud/quickstart/#available-models","title":"Available Models","text":"Supported Models
Choose the right model for your use case:
OpenAI: gpt-4
gpt-4o
gpt-4o-mini
Anthropic: claude-3-5-sonnet-20240620
claude-3-7-sonnet-latest
Groq: groq/llama3-70b-8192
groq/deepseek-r1-distill-llama-70b
Security
Never commit API keys or sensitive credentials to version control.
Rate Limits
Implement proper rate limiting and error handling in production.
Testing
Start with simple tasks and gradually increase complexity.
PythonTypeScript best_practices.py# Error Handling\ntry:\n response = requests.post(url, headers=headers, json=payload)\n response.raise_for_status()\nexcept requests.exceptions.RequestException as e:\n print(f\"Error: {e}\")\n\n# Rate Limiting\nimport time\nfrom tenacity import retry, wait_exponential\n\n@retry(wait=wait_exponential(multiplier=1, min=4, max=10))\ndef make_api_call():\n response = requests.post(url, headers=headers, json=payload)\n response.raise_for_status()\n return response\n\n# Input Validation\ndef validate_payload(payload):\n required_fields = [\"agent_config\", \"task\"]\n if not all(field in payload for field in required_fields):\n raise ValueError(\"Missing required fields\")\n
best_practices.ts// Error Handling\ntry {\n const response = await axios.post(url, payload, { headers });\n} catch (error) {\n if (axios.isAxiosError(error)) {\n console.error('API Error:', error.response?.data);\n }\n throw error;\n}\n\n// Rate Limiting\nimport rateLimit from 'axios-rate-limit';\n\nconst http = rateLimit(axios.create(), { \n maxRequests: 2,\n perMilliseconds: 1000\n});\n\n// Input Validation\nfunction validatePayload(payload: unknown): asserts payload is AgentPayload {\n if (!payload || typeof payload !== 'object') {\n throw new Error('Invalid payload');\n }\n\n const { agent_config, task } = payload as Partial<AgentPayload>;\n\n if (!agent_config || !task) {\n throw new Error('Missing required fields');\n }\n}\n
"},{"location":"swarms_cloud/quickstart/#connect-with-us","title":"Connect With Us","text":"Join our community of agent engineers and researchers for technical support, cutting-edge updates, and exclusive access to world-class agent engineering insights!
Platform Description Link \ud83d\udcda Documentation Official documentation and guides docs.swarms.world \ud83d\udcdd Blog Latest updates and technical articles Medium \ud83d\udcac Discord Live chat and community support Join Discord \ud83d\udc26 Twitter Latest news and announcements @kyegomez \ud83d\udc65 LinkedIn Professional network and updates The Swarm Corporation \ud83d\udcfa YouTube Tutorials and demos Swarms Channel \ud83c\udfab Events Join our community events Sign up here \ud83d\ude80 Onboarding Session Get onboarded with Kye Gomez, creator and lead maintainer of Swarms Book Session"},{"location":"swarms_cloud/rate_limits/","title":"Swarms API Rate Limits","text":"The Swarms API implements a comprehensive rate limiting system that tracks API requests across multiple time windows and enforces various limits to ensure fair usage and system stability.
"},{"location":"swarms_cloud/rate_limits/#rate-limits-summary","title":"Rate Limits Summary","text":"Rate Limit Type Free Tier Premium Tier Time Window Description Requests per Minute 100 2,000 1 minute Maximum API calls per minute Requests per Hour 50 10,000 1 hour Maximum API calls per hour Requests per Day 1,200 100,000 24 hours Maximum API calls per day Tokens per Agent 200,000 2,000,000 Per request Maximum tokens per agent Prompt Length 200,000 200,000 Per request Maximum input tokens per request Batch Size 10 10 Per request Maximum agents in batch requests IP-based Fallback 100 100 60 seconds For requests without API keys"},{"location":"swarms_cloud/rate_limits/#detailed-rate-limit-explanations","title":"Detailed Rate Limit Explanations","text":""},{"location":"swarms_cloud/rate_limits/#1-request-rate-limits","title":"1. Request Rate Limits","text":"These limits control how many API calls you can make within specific time windows.
"},{"location":"swarms_cloud/rate_limits/#per-minute-limit","title":"Per-Minute Limit","text":"Tier Requests per Minute Reset Interval Applies To Free 100 Every minute (sliding) All API endpoints Premium 2,000 Every minute (sliding) All API endpoints"},{"location":"swarms_cloud/rate_limits/#per-hour-limit","title":"Per-Hour Limit","text":"Free Tier: 1,200 requests per day (50 \u00d7 24)
Premium Tier: 100,000 requests per day
Reset: Every 24 hours (sliding window)
Applies to: All API endpoints
These limits control the amount of text processing allowed per request.
"},{"location":"swarms_cloud/rate_limits/#tokens-per-agent","title":"Tokens per Agent","text":"Free Tier: 200,000 tokens per agent
Premium Tier: 2,000,000 tokens per agent
Applies to: Individual agent configurations
Includes: System prompts, task descriptions, and agent names
All Tiers: 200,000 tokens maximum
Applies to: Combined input text (task + history + system prompts)
Error: Returns 400 error if exceeded
Message: \"Prompt is too long. Please provide a prompt that is less than 10000 tokens.\"
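Since an oversized prompt only fails after a full round trip, a rough client-side check can save the request. The ~4 characters-per-token heuristic below is an assumption for illustration, not the API's actual tokenizer:

```python
def rough_token_estimate(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token for English text.
    return max(1, len(text) // 4)

def prompt_fits(task: str, system_prompt: str = "", history_text: str = "",
                limit: int = 200_000) -> bool:
    # The limit applies to the combined input: task + history + system prompts.
    return rough_token_estimate(task + system_prompt + history_text) <= limit
```

For exact counts you would need the tokenizer of the model in use; this is only a cheap guard against grossly oversized inputs.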
These limits control concurrent processing capabilities.
"},{"location":"swarms_cloud/rate_limits/#batch-size-limit","title":"Batch Size Limit","text":"All Tiers: 10 agents maximum per batch
Applies to: /v1/agent/batch/completions
endpoint
Error: Returns 400 error if exceeded
Message: \"ERROR: BATCH SIZE EXCEEDED - You can only run up to 10 batch agents at a time.\"
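Running more than 10 swarms therefore means splitting the work into multiple batch requests on the client. A minimal sketch (the helper name is ours):

```python
def chunk_batches(swarms: list, max_batch_size: int = 10) -> list:
    # Split a list of swarm payloads into API-sized batches of at most 10.
    return [swarms[i:i + max_batch_size]
            for i in range(0, len(swarms), max_batch_size)]
```

Each chunk can then be POSTed to the batch endpoint in turn.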
The system uses a database-based approach for API key requests:
swarms_api_logs
table. Rate limits use sliding windows rather than fixed windows:
Minute: Counts requests in the last 60 seconds
Hour: Counts requests in the last 60 minutes
Day: Counts requests in the last 24 hours
This provides more accurate rate limiting compared to fixed time windows.
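To make the difference concrete, a sliding-window counter can be sketched in a few lines. This illustrates the general technique only; it is not the server's implementation:

```python
import time
from collections import deque

class SlidingWindowLimiter:
    # Allows at most `limit` events in any trailing `window_seconds` span.
    def __init__(self, limit: int, window_seconds: float):
        self.limit = limit
        self.window = window_seconds
        self.events = deque()

    def allow(self, now=None) -> bool:
        now = time.monotonic() if now is None else now
        # Expire events that have fallen out of the trailing window.
        while self.events and now - self.events[0] >= self.window:
            self.events.popleft()
        if len(self.events) < self.limit:
            self.events.append(now)
            return True
        return False
```

Unlike a fixed window, a burst just before a boundary cannot be immediately followed by another full burst just after it.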
"},{"location":"swarms_cloud/rate_limits/#checking-your-rate-limits","title":"Checking Your Rate Limits","text":""},{"location":"swarms_cloud/rate_limits/#api-endpoint","title":"API Endpoint","text":"Use the /v1/rate/limits
endpoint to check your current usage:
curl -H \"x-api-key: your-api-key\" \\\n https://api.swarms.world/v1/rate/limits\n
"},{"location":"swarms_cloud/rate_limits/#response-format","title":"Response Format","text":"{\n \"success\": true,\n \"rate_limits\": {\n \"minute\": {\n \"count\": 5,\n \"limit\": 100,\n \"exceeded\": false,\n \"remaining\": 95,\n \"reset_time\": \"2024-01-15T10:30:00Z\"\n },\n \"hour\": {\n \"count\": 25,\n \"limit\": 50,\n \"exceeded\": false,\n \"remaining\": 25,\n \"reset_time\": \"2024-01-15T11:00:00Z\"\n },\n \"day\": {\n \"count\": 150,\n \"limit\": 1200,\n \"exceeded\": false,\n \"remaining\": 1050,\n \"reset_time\": \"2024-01-16T10:00:00Z\"\n }\n },\n \"limits\": {\n \"maximum_requests_per_minute\": 100,\n \"maximum_requests_per_hour\": 50,\n \"maximum_requests_per_day\": 1200,\n \"tokens_per_agent\": 200000\n },\n \"timestamp\": \"2024-01-15T10:29:30Z\"\n}\n
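When pacing requests client-side, the useful number is usually whichever window is closest to exhaustion. A small helper over the `rate_limits` object in the response above (the function name is ours):

```python
def most_constrained_window(rate_limits: dict) -> tuple:
    # Return (window_name, remaining) for the window with the least headroom.
    window = min(rate_limits, key=lambda w: rate_limits[w]["remaining"])
    return window, rate_limits[window]["remaining"]
```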
"},{"location":"swarms_cloud/rate_limits/#handling-rate-limit-errors","title":"Handling Rate Limit Errors","text":""},{"location":"swarms_cloud/rate_limits/#error-response","title":"Error Response","text":"When rate limits are exceeded, you'll receive a 429 status code:
{\n \"detail\": \"Rate limit exceeded for minute window(s). Upgrade to Premium for increased limits (2,000/min, 10,000/hour, 100,000/day) at https://swarms.world/platform/account for just $99/month.\"\n}\n
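A common way to handle 429s is to retry with exponential backoff. The sketch below takes any `requests.post`-style callable so it stays transport-agnostic; the policy (5 attempts, doubling delay) is our choice, not one prescribed by the API:

```python
import time

def post_with_backoff(post, url, headers, payload,
                      max_attempts=5, sleep=time.sleep):
    # Retry on HTTP 429, doubling the wait each attempt (1s, 2s, 4s, ...).
    for attempt in range(max_attempts):
        resp = post(url, headers=headers, json=payload, timeout=60)
        if resp.status_code == 429:
            sleep(2 ** attempt)
            continue
        resp.raise_for_status()  # surface non-429 errors immediately
        return resp.json()
    raise RuntimeError("rate limit still exceeded after retries")
```

For example: `post_with_backoff(requests.post, f"{BASE_URL}/v1/agent/completions", headers, payload)`.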
"},{"location":"swarms_cloud/rate_limits/#best-practices","title":"Best Practices","text":"/v1/rate/limits
endpoint. Upgrade to Premium for significantly higher limits:
20x more requests per minute (2,000 vs 100)
200x more requests per hour (10,000 vs 50)
83x more requests per day (100,000 vs 1,200)
10x more tokens per agent (2M vs 200K)
Visit Swarms Platform Account to upgrade for just $99/month.
"},{"location":"swarms_cloud/rate_limits/#performance-considerations","title":"Performance Considerations","text":"A high-performance, production-ready Rust client for the Swarms API with comprehensive features for building multi-agent AI systems.
"},{"location":"swarms_cloud/rust_client/#features","title":"Features","text":"reqwest
and tokio
for maximum throughput. tracing
serde
Install swarms-rs
globally using cargo:
cargo install swarms-rs\n
"},{"location":"swarms_cloud/rust_client/#quick-start","title":"Quick Start","text":"use swarms_client::SwarmsClient;\n\n#[tokio::main]\nasync fn main() -> Result<(), Box<dyn std::error::Error>> {\n // Initialize the client with API key from environment\n let client = SwarmsClient::builder()\n .unwrap()\n .from_env()? // Loads API key from SWARMS_API_KEY environment variable\n .timeout(std::time::Duration::from_secs(60))\n .max_retries(3)\n .build()?;\n\n // Make a simple swarm completion request\n let response = client.swarm()\n .completion()\n .name(\"My First Swarm\")\n .swarm_type(SwarmType::Auto)\n .task(\"Analyze the pros and cons of quantum computing\")\n .agent(|agent| {\n agent\n .name(\"Researcher\")\n .description(\"Conducts in-depth research\")\n .model(\"gpt-4o\")\n })\n .send()\n .await?;\n\n println!(\"Swarm output: {}\", response.output);\n Ok(())\n}\n
"},{"location":"swarms_cloud/rust_client/#api-reference","title":"API Reference","text":""},{"location":"swarms_cloud/rust_client/#swarmsclient","title":"SwarmsClient","text":"The main client for interacting with the Swarms API.
"},{"location":"swarms_cloud/rust_client/#constructor-methods","title":"Constructor Methods","text":""},{"location":"swarms_cloud/rust_client/#swarmsclientbuilder","title":"SwarmsClient::builder()
","text":"Creates a new client builder for configuring the client.
Returns: Result<ClientBuilder, SwarmsError>
Example:
let client = SwarmsClient::builder()\n .unwrap()\n .api_key(\"your-api-key\")\n .timeout(Duration::from_secs(60))\n .build()?;\n
"},{"location":"swarms_cloud/rust_client/#swarmsclientwith_configconfig-clientconfig","title":"SwarmsClient::with_config(config: ClientConfig)
","text":"Creates a client with custom configuration.
Parameter Type Description config
ClientConfig
Client configuration settings Returns: Result<SwarmsClient, SwarmsError>
Example:
let config = ClientConfig {\n api_key: \"your-api-key\".to_string(),\n base_url: \"https://api.swarms.com/\".parse().unwrap(),\n timeout: Duration::from_secs(120),\n max_retries: 5,\n ..Default::default()\n};\n\nlet client = SwarmsClient::with_config(config)?;\n
"},{"location":"swarms_cloud/rust_client/#resource-access-methods","title":"Resource Access Methods","text":"Method Returns Description agent()
AgentResource
Access agent-related operations swarm()
SwarmResource
Access swarm-related operations models()
ModelsResource
Access model listing operations logs()
LogsResource
Access logging operations"},{"location":"swarms_cloud/rust_client/#cache-management-methods","title":"Cache Management Methods","text":"Method Parameters Returns Description clear_cache()
None ()
Clears all cached responses cache_stats()
None Option<(usize, usize)>
Returns (valid_entries, total_entries)"},{"location":"swarms_cloud/rust_client/#clientbuilder","title":"ClientBuilder","text":"Builder for configuring the Swarms client.
"},{"location":"swarms_cloud/rust_client/#configuration-methods","title":"Configuration Methods","text":"Method Parameters Returns Description new()
None ClientBuilder
Creates a new builder with defaults from_env()
None Result<ClientBuilder, SwarmsError>
Loads API key from environment api_key(key)
String
ClientBuilder
Sets the API key base_url(url)
&str
Result<ClientBuilder, SwarmsError>
Sets the base URL timeout(duration)
Duration
ClientBuilder
Sets request timeout max_retries(count)
usize
ClientBuilder
Sets maximum retry attempts retry_delay(duration)
Duration
ClientBuilder
Sets retry delay duration max_concurrent_requests(count)
usize
ClientBuilder
Sets concurrent request limit enable_cache(enabled)
bool
ClientBuilder
Enables/disables caching cache_ttl(duration)
Duration
ClientBuilder
Sets cache TTL build()
None Result<SwarmsClient, SwarmsError>
Builds the client Example:
let client = SwarmsClient::builder()\n .unwrap()\n .from_env()?\n .timeout(Duration::from_secs(120))\n .max_retries(5)\n .max_concurrent_requests(50)\n .enable_cache(true)\n .cache_ttl(Duration::from_secs(600))\n .build()?;\n
"},{"location":"swarms_cloud/rust_client/#swarmresource","title":"SwarmResource","text":"Resource for swarm-related operations.
"},{"location":"swarms_cloud/rust_client/#methods","title":"Methods","text":"Method Parameters Returns Description completion()
None SwarmCompletionBuilder
Creates a new swarm completion builder create(request)
SwarmSpec
Result<SwarmCompletionResponse, SwarmsError>
Creates a swarm completion directly create_batch(requests)
Vec<SwarmSpec>
Result<Vec<SwarmCompletionResponse>, SwarmsError>
Creates multiple swarm completions list_types()
None Result<SwarmTypesResponse, SwarmsError>
Lists available swarm types"},{"location":"swarms_cloud/rust_client/#swarmcompletionbuilder","title":"SwarmCompletionBuilder","text":"Builder for creating swarm completion requests.
"},{"location":"swarms_cloud/rust_client/#configuration-methods_1","title":"Configuration Methods","text":"Method Parameters Returns Description name(name)
String
SwarmCompletionBuilder
Sets the swarm name description(desc)
String
SwarmCompletionBuilder
Sets the swarm description swarm_type(type)
SwarmType
SwarmCompletionBuilder
Sets the swarm type task(task)
String
SwarmCompletionBuilder
Sets the main task agent(builder_fn)
Fn(AgentSpecBuilder) -> AgentSpecBuilder
SwarmCompletionBuilder
Adds an agent using a builder function max_loops(count)
u32
SwarmCompletionBuilder
Sets maximum execution loops service_tier(tier)
String
SwarmCompletionBuilder
Sets the service tier send()
None Result<SwarmCompletionResponse, SwarmsError>
Sends the request"},{"location":"swarms_cloud/rust_client/#agentresource","title":"AgentResource","text":"Resource for agent-related operations.
"},{"location":"swarms_cloud/rust_client/#methods_1","title":"Methods","text":"Method Parameters Returns Description completion()
None AgentCompletionBuilder
Creates a new agent completion builder create(request)
AgentCompletion
Result<AgentCompletionResponse, SwarmsError>
Creates an agent completion directly create_batch(requests)
Vec<AgentCompletion>
Result<Vec<AgentCompletionResponse>, SwarmsError>
Creates multiple agent completions"},{"location":"swarms_cloud/rust_client/#agentcompletionbuilder","title":"AgentCompletionBuilder","text":"Builder for creating agent completion requests.
"},{"location":"swarms_cloud/rust_client/#configuration-methods_2","title":"Configuration Methods","text":"Method Parameters Returns Description agent_name(name)
String
AgentCompletionBuilder
Sets the agent name task(task)
String
AgentCompletionBuilder
Sets the task model(model)
String
AgentCompletionBuilder
Sets the AI model description(desc)
String
AgentCompletionBuilder
Sets the agent description system_prompt(prompt)
String
AgentCompletionBuilder
Sets the system prompt temperature(temp)
f32
AgentCompletionBuilder
Sets the temperature (0.0-1.0) max_tokens(tokens)
u32
AgentCompletionBuilder
Sets maximum tokens max_loops(loops)
u32
AgentCompletionBuilder
Sets maximum loops send()
None Result<AgentCompletionResponse, SwarmsError>
Sends the request"},{"location":"swarms_cloud/rust_client/#swarmtype-enum","title":"SwarmType Enum","text":"Available swarm types for different execution patterns.
Variant Description AgentRearrange
Agents can be rearranged based on task requirements MixtureOfAgents
Combines multiple agents with different specializations SpreadSheetSwarm
Organized like a spreadsheet with structured data flow SequentialWorkflow
Agents execute in a sequential order ConcurrentWorkflow
Agents execute concurrently GroupChat
Agents interact in a group chat format MultiAgentRouter
Routes tasks between multiple agents AutoSwarmBuilder
Automatically builds swarm structure HiearchicalSwarm
Hierarchical organization of agents Auto
Automatically selects the best swarm type MajorityVoting
Agents vote on decisions Malt
Multi-Agent Language Tasks DeepResearchSwarm
Specialized for deep research tasks"},{"location":"swarms_cloud/rust_client/#detailed-examples","title":"Detailed Examples","text":""},{"location":"swarms_cloud/rust_client/#1-simple-agent-completion","title":"1. Simple Agent Completion","text":"use swarms_client::{SwarmsClient};\n\n#[tokio::main]\nasync fn main() -> Result<(), Box<dyn std::error::Error>> {\n let client = SwarmsClient::builder()\n .unwrap()\n .from_env()?\n .build()?;\n\n let response = client.agent()\n .completion()\n .agent_name(\"Content Writer\")\n .task(\"Write a blog post about sustainable technology\")\n .model(\"gpt-4o\")\n .temperature(0.7)\n .max_tokens(2000)\n .description(\"An expert content writer specializing in technology topics\")\n .system_prompt(\"You are a professional content writer with expertise in technology and sustainability. Write engaging, informative content that is well-structured and SEO-friendly.\")\n .send()\n .await?;\n\n println!(\"Agent Response: {}\", response.outputs);\n println!(\"Tokens Used: {}\", response.usage.total_tokens);\n\n Ok(())\n}\n
"},{"location":"swarms_cloud/rust_client/#2-multi-agent-research-swarm","title":"2. Multi-Agent Research Swarm","text":"use swarms_client::{SwarmsClient, SwarmType};\n\n#[tokio::main]\nasync fn main() -> Result<(), Box<dyn std::error::Error>> {\n let client = SwarmsClient::builder()\n .unwrap()\n .from_env()?\n .timeout(Duration::from_secs(300)) // 5 minutes for complex tasks\n .build()?;\n\n let response = client.swarm()\n .completion()\n .name(\"AI Research Swarm\")\n .description(\"A comprehensive research team analyzing AI trends and developments\")\n .swarm_type(SwarmType::SequentialWorkflow)\n .task(\"Conduct a comprehensive analysis of the current state of AI in healthcare, including recent developments, challenges, and future prospects\")\n\n // Data Collection Agent\n .agent(|agent| {\n agent\n .name(\"Data Collector\")\n .description(\"Gathers comprehensive data and recent developments\")\n .model(\"gpt-4o\")\n .system_prompt(\"You are a research data collector specializing in AI and healthcare. Your job is to gather the most recent and relevant information about AI applications in healthcare, including clinical trials, FDA approvals, and industry developments.\")\n .temperature(0.3)\n .max_tokens(3000)\n })\n\n // Technical Analyst\n .agent(|agent| {\n agent\n .name(\"Technical Analyst\")\n .description(\"Analyzes technical aspects and implementation details\")\n .model(\"gpt-4o\")\n .system_prompt(\"You are a technical analyst with deep expertise in AI/ML technologies. Analyze the technical feasibility, implementation challenges, and technological requirements of AI solutions in healthcare.\")\n .temperature(0.4)\n .max_tokens(3000)\n })\n\n // Market Analyst\n .agent(|agent| {\n agent\n .name(\"Market Analyst\")\n .description(\"Analyzes market trends, adoption rates, and economic factors\")\n .model(\"gpt-4o\")\n .system_prompt(\"You are a market research analyst specializing in healthcare technology markets. 
Analyze market size, growth projections, key players, investment trends, and economic factors affecting AI adoption in healthcare.\")\n .temperature(0.5)\n .max_tokens(3000)\n })\n\n // Regulatory Expert\n .agent(|agent| {\n agent\n .name(\"Regulatory Expert\")\n .description(\"Analyzes regulatory landscape and compliance requirements\")\n .model(\"gpt-4o\")\n .system_prompt(\"You are a regulatory affairs expert with deep knowledge of healthcare regulations and AI governance. Analyze regulatory challenges, compliance requirements, ethical considerations, and policy developments affecting AI in healthcare.\")\n .temperature(0.3)\n .max_tokens(3000)\n })\n\n // Report Synthesizer\n .agent(|agent| {\n agent\n .name(\"Report Synthesizer\")\n .description(\"Synthesizes all analyses into a comprehensive report\")\n .model(\"gpt-4o\")\n .system_prompt(\"You are an expert report writer and strategic analyst. Synthesize all the previous analyses into a comprehensive, well-structured executive report with clear insights, recommendations, and future outlook.\")\n .temperature(0.6)\n .max_tokens(4000)\n })\n\n .max_loops(1)\n .service_tier(\"premium\")\n .send()\n .await?;\n\n println!(\"Research Report:\");\n println!(\"{}\", response.output);\n println!(\"\\nSwarm executed with {} agents\", response.number_of_agents);\n\n Ok(())\n}\n
"},{"location":"swarms_cloud/rust_client/#3-financial-analysis-swarm-from-example","title":"3. Financial Analysis Swarm (From Example)","text":"use swarms_client::{SwarmsClient, SwarmType};\n\n#[tokio::main]\nasync fn main() -> Result<(), Box<dyn std::error::Error>> {\n let client = SwarmsClient::builder()\n .unwrap()\n .from_env()?\n .timeout(Duration::from_secs(120))\n .max_retries(3)\n .build()?;\n\n let response = client.swarm()\n .completion()\n .name(\"Financial Health Analysis Swarm\")\n .description(\"A sequential workflow of specialized financial agents analyzing company health\")\n .swarm_type(SwarmType::ConcurrentWorkflow)\n .task(\"Analyze the financial health of Apple Inc. (AAPL) based on their latest quarterly report\")\n\n // Financial Data Collector Agent\n .agent(|agent| {\n agent\n .name(\"Financial Data Collector\")\n .description(\"Specializes in gathering and organizing financial data from various sources\")\n .model(\"gpt-4o\")\n .system_prompt(\"You are a financial data collection specialist. Your role is to gather and organize relevant financial data, including revenue, expenses, profit margins, and key financial ratios. Present the data in a clear, structured format.\")\n .temperature(0.7)\n .max_tokens(2000)\n })\n\n // Financial Ratio Analyzer Agent\n .agent(|agent| {\n agent\n .name(\"Ratio Analyzer\")\n .description(\"Analyzes key financial ratios and metrics\")\n .model(\"gpt-4o\")\n .system_prompt(\"You are a financial ratio analysis expert. Your role is to calculate and interpret key financial ratios such as P/E ratio, debt-to-equity, current ratio, and return on equity. 
Provide insights on what these ratios indicate about the company's financial health.\")\n .temperature(0.7)\n .max_tokens(2000)\n })\n\n // Additional agents...\n .agent(|agent| {\n agent\n .name(\"Investment Advisor\")\n .description(\"Provides investment recommendations based on analysis\")\n .model(\"gpt-4o\")\n .system_prompt(\"You are an investment advisory specialist. Your role is to synthesize the analysis from previous agents and provide clear, actionable investment recommendations. Consider both short-term and long-term investment perspectives.\")\n .temperature(0.7)\n .max_tokens(2000)\n })\n\n .max_loops(1)\n .service_tier(\"standard\")\n .send()\n .await?;\n\n println!(\"Financial Analysis Results:\");\n println!(\"{}\", response.output);\n\n Ok(())\n}\n
"},{"location":"swarms_cloud/rust_client/#4-batch-processing","title":"4. Batch Processing","text":"use swarms_client::{SwarmsClient, AgentCompletion, AgentSpec};\n\n#[tokio::main]\nasync fn main() -> Result<(), Box<dyn std::error::Error>> {\n let client = SwarmsClient::builder()\n .unwrap()\n .from_env()?\n .max_concurrent_requests(20) // Allow more concurrent requests for batch\n .build()?;\n\n // Create multiple agent completion requests\n let requests = vec![\n AgentCompletion {\n agent_config: AgentSpec {\n agent_name: \"Content Creator 1\".to_string(),\n model_name: \"gpt-4o-mini\".to_string(),\n temperature: 0.7,\n max_tokens: 1000,\n ..Default::default()\n },\n task: \"Write a social media post about renewable energy\".to_string(),\n history: None,\n },\n AgentCompletion {\n agent_config: AgentSpec {\n agent_name: \"Content Creator 2\".to_string(),\n model_name: \"gpt-4o-mini\".to_string(),\n temperature: 0.8,\n max_tokens: 1000,\n ..Default::default()\n },\n task: \"Write a social media post about electric vehicles\".to_string(),\n history: None,\n },\n // Add more requests...\n ];\n\n // Process all requests in batch\n let responses = client.agent()\n .create_batch(requests)\n .await?;\n\n for (i, response) in responses.iter().enumerate() {\n println!(\"Response {}: {}\", i + 1, response.outputs);\n println!(\"Tokens used: {}\\n\", response.usage.total_tokens);\n }\n\n Ok(())\n}\n
"},{"location":"swarms_cloud/rust_client/#5-custom-configuration-with-error-handling","title":"5. Custom Configuration with Error Handling","text":"use swarms_client::{SwarmsClient, SwarmsError, ClientConfig};\nuse std::time::Duration;\n\n#[tokio::main]\nasync fn main() -> Result<(), Box<dyn std::error::Error>> {\n // Custom configuration for production use\n let config = ClientConfig {\n api_key: std::env::var(\"SWARMS_API_KEY\")?,\n base_url: \"https://swarms-api-285321057562.us-east1.run.app/\".parse()?,\n timeout: Duration::from_secs(180),\n max_retries: 5,\n retry_delay: Duration::from_secs(2),\n max_concurrent_requests: 50,\n circuit_breaker_threshold: 10,\n circuit_breaker_timeout: Duration::from_secs(120),\n enable_cache: true,\n cache_ttl: Duration::from_secs(600),\n };\n\n let client = SwarmsClient::with_config(config)?;\n\n // Example with comprehensive error handling\n match client.swarm()\n .completion()\n .name(\"Production Swarm\")\n .swarm_type(SwarmType::Auto)\n .task(\"Analyze market trends for Q4 2024\")\n .agent(|agent| {\n agent\n .name(\"Market Analyst\")\n .model(\"gpt-4o\")\n .temperature(0.5)\n })\n .send()\n .await\n {\n Ok(response) => {\n println!(\"Success! Job ID: {}\", response.job_id);\n println!(\"Output: {}\", response.output);\n },\n Err(SwarmsError::Authentication { message, .. }) => {\n eprintln!(\"Authentication error: {}\", message);\n },\n Err(SwarmsError::RateLimit { message, .. }) => {\n eprintln!(\"Rate limit exceeded: {}\", message);\n // Implement backoff strategy\n },\n Err(SwarmsError::InsufficientCredits { message, .. }) => {\n eprintln!(\"Insufficient credits: {}\", message);\n },\n Err(SwarmsError::CircuitBreakerOpen) => {\n eprintln!(\"Circuit breaker is open - service temporarily unavailable\");\n },\n Err(e) => {\n eprintln!(\"Other error: {}\", e);\n }\n }\n\n Ok(())\n}\n
"},{"location":"swarms_cloud/rust_client/#6-monitoring-and-observability","title":"6. Monitoring and Observability","text":"use swarms_client::SwarmsClient;\nuse tracing::{info, warn, error};\n\n#[tokio::main]\nasync fn main() -> Result<(), Box<dyn std::error::Error>> {\n // Initialize tracing for observability\n tracing_subscriber::init();\n\n let client = SwarmsClient::builder()\n .unwrap()\n .from_env()?\n .enable_cache(true)\n .build()?;\n\n // Monitor cache performance\n if let Some((valid, total)) = client.cache_stats() {\n info!(\"Cache stats: {}/{} entries valid\", valid, total);\n }\n\n // Make request with monitoring\n let start = std::time::Instant::now();\n\n let response = client.swarm()\n .completion()\n .name(\"Monitored Swarm\")\n .task(\"Analyze system performance metrics\")\n .agent(|agent| {\n agent\n .name(\"Performance Analyst\")\n .model(\"gpt-4o-mini\")\n })\n .send()\n .await?;\n\n let duration = start.elapsed();\n info!(\"Request completed in {:?}\", duration);\n\n if duration > Duration::from_secs(30) {\n warn!(\"Request took longer than expected: {:?}\", duration);\n }\n\n // Clear cache periodically in production\n client.clear_cache();\n\n Ok(())\n}\n
"},{"location":"swarms_cloud/rust_client/#error-handling","title":"Error Handling","text":"The client provides comprehensive error handling with specific error types:
"},{"location":"swarms_cloud/rust_client/#swarmserror-types","title":"SwarmsError Types","text":"Error Type Description Recommended Action Authentication
Invalid API key or authentication failure Check API key and permissions RateLimit
Rate limit exceeded Implement exponential backoff InvalidRequest
Malformed request parameters Validate input parameters InsufficientCredits
Not enough credits for operation Check account balance Api
General API error Check API status and retry Network
Network connectivity issues Check internet connection Timeout
Request timeout Increase timeout or retry CircuitBreakerOpen
Circuit breaker preventing requests Wait for recovery period Serialization
JSON serialization/deserialization error Check data format"},{"location":"swarms_cloud/rust_client/#error-handling-best-practices","title":"Error Handling Best Practices","text":"use swarms_client::{SwarmsClient, SwarmsError};\n\nasync fn handle_swarm_request(client: &SwarmsClient, task: &str) -> Result<String, SwarmsError> {\n match client.swarm()\n .completion()\n .task(task)\n .agent(|agent| agent.name(\"Worker\").model(\"gpt-4o-mini\"))\n .send()\n .await\n {\n Ok(response) => Ok(response.output.to_string()),\n Err(SwarmsError::RateLimit { .. }) => {\n // Implement exponential backoff\n tokio::time::sleep(Duration::from_secs(5)).await;\n Err(SwarmsError::RateLimit {\n message: \"Rate limited - should retry\".to_string(),\n status: Some(429),\n request_id: None,\n })\n },\n Err(e) => Err(e),\n }\n}\n
"},{"location":"swarms_cloud/rust_client/#performance-features","title":"Performance Features","text":""},{"location":"swarms_cloud/rust_client/#connection-pooling","title":"Connection Pooling","text":"The client automatically manages HTTP connection pooling for optimal performance:
// Connections are automatically pooled and reused\nlet client = SwarmsClient::builder()\n .unwrap()\n .from_env()?\n .max_concurrent_requests(100) // Allow up to 100 concurrent requests\n .build()?;\n
"},{"location":"swarms_cloud/rust_client/#caching","title":"Caching","text":"Intelligent caching reduces redundant API calls:
let client = SwarmsClient::builder()\n .unwrap()\n .from_env()?\n .enable_cache(true)\n .cache_ttl(Duration::from_secs(300)) // 5-minute TTL\n .build()?;\n\n// GET requests are automatically cached\nlet models = client.models().list().await?; // First call hits API\nlet models_cached = client.models().list().await?; // Second call uses cache\n
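Under the hood, a TTL cache of this kind just timestamps each entry and evicts entries older than the TTL on lookup. A minimal sketch of the idea (plain Python; this is not the client's actual implementation, and the cached key/value shown are invented for illustration):

```python
import time

class TTLCache:
    """Tiny time-to-live cache: entries expire `ttl` seconds after insertion."""
    def __init__(self, ttl):
        self.ttl = ttl
        self._store = {}  # key -> (value, inserted_at)

    def set(self, key, value):
        self._store[key] = (value, time.monotonic())

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, inserted_at = entry
        if time.monotonic() - inserted_at > self.ttl:
            del self._store[key]  # expired: evict and report a miss
            return None
        return value

cache = TTLCache(ttl=300)  # 5-minute TTL
cache.set("GET /v1/models/available", ["gpt-4o", "gpt-4o-mini"])
print(cache.get("GET /v1/models/available"))  # ['gpt-4o', 'gpt-4o-mini'] — hit within the TTL
```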
"},{"location":"swarms_cloud/rust_client/#circuit-breaker","title":"Circuit Breaker","text":"Automatic failure detection and recovery:
let client = SwarmsClient::builder()\n .unwrap()\n .from_env()?\n .build()?;\n\n// Circuit breaker automatically opens after 5 failures\n// and recovers after 60 seconds\n
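The pattern behind this is simple: count consecutive failures, refuse calls once the threshold is crossed, and allow a trial call again after the recovery timeout. A minimal sketch under the stated defaults of 5 failures and 60 seconds (plain Python, illustrative only — not the client's internals):

```python
import time

class CircuitBreaker:
    def __init__(self, threshold=5, recovery=60.0):
        self.threshold = threshold  # consecutive failures before opening
        self.recovery = recovery    # seconds before a trial call is allowed
        self.failures = 0
        self.opened_at = None

    def allow(self):
        if self.opened_at is None:
            return True
        # Open: only allow a trial request once the recovery window has passed.
        return time.monotonic() - self.opened_at >= self.recovery

    def record_failure(self):
        self.failures += 1
        if self.failures >= self.threshold:
            self.opened_at = time.monotonic()  # trip the breaker

    def record_success(self):
        self.failures = 0
        self.opened_at = None  # close the breaker again

cb = CircuitBreaker()
for _ in range(5):
    cb.record_failure()
print(cb.allow())  # False: breaker opens after 5 consecutive failures
```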
"},{"location":"swarms_cloud/rust_client/#configuration-reference","title":"Configuration Reference","text":""},{"location":"swarms_cloud/rust_client/#clientconfig-structure","title":"ClientConfig Structure","text":"Field Type Default Description api_key
String
\"\"
Swarms API key base_url
Url
https://swarms-api-285321057562.us-east1.run.app/
API base URL timeout
Duration
60s
Request timeout max_retries
usize
3
Maximum retry attempts retry_delay
Duration
1s
Base retry delay max_concurrent_requests
usize
100
Concurrent request limit circuit_breaker_threshold
usize
5
Failure threshold for circuit breaker circuit_breaker_timeout
Duration
60s
Circuit breaker recovery time enable_cache
bool
true
Enable response caching cache_ttl
Duration
300s
Cache time-to-live"},{"location":"swarms_cloud/rust_client/#environment-variables","title":"Environment Variables","text":"Variable Description Example SWARMS_API_KEY
Your Swarms API key sk-xxx...
SWARMS_BASE_URL
Custom API base URL (optional) https://api.custom.com/
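A from_env()-style loader boils down to reading these two variables and falling back to the default base URL when SWARMS_BASE_URL is unset. A sketch of that behavior (plain Python; the variable names come from the table above and the default URL from the configuration reference — the loader itself is an illustration, not the client's code):

```python
import os

DEFAULT_BASE_URL = "https://swarms-api-285321057562.us-east1.run.app/"

def load_config():
    api_key = os.environ.get("SWARMS_API_KEY")
    if not api_key:
        raise RuntimeError("SWARMS_API_KEY is not set")  # fail fast, as from_env() does
    base_url = os.environ.get("SWARMS_BASE_URL", DEFAULT_BASE_URL)
    return {"api_key": api_key, "base_url": base_url}

os.environ.pop("SWARMS_BASE_URL", None)        # ensure the fallback path for this demo
os.environ["SWARMS_API_KEY"] = "sk-example"    # demonstration value only
print(load_config()["base_url"])  # https://swarms-api-285321057562.us-east1.run.app/
```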
"},{"location":"swarms_cloud/rust_client/#testing","title":"Testing","text":"Run the test suite:
cargo test\n
Run specific tests:
cargo test test_cache\ncargo test test_circuit_breaker\n
"},{"location":"swarms_cloud/rust_client/#contributing","title":"Contributing","text":"This project is licensed under the MIT License - see the LICENSE file for details.
"},{"location":"swarms_cloud/subscription_tiers/","title":"Swarms Cloud Subscription Tiers","text":"Overview
Choose the perfect plan for your agent infrastructure needs. All plans include our core features with additional benefits as you scale up.
"},{"location":"swarms_cloud/subscription_tiers/#pricing-plans","title":"Pricing Plans","text":""},{"location":"swarms_cloud/subscription_tiers/#free-tier","title":"Free Tier","text":"Free
$0/year
Perfect for getting started with AI development.
Get Started
What's Included:
Sign up Bonus!
Basic Access
Pay-Per-Use Pricing
Community Support
Standard Processing Speed
Premium
Monthly $100/month
Yearly $1,020/year (Save 15% on annual billing)
Subscribe Now
Everything in Free, plus:
Full Access to Explorer and Agents
Access to Premium Multi-Modality Models
Priority Access to Swarms
High-Performance Infrastructure
Exclusive Webinars and Tutorials
Priority Support
Enhanced Security Features
Early Access to New Models and Features
Enterprise
Contact for more Information
Book a Call
Everything in Premium, plus:
High-Performance Infrastructure
Batch API
Early Access to New Swarms
Dedicated 24/7 Support
Custom Solutions Engineering
Advanced Security Features
Onsite Training and Onboarding
Custom Model Training
Priority Support
Pay-Per-Use Pricing
Enterprise Telemetry Platform
Regular Check-In Strategy Sessions
Rate Limit Increases
Need Help?
Each multi-agent architecture type is designed for specific use cases and can be combined to create powerful multi-agent systems. Here's a comprehensive overview of each available swarm:
Swarm Type Description Learn More AgentRearrange Dynamically reorganizes agents to optimize task performance and efficiency. Optimizes agent performance by dynamically adjusting their roles and positions within the workflow. This architecture is particularly useful when the effectiveness of agents depends on their sequence or arrangement. Learn More MixtureOfAgents Creates diverse teams of specialized agents, each bringing unique capabilities to solve complex problems. Each agent contributes unique skills to achieve the overall goal, making it excel at tasks requiring multiple types of expertise or processing. Learn More SpreadSheetSwarm Provides a structured approach to data management and operations, making it ideal for tasks involving data analysis, transformation, and systematic processing in a spreadsheet-like structure. Learn More SequentialWorkflow Ensures strict process control by executing tasks in a predefined order. Perfect for workflows where each step depends on the completion of previous steps. Learn More ConcurrentWorkflow Maximizes efficiency by running independent tasks in parallel, significantly reducing overall processing time for complex operations. Ideal for independent tasks that can be processed simultaneously. Learn More GroupChat Enables dynamic collaboration between agents through a chat-based interface, facilitating real-time information sharing and decision-making. Learn More MultiAgentRouter Acts as an intelligent task dispatcher, ensuring optimal distribution of work across available agents based on their capabilities and current workload. Learn More AutoSwarmBuilder Simplifies swarm creation by automatically configuring agent architectures based on task requirements and performance metrics. Learn More HiearchicalSwarm Implements a structured approach to task management, with clear lines of authority and delegation across multiple agent levels. 
Learn More auto Provides intelligent swarm selection based on context, automatically choosing the most effective architecture for given tasks. Learn More MajorityVoting Implements robust decision-making through consensus, particularly useful for tasks requiring collective intelligence or verification. Learn More MALT Specialized framework for language-based tasks, optimizing agent collaboration for complex language processing operations. Learn More"},{"location":"swarms_cloud/swarm_types/#learn-more","title":"Learn More","text":"To learn more about Swarms architecture and how different swarm types work together, visit our comprehensive guides:
Introduction to Multi-Agent Architectures
How to Choose the Right Multi-Agent Architecture
Framework Architecture Overview
Building Custom Swarms
Enterprise-Grade Agent Swarm Management API
Base URL: https://api.swarms.world
or https://swarms-api-285321057562.us-east1.run.app
API Key Management: https://swarms.world/platform/api-keys
"},{"location":"swarms_cloud/swarms_api/#overview","title":"Overview","text":"The Swarms API provides a robust, scalable infrastructure for deploying and managing intelligent agent swarms in the cloud. This enterprise-grade API enables organizations to create, execute, and orchestrate sophisticated AI agent workflows without managing the underlying infrastructure.
Key capabilities include:
Intelligent Swarm Management: Create and execute swarms of specialized AI agents that collaborate to solve complex tasks
Automatic Agent Generation: Dynamically create optimized agents based on task requirements
Multiple Swarm Architectures: Choose from various swarm patterns to match your specific workflow needs
Comprehensive Logging: Track and analyze all API interactions
Cost Management: Predictable, transparent pricing with optimized resource utilization
Enterprise Security: Full API key authentication and management
Swarms API is designed for production use cases requiring sophisticated AI orchestration, with applications in finance, healthcare, legal, research, and other domains where complex reasoning and multi-agent collaboration are needed.
"},{"location":"swarms_cloud/swarms_api/#authentication","title":"Authentication","text":"All API requests require a valid API key, which must be included in the header of each request:
x-api-key: your_api_key_here\n
API keys can be obtained and managed at https://swarms.world/platform/api-keys.
"},{"location":"swarms_cloud/swarms_api/#api-reference","title":"API Reference","text":""},{"location":"swarms_cloud/swarms_api/#endpoints-summary","title":"Endpoints Summary","text":"Endpoint Method Description /health
GET Simple health check endpoint /v1/swarm/completions
POST Run a swarm with specified configuration /v1/swarm/batch/completions
POST Run multiple swarms in batch mode /v1/swarm/logs
GET Retrieve API request logs /v1/swarms/available
GET Get all available swarms as a list of strings /v1/models/available
GET Get all available models as a list of strings /v1/agent/completions
POST Run a single agent with specified configuration /v1/agent/batch/completions
POST Run a batch of individual agent completions"},{"location":"swarms_cloud/swarms_api/#swarmtype-reference","title":"SwarmType Reference","text":"The swarm_type
parameter defines the architecture and collaboration pattern of the agent swarm:
AgentRearrange
Dynamically reorganizes the workflow between agents based on task requirements MixtureOfAgents
Combines multiple agent types to tackle diverse aspects of a problem SpreadSheetSwarm
Specialized for spreadsheet data analysis and manipulation SequentialWorkflow
Agents work in a predefined sequence, each handling specific subtasks ConcurrentWorkflow
Multiple agents work simultaneously on different aspects of the task GroupChat
Agents collaborate in a discussion format to solve problems MultiAgentRouter
Routes subtasks to specialized agents based on their capabilities AutoSwarmBuilder
Automatically designs and builds an optimal swarm based on the task HiearchicalSwarm
Organizes agents in a hierarchical structure with managers and workers MajorityVoting
Uses a consensus mechanism where multiple agents vote on the best solution auto
Automatically selects the most appropriate swarm type for the given task"},{"location":"swarms_cloud/swarms_api/#data-models","title":"Data Models","text":""},{"location":"swarms_cloud/swarms_api/#swarmspec","title":"SwarmSpec","text":"The SwarmSpec
model defines the configuration of a swarm.
The AgentSpec
model defines the configuration of an individual agent.
*Required if agents are manually specified; not required if using auto-generated agents
"},{"location":"swarms_cloud/swarms_api/#endpoint-details","title":"Endpoint Details","text":""},{"location":"swarms_cloud/swarms_api/#health-check","title":"Health Check","text":"Check if the API service is available and functioning correctly.
Endpoint: /health
Method: GET Rate Limit: 100 requests per 60 seconds
curl -X GET \"https://api.swarms.world/health\" \\\n -H \"x-api-key: your_api_key_here\"\n
import requests\n\nAPI_BASE_URL = \"https://api.swarms.world\"\nAPI_KEY = \"your_api_key_here\"\n\nheaders = {\n \"x-api-key\": API_KEY\n}\n\nresponse = requests.get(f\"{API_BASE_URL}/health\", headers=headers)\n\nif response.status_code == 200:\n print(\"API is healthy:\", response.json())\nelse:\n print(f\"Error: {response.status_code}\")\n
const API_BASE_URL = \"https://api.swarms.world\";\nconst API_KEY = \"your_api_key_here\";\n\nasync function checkHealth(): Promise<void> {\n try {\n const response = await fetch(`${API_BASE_URL}/health`, {\n method: 'GET',\n headers: {\n 'x-api-key': API_KEY\n }\n });\n\n if (response.ok) {\n const data = await response.json();\n console.log(\"API is healthy:\", data);\n } else {\n console.error(`Error: ${response.status}`);\n }\n } catch (error) {\n console.error(\"Request failed:\", error);\n }\n}\n\ncheckHealth();\n
Example Response:
{\n \"status\": \"ok\"\n}\n
"},{"location":"swarms_cloud/swarms_api/#run-swarm","title":"Run Swarm","text":"Run a swarm with the specified configuration to complete a task.
Endpoint: /v1/swarm/completions
Method: POST Rate Limit: 100 requests per 60 seconds
Request Parameters:
Field Type Description Required name string Identifier for the swarm No description string Description of the swarm's purpose No agents Array List of agent specifications No max_loops integer Maximum number of execution loops No swarm_type SwarmType Architecture of the swarm No rearrange_flow string Instructions for rearranging task flow No task string The main task for the swarm to accomplish Yes img string Optional image URL for the swarm No return_history boolean Whether to return execution history No rules string Guidelines for swarm behavior No Shell (curl)Python (requests)TypeScript (fetch)curl -X POST \"https://api.swarms.world/v1/swarm/completions\" \\\n -H \"x-api-key: $SWARMS_API_KEY\" \\\n -H \"Content-Type: application/json\" \\\n -d '{\n \"name\": \"Financial Analysis Swarm\",\n \"description\": \"Market analysis swarm\",\n \"agents\": [\n {\n \"agent_name\": \"Market Analyst\",\n \"description\": \"Analyzes market trends\",\n \"system_prompt\": \"You are a financial analyst expert.\",\n \"model_name\": \"openai/gpt-4o\",\n \"role\": \"worker\",\n \"max_loops\": 1,\n \"max_tokens\": 8192,\n \"temperature\": 0.5,\n \"auto_generate_prompt\": false\n },\n {\n \"agent_name\": \"Economic Forecaster\",\n \"description\": \"Predicts economic trends\",\n \"system_prompt\": \"You are an expert in economic forecasting.\",\n \"model_name\": \"gpt-4o\",\n \"role\": \"worker\",\n \"max_loops\": 1,\n \"max_tokens\": 8192,\n \"temperature\": 0.5,\n \"auto_generate_prompt\": false\n }\n ],\n \"max_loops\": 1,\n \"swarm_type\": \"ConcurrentWorkflow\",\n \"task\": \"What are the best etfs and index funds for ai and tech?\",\n \"output_type\": \"dict\"\n }'\n
import requests\nimport json\n\nAPI_BASE_URL = \"https://api.swarms.world\"\nAPI_KEY = \"your_api_key_here\"\n\nheaders = {\n \"x-api-key\": API_KEY,\n \"Content-Type\": \"application/json\"\n}\n\nswarm_config = {\n \"name\": \"Financial Analysis Swarm\",\n \"description\": \"Market analysis swarm\",\n \"agents\": [\n {\n \"agent_name\": \"Market Analyst\",\n \"description\": \"Analyzes market trends\",\n \"system_prompt\": \"You are a financial analyst expert.\",\n \"model_name\": \"openai/gpt-4o\",\n \"role\": \"worker\",\n \"max_loops\": 1,\n \"max_tokens\": 8192,\n \"temperature\": 0.5,\n \"auto_generate_prompt\": False\n },\n {\n \"agent_name\": \"Economic Forecaster\",\n \"description\": \"Predicts economic trends\",\n \"system_prompt\": \"You are an expert in economic forecasting.\",\n \"model_name\": \"gpt-4o\",\n \"role\": \"worker\",\n \"max_loops\": 1,\n \"max_tokens\": 8192,\n \"temperature\": 0.5,\n \"auto_generate_prompt\": False\n }\n ],\n \"max_loops\": 1,\n \"swarm_type\": \"ConcurrentWorkflow\",\n \"task\": \"What are the best etfs and index funds for ai and tech?\",\n \"output_type\": \"dict\"\n}\n\nresponse = requests.post(\n f\"{API_BASE_URL}/v1/swarm/completions\",\n headers=headers,\n json=swarm_config\n)\n\nif response.status_code == 200:\n result = response.json()\n print(\"Swarm completed successfully!\")\n print(f\"Cost: ${result['metadata']['billing_info']['total_cost']}\")\n print(f\"Execution time: {result['metadata']['execution_time_seconds']} seconds\")\nelse:\n print(f\"Error: {response.status_code} - {response.text}\")\n
interface AgentSpec {\n agent_name: string;\n description: string;\n system_prompt: string;\n model_name: string;\n role: string;\n max_loops: number;\n max_tokens: number;\n temperature: number;\n auto_generate_prompt: boolean;\n}\n\ninterface SwarmConfig {\n name: string;\n description: string;\n agents: AgentSpec[];\n max_loops: number;\n swarm_type: string;\n task: string;\n output_type: string;\n}\n\nconst API_BASE_URL = \"https://api.swarms.world\";\nconst API_KEY = \"your_api_key_here\";\n\nasync function runSwarm(): Promise<void> {\n const swarmConfig: SwarmConfig = {\n name: \"Financial Analysis Swarm\",\n description: \"Market analysis swarm\",\n agents: [\n {\n agent_name: \"Market Analyst\",\n description: \"Analyzes market trends\",\n system_prompt: \"You are a financial analyst expert.\",\n model_name: \"openai/gpt-4o\",\n role: \"worker\",\n max_loops: 1,\n max_tokens: 8192,\n temperature: 0.5,\n auto_generate_prompt: false\n },\n {\n agent_name: \"Economic Forecaster\",\n description: \"Predicts economic trends\",\n system_prompt: \"You are an expert in economic forecasting.\",\n model_name: \"gpt-4o\",\n role: \"worker\",\n max_loops: 1,\n max_tokens: 8192,\n temperature: 0.5,\n auto_generate_prompt: false\n }\n ],\n max_loops: 1,\n swarm_type: \"ConcurrentWorkflow\",\n task: \"What are the best etfs and index funds for ai and tech?\",\n output_type: \"dict\"\n };\n\n try {\n const response = await fetch(`${API_BASE_URL}/v1/swarm/completions`, {\n method: 'POST',\n headers: {\n 'x-api-key': API_KEY,\n 'Content-Type': 'application/json'\n },\n body: JSON.stringify(swarmConfig)\n });\n\n if (response.ok) {\n const result = await response.json();\n console.log(\"Swarm completed successfully!\");\n console.log(`Cost: $${result.metadata.billing_info.total_cost}`);\n console.log(`Execution time: ${result.metadata.execution_time_seconds} seconds`);\n } else {\n console.error(`Error: ${response.status} - ${await response.text()}`);\n }\n } catch (error) 
{\n console.error(\"Request failed:\", error);\n }\n}\n\nrunSwarm();\n
Example Response:
{\n \"status\": \"success\",\n \"swarm_name\": \"financial-analysis-swarm\",\n \"description\": \"Analyzes financial data for risk assessment\",\n \"swarm_type\": \"SequentialWorkflow\",\n \"task\": \"Analyze the provided quarterly financials for Company XYZ and identify potential risk factors. Summarize key insights and provide recommendations for risk mitigation.\",\n \"output\": {\n \"financial_analysis\": {\n \"risk_factors\": [...],\n \"key_insights\": [...],\n \"recommendations\": [...]\n }\n },\n \"metadata\": {\n \"max_loops\": 2,\n \"num_agents\": 3,\n \"execution_time_seconds\": 12.45,\n \"completion_time\": 1709563245.789,\n \"billing_info\": {\n \"cost_breakdown\": {\n \"agent_cost\": 0.03,\n \"input_token_cost\": 0.002134,\n \"output_token_cost\": 0.006789,\n \"token_counts\": {\n \"total_input_tokens\": 1578,\n \"total_output_tokens\": 3456,\n \"total_tokens\": 5034,\n \"per_agent\": {...}\n },\n \"num_agents\": 3,\n \"execution_time_seconds\": 12.45\n },\n \"total_cost\": 0.038923\n }\n }\n}\n
"},{"location":"swarms_cloud/swarms_api/#run-batch-completions","title":"Run Batch Completions","text":"Run multiple swarms as a batch operation.
Endpoint: /v1/swarm/batch/completions
Method: POST Rate Limit: 100 requests per 60 seconds
Request Parameters:
Field Type Description Required swarms Array List of swarm specifications Yes Shell (curl)Python (requests)TypeScript (fetch)curl -X POST \"https://api.swarms.world/v1/swarm/batch/completions\" \\\n -H \"x-api-key: $SWARMS_API_KEY\" \\\n -H \"Content-Type: application/json\" \\\n -d '[\n {\n \"name\": \"Batch Swarm 1\",\n \"description\": \"First swarm in the batch\",\n \"agents\": [\n {\n \"agent_name\": \"Research Agent\",\n \"description\": \"Conducts research\",\n \"system_prompt\": \"You are a research assistant.\",\n \"model_name\": \"gpt-4o\",\n \"role\": \"worker\",\n \"max_loops\": 1\n },\n {\n \"agent_name\": \"Analysis Agent\",\n \"description\": \"Analyzes data\",\n \"system_prompt\": \"You are a data analyst.\",\n \"model_name\": \"gpt-4o\",\n \"role\": \"worker\",\n \"max_loops\": 1\n }\n ],\n \"max_loops\": 1,\n \"swarm_type\": \"SequentialWorkflow\",\n \"task\": \"Research AI advancements.\"\n },\n {\n \"name\": \"Batch Swarm 2\",\n \"description\": \"Second swarm in the batch\",\n \"agents\": [\n {\n \"agent_name\": \"Writing Agent\",\n \"description\": \"Writes content\",\n \"system_prompt\": \"You are a content writer.\",\n \"model_name\": \"gpt-4o\",\n \"role\": \"worker\",\n \"max_loops\": 1\n },\n {\n \"agent_name\": \"Editing Agent\",\n \"description\": \"Edits content\",\n \"system_prompt\": \"You are an editor.\",\n \"model_name\": \"gpt-4o\",\n \"role\": \"worker\",\n \"max_loops\": 1\n }\n ],\n \"max_loops\": 1,\n \"swarm_type\": \"SequentialWorkflow\",\n \"task\": \"Write a summary of AI research.\"\n }\n ]'\n
import requests\nimport json\n\nAPI_BASE_URL = \"https://api.swarms.world\"\nAPI_KEY = \"your_api_key_here\"\n\nheaders = {\n \"x-api-key\": API_KEY,\n \"Content-Type\": \"application/json\"\n}\n\nbatch_swarms = [\n {\n \"name\": \"Batch Swarm 1\",\n \"description\": \"First swarm in the batch\",\n \"agents\": [\n {\n \"agent_name\": \"Research Agent\",\n \"description\": \"Conducts research\",\n \"system_prompt\": \"You are a research assistant.\",\n \"model_name\": \"gpt-4o\",\n \"role\": \"worker\",\n \"max_loops\": 1\n },\n {\n \"agent_name\": \"Analysis Agent\",\n \"description\": \"Analyzes data\",\n \"system_prompt\": \"You are a data analyst.\",\n \"model_name\": \"gpt-4o\",\n \"role\": \"worker\",\n \"max_loops\": 1\n }\n ],\n \"max_loops\": 1,\n \"swarm_type\": \"SequentialWorkflow\",\n \"task\": \"Research AI advancements.\"\n },\n {\n \"name\": \"Batch Swarm 2\",\n \"description\": \"Second swarm in the batch\",\n \"agents\": [\n {\n \"agent_name\": \"Writing Agent\",\n \"description\": \"Writes content\",\n \"system_prompt\": \"You are a content writer.\",\n \"model_name\": \"gpt-4o\",\n \"role\": \"worker\",\n \"max_loops\": 1\n },\n {\n \"agent_name\": \"Editing Agent\",\n \"description\": \"Edits content\",\n \"system_prompt\": \"You are an editor.\",\n \"model_name\": \"gpt-4o\",\n \"role\": \"worker\",\n \"max_loops\": 1\n }\n ],\n \"max_loops\": 1,\n \"swarm_type\": \"SequentialWorkflow\",\n \"task\": \"Write a summary of AI research.\"\n }\n]\n\nresponse = requests.post(\n f\"{API_BASE_URL}/v1/swarm/batch/completions\",\n headers=headers,\n json=batch_swarms\n)\n\nif response.status_code == 200:\n results = response.json()\n print(f\"Batch completed with {len(results)} swarms\")\n for i, result in enumerate(results):\n print(f\"Swarm {i+1}: {result['swarm_name']} - {result['status']}\")\nelse:\n print(f\"Error: {response.status_code} - {response.text}\")\n
interface AgentSpec {\n agent_name: string;\n description: string;\n system_prompt: string;\n model_name: string;\n role: string;\n max_loops: number;\n}\n\ninterface SwarmSpec {\n name: string;\n description: string;\n agents: AgentSpec[];\n max_loops: number;\n swarm_type: string;\n task: string;\n}\n\nconst API_BASE_URL = \"https://api.swarms.world\";\nconst API_KEY = \"your_api_key_here\";\n\nasync function runBatchSwarms(): Promise<void> {\n const batchSwarms: SwarmSpec[] = [\n {\n name: \"Batch Swarm 1\",\n description: \"First swarm in the batch\",\n agents: [\n {\n agent_name: \"Research Agent\",\n description: \"Conducts research\",\n system_prompt: \"You are a research assistant.\",\n model_name: \"gpt-4o\",\n role: \"worker\",\n max_loops: 1\n },\n {\n agent_name: \"Analysis Agent\",\n description: \"Analyzes data\",\n system_prompt: \"You are a data analyst.\",\n model_name: \"gpt-4o\",\n role: \"worker\",\n max_loops: 1\n }\n ],\n max_loops: 1,\n swarm_type: \"SequentialWorkflow\",\n task: \"Research AI advancements.\"\n },\n {\n name: \"Batch Swarm 2\",\n description: \"Second swarm in the batch\",\n agents: [\n {\n agent_name: \"Writing Agent\",\n description: \"Writes content\",\n system_prompt: \"You are a content writer.\",\n model_name: \"gpt-4o\",\n role: \"worker\",\n max_loops: 1\n },\n {\n agent_name: \"Editing Agent\",\n description: \"Edits content\",\n system_prompt: \"You are an editor.\",\n model_name: \"gpt-4o\",\n role: \"worker\",\n max_loops: 1\n }\n ],\n max_loops: 1,\n swarm_type: \"SequentialWorkflow\",\n task: \"Write a summary of AI research.\"\n }\n ];\n\n try {\n const response = await fetch(`${API_BASE_URL}/v1/swarm/batch/completions`, {\n method: 'POST',\n headers: {\n 'x-api-key': API_KEY,\n 'Content-Type': 'application/json'\n },\n body: JSON.stringify(batchSwarms)\n });\n\n if (response.ok) {\n const results = await response.json();\n console.log(`Batch completed with ${results.length} swarms`);\n results.forEach((result: 
any, index: number) => {\n console.log(`Swarm ${index + 1}: ${result.swarm_name} - ${result.status}`);\n });\n } else {\n console.error(`Error: ${response.status} - ${await response.text()}`);\n }\n } catch (error) {\n console.error(\"Request failed:\", error);\n }\n}\n\nrunBatchSwarms();\n
Example Response:
[\n {\n \"status\": \"success\",\n \"swarm_name\": \"risk-analysis\",\n \"task\": \"Analyze risk factors for investment portfolio\",\n \"output\": {...},\n \"metadata\": {...}\n },\n {\n \"status\": \"success\",\n \"swarm_name\": \"market-sentiment\",\n \"task\": \"Assess current market sentiment for technology sector\",\n \"output\": {...},\n \"metadata\": {...}\n }\n]\n
"},{"location":"swarms_cloud/swarms_api/#individual-agent-endpoints","title":"Individual Agent Endpoints","text":""},{"location":"swarms_cloud/swarms_api/#run-single-agent","title":"Run Single Agent","text":"Run a single agent with the specified configuration.
Endpoint: /v1/agent/completions
Method: POST Rate Limit: 100 requests per 60 seconds
Request Parameters:
Field Type Description Required agent_config AgentSpec Configuration for the agent Yes task string The task to be completed by the agent Yes Shell (curl)Python (requests)TypeScript (fetch)curl -X POST \"https://api.swarms.world/v1/agent/completions\" \\\n -H \"x-api-key: your_api_key_here\" \\\n -H \"Content-Type: application/json\" \\\n -d '{\n \"agent_config\": {\n \"agent_name\": \"Research Assistant\",\n \"description\": \"Helps with research tasks\",\n \"system_prompt\": \"You are a research assistant expert.\",\n \"model_name\": \"gpt-4o\",\n \"max_loops\": 1,\n \"max_tokens\": 8192,\n \"temperature\": 0.5\n },\n \"task\": \"Research the latest developments in quantum computing.\"\n }'\n
import requests\nimport json\n\nAPI_BASE_URL = \"https://api.swarms.world\"\nAPI_KEY = \"your_api_key_here\"\n\nheaders = {\n \"x-api-key\": API_KEY,\n \"Content-Type\": \"application/json\"\n}\n\nagent_request = {\n \"agent_config\": {\n \"agent_name\": \"Research Assistant\",\n \"description\": \"Helps with research tasks\",\n \"system_prompt\": \"You are a research assistant expert.\",\n \"model_name\": \"gpt-4o\",\n \"max_loops\": 1,\n \"max_tokens\": 8192,\n \"temperature\": 0.5\n },\n \"task\": \"Research the latest developments in quantum computing.\"\n}\n\nresponse = requests.post(\n f\"{API_BASE_URL}/v1/agent/completions\",\n headers=headers,\n json=agent_request\n)\n\nif response.status_code == 200:\n result = response.json()\n print(f\"Agent {result['name']} completed successfully!\")\n print(f\"Usage: {result['usage']['total_tokens']} tokens\")\n print(f\"Output: {result['outputs']}\")\nelse:\n print(f\"Error: {response.status_code} - {response.text}\")\n
interface AgentConfig {\n agent_name: string;\n description: string;\n system_prompt: string;\n model_name: string;\n max_loops: number;\n max_tokens: number;\n temperature: number;\n}\n\ninterface AgentRequest {\n agent_config: AgentConfig;\n task: string;\n}\n\nconst API_BASE_URL = \"https://api.swarms.world\";\nconst API_KEY = \"your_api_key_here\";\n\nasync function runSingleAgent(): Promise<void> {\n const agentRequest: AgentRequest = {\n agent_config: {\n agent_name: \"Research Assistant\",\n description: \"Helps with research tasks\",\n system_prompt: \"You are a research assistant expert.\",\n model_name: \"gpt-4o\",\n max_loops: 1,\n max_tokens: 8192,\n temperature: 0.5\n },\n task: \"Research the latest developments in quantum computing.\"\n };\n\n try {\n const response = await fetch(`${API_BASE_URL}/v1/agent/completions`, {\n method: 'POST',\n headers: {\n 'x-api-key': API_KEY,\n 'Content-Type': 'application/json'\n },\n body: JSON.stringify(agentRequest)\n });\n\n if (response.ok) {\n const result = await response.json();\n console.log(`Agent ${result.name} completed successfully!`);\n console.log(`Usage: ${result.usage.total_tokens} tokens`);\n console.log(`Output:`, result.outputs);\n } else {\n console.error(`Error: ${response.status} - ${await response.text()}`);\n }\n } catch (error) {\n console.error(\"Request failed:\", error);\n }\n}\n\nrunSingleAgent();\n
Example Response:
{\n \"id\": \"agent-abc123\",\n \"success\": true,\n \"name\": \"Research Assistant\",\n \"description\": \"Helps with research tasks\",\n \"temperature\": 0.5,\n \"outputs\": {},\n \"usage\": {\n \"input_tokens\": 150,\n \"output_tokens\": 450,\n \"total_tokens\": 600\n },\n \"timestamp\": \"2024-03-05T12:34:56.789Z\"\n}\n
"},{"location":"swarms_cloud/swarms_api/#agentcompletion-model","title":"AgentCompletion Model","text":"The AgentCompletion
model defines the configuration for running a single agent task.
agent_config
AgentSpec The configuration of the agent that will execute the task Yes task
string The task to be completed by the agent Yes history
Dict[str, Any] The history of the agent's previous tasks and responses No"},{"location":"swarms_cloud/swarms_api/#agentspec-model","title":"AgentSpec Model","text":"The AgentSpec
model defines the configuration for an individual agent.
agent_name
string None The unique name assigned to the agent Yes description
string None Detailed explanation of the agent's purpose No system_prompt
string None Initial instruction provided to the agent No model_name
string \"gpt-4o-mini\" Name of the AI model to use No auto_generate_prompt
boolean false Whether to auto-generate prompts No max_tokens
integer 8192 Maximum tokens in response No temperature
float 0.5 Controls randomness (0-1) No role
string \"worker\" Role of the agent No max_loops
integer 1 Maximum iterations No tools_list_dictionary
List[Dict] None Available tools for the agent No mcp_url
string None URL of the Model Context Protocol (MCP) server No Execute a task using a single agent with the specified configuration.
Endpoint: /v1/agent/completions
Method: POST Rate Limit: 100 requests per 60 seconds
Request Body:
{\n \"agent_config\": {\n \"agent_name\": \"Research Assistant\",\n \"description\": \"Specialized in research and analysis\",\n \"system_prompt\": \"You are an expert research assistant.\",\n \"model_name\": \"gpt-4o\",\n \"auto_generate_prompt\": false,\n \"max_tokens\": 8192,\n \"temperature\": 0.5,\n \"role\": \"worker\",\n \"max_loops\": 1,\n \"tools_list_dictionary\": [\n {\n \"name\": \"search\",\n \"description\": \"Search the web for information\",\n \"parameters\": {\n \"query\": \"string\"\n }\n }\n ],\n \"mcp_url\": \"https://example-mcp.com\"\n },\n \"task\": \"Research the latest developments in quantum computing and summarize key findings\",\n \"history\": {\n \"previous_research\": \"Earlier findings on quantum computing basics...\",\n \"user_preferences\": \"Focus on practical applications...\"\n }\n}\n
Response:
{\n \"id\": \"agent-abc123xyz\",\n \"success\": true,\n \"name\": \"Research Assistant\",\n \"description\": \"Specialized in research and analysis\",\n \"temperature\": 0.5,\n \"outputs\": {\n \"research_summary\": \"...\",\n \"key_findings\": [\n \"...\"\n ]\n },\n \"usage\": {\n \"input_tokens\": 450,\n \"output_tokens\": 850,\n \"total_tokens\": 1300,\n \"mcp_url\": 0.1\n },\n \"timestamp\": \"2024-03-05T12:34:56.789Z\"\n}\n
"},{"location":"swarms_cloud/swarms_api/#run-batch-agents","title":"Run Batch Agents","text":"Execute multiple agent tasks in parallel.
Endpoint: /v1/agent/batch/completions
Method: POST Rate Limit: 100 requests per 60 seconds Maximum Batch Size: 10 requests Input A list of AgentCompletion
inputs
curl -X POST \"https://api.swarms.world/v1/agent/batch/completions\" \\\n -H \"x-api-key: your_api_key_here\" \\\n -H \"Content-Type: application/json\" \\\n -d '[\n {\n \"agent_config\": {\n \"agent_name\": \"Market Analyst\",\n \"description\": \"Expert in market analysis\",\n \"system_prompt\": \"You are a financial market analyst.\",\n \"model_name\": \"gpt-4o\",\n \"temperature\": 0.3\n },\n \"task\": \"Analyze the current market trends in AI technology sector\"\n },\n {\n \"agent_config\": {\n \"agent_name\": \"Technical Writer\",\n \"description\": \"Specialized in technical documentation\",\n \"system_prompt\": \"You are a technical documentation expert.\",\n \"model_name\": \"gpt-4o\",\n \"temperature\": 0.7\n },\n \"task\": \"Create a technical guide for implementing OAuth2 authentication\"\n }\n ]'\n
import requests\nimport json\n\nAPI_BASE_URL = \"https://api.swarms.world\"\nAPI_KEY = \"your_api_key_here\"\n\nheaders = {\n \"x-api-key\": API_KEY,\n \"Content-Type\": \"application/json\"\n}\n\nbatch_agents = [\n {\n \"agent_config\": {\n \"agent_name\": \"Market Analyst\",\n \"description\": \"Expert in market analysis\",\n \"system_prompt\": \"You are a financial market analyst.\",\n \"model_name\": \"gpt-4o\",\n \"temperature\": 0.3\n },\n \"task\": \"Analyze the current market trends in AI technology sector\"\n },\n {\n \"agent_config\": {\n \"agent_name\": \"Technical Writer\",\n \"description\": \"Specialized in technical documentation\",\n \"system_prompt\": \"You are a technical documentation expert.\",\n \"model_name\": \"gpt-4o\",\n \"temperature\": 0.7\n },\n \"task\": \"Create a technical guide for implementing OAuth2 authentication\"\n }\n]\n\nresponse = requests.post(\n f\"{API_BASE_URL}/v1/agent/batch/completions\",\n headers=headers,\n json=batch_agents\n)\n\nif response.status_code == 200:\n result = response.json()\n print(f\"Batch completed with {result['total_requests']} agents\")\n print(f\"Execution time: {result['execution_time']} seconds\")\n print(\"\\nResults:\")\n for i, agent_result in enumerate(result['results']):\n print(f\" Agent {i+1}: {agent_result['name']} - {agent_result['success']}\")\nelse:\n print(f\"Error: {response.status_code} - {response.text}\")\n
interface AgentConfig {\n agent_name: string;\n description: string;\n system_prompt: string;\n model_name: string;\n temperature: number;\n}\n\ninterface AgentCompletion {\n agent_config: AgentConfig;\n task: string;\n}\n\nconst API_BASE_URL = \"https://api.swarms.world\";\nconst API_KEY = \"your_api_key_here\";\n\nasync function runBatchAgents(): Promise<void> {\n const batchAgents: AgentCompletion[] = [\n {\n agent_config: {\n agent_name: \"Market Analyst\",\n description: \"Expert in market analysis\",\n system_prompt: \"You are a financial market analyst.\",\n model_name: \"gpt-4o\",\n temperature: 0.3\n },\n task: \"Analyze the current market trends in AI technology sector\"\n },\n {\n agent_config: {\n agent_name: \"Technical Writer\",\n description: \"Specialized in technical documentation\",\n system_prompt: \"You are a technical documentation expert.\",\n model_name: \"gpt-4o\",\n temperature: 0.7\n },\n task: \"Create a technical guide for implementing OAuth2 authentication\"\n }\n ];\n\n try {\n const response = await fetch(`${API_BASE_URL}/v1/agent/batch/completions`, {\n method: 'POST',\n headers: {\n 'x-api-key': API_KEY,\n 'Content-Type': 'application/json'\n },\n body: JSON.stringify(batchAgents)\n });\n\n if (response.ok) {\n const result = await response.json();\n console.log(`Batch completed with ${result.total_requests} agents`);\n console.log(`Execution time: ${result.execution_time} seconds`);\n console.log(\"\\nResults:\");\n result.results.forEach((agentResult: any, index: number) => {\n console.log(` Agent ${index + 1}: ${agentResult.name} - ${agentResult.success}`);\n });\n } else {\n console.error(`Error: ${response.status} - ${await response.text()}`);\n }\n } catch (error) {\n console.error(\"Request failed:\", error);\n }\n}\n\nrunBatchAgents();\n
Response:
{\n \"batch_id\": \"agent-batch-xyz789\",\n \"total_requests\": 2,\n \"execution_time\": 15.5,\n \"timestamp\": \"2024-03-05T12:34:56.789Z\",\n \"results\": [\n {\n \"id\": \"agent-abc123\",\n \"success\": true,\n \"name\": \"Market Analyst\",\n \"outputs\": {\n \"market_analysis\": \"...\"\n },\n \"usage\": {\n \"input_tokens\": 300,\n \"output_tokens\": 600,\n \"total_tokens\": 900\n }\n },\n {\n \"id\": \"agent-def456\",\n \"success\": true,\n \"name\": \"Technical Writer\",\n \"outputs\": {\n \"technical_guide\": \"...\"\n },\n \"usage\": {\n \"input_tokens\": 400,\n \"output_tokens\": 800,\n \"total_tokens\": 1200\n }\n }\n ]\n}\n
"},{"location":"swarms_cloud/swarms_api/#production-examples","title":"Production Examples","text":""},{"location":"swarms_cloud/swarms_api/#error-handling","title":"Error Handling","text":"The Swarms API follows standard HTTP status codes for error responses:
Status Code Meaning Handling Strategy 400 Bad Request Validate request parameters before sending 401 Unauthorized Check API key validity 403 Forbidden Verify API key permissions 404 Not Found Check endpoint URL and resource IDs 429 Too Many Requests Implement exponential backoff retry logic 500 Internal Server Error Retry with backoff, then contact supportError responses include a detailed message explaining the issue:
{\n \"detail\": \"Failed to create swarm: Invalid swarm_type specified\"\n}\n
"},{"location":"swarms_cloud/swarms_api/#rate-limiting","title":"Rate Limiting","text":"Description Details Rate Limit 100 requests per 60-second window Exceed Consequence 429 status code returned Recommended Action Implement retry logic with exponential backoff"},{"location":"swarms_cloud/swarms_api/#billing-cost-management","title":"Billing & Cost Management","text":"Cost Factor Description Agent Count Base cost per agent Input Tokens Cost based on size of input data and prompts Output Tokens Cost based on length of generated responses Time of Day Reduced rates during nighttime hours (8 PM to 6 AM PT) Cost Information Included in each response's metadata"},{"location":"swarms_cloud/swarms_api/#best-practices","title":"Best Practices","text":""},{"location":"swarms_cloud/swarms_api/#task-description","title":"Task Description","text":"Practice Description Detail Provide detailed, specific task descriptions Context Include all necessary context and constraints Structure Structure complex inputs for easier processing"},{"location":"swarms_cloud/swarms_api/#agent-configuration","title":"Agent Configuration","text":"Practice Description Simple Tasks Use AutoSwarmBuilder
for automatic agent generation Complex Tasks Manually define agents with specific expertise Workflow Use appropriate swarm_type
for your workflow pattern"},{"location":"swarms_cloud/swarms_api/#production-implementation","title":"Production Implementation","text":"Practice Description Error Handling Implement robust error handling and retries Logging Log API responses for debugging and auditing Cost Monitoring Monitor costs closely during development and testing"},{"location":"swarms_cloud/swarms_api/#cost-optimization","title":"Cost Optimization","text":"Practice Description Batching Batch related tasks when possible Scheduling Schedule non-urgent tasks during discount hours Scoping Carefully scope task descriptions to reduce token usage Caching Cache results when appropriate"},{"location":"swarms_cloud/swarms_api/#support","title":"Support","text":"Support Type Contact Information Documentation https://docs.swarms.world Email kye@swarms.world Community https://discord.gg/jM3Z6M9uMq Marketplace https://swarms.world Website https://swarms.ai"},{"location":"swarms_cloud/swarms_api/#service-tiers","title":"Service Tiers","text":""},{"location":"swarms_cloud/swarms_api/#standard-tier","title":"Standard Tier","text":"Feature Description Processing Default processing tier Execution Immediate execution Priority Higher priority processing Pricing Standard pricing Timeout 5-minute timeout limit"},{"location":"swarms_cloud/swarms_api/#flex-tier","title":"Flex Tier","text":"Feature Description Cost Lower cost processing Retries Automatic retries (up to 3 attempts) Timeout 15-minute timeout Discount 75% discount on token costs Suitability Best for non-urgent tasks Backoff Exponential backoff on resource contention Configuration Set service_tier: \"flex\"
in SwarmSpec"},{"location":"swarms_cloud/swarms_api_tools/","title":"Swarms API with Tools Guide","text":"The Swarms API allows you to create and manage AI agent swarms with optional tool integration. This guide walks you through setting up and using the Swarms API with tools.
"},{"location":"swarms_cloud/swarms_api_tools/#prerequisites","title":"Prerequisites","text":"requests
python-dotenv
pip install requests python-dotenv\n
.env
file in your project root:SWARMS_API_KEY=your_api_key_here\n
import os\nimport requests\nfrom dotenv import load_dotenv\nimport json\n\nload_dotenv()\n\nAPI_KEY = os.getenv(\"SWARMS_API_KEY\")\nBASE_URL = \"https://api.swarms.world\"\n\nheaders = {\"x-api-key\": API_KEY, \"Content-Type\": \"application/json\"}\n
"},{"location":"swarms_cloud/swarms_api_tools/#creating-a-swarm-with-tools","title":"Creating a Swarm with Tools","text":""},{"location":"swarms_cloud/swarms_api_tools/#step-by-step-guide","title":"Step-by-Step Guide","text":"Define your tool dictionary:
tool_dictionary = {\n \"type\": \"function\",\n \"function\": {\n \"name\": \"search_topic\",\n \"description\": \"Conduct an in-depth search on a specified topic\",\n \"parameters\": {\n \"type\": \"object\",\n \"properties\": {\n \"depth\": {\n \"type\": \"integer\",\n \"description\": \"Search depth (1-3)\"\n },\n \"detailed_queries\": {\n \"type\": \"array\",\n \"items\": {\n \"type\": \"string\",\n \"description\": \"Specific search queries\"\n }\n }\n },\n \"required\": [\"depth\", \"detailed_queries\"]\n }\n }\n}\n
Create agent configurations:
agent_config = {\n \"agent_name\": \"Market Analyst\",\n \"description\": \"Analyzes market trends\",\n \"system_prompt\": \"You are a financial analyst expert.\",\n \"model_name\": \"openai/gpt-4\",\n \"role\": \"worker\",\n \"max_loops\": 1,\n \"max_tokens\": 8192,\n \"temperature\": 0.5,\n \"auto_generate_prompt\": False,\n \"tools_dictionary\": [tool_dictionary] # Optional: Add tools if needed\n}\n
Create the swarm payload:
payload = {\n \"name\": \"Your Swarm Name\",\n \"description\": \"Swarm description\",\n \"agents\": [agent_config],\n \"max_loops\": 1,\n \"swarm_type\": \"ConcurrentWorkflow\",\n \"task\": \"Your task description\",\n \"output_type\": \"dict\"\n}\n
Make the API request:
def run_swarm(payload):\n response = requests.post(\n f\"{BASE_URL}/v1/swarm/completions\",\n headers=headers,\n json=payload\n )\n return response.json()\n
No, tools are optional for each agent. You can choose which agents have tools based on your specific needs. Simply omit the tools_dictionary
field for agents that don't require tools.
Currently, the API supports function-type tools. Each tool must have: - A unique name
A clear description
Well-defined parameters with types and descriptions
Yes, you can create swarms with a mix of tool-enabled and regular agents. This allows for flexible swarm architectures.
"},{"location":"swarms_cloud/swarms_api_tools/#whats-the-recommended-number-of-tools-per-agent","title":"What's the recommended number of tools per agent?","text":"While there's no strict limit, it's recommended to:
Keep tools focused and specific
Only include tools that the agent needs
Consider the complexity of tool interactions
Here's a complete example of a financial analysis swarm:
def run_financial_analysis_swarm():\n payload = {\n \"name\": \"Financial Analysis Swarm\",\n \"description\": \"Market analysis swarm\",\n \"agents\": [\n {\n \"agent_name\": \"Market Analyst\",\n \"description\": \"Analyzes market trends\",\n \"system_prompt\": \"You are a financial analyst expert.\",\n \"model_name\": \"openai/gpt-4\",\n \"role\": \"worker\",\n \"max_loops\": 1,\n \"max_tokens\": 8192,\n \"temperature\": 0.5,\n \"auto_generate_prompt\": False,\n \"tools_dictionary\": [\n {\n \"type\": \"function\",\n \"function\": {\n \"name\": \"search_topic\",\n \"description\": \"Conduct market research\",\n \"parameters\": {\n \"type\": \"object\",\n \"properties\": {\n \"depth\": {\n \"type\": \"integer\",\n \"description\": \"Search depth (1-3)\"\n },\n \"detailed_queries\": {\n \"type\": \"array\",\n \"items\": {\"type\": \"string\"}\n }\n },\n \"required\": [\"depth\", \"detailed_queries\"]\n }\n }\n }\n ]\n }\n ],\n \"max_loops\": 1,\n \"swarm_type\": \"ConcurrentWorkflow\",\n \"task\": \"Analyze top performing tech ETFs\",\n \"output_type\": \"dict\"\n }\n\n response = requests.post(\n f\"{BASE_URL}/v1/swarm/completions\",\n headers=headers,\n json=payload\n )\n return response.json()\n
"},{"location":"swarms_cloud/swarms_api_tools/#health-check","title":"Health Check","text":"Always verify the API status before running swarms:
def check_api_health():\n response = requests.get(f\"{BASE_URL}/health\", headers=headers)\n return response.json()\n
"},{"location":"swarms_cloud/swarms_api_tools/#best-practices","title":"Best Practices","text":"Error Handling: Always implement proper error handling:
def safe_run_swarm(payload):\n try:\n response = requests.post(\n f\"{BASE_URL}/v1/swarm/completions\",\n headers=headers,\n json=payload\n )\n response.raise_for_status()\n return response.json()\n except requests.exceptions.RequestException as e:\n print(f\"Error running swarm: {e}\")\n return None\n
Environment Variables: Never hardcode API keys
Tool Design: Keep tools simple and focused
Testing: Validate swarm configurations before production use
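As a sketch of that validation step, the checks below are illustrative only, derived from the parameter tables earlier on this page (`task` is the only required swarm field, and each agent needs an `agent_name`); they are not an official schema:

```python
REQUIRED_SWARM_FIELDS = {"task"}  # per the Run Swarm request table above

def validate_swarm_payload(payload: dict) -> list:
    """Return a list of problems; an empty list means the payload looks sendable."""
    problems = []
    for field in REQUIRED_SWARM_FIELDS:
        if not payload.get(field):
            problems.append(f"missing required field: {field}")
    for i, agent in enumerate(payload.get("agents", [])):
        if not agent.get("agent_name"):
            problems.append(f"agents[{i}] is missing agent_name")
    return problems

payload = {"name": "Demo", "agents": [{"description": "missing a name"}]}
print(validate_swarm_payload(payload))
# → ['missing required field: task', 'agents[0] is missing agent_name']
```

Running a check like this before every POST catches malformed configurations locally instead of burning a request on a 400 response.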
Common issues and solutions:
Verify the API key is correctly set in your .env file
Check key permissions
Tool Execution Errors
Validate tool parameters
Check tool function signatures
Response Timeout
Consider reducing max_tokens
Simplify tool complexity
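One quick mitigation for timeouts is trimming each agent's token budget before retrying. A hedged sketch (the helper name is ours, not part of the API; 8192 is the documented default for max_tokens):

```python
def reduce_token_budget(payload: dict, factor: float = 0.5) -> dict:
    """Return a copy of a swarm payload with every agent's max_tokens
    scaled down -- a quick mitigation when requests time out."""
    trimmed = dict(payload)
    trimmed["agents"] = [
        {**agent, "max_tokens": int(agent.get("max_tokens", 8192) * factor)}
        for agent in payload.get("agents", [])
    ]
    return trimmed

payload = {"agents": [{"agent_name": "Market Analyst", "max_tokens": 8192}]}
print(reduce_token_budget(payload)["agents"][0]["max_tokens"])  # → 4096
```

The original payload is left untouched, so you can retry with progressively smaller budgets while keeping the initial configuration around.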
import os\nimport requests\nfrom dotenv import load_dotenv\nimport json\n\nload_dotenv()\n\nAPI_KEY = os.getenv(\"SWARMS_API_KEY\")\nBASE_URL = \"https://api.swarms.world\"\n\nheaders = {\"x-api-key\": API_KEY, \"Content-Type\": \"application/json\"}\n\n\ndef run_health_check():\n response = requests.get(f\"{BASE_URL}/health\", headers=headers)\n return response.json()\n\n\ndef run_single_swarm():\n payload = {\n \"name\": \"Financial Analysis Swarm\",\n \"description\": \"Market analysis swarm\",\n \"agents\": [\n {\n \"agent_name\": \"Market Analyst\",\n \"description\": \"Analyzes market trends\",\n \"system_prompt\": \"You are a financial analyst expert.\",\n \"model_name\": \"openai/gpt-4o\",\n \"role\": \"worker\",\n \"max_loops\": 1,\n \"max_tokens\": 8192,\n \"temperature\": 0.5,\n \"auto_generate_prompt\": False,\n \"tools_dictionary\": [\n {\n \"type\": \"function\",\n \"function\": {\n \"name\": \"search_topic\",\n \"description\": \"Conduct an in-depth search on a specified topic or subtopic, generating a comprehensive array of highly detailed search queries tailored to the input parameters.\",\n \"parameters\": {\n \"type\": \"object\",\n \"properties\": {\n \"depth\": {\n \"type\": \"integer\",\n \"description\": \"Indicates the level of thoroughness for the search. Values range from 1 to 3, where 1 represents a superficial search and 3 signifies an exploration of the topic.\",\n },\n \"detailed_queries\": {\n \"type\": \"array\",\n \"description\": \"An array of highly specific search queries that are generated based on the input query and the specified depth. Each query should be designed to elicit detailed and relevant information from various sources.\",\n \"items\": {\n \"type\": \"string\",\n \"description\": \"Each item in this array should represent a unique search query that targets a specific aspect of the main topic, ensuring a comprehensive exploration of the subject matter.\",\n },\n },\n },\n \"required\": [\"depth\", \"detailed_queries\"],\n },\n },\n },\n ],\n },\n {\n \"agent_name\": \"Economic Forecaster\",\n \"description\": \"Predicts economic trends\",\n \"system_prompt\": \"You are an expert in economic forecasting.\",\n \"model_name\": \"gpt-4o\",\n \"role\": \"worker\",\n \"max_loops\": 1,\n \"max_tokens\": 8192,\n \"temperature\": 0.5,\n \"auto_generate_prompt\": False,\n \"tools_dictionary\": [\n {\n \"type\": \"function\",\n \"function\": {\n \"name\": \"search_topic\",\n \"description\": \"Conduct an in-depth search on a specified topic or subtopic, generating a comprehensive array of highly detailed search queries tailored to the input parameters.\",\n \"parameters\": {\n \"type\": \"object\",\n \"properties\": {\n \"depth\": {\n \"type\": \"integer\",\n \"description\": \"Indicates the level of thoroughness for the search. Values range from 1 to 3, where 1 represents a superficial search and 3 signifies an exploration of the topic.\",\n },\n \"detailed_queries\": {\n \"type\": \"array\",\n \"description\": \"An array of highly specific search queries that are generated based on the input query and the specified depth. Each query should be designed to elicit detailed and relevant information from various sources.\",\n \"items\": {\n \"type\": \"string\",\n \"description\": \"Each item in this array should represent a unique search query that targets a specific aspect of the main topic, ensuring a comprehensive exploration of the subject matter.\",\n },\n },\n },\n \"required\": [\"depth\", \"detailed_queries\"],\n },\n },\n },\n ],\n },\n ],\n \"max_loops\": 1,\n \"swarm_type\": \"ConcurrentWorkflow\",\n \"task\": \"What are the best etfs and index funds for ai and tech?\",\n \"output_type\": \"dict\",\n }\n\n response = requests.post(\n f\"{BASE_URL}/v1/swarm/completions\",\n headers=headers,\n json=payload,\n )\n\n print(response)\n print(response.status_code)\n output = response.json()\n\n return json.dumps(output, indent=4)\n\n\nif __name__ == \"__main__\":\n result = run_single_swarm()\n print(\"Swarm Result:\")\n print(result)\n
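Before sending a payload like the one above, a lightweight pre-flight check can catch configuration mistakes early (a hypothetical sketch; the required field names are taken from this example, and the API performs its own validation regardless):

```python
REQUIRED_AGENT_FIELDS = {"agent_name", "model_name", "system_prompt"}

def validate_payload(payload: dict) -> list:
    """Return a list of problems found in a swarm payload; empty means OK."""
    problems = []
    agents = payload.get("agents") or []
    if not agents:
        problems.append("payload defines no agents")
    for i, agent in enumerate(agents):
        missing = REQUIRED_AGENT_FIELDS - set(agent)
        if missing:
            problems.append(f"agent {i} is missing fields: {sorted(missing)}")
    return problems
```

Calling `validate_payload(payload)` before `requests.post` turns a rejected request into an actionable local error message.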
"},{"location":"swarms_cloud/vision/","title":"The Swarms Cloud and Agent Marketplace","text":"We stand at the dawn of a new era\u2014the Agentic Economy, where the power of intelligent automation is in the hands of everyone. The Swarms Cloud and Agent Marketplace will serve as the epicenter of this economy, enabling developers, businesses, and creators to easily publish, discover, and leverage intelligent agents. Our vision is to make publishing agents as simple as possible through an intuitive CLI, while empowering users to generate income by posting their APIs on the marketplace.
The Swarms Marketplace is more than just a platform\u2014it\u2019s a revolutionary ecosystem that will change how we think about automation and intelligence. By building this platform, we aim to democratize access to agent-driven solutions, enabling a seamless bridge between creators and consumers of automation. With every agent posted to the marketplace, a ripple effect is created, driving innovation across industries and providing an unparalleled opportunity for monetization.
"},{"location":"swarms_cloud/vision/#the-agent-marketplace","title":"The Agent Marketplace","text":""},{"location":"swarms_cloud/vision/#a-unified-platform-for-automation","title":"A Unified Platform for Automation","text":"In the Swarms Marketplace, agents will be the new currency of efficiency. Whether you\u2019re building agents for marketing, finance, customer service, or any other domain, the Swarms Cloud will allow you to showcase your agentic APIs, easily discoverable by anyone needing those capabilities.
We envision the marketplace to function like an API store, where users can search for specific agent capabilities, purchase access to agents, or even integrate their existing systems with agent-based APIs that others have developed. Each agent you publish will come with a potential income stream as businesses and developers integrate your creations into their workflows.
"},{"location":"swarms_cloud/vision/#the-opportunity-to-monetize-your-apis","title":"The Opportunity to Monetize Your APIs","text":"The Swarms Marketplace is designed to let developers and businesses generate income by sharing their agent APIs. Once your agent is published to the marketplace, other users can browse, test, and integrate it into their operations. You will be able to set custom pricing, usage tiers, and licensing terms for your API, ensuring you can profit from your innovations.
Our vision for monetization includes:
API subscriptions: Allow users to subscribe to your agent API with recurring payments.
Per-use pricing: Offer users a pay-as-you-go model where they only pay for the API calls they use.
Licensing: Enable others to purchase full access to your agent for a set period or on a project basis.
The complexity of deploying agents to a marketplace should never be a barrier. Our goal is to streamline the publishing process into something as simple as a command-line interaction. The Swarms CLI will be your one-stop solution to get your agent up and running on the marketplace.
"},{"location":"swarms_cloud/vision/#cli-workflow","title":"CLI Workflow:","text":"swarms publish
command to instantly deploy your agent to the marketplace. Here\u2019s an example of how easy publishing will be:
$ swarms create-agent --name \"CustomerSupportAgent\" --type \"LLM\" \n$ swarms set-metadata --description \"An intelligent agent for customer support operations\" --pricing \"subscription\" --rate \"$20/month\"\n$ swarms publish\n
Within minutes, your agent will be live and accessible to the global marketplace!
"},{"location":"swarms_cloud/vision/#empowering-businesses","title":"Empowering Businesses","text":"For businesses, the marketplace offers an unprecedented opportunity to automate tasks, integrate pre-built agents, and drastically cut operational costs. Companies no longer need to build every system from scratch. With the marketplace, they can simply discover and plug in the agentic solutions that best suit their needs.
graph TD\n A[Build Agent] --> B[Set Metadata]\n B --> C[Publish to Marketplace]\n C --> D{Agent Available Globally}\n D --> E[Developers Discover API]\n D --> F[Businesses Integrate API]\n F --> G[Revenue Stream for Agent Creator]\n E --> G
"},{"location":"swarms_cloud/vision/#the-future-of-automation-agents-as-apis","title":"The Future of Automation: Agents as APIs","text":"In this future we\u2019re creating, agents will be as ubiquitous as APIs. The Swarms Marketplace will be an expansive repository of intelligent agents, each contributing to the automation and streamlining of everyday tasks. Imagine a world where every business can access highly specific, pre-built intelligence for any task, from customer support to supply chain management, and integrate these agents into their processes in minutes.
graph LR\n A[Search for Agent API] --> B[Find Agent That Fits]\n B --> C[Purchase Access]\n C --> D[Integrate with Business System]\n D --> E[Business Operations Streamlined]
"},{"location":"swarms_cloud/vision/#conclusion","title":"Conclusion","text":"The Swarms Cloud and Agent Marketplace will usher in an agent-powered future, where automation is accessible to all, and monetization opportunities are boundless. Our vision is to create a space where developers can not only build and showcase their agents but can also create sustainable income streams from their creations. The CLI will remove the friction of deployment, and the marketplace will enable a self-sustaining ecosystem of agentic intelligence that powers the next generation of automation.
Together, we will shape the Agentic Economy, where collaboration, innovation, and financial opportunity intersect. Welcome to the future of intelligent automation. Welcome to Swarms Cloud.
"},{"location":"swarms_memory/","title":"Announcing the Release of Swarms-Memory Package: Your Gateway to Efficient RAG Systems","text":"We are thrilled to announce the release of the Swarms-Memory package, a powerful and easy-to-use toolkit designed to facilitate the implementation of Retrieval-Augmented Generation (RAG) systems. Whether you're a seasoned AI practitioner or just starting out, Swarms-Memory provides the tools you need to integrate high-performance, reliable RAG systems into your applications seamlessly.
In this blog post, we'll walk you through getting started with the Swarms-Memory package, covering installation, usage examples, and a detailed overview of supported RAG systems like Pinecone and ChromaDB. Let's dive in!
"},{"location":"swarms_memory/#what-is-swarms-memory","title":"What is Swarms-Memory?","text":"Swarms-Memory is a Python package that simplifies the integration of advanced RAG systems into your projects. It supports multiple databases optimized for AI tasks, providing you with the flexibility to choose the best system for your needs. With Swarms-Memory, you can effortlessly handle large-scale AI tasks, vector searches, and more.
"},{"location":"swarms_memory/#key-features","title":"Key Features","text":"Here's an overview of the RAG systems currently supported by Swarms-Memory:
RAG System Status Description Documentation Website ChromaDB Available A high-performance, distributed database optimized for handling large-scale AI tasks. ChromaDB Documentation ChromaDB Pinecone Available A fully managed vector database for adding vector search to your applications. Pinecone Documentation Pinecone Redis Coming Soon An open-source, in-memory data structure store, used as a database, cache, and broker. Redis Documentation Redis Faiss Coming Soon A library for efficient similarity search and clustering of dense vectors by Facebook AI. Faiss Documentation Faiss HNSW Coming Soon A graph-based algorithm for approximate nearest neighbor search, known for speed. HNSW Documentation HNSW"},{"location":"swarms_memory/#getting-started","title":"Getting Started","text":""},{"location":"swarms_memory/#requirements","title":"Requirements","text":"Before you begin, ensure you have the following:
.env
file with your respective API keys (e.g., PINECONE_API_KEY
). You can install the Swarms-Memory package using pip:
$ pip install swarms-memory\n
"},{"location":"swarms_memory/#usage-examples","title":"Usage Examples","text":""},{"location":"swarms_memory/#pinecone","title":"Pinecone","text":"Here's a step-by-step guide on how to use Pinecone with Swarms-Memory:
from typing import List, Dict, Any\nfrom swarms_memory import PineconeMemory\n
from transformers import AutoTokenizer, AutoModel\nimport torch\n\n# Custom embedding function using a HuggingFace model\ndef custom_embedding_function(text: str) -> List[float]:\n tokenizer = AutoTokenizer.from_pretrained(\"bert-base-uncased\")\n model = AutoModel.from_pretrained(\"bert-base-uncased\")\n inputs = tokenizer(text, return_tensors=\"pt\", padding=True, truncation=True, max_length=512)\n with torch.no_grad():\n outputs = model(**inputs)\n embeddings = outputs.last_hidden_state.mean(dim=1).squeeze().tolist()\n return embeddings\n\n# Custom preprocessing function\ndef custom_preprocess(text: str) -> str:\n return text.lower().strip()\n\n# Custom postprocessing function\ndef custom_postprocess(results: List[Dict[str, Any]]) -> List[Dict[str, Any]]:\n for result in results:\n result[\"custom_score\"] = result[\"score\"] * 2 # Example modification\n return results\n
wrapper = PineconeMemory(\n api_key=\"your-api-key\",\n environment=\"your-environment\",\n index_name=\"your-index-name\",\n embedding_function=custom_embedding_function,\n preprocess_function=custom_preprocess,\n postprocess_function=custom_postprocess,\n logger_config={\n \"handlers\": [\n {\"sink\": \"custom_rag_wrapper.log\", \"rotation\": \"1 GB\"},\n {\"sink\": lambda msg: print(f\"Custom log: {msg}\", end=\"\")},\n ],\n },\n)\n
# Adding documents\nwrapper.add(\"This is a sample document about artificial intelligence.\", {\"category\": \"AI\"})\nwrapper.add(\"Python is a popular programming language for data science.\", {\"category\": \"Programming\"})\n\n# Querying\nresults = wrapper.query(\"What is AI?\", filter={\"category\": \"AI\"})\nfor result in results:\n print(f\"Score: {result['score']}, Custom Score: {result['custom_score']}, Text: {result['metadata']['text']}\")\n
"},{"location":"swarms_memory/#chromadb","title":"ChromaDB","text":"Using ChromaDB with Swarms-Memory is straightforward. Here\u2019s how:
from swarms_memory import ChromaDB\n
chromadb = ChromaDB(\n metric=\"cosine\",\n output_dir=\"results\",\n limit_tokens=1000,\n n_results=2,\n docs_folder=\"path/to/docs\",\n verbose=True,\n)\n
# Add a document\ndoc_id = chromadb.add(\"This is a test document.\")\n\n# Query the document\nresult = chromadb.query(\"This is a test query.\")\n\n# Traverse a directory\nchromadb.traverse_directory()\n\n# Display the result\nprint(result)\n
"},{"location":"swarms_memory/#join-the-community","title":"Join the Community","text":"We're excited to see how you leverage Swarms-Memory in your projects! Join our community on Discord to share your experiences, ask questions, and stay updated on the latest developments.
The Swarms-Memory package brings a new level of ease and efficiency to building and managing RAG systems. With support for leading databases like ChromaDB and Pinecone, it's never been easier to integrate powerful, scalable AI solutions into your projects. We can't wait to see what you'll create with Swarms-Memory!
For more detailed usage examples and documentation, visit our GitHub repository and start exploring today!
"},{"location":"swarms_memory/chromadb/","title":"ChromaDB Documentation","text":"ChromaDB is a specialized module designed to facilitate the storage and retrieval of documents using the ChromaDB system. It offers functionalities for adding documents to a local ChromaDB collection and querying this collection based on provided query texts. This module integrates with the ChromaDB client to create and manage collections, leveraging various configurations for optimizing the storage and retrieval processes.
"},{"location":"swarms_memory/chromadb/#parameters","title":"Parameters","text":"Parameter Type Default Descriptionmetric
str
\"cosine\"
The similarity metric to use for the collection. output_dir
str
\"swarms\"
The name of the collection to store the results in. limit_tokens
Optional[int]
1000
The maximum number of tokens to use for the query. n_results
int
1
The number of results to retrieve. docs_folder
Optional[str]
None
The folder containing documents to be added to the collection. verbose
bool
False
Flag to enable verbose logging for debugging. *args
tuple
()
Additional positional arguments. **kwargs
dict
{}
Additional keyword arguments."},{"location":"swarms_memory/chromadb/#methods","title":"Methods","text":"Method Description __init__
Initializes the ChromaDB instance with specified parameters. add
Adds a document to the ChromaDB collection. query
Queries documents from the ChromaDB collection based on the query text. traverse_directory
Traverses the specified directory to add documents to the collection."},{"location":"swarms_memory/chromadb/#usage","title":"Usage","text":"from swarms_memory import ChromaDB\n\nchromadb = ChromaDB(\n metric=\"cosine\",\n output_dir=\"results\",\n limit_tokens=1000,\n n_results=2,\n docs_folder=\"path/to/docs\",\n verbose=True,\n)\n
"},{"location":"swarms_memory/chromadb/#adding-documents","title":"Adding Documents","text":"The add
method allows you to add a document to the ChromaDB collection. It generates a unique ID for each document and adds it to the collection.
document
str
- The document to be added to the collection. *args
tuple
()
Additional positional arguments. **kwargs
dict
{}
Additional keyword arguments."},{"location":"swarms_memory/chromadb/#returns","title":"Returns","text":"Type Description str
The ID of the added document."},{"location":"swarms_memory/chromadb/#example","title":"Example","text":"task = \"example_task\"\nresult = \"example_result\"\nresult_id = chromadb.add(document=\"This is a sample document.\")\nprint(f\"Document ID: {result_id}\")\n
"},{"location":"swarms_memory/chromadb/#querying-documents","title":"Querying Documents","text":"The query
method allows you to retrieve documents from the ChromaDB collection based on the provided query text.
query_text
str
- The query string to search for. *args
tuple
()
Additional positional arguments. **kwargs
dict
{}
Additional keyword arguments."},{"location":"swarms_memory/chromadb/#returns_1","title":"Returns","text":"Type Description str
The retrieved documents as a string."},{"location":"swarms_memory/chromadb/#example_1","title":"Example","text":"query_text = \"search term\"\nresults = chromadb.query(query_text=query_text)\nprint(f\"Retrieved Documents: {results}\")\n
"},{"location":"swarms_memory/chromadb/#traversing-directory","title":"Traversing Directory","text":"The traverse_directory
method traverses through every file in the specified directory and its subdirectories, adding the contents of each file to the ChromaDB collection.
chromadb.traverse_directory()\n
"},{"location":"swarms_memory/chromadb/#additional-information-and-tips","title":"Additional Information and Tips","text":""},{"location":"swarms_memory/chromadb/#verbose-logging","title":"Verbose Logging","text":"Enable the verbose
flag during initialization to get detailed logs of the operations, which is useful for debugging.
chromadb = ChromaDB(verbose=True)\n
"},{"location":"swarms_memory/chromadb/#handling-large-documents","title":"Handling Large Documents","text":"When dealing with large documents, consider using the limit_tokens
parameter to restrict the number of tokens processed in a single query.
chromadb = ChromaDB(limit_tokens=500)\n
"},{"location":"swarms_memory/chromadb/#optimizing-query-performance","title":"Optimizing Query Performance","text":"Use the appropriate similarity metric (metric
parameter) that suits your use case for optimal query performance.
chromadb = ChromaDB(metric=\"euclidean\")\n
"},{"location":"swarms_memory/chromadb/#references-and-resources","title":"References and Resources","text":"By following this documentation, users can effectively utilize the ChromaDB module for managing document storage and retrieval in their applications.
"},{"location":"swarms_memory/faiss/","title":"FAISSDB: Documentation","text":"The FAISSDB
class is a highly customizable wrapper for the FAISS (Facebook AI Similarity Search) library, designed for efficient similarity search and clustering of dense vectors. This class facilitates the creation of a Retrieval-Augmented Generation (RAG) system by providing methods to add documents to a FAISS index and query the index for similar documents. It supports custom embedding models, preprocessing functions, and other customizations to fit various use cases.
dimension
int
768
Dimension of the document embeddings. index_type
str
'Flat'
Type of FAISS index to use ('Flat'
or 'IVF'
). embedding_model
Optional[Any]
None
Custom embedding model. embedding_function
Optional[Callable[[str], List[float]]]
None
Custom function to generate embeddings from text. preprocess_function
Optional[Callable[[str], str]]
None
Custom function to preprocess text before embedding. postprocess_function
Optional[Callable[[List[Dict[str, Any]]], List[Dict[str, Any]]]]
None
Custom function to postprocess the results. metric
str
'cosine'
Distance metric for FAISS index ('cosine'
or 'l2'
). logger_config
Optional[Dict[str, Any]]
None
Configuration for the logger."},{"location":"swarms_memory/faiss/#methods","title":"Methods","text":""},{"location":"swarms_memory/faiss/#__init__","title":"__init__
","text":"Initializes the FAISSDB instance, setting up the logger, creating the FAISS index, and configuring custom functions if provided.
"},{"location":"swarms_memory/faiss/#add","title":"add
","text":"Adds a document to the FAISS index.
"},{"location":"swarms_memory/faiss/#parameters_1","title":"Parameters","text":"Parameter Type Default Descriptiondoc
str
None The document to be added. metadata
Optional[Dict[str, Any]]
None Additional metadata for the document."},{"location":"swarms_memory/faiss/#example-usage","title":"Example Usage","text":"db = FAISSDB(dimension=768)\ndb.add(\"This is a sample document.\", {\"category\": \"sample\"})\n
"},{"location":"swarms_memory/faiss/#query","title":"query
","text":"Queries the FAISS index for similar documents.
"},{"location":"swarms_memory/faiss/#parameters_2","title":"Parameters","text":"Parameter Type Default Descriptionquery
str
None The query string. top_k
int
5
The number of top results to return."},{"location":"swarms_memory/faiss/#returns","title":"Returns","text":"Type Description List[Dict[str, Any]]
A list of dictionaries containing the top_k most similar documents."},{"location":"swarms_memory/faiss/#example-usage_1","title":"Example Usage","text":"results = db.query(\"What is artificial intelligence?\")\nfor result in results:\n print(f\"Score: {result['score']}, Text: {result['metadata']['text']}\")\n
"},{"location":"swarms_memory/faiss/#internal-methods","title":"Internal Methods","text":""},{"location":"swarms_memory/faiss/#_setup_logger","title":"_setup_logger
","text":"Sets up the logger with the given configuration.
"},{"location":"swarms_memory/faiss/#parameters_3","title":"Parameters","text":"Parameter Type Default Descriptionconfig
Optional[Dict[str, Any]]
None Configuration for the logger."},{"location":"swarms_memory/faiss/#_create_index","title":"_create_index
","text":"Creates and returns a FAISS index based on the specified type and metric.
"},{"location":"swarms_memory/faiss/#parameters_4","title":"Parameters","text":"Parameter Type Default Descriptionindex_type
str
'Flat' Type of FAISS index to use. metric
str
'cosine' Distance metric for FAISS index."},{"location":"swarms_memory/faiss/#returns_1","title":"Returns","text":"Type Description faiss.Index
FAISS index instance."},{"location":"swarms_memory/faiss/#_default_embedding_function","title":"_default_embedding_function
","text":"Default embedding function using the SentenceTransformer model.
"},{"location":"swarms_memory/faiss/#parameters_5","title":"Parameters","text":"Parameter Type Default Descriptiontext
str
None The input text to embed."},{"location":"swarms_memory/faiss/#returns_2","title":"Returns","text":"Type Description List[float]
Embedding vector for the input text."},{"location":"swarms_memory/faiss/#_default_preprocess_function","title":"_default_preprocess_function
","text":"Default preprocessing function.
"},{"location":"swarms_memory/faiss/#parameters_6","title":"Parameters","text":"Parameter Type Default Descriptiontext
str
None The input text to preprocess."},{"location":"swarms_memory/faiss/#returns_3","title":"Returns","text":"Type Description str
Preprocessed text."},{"location":"swarms_memory/faiss/#_default_postprocess_function","title":"_default_postprocess_function
","text":"Default postprocessing function.
"},{"location":"swarms_memory/faiss/#parameters_7","title":"Parameters","text":"Parameter Type Default Descriptionresults
List[Dict[str, Any]]
None The results to postprocess."},{"location":"swarms_memory/faiss/#returns_4","title":"Returns","text":"Type Description List[Dict[str, Any]]
Postprocessed results."},{"location":"swarms_memory/faiss/#usage-examples","title":"Usage Examples","text":""},{"location":"swarms_memory/faiss/#example-1-basic-usage","title":"Example 1: Basic Usage","text":"# Initialize the FAISSDB instance\ndb = FAISSDB(dimension=768, index_type=\"Flat\")\n\n# Add documents to the FAISS index\ndb.add(\"This is a document about AI.\", {\"category\": \"AI\"})\ndb.add(\"Python is great for data science.\", {\"category\": \"Programming\"})\n\n# Query the FAISS index\nresults = db.query(\"Tell me about AI\")\nfor result in results:\n print(f\"Score: {result['score']}, Text: {result['metadata']['text']}\")\n
"},{"location":"swarms_memory/faiss/#example-2-custom-functions","title":"Example 2: Custom Functions","text":"from transformers import AutoTokenizer, AutoModel\nimport torch\n\n# Custom embedding function using a HuggingFace model\ndef custom_embedding_function(text: str) -> List[float]:\n tokenizer = AutoTokenizer.from_pretrained(\"bert-base-uncased\")\n model = AutoModel.from_pretrained(\"bert-base-uncased\")\n inputs = tokenizer(text, return_tensors=\"pt\", padding=True, truncation=True, max_length=512)\n with torch.no_grad():\n outputs = model(**inputs)\n embeddings = outputs.last_hidden_state.mean(dim=1).squeeze().tolist()\n return embeddings\n\n# Custom preprocessing function\ndef custom_preprocess(text: str) -> str:\n return text.lower().strip()\n\n# Custom postprocessing function\ndef custom_postprocess(results: List[Dict[str, Any]]) -> List[Dict[str, Any]]:\n for result in results:\n result[\"custom_score\"] = result[\"score\"] * 2 # Example modification\n return results\n\n# Initialize the FAISSDB instance with custom functions\ndb = FAISSDB(\n dimension=768,\n index_type=\"Flat\",\n embedding_function=custom_embedding_function,\n preprocess_function=custom_preprocess,\n postprocess_function=custom_postprocess,\n metric=\"cosine\",\n logger_config={\n \"handlers\": [\n {\"sink\": \"custom_faiss_rag_wrapper.log\", \"rotation\": \"1 GB\"},\n {\"sink\": lambda msg: print(f\"Custom log: {msg}\", end=\"\")}\n ],\n },\n)\n\n# Add documents to the FAISS index\ndb.add(\"This is a document about machine learning.\", {\"category\": \"ML\"})\ndb.add(\"Python is a versatile programming language.\", {\"category\": \"Programming\"})\n\n# Query the FAISS index\nresults = db.query(\"Explain machine learning\")\nfor result in results:\n print(f\"Score: {result['score']}, Custom Score: {result['custom_score']}, Text: {result['metadata']['text']}\")\n
"},{"location":"swarms_memory/faiss/#additional-information-and-tips","title":"Additional Information and Tips","text":"result formatting to specific needs. - FAISS supports various types of indices; choose the one that best fits the application requirements (e.g., Flat
for brute-force search, IVF
for faster search with some accuracy trade-off). - Properly configure the logger to monitor and debug the operations of the FAISSDB instance.
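The Flat-versus-IVF trade-off can be illustrated without FAISS itself: an inverted-file (IVF) index assigns vectors to coarse clusters and, at query time, brute-forces only the vectors in the `nprobe` closest clusters. A toy numpy sketch of the idea (not the FAISSDB implementation):

```python
import numpy as np

def ivf_search(xb, centroids, assign, xq, nprobe=1):
    """Toy inverted-file search: probe the nprobe nearest coarse
    centroids, then brute-force only the vectors assigned to them."""
    coarse = np.linalg.norm(centroids - xq, axis=1)
    probed = np.argsort(coarse)[:nprobe]
    candidates = np.where(np.isin(assign, probed))[0]
    dists = np.linalg.norm(xb[candidates] - xq, axis=1)
    return int(candidates[np.argmin(dists)])
```

With `nprobe` equal to the number of clusters this degenerates into an exhaustive (Flat) search; smaller `nprobe` is faster but can miss the true nearest neighbor, which is the accuracy trade-off the tip above refers to.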
By following this documentation, users can effectively utilize the FAISSDB
class for various similarity search and document retrieval tasks, customizing it to their specific needs through the provided hooks and functions.
The PineconeMemory
class provides a robust interface for integrating Pinecone-based Retrieval-Augmented Generation (RAG) systems. It allows for adding documents to a Pinecone index and querying the index for similar documents. The class supports custom embedding models, preprocessing functions, and other customizations to suit different use cases.
api_key
str
- Pinecone API key. environment
str
- Pinecone environment. index_name
str
- Name of the Pinecone index to use. dimension
int
768
Dimension of the document embeddings. embedding_model
Optional[Any]
None
Custom embedding model. Defaults to SentenceTransformer('all-MiniLM-L6-v2')
. embedding_function
Optional[Callable[[str], List[float]]]
None
Custom embedding function. Defaults to _default_embedding_function
. preprocess_function
Optional[Callable[[str], str]]
None
Custom preprocessing function. Defaults to _default_preprocess_function
. postprocess_function
Optional[Callable[[List[Dict[str, Any]]], List[Dict[str, Any]]]]
None
Custom postprocessing function. Defaults to _default_postprocess_function
. metric
str
'cosine'
Distance metric for Pinecone index. pod_type
str
'p1'
Pinecone pod type. namespace
str
''
Pinecone namespace. logger_config
Optional[Dict[str, Any]]
None
Configuration for the logger. Defaults to logging to rag_wrapper.log
and console output."},{"location":"swarms_memory/pinecone/#methods","title":"Methods","text":""},{"location":"swarms_memory/pinecone/#_setup_logger","title":"_setup_logger
","text":"def _setup_logger(self, config: Optional[Dict[str, Any]] = None)\n
Sets up the logger with the given configuration.
"},{"location":"swarms_memory/pinecone/#_default_embedding_function","title":"_default_embedding_function
","text":"def _default_embedding_function(self, text: str) -> List[float]\n
Generates embeddings using the default SentenceTransformer model.
"},{"location":"swarms_memory/pinecone/#_default_preprocess_function","title":"_default_preprocess_function
","text":"def _default_preprocess_function(self, text: str) -> str\n
Preprocesses the input text by stripping whitespace.
"},{"location":"swarms_memory/pinecone/#_default_postprocess_function","title":"_default_postprocess_function
","text":"def _default_postprocess_function(self, results: List[Dict[str, Any]]) -> List[Dict[str, Any]]\n
Postprocesses the query results.
"},{"location":"swarms_memory/pinecone/#add","title":"add
","text":"Adds a document to the Pinecone index.
Parameter Type Default Descriptiondoc
str
- The document to be added. metadata
Optional[Dict[str, Any]]
None
Additional metadata for the document."},{"location":"swarms_memory/pinecone/#query","title":"query
","text":"Queries the Pinecone index for similar documents.
Parameter Type Default Descriptionquery
str
- The query string. top_k
int
5
The number of top results to return. filter
Optional[Dict[str, Any]]
None
Metadata filter for the query."},{"location":"swarms_memory/pinecone/#usage","title":"Usage","text":"The PineconeMemory
class is initialized with the necessary parameters to configure Pinecone and the embedding model. It supports a variety of custom configurations to suit different needs.
from swarms_memory import PineconeMemory\n\n# Initialize PineconeMemory\nmemory = PineconeMemory(\n api_key=\"your-api-key\",\n environment=\"us-west1-gcp\",\n index_name=\"example-index\",\n dimension=768\n)\n
"},{"location":"swarms_memory/pinecone/#adding-documents","title":"Adding Documents","text":"Documents can be added to the Pinecone index using the add
method. The method accepts a document string and optional metadata.
doc = \"This is a sample document to be added to the Pinecone index.\"\nmetadata = {\"author\": \"John Doe\", \"date\": \"2024-07-08\"}\n\nmemory.add(doc, metadata)\n
"},{"location":"swarms_memory/pinecone/#querying-documents","title":"Querying Documents","text":"The query
method allows for querying the Pinecone index for similar documents based on a query string. It returns the top k
most similar documents.
query = \"Sample query to find similar documents.\"\nresults = memory.query(query, top_k=5)\n\nfor result in results:\n print(result)\n
"},{"location":"swarms_memory/pinecone/#additional-information-and-tips","title":"Additional Information and Tips","text":""},{"location":"swarms_memory/pinecone/#custom-embedding-and-preprocessing-functions","title":"Custom Embedding and Preprocessing Functions","text":"Custom embedding and preprocessing functions can be provided during initialization to tailor the document processing to specific requirements.
"},{"location":"swarms_memory/pinecone/#example_3","title":"Example","text":"def custom_embedding_function(text: str) -> List[float]:\n # Custom embedding logic\n return [0.1, 0.2, 0.3]\n\ndef custom_preprocess_function(text: str) -> str:\n # Custom preprocessing logic\n return text.lower()\n\nmemory = PineconeMemory(\n api_key=\"your-api-key\",\n environment=\"us-west1-gcp\",\n index_name=\"example-index\",\n embedding_function=custom_embedding_function,\n preprocess_function=custom_preprocess_function\n)\n
"},{"location":"swarms_memory/pinecone/#logger-configuration","title":"Logger Configuration","text":"The logger can be configured to suit different logging needs. The default configuration logs to a file and the console.
"},{"location":"swarms_memory/pinecone/#example_4","title":"Example","text":"logger_config = {\n \"handlers\": [\n {\"sink\": \"custom_log.log\", \"rotation\": \"1 MB\"},\n {\"sink\": lambda msg: print(msg, end=\"\")},\n ]\n}\n\nmemory = PineconeMemory(\n api_key=\"your-api-key\",\n environment=\"us-west1-gcp\",\n index_name=\"example-index\",\n logger_config=logger_config\n)\n
"},{"location":"swarms_memory/pinecone/#references-and-resources","title":"References and Resources","text":"For further exploration and examples, refer to the official documentation and resources provided by Pinecone, SentenceTransformers, and Loguru.
This concludes the detailed documentation for the PineconeMemory
class. The class offers a flexible and powerful interface for leveraging Pinecone's capabilities in retrieval-augmented generation systems. By supporting custom embeddings, preprocessing, and postprocessing functions, it can be tailored to a wide range of applications.
Welcome to the Swarms Platform, a dynamic ecosystem where users can share, discover, and host agents and agent swarms. This documentation will guide you through the various features of the platform, providing you with the information you need to get started and make the most out of your experience.
"},{"location":"swarms_platform/#table-of-contents","title":"Table of Contents","text":"The Swarms Platform is designed to facilitate the sharing, discovery, and hosting of intelligent agents and swarms of agents. Whether you are a developer looking to deploy your own agents, or an organization seeking to leverage collective intelligence, the Swarms Platform provides the tools and community support you need.
"},{"location":"swarms_platform/#getting-started","title":"Getting Started","text":"To begin using the Swarms Platform, follow these steps:
Access and manage your account settings through the account page.
Here, you can update your profile information, manage security settings, and configure notifications.
"},{"location":"swarms_platform/#usage-monitoring","title":"Usage Monitoring","text":""},{"location":"swarms_platform/#check-your-usage","title":"Check Your Usage","text":"Monitor your usage statistics to keep track of your activities and resource consumption on the platform.
This page provides detailed insights into your usage patterns, helping you optimize your resource allocation and stay within your limits.
"},{"location":"swarms_platform/#api-key-generation","title":"API Key Generation","text":""},{"location":"swarms_platform/#generate-your-api-keys","title":"Generate Your API Keys","text":"Generate API keys to securely interact with the Swarms Platform API.
Follow the steps on this page to create, manage, and revoke API keys as needed. Ensure that your keys are kept secure and only share them with trusted applications.
"},{"location":"swarms_platform/#explorer","title":"Explorer","text":""},{"location":"swarms_platform/#explorer-share-discover-and-deploy","title":"Explorer: Share, Discover, and Deploy","text":"The Explorer is a central hub for sharing, discovering, and deploying prompts, agents, and swarms.
Use the Explorer to:
The Dashboard is your control center for managing all aspects of your Swarms Platform experience.
From the Dashboard, you can:
Collaborate with others by creating and joining organizations on the Swarms Platform.
Creating an organization allows you to:
To further enhance your understanding and usage of the Swarms Platform, explore the following resources:
The Swarms Platform is a versatile and powerful ecosystem for managing intelligent agents and swarms. By following this documentation, you can effectively navigate the platform, leverage its features, and collaborate with others to create innovative solutions. Happy swarming!
"},{"location":"swarms_platform/account_management/","title":"Swarms Platform Account Management Documentation","text":"This guide provides comprehensive, production-grade documentation for managing your account on the Swarms Platform. It covers account settings, profile management, billing, payment methods, subscription details, and cryptocurrency wallet management. Use this documentation to navigate the account management interface, understand available options, and perform account-related operations efficiently and securely.
"},{"location":"swarms_platform/account_management/#table-of-contents","title":"Table of Contents","text":"The Swarms Platform account management page, available at https://swarms.world/platform/account, allows you to configure and update your account settings and preferences. From here, you can manage the appearance of the platform, view and update profile details, manage your billing information and subscriptions, and handle your cryptocurrency wallet operations.
"},{"location":"swarms_platform/account_management/#accessing-the-account-management-page","title":"Accessing the Account Management Page","text":"To access your account management dashboard: 1. Log in to your Swarms Platform account. 2. Navigate to https://swarms.world/platform/account.
Once on this page, you will see several sections dedicated to different aspects of your account:
This section allows you to modify your personal account preferences, including the visual theme.
"},{"location":"swarms_platform/account_management/#theme-mode","title":"Theme Mode","text":"You can choose between different theme options to tailor your user experience:
Sync with System Theme: Automatically adjusts the platform theme to match your system's theme settings.
Select the theme mode that best fits your workflow. Changes are applied immediately across the platform.
"},{"location":"swarms_platform/account_management/#profile-management","title":"Profile Management","text":""},{"location":"swarms_platform/account_management/#profile-information","title":"Profile Information","text":"The Profile section allows you to view and update your personal details:
View Details: Your current profile information is displayed, including contact details, username, and any additional settings.
Manage Profile: Options to update your information, ensuring your account details remain current.
For security purposes, it is important to regularly update your password:
The Billing section helps you manage financial aspects of your account, including credits, invoices, and subscriptions.
"},{"location":"swarms_platform/account_management/#subscription-status","title":"Subscription Status","text":"Your subscription details are clearly displayed:
Current Plan: Options include Free, Premium, or Enterprise.
Status: The active subscription status is indicated (e.g., \"Active\").
Customer Portal: An option to open the customer portal for additional billing and subscription management.
Manage your payment methods and review your billing details:
Expiry Date: 2030/2
Add Card: Use the \"Add Card\" option to register a new payment method securely.
Details of the credits available for your account:
Credits Available: Displays the current credit balance (e.g., $20.00
).
Charge: Option to apply charges against your available credits.
Invoice: Review or download your invoices.
The Crypto section provides management tools for your cryptocurrency wallet and associated transactions.
"},{"location":"swarms_platform/account_management/#wallet-overview","title":"Wallet Overview","text":"Example:
Wallet Address: Shown in truncated form (e.g., EmVa...79Vb).
$swarms Balance and Price: Displays your current $swarms balance (e.g., 0.00) and the current token price (e.g., $0.0400).
Exchange Functionality: Option to exchange $swarms tokens for credits directly through the platform.
Transaction History: View a detailed log of wallet transactions, ensuring full transparency over all exchanges and wallet activity.
For further assistance or to learn more about managing your account on the Swarms Platform, refer to the following resources:
Regular Updates: Periodically review your account settings, profile, and payment methods to ensure they are up-to-date.
Security Measures: Always use strong, unique passwords and consider enabling two-factor authentication if available.
Monitor Transactions: Regularly check your billing and wallet transaction history to detect any unauthorized activities promptly.
This document provides detailed information on managing API keys within the Swarms Platform. API keys grant programmatic access to your account and should be handled securely. Follow the guidelines below to manage your API keys safely and effectively.
"},{"location":"swarms_platform/apikeys/#table-of-contents","title":"Table of Contents","text":"API keys are unique credentials that allow you to interact with the Swarms Platform programmatically. These keys enable you to make authenticated API requests to access or modify your data. Important: Once a secret API key is generated, it will not be displayed again. Ensure you store it securely, as it cannot be retrieved from the platform later.
"},{"location":"swarms_platform/apikeys/#viewing-your-api-keys","title":"Viewing Your API Keys","text":"When you navigate to the API Keys page (https://swarms.world/platform/api-keys), you will see a list of your API keys along with the following information:
"},{"location":"swarms_platform/apikeys/#key-details","title":"Key Details:","text":"To generate a new API key, follow these steps:
Attach a Credit Card: Before creating a new API key, ensure that your account has a credit card attached. This is required for authentication and billing purposes.
Access the API Keys Page: Navigate to https://swarms.world/platform/api-keys.
Generate a New Key: Click on the \"Create new API key\" button. The system will generate a new secret API key for your account.
Store Your API Key Securely: Once generated, the full API key will be displayed only once. Copy and store it in a secure location, as it will not be displayed again. Note: Do not share your API key with anyone or expose it in any client-side code (e.g., browser JavaScript).
Confidentiality: Your API keys are sensitive credentials. Do not share them with anyone or include them in public repositories or client-side code.
Storage: Store your API keys in secure, encrypted storage. Avoid saving them in plain text files or unsecured locations.
Rotation: If you suspect that your API key has been compromised, immediately delete it and create a new one.
Access Control: Limit access to your API keys to only those systems and personnel who absolutely require it.
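One common way to follow these guidelines is to read the key from an environment variable at runtime instead of hard-coding it. A minimal sketch, assuming an environment variable named SWARMS_API_KEY (an illustrative name, not a platform requirement):

```python
import os

# Illustrative sketch: load the API key from an environment variable
# rather than embedding it in source code. The variable name
# SWARMS_API_KEY is an assumption, not a platform-mandated name.
api_key = os.environ.get("SWARMS_API_KEY", "")
if not api_key:
    print("SWARMS_API_KEY is not set; export it before making API calls.")
```

The same approach keeps keys out of version control and lets you rotate them without changing code.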
A: The requirement to attach a credit card helps verify your identity and manage billing, ensuring responsible usage of the API services provided by the Swarms Platform.
"},{"location":"swarms_platform/apikeys/#q2-what-happens-if-i-lose-my-api-key","title":"Q2: What happens if I lose my API key?","text":"A: If you lose your API key, you will need to generate a new one. The platform does not store the full key after its initial generation, so recovery is not possible.
"},{"location":"swarms_platform/apikeys/#q3-how-can-i-delete-an-api-key","title":"Q3: How can I delete an API key?","text":"A: On the API Keys page, locate the key you wish to delete and click the \"Delete\" action next to it. This will revoke the key's access immediately.
"},{"location":"swarms_platform/apikeys/#q4-can-i-have-multiple-api-keys","title":"Q4: Can I have multiple API keys?","text":"A: Yes, you can generate and manage multiple API keys. Use naming conventions to keep track of their usage and purpose.
For any further questions or issues regarding API key management, please refer to our Help Center or contact our support team.
"},{"location":"swarms_platform/apps_page/","title":"Swarms Marketplace Apps Documentation","text":"The Swarms Apps page (https://swarms.world/apps
) is your central hub for managing and customizing your workspace experience. Here you can control which applications appear in your sidebar, organize them using templates, and quickly access the tools you need for different workflows.
The Apps Gallery allows you to curate your workspace by selecting which applications you want to see in your sidebar. This personalized approach ensures you have quick access to the tools most relevant to your work.
Key Features:
Selective App Display: Choose exactly which apps appear in your sidebar
Favorites System: Star your most-used apps to pin them for instant access
Quick Access: Starred apps remain easily accessible regardless of your current template
Templates provide pre-configured app collections optimized for specific workflows. Instead of manually selecting apps one by one, you can choose a template that matches your current needs.
"},{"location":"swarms_platform/apps_page/#available-templates","title":"Available Templates","text":""},{"location":"swarms_platform/apps_page/#marketplace-template","title":"\ud83c\udfea Marketplace Template","text":"Perfect for discovering and managing marketplace content
Included Apps:
Marketplace: Browse and discover new tools, agents, and prompts
App Store: Access autonomous AI applications
Leaderboard: View top creators and contributors
Dashboard: Your main control center
Settings: Account and organization configuration
Best For: Content discovery, community engagement, platform exploration
"},{"location":"swarms_platform/apps_page/#no-code-solutions-template","title":"\ud83c\udfa8 No-Code Solutions Template","text":"Ideal for users who prefer visual, drag-and-drop interfaces
Included Apps: - Dashboard: Central control and overview
Chat: Direct communication with agents and team members
Spreadsheet: Collaborative AI-powered spreadsheets
Drag n Drop: Visual workflow builder for creating processes
Settings: Platform configuration options
Best For: Visual workflow creation, collaborative work, rapid prototyping
"},{"location":"swarms_platform/apps_page/#developer-template","title":"\ud83d\udc68\u200d\ud83d\udcbb Developer Template","text":"Designed for technical users and developers
Included Apps: - Dashboard: System overview and monitoring
API Key: Manage authentication credentials
Telemetry: Monitor platform usage and analytics
Settings: Advanced configuration options
Playground: Testing and experimentation environment
Best For: API integration, performance monitoring, technical development
"},{"location":"swarms_platform/apps_page/#all-apps-template","title":"\ud83d\udcf1 All Apps Template","text":"Comprehensive access to every available application
Features: - Complete Access: Activates all available apps in your sidebar
Maximum Flexibility: Switch between any tool without reconfiguration
Full Feature Set: Access to every platform capability
Best For: Power users, comprehensive workflows, exploration of all features
"},{"location":"swarms_platform/apps_page/#app-categories","title":"App Categories","text":""},{"location":"swarms_platform/apps_page/#marketplace-applications","title":"Marketplace Applications","text":"These apps focus on content discovery, community interaction, and marketplace functionality.
"},{"location":"swarms_platform/apps_page/#dashboard","title":"Dashboard","text":"Your primary control center providing system overview, key metrics, and quick access to important functions.
"},{"location":"swarms_platform/apps_page/#marketplace","title":"Marketplace","text":"Discover and utilize new tools, agents, and prompts created by the community. Browse categories, read reviews, and integrate new capabilities into your workflows.
"},{"location":"swarms_platform/apps_page/#app-store","title":"App Store","text":"Access a curated collection of autonomous AI applications. These are complete solutions that can operate independently to accomplish specific tasks.
"},{"location":"swarms_platform/apps_page/#leaderboard","title":"Leaderboard","text":"View rankings of top creators, contributors, and most popular content. Discover trending tools and identify influential community members.
"},{"location":"swarms_platform/apps_page/#marketplace-bookmarks","title":"Marketplace Bookmarks","text":"Organize and manage your saved marketplace items. Keep track of tools you want to try later or frequently reference.
"},{"location":"swarms_platform/apps_page/#no-code-agent-platforms","title":"No Code Agent Platforms","text":"Visual, user-friendly tools that don't require programming knowledge.
Application Description Apps Meta-application for managing your sidebar configuration. Add, remove, and organize your available applications. Chat Real-time communication interface for conversing with AI agents and team members. Supports both individual and group conversations. Spreadsheet Swarm AI-enhanced collaborative spreadsheets that combine familiar spreadsheet functionality with intelligent automation and team collaboration features. Drag & Drop Visual workflow builder allowing you to create complex processes using intuitive drag-and-drop interfaces. Connect different tools and actions without coding."},{"location":"swarms_platform/apps_page/#account-settings","title":"Account Settings","text":"Configuration and management tools for your account and organization.
Application Description API Keys Secure management of your authentication credentials. Generate, rotate, and manage API keys for integrating external services. Telemetry Comprehensive analytics dashboard showing platform usage, performance metrics, and usage patterns. Monitor your organization's AI agent activity. Settings Central configuration hub for account preferences, organization settings, notification preferences, and platform customization options. Playground Safe testing environment for experimenting with new configurations, testing API calls, and prototyping workflows before implementing them in production."},{"location":"swarms_platform/apps_page/#best-practices","title":"Best Practices","text":""},{"location":"swarms_platform/apps_page/#template-selection-strategy","title":"Template Selection Strategy","text":"Start with Templates: Begin with a template that matches your primary use case, then customize as needed.
Regular Review: Periodically reassess your app selection as your needs evolve.
Workflow-Specific: Consider switching templates based on current projects or tasks.
"},{"location":"swarms_platform/apps_page/#app-management-tips","title":"App Management Tips","text":"Star Strategically: Only star apps you use daily to avoid sidebar clutter.
Template Switching: Don't hesitate to switch templates when your focus changes.
Exploration: Periodically activate the \"All\" template to discover new capabilities.
"},{"location":"swarms_platform/apps_page/#organization-recommendations","title":"Organization Recommendations","text":"Role-Based Setup: Configure templates based on team roles (developers, content creators, analysts).
Project Phases: Adjust app selection based on project phases (research, development, deployment).
Performance Monitoring: Use telemetry data to optimize your app selection over time.
"},{"location":"swarms_platform/apps_page/#getting-started","title":"Getting Started","text":"https://swarms.world/apps
The Apps page puts you in complete control of your Swarms experience, ensuring you have the right tools at your fingertips for any task or workflow.
"},{"location":"swarms_platform/monetize/","title":"Swarms.World Monetization Guide","text":""},{"location":"swarms_platform/monetize/#quick-overview","title":"Quick Overview","text":"Swarms Marketplace has activated its payment infrastructure, enabling creators to monetize AI agents, prompts, and tools directly through the platform. Sellers receive payments minus a 5-15% platform fee, scaled based on subscription tiers. Revenue accrues in real-time to integrated crypto wallets, with optional fiat conversions.
"},{"location":"swarms_platform/monetize/#eligibility-requirements","title":"Eligibility Requirements","text":""},{"location":"swarms_platform/monetize/#current-requirements-for-paid-content","title":"Current Requirements for Paid Content","text":"2+ published items (Prompts, Agents, and Tools)
2 Items with 4+ star ratings (you need community ratings)
Marketplace Agent Rating: An agent will automatically rate your prompt, agent, or tool.
Bottom Line: You must build reputation with free, high-quality content first.
"},{"location":"swarms_platform/monetize/#step-by-step-process","title":"Step-by-Step Process","text":""},{"location":"swarms_platform/monetize/#phase-1-build-reputation-required-first","title":"Phase 1: Build Reputation (Required First)","text":""},{"location":"swarms_platform/monetize/#1-improve-your-existing-content","title":"1. Improve Your Existing Content","text":"Add better descriptions and examples to your published items
Use the Rating System: Evaluate and rate prompts, agents, and tools based on their effectiveness. Commenting System: Share feedback and insights with the Swarms community
Ask users for honest reviews and ratings
Focus on these categories:
Agents: Marketing, finance, or programming automation
Prompts: Templates for specific business tasks
Tools: Utilities that solve real problems
Target: 3-5 additional items, all aiming for 4+ star ratings
"},{"location":"swarms_platform/monetize/#3-get-community-ratings","title":"3. Get Community Ratings","text":"Share your content in relevant communities
Engage with users who try your content
Respond to feedback and improve based on comments
Be patient - ratings take time to accumulate
Three primary monetization avenues exist: AI agents (autonomous task-execution models), prompts (pre-optimized input templates), and tools (development utilities like data preprocessors)
Pricing Options:
One-time: $0.01 - $999,999 USD
Subscription: Monthly/annual recurring fees (Coming Soon)
Usage-based: Pay per API call or computation (Coming Soon)
Monitor your revenue and user feedback
Developers can bundle assets\u2014such as pairing prompt libraries with compatible agents\u2014creating value-added packages
Create bundles of related content for higher value
Adjust pricing based on demand
Business Automation Agents - Marketing, sales, finance
Industry-Specific Prompts - Legal, medical, technical writing
Integration Tools - APIs, data processors, connectors
Simple prompts: $1-50
Complex agents: $20-500+
Enterprise tools: $100-1000+
Publishing low-quality content to meet quantity requirements
Not responding to user feedback
Setting prices too high before building reputation
Copying existing solutions without adding value
Ignoring community guidelines
The Swarms Playground (https://swarms.world/platform/playground
) is an interactive testing environment that allows you to experiment with the Swarms API in real-time. This powerful tool enables you to configure AI agents, test different parameters, and generate code examples in multiple programming languages without writing any code manually.
Real-time API Testing: Execute Swarms API calls directly in the browser
Multi-language Code Generation: Generate code examples in Python, Rust, Go, and TypeScript
Interactive Configuration: Visual interface for setting up agent parameters
Live Output: See API responses immediately in the output terminal
Code Export: Copy generated code for use in your applications
The playground supports code generation in four programming languages:
Python: Default language with requests
library implementation
Rust: Native Rust HTTP client implementation
Go: Standard Go HTTP package implementation
TypeScript: Node.js/browser-compatible implementation
Switch between languages using the dropdown menu in the top-right corner to see language-specific code examples.
"},{"location":"swarms_platform/playground_page/#agent-modes","title":"Agent Modes","text":"The playground offers two distinct modes for testing different types of AI implementations:
"},{"location":"swarms_platform/playground_page/#single-agent-mode","title":"Single Agent Mode","text":"Test individual AI agents with specific configurations and tasks. Ideal for: - Prototype testing
Parameter optimization
Simple task automation
API familiarization
Experiment with coordinated AI agent systems. Perfect for: - Complex workflow automation
Collaborative AI systems
Distributed task processing
Advanced orchestration scenarios
Purpose: Unique identifier for your agent Usage: Helps distinguish between different agent configurations Example: \"customer_service_bot\"
, \"data_analyst\"
, \"content_writer\"
Purpose: Specifies which AI model to use for the agent Default: gpt-4o-mini
Options: Various OpenAI and other supported models Impact: Affects response quality, speed, and cost
Purpose: Human-readable description of the agent's purpose Usage: Documentation and identification Best Practice: Be specific about the agent's intended function
"},{"location":"swarms_platform/playground_page/#system-prompt","title":"System Prompt","text":"Purpose: Core instructions that define the agent's behavior and personality Impact: Critical for agent performance and response style Tips: - Be clear and specific
Include role definition
Specify output format if needed
Add relevant constraints
Range: 0.0 - 2.0
Default: 0.5 Purpose: Controls randomness in responses - Low (0.0-0.3): More deterministic, consistent responses
Medium (0.4-0.7): Balanced creativity and consistency
High (0.8-2.0): More creative and varied responses
Default: 8192 Purpose: Maximum length of the agent's response Considerations: - Higher values allow longer responses
Impacts API costs
Model-dependent limits apply
Default: worker
Purpose: Defines the agent's role in multi-agent scenarios Common Roles: worker
, manager
, coordinator
, specialist
Default: 1 Purpose: Number of iterations the agent can perform Usage: - 1
: Single response
>1
: Allows iterative problem solving
Purpose: Model Context Protocol URL for external integrations Usage: Connect to external services or data sources Format: Valid URL pointing to MCP-compatible service
"},{"location":"swarms_platform/playground_page/#task-definition","title":"Task Definition","text":""},{"location":"swarms_platform/playground_page/#task","title":"Task","text":"Purpose: Specific instruction or query for the agent to process Best Practices: - Be specific and clear
Include all necessary context
Specify desired output format
Provide examples when helpful
Temperature Testing: Try different temperature values to find optimal creativity levels
Prompt Engineering: Iterate on system prompts to improve responses
Token Optimization: Adjust max_tokens based on expected response length
Start Simple: Begin with basic tasks and gradually increase complexity
Iterative Refinement: Use playground results to refine your approach
Documentation: Keep notes on successful configurations
The Output Terminal displays:
Agent Responses: Direct output from the AI agent
Error Messages: API errors or configuration issues
Execution Status: Success/failure indicators
Response Metadata: Token usage, timing information
The Code Preview section shows:
Complete Implementation: Ready-to-use code in your selected language
API Configuration: Proper headers and authentication setup
Request Structure: Correctly formatted payload
Response Handling: Basic error handling and output processing
import requests\n\nurl = \"https://swarms-api-285321057562.us-east1.run.app/v1/agent/completions\"\nheaders = {\n \"Content-Type\": \"application/json\",\n \"x-api-key\": \"your-api-key-here\"\n}\n\npayload = {\n \"agent_config\": {\n \"agent_name\": \"example_agent\",\n \"description\": \"Example agent for demonstration\",\n \"system_prompt\": \"You are a helpful assistant.\",\n \"model_name\": \"gpt-4o-mini\",\n \"auto_generate_prompt\": False,\n \"max_tokens\": 8192,\n \"temperature\": 0.5,\n \"role\": \"worker\",\n \"max_loops\": 1,\n \"tools_list_dictionary\": None,\n \"mcp_url\": None\n },\n \"task\": \"Explain quantum computing in simple terms\"\n}\n\nresponse = requests.post(url, json=payload, headers=headers)\nprint(response.json())\n
"},{"location":"swarms_platform/playground_page/#key-code-components","title":"Key Code Components","text":""},{"location":"swarms_platform/playground_page/#api-endpoint","title":"API Endpoint","text":"URL: https://swarms-api-285321057562.us-east1.run.app/v1/agent/completions
Method: POST
Authentication: API key in x-api-key
header
Headers: Content-Type and API key
Payload: Agent configuration and task
Response: JSON with agent output and metadata
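A minimal sketch of handling that JSON response body. The "outputs" and "error" field names below are illustrative assumptions, not the documented Swarms response schema:

```python
import json

# Minimal sketch of parsing a completions response body. The "error"
# and "outputs" field names are illustrative assumptions, not the
# documented Swarms response schema.
def parse_completion(body: str) -> dict:
    data = json.loads(body)
    if "error" in data:
        raise RuntimeError(f"API error: {data['error']}")
    return data

result = parse_completion('{"outputs": "Quantum computing uses qubits."}')
```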
API Key Management: Never expose API keys in client-side code
Environment Variables: Store sensitive credentials securely
Rate Limiting: Respect API rate limits in production
Parameter Tuning: Optimize temperature and max_tokens for your use case
Prompt Engineering: Craft efficient system prompts
Caching: Implement response caching for repeated queries
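One simple caching sketch for repeated identical queries, using the standard library. The call_agent helper is hypothetical and stands in for the HTTP request shown in the generated code example; it is not part of the Swarms SDK:

```python
from functools import lru_cache

# Illustrative caching sketch: memoize completions for repeated identical
# tasks. call_agent is a hypothetical stand-in for the POST request to
# the completions endpoint, not a Swarms SDK function.
@lru_cache(maxsize=128)
def call_agent(task: str) -> str:
    # A real implementation would POST the task to the API here.
    return f"response for: {task}"

first = call_agent("Explain quantum computing in simple terms")
second = call_agent("Explain quantum computing in simple terms")  # cache hit
```

Note that memoization only helps when the exact same task string recurs and the response can safely be reused; for nondeterministic or time-sensitive tasks, skip caching or add an expiry.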
Prototype in Playground: Test configurations before implementation
Document Successful Configs: Save working parameter combinations
Iterate and Improve: Use playground for continuous optimization
Check API Key: Ensure valid API key is configured
Verify Parameters: All required fields must be filled
Network Issues: Check internet connection
Review System Prompt: Ensure clear instructions
Adjust Temperature: Try different creativity levels
Check Task Definition: Verify task clarity and specificity
Language Selection: Ensure correct language is selected
Copy Functionality: Use the \"Copy Code\" button for accurate copying
Syntax Validation: Test generated code in your development environment
The Swarms Playground is your gateway to understanding and implementing the Swarms API effectively. Use it to experiment, learn, and build confidence before deploying AI agents in production environments.
"},{"location":"swarms_platform/share_and_discover/","title":"Swarms Marketplace Documentation","text":"The Swarms Marketplace (https://swarms.world
) is a vibrant community hub where developers, researchers, and agent enthusiasts share and discover cutting-edge agent tools, agents, and prompts. This collaborative platform empowers you to leverage the collective intelligence of the Swarms community while contributing your own innovations.
Ready-to-use AI agents for specific tasks and industries:
Specialized Agents: From healthcare diagnostics to financial analysis
Multi-Agent Systems: Collaborative agent swarms for complex workflows
Industry Solutions: Pre-built agents for healthcare, finance, education, and more
Custom Implementations: Unique agent architectures and approaches
System prompts and instructions that define agent behavior:
Role-Specific Prompts: Behavioral psychologist, documentation specialist, financial advisor
System Templates: Production-grade prompts for various use cases
Collaborative Frameworks: Multi-agent coordination prompts
Task-Specific Instructions: Optimized prompts for specific workflows
APIs, integrations, and utilities that extend agent capabilities:
API Integrations: Connect to external services and data sources
Data Fetchers: Tools for retrieving information from various platforms
Workflow Utilities: Helper functions and automation tools
Communication Tools: Integrations with messaging platforms and services
Industry Categories:
Healthcare: Medical diagnosis, patient care, research tools
Education: Learning assistants, curriculum development, assessment tools
Finance: Trading bots, market analysis, financial planning
Research: Academic paper fetchers, data analysis, literature review
Public Safety: Risk assessment, emergency response, safety monitoring
Marketing: Content creation, campaign optimization, audience analysis
Sales: Lead generation, customer engagement, sales automation
Customer Support: Chatbots, issue resolution, knowledge management
Discover the most popular and highly-rated content in the community:
Top-Rated Items: Content with 5-star ratings from users
Community Favorites: Most shared and downloaded items
Recent Additions: Latest contributions to the marketplace
Featured Content: Curated selections highlighting exceptional work
Keyword Search: Find specific tools, agents, or prompts by name or description
Category Filters: Browse within specific industry verticals
Rating Filters: Filter by community ratings and reviews
Tag-Based Discovery: Explore content by relevant tags and keywords
\ud83c\udf1f Community Impact
Help fellow developers solve similar challenges
Contribute to the collective advancement of agent technology
Build your reputation in the agent community
\ud83d\udcc8 Professional Growth
Showcase your expertise and innovative solutions
Receive feedback and suggestions from the community
Network with like-minded professionals and researchers
\ud83d\udd04 Knowledge Exchange
Learn from others who use and modify your contributions
Discover new approaches and improvements to your work
Foster collaborative innovation and problem-solving
\ud83c\udfc6 Recognition
Get credited for your contributions with author attribution
Build a portfolio of public agent implementations
Gain visibility in the growing Swarms ecosystem
Prompts are the foundation of agent behavior - share your carefully crafted instructions with the community.
Step-by-Step Process:
Fill Required Fields:
Name: Descriptive title that clearly indicates the prompt's purpose
Description: Detailed explanation of what the prompt does and when to use it
Prompt: The complete system prompt or instruction text
Enhance Your Submission:
Add Image: Upload a visual representation (up to 60MB)
Select Categories: Choose relevant industry categories
Add Tags: Include searchable keywords and descriptors
Submit: Review and submit your prompt to the community
Best Practices for Prompts:
Be Specific: Clearly define the agent's role and expected behavior
Include Context: Provide background information and use case scenarios
Test Thoroughly: Ensure your prompt produces consistent, high-quality results
Document Parameters: Explain any variables or customization options
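The best practices above can be sketched as a concrete submission. This is a minimal sketch in Python assuming the add-prompt API fields documented later in these docs (name, description, prompt, useCases, tags); the role, prompt text, and tag values are hypothetical examples, not an official template.

```python
# Minimal sketch of a prompt submission that follows the best practices
# above. Field names match the add-prompt API documented later in these
# docs; all example values are hypothetical.
prompt_submission = {
    # Be Specific: the role and expected behavior are stated up front.
    "name": "Support Ticket Triage Specialist",
    # Include Context: background information and the use case scenario.
    "description": (
        "Classifies incoming support tickets by urgency and routes them. "
        "Use for first-pass triage of a shared support queue."
    ),
    "prompt": (
        "You are a support ticket triage specialist. For each ticket, "
        "return JSON with keys 'urgency' (low|medium|high) and "
        "'route' (billing|technical|general). Judge urgency only from "
        "the impact described in the ticket."
    ),
    "useCases": [
        {
            "title": "Queue triage",
            "description": "Consistent first-pass routing of support tickets",
        },
    ],
    # Add Tags: searchable keywords and descriptors.
    "tags": "customer support, triage, classification",
}

# Document Parameters: spell out what a user is expected to customize.
CUSTOMIZABLE = ["urgency levels", "routing targets"]
```

Testing thoroughly then means running the prompt against representative tickets and checking that the output stays in the documented JSON shape.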
Agents are complete agent implementations - share your working solutions with the community.
Step-by-Step Process:
Name: Clear, descriptive agent name
Description: Comprehensive explanation of functionality and use cases
Agent Code: Complete, working implementation
Language: Select the programming language (Python, etc.)
Optimize Discoverability:
Categories: Choose appropriate industry verticals
Image: Add a representative image or diagram
Tags: Include relevant keywords for searchability
Submit: Finalize and share your agent with the community
Agent Submission Guidelines:
Complete Implementation: Provide fully functional, tested code
Clear Documentation: Include usage instructions and configuration details
Error Handling: Implement robust error handling and validation
Dependencies: List all required libraries and dependencies
Examples: Provide usage examples and expected outputs
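The guidelines above can be illustrated with a minimal sketch of a submission-ready agent: complete, documented, with error handling, a stated dependency list, and a usage example with expected output. The agent itself is a hypothetical placeholder, not part of the Swarms API.

```python
# Minimal sketch of a submission-ready agent following the guidelines
# above. Hypothetical example; dependencies: Python standard library only.
import json


def summarize_numbers(raw: str) -> str:
    """Summarize a JSON list of numbers.

    Usage example (expected output documented per the guidelines):
        summarize_numbers("[1, 2, 3]")
        -> '{"min": 1.0, "max": 3.0, "mean": 2.0}'
    """
    # Error Handling: validate input and return a structured error
    # instead of crashing on malformed payloads.
    try:
        numbers = [float(n) for n in json.loads(raw)]
        if not numbers:
            raise ValueError("expected a non-empty JSON list of numbers")
    except (json.JSONDecodeError, TypeError, ValueError) as exc:
        return json.dumps({"error": str(exc)})
    return json.dumps(
        {"min": min(numbers), "max": max(numbers),
         "mean": sum(numbers) / len(numbers)}
    )
```

The same shape scales up: a real submission would add its pip-installable dependencies to the requirements list and keep the docstring's usage example current.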
Tools extend the capabilities of the Swarms ecosystem - share your integrations and utilities.
What Makes a Great Tool:
Solves Real Problems: Addresses common pain points or workflow gaps
Easy Integration: Simple to implement and configure
Well Documented: Clear instructions and examples
Reliable Performance: Tested and optimized for production use
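As a sketch of what an easy-to-integrate, well-documented tool can look like, here is a hypothetical data-fetcher helper; the endpoint URL and parameter names are illustrative only, not a real Swarms integration.

```python
# Minimal sketch of a marketplace tool: a small, documented data-fetcher
# helper. The endpoint and parameter names are hypothetical.
from urllib.parse import urlencode


def build_quote_url(symbol: str,
                    base_url: str = "https://api.example.com/quote") -> str:
    """Build the request URL for a price-quote lookup.

    Parameters:
        symbol: ticker symbol to look up, e.g. "AAPL".
        base_url: quote endpoint (hypothetical default).
    """
    # Easy Integration: one call, validated input, no hidden state.
    if not symbol or not symbol.isalnum():
        raise ValueError("symbol must be a non-empty alphanumeric string")
    return f"{base_url}?{urlencode({'symbol': symbol.upper()})}"
```

A caller (or an agent) then fetches the returned URL with whatever HTTP client it already uses, which keeps the tool itself dependency-free and simple to test.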
For All Submissions:
Start with Purpose: Lead with what your contribution does
Explain Benefits: Highlight the value and use cases
Include Technical Details: Mention key features and capabilities
Provide Context: Explain when and why to use your contribution
Example Description Structure:
[Brief summary of what it does]\n\nKey Features:\n- [Feature 1 with benefit]\n\n- [Feature 2 with benefit]\n\n- [Feature 3 with benefit]\n\n\nUse Cases:\n- [Scenario 1]\n\n- [Scenario 2]\n\n- [Scenario 3]\n\n\nTechnical Details:\n- [Implementation notes]\n\n- [Requirements or dependencies]\n\n- [Configuration options]\n
"},{"location":"swarms_platform/share_and_discover/#choosing-categories-and-tags","title":"Choosing Categories and Tags","text":"Categories:
Select all relevant industry verticals
Consider cross-industry applications
Choose the primary category first
Tags:
Include technical keywords (API names, frameworks, models)
Add functional descriptors (automation, analysis, generation)
Include use case keywords (customer service, data processing, content creation)
Use common terminology that others would search for
Image Guidelines:
File Size: Maximum 60MB supported
Recommended Types: Screenshots, diagrams, logos, workflow illustrations
Quality: High-resolution images that clearly represent your contribution
Content: Visual representations of functionality, architecture, or results
As a User:
Rate content honestly based on quality and usefulness
Leave constructive feedback to help creators improve
Share your experiences and modifications
As a Creator:
Respond to feedback and questions
Update your submissions based on community input
Engage with users who implement your solutions
Consistency: Regularly contribute high-quality content
Responsiveness: Engage with community feedback and questions
Innovation: Share unique approaches and creative solutions
Collaboration: Build upon and improve existing community contributions
"},{"location":"swarms_platform/share_and_discover/#what-makes-content-successful","title":"What Makes Content Successful","text":"Clear Value Proposition: Immediately obvious benefits and use cases
Production Ready: Fully functional, tested implementations
Good Documentation: Clear instructions and examples
Active Maintenance: Regular updates and community engagement
Unique Approach: Novel solutions or creative implementations
"},{"location":"swarms_platform/share_and_discover/#getting-started","title":"Getting Started","text":""},{"location":"swarms_platform/share_and_discover/#for-new-contributors","title":"For New Contributors","text":"\u2705 Test your contribution thoroughly
\u2705 Write clear, comprehensive documentation
\u2705 Choose appropriate categories and tags
\u2705 Create or find a representative image
\u2705 Review similar existing content
\u2705 Monitor for community feedback
\u2705 Respond to questions and comments
\u2705 Update based on user suggestions
\u2705 Share your contribution on social platforms
\u2705 Continue improving and iterating
The Swarms Marketplace thrives on community participation. Whether you're sharing a simple prompt or a complex multi-agent system, your contribution makes the entire ecosystem stronger. Start exploring, contributing, and collaborating today!
Ready to contribute? Visit https://swarms.world
and click \"Add Prompt,\" \"Add Agent,\" or \"Add Tool\" to share your innovation with the world.
Together, we're building the future of agent collaboration, one contribution at a time.
"},{"location":"swarms_platform/agents/agents_api/","title":"Agents API Documentation","text":"The https://swarms.world/api/add-agent
endpoint allows users to add a new agent to the Swarms platform. This API accepts a POST request with a JSON body containing details of the agent, such as its name, description, use cases, language, tags and requirements. The request must be authenticated using an API key.
https://swarms.world/api/add-agent
application/json
The request body should be a JSON object with the following attributes:
Attribute Type Description Requiredname
string
The name of the agent. Yes agent
string
The agent text. Yes description
string
A brief description of the agent. Yes language
string
The agent's syntax language with a default of python No useCases
array
An array of use cases, each containing a title and description. Yes requirements
array
An array of requirements, each containing a package name and installation. Yes tags
string
Comma-separated tags for the agent. Yes"},{"location":"swarms_platform/agents/agents_api/#usecases-structure","title":"useCases
Structure","text":"Each use case in the useCases
array should be an object with the following attributes:
title
string
The title of the use case. Yes description
string
A brief description of the use case. Yes"},{"location":"swarms_platform/agents/agents_api/#requirements-structure","title":"requirements
Structure","text":"Each requirement in the requirements
array should be an object with the following attributes:
package
string
The name of the package. Yes installation
string
Installation command for the package Yes"},{"location":"swarms_platform/agents/agents_api/#example-usage","title":"Example Usage","text":""},{"location":"swarms_platform/agents/agents_api/#python","title":"Python","text":"import requests\nimport json\nimport os\n\n\nurl = \"https://swarms.world/api/add-agent\"\n\nheaders = {\n \"Content-Type\": \"application/json\",\n \"Authorization\": f\"Bearer {os.getenv('SWARMS_API_KEY')}\"\n}\n\ndata = {\n \"name\": \"Example Agent\",\n \"agent\": \"This is an example agent from an API route.\",\n \"description\": \"Description of the agent.\",\n \"language\": \"python\",\n \"useCases\": [\n {\"title\": \"Use case 1\", \"description\": \"Description of use case 1\"},\n {\"title\": \"Use case 2\", \"description\": \"Description of use case 2\"}\n ],\n \"requirements\": [\n {\"package\": \"pip\", \"installation\": \"pip install\"},\n {\"package\": \"pip3\", \"installation\": \"pip3 install\"}\n ],\n \"tags\": \"example, agent\"\n}\n\nresponse = requests.post(url, headers=headers, data=json.dumps(data))\nprint(response.json())\n
"},{"location":"swarms_platform/agents/agents_api/#nodejs","title":"Node.js","text":"const fetch = require(\"node-fetch\");\n\nasync function addAgentHandler() {\n try {\n const response = await fetch(\"https://swarms.world/api/add-agent\", {\n method: \"POST\",\n headers: {\n \"Content-Type\": \"application/json\",\n Authorization: \"Bearer {apiKey}\",\n },\n body: JSON.stringify({\n name: \"Example Agent\",\n agent: \"This is an example agent from an API route.\",\n description: \"Description of the agent.\",\n language: \"python\",\n useCases: [\n { title: \"Use case 1\", description: \"Description of use case 1\" },\n { title: \"Use case 2\", description: \"Description of use case 2\" },\n ],\n requirements: [\n { package: \"pip\", installation: \"pip install\" },\n { package: \"pip3\", installation: \"pip3 install\" },\n ],\n tags: \"example, agent\",\n }),\n });\n\n const result = await response.json();\n console.log(result);\n } catch (error) {\n console.error(\"An error has occurred\", error);\n }\n}\n\naddAgentHandler();\n
"},{"location":"swarms_platform/agents/agents_api/#go","title":"Go","text":"package main\n\nimport (\n \"bytes\"\n \"encoding/json\"\n \"fmt\"\n \"net/http\"\n)\n\nfunc main() {\n url := \"https://swarms.world/api/add-agent\"\n payload := map[string]interface{}{\n \"name\": \"Example Agent\",\n \"agent\": \"This is an example agent from an API route.\",\n \"description\": \"Description of the agent.\",\n \"language\": \"python\",\n \"useCases\": []map[string]string{\n {\"title\": \"Use case 1\", \"description\": \"Description of use case 1\"},\n {\"title\": \"Use case 2\", \"description\": \"Description of use case 2\"},\n },\n \"requirements\": []map[string]string{\n {\"package\": \"pip\", \"installation\": \"pip install\"},\n {\"package\": \"pip3\", \"installation\": \"pip3 install\"},\n },\n \"tags\": \"example, agent\",\n }\n jsonPayload, _ := json.Marshal(payload)\n\n req, _ := http.NewRequest(\"POST\", url, bytes.NewBuffer(jsonPayload))\n req.Header.Set(\"Content-Type\", \"application/json\")\n req.Header.Set(\"Authorization\", \"Bearer {apiKey}\")\n\n client := &http.Client{}\n resp, err := client.Do(req)\n if err != nil {\n fmt.Println(\"An error has occurred\", err)\n return\n }\n defer resp.Body.Close()\n\n var result map[string]interface{}\n json.NewDecoder(resp.Body).Decode(&result)\n fmt.Println(result)\n}\n
"},{"location":"swarms_platform/agents/agents_api/#curl","title":"cURL","text":"curl -X POST https://swarms.world/api/add-agent \\\n-H \"Content-Type: application/json\" \\\n-H \"Authorization: Bearer {apiKey}\" \\\n-d '{\n \"name\": \"Example Agent\",\n \"agent\": \"This is an example agent from an API route.\",\n \"description\": \"Description of the agent.\",\n \"language\": \"python\",\n \"useCases\": [\n { \"title\": \"Use case 1\", \"description\": \"Description of use case 1\" },\n { \"title\": \"Use case 2\", \"description\": \"Description of use case 2\" }\n ],\n \"requirements\": [\n { \"package\": \"pip\", \"installation\": \"pip install\" },\n { \"package\": \"pip3\", \"installation\": \"pip3 install\" }\n ],\n \"tags\": \"example, agent\"\n}'\n
"},{"location":"swarms_platform/agents/agents_api/#response","title":"Response","text":"The response will be a JSON object containing the result of the operation. Example response:
{\n \"success\": true,\n \"message\": \"Agent added successfully\",\n \"data\": {\n \"id\": \"agent_id\",\n \"name\": \"Example Agent\",\n \"agent\": \"This is an example agent from an API route.\",\n \"description\": \"Description of the agent.\",\n \"language\": \"python\",\n \"useCases\": [\n { \"title\": \"Use case 1\", \"description\": \"Description of use case 1\" },\n { \"title\": \"Use case 2\", \"description\": \"Description of use case 2\" }\n ],\n \"requirements\": [\n { \"package\": \"pip\", \"installation\": \"pip install\" },\n { \"package\": \"pip3\", \"installation\": \"pip3 install\" }\n ],\n \"tags\": \"example, agent\"\n }\n}\n
"},{"location":"swarms_platform/agents/edit_agent/","title":"Endpoint: Edit Agent","text":"The https://swarms.world/api/edit-agent
endpoint allows users to edit an existing agent on the Swarms platform. This API accepts a POST request with a JSON body containing the agent details to be updated, such as its id, name, description, use cases, language, tags and requirements. The request must be authenticated using an API key.
https://swarms.world/api/edit-agent
application/json
The request body should be a JSON object with the following attributes:
Attribute Type Description Requiredid
string
The ID of the agent to be edited. Yes name
string
The name of the agent. Yes agent
string
The agent text. Yes description
string
A brief description of the agent. Yes language
string
The agent's syntax language No useCases
array
An array of use cases, each containing a title and description. Yes requirements
array
An array of requirements, each containing a package name and installation. Yes tags
string
Comma-separated tags for the agent. No"},{"location":"swarms_platform/agents/edit_agent/#usecases-structure","title":"useCases
Structure","text":"Each use case in the useCases
array should be an object with the following attributes:
title
string
The title of the use case. Yes description
string
A brief description of the use case. Yes"},{"location":"swarms_platform/agents/edit_agent/#requirements-structure","title":"requirements
Structure","text":"Each requirement in the requirements
array should be an object with the following attributes:
package
string
The name of the package. Yes installation
string
Installation command for the package Yes"},{"location":"swarms_platform/agents/edit_agent/#example-usage","title":"Example Usage","text":""},{"location":"swarms_platform/agents/edit_agent/#python","title":"Python","text":"import requests\nimport json\n\nurl = \"https://swarms.world/api/edit-agent\"\nheaders = {\n \"Content-Type\": \"application/json\",\n \"Authorization\": \"Bearer {apiKey}\"\n}\ndata = {\n \"id\": \"agent_id\",\n \"name\": \"Updated agent\",\n \"agent\": \"This is an updated agent from an API route.\",\n \"description\": \"Updated description of the agent.\",\n \"language\": \"javascript\",\n \"useCases\": [\n {\"title\": \"Updated use case 1\", \"description\": \"Updated description of use case 1\"},\n {\"title\": \"Updated use case 2\", \"description\": \"Updated description of use case 2\"}\n ],\n \"requirements\": [\n { \"package\": \"express\", \"installation\": \"npm install express\" },\n { \"package\": \"lodash\", \"installation\": \"npm install lodash\" },\n ],\n \"tags\": \"updated, agent\"\n}\n\nresponse = requests.post(url, headers=headers, data=json.dumps(data))\nprint(response.json())\n
"},{"location":"swarms_platform/agents/edit_agent/#nodejs","title":"Node.js","text":"const fetch = require(\"node-fetch\");\n\nasync function editAgentHandler() {\n try {\n const response = await fetch(\"https://swarms.world/api/edit-agent\", {\n method: \"POST\",\n headers: {\n \"Content-Type\": \"application/json\",\n Authorization: \"Bearer {apiKey}\",\n },\n body: JSON.stringify({\n id: \"agent_id\",\n name: \"Updated agent\",\n agent: \"This is an updated agent from an API route.\",\n description: \"Updated description of the agent.\",\n language: \"javascript\",\n useCases: [\n {\n title: \"Updated use case 1\",\n description: \"Updated description of use case 1\",\n },\n {\n title: \"Updated use case 2\",\n description: \"Updated description of use case 2\",\n },\n ],\n requirements: [\n { package: \"express\", installation: \"npm install express\" },\n { package: \"lodash\", installation: \"npm install lodash\" },\n ],\n tags: \"updated, agent\",\n }),\n });\n\n const result = await response.json();\n console.log(result);\n } catch (error) {\n console.error(\"An error has occurred\", error);\n }\n}\n\neditAgentHandler();\n
"},{"location":"swarms_platform/agents/edit_agent/#go","title":"Go","text":"package main\n\nimport (\n \"bytes\"\n \"encoding/json\"\n \"fmt\"\n \"net/http\"\n)\n\nfunc main() {\n url := \"https://swarms.world/api/edit-agent\"\n payload := map[string]interface{}{\n \"id\": \"agent_id\",\n \"name\": \"Updated Agent\",\n \"agent\": \"This is an updated agent from an API route.\",\n \"description\": \"Updated description of the agent.\",\n \"language\": \"javascript\",\n \"useCases\": []map[string]string{\n {\"title\": \"Updated use case 1\", \"description\": \"Updated description of use case 1\"},\n {\"title\": \"Updated use case 2\", \"description\": \"Updated description of use case 2\"},\n },\n \"requirements\": []map[string]string{\n {\"package\": \"express\", \"installation\": \"npm install express\"},\n {\"package\": \"lodash\", \"installation\": \"npm install lodash\"},\n },\n \"tags\": \"updated, agent\",\n }\n jsonPayload, _ := json.Marshal(payload)\n\n req, _ := http.NewRequest(\"POST\", url, bytes.NewBuffer(jsonPayload))\n req.Header.Set(\"Content-Type\", \"application/json\")\n req.Header.Set(\"Authorization\", \"Bearer {apiKey}\")\n\n client := &http.Client{}\n resp, err := client.Do(req)\n if err != nil {\n fmt.Println(\"An error has occurred\", err)\n return\n }\n defer resp.Body.Close()\n\n var result map[string]interface{}\n json.NewDecoder(resp.Body).Decode(&result)\n fmt.Println(result)\n}\n
"},{"location":"swarms_platform/agents/edit_agent/#curl","title":"cURL","text":"curl -X POST https://swarms.world/api/edit-agent \\\n-H \"Content-Type: application/json\" \\\n-H \"Authorization: Bearer {apiKey}\" \\\n-d '{\n \"id\": \"agent_id\",\n \"name\": \"Updated agent\",\n \"agent\": \"This is an updated agent from an API route.\",\n \"description\": \"Updated description of the agent.\",\n \"language\": \"javascript\",\n \"useCases\": [\n {\"title\": \"Updated use case 1\", \"description\": \"Updated description of use case 1\"},\n {\"title\": \"Updated use case 2\", \"description\": \"Updated description of use case 2\"}\n ],\n \"requirements\": [\n { \"package\": \"express\", \"installation\": \"npm install express\" },\n { \"package\": \"lodash\", \"installation\": \"npm install lodash\" }\n ],\n \"tags\": \"updated, agent\"\n}'\n
"},{"location":"swarms_platform/agents/edit_agent/#response","title":"Response","text":"The response will be a JSON object containing the result of the operation. Example response:
{\n \"success\": true,\n \"message\": \"Agent updated successfully\",\n \"data\": {\n \"id\": \"agent_id\",\n \"name\": \"Updated agent\",\n \"agent\": \"This is an updated agent from an API route.\",\n \"description\": \"Updated description of the agent.\",\n \"language\": \"javascript\",\n \"useCases\": [\n {\n \"title\": \"Updated use case 1\",\n \"description\": \"Updated description of use case 1\"\n },\n {\n \"title\": \"Updated use case 2\",\n \"description\": \"Updated description of use case 2\"\n }\n ],\n \"requirements\": [\n { \"package\": \"express\", \"installation\": \"npm install express\" },\n { \"package\": \"lodash\", \"installation\": \"npm install lodash\" }\n ],\n \"tags\": \"updated, agent\"\n }\n}\n
In case of an error, the response will contain an error message detailing the issue.
"},{"location":"swarms_platform/agents/edit_agent/#common-issues-and-tips","title":"Common Issues and Tips","text":"Authorization
header is correctly set with a valid API key.name
, agent
, description
, useCases
, requirements
) are included in the request body.This comprehensive documentation provides all the necessary information to effectively use the https://swarms.world/api/add-agent
and https://swarms.world/api/edit-agent
endpoints, including details on request parameters, example code snippets in multiple programming languages, and troubleshooting tips.
getAllAgents
API Endpoint","text":"The getAllAgents
API endpoint is a part of the swarms.world
application, designed to fetch all agent records from the database. This endpoint is crucial for retrieving various agents stored in the swarms_cloud_agents
table, including their metadata such as name, description, use cases, language, requirements and tags. It provides an authenticated way to access this data, ensuring that only authorized users can retrieve the information.
The primary purpose of this API endpoint is to provide a method for clients to fetch a list of agents stored in the swarms_cloud_agents
table, with the ability to filter by name, tags, language, requirement package and use cases. It ensures data integrity and security by using an authentication guard and handles various HTTP methods and errors gracefully.
https://swarms.world/get-agents\n
"},{"location":"swarms_platform/agents/fetch_agents/#http-method","title":"HTTP Method","text":"GET\n
"},{"location":"swarms_platform/agents/fetch_agents/#request-headers","title":"Request Headers","text":"Header Type Required Description Authorization String Yes Bearer token for API access"},{"location":"swarms_platform/agents/fetch_agents/#query-parameters","title":"Query Parameters","text":"use_cases
array. The query is case-insensitive.requirements
array. The query is case-insensitive.Returns an array of agents.
[\n {\n \"id\": \"string\",\n \"name\": \"string\",\n \"description\": \"string\",\n \"language\": \"string\",\n \"agent\": \"string\",\n \"use_cases\": [\n {\n \"title\": \"string\",\n \"description\": \"string\"\n }\n ],\n \"requirements\": [\n {\n \"package\": \"string\",\n \"installation\": \"string\"\n }\n ],\n \"tags\": \"string\"\n },\n ...\n]\n
"},{"location":"swarms_platform/agents/fetch_agents/#error-responses","title":"Error Responses","text":"{\n \"error\": \"Method <method> Not Allowed\"\n}\n
{\n \"error\": \"Could not fetch agents\"\n}\n
"},{"location":"swarms_platform/agents/fetch_agents/#fetch-agent-by-id","title":"Fetch Agent by ID","text":""},{"location":"swarms_platform/agents/fetch_agents/#endpoint-url_1","title":"Endpoint URL","text":"https://swarms.world/get-agents/[id]\n
"},{"location":"swarms_platform/agents/fetch_agents/#http-method_1","title":"HTTP Method","text":"GET\n
"},{"location":"swarms_platform/agents/fetch_agents/#request-headers_1","title":"Request Headers","text":"Header Type Required Description Authorization String Yes Bearer token for API access"},{"location":"swarms_platform/agents/fetch_agents/#response_1","title":"Response","text":""},{"location":"swarms_platform/agents/fetch_agents/#success-response-200_1","title":"Success Response (200)","text":"Returns a single agent by ID.
{\n \"id\": \"string\",\n \"name\": \"string\",\n \"description\": \"string\",\n \"language\": \"string\",\n \"agent\": \"string\",\n \"use_cases\": [\n {\n \"title\": \"string\",\n \"description\": \"string\"\n }\n ],\n \"requirements\": [\n {\n \"package\": \"string\",\n \"installation\": \"string\"\n }\n ],\n \"tags\": \"string\"\n}\n
"},{"location":"swarms_platform/agents/fetch_agents/#error-responses_1","title":"Error Responses","text":"{\n \"error\": \"Agent not found\"\n}\n
{\n \"error\": \"Could not fetch agent\"\n}\n
"},{"location":"swarms_platform/agents/fetch_agents/#request-handling","title":"Request Handling","text":"Method Validation: The endpoint only supports the GET
method. If a different HTTP method is used, it responds with a 405 Method Not Allowed
status.
Database Query:
Fetching All Agents: The endpoint uses the supabaseAdmin
client to query the swarms_cloud_agents
table. Filters are applied based on the query parameters (name
, tag
, language
, req_package
and use_case
).
Fetching an Agent by ID: The endpoint retrieves a single agent from the swarms_cloud_agents
table by its unique ID.
Response: On success, it returns the agent data in JSON format. In case of an error during the database query, a 500 Internal Server Error
status is returned. For fetching by ID, if the agent is not found, it returns a 404 Not Found
status.
import fetch from \"node-fetch\";\n\n// Fetch all agents with optional filters\nconst getAgents = async (filters) => {\n const queryString = new URLSearchParams(filters).toString();\n const response = await fetch(\n `https://swarms.world/get-agents?${queryString}`,\n {\n method: \"GET\",\n headers: {\n \"Content-Type\": \"application/json\",\n Authorization: \"Bearer {apiKey}\",\n },\n }\n );\n\n if (!response.ok) {\n throw new Error(`Error: ${response.statusText}`);\n }\n\n const data = await response.json();\n console.log(data);\n};\n\n// Fetch agent by ID\nconst getAgentById = async (id) => {\n const response = await fetch(`https://swarms.world/get-agents/${id}`, {\n method: \"GET\",\n headers: {\n \"Content-Type\": \"application/json\",\n Authorization: \"Bearer {apiKey}\",\n },\n });\n\n if (!response.ok) {\n throw new Error(`Error: ${response.statusText}`);\n }\n\n const data = await response.json();\n console.log(data);\n};\n\n// Example usage\ngetAgents({\n name: \"example\",\n tag: \"tag1,tag2\",\n use_case: \"example\",\n language: \"language\",\n req_package: \"package_name\",\n}).catch(console.error);\ngetAgentById(\"123\").catch(console.error);\n
"},{"location":"swarms_platform/agents/fetch_agents/#python","title":"Python","text":"import requests\n\nAPI_KEY = \"{apiKey}\"\n\n# Fetch all agents with optional filters\ndef get_agents(filters):\n query_string = \"&\".join([f\"{key}={value}\" for key, value in filters.items()])\n url = f\"https://swarms.world/get-agents?{query_string}\"\n headers = {\n \"Content-Type\": \"application/json\",\n \"Authorization\": f\"Bearer {API_KEY}\",\n }\n response = requests.get(url, headers=headers)\n\n if not response.ok:\n raise Exception(f\"Error: {response.reason}\")\n\n data = response.json()\n print(data)\n return data\n\n# Fetch agent by ID\ndef get_agent_by_id(agent_id):\n url = f\"https://swarms.world/get-agents/{agent_id}\"\n headers = {\n \"Content-Type\": \"application/json\",\n \"Authorization\": f\"Bearer {API_KEY}\",\n }\n response = requests.get(url, headers=headers)\n\n if not response.ok:\n raise Exception(f\"Error: {response.reason}\")\n\n data = response.json()\n print(data)\n return data\n\n# Example usage\ntry:\n get_agents({\n \"name\": \"example\",\n \"tag\": \"tag1,tag2\",\n \"use_case\": \"example\",\n \"language\": \"language\",\n \"req_package\": \"package_name\",\n })\nexcept Exception as e:\n print(e)\n\ntry:\n get_agent_by_id(\"123\")\nexcept Exception as e:\n print(e)\n
"},{"location":"swarms_platform/agents/fetch_agents/#curl","title":"cURL","text":"# Fetch all agents with optional filters\ncurl -X GET \"https://swarms.world/get-agents?name=example&tag=tag1,tag2&use_case=example&language=language&req_package=package_name\" \\\n-H \"Content-Type: application/json\" \\\n-H \"Authorization: Bearer {apiKey}\"\n\n# Fetch agent by ID\ncurl -X GET \"https://swarms.world/get-agents/123\" \\\n-H \"Content-Type: application/json\" \\\n-H \"Authorization: Bearer {apiKey}\"\n
"},{"location":"swarms_platform/agents/fetch_agents/#go","title":"Go","text":"package main\n\nimport (\n \"encoding/json\"\n \"fmt\"\n \"net/http\"\n \"net/url\"\n \"os\"\n)\n\nfunc getAgents(filters map[string]string) error {\n query := url.Values{}\n for key, value := range filters {\n query.Set(key, value)\n }\n\n url := fmt.Sprintf(\"https://swarms.world/get-agents?%s\", query.Encode())\n req, err := http.NewRequest(\"GET\", url, nil)\n if err != nil {\n return err\n }\n\n req.Header.Set(\"Content-Type\", \"application/json\")\n req.Header.Set(\"Authorization\", \"Bearer {apiKey}\")\n\n client := &http.Client{}\n resp, err := client.Do(req)\n if err != nil {\n return err\n }\n defer resp.Body.Close()\n\n if resp.StatusCode != http.StatusOK {\n return fmt.Errorf(\"error: %s\", resp.Status)\n }\n\n var data interface{}\n if err := json.NewDecoder(resp.Body).Decode(&data); err != nil {\n return err\n }\n\n fmt.Println(data)\n return nil\n}\n\nfunc getAgentById(id string) error {\n url := fmt.Sprintf(\"https://swarms.world/get-agents/%s\", id)\n req, err := http.NewRequest(\"GET\", url, nil)\n if err != nil {\n return err\n }\n\n req.Header.Set(\"Content-Type\", \"application/json\")\n req.Header.Set(\"Authorization\", \"Bearer {apiKey}\")\n\n client := &http.Client{}\n resp, err := client.Do(req)\n if err != nil {\n return err\n }\n defer resp.Body.Close()\n\n if resp.StatusCode != http.StatusOK {\n return fmt.Errorf(\"error: %s\", resp.Status)\n }\n\n var data interface{}\n if err := json.NewDecoder(resp.Body).Decode(&data); err != nil {\n return err\n }\n\n fmt.Println(data)\n return nil\n}\nfunc main() {\n filters := map[string]string{\n \"name\": \"example\",\n \"tag\": \"tag1,tag2\",\n \"use_case\": \"example\",\n \"language\": \"language\",\n \"req_package\": \"package_name\",\n }\n\n getAgents(filters)\n getAgentById(\"123\")\n}\n
"},{"location":"swarms_platform/agents/fetch_agents/#attributes-table","title":"Attributes Table","text":"Attribute Type Description id String Unique identifier for the agent name String Name of the agent description String Description of the agent agent String The actual agent language String The code language of the agent use_cases Array Use cases for the agent requirements Array Requirements for the agent tags String Tags associated with the agent"},{"location":"swarms_platform/agents/fetch_agents/#additional-information-and-tips","title":"Additional Information and Tips","text":"This documentation provides a comprehensive guide to the getAllAgents
API endpoint, including usage examples in multiple programming languages and detailed attribute descriptions.
The https://swarms.world/api/add-prompt
endpoint allows users to add a new prompt to the Swarms platform. This API accepts a POST request with a JSON body containing details of the prompt, such as its name, description, use cases, and tags. The request must be authenticated using an API key.
https://swarms.world/api/add-prompt
application/json
The request body should be a JSON object with the following attributes:
Attribute Type Description Requiredname
string
The name of the prompt. Yes prompt
string
The prompt text. Yes description
string
A brief description of the prompt. Yes useCases
array
An array of use cases, each containing a title and description. Yes tags
string
Comma-separated tags for the prompt. No"},{"location":"swarms_platform/prompts/add_prompt/#usecases-structure","title":"useCases
Structure","text":"Each use case in the useCases
array should be an object with the following attributes:
title
string
The title of the use case. Yes description
string
A brief description of the use case. Yes"},{"location":"swarms_platform/prompts/add_prompt/#example-usage","title":"Example Usage","text":""},{"location":"swarms_platform/prompts/add_prompt/#python","title":"Python","text":"import requests\nimport json\n\nurl = \"https://swarms.world/api/add-prompt\"\nheaders = {\n \"Content-Type\": \"application/json\",\n \"Authorization\": \"Bearer {apiKey}\"\n}\ndata = {\n \"name\": \"Example Prompt\",\n \"prompt\": \"This is an example prompt from an API route.\",\n \"description\": \"Description of the prompt.\",\n \"useCases\": [\n {\"title\": \"Use case 1\", \"description\": \"Description of use case 1\"},\n {\"title\": \"Use case 2\", \"description\": \"Description of use case 2\"}\n ],\n \"tags\": \"example, prompt\"\n}\n\nresponse = requests.post(url, headers=headers, data=json.dumps(data))\nprint(response.json())\n
"},{"location":"swarms_platform/prompts/add_prompt/#nodejs","title":"Node.js","text":"const fetch = require(\"node-fetch\");\n\nasync function addPromptsHandler() {\n try {\n const response = await fetch(\"https://swarms.world/api/add-prompt\", {\n method: \"POST\",\n headers: {\n \"Content-Type\": \"application/json\",\n Authorization: \"Bearer {apiKey}\",\n },\n body: JSON.stringify({\n name: \"Example Prompt\",\n prompt: \"This is an example prompt from an API route.\",\n description: \"Description of the prompt.\",\n useCases: [\n { title: \"Use case 1\", description: \"Description of use case 1\" },\n { title: \"Use case 2\", description: \"Description of use case 2\" },\n ],\n tags: \"example, prompt\",\n }),\n });\n\n const result = await response.json();\n console.log(result);\n } catch (error) {\n console.error(\"An error has occurred\", error);\n }\n}\n\naddPromptsHandler();\n
"},{"location":"swarms_platform/prompts/add_prompt/#go","title":"Go","text":"package main\n\nimport (\n \"bytes\"\n \"encoding/json\"\n \"fmt\"\n \"net/http\"\n)\n\nfunc main() {\n url := \"https://swarms.world/api/add-prompt\"\n payload := map[string]interface{}{\n \"name\": \"Example Prompt\",\n \"prompt\": \"This is an example prompt from an API route.\",\n \"description\": \"Description of the prompt.\",\n \"useCases\": []map[string]string{\n {\"title\": \"Use case 1\", \"description\": \"Description of use case 1\"},\n {\"title\": \"Use case 2\", \"description\": \"Description of use case 2\"},\n },\n \"tags\": \"example, prompt\",\n }\n jsonPayload, _ := json.Marshal(payload)\n\n req, _ := http.NewRequest(\"POST\", url, bytes.NewBuffer(jsonPayload))\n req.Header.Set(\"Content-Type\", \"application/json\")\n req.Header.Set(\"Authorization\", \"Bearer {apiKey}\")\n\n client := &http.Client{}\n resp, err := client.Do(req)\n if err != nil {\n fmt.Println(\"An error has occurred\", err)\n return\n }\n defer resp.Body.Close()\n\n var result map[string]interface{}\n json.NewDecoder(resp.Body).Decode(&result)\n fmt.Println(result)\n}\n
"},{"location":"swarms_platform/prompts/add_prompt/#curl","title":"cURL","text":"curl -X POST https://swarms.world/api/add-prompt \\\n-H \"Content-Type: application/json\" \\\n-H \"Authorization: Bearer {apiKey}\" \\\n-d '{\n \"name\": \"Example Prompt\",\n \"prompt\": \"This is an example prompt from an API route.\",\n \"description\": \"Description of the prompt.\",\n \"useCases\": [\n { \"title\": \"Use case 1\", \"description\": \"Description of use case 1\" },\n { \"title\": \"Use case 2\", \"description\": \"Description of use case 2\" }\n ],\n \"tags\": \"example, prompt\"\n}'\n
"},{"location":"swarms_platform/prompts/add_prompt/#response","title":"Response","text":"The response will be a JSON object containing the result of the operation. Example response:
{\n \"success\": true,\n \"message\": \"Prompt added successfully\",\n \"data\": {\n \"id\": \"prompt_id\",\n \"name\": \"Example Prompt\",\n \"prompt\": \"This is an example prompt from an API route.\",\n \"description\": \"Description of the prompt.\",\n \"useCases\": [\n { \"title\": \"Use case 1\", \"description\": \"Description of use case 1\" },\n { \"title\": \"Use case 2\", \"description\": \"Description of use case 2\" }\n ],\n \"tags\": \"example, prompt\"\n }\n}\n
"},{"location":"swarms_platform/prompts/edit_prompt/","title":"Endpoint: Edit Prompt","text":"The https://swarms.world/api/edit-prompt
endpoint allows users to edit an existing prompt on the Swarms platform. This API accepts a POST request with a JSON body containing the prompt details to be updated, such as its name, description, use cases, and tags. The request must be authenticated using an API key.
https://swarms.world/api/edit-prompt
application/json
The request body should be a JSON object with the following attributes:
Attribute Type Description Required id
string
The ID of the prompt to be edited. Yes name
string
The name of the prompt. Yes prompt
string
The prompt text. Yes description
string
A brief description of the prompt. No useCases
array
An array of use cases, each containing a title and description. Yes tags
string
Comma-separated tags for the prompt. No"},{"location":"swarms_platform/prompts/edit_prompt/#usecases-structure","title":"useCases
Structure","text":"Each use case in the useCases
array should be an object with the following attributes:
title
string
The title of the use case. Yes description
string
A brief description of the use case. Yes"},{"location":"swarms_platform/prompts/edit_prompt/#example-usage","title":"Example Usage","text":""},{"location":"swarms_platform/prompts/edit_prompt/#python","title":"Python","text":"import requests\nimport json\n\nurl = \"https://swarms.world/api/edit-prompt\"\nheaders = {\n \"Content-Type\": \"application/json\",\n \"Authorization\": \"Bearer {apiKey}\"\n}\ndata = {\n \"id\": \"prompt_id\",\n \"name\": \"Updated Prompt\",\n \"prompt\": \"This is an updated prompt from an API route.\",\n \"description\": \"Updated description of the prompt.\",\n \"useCases\": [\n {\"title\": \"Updated use case 1\", \"description\": \"Updated description of use case 1\"},\n {\"title\": \"Updated use case 2\", \"description\": \"Updated description of use case 2\"}\n ],\n \"tags\": \"updated, prompt\"\n}\n\nresponse = requests.post(url, headers=headers, data=json.dumps(data))\nprint(response.json())\n
"},{"location":"swarms_platform/prompts/edit_prompt/#nodejs","title":"Node.js","text":"const fetch = require(\"node-fetch\");\n\nasync function editPromptsHandler() {\n try {\n const response = await fetch(\"https://swarms.world/api/edit-prompt\", {\n method: \"POST\",\n headers: {\n \"Content-Type\": \"application/json\",\n Authorization: \"Bearer {apiKey}\",\n },\n body: JSON.stringify({\n id: \"prompt_id\",\n name: \"Updated Prompt\",\n prompt: \"This is an updated prompt from an API route.\",\n description: \"Updated description of the prompt.\",\n useCases: [\n {\n title: \"Updated use case 1\",\n description: \"Updated description of use case 1\",\n },\n {\n title: \"Updated use case 2\",\n description: \"Updated description of use case 2\",\n },\n ],\n tags: \"updated, prompt\",\n }),\n });\n\n const result = await response.json();\n console.log(result);\n } catch (error) {\n console.error(\"An error has occurred\", error);\n }\n}\n\neditPromptsHandler();\n
"},{"location":"swarms_platform/prompts/edit_prompt/#go","title":"Go","text":"package main\n\nimport (\n \"bytes\"\n \"encoding/json\"\n \"fmt\"\n \"net/http\"\n)\n\nfunc main() {\n url := \"https://swarms.world/api/edit-prompt\"\n payload := map[string]interface{}{\n \"id\": \"prompt_id\",\n \"name\": \"Updated Prompt\",\n \"prompt\": \"This is an updated prompt from an API route.\",\n \"description\": \"Updated description of the prompt.\",\n \"useCases\": []map[string]string{\n {\"title\": \"Updated use case 1\", \"description\": \"Updated description of use case 1\"},\n {\"title\": \"Updated use case 2\", \"description\": \"Updated description of use case 2\"},\n },\n \"tags\": \"updated, prompt\",\n }\n jsonPayload, _ := json.Marshal(payload)\n\n req, _ := http.NewRequest(\"POST\", url, bytes.NewBuffer(jsonPayload))\n req.Header.Set(\"Content-Type\", \"application/json\")\n req.Header.Set(\"Authorization\", \"Bearer {apiKey}\")\n\n client := &http.Client{}\n resp, err := client.Do(req)\n if err != nil {\n fmt.Println(\"An error has occurred\", err)\n return\n }\n defer resp.Body.Close()\n\n var result map[string]interface{}\n json.NewDecoder(resp.Body).Decode(&result)\n fmt.Println(result)\n}\n
"},{"location":"swarms_platform/prompts/edit_prompt/#curl","title":"cURL","text":"curl -X POST https://swarms.world/api/edit-prompt \\\n-H \"Content-Type: application/json\" \\\n-H \"Authorization: Bearer {apiKey}\" \\\n-d '{\n \"id\": \"prompt_id\",\n \"name\": \"Updated Prompt\",\n \"prompt\": \"This is an updated prompt from an API route.\",\n \"description\": \"Updated description of the prompt.\",\n \"useCases\": [\n { \"title\": \"Updated use case 1\", \"description\": \"Updated description of use case 1\" },\n { \"title\": \"Updated use case 2\", \"description\": \"Updated description of use case 2\" }\n ],\n \"tags\": \"updated, prompt\"\n}'\n
"},{"location":"swarms_platform/prompts/edit_prompt/#response","title":"Response","text":"The response will be a JSON object containing the result of the operation. Example response:
{\n \"success\": true,\n \"message\": \"Prompt updated successfully\",\n \"data\": {\n \"id\": \"prompt_id\",\n \"name\": \"Updated Prompt\",\n \"prompt\": \"This is an updated prompt from an API route.\",\n \"description\": \"Updated description of the prompt.\",\n \"useCases\": [\n {\n \"title\": \"Updated use case 1\",\n \"description\": \"Updated description of use case 1\"\n },\n {\n \"title\": \"Updated use case 2\",\n \"description\": \"Updated description of use case 2\"\n }\n ],\n \"tags\": \"updated, prompt\"\n }\n}\n
In case of an error, the response will contain an error message detailing the issue.
"},{"location":"swarms_platform/prompts/edit_prompt/#common-issues-and-tips","title":"Common Issues and Tips","text":"Authorization
header is correctly set with a valid API key.name
, prompt
, description
, useCases
) are included in the request body.This comprehensive documentation provides all the necessary information to effectively use the https://swarms.world/api/add-prompt
and https://swarms.world/api/edit-prompt
endpoints, including details on request parameters, example code snippets in multiple programming languages, and troubleshooting tips.
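A common pitfall with these endpoints is omitting a required field. The check below is a minimal client-side sketch only: the field set mirrors the add-prompt request-body table on this page, but the helper itself is illustrative and not part of the platform API, which performs its own validation.

```python
# Required attributes for the add-prompt request body, per the table above.
# Illustrative helper only; not part of the swarms.world API.
REQUIRED_ADD_FIELDS = {"name", "prompt", "description", "useCases"}

def missing_fields(body: dict, required=frozenset(REQUIRED_ADD_FIELDS)) -> set:
    """Return the required attributes that are absent or empty in `body`."""
    return {f for f in required if body.get(f) in (None, "", [])}

draft = {
    "name": "Example Prompt",
    "prompt": "This is an example prompt.",
    "useCases": [{"title": "Use case 1", "description": "..."}],
}
print(missing_fields(draft))  # {'description'}
```

Running such a check before the POST turns a server-side validation error into an immediate local one.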
getAllPrompts
API Endpoint","text":"The getAllPrompts
API endpoint is a part of the swarms.world
application, designed to fetch all prompt records from the database. This endpoint is crucial for retrieving various prompts stored in the swarms_cloud_prompts
table, including their metadata such as name, description, use cases, and tags.
The primary purpose of this API endpoint is to provide a method for clients to fetch a list of prompts stored in the swarms_cloud_prompts
table, with the ability to filter by name, tags, and use cases.
https://swarms.world/get-prompts\n
"},{"location":"swarms_platform/prompts/fetch_prompts/#http-method","title":"HTTP Method","text":"GET\n
"},{"location":"swarms_platform/prompts/fetch_prompts/#query-parameters","title":"Query Parameters","text":"use_cases
array. The query is case-insensitive.use_cases
array. The query is case-insensitive.Returns an array of prompts.
[\n {\n \"id\": \"string\",\n \"name\": \"string\",\n \"description\": \"string\",\n \"prompt\": \"string\",\n \"use_cases\": [\n {\n \"title\": \"string\",\n \"description\": \"string\"\n }\n ],\n \"tags\": \"string\"\n },\n ...\n]\n
"},{"location":"swarms_platform/prompts/fetch_prompts/#error-responses","title":"Error Responses","text":"{\n \"error\": \"Method <method> Not Allowed\"\n}\n
{\n \"error\": \"Could not fetch prompts\"\n}\n
"},{"location":"swarms_platform/prompts/fetch_prompts/#fetch-prompt-by-id","title":"Fetch Prompt by ID","text":""},{"location":"swarms_platform/prompts/fetch_prompts/#endpoint-url_1","title":"Endpoint URL","text":"https://swarms.world/get-prompts/[id]\n
"},{"location":"swarms_platform/prompts/fetch_prompts/#http-method_1","title":"HTTP Method","text":"GET\n
"},{"location":"swarms_platform/prompts/fetch_prompts/#response_1","title":"Response","text":""},{"location":"swarms_platform/prompts/fetch_prompts/#success-response-200_1","title":"Success Response (200)","text":"Returns a single prompt by ID.
{\n \"id\": \"string\",\n \"name\": \"string\",\n \"description\": \"string\",\n \"prompt\": \"string\",\n \"use_cases\": [\n {\n \"title\": \"string\",\n \"description\": \"string\"\n }\n ],\n \"tags\": \"string\"\n}\n
"},{"location":"swarms_platform/prompts/fetch_prompts/#error-responses_1","title":"Error Responses","text":"{\n \"error\": \"Prompt not found\"\n}\n
{\n \"error\": \"Could not fetch prompt\"\n}\n
"},{"location":"swarms_platform/prompts/fetch_prompts/#request-handling","title":"Request Handling","text":"Method Validation: The endpoint only supports the GET
method. If a different HTTP method is used, it responds with a 405 Method Not Allowed
status.
Database Query:
Fetching All Prompts: The endpoint uses the supabaseAdmin
client to query the swarms_cloud_prompts
table. Filters are applied based on the query parameters (name
, tag
, and use_cases
).
Fetching a Prompt by ID: The endpoint retrieves a single prompt from the swarms_cloud_prompts
table by its unique ID.
Response: On success, it returns the prompt data in JSON format. In case of an error during the database query, a 500 Internal Server Error
status is returned. For fetching by ID, if the prompt is not found, it returns a 404 Not Found
status.
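The request-handling rules above can be summarized in a small helper. This is an illustrative sketch only; the function name and return strings are not part of the API, but the status codes match the ones documented here.

```python
# Illustrative helper (not part of the API) mapping the documented
# status codes for https://swarms.world/get-prompts/[id] to outcomes.
def classify_response(status_code: int) -> str:
    """Translate a /get-prompts status code into a human-readable outcome."""
    if status_code == 200:
        return "ok"                            # prompt data returned as JSON
    if status_code == 404:
        return "prompt not found"              # unknown prompt ID
    if status_code == 405:
        return "method not allowed - use GET"  # non-GET request
    if status_code == 500:
        return "could not fetch prompt"        # database error
    return "unexpected status"

print(classify_response(404))  # prompt not found
```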
import fetch from \"node-fetch\";\n\n// Fetch all prompts with optional filters\nconst getPrompts = async (filters) => {\n const queryString = new URLSearchParams(filters).toString();\n const response = await fetch(\n `https://swarms.world/get-prompts?${queryString}`,\n {\n method: \"GET\",\n }\n );\n\n if (!response.ok) {\n throw new Error(`Error: ${response.statusText}`);\n }\n\n const data = await response.json();\n console.log(data);\n};\n\n// Fetch prompt by ID\nconst getPromptById = async (id) => {\n const response = await fetch(`https://swarms.world/get-prompts/${id}`, {\n method: \"GET\",\n });\n\n if (!response.ok) {\n throw new Error(`Error: ${response.statusText}`);\n }\n\n const data = await response.json();\n console.log(data);\n};\n\n// Example usage\ngetPrompts({\n name: \"example\",\n tag: \"tag1,tag2\",\n use_case: \"example\",\n use_case_description: \"description\",\n}).catch(console.error);\ngetPromptById(\"123\").catch(console.error);\n
"},{"location":"swarms_platform/prompts/fetch_prompts/#python","title":"Python","text":"import requests\n\n# Fetch all prompts with optional filters\ndef get_prompts(filters):\n response = requests.get('https://swarms.world/get-prompts', params=filters)\n\n if response.status_code != 200:\n raise Exception(f'Error: {response.status_code}, {response.text}')\n\n data = response.json()\n print(data)\n\n# Fetch prompt by ID\ndef get_prompt_by_id(id):\n response = requests.get(f'https://swarms.world/get-prompts/{id}')\n\n if response.status_code != 200:\n raise Exception(f'Error: {response.status_code}, {response.text}')\n\n data = response.json()\n print(data)\n\n# Example usage\nget_prompts({'name': 'example', 'tag': 'tag1,tag2', 'use_case': 'example', 'use_case_description': 'description'})\nget_prompt_by_id('123')\n
"},{"location":"swarms_platform/prompts/fetch_prompts/#curl","title":"cURL","text":"# Fetch all prompts with optional filters\ncurl -X GET \"https://swarms.world/get-prompts?name=example&tag=tag1,tag2&use_case=example&use_case_description=description\"\n\n# Fetch prompt by ID\ncurl -X GET https://swarms.world/get-prompts/123\n
"},{"location":"swarms_platform/prompts/fetch_prompts/#go","title":"Go","text":"package main\n\nimport (\n \"fmt\"\n \"io/ioutil\"\n \"net/http\"\n \"net/url\"\n)\n\nfunc getPrompts(filters map[string]string) {\n baseURL := \"https://swarms.world/get-prompts\"\n query := url.Values{}\n for key, value := range filters {\n query.Set(key, value)\n }\n fullURL := fmt.Sprintf(\"%s?%s\", baseURL, query.Encode())\n\n resp, err := http.Get(fullURL)\n if err != nil {\n panic(err)\n }\n defer resp.Body.Close()\n\n if resp.StatusCode != http.StatusOK {\n body, _ := ioutil.ReadAll(resp.Body)\n panic(fmt.Sprintf(\"Error: %d, %s\", resp.StatusCode, string(body)))\n }\n\n body, err := ioutil.ReadAll(resp.Body)\n if err != nil {\n panic(err)\n }\n\n fmt.Println(string(body))\n}\n\nfunc getPromptById(id string) {\n url := fmt.Sprintf(\"https://swarms.world/get-prompts/%s\", id)\n resp, err := http.Get(url)\n if err != nil {\n panic(err)\n }\n defer resp.Body.Close()\n\n if resp.StatusCode != http.StatusOK {\n body, _ := ioutil.ReadAll(resp.Body)\n panic(fmt.Sprintf(\"Error: %d, %s\", resp.StatusCode, string(body)))\n }\n\n body, err := ioutil.ReadAll(resp.Body)\n if err != nil {\n panic(err)\n }\n\n fmt.Println(string(body))\n}\n\nfunc main() {\n filters := map[string]string{\n \"name\": \"example\",\n \"tag\": \"tag1,tag2\",\n \"use_case\": \"example\",\n \"use_case_description\": \"description\",\n }\n getPrompts(filters)\n getPromptById(\"123\")\n}\n
"},{"location":"swarms_platform/prompts/fetch_prompts/#attributes-table","title":"Attributes Table","text":"Attribute Type Description id String Unique identifier for the prompt name String Name of the prompt description String Description of the prompt prompt String The actual prompt text use_cases Array Use cases for the prompt tags String Tags associated with the prompt"},{"location":"swarms_platform/prompts/fetch_prompts/#additional-information-and-tips","title":"Additional Information and Tips","text":"This documentation provides a comprehensive guide to the getAllPrompts
API endpoint, including usage examples in multiple programming languages and detailed attribute descriptions.
Modern AI Agent Framework
swarms-rs is a powerful Rust framework for building autonomous AI agents powered by LLMs, equipped with robust tools and memory capabilities. It is designed for applications ranging from trading analysis to healthcare diagnostics.
"},{"location":"swarms_rs/agents/#getting-started","title":"Getting Started","text":""},{"location":"swarms_rs/agents/#installation","title":"Installation","text":"cargo add swarms-rs\n
Compatible with Rust 1.70+
This library requires Rust 1.70 or later. Make sure your Rust toolchain is up to date.
"},{"location":"swarms_rs/agents/#required-environment-variables","title":"Required Environment Variables","text":"# Required API keys\nOPENAI_API_KEY=\"your_openai_api_key_here\"\nDEEPSEEK_API_KEY=\"your_deepseek_api_key_here\"\n
"},{"location":"swarms_rs/agents/#quick-start","title":"Quick Start","text":"Here's a simple example to get you started with swarms-rs:
use std::env;\nuse anyhow::Result;\nuse swarms_rs::{llm::provider::openai::OpenAI, structs::agent::Agent};\n\n#[tokio::main]\nasync fn main() -> Result<()> {\n // Load environment variables from .env file\n dotenv::dotenv().ok();\n\n // Initialize tracing for better debugging\n tracing_subscriber::registry()\n .with(tracing_subscriber::EnvFilter::from_default_env())\n .with(\n tracing_subscriber::fmt::layer()\n .with_line_number(true)\n .with_file(true),\n )\n .init();\n\n // Set up your LLM client\n let api_key = env::var(\"OPENAI_API_KEY\").expect(\"OPENAI_API_KEY must be set\");\n let client = OpenAI::new(api_key).set_model(\"gpt-4-turbo\");\n\n // Create a basic agent\n let agent = client\n .agent_builder()\n .system_prompt(\"You are a helpful assistant.\")\n .agent_name(\"BasicAgent\")\n .user_name(\"User\")\n .build();\n\n // Run the agent with a user query\n let response = agent\n .run(\"Tell me about Rust programming.\".to_owned())\n .await?;\n\n println!(\"{}\", response);\n Ok(())\n}\n
"},{"location":"swarms_rs/agents/#core-concepts","title":"Core Concepts","text":""},{"location":"swarms_rs/agents/#agents","title":"Agents","text":"Agents in swarms-rs are autonomous entities that can:
system_prompt
Initial instructions/role for the agent - Yes agent_name
Name identifier for the agent - Yes user_name
Name for the user interacting with agent - Yes max_loops
Maximum number of reasoning loops 1 No retry_attempts
Number of retry attempts on failure 1 No enable_autosave
Enable state persistence false No save_state_dir
Directory for saving agent state None No"},{"location":"swarms_rs/agents/#advanced-configuration","title":"Advanced Configuration","text":"You can enhance your agent's capabilities with:
Resource Usage
Setting high values for max_loops
can increase API usage and costs. Start with lower values and adjust as needed.
use std::env;\nuse anyhow::Result;\nuse swarms_rs::{llm::provider::openai::OpenAI, structs::agent::Agent};\n\n#[tokio::main]\nasync fn main() -> Result<()> {\n dotenv::dotenv().ok();\n tracing_subscriber::registry()\n .with(tracing_subscriber::EnvFilter::from_default_env())\n .with(\n tracing_subscriber::fmt::layer()\n .with_line_number(true)\n .with_file(true),\n )\n .init();\n\n let api_key = env::var(\"OPENAI_API_KEY\").expect(\"OPENAI_API_KEY must be set\");\n let client = OpenAI::new(api_key).set_model(\"gpt-4-turbo\");\n\n let agent = client\n .agent_builder()\n .system_prompt(\n \"You are a sophisticated cryptocurrency analysis assistant specialized in:\n 1. Technical analysis of crypto markets\n 2. Fundamental analysis of blockchain projects\n 3. Market sentiment analysis\n 4. Risk assessment\n 5. Trading patterns recognition\n\n When analyzing cryptocurrencies, always consider:\n - Market capitalization and volume\n - Historical price trends\n - Project fundamentals and technology\n - Recent news and developments\n - Market sentiment indicators\n - Potential risks and opportunities\n\n Provide clear, data-driven insights and always include relevant disclaimers about market volatility.\"\n )\n .agent_name(\"CryptoAnalyst\")\n .user_name(\"Trader\")\n .enable_autosave()\n .max_loops(3) // Increased for more thorough analysis\n .save_state_dir(\"./crypto_analysis/\")\n .enable_plan(\"Break down the crypto analysis into systematic steps:\n 1. Gather market data\n 2. Analyze technical indicators\n 3. Review fundamental factors\n 4. Assess market sentiment\n 5. Provide comprehensive insights\".to_owned())\n .build();\n\n let response = agent\n .run(\"What are your thoughts on Bitcoin's current market position?\".to_owned())\n .await?;\n\n println!(\"{}\", response);\n Ok(())\n}\n
"},{"location":"swarms_rs/agents/#using-tools-with-mcp","title":"Using Tools with MCP","text":""},{"location":"swarms_rs/agents/#model-context-protocol-mcp","title":"Model Context Protocol (MCP)","text":"swarms-rs supports the Model Context Protocol (MCP), enabling agents to interact with external tools through standardized interfaces.
What is MCP?
MCP (Model Context Protocol) provides a standardized way for LLMs to interact with external tools, giving your agents access to real-world data and capabilities beyond language processing.
"},{"location":"swarms_rs/agents/#supported-mcp-server-types","title":"Supported MCP Server Types","text":"Add tools to your agent during configuration:
let agent = client\n .agent_builder()\n .system_prompt(\"You are a helpful assistant with access to tools.\")\n .agent_name(\"ToolAgent\")\n .user_name(\"User\")\n // Add STDIO MCP server\n .add_stdio_mcp_server(\"uvx\", [\"mcp-hn\"])\n .await\n // Add SSE MCP server\n .add_sse_mcp_server(\"file-browser\", \"http://127.0.0.1:8000/sse\")\n .await\n .build();\n
"},{"location":"swarms_rs/agents/#full-mcp-agent-example","title":"Full MCP Agent Example","text":"use std::env;\nuse anyhow::Result;\nuse swarms_rs::{llm::provider::openai::OpenAI, structs::agent::Agent};\n\n#[tokio::main]\nasync fn main() -> Result<()> {\n dotenv::dotenv().ok();\n tracing_subscriber::registry()\n .with(tracing_subscriber::EnvFilter::from_default_env())\n .with(\n tracing_subscriber::fmt::layer()\n .with_line_number(true)\n .with_file(true),\n )\n .init();\n\n let api_key = env::var(\"OPENAI_API_KEY\").expect(\"OPENAI_API_KEY must be set\");\n let client = OpenAI::new(api_key).set_model(\"gpt-4-turbo\");\n\n let agent = client\n .agent_builder()\n .system_prompt(\"You are a helpful assistant with access to news and file system tools.\")\n .agent_name(\"SwarmsAgent\")\n .user_name(\"User\")\n // Add Hacker News tool\n .add_stdio_mcp_server(\"uvx\", [\"mcp-hn\"])\n .await\n // Add filesystem tool\n // To set up: uvx mcp-proxy --sse-port=8000 -- npx -y @modelcontextprotocol/server-filesystem ~\n .add_sse_mcp_server(\"file-browser\", \"http://127.0.0.1:8000/sse\")\n .await\n .retry_attempts(2)\n .max_loops(3)\n .build();\n\n // Use the news tool\n let news_response = agent\n .run(\"Get the top 3 stories of today from Hacker News\".to_owned())\n .await?;\n println!(\"NEWS RESPONSE:\\n{}\", news_response);\n\n // Use the filesystem tool\n let fs_response = agent.run(\"List files in my home directory\".to_owned()).await?;\n println!(\"FILESYSTEM RESPONSE:\\n{}\", fs_response);\n\n Ok(())\n}\n
"},{"location":"swarms_rs/agents/#setting-up-mcp-tools","title":"Setting Up MCP Tools","text":""},{"location":"swarms_rs/agents/#installing-mcp-servers","title":"Installing MCP Servers","text":"To use MCP servers with swarms-rs, you'll need to install the appropriate tools:
uv Package Manager:
curl -LsSf https://astral.sh/uv/install.sh | sh\n
MCP-HN (Hacker News MCP server):
uv tool install mcp-hn\n
Setting up an SSE MCP server:
# Start file system MCP server over SSE\nuvx mcp-proxy --sse-port=8000 -- npx -y @modelcontextprotocol/server-filesystem ~\n
swarms-rs currently supports:
OpenAI (GPT models)
DeepSeek AI
More providers coming soon
When enable_autosave
is set to true
, the agent will save its state to the directory specified in save_state_dir
. This includes conversation history and tool states, allowing the agent to resume from where it left off.
What is the difference between max_loops and retry_attempts? max_loops controls how many reasoning steps the agent can take for a single query, while retry_attempts specifies how many times the agent will retry if an error occurs.
You can create your own MCP server by implementing the MCP protocol. Check out the MCP documentation for details on the protocol specification.
Can I use tools without MCP? Currently, swarms-rs is designed to use the MCP protocol for tool integration. This provides a standardized way for agents to interact with external systems.
"},{"location":"swarms_rs/agents/#advanced-topics","title":"Advanced Topics","text":""},{"location":"swarms_rs/agents/#performance-optimization","title":"Performance Optimization","text":"Optimize your agent's performance by:
Be specific about the agent's role and capabilities
Include clear instructions on how to use available tools
Define success criteria for the agent's responses
Tuning Loop Parameters:
Start with lower values for max_loops
and increase as needed
Consider the complexity of tasks when setting loop limits
Strategic Tool Integration:
Only integrate tools that are necessary for the agent's tasks
Provide clear documentation in the system prompt about when to use each tool
Security Notice
When using file system tools or other system-level access, always be careful about permissions. Limit the scope of what your agent can access, especially in production environments.
"},{"location":"swarms_rs/agents/#coming-soon","title":"Coming Soon","text":"Memory plugins for different storage backends
Additional LLM providers
Group agent coordination
Function calling
Custom tool development framework
Contributions to swarms-rs are welcome! Check out our GitHub repository for more information.
"},{"location":"swarms_rs/overview/","title":"swarms-rs \ud83d\ude80","text":""},{"location":"swarms_rs/overview/#overview","title":"\ud83d\udcd6 Overview","text":"swarms-rs is an enterprise-grade, production-ready multi-agent orchestration framework built in Rust, designed to handle the most demanding tasks with unparalleled speed and efficiency. By leveraging Rust's bleeding-edge performance and safety features, swarms-rs provides a powerful and scalable solution for orchestrating complex multi-agent systems across various industries.
"},{"location":"swarms_rs/overview/#key-benefits","title":"\u2728 Key Benefits","text":""},{"location":"swarms_rs/overview/#extreme-performance","title":"\u26a1 Extreme Performance","text":"Multi-Threaded Architecture
Utilize the full potential of modern multi-core processors
Zero-cost abstractions and fearless concurrency
Minimal overhead with maximum throughput
Optimal resource utilization
Bleeding-Edge Speed
Near-zero latency execution
Lightning-fast performance
Ideal for high-frequency applications
Perfect for real-time systems
GitHub
Crates.io
Documentation
pip3 install -U swarms-tools yfinance requests httpx pandas loguru backoff web3 solana spl-token\n
"},{"location":"swarms_tools/finance/#environment-variables","title":"Environment Variables","text":"Create a .env
file in your project root with the following variables (as needed):
COINBASE_API_KEY
Coinbase API Key Coinbase Trading COINBASE_API_SECRET
Coinbase API Secret Coinbase Trading COINBASE_API_PASSPHRASE
Coinbase API Passphrase Coinbase Trading COINMARKETCAP_API_KEY
CoinMarketCap API Key CoinMarketCap Data HELIUS_API_KEY
Helius API Key Solana Data EODHD_API_KEY
EODHD API Key Stock News OKX_API_KEY
OKX API Key OKX Trading OKX_API_SECRET
OKX API Secret OKX Trading OKX_PASSPHRASE
OKX Passphrase OKX Trading"},{"location":"swarms_tools/finance/#tools-overview","title":"Tools Overview","text":"Tool Description Requires API Key Yahoo Finance Real-time stock market data No CoinGecko Cryptocurrency market data No Coinbase Cryptocurrency trading and data Yes CoinMarketCap Cryptocurrency market data Yes Helius Solana blockchain data Yes DexScreener DEX trading pairs and data No HTX (Huobi) Cryptocurrency exchange data No OKX Cryptocurrency exchange data Yes EODHD Stock market news Yes Jupiter Solana DEX aggregator No Sector Analysis GICS sector ETF analysis No Solana Tools Solana wallet and token tools Yes"},{"location":"swarms_tools/finance/#detailed-documentation","title":"Detailed Documentation","text":""},{"location":"swarms_tools/finance/#yahoo-finance-api","title":"Yahoo Finance API","text":"Fetch real-time and historical stock market data.
from swarms_tools.finance import yahoo_finance_api\n\n# Fetch data for single stock\ndata = yahoo_finance_api([\"AAPL\"])\n\n# Fetch data for multiple stocks\ndata = yahoo_finance_api([\"AAPL\", \"GOOG\", \"MSFT\"])\n
Arguments:
Parameter Type Description Required stock_symbols List[str] List of stock symbols Yes"},{"location":"swarms_tools/finance/#coingecko-api","title":"CoinGecko API","text":"Fetch comprehensive cryptocurrency data.
from swarms_tools.finance import coin_gecko_coin_api\n\n# Fetch Bitcoin data\ndata = coin_gecko_coin_api(\"bitcoin\")\n
Arguments:
Parameter Type Description Required coin str Cryptocurrency ID (e.g., 'bitcoin') Yes"},{"location":"swarms_tools/finance/#coinbase-trading","title":"Coinbase Trading","text":"Execute trades and fetch market data from Coinbase.
from swarms_tools.finance import get_coin_data, place_buy_order, place_sell_order\n\n# Fetch coin data\ndata = get_coin_data(\"BTC-USD\")\n\n# Place orders\nbuy_order = place_buy_order(\"BTC-USD\", amount=100) # Buy $100 worth of BTC\nsell_order = place_sell_order(\"BTC-USD\", amount=0.01) # Sell 0.01 BTC\n
Arguments:
Parameter Type Description Required symbol str Trading pair (e.g., 'BTC-USD') Yes amount Union[str, float, Decimal] Trade amount Yes sandbox bool Use sandbox environment No"},{"location":"swarms_tools/finance/#coinmarketcap-api","title":"CoinMarketCap API","text":"Fetch cryptocurrency market data from CoinMarketCap.
from swarms_tools.finance import coinmarketcap_api\n\n# Fetch single coin data\ndata = coinmarketcap_api([\"Bitcoin\"])\n\n# Fetch multiple coins\ndata = coinmarketcap_api([\"Bitcoin\", \"Ethereum\", \"Tether\"])\n
Arguments:
Parameter Type Description Required coin_names Optional[List[str]] List of coin names No"},{"location":"swarms_tools/finance/#helius-api-solana","title":"Helius API (Solana)","text":"Fetch Solana blockchain data.
from swarms_tools.finance import helius_api_tool\n\n# Fetch account data\naccount_data = helius_api_tool(\"account\", \"account_address\")\n\n# Fetch transaction data\ntx_data = helius_api_tool(\"transaction\", \"tx_signature\")\n\n# Fetch token data\ntoken_data = helius_api_tool(\"token\", \"token_mint_address\")\n
Arguments:
Parameter Type Description Required action str Type of action ('account', 'transaction', 'token') Yes identifier str Address/signature to query Yes"},{"location":"swarms_tools/finance/#dexscreener-api","title":"DexScreener API","text":"Fetch DEX trading pair data.
from swarms_tools.finance import (\n fetch_dex_screener_profiles,\n fetch_latest_token_boosts,\n fetch_solana_token_pairs\n)\n\n# Fetch latest profiles\nprofiles = fetch_dex_screener_profiles()\n\n# Fetch token boosts\nboosts = fetch_latest_token_boosts()\n\n# Fetch Solana pairs\npairs = fetch_solana_token_pairs([\"token_address\"])\n
"},{"location":"swarms_tools/finance/#htx-huobi-api","title":"HTX (Huobi) API","text":"Fetch cryptocurrency data from HTX.
from swarms_tools.finance import fetch_htx_data\n\n# Fetch coin data\ndata = fetch_htx_data(\"BTC\")\n
Arguments:
Parameter Type Description Required coin_name str Cryptocurrency symbol Yes"},{"location":"swarms_tools/finance/#okx-api","title":"OKX API","text":"Fetch cryptocurrency data from OKX.
from swarms_tools.finance import okx_api_tool\n\n# Fetch single coin\ndata = okx_api_tool([\"BTC-USDT\"])\n\n# Fetch multiple coins\ndata = okx_api_tool([\"BTC-USDT\", \"ETH-USDT\"])\n
Arguments:
Parameter Type Description Required coin_symbols Optional[List[str]] List of trading pairs No"},{"location":"swarms_tools/finance/#eodhd-stock-news","title":"EODHD Stock News","text":"Fetch stock market news.
from swarms_tools.finance import fetch_stock_news\n\n# Fetch news for a stock\nnews = fetch_stock_news(\"AAPL\")\n
Arguments:
Parameter Type Description Required stock_name str Stock symbol Yes"},{"location":"swarms_tools/finance/#jupiter-solana-dex","title":"Jupiter (Solana DEX)","text":"Fetch Solana DEX prices.
from swarms_tools.finance import get_jupiter_price\n\n# Fetch price data\nprice = get_jupiter_price(input_mint=\"input_token\", output_mint=\"output_token\")\n
Arguments:
Parameter Type Description Required input_mint str Input token mint address Yes output_mint str Output token mint address Yes"},{"location":"swarms_tools/finance/#sector-analysis","title":"Sector Analysis","text":"Analyze GICS sector ETFs.
from swarms_tools.finance.sector_analysis import analyze_index_sectors\n\n# Run sector analysis\nanalyze_index_sectors()\n
"},{"location":"swarms_tools/finance/#solana-tools","title":"Solana Tools","text":"Check Solana wallet balances and manage tokens.
from swarms_tools.finance import check_solana_balance, check_multiple_wallets\n\n# Check single wallet\nbalance = check_solana_balance(\"wallet_address\")\n\n# Check multiple wallets\nbalances = check_multiple_wallets([\"wallet1\", \"wallet2\"])\n
Arguments:
Parameter Type Description Required wallet_address str Solana wallet address Yes wallet_addresses List[str] List of wallet addresses Yes"},{"location":"swarms_tools/finance/#complete-example","title":"Complete Example","text":"Here's a comprehensive example using multiple tools:
from swarms_tools.finance import (\n yahoo_finance_api,\n coin_gecko_coin_api,\n coinmarketcap_api,\n fetch_htx_data\n)\n\n# Fetch stock data\nstocks = yahoo_finance_api([\"AAPL\", \"GOOG\"])\nprint(\"Stock Data:\", stocks)\n\n# Fetch crypto data from multiple sources\nbitcoin_cg = coin_gecko_coin_api(\"bitcoin\")\nprint(\"Bitcoin Data (CoinGecko):\", bitcoin_cg)\n\ncrypto_cmc = coinmarketcap_api([\"Bitcoin\", \"Ethereum\"])\nprint(\"Crypto Data (CoinMarketCap):\", crypto_cmc)\n\nbtc_htx = fetch_htx_data(\"BTC\")\nprint(\"Bitcoin Data (HTX):\", btc_htx)\n
"},{"location":"swarms_tools/finance/#error-handling","title":"Error Handling","text":"All tools include proper error handling and logging. Errors are logged using the loguru
logger. Example error handling:
from loguru import logger\n\ntry:\n data = yahoo_finance_api([\"INVALID\"])\nexcept Exception as e:\n logger.error(f\"Error fetching stock data: {e}\")\n
"},{"location":"swarms_tools/finance/#rate-limits","title":"Rate Limits","text":"Please be aware of rate limits for various APIs: - CoinGecko: 50 calls/minute (free tier) - CoinMarketCap: Varies by subscription - Helius: Varies by subscription - DexScreener: 300 calls/minute for pairs, 60 calls/minute for profiles - Other APIs: Refer to respective documentation
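To stay under these limits, a simple client-side throttle can space out calls. The sketch below is illustrative only and not part of swarms-tools; `rate_limited` and `fetch_bitcoin` are hypothetical names, assuming the CoinGecko free tier of 50 calls/minute:

```python
import time
from functools import wraps

def rate_limited(calls_per_minute: int):
    """Decorator enforcing a minimum delay between successive calls."""
    min_interval = 60.0 / calls_per_minute
    def decorator(func):
        last_call = [0.0]  # mutable cell so the wrapper can update it
        @wraps(func)
        def wrapper(*args, **kwargs):
            elapsed = time.monotonic() - last_call[0]
            if elapsed < min_interval:
                time.sleep(min_interval - elapsed)  # wait out the remainder
            last_call[0] = time.monotonic()
            return func(*args, **kwargs)
        return wrapper
    return decorator

# Hypothetical wrapper around a CoinGecko call (50 calls/minute on the free tier)
@rate_limited(calls_per_minute=50)
def fetch_bitcoin():
    # In practice: return coin_gecko_coin_api("bitcoin")
    return "ok"
```

A throttle like this only guards a single process; multi-process deployments would need a shared limiter (e.g., backed by Redis).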
"},{"location":"swarms_tools/finance/#dependencies","title":"Dependencies","text":"The package automatically handles most dependencies, but you may need to install some manually:
"},{"location":"swarms_tools/overview/","title":"Swarms Tools","text":"Welcome to Swarms Tools, the ultimate package for integrating cutting-edge APIs into Python functions with seamless multi-agent system compatibility. Designed for enterprises at the forefront of innovation, Swarms Tools is your key to simplifying complexity and unlocking operational excellence.
"},{"location":"swarms_tools/overview/#features","title":"\ud83d\ude80 Features","text":"pip3 install -U swarms-tools\n
"},{"location":"swarms_tools/overview/#directory-structure","title":"\ud83d\udcc2 Directory Structure","text":"swarms-tools/\n\u251c\u2500\u2500 swarms_tools/\n\u2502 \u251c\u2500\u2500 finance/\n\u2502 \u2502 \u251c\u2500\u2500 htx_tool.py\n\u2502 \u2502 \u251c\u2500\u2500 eodh_api.py\n\u2502 \u2502 \u2514\u2500\u2500 coingecko_tool.py\n\u2502 \u251c\u2500\u2500 social_media/\n\u2502 \u2502 \u251c\u2500\u2500 telegram_tool.py\n\u2502 \u251c\u2500\u2500 utilities/\n\u2502 \u2502 \u2514\u2500\u2500 logging.py\n\u251c\u2500\u2500 tests/\n\u2502 \u251c\u2500\u2500 test_financial_data.py\n\u2502 \u2514\u2500\u2500 test_social_media.py\n\u2514\u2500\u2500 README.md\n
"},{"location":"swarms_tools/overview/#use-cases","title":"\ud83d\udcbc Use Cases","text":""},{"location":"swarms_tools/overview/#finance","title":"Finance","text":"Explore our diverse range of financial tools, designed to streamline your operations. If you need a tool not listed, feel free to submit an issue or accelerate integration by contributing a pull request with your tool of choice.
Tool Name Function Descriptionfetch_stock_news
fetch_stock_news
Fetches the latest stock news and updates. fetch_htx_data
fetch_htx_data
Retrieves financial data from the HTX platform. yahoo_finance_api
yahoo_finance_api
Fetches comprehensive stock data from Yahoo Finance, including prices and trends. coin_gecko_coin_api
coin_gecko_coin_api
Fetches cryptocurrency data from CoinGecko, including market and price information. helius_api_tool
helius_api_tool
Retrieves blockchain account, transaction, or token data using the Helius API. okx_api_tool
okx_api_tool
Fetches detailed cryptocurrency data for coins from the OKX exchange."},{"location":"swarms_tools/overview/#financial-data-retrieval","title":"Financial Data Retrieval","text":"Enable precise and actionable financial insights:
"},{"location":"swarms_tools/overview/#example-1-fetch-historical-data","title":"Example 1: Fetch Historical Data","text":"from swarms_tools import fetch_htx_data\n\n# Fetch historical trading data for the $swarms token\nresponse = fetch_htx_data(\"swarms\")\nprint(response)\n
"},{"location":"swarms_tools/overview/#example-2-stock-news-analysis","title":"Example 2: Stock News Analysis","text":"from swarms_tools import fetch_stock_news\n\n# Retrieve latest stock news for Apple\nnews = fetch_stock_news(\"AAPL\")\nprint(news)\n
"},{"location":"swarms_tools/overview/#example-3-cryptocurrency-metrics","title":"Example 3: Cryptocurrency Metrics","text":"from swarms_tools import coin_gecko_coin_api\n\n# Fetch live data for Bitcoin\ncrypto_data = coin_gecko_coin_api(\"bitcoin\")\nprint(crypto_data)\n
"},{"location":"swarms_tools/overview/#social-media-automation","title":"Social Media Automation","text":"Streamline communication and engagement:
"},{"location":"swarms_tools/overview/#example-telegram-bot-messaging","title":"Example: Telegram Bot Messaging","text":"from swarms_tools import telegram_dm_or_tag_api\n\ndef send_alert(response: str):\n telegram_dm_or_tag_api(response)\n\n# Send a message to a user or group\nsend_alert(\"Mission-critical update from Swarms.\")\n
"},{"location":"swarms_tools/overview/#dex-screener","title":"Dex Screener","text":"This is a tool that allows you to fetch data from the Dex Screener API. It supports multiple chains and multiple tokens.
from swarms_tools.finance.dex_screener import (\n fetch_latest_token_boosts,\n fetch_dex_screener_profiles,\n)\n\n\nfetch_dex_screener_profiles()\nfetch_latest_token_boosts()\n
"},{"location":"swarms_tools/overview/#structs","title":"Structs","text":"The tool chainer enables the execution of multiple tools in a sequence, allowing for the aggregation of their results in either a parallel or sequential manner.
"},{"location":"swarms_tools/overview/#example-usage","title":"Example Usage","text":"# Example usage\nfrom loguru import logger\n\nfrom swarms_tools.structs import tool_chainer\n\n\nif __name__ == \"__main__\":\n logger.add(\"tool_chainer.log\", rotation=\"500 MB\", level=\"INFO\")\n\n # Example tools\n def tool1():\n return \"Tool1 Result\"\n\n def tool2():\n return \"Tool2 Result\"\n\n # def tool3():\n # raise ValueError(\"Simulated error in Tool3\")\n\n tools = [tool1, tool2]\n\n # Parallel execution\n parallel_results = tool_chainer(tools, parallel=True)\n print(\"Parallel Results:\", parallel_results)\n\n # Sequential execution\n # sequential_results = tool_chainer(tools, parallel=False)\n # print(\"Sequential Results:\", sequential_results)\n
"},{"location":"swarms_tools/overview/#standardized-schema","title":"\ud83e\udde9 Standardized Schema","text":"Every tool in Swarms Tools adheres to a strict schema for maintainability and interoperability:
"},{"location":"swarms_tools/overview/#schema-template","title":"Schema Template","text":"Encapsulate API logic into a modular, reusable function.
Typing:
Example:
def fetch_data(symbol: str, date_range: str) -> str:\n \"\"\"\n Fetch financial data for a given symbol and date range.\n\n Args:\n symbol (str): Ticker symbol of the asset.\n date_range (str): Timeframe for the data (e.g., '1d', '1m', '1y').\n\n Returns:\n str: A standardized string containing financial metrics.\n \"\"\"\n pass\n
Include detailed docstrings with parameter explanations and usage examples.
Output Standardization:
Ensure consistent outputs (e.g., strings) for easy downstream agent integration.
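A minimal sketch of what such output standardization might look like; the `standardize_output` helper below is hypothetical (not part of swarms-tools) and simply coerces any tool result into a string:

```python
import json

def standardize_output(data) -> str:
    """Normalize any tool result into a string for downstream agent consumption."""
    if isinstance(data, str):
        return data  # already standardized
    # Fall back to pretty-printed JSON; `default=str` handles Decimals, dates, etc.
    return json.dumps(data, indent=2, default=str)

print(standardize_output({"symbol": "AAPL", "price": 231.5}))
```

Returning strings uniformly means agents can pass any tool's result directly into a prompt without type checks.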
API-Key Management:
os.getenv(\"YOUR_KEY\")
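A small stdlib-only sketch of fail-fast key loading; `require_env` and the `SWARMS_DEMO_KEY` variable name are illustrative assumptions, not part of swarms-tools:

```python
import os

def require_env(name: str) -> str:
    """Fetch a required API key from the environment, failing fast with a clear message."""
    value = os.getenv(name)
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value

# For illustration only: seed a demo value so the call below succeeds
os.environ.setdefault("SWARMS_DEMO_KEY", "demo-key")
print(require_env("SWARMS_DEMO_KEY"))
```

Failing fast at startup surfaces configuration problems before any agent run begins, rather than deep inside a tool call.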
Comprehensive documentation is available to guide developers and enterprises. Visit our official docs for detailed API references, usage examples, and best practices.
"},{"location":"swarms_tools/overview/#contributing","title":"\ud83d\udee0 Contributing","text":"We welcome contributions from the global developer community. To contribute:
feature/add-new-tool
This project is licensed under the MIT License. See the LICENSE file for details.
"},{"location":"swarms_tools/overview/#join-the-future","title":"\ud83c\udf20 Join the Future","text":"Explore the limitless possibilities of agent-based systems. Together, we can build a smarter, faster, and more interconnected world.
Visit us: Swarms Corporation Follow us: Twitter
\"The future belongs to those who dare to automate it.\" \u2014 The Swarms Corporation
"},{"location":"swarms_tools/search/","title":"Search Tools Documentation","text":"This documentation covers the search tools available in the swarms-tools
package.
pip3 install -U swarms-tools\n
"},{"location":"swarms_tools/search/#environment-variables-required","title":"Environment Variables Required","text":"Create a .env
file in your project root with the following API keys:
# Bing Search API\nBING_API_KEY=your_bing_api_key\n\n# Google Search API\nGOOGLE_API_KEY=your_google_api_key\nGOOGLE_CX=your_google_cx_id\nGEMINI_API_KEY=your_gemini_api_key\n\n# Exa AI API\nEXA_API_KEY=your_exa_api_key\n
"},{"location":"swarms_tools/search/#tools-overview","title":"Tools Overview","text":""},{"location":"swarms_tools/search/#1-bing-search-tool","title":"1. Bing Search Tool","text":"The Bing Search tool allows you to fetch web articles using the Bing Web Search API.
"},{"location":"swarms_tools/search/#function-fetch_web_articles_bing_api","title":"Function:fetch_web_articles_bing_api
","text":"Parameter Type Required Description query str Yes The search query to retrieve articles"},{"location":"swarms_tools/search/#example-usage","title":"Example Usage:","text":"from swarms_tools.search import fetch_web_articles_bing_api\n\n# Fetch articles about AI\nresults = fetch_web_articles_bing_api(\"swarms ai github\")\nprint(results)\n
"},{"location":"swarms_tools/search/#2-exa-ai-search-tool","title":"2. Exa AI Search Tool","text":"The Exa AI tool is designed for searching research papers and academic content.
"},{"location":"swarms_tools/search/#function-search_exa_ai","title":"Function:search_exa_ai
","text":"Parameter Type Required Default Description query str Yes \"Latest developments in LLM capabilities\" Search query num_results int No 10 Number of results to return auto_prompt bool No True Whether to use auto-prompting include_domains List[str] No [\"arxiv.org\", \"paperswithcode.com\"] Domains to include exclude_domains List[str] No [] Domains to exclude category str No \"research paper\" Category of search"},{"location":"swarms_tools/search/#example-usage_1","title":"Example Usage:","text":"from swarms_tools.search import search_exa_ai\n\n# Search for research papers\nresults = search_exa_ai(\n query=\"Latest developments in LLM capabilities\",\n num_results=5,\n include_domains=[\"arxiv.org\"]\n)\nprint(results)\n
"},{"location":"swarms_tools/search/#3-google-search-tool","title":"3. Google Search Tool","text":"A comprehensive search tool that uses Google Custom Search API and includes content extraction and summarization using Gemini.
"},{"location":"swarms_tools/search/#class-websitechecker","title":"Class:WebsiteChecker
","text":"Method Parameters Description search query: str Main search function that fetches, processes, and summarizes results"},{"location":"swarms_tools/search/#example-usage_2","title":"Example Usage:","text":"from swarms_tools.search import WebsiteChecker\n\n# Initialize with an agent (required for summarization)\nchecker = WebsiteChecker(agent=your_agent_function)\n\n# Perform search\nasync def search_example():\n results = await checker.search(\"who won elections 2024 us\")\n print(results)\n\n# For synchronous usage\nfrom swarms_tools.search import search\n\nresults = search(\"who won elections 2024 us\", agent=your_agent_function)\nprint(results)\n
"},{"location":"swarms_tools/search/#features","title":"Features","text":"The tools automatically handle dependency installation, but here are the main requirements:
aiohttp\nasyncio\nbeautifulsoup4\ngoogle-generativeai\nhtml2text\nplaywright\npython-dotenv\nrich\ntenacity\n
"},{"location":"swarms_tools/search/#error-handling","title":"Error Handling","text":"All tools include robust error handling: - Automatic retries for failed requests - Timeout handling - Rate limiting consideration - Detailed error messages
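The automatic-retry behavior can be approximated with a small stdlib-only helper; this is an illustrative sketch (the `fetch_with_retries` helper and `flaky` fetcher are hypothetical, not the package's internal implementation):

```python
import time

def fetch_with_retries(fetch, attempts: int = 3, base_delay: float = 0.1):
    """Call `fetch`, retrying on any exception with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fetch()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: surface the last error
            time.sleep(base_delay * (2 ** attempt))

# Hypothetical flaky fetcher that fails twice, then succeeds
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("transient failure")
    return "results"

print(fetch_with_retries(flaky))  # succeeds on the third attempt
```

The `tenacity` library listed in the dependencies offers the same pattern declaratively via its `@retry` decorator.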
"},{"location":"swarms_tools/search/#output-format","title":"Output Format","text":"Each tool provides structured output:
For issues and feature requests, please visit the GitHub repository.
"},{"location":"swarms_tools/twitter/","title":"Twitter Tool Documentation","text":""},{"location":"swarms_tools/twitter/#overview","title":"Overview","text":"The Twitter Tool provides a convenient interface for interacting with Twitter's API through the swarms-tools package. This documentation covers the initialization process and available functions for posting, replying, liking, and quoting tweets, as well as retrieving metrics.
"},{"location":"swarms_tools/twitter/#installation","title":"Installation","text":"pip install swarms-tools\n
"},{"location":"swarms_tools/twitter/#authentication","title":"Authentication","text":"The Twitter Tool requires Twitter API credentials for authentication. These should be stored as environment variables:
TWITTER_ID=your_twitter_id\nTWITTER_NAME=your_twitter_name\nTWITTER_DESCRIPTION=your_twitter_description\nTWITTER_API_KEY=your_api_key\nTWITTER_API_SECRET_KEY=your_api_secret_key\nTWITTER_ACCESS_TOKEN=your_access_token\nTWITTER_ACCESS_TOKEN_SECRET=your_access_token_secret\n
"},{"location":"swarms_tools/twitter/#initialization","title":"Initialization","text":""},{"location":"swarms_tools/twitter/#twittertool-configuration-options","title":"TwitterTool Configuration Options","text":"Parameter Type Required Description id str Yes Unique identifier for the Twitter tool instance name str Yes Name of the Twitter tool instance description str No Description of the tool's purpose credentials dict Yes Dictionary containing Twitter API credentials"},{"location":"swarms_tools/twitter/#credentials-dictionary-structure","title":"Credentials Dictionary Structure","text":"Key Type Required Description apiKey str Yes Twitter API Key apiSecretKey str Yes Twitter API Secret Key accessToken str Yes Twitter Access Token accessTokenSecret str Yes Twitter Access Token Secret"},{"location":"swarms_tools/twitter/#available-functions","title":"Available Functions","text":""},{"location":"swarms_tools/twitter/#initialize_twitter_tool","title":"initialize_twitter_tool()","text":"Creates and returns a new instance of the TwitterTool.
def initialize_twitter_tool() -> TwitterTool:\n
Returns: - TwitterTool: Initialized Twitter tool instance
"},{"location":"swarms_tools/twitter/#post_tweet","title":"post_tweet()","text":"Posts a new tweet to Twitter.
Parameter Type Required Description tweet str Yes Text content of the tweet to postRaises: - tweepy.TweepyException: If tweet posting fails
"},{"location":"swarms_tools/twitter/#reply_tweet","title":"reply_tweet()","text":"Replies to an existing tweet.
Parameter Type Required Description tweet_id int Yes ID of the tweet to reply to reply str Yes Text content of the replyRaises: - tweepy.TweepyException: If reply posting fails
"},{"location":"swarms_tools/twitter/#like_tweet","title":"like_tweet()","text":"Likes a specified tweet.
Parameter Type Required Description tweet_id int Yes ID of the tweet to likeRaises: - tweepy.TweepyException: If liking the tweet fails
"},{"location":"swarms_tools/twitter/#quote_tweet","title":"quote_tweet()","text":"Creates a quote tweet.
Parameter Type Required Description tweet_id int Yes ID of the tweet to quote quote str Yes Text content to add to the quoted tweetRaises: - tweepy.TweepyException: If quote tweet creation fails
"},{"location":"swarms_tools/twitter/#get_metrics","title":"get_metrics()","text":"Retrieves Twitter metrics.
Returns: - Dict[str, int]: Dictionary containing various Twitter metrics
Raises: - tweepy.TweepyException: If metrics retrieval fails
"},{"location":"swarms_tools/twitter/#usage-examples","title":"Usage Examples","text":""},{"location":"swarms_tools/twitter/#basic-tweet-posting","title":"Basic Tweet Posting","text":"from swarms_tools.twitter import initialize_twitter_tool, post_tweet\n\n# Post a simple tweet\npost_tweet(\"Hello, Twitter!\")\n
"},{"location":"swarms_tools/twitter/#interacting-with-tweets","title":"Interacting with Tweets","text":"# Reply to a tweet\nreply_tweet(12345, \"Great point!\")\n\n# Like a tweet\nlike_tweet(12345)\n\n# Quote a tweet\nquote_tweet(12345, \"Adding my thoughts on this!\")\n
"},{"location":"swarms_tools/twitter/#retrieving-metrics","title":"Retrieving Metrics","text":"metrics = get_metrics()\nprint(f\"Current metrics: {metrics}\")\n
"},{"location":"swarms_tools/twitter/#error-handling","title":"Error Handling","text":"All functions include built-in error handling and will print error messages if operations fail. It's recommended to implement additional error handling in production environments:
try:\n post_tweet(\"Hello, Twitter!\")\nexcept Exception as e:\n logger.error(f\"Tweet posting failed: {e}\")\n # Implement appropriate error handling\n
"},{"location":"swarms_tools/twitter/#production-example","title":"Production Example","text":"This is an example of how to use the TwitterTool in a production environment using Swarms.
import os\nimport time\n\nfrom swarms import Agent\nfrom dotenv import load_dotenv\n\nfrom swarms_tools.social_media.twitter_tool import TwitterTool\n\nmedical_coder = Agent(\n agent_name=\"Medical Coder\",\n system_prompt=\"\"\"\n You are a highly experienced and certified medical coder with extensive knowledge of ICD-10 coding guidelines, clinical documentation standards, and compliance regulations. Your responsibility is to ensure precise, compliant, and well-documented coding for all clinical cases.\n\n ### Primary Responsibilities:\n 1. **Review Clinical Documentation**: Analyze all available clinical records, including specialist inputs, physician notes, lab results, imaging reports, and discharge summaries.\n 2. **Assign Accurate ICD-10 Codes**: Identify and assign appropriate codes for primary diagnoses, secondary conditions, symptoms, and complications.\n 3. **Ensure Coding Compliance**: Follow the latest ICD-10-CM/PCS coding guidelines, payer-specific requirements, and organizational policies.\n 4. **Document Code Justification**: Provide clear, evidence-based rationale for each assigned code.\n\n ### Detailed Coding Process:\n - **Review Specialist Inputs**: Examine all relevant documentation to capture the full scope of the patient's condition and care provided.\n - **Identify Diagnoses**: Determine the primary and secondary diagnoses, as well as any symptoms or complications, based on the documentation.\n - **Assign ICD-10 Codes**: Select the most accurate and specific ICD-10 codes for each identified diagnosis or condition.\n - **Document Supporting Evidence**: Record the documentation source (e.g., lab report, imaging, or physician note) for each code to justify its assignment.\n - **Address Queries**: Note and flag any inconsistencies, missing information, or areas requiring clarification from providers.\n\n ### Output Requirements:\n Your response must be clear, structured, and compliant with professional standards. 
Use the following format:\n\n 1. **Primary Diagnosis Codes**:\n - **ICD-10 Code**: [e.g., E11.9]\n - **Description**: [e.g., Type 2 diabetes mellitus without complications]\n - **Supporting Documentation**: [e.g., Physician's note dated MM/DD/YYYY]\n\n 2. **Secondary Diagnosis Codes**:\n - **ICD-10 Code**: [Code]\n - **Description**: [Description]\n - **Order of Clinical Significance**: [Rank or priority]\n\n 3. **Symptom Codes**:\n - **ICD-10 Code**: [Code]\n - **Description**: [Description]\n\n 4. **Complication Codes**:\n - **ICD-10 Code**: [Code]\n - **Description**: [Description]\n - **Relevant Documentation**: [Source of information]\n\n 5. **Coding Notes**:\n - Observations, clarifications, or any potential issues requiring provider input.\n\n ### Additional Guidelines:\n - Always prioritize specificity and compliance when assigning codes.\n - For ambiguous cases, provide a brief note with reasoning and flag for clarification.\n - Ensure the output format is clean, consistent, and ready for professional use.\n \"\"\",\n model_name=\"gpt-4o-mini\",\n max_tokens=3000,\n max_loops=1,\n dynamic_temperature_enabled=True,\n)\n\n\n# Define your options with the necessary credentials\noptions = {\n \"id\": \"mcsswarm\",\n \"name\": \"mcsswarm\",\n \"description\": \"An example Twitter Plugin for testing.\",\n \"credentials\": {\n \"apiKey\": os.getenv(\"TWITTER_API_KEY\"),\n \"apiSecretKey\": os.getenv(\"TWITTER_API_SECRET_KEY\"),\n \"accessToken\": os.getenv(\"TWITTER_ACCESS_TOKEN\"),\n \"accessTokenSecret\": os.getenv(\"TWITTER_ACCESS_TOKEN_SECRET\"),\n },\n}\n\n# Initialize the TwitterTool with your options\ntwitter_plugin = TwitterTool(options)\n\n# # Post a tweet\n# post_tweet_fn = twitter_plugin.get_function('post_tweet')\n# post_tweet_fn(\"Hello world!\")\n\n\n# Assuming `twitter_plugin` and `medical_coder` are already initialized\npost_tweet = twitter_plugin.get_function(\"post_tweet\")\n\n# Set to track posted tweets and avoid duplicates\nposted_tweets = 
set()\n\n\ndef post_unique_tweet():\n \"\"\"\n Generate and post a unique tweet. Skip duplicates.\n \"\"\"\n tweet_prompt = (\n \"Share an intriguing, lesser-known fact about a medical disease, and include an innovative, fun, or surprising way to manage or cure it! \"\n \"Make the response playful, engaging, and inspiring\u2014something that makes people smile while learning. No markdown, just plain text!\"\n )\n\n # Generate a new tweet text\n tweet_text = medical_coder.run(tweet_prompt)\n\n # Check for duplicates\n if tweet_text in posted_tweets:\n print(\"Duplicate tweet detected. Skipping...\")\n return\n\n # Post the tweet\n try:\n post_tweet(tweet_text)\n print(f\"Posted tweet: {tweet_text}\")\n # Add the tweet to the set of posted tweets\n posted_tweets.add(tweet_text)\n except Exception as e:\n print(f\"Error posting tweet: {e}\")\n\n\n# Loop to post tweets every 10 seconds\ndef start_tweet_loop(interval=10):\n \"\"\"\n Continuously post tweets every `interval` seconds.\n\n Args:\n interval (int): Time in seconds between tweets.\n \"\"\"\n print(\"Starting tweet loop...\")\n while True:\n post_unique_tweet()\n time.sleep(interval)\n\n\n# Start the loop\nstart_tweet_loop(10)\n
"},{"location":"swarms_tools/twitter/#best-practices","title":"Best Practices","text":"Be aware of Twitter's API rate limits. Implement appropriate delays between requests in production environments to avoid hitting these limits.
"},{"location":"swarms_tools/twitter/#dependencies","title":"Dependencies","text":"Empowering the Agentic Revolution Token Contract Address: 74SBV4zDXxTRgv1pEMoECskKBkZHc2yGPnc7GYVepump
You can buy $swarms on most marketplaces: Pump.fun, Kraken, Bitget, Binance, OKX, and more.
"},{"location":"web3/token/#overview","title":"\ud83d\udce6 Overview","text":"$swarms
\u26a0\ufe0f At launch, only 2% was reserved for the team \u2014 among the smallest allocations in DAO history.
"},{"location":"web3/token/#a-message-from-the-team","title":"\ud83d\udce3 A Message from the Team","text":"Quote
When we launched $swarms, we prioritized community ownership by allocating just 2% to the team. Our intent was radical decentralization. But that decision has created unintended consequences.
"},{"location":"web3/token/#challenges-we-faced","title":"\u2757 Challenges We Faced","text":"We are initiating a DAO governance proposal to:
Key Reforms\ud83d\udcc8 Increase team allocation to 10% Secure operational longevity and attract top contributors.
\ud83c\udf31 Launch an ecosystem grants program Incentivize developers building agentic tools and infra.
\ud83d\udee1 Combat token manipulation Deploy anti-whale policies and explore token lockups.
\ud83e\udd1d Strengthen community dev initiatives Support contributor bounties, governance tooling, and hackathons.
This proposal isn\u2019t about centralizing power \u2014 it's about protecting and empowering the Swarms ecosystem.
"},{"location":"web3/token/#contribute-to-swarms-dao","title":"\ud83d\udcb8 Contribute to Swarms DAO","text":"To expand our ecosystem, grow the core team, and bring agentic AI to the world, we invite all community members to invest directly in Swarms DAO.
Send $swarms or SOL to our official treasury address:
\ud83e\ude99 DAO Treasury Wallet:\n7MaX4muAn8ZQREJxnupm8sgokwFHujgrGfH9Qn81BuEV\n
Every contribution matters
Whether it\u2019s 1 $swarms or 1000 SOL \u2014 you\u2019re helping fund a decentralized future.
You may use most wallets and platforms supporting Solana to send tokens.
"},{"location":"web3/token/#why-invest","title":"\ud83e\udde0 Why Invest?","text":"Your contributions fund:
Vote on governance proposals
Submit development or funding proposals
Share $swarms with your network
Build with our upcoming agent SDKs
Contribute to the mission of agentic decentralization
$swarms
Blockchain Solana Initial Team Allocation 2% (Proposed 10%) Public Distribution 98% DAO Wallet 7MaX4muAn8ZQREJxnupm8sgokwFHujgrGfH9Qn81BuEV
DAO Governance dao.swarms.world"},{"location":"web3/token/#useful-links","title":"\ud83c\udf0d Useful Links","text":"DAO Governance Portal
Investor Information
Official Site
Join Swarms on Discord
"}]}