# Careers at Swarms

We are a team of engineers, developers, and visionaries on a mission to build the future of AI by orchestrating multi-agent collaboration. We move fast, think ambitiously, and deliver with urgency. Join us if you want to be part of building the next generation of multi-agent systems, redefining how businesses automate operations and leverage AI.

**We do not yet offer any of the following benefits:**

- No medical, dental, or vision insurance
- No paid time off
- No life or AD&D insurance
- No short-term or long-term disability insurance
- No 401(k) plan

**Working hours:** 9 AM to 10 PM, every day, 7 days a week. This is not for people who seek work-life balance.

---

### Hiring Process: How to Join Swarms

We have a simple 3-step hiring process:

**NOTE:** We do not consider applicants who have not previously submitted a PR. To be considered, you must submit a PR containing a new feature or a bug fix.

1. **Submit a pull request (PR)**: Start by submitting an approved PR to the [Swarms GitHub repository](https://github.com/kyegomez/swarms) or the appropriate repository.
2. **Code review**: Our technical team will review your PR. If it meets our standards, you will be invited for a quick interview.
3. **Final interview**: Discuss your contributions and approach with our team. If you pass, you're in!

There are no recruiters. All evaluations are done by our technical team.

---

### Locations

- **Palo Alto, CA**: Our Palo Alto office houses the majority of our core research teams, including prompting, agent design, and model training.
- **Miami**: Our Miami office focuses on prompt engineering, agent design, and more.

### Open Roles at Swarms

**Infrastructure Engineer**

- Build and maintain the systems that run our AI multi-agent infrastructure.
- Expertise in SkyPilot, AWS, and Terraform.
- Ensure seamless, high-availability environments for agent operations.

**Agent Engineer**

- Design, develop, and orchestrate complex swarms of AI agents.
- Extensive experience with Python, multi-agent systems, and neural networks.
- Ability to create dynamic and efficient agent architectures from scratch.

**Prompt Engineer**

- Craft highly optimized prompts that drive our LLM-based agents.
- Specialize in instruction-based prompts, multi-shot examples, and production-grade deployment.
- Collaborate with agents to deliver state-of-the-art solutions.

**Front-End Engineer**

- Build sleek, intuitive interfaces for interacting with swarms of agents.
- Proficiency in Next.js, FastAPI, and modern front-end technologies.
- Design with the user experience in mind, integrating complex AI features into simple workflows.
# Swarms Framework Architecture

The Swarms package is designed to orchestrate and manage **swarms of agents**, enabling collaboration between multiple Large Language Models (LLMs) or other agent types to solve complex tasks. The architecture is modular and scalable, facilitating seamless integration of various agents, models, prompts, and tools. Below is an overview of the architectural components, along with instructions on where to find the corresponding documentation.

```
swarms/
├── agents/
├── artifacts/
├── cli/
├── memory/
├── models/
├── prompts/
├── schemas/
├── structs/
├── telemetry/
├── tools/
├── utils/
└── __init__.py
```

### Role of Folders in the Swarms Framework

The **Swarms framework** is composed of several key folders, each serving a specific role in building, orchestrating, and managing swarms of agents. Below is an in-depth explanation of the role of each folder in the framework's architecture, focusing on how they contribute to the overall system for handling complex multi-agent workflows.

---

### **1. Agents Folder (`agents/`)**

- **Role:**
  - The **agents** folder contains the core logic for individual agents within the Swarms framework. Agents are the key functional units responsible for carrying out specific tasks, whether it be text generation, web scraping, data analysis, or more specialized functions like marketing or accounting.
  - **Customization:** Each agent can be specialized for different tasks by defining custom system prompts and behaviors.
  - **Modular Agent System:** New agents can be easily added to this folder to expand the framework's capabilities.
  - **Importance:** This folder allows users to create and manage multiple types of agents that can interact and collaborate to solve complex problems.
  - **Examples:** Accounting agents, marketing agents, and programming agents (see the sketch below).
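
As a quick illustration of how an individual agent is typically defined and invoked, here is a minimal sketch. It assumes the `Agent` class exported from the top-level `swarms` package and an OpenAI-style model identifier; the exact constructor arguments vary between versions, so treat this as illustrative rather than canonical.

```python
from swarms import Agent  # assumes the top-level Agent export; check your installed version

# Define a specialized agent by giving it a name and a custom system prompt.
accounting_agent = Agent(
    agent_name="Accounting-Agent",
    system_prompt="You are an expert corporate accountant. Answer concisely.",
    model_name="gpt-4o-mini",  # illustrative model identifier; substitute your own
    max_loops=1,
)

# Run the agent on a single task and print its response.
result = accounting_agent.run("Summarize the key line items of a quarterly P&L statement.")
print(result)
```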

---

### **2. Artifacts Folder (`artifacts/`)**

- **Role:**
  - The **artifacts** folder is responsible for storing the results or outputs generated by agents and swarms. This could include reports, logs, or data that agents generate during task execution.
  - **Persistent Storage:** It helps maintain a persistent record of agent interactions, making it easier to retrieve or review past actions and outputs.
  - **Data Handling:** Users can configure this folder to store artifacts that are essential for later analysis or reporting.
  - **Importance:** Acts as a storage mechanism for important task-related outputs, ensuring that no data is lost after tasks are completed.

---

### **3. CLI Folder (`cli/`)**

- **Role:**
  - The **CLI** folder contains tools for interacting with the Swarms framework through the command-line interface. This allows users to easily manage and orchestrate swarms without needing a graphical interface.
  - **Command-line Tools:** Commands in this folder enable users to initiate, control, and monitor swarms, making the system accessible and versatile.
  - **Automation and Scriptability:** Enables advanced users to automate swarm interactions and deploy agents programmatically.
  - **Importance:** Provides a flexible way to control the Swarms system for developers who prefer using the command line.

---

### **4. Memory Folder (`memory/`) (Deprecated)**

- **Role:**
  - The **memory** folder handles the framework's memory management for agents. This allows agents to retain and recall past interactions or task contexts, enabling continuity in long-running processes or multi-step workflows.
  - **Context Retention:** Agents that depend on historical context to make decisions or carry out tasks can store and access memory using this folder.
  - **Long-Term and Short-Term Memory:** This could be implemented in various ways, such as short-term conversational memory or long-term knowledge storage.
  - **Importance:** Crucial for agents that require memory to handle complex workflows, where decisions are based on prior outputs or interactions.

---

### **5. Models Folder (`models/`)**

- **Role:**
  - The **models** folder houses pre-trained machine learning models that agents utilize to complete their tasks. These models could include LLMs (Large Language Models), custom-trained models, or fine-tuned models specific to the tasks being handled by the agents.
  - **Plug-and-Play Architecture:** The framework allows users to easily add or switch models depending on the specific needs of their agents.
  - **Custom Model Support:** Users can integrate custom models here for more specialized tasks.
  - **Importance:** Provides the computational backbone for agent decision-making and task execution.

---

### **6. Prompts Folder (`prompts/`)**

- **Role:**
  - The **prompts** folder contains reusable prompt templates that agents use to interact with their environment and complete tasks. These system prompts define the behavior and task orientation of the agents.
  - **Template Reusability:** Users can create and store common prompt templates, making it easy to define agent behavior across different tasks without rewriting prompts from scratch.
  - **Task-Specific Prompts:** For example, an accounting agent may have a prompt template that guides its interaction with financial data (see the example below).
  - **Importance:** Provides the logic and guidance agents need to generate outputs in a coherent and task-focused manner.
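
To make the idea of a reusable prompt template concrete, here is a minimal sketch of how such a template could be stored as a plain Python constant and specialized per task. The module and constant names are illustrative, not taken from the actual `prompts/` folder.

```python
# prompts/accounting_prompt.py (hypothetical module name, for illustration only)

ACCOUNTING_AGENT_PROMPT = """
You are a meticulous corporate accountant.
Task: {task}
Constraints:
- Cite the accounting standard you rely on (e.g., GAAP or IFRS).
- Present monetary figures with explicit currency symbols.
"""


def build_accounting_prompt(task: str) -> str:
    """Fill the reusable template with a concrete task description."""
    return ACCOUNTING_AGENT_PROMPT.format(task=task)


# Example usage: the rendered string becomes the agent's system prompt.
print(build_accounting_prompt("Reconcile Q3 travel expenses against the budget."))
```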

---

### **7. Schemas Folder (`schemas/`)**

- **Role:**
  - The **schemas** folder defines the data structures and validation logic for inputs and outputs within the framework, using tools like **Pydantic** for data validation.
  - **Standardization and Validation:** This ensures that all interactions between agents and swarms follow consistent data formats, which is critical for large-scale agent coordination and task management (see the sketch below).
  - **Error Prevention:** By validating data early, it prevents errors from propagating through the system, improving reliability.
  - **Importance:** Ensures data consistency across the entire framework, making it easier to integrate and manage swarms of agents at scale.
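
For instance, a schema for a single agent's output might look like the following Pydantic (v2) model. This is a generic sketch rather than a class taken from `schemas/`; the field names are assumptions chosen for illustration.

```python
from datetime import datetime
from typing import Optional

from pydantic import BaseModel, Field


class AgentOutputSchema(BaseModel):
    """Illustrative schema for validating one agent's task result."""

    agent_name: str = Field(..., description="Unique name of the agent that produced the output")
    task: str = Field(..., description="The task the agent was asked to perform")
    output: str = Field(..., description="The agent's response text")
    timestamp: datetime = Field(default_factory=datetime.utcnow)
    error: Optional[str] = Field(default=None, description="Populated if the agent failed")


# Validation happens at construction time; malformed data raises a ValidationError.
record = AgentOutputSchema(agent_name="Accounting-Agent", task="Summarize P&L", output="...")
print(record.model_dump_json(indent=2))
```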

---

### **8. Structs Folder (`structs/`)**

- **Role:**
  - The **structs** folder is the core of the Swarms framework, housing the orchestration logic for managing and coordinating swarms of agents. This folder allows for dynamic task assignment, queue management, inter-agent communication, and result aggregation.
  - **Swarm Management:** Agents are grouped into swarms to handle tasks that require multiple agents working in parallel or collaboratively.
  - **Scalability:** The swarm structure is designed to be scalable, allowing thousands of agents to operate together on distributed tasks.
  - **Task Queueing and Execution:** Supports task queueing, task prioritization, and load balancing between agents (see the sketch below).
  - **Importance:** This folder is critical for managing how agents interact and collaborate to solve complex, multi-step problems.
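
The kind of queueing and load balancing described above can be pictured with the following standalone sketch, which distributes queued tasks across worker agents. It is written against a minimal hypothetical agent interface (any callable from task string to output string), not the library's own swarm classes.

```python
import queue
import threading
from typing import Callable, List

AgentFn = Callable[[str], str]  # stand-in for an agent


def run_swarm_with_queue(agents: List[AgentFn], tasks: List[str]) -> List[str]:
    """Distribute queued tasks across agents, letting each agent pull work as it frees up."""
    task_queue: "queue.Queue[str]" = queue.Queue()
    for task in tasks:
        task_queue.put(task)

    results: List[str] = []
    lock = threading.Lock()

    def worker(agent: AgentFn) -> None:
        # Each worker keeps pulling tasks until the queue is drained.
        while True:
            try:
                task = task_queue.get_nowait()
            except queue.Empty:
                return
            output = agent(task)
            with lock:
                results.append(output)
            task_queue.task_done()

    threads = [threading.Thread(target=worker, args=(agent,)) for agent in agents]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results


# Example usage with two stand-in agents handling four tasks.
agent_a = lambda t: f"A handled {t}"
agent_b = lambda t: f"B handled {t}"
print(run_swarm_with_queue([agent_a, agent_b], ["t1", "t2", "t3", "t4"]))
```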

---

### **9. Telemetry Folder (`telemetry/`)**

- **Role:**
  - The **telemetry** folder provides logging and monitoring tools to capture agent performance metrics, error handling, and real-time activity tracking. It helps users keep track of what each agent or swarm is doing, making it easier to debug, audit, and optimize operations.
  - **Monitoring:** Tracks agent performance and system health.
  - **Logs:** Maintains logs for troubleshooting and operational review.
  - **Importance:** Provides visibility into the system, ensuring smooth operation and enabling fine-tuning of agent behaviors.

---

### **10. Tools Folder (`tools/`)**

- **Role:**
  - The **tools** folder contains specialized utility functions or scripts that agents and swarms may require to complete certain tasks, such as web scraping, API interactions, data parsing, or other external resource handling.
  - **Task-Specific Tools:** Agents can call these tools to perform operations outside of their own logic, enabling them to interact with external systems more efficiently.
  - **Importance:** Expands the capabilities of agents, allowing them to complete more sophisticated tasks by relying on these external tools.

---

### **11. Utils Folder (`utils/`)**

- **Role:**
  - The **utils** folder contains general-purpose utility functions that are reused throughout the framework. These may include functions for data formatting, validation, logging setup, and configuration management.
  - **Shared Utilities:** Helps keep the codebase clean by providing reusable functions that multiple agents or parts of the framework can call.
  - **Importance:** Provides common functions that help the Swarms framework operate efficiently and consistently.

---

### **Core Initialization File (`__init__.py`)**

- **Role:**
  - The `__init__.py` file is the entry point of the Swarms package, ensuring that all necessary modules, agents, and tools are loaded when the Swarms framework is imported. It allows for the modular loading of different components, making it easier for users to work with only the parts of the framework they need.
  - **Importance:** Acts as the bridge that connects all other components in the framework, enabling the entire package to work together seamlessly.

---

### How to Access Documentation

- **Official Documentation Site:**
  - URL: [docs.swarms.world](https://docs.swarms.world)
  - Here, users can find detailed guides, tutorials, and API references on how to use each of the folders mentioned above. The documentation covers setup, agent orchestration, and practical examples of how to leverage swarms for real-world tasks.

- **GitHub Repository:**
  - URL: [Swarms GitHub](https://github.com/kyegomez/swarms)
  - The repository contains code examples, detailed folder explanations, and further resources on how to get started with building and managing agent swarms.

By understanding the purpose and role of each folder in the Swarms framework, users can more effectively build, orchestrate, and manage agents to handle complex tasks and workflows at scale.

## Support

- **Post an Issue on GitHub**
  - URL: [Submit an issue](https://github.com/kyegomez/swarms/issues/new/choose)
  - Post your issue, whether it's a bug report or a feature request.

- **Community Support**
  - URL: [Join the Discord](https://discord.gg/agora-999382051935506503)
  - Ask the community for real-time support, or reach out to an admin.

---

### Federated Swarm

**Overview:**
A Federated Swarm architecture involves multiple independent swarms collaborating to complete a task. Each swarm operates autonomously but can share information and results with other swarms.

**Use-Cases:**
- Distributed learning systems where data is processed across multiple nodes.

- Scenarios requiring collaboration between different teams or departments.

```mermaid
graph TD
    A[Central Coordinator]
    subgraph Swarm1
        B1[Agent 1.1] --> B2[Agent 1.2]
        B2 --> B3[Agent 1.3]
    end
    subgraph Swarm2
        C1[Agent 2.1] --> C2[Agent 2.2]
        C2 --> C3[Agent 2.3]
    end
    subgraph Swarm3
        D1[Agent 3.1] --> D2[Agent 3.2]
        D2 --> D3[Agent 3.3]
    end
    B1 --> A
    C1 --> A
    D1 --> A
```

---

### Star Swarm

**Overview:**
A Star Swarm architecture features a central agent that coordinates the activities of several peripheral agents. The central agent assigns tasks to the peripheral agents and aggregates their results.

**Use-Cases:**
- Centralized decision-making processes.

- Scenarios requiring a central authority to coordinate multiple workers.

```mermaid
graph TD
    A[Central Agent] --> B1[Peripheral Agent 1]
    A --> B2[Peripheral Agent 2]
    A --> B3[Peripheral Agent 3]
    A --> B4[Peripheral Agent 4]
```
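
To ground the star pattern in the diagram above, here is a minimal Python sketch of a central coordinator fanning the same task out to peripheral agents and aggregating their results. The agent interface is a hypothetical callable, not the library's own swarm classes.

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Callable, Dict

AgentFn = Callable[[str], str]  # stand-in for a peripheral agent


def star_swarm(central_prompt: str, peripherals: Dict[str, AgentFn]) -> Dict[str, str]:
    """Central coordinator assigns the same task to every peripheral agent and collects results."""
    with ThreadPoolExecutor(max_workers=len(peripherals)) as pool:
        futures = {name: pool.submit(agent, central_prompt) for name, agent in peripherals.items()}
        return {name: future.result() for name, future in futures.items()}


# Example usage with trivial stand-in agents.
results = star_swarm(
    "Estimate Q4 demand for product X.",
    {
        "sales_agent": lambda t: f"sales view of '{t}'",
        "finance_agent": lambda t: f"finance view of '{t}'",
    },
)
print(results)
```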

---

### Mesh Swarm

**Overview:**
A Mesh Swarm architecture allows for a fully connected network of agents where each agent can communicate with any other agent. This setup provides high flexibility and redundancy.

**Use-Cases:**
- Complex systems requiring high fault tolerance and redundancy.

- Scenarios involving dynamic and frequent communication between agents.

```mermaid
graph TD
    A1[Agent 1] --> A2[Agent 2]
    A1 --> A3[Agent 3]
    A1 --> A4[Agent 4]
    A2 --> A3
    A2 --> A4
    A3 --> A4
```

---

### Cascade Swarm

**Overview:**
A Cascade Swarm architecture involves a chain of agents where each agent triggers the next one in a cascade effect. This is useful for scenarios where tasks need to be processed in stages, and each stage initiates the next.

**Use-Cases:**
- Multi-stage processing tasks such as data transformation pipelines.

- Event-driven architectures where one event triggers subsequent actions.

```mermaid
graph TD
    A[Trigger Agent] --> B[Agent 1]
    B --> C[Agent 2]
    C --> D[Agent 3]
    D --> E[Agent 4]
```
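
The cascade in the diagram above amounts to chaining agents so that each stage consumes the previous stage's output. The sketch below shows that chaining with a hypothetical callable interface standing in for agents; it is not the framework's own cascade implementation.

```python
from typing import Callable, List

StageFn = Callable[[str], str]  # stand-in for one agent stage in the cascade


def cascade_swarm(stages: List[StageFn], trigger_event: str) -> str:
    """Run a chain of agents where each stage consumes the previous stage's output."""
    payload = trigger_event
    for index, stage in enumerate(stages, start=1):
        payload = stage(payload)
        print(f"stage {index} complete")
    return payload


# Example usage: a three-stage data transformation pipeline with stand-in agents.
extract = lambda e: f"raw records for event '{e}'"
transform = lambda raw: f"normalized({raw})"
load = lambda norm: f"stored({norm})"

print(cascade_swarm([extract, transform, load], "nightly-import"))
```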

---

### Hybrid Swarm

**Overview:**
A Hybrid Swarm architecture combines elements of various architectures to suit specific needs. It might integrate hierarchical and parallel components, or mix sequential and round-robin patterns.

**Use-Cases:**
- Complex workflows requiring a mix of different processing strategies.

- Custom scenarios tailored to specific operational requirements.

```mermaid
graph TD
    A[Root Agent] --> B1[Sub-Agent 1]
    A --> B2[Sub-Agent 2]
    B1 --> C1[Parallel Agent 1]
    B1 --> C2[Parallel Agent 2]
    B2 --> C3[Sequential Agent 1]
    C3 --> C4[Sequential Agent 2]
    C3 --> C5[Sequential Agent 3]
```

---

These swarm architectures provide different models for organizing and orchestrating large language models (LLMs) to perform various tasks efficiently. Depending on the specific requirements of your project, you can choose the appropriate architecture or even combine elements from multiple architectures to create a hybrid solution.

# Choosing the Right Swarm for Your Business Problem

The Swarms framework provides various swarm structures designed to fit specific business needs. Depending on the complexity and nature of your problem, different swarm configurations can be more effective in achieving optimal performance. This guide provides a detailed explanation of when to use each swarm type, including their strengths and potential drawbacks.

## Swarm Types Overview

- **MajorityVoting**: A swarm structure where agents vote on an outcome, and the majority decision is taken as the final result.
- **AgentRearrange**: Provides the foundation for both sequential and parallel swarms.
- **RoundRobin**: Agents take turns handling tasks in a cyclic manner.
- **Mixture of Agents**: A heterogeneous swarm where agents with different capabilities are combined.
- **GraphWorkflow**: Agents collaborate in a directed acyclic graph (DAG) format.
- **GroupChat**: Agents engage in a chat-like interaction to reach decisions.
- **AgentRegistry**: A centralized registry where agents are stored, retrieved, and invoked.
- **SpreadsheetSwarm**: A swarm designed to manage tasks at scale, tracking agent outputs in a structured format (e.g., CSV files).

---

## MajorityVoting Swarm

### Use-Case
MajorityVoting is ideal for scenarios where accuracy is paramount, and the decision must be determined from multiple perspectives. For instance, choosing the best marketing strategy where various marketing agents vote on the highest predicted performance (a minimal voting sketch follows below).

### Advantages
- Ensures robustness in decision-making by leveraging multiple agents.
- Helps eliminate outliers or faulty agent decisions.

### Warnings
!!! warning
    Majority voting can be slow if too many agents are involved. Ensure that your swarm size is manageable for real-time decision-making.
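
As a rough illustration of the voting step, the following sketch tallies independent agent answers and returns the majority choice. It uses plain Python rather than the framework's own `MajorityVoting` class, whose exact API may differ.

```python
from collections import Counter
from typing import Callable, List

AgentFn = Callable[[str], str]  # stand-in for a voting agent


def majority_vote(agents: List[AgentFn], task: str) -> str:
    """Ask every agent the same question and return the most common answer."""
    answers = [agent(task) for agent in agents]
    winner, count = Counter(answers).most_common(1)[0]
    print(f"'{winner}' won with {count}/{len(answers)} votes")
    return winner


# Example usage: three stand-in marketing agents vote on a strategy.
agents = [
    lambda t: "influencer campaign",
    lambda t: "search ads",
    lambda t: "influencer campaign",
]
print(majority_vote(agents, "Which channel should we prioritize next quarter?"))
```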

---

## AgentRearrange (Sequential and Parallel)

### Sequential Swarm Use-Case
For linear workflows where each task depends on the outcome of the previous task, such as processing legal documents step by step through a series of checks and validations.

### Parallel Swarm Use-Case
For tasks that can be executed concurrently, such as batch processing customer data in marketing campaigns. Parallel swarms can significantly reduce processing time by dividing tasks across multiple agents.

### Notes
!!! note
    Sequential swarms are slower but ensure strict task dependencies are respected. Parallel swarms are faster but require careful management of task interdependencies.

---

## RoundRobin Swarm

### Use-Case
For balanced task distribution where agents need to handle tasks evenly. An example would be assigning customer support tickets to agents in a cyclic manner, ensuring no single agent is overloaded (see the sketch below).

### Advantages
- Fair and even distribution of tasks.
- Simple and effective for balanced workloads.

### Warnings
!!! warning
    Round-robin may not be the best choice when some agents are more competent than others, as it can assign tasks equally regardless of agent performance.
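
The cyclic assignment itself is simple to picture: the sketch below cycles through a fixed roster of agents, handing each incoming ticket to the next agent in turn. Agent names here are placeholders, not the framework's `RoundRobin` API.

```python
from itertools import cycle
from typing import Dict, List


def round_robin_assign(agents: List[str], tickets: List[str]) -> Dict[str, List[str]]:
    """Assign tickets to agents in strict rotation so the load stays even."""
    assignments: Dict[str, List[str]] = {agent: [] for agent in agents}
    rotation = cycle(agents)
    for ticket in tickets:
        assignments[next(rotation)].append(ticket)
    return assignments


# Example usage: six support tickets spread across three placeholder agents.
print(round_robin_assign(["agent-a", "agent-b", "agent-c"], [f"ticket-{i}" for i in range(1, 7)]))
```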

---

## Mixture of Agents

### Use-Case
Ideal for complex problems that require diverse skills. For example, a financial forecasting problem where some agents specialize in stock data, while others handle economic factors.

### Notes
!!! note
    A mixture of agents is highly flexible and can adapt to various problem domains. However, be mindful of coordination overhead.

---

## GraphWorkflow Swarm

### Use-Case
This swarm structure is suited for tasks that can be broken down into a series of dependencies but are not strictly linear, such as an AI-driven software development pipeline where one agent handles front-end development while another handles back-end concurrently (a dependency-ordered sketch follows below).

### Advantages
- Provides flexibility for managing dependencies.
- Agents can work on different parts of the problem simultaneously.

### Warnings
!!! warning
    GraphWorkflow requires clear definition of task dependencies, or it can lead to execution issues and delays.
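
To show what "dependency-ordered" execution means in practice, the sketch below runs stand-in agents in topological order using the standard library's `graphlib` (Python 3.9+). It is a generic illustration, not the framework's `GraphWorkflow` implementation.

```python
from graphlib import TopologicalSorter
from typing import Callable, Dict, Set

AgentFn = Callable[[str], str]  # stand-in for one agent in the workflow


def make_agent(name: str) -> AgentFn:
    """Create a trivial stand-in agent that labels its output with its own name."""
    return lambda task: f"{name} result for [{task}]"


def run_graph_workflow(dag: Dict[str, Set[str]], agents: Dict[str, AgentFn], task: str) -> Dict[str, str]:
    """Execute agents in dependency order; each node sees the task plus its upstream outputs."""
    outputs: Dict[str, str] = {}
    for node in TopologicalSorter(dag).static_order():
        upstream = " | ".join(outputs[dep] for dep in dag.get(node, set()))
        outputs[node] = agents[node](f"{task} (context: {upstream})" if upstream else task)
    return outputs


# Example: back-end and front-end run independently, integration depends on both.
dag: Dict[str, Set[str]] = {"backend": set(), "frontend": set(), "integration": {"backend", "frontend"}}
agents = {name: make_agent(name) for name in dag}
print(run_graph_workflow(dag, agents, "Build the reporting dashboard"))
```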

---

## GroupChat Swarm

### Use-Case
For real-time collaborative decision-making. For instance, agents could participate in group chat for negotiating contracts, each contributing their expertise and adjusting responses based on the collective discussion.

### Advantages
- Facilitates highly interactive problem-solving.
- Ideal for dynamic and unstructured problems.

### Warnings
!!! warning
    High communication overhead between agents may slow down decision-making in large swarms.

---

## AgentRegistry Swarm

### Use-Case
For dynamically managing agents based on the problem domain. An AgentRegistry is useful when new agents can be added or removed as needed, such as adding new machine learning models for an evolving recommendation engine.

### Notes
!!! note
    AgentRegistry is a flexible solution but introduces additional complexity when agents need to be discovered and registered on the fly.

---

## SpreadsheetSwarm

### Use-Case
When dealing with massive-scale data or agent outputs that need to be stored and managed in a tabular format. SpreadsheetSwarm is ideal for businesses handling thousands of agent outputs, such as large-scale marketing analytics or financial audits.

### Advantages
- Provides structure and order for managing massive amounts of agent outputs.
- Outputs are easily saved and tracked in CSV files.

### Warnings
!!! warning
    Ensure the correct configuration of agents in SpreadsheetSwarm to avoid data mismatches and inconsistencies when scaling up to thousands of agents.

---

## Final Thoughts

The choice of swarm depends on:

1. **Nature of the task**: Whether it's sequential or parallel.
2. **Problem complexity**: Simple problems might benefit from RoundRobin, while complex ones may need GraphWorkflow or Mixture of Agents.
3. **Scale of execution**: For large-scale tasks, swarms like SpreadsheetSwarm or MajorityVoting provide scalability with structured outputs.

When integrating agents in a business workflow, it's crucial to balance task complexity, agent capabilities, and scalability to ensure the optimal swarm architecture.
|
||||
# Contribution Guidelines
|
||||
|
||||
---
|
||||
|
||||
## Table of Contents
|
||||
|
||||
- [Project Overview](#project-overview)
|
||||
- [Getting Started](#getting-started)
|
||||
- [Installation](#installation)
|
||||
- [Project Structure](#project-structure)
|
||||
- [How to Contribute](#how-to-contribute)
|
||||
- [Reporting Issues](#reporting-issues)
|
||||
- [Submitting Pull Requests](#submitting-pull-requests)
|
||||
- [Coding Standards](#coding-standards)
|
||||
- [Type Annotations](#type-annotations)
|
||||
- [Docstrings and Documentation](#docstrings-and-documentation)
|
||||
- [Testing](#testing)
|
||||
- [Code Style](#code-style)
|
||||
- [Areas Needing Contributions](#areas-needing-contributions)
|
||||
- [Writing Tests](#writing-tests)
|
||||
- [Improving Documentation](#improving-documentation)
|
||||
- [Creating Training Scripts](#creating-training-scripts)
|
||||
- [Community and Support](#community-and-support)
|
||||
- [License](#license)
|
||||
|
||||
---
|
||||
|
||||
## Project Overview
|
||||
|
||||
**swarms** is a library focused on making it simple to orchestrate agents to automate real-world activities. The goal is to automate the world economy with these swarms of agents.
|
||||
|
||||
We need your help to:
|
||||
|
||||
- **Write Tests**: Ensure the reliability and correctness of the codebase.
|
||||
- **Improve Documentation**: Maintain clear and comprehensive documentation.
|
||||
- **Add New Orchestration Methods**: Add multi-agent orchestration methods
|
||||
- **Removing Defunct Code**: Removing bad code
|
||||
|
||||
|
||||
|
||||
Your contributions will help us push the boundaries of AI and make this library a valuable resource for the community.
|
||||
|
||||
---
|
||||
|
||||
## Getting Started
|
||||
|
||||
### Installation
|
||||
|
||||
You can install swarms using `pip`:
|
||||
|
||||
```bash
|
||||
pip3 install swarms
|
||||
```
|
||||
|
||||
Alternatively, you can clone the repository:
|
||||
|
||||
```bash
|
||||
git clone https://github.com/kyegomez/swarms
|
||||
```
|
||||
|
||||
### Project Structure
|
||||
|
||||
- **`swarms/`**: Contains all the source code for the library.
|
||||
- **`examples/`**: Includes example scripts and notebooks demonstrating how to use the library.
|
||||
- **`tests/`**: (To be created) Will contain unit tests for the library.
|
||||
- **`docs/`**: (To be maintained) Contains documentation files.
|
||||
|
||||
---
|
||||
|
||||
## How to Contribute
|
||||
|
||||
### Reporting Issues
|
||||
|
||||
If you find any bugs, inconsistencies, or have suggestions for enhancements, please open an issue on GitHub:
|
||||
|
||||
1. **Search Existing Issues**: Before opening a new issue, check if it has already been reported.
|
||||
2. **Open a New Issue**: If it hasn't been reported, create a new issue and provide detailed information.
|
||||
- **Title**: A concise summary of the issue.
|
||||
- **Description**: Detailed description, steps to reproduce, expected behavior, and any relevant logs or screenshots.
|
||||
3. **Label Appropriately**: Use labels to categorize the issue (e.g., bug, enhancement, documentation).
|
||||
|
||||
### Submitting Pull Requests
|
||||
|
||||
We welcome pull requests (PRs) for bug fixes, improvements, and new features. Please follow these guidelines:
|
||||
|
||||
1. **Fork the Repository**: Create a personal fork of the repository on GitHub.
|
||||
2. **Clone Your Fork**: Clone your forked repository to your local machine.
|
||||
|
||||
```bash
|
||||
git clone https://github.com/kyegomez/swarms.git
|
||||
```
|
||||
|
||||
3. **Create a New Branch**: Use a descriptive branch name.
|
||||
|
||||
```bash
|
||||
git checkout -b feature/your-feature-name
|
||||
```
|
||||
|
||||
4. **Make Your Changes**: Implement your code, ensuring it adheres to the coding standards.
|
||||
5. **Add Tests**: Write tests to cover your changes.
|
||||
6. **Commit Your Changes**: Write clear and concise commit messages.
|
||||
|
||||
```bash
|
||||
git commit -am "Add feature X"
|
||||
```
|
||||
|
||||
7. **Push to Your Fork**:
|
||||
|
||||
```bash
|
||||
git push origin feature/your-feature-name
|
||||
```
|
||||
|
||||
8. **Create a Pull Request**:
|
||||
|
||||
- Go to the original repository on GitHub.
|
||||
- Click on "New Pull Request".
|
||||
- Select your branch and create the PR.
|
||||
- Provide a clear description of your changes and reference any related issues.
|
||||
|
||||
9. **Respond to Feedback**: Be prepared to make changes based on code reviews.
|
||||
|
||||
**Note**: It's recommended to create small and focused PRs for easier review and faster integration.
|
||||
|
||||
---
|
||||
|
||||
## Coding Standards
|
||||
|
||||
To maintain code quality and consistency, please adhere to the following standards.
|
||||
|
||||
### Type Annotations
|
||||
|
||||
- **Mandatory**: All functions and methods must have type annotations.
|
||||
- **Example**:
|
||||
|
||||
```python
|
||||
def add_numbers(a: int, b: int) -> int:
|
||||
return a + b
|
||||
```
|
||||
|
||||
- **Benefits**:
|
||||
- Improves code readability.
|
||||
- Helps with static type checking tools.
|
||||
|
||||
### Docstrings and Documentation
|
||||
|
||||
- **Docstrings**: Every public class, function, and method must have a docstring following the [Google Python Style Guide](http://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings) or [NumPy Docstring Standard](https://numpydoc.readthedocs.io/en/latest/format.html).
|
||||
- **Content**:
|
||||
- **Description**: Briefly describe what the function or class does.
|
||||
- **Args**: List and describe each parameter.
|
||||
- **Returns**: Describe the return value(s).
|
||||
- **Raises**: List any exceptions that are raised.
|
||||
|
||||
- **Example**:
|
||||
|
||||
```python
|
||||
def calculate_mean(values: List[float]) -> float:
|
||||
"""
|
||||
Calculates the mean of a list of numbers.
|
||||
|
||||
Args:
|
||||
values (List[float]): A list of numerical values.
|
||||
|
||||
Returns:
|
||||
float: The mean of the input values.
|
||||
|
||||
Raises:
|
||||
ValueError: If the input list is empty.
|
||||
"""
|
||||
if not values:
|
||||
raise ValueError("The input list is empty.")
|
||||
return sum(values) / len(values)
|
||||
```
|
||||
|
||||
- **Documentation**: Update or create documentation pages if your changes affect the public API.
|
||||
|
||||
### Testing
|
||||
|
||||
- **Required**: All new features and bug fixes must include appropriate unit tests.
|
||||
- **Framework**: Use `unittest`, `pytest`, or a similar testing framework.
|
||||
- **Test Location**: Place tests in the `tests/` directory, mirroring the structure of `swarms/`.
|
||||
- **Test Coverage**: Aim for high test coverage to ensure code reliability.
|
||||
- **Running Tests**: Provide instructions for running tests.
|
||||
|
||||
```bash
|
||||
pytest tests/
|
||||
```
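
For example, a test for the `calculate_mean` function shown in the docstring example above could live in `tests/` and look like the following sketch (the import path is a placeholder for wherever the function is actually defined):

```python
# tests/test_calculate_mean.py
import pytest

from your_module import calculate_mean  # placeholder import path


def test_calculate_mean_returns_average():
    assert calculate_mean([1.0, 2.0, 3.0]) == 2.0


def test_calculate_mean_raises_on_empty_list():
    with pytest.raises(ValueError):
        calculate_mean([])
```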
|
||||
|
||||
### Code Style
|
||||
|
||||
- **PEP 8 Compliance**: Follow [PEP 8](https://www.python.org/dev/peps/pep-0008/) style guidelines.
|
||||
- **Linting Tools**: Use `flake8`, `black`, or `pylint` to check code style.
|
||||
- **Consistency**: Maintain consistency with the existing codebase.
|
||||
|
||||
---
|
||||
|
||||
## Areas Needing Contributions
|
||||
|
||||
We have several areas where contributions are particularly welcome.
|
||||
|
||||
### Writing Tests
|
||||
|
||||
- **Goal**: Increase test coverage to ensure the library's robustness.
|
||||
- **Tasks**:
|
||||
- Write unit tests for existing code in `swarms/`.
|
||||
- Identify edge cases and potential failure points.
|
||||
- Ensure tests are repeatable and independent.
|
||||
|
||||
### Improving Documentation
|
||||
|
||||
- **Goal**: Maintain clear and comprehensive documentation for users and developers.
|
||||
- **Tasks**:
|
||||
- Update docstrings to reflect any changes.
|
||||
- Add examples and tutorials in the `examples/` directory.
|
||||
- Improve or expand the content in the `docs/` directory.
|
||||
|
||||
### Creating Multi-Agent Orchestration Methods
|
||||
|
||||
- **Goal**: Provide new multi-agent orchestration methods.
|
||||
|
||||
---
|
||||
|
||||
## Community and Support
|
||||
|
||||
- **Communication**: Engage with the community by participating in discussions on issues and pull requests.
|
||||
- **Respect**: Maintain a respectful and inclusive environment.
|
||||
- **Feedback**: Be open to receiving and providing constructive feedback.
|
||||
|
||||
---
|
||||
|
||||
## License
|
||||
|
||||
By contributing to swarms, you agree that your contributions will be licensed under the [MIT License](LICENSE).
|
||||
|
||||
---
|
||||
|
||||
Thank you for contributing to swarms! Your efforts help make this project better for everyone.
|
||||
|
||||
If you have any questions or need assistance, please feel free to open an issue or reach out to the maintainers.
|
@ -0,0 +1,238 @@
|
||||
# Contribution Guidelines
|
||||
|
||||
---
|
||||
|
||||
## Table of Contents
|
||||
|
||||
- [Project Overview](#project-overview)
|
||||
- [Getting Started](#getting-started)
|
||||
- [Installation](#installation)
|
||||
- [Project Structure](#project-structure)
|
||||
- [How to Contribute](#how-to-contribute)
|
||||
- [Reporting Issues](#reporting-issues)
|
||||
- [Submitting Pull Requests](#submitting-pull-requests)
|
||||
- [Coding Standards](#coding-standards)
|
||||
- [Type Annotations](#type-annotations)
|
||||
- [Docstrings and Documentation](#docstrings-and-documentation)
|
||||
- [Testing](#testing)
|
||||
- [Code Style](#code-style)
|
||||
- [Areas Needing Contributions](#areas-needing-contributions)
|
||||
- [Writing Tests](#writing-tests)
|
||||
- [Improving Documentation](#improving-documentation)
|
||||
- [Creating Multi-Agent Orchestration Methods](#creating-multi-agent-orchestration-methods)
|
||||
- [Community and Support](#community-and-support)
|
||||
- [License](#license)
|
||||
|
||||
---
|
||||
|
||||
## Project Overview
|
||||
|
||||
**swarms** is a library focused on making it simple to orchestrate agents to automate real-world activities. The goal is to automate the world economy with these swarms of agents.
|
||||
|
||||
We need your help to:
|
||||
|
||||
- **Write Tests**: Ensure the reliability and correctness of the codebase.
|
||||
- **Improve Documentation**: Maintain clear and comprehensive documentation.
|
||||
- **Add New Orchestration Methods**: Add new multi-agent orchestration methods.
- **Remove Defunct Code**: Remove dead or redundant code.
|
||||
|
||||
|
||||
|
||||
Your contributions will help us push the boundaries of AI and make this library a valuable resource for the community.
|
||||
|
||||
---
|
||||
|
||||
## Getting Started
|
||||
|
||||
### Installation
|
||||
|
||||
You can install swarms using `pip`:
|
||||
|
||||
```bash
|
||||
pip3 install swarms
|
||||
```
|
||||
|
||||
Alternatively, you can clone the repository:
|
||||
|
||||
```bash
|
||||
git clone https://github.com/kyegomez/swarms
|
||||
```
|
||||
|
||||
### Project Structure
|
||||
|
||||
- **`swarms/`**: Contains all the source code for the library.
|
||||
- **`examples/`**: Includes example scripts and notebooks demonstrating how to use the library.
|
||||
- **`tests/`**: (To be created) Will contain unit tests for the library.
|
||||
- **`docs/`**: (To be maintained) Contains documentation files.
|
||||
|
||||
---
|
||||
|
||||
## How to Contribute
|
||||
|
||||
### Reporting Issues
|
||||
|
||||
If you find any bugs, inconsistencies, or have suggestions for enhancements, please open an issue on GitHub:
|
||||
|
||||
1. **Search Existing Issues**: Before opening a new issue, check if it has already been reported.
|
||||
2. **Open a New Issue**: If it hasn't been reported, create a new issue and provide detailed information.
|
||||
- **Title**: A concise summary of the issue.
|
||||
- **Description**: Detailed description, steps to reproduce, expected behavior, and any relevant logs or screenshots.
|
||||
3. **Label Appropriately**: Use labels to categorize the issue (e.g., bug, enhancement, documentation).
|
||||
|
||||
### Submitting Pull Requests
|
||||
|
||||
We welcome pull requests (PRs) for bug fixes, improvements, and new features. Please follow these guidelines:
|
||||
|
||||
1. **Fork the Repository**: Create a personal fork of the repository on GitHub.
|
||||
2. **Clone Your Fork**: Clone your forked repository to your local machine.
|
||||
|
||||
```bash
|
||||
git clone https://github.com/kyegomez/swarms.git
|
||||
```
|
||||
|
||||
3. **Create a New Branch**: Use a descriptive branch name.
|
||||
|
||||
```bash
|
||||
git checkout -b feature/your-feature-name
|
||||
```
|
||||
|
||||
4. **Make Your Changes**: Implement your code, ensuring it adheres to the coding standards.
|
||||
5. **Add Tests**: Write tests to cover your changes.
|
||||
6. **Commit Your Changes**: Write clear and concise commit messages.
|
||||
|
||||
```bash
|
||||
git commit -am "Add feature X"
|
||||
```
|
||||
|
||||
7. **Push to Your Fork**:
|
||||
|
||||
```bash
|
||||
git push origin feature/your-feature-name
|
||||
```
|
||||
|
||||
8. **Create a Pull Request**:
|
||||
|
||||
- Go to the original repository on GitHub.
|
||||
- Click on "New Pull Request".
|
||||
- Select your branch and create the PR.
|
||||
- Provide a clear description of your changes and reference any related issues.
|
||||
|
||||
9. **Respond to Feedback**: Be prepared to make changes based on code reviews.
|
||||
|
||||
**Note**: It's recommended to create small and focused PRs for easier review and faster integration.
|
||||
|
||||
---
|
||||
|
||||
## Coding Standards
|
||||
|
||||
To maintain code quality and consistency, please adhere to the following standards.
|
||||
|
||||
### Type Annotations
|
||||
|
||||
- **Mandatory**: All functions and methods must have type annotations.
|
||||
- **Example**:
|
||||
|
||||
```python
|
||||
def add_numbers(a: int, b: int) -> int:
|
||||
return a + b
|
||||
```
|
||||
|
||||
- **Benefits**:
|
||||
- Improves code readability.
|
||||
- Helps with static type checking tools.
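
For instance, with annotations in place a static checker such as `mypy` can flag an incorrect call before the code ever runs (illustrative snippet):

```python
def add_numbers(a: int, b: int) -> int:
    return a + b


result = add_numbers(1, 2)  # OK
# add_numbers("1", 2) would be flagged by a static type checker: str is not int
```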
|
||||
|
||||
### Docstrings and Documentation
|
||||
|
||||
- **Docstrings**: Every public class, function, and method must have a docstring following the [Google Python Style Guide](http://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings) or [NumPy Docstring Standard](https://numpydoc.readthedocs.io/en/latest/format.html).
|
||||
- **Content**:
|
||||
- **Description**: Briefly describe what the function or class does.
|
||||
- **Args**: List and describe each parameter.
|
||||
- **Returns**: Describe the return value(s).
|
||||
- **Raises**: List any exceptions that are raised.
|
||||
|
||||
- **Example**:
|
||||
|
||||
```python
|
||||
def calculate_mean(values: List[float]) -> float:
|
||||
"""
|
||||
Calculates the mean of a list of numbers.
|
||||
|
||||
Args:
|
||||
values (List[float]): A list of numerical values.
|
||||
|
||||
Returns:
|
||||
float: The mean of the input values.
|
||||
|
||||
Raises:
|
||||
ValueError: If the input list is empty.
|
||||
"""
|
||||
if not values:
|
||||
raise ValueError("The input list is empty.")
|
||||
return sum(values) / len(values)
|
||||
```
|
||||
|
||||
- **Documentation**: Update or create documentation pages if your changes affect the public API.
|
||||
|
||||
### Testing
|
||||
|
||||
- **Required**: All new features and bug fixes must include appropriate unit tests.
|
||||
- **Framework**: Use `unittest`, `pytest`, or a similar testing framework.
|
||||
- **Test Location**: Place tests in the `tests/` directory, mirroring the structure of `swarms/`.
|
||||
- **Test Coverage**: Aim for high test coverage to ensure code reliability.
|
||||
- **Running Tests**: Provide instructions for running tests.
|
||||
|
||||
```bash
|
||||
pytest tests/
|
||||
```
|
||||
|
||||
### Code Style
|
||||
|
||||
- **PEP 8 Compliance**: Follow [PEP 8](https://www.python.org/dev/peps/pep-0008/) style guidelines.
|
||||
- **Linting Tools**: Use `flake8`, `black`, or `pylint` to check code style.
|
||||
- **Consistency**: Maintain consistency with the existing codebase.
|
||||
|
||||
---
|
||||
|
||||
## Areas Needing Contributions
|
||||
|
||||
We have several areas where contributions are particularly welcome.
|
||||
|
||||
### Writing Tests
|
||||
|
||||
- **Goal**: Increase test coverage to ensure the library's robustness.
|
||||
- **Tasks**:
|
||||
- Write unit tests for existing code in `swarms/`.
|
||||
- Identify edge cases and potential failure points.
|
||||
- Ensure tests are repeatable and independent.
|
||||
|
||||
### Improving Documentation
|
||||
|
||||
- **Goal**: Maintain clear and comprehensive documentation for users and developers.
|
||||
- **Tasks**:
|
||||
- Update docstrings to reflect any changes.
|
||||
- Add examples and tutorials in the `examples/` directory.
|
||||
- Improve or expand the content in the `docs/` directory.
|
||||
|
||||
### Creating Multi-Agent Orchestration Methods
|
||||
|
||||
- **Goal**: Provide new multi-agent orchestration methods.
|
||||
|
||||
---
|
||||
|
||||
## Community and Support
|
||||
|
||||
- **Communication**: Engage with the community by participating in discussions on issues and pull requests.
|
||||
- **Respect**: Maintain a respectful and inclusive environment.
|
||||
- **Feedback**: Be open to receiving and providing constructive feedback.
|
||||
|
||||
---
|
||||
|
||||
## License
|
||||
|
||||
By contributing to swarms, you agree that your contributions will be licensed under the [MIT License](LICENSE).
|
||||
|
||||
---
|
||||
|
||||
Thank you for contributing to swarms! Your efforts help make this project better for everyone.
|
||||
|
||||
If you have any questions or need assistance, please feel free to open an issue or reach out to the maintainers.
|
@ -1,77 +1,296 @@
|
||||
|
||||
# ConcurrentWorkflow Documentation
|
||||
|
||||
## Overview
|
||||
|
||||
The `ConcurrentWorkflow` class is designed to facilitate the concurrent execution of multiple agents, each tasked with solving a specific query or problem. This class is particularly useful in scenarios where multiple agents need to work in parallel, allowing for efficient resource utilization and faster completion of tasks. The workflow manages the execution, collects metadata, and optionally saves the results in a structured format.
|
||||
|
||||
### Key Features
|
||||
|
||||
- **Concurrent Execution**: Runs multiple agents simultaneously using Python's `asyncio` and `ThreadPoolExecutor`.
|
||||
- **Metadata Collection**: Gathers detailed metadata about each agent's execution, including start and end times, duration, and output.
|
||||
- **Customizable Output**: Allows the user to save metadata to a file or return it as a string or dictionary.
|
||||
- **Error Handling**: Catches and logs errors during agent execution, ensuring the workflow can continue.
|
||||
|
||||
## Class Definitions
|
||||
|
||||
### AgentOutputSchema
|
||||
|
||||
The `AgentOutputSchema` class is a data model that captures the output and metadata for each agent's execution. It inherits from `pydantic.BaseModel` and provides structured fields to store essential information.
|
||||
|
||||
| Attribute | Type | Description |
|
||||
|---------------|----------------|-----------------------------------------------------------|
|
||||
| `run_id` | `Optional[str]`| Unique ID for the run, automatically generated using `uuid`. |
|
||||
| `agent_name` | `Optional[str]`| Name of the agent that executed the task. |
|
||||
| `task` | `Optional[str]`| The task or query given to the agent. |
|
||||
| `output` | `Optional[str]`| The output generated by the agent. |
|
||||
| `start_time` | `Optional[datetime]`| The time when the agent started the task. |
|
||||
| `end_time` | `Optional[datetime]`| The time when the agent completed the task. |
|
||||
| `duration` | `Optional[float]` | The total time taken to complete the task, in seconds. |
|
||||
|
||||
### MetadataSchema
|
||||
|
||||
The `MetadataSchema` class is another data model that aggregates the outputs from all agents involved in the workflow. It also inherits from `pydantic.BaseModel` and includes fields for additional workflow-level metadata.
|
||||
|
||||
| Attribute | Type | Description |
|
||||
|----------------|------------------------|-----------------------------------------------------------|
|
||||
| `swarm_id` | `Optional[str]` | Unique ID for the workflow run, generated using `uuid`. |
|
||||
| `task` | `Optional[str]` | The task or query given to all agents. |
|
||||
| `description` | `Optional[str]` | A description of the workflow, typically indicating concurrent execution. |
|
||||
| `agents` | `Optional[List[AgentOutputSchema]]` | A list of agent outputs and metadata. |
|
||||
| `timestamp` | `Optional[datetime]` | The timestamp when the workflow was executed. |
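
Based on the two tables above, a minimal sketch of how these schemas could be declared with `pydantic` looks like the following (field names and types come from the tables; the default factories are assumptions):

```python
import uuid
from datetime import datetime
from typing import List, Optional

from pydantic import BaseModel, Field


class AgentOutputSchema(BaseModel):
    # Per-agent run metadata; run_id defaults to a fresh UUID (assumption).
    run_id: Optional[str] = Field(default_factory=lambda: str(uuid.uuid4()))
    agent_name: Optional[str] = None
    task: Optional[str] = None
    output: Optional[str] = None
    start_time: Optional[datetime] = None
    end_time: Optional[datetime] = None
    duration: Optional[float] = None  # seconds


class MetadataSchema(BaseModel):
    # Workflow-level aggregate of all agent outputs.
    swarm_id: Optional[str] = Field(default_factory=lambda: str(uuid.uuid4()))
    task: Optional[str] = None
    description: Optional[str] = None
    agents: Optional[List[AgentOutputSchema]] = None
    timestamp: Optional[datetime] = Field(default_factory=datetime.now)
```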
|
||||
|
||||
## ConcurrentWorkflow
|
||||
|
||||
The `ConcurrentWorkflow` class is the core class that manages the concurrent execution of agents. It inherits from `BaseSwarm` and includes several key attributes and methods to facilitate this process.
|
||||
|
||||
### Attributes
|
||||
|
||||
| Attribute | Type | Description |
|
||||
|------------------------|-------------------------|-----------------------------------------------------------|
|
||||
| `name` | `str` | The name of the workflow. Defaults to `"ConcurrentWorkflow"`. |
|
||||
| `description` | `str` | A brief description of the workflow. |
|
||||
| `agents` | `List[Agent]` | A list of agents to be executed concurrently. |
|
||||
| `metadata_output_path` | `str` | Path to save the metadata output. Defaults to `"agent_metadata.json"`. |
|
||||
| `auto_save` | `bool` | Flag indicating whether to automatically save the metadata. |
|
||||
| `output_schema` | `BaseModel` | The output schema for the metadata, defaults to `MetadataSchema`. |
|
||||
| `max_loops` | `int` | Maximum number of loops for the workflow, defaults to `1`. |
|
||||
| `return_str_on` | `bool` | Flag to return output as string. Defaults to `False`. |
|
||||
| `agent_responses` | `List[str]` | List of agent responses as strings. |
|
||||
|
||||
## Methods
|
||||
|
||||
### ConcurrentWorkflow.\_\_init\_\_
|
||||
|
||||
Initializes the `ConcurrentWorkflow` class with the provided parameters.
|
||||
|
||||
#### Parameters
|
||||
|
||||
| Parameter | Type | Default Value | Description |
|
||||
|-----------------------|----------------|----------------------------------------|-----------------------------------------------------------|
|
||||
| `name` | `str` | `"ConcurrentWorkflow"` | The name of the workflow. |
|
||||
| `description` | `str` | `"Execution of multiple agents concurrently"` | A brief description of the workflow. |
|
||||
| `agents` | `List[Agent]` | `[]` | A list of agents to be executed concurrently. |
|
||||
| `metadata_output_path`| `str` | `"agent_metadata.json"` | Path to save the metadata output. |
|
||||
| `auto_save` | `bool` | `False` | Flag indicating whether to automatically save the metadata. |
|
||||
| `output_schema` | `BaseModel` | `MetadataSchema` | The output schema for the metadata. |
|
||||
| `max_loops` | `int` | `1` | Maximum number of loops for the workflow. |
|
||||
| `return_str_on` | `bool` | `False` | Flag to return output as string. |
|
||||
| `agent_responses` | `List[str]` | `[]` | List of agent responses as strings. |
|
||||
|
||||
#### Raises
|
||||
|
||||
- `ValueError`: If the list of agents is empty or if the description is empty.
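
For example, constructing a workflow with no agents is expected to fail fast (illustrative snippet, assuming `ConcurrentWorkflow` is imported as in the usage examples below):

```python
try:
    workflow = ConcurrentWorkflow(agents=[], description="")
except ValueError as error:
    print(f"Invalid workflow configuration: {error}")
```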
|
||||
|
||||
### ConcurrentWorkflow._run_agent
|
||||
|
||||
Runs a single agent with the provided task and tracks its output and metadata.
|
||||
|
||||
#### Parameters
|
||||
|
||||
| Parameter | Type | Description |
|
||||
|-------------|-------------------------|-----------------------------------------------------------|
|
||||
| `agent` | `Agent` | The agent instance to run. |
|
||||
| `task` | `str` | The task or query to give to the agent. |
|
||||
| `executor` | `ThreadPoolExecutor` | The thread pool executor to use for running the agent task. |
|
||||
|
||||
#### Returns
|
||||
|
||||
- `AgentOutputSchema`: The metadata and output from the agent's execution.
|
||||
|
||||
#### Detailed Explanation
|
||||
|
||||
This method handles the execution of a single agent by offloading the task to a thread using `ThreadPoolExecutor`. It also tracks the time taken by the agent to complete the task and logs relevant information. If an exception occurs during execution, it captures the error and includes it in the output.
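
A rough sketch of this pattern, reusing the `AgentOutputSchema` sketch above; this is illustrative and not the library's exact implementation:

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor
from datetime import datetime


async def run_agent(agent, task: str, executor: ThreadPoolExecutor) -> AgentOutputSchema:
    # Offload the blocking agent call to a worker thread and time it.
    loop = asyncio.get_running_loop()
    start_time = datetime.now()
    try:
        output = await loop.run_in_executor(executor, agent.run, task)
    except Exception as error:
        # Capture the error so the workflow can keep going.
        output = f"Error: {error}"
    end_time = datetime.now()
    return AgentOutputSchema(
        agent_name=getattr(agent, "agent_name", None),
        task=task,
        output=output,
        start_time=start_time,
        end_time=end_time,
        duration=(end_time - start_time).total_seconds(),
    )
```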
|
||||
|
||||
### ConcurrentWorkflow.transform_metadata_schema_to_str

Transforms the metadata schema into a string format.

#### Parameters

| Parameter   | Type                | Description                                               |
|-------------|---------------------|-----------------------------------------------------------|
| `schema`    | `MetadataSchema`    | The metadata schema to transform.                         |

#### Returns

- `str`: The metadata schema as a formatted string.

#### Detailed Explanation

This method converts the metadata stored in `MetadataSchema` into a human-readable string format, particularly focusing on the agent names and their respective outputs. This is useful for quickly reviewing the results of the concurrent workflow in a more accessible format.
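
A rough sketch of such a formatter, assuming the `MetadataSchema` fields shown earlier (the exact output format is an assumption):

```python
def transform_metadata_schema_to_str(schema: MetadataSchema) -> str:
    # Render each agent's name and output as a readable block of text.
    lines = []
    for agent_output in schema.agents or []:
        lines.append(f"Agent: {agent_output.agent_name}")
        lines.append(f"Output: {agent_output.output}")
        lines.append("")  # blank line between agents
    return "\n".join(lines)
```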
### ConcurrentWorkflow._execute_agents_concurrently

Executes multiple agents concurrently with the same task.

#### Parameters

| Parameter   | Type         | Description                                               |
|-------------|--------------|-----------------------------------------------------------|
| `task`      | `str`        | The task or query to give to all agents.                  |

#### Returns

- `MetadataSchema`: The aggregated metadata and outputs from all agents.

#### Detailed Explanation

This method is responsible for managing the concurrent execution of all agents. It uses `asyncio.gather` to run multiple agents simultaneously and collects their outputs into a `MetadataSchema` object. This aggregated metadata can then be saved or returned depending on the workflow configuration.
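
Illustratively, the fan-out step could be written as follows, building on the `run_agent` and schema sketches above (not the actual source):

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor


async def execute_agents_concurrently(agents, task: str) -> MetadataSchema:
    # Fan the same task out to every agent and wait for all of them.
    with ThreadPoolExecutor() as executor:
        results = await asyncio.gather(
            *(run_agent(agent, task, executor) for agent in agents)
        )
    return MetadataSchema(
        task=task,
        description="Execution of multiple agents concurrently",
        agents=list(results),
    )
```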
|
||||
|
||||
### ConcurrentWorkflow.run
|
||||
|
||||
Runs the workflow for the provided task, executes agents concurrently, and saves metadata.
|
||||
|
||||
#### Parameters
|
||||
|
||||
| Parameter | Type | Description |
|
||||
|-------------|--------------|-----------------------------------------------------------|
|
||||
| `task` | `str` | The task or query to give to all agents. |
|
||||
|
||||
#### Returns
|
||||
|
||||
- `Dict[str, Any]`: The final metadata as a dictionary.
|
||||
|
||||
#### Detailed Explanation
|
||||
|
||||
This is the main method that a user will call to execute the workflow. It manages the entire process from starting the agents to collecting and optionally saving the metadata. The method also provides flexibility in how the results are returned—either as a JSON dictionary or as a formatted string.
|
||||
|
||||
## Usage Examples
|
||||
|
||||
### Example 1: Basic Usage
|
||||
|
||||
```python
|
||||
import os
|
||||
|
||||
from swarms import Agent, ConcurrentWorkflow, OpenAIChat
|
||||
|
||||
# Initialize agents
|
||||
model = OpenAIChat(
|
||||
api_key=os.getenv("OPENAI_API_KEY"),
|
||||
model_name="gpt-4o-mini",
|
||||
temperature=0.1,
|
||||
)
|
||||
|
||||
|
||||
# Define custom system prompts for each social media platform
|
||||
TWITTER_AGENT_SYS_PROMPT = """
|
||||
You are a Twitter marketing expert specializing in real estate. Your task is to create engaging, concise tweets to promote properties, analyze trends to maximize engagement, and use appropriate hashtags and timing to reach potential buyers.
|
||||
"""
|
||||
|
||||
INSTAGRAM_AGENT_SYS_PROMPT = """
|
||||
You are an Instagram marketing expert focusing on real estate. Your task is to create visually appealing posts with engaging captions and hashtags to showcase properties, targeting specific demographics interested in real estate.
|
||||
"""
|
||||
|
||||
FACEBOOK_AGENT_SYS_PROMPT = """
|
||||
You are a Facebook marketing expert for real estate. Your task is to craft posts optimized for engagement and reach on Facebook, including using images, links, and targeted messaging to attract potential property buyers.
|
||||
"""
|
||||
|
||||
LINKEDIN_AGENT_SYS_PROMPT = """
|
||||
You are a LinkedIn marketing expert for the real estate industry. Your task is to create professional and informative posts, highlighting property features, market trends, and investment opportunities, tailored to professionals and investors.
|
||||
"""
|
||||
|
||||
EMAIL_AGENT_SYS_PROMPT = """
|
||||
You are an Email marketing expert specializing in real estate. Your task is to write compelling email campaigns to promote properties, focusing on personalization, subject lines, and effective call-to-action strategies to drive conversions.
|
||||
"""
|
||||
|
||||
# Initialize your agents for different social media platforms
|
||||
agents = [
|
||||
Agent(
|
||||
agent_name="Twitter-RealEstate-Agent",
|
||||
system_prompt=TWITTER_AGENT_SYS_PROMPT,
|
||||
llm=model,
|
||||
max_loops=1,
|
||||
dynamic_temperature_enabled=True,
|
||||
saved_state_path="twitter_realestate_agent.json",
|
||||
user_name="swarm_corp",
|
||||
retry_attempts=1,
|
||||
),
|
||||
Agent(
|
||||
agent_name="Instagram-RealEstate-Agent",
|
||||
system_prompt=INSTAGRAM_AGENT_SYS_PROMPT,
|
||||
llm=model,
|
||||
max_loops=1,
|
||||
dynamic_temperature_enabled=True,
|
||||
saved_state_path="instagram_realestate_agent.json",
|
||||
user_name="swarm_corp",
|
||||
retry_attempts=1,
|
||||
),
|
||||
Agent(
|
||||
agent_name="Facebook-RealEstate-Agent",
|
||||
system_prompt=FACEBOOK_AGENT_SYS_PROMPT,
|
||||
llm=model,
|
||||
max_loops=1,
|
||||
dynamic_temperature_enabled=True,
|
||||
saved_state_path="facebook_realestate_agent.json",
|
||||
user_name="swarm_corp",
|
||||
retry_attempts=1,
|
||||
),
|
||||
Agent(
|
||||
agent_name="LinkedIn-RealEstate-Agent",
|
||||
system_prompt=LINKEDIN_AGENT_SYS_PROMPT,
|
||||
llm=model,
|
||||
max_loops=1,
|
||||
dynamic_temperature_enabled=True,
|
||||
saved_state_path="linkedin_realestate_agent.json",
|
||||
user_name="swarm_corp",
|
||||
retry_attempts=1,
|
||||
),
|
||||
Agent(
|
||||
agent_name="Email-RealEstate-Agent",
|
||||
system_prompt=EMAIL_AGENT_SYS_PROMPT,
|
||||
llm=model,
|
||||
max_loops=1,
|
||||
dynamic_temperature_enabled=True,
|
||||
saved_state_path="email_realestate_agent.json",
|
||||
user_name="swarm_corp",
|
||||
retry_attempts=1,
|
||||
),
|
||||
]
|
||||
|
||||
# Initialize workflow
|
||||
workflow = ConcurrentWorkflow(
|
||||
    name="Real Estate Marketing Swarm",
|
||||
agents=agents,
|
||||
metadata_output_path="metadata.json",
|
||||
description="Concurrent swarm of content generators for real estate!",
|
||||
auto_save=True,
|
||||
)
|
||||
|
||||
# Run workflow
|
||||
task = "Analyze the financial impact of a new product launch."
|
||||
metadata = workflow.run(task)
|
||||
print(metadata)
|
||||
|
||||
```
|
||||
|
||||
### Example 2: Custom Output Handling
|
||||
|
||||
```python
|
||||
# Run workflow with string output
|
||||
workflow = ConcurrentWorkflow(agents=agents, return_str_on=True)
|
||||
metadata_str = workflow.run(task)
|
||||
print(metadata_str)
|
||||
```
|
||||
|
||||
### Example 3: Error Handling and Debugging
|
||||
|
||||
```python
|
||||
try:
|
||||
metadata = workflow.run(task)
|
||||
except ValueError as e:
|
||||
print(f"An error occurred: {e}")
|
||||
```
|
||||
|
||||
## Tips and Best Practices

- **Agent Initialization**: Ensure that all agents are correctly initialized with their required configurations before passing them to `ConcurrentWorkflow`.
- **Metadata Management**: Use the `auto_save` flag to automatically save metadata if you plan to run multiple workflows in succession.
- **Concurrency Limits**: Adjust the number of agents based on your system's capabilities to avoid overloading resources.
- **Error Handling**: Implement try-except blocks when running workflows to catch and handle exceptions gracefully.

## References and Resources
|
||||
- [Python's `asyncio` Documentation](https://docs.python.org/3/library/asyncio.html)
|
||||
- [Pydantic Documentation](https://pydantic-docs.helpmanual.io/)
|
||||
- [ThreadPoolExecutor in Python](https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.ThreadPoolExecutor)
|
||||
- [Loguru for Logging in Python](https://loguru.readthedocs.io/en/stable/)
|
||||
|
@ -1,57 +0,0 @@
|
||||
import json
|
||||
import os
|
||||
from swarms import Agent, OpenAIChat
|
||||
from swarms.prompts.finance_agent_sys_prompt import (
|
||||
FINANCIAL_AGENT_SYS_PROMPT,
|
||||
)
|
||||
import asyncio
|
||||
from swarms.telemetry.async_log_telemetry import send_telemetry
|
||||
|
||||
# Get the OpenAI API key from the environment variable
|
||||
api_key = os.getenv("OPENAI_API_KEY")
|
||||
|
||||
# Create an instance of the OpenAIChat class
|
||||
model = OpenAIChat(
|
||||
api_key=api_key, model_name="gpt-4o-mini", temperature=0.1
|
||||
)
|
||||
|
||||
# Initialize the agent
|
||||
agent = Agent(
|
||||
agent_name="Financial-Analysis-Agent-General-11",
|
||||
system_prompt=FINANCIAL_AGENT_SYS_PROMPT,
|
||||
llm=model,
|
||||
max_loops=1,
|
||||
autosave=False,
|
||||
dashboard=False,
|
||||
verbose=True,
|
||||
# interactive=True, # Set to False to disable interactive mode
|
||||
dynamic_temperature_enabled=True,
|
||||
saved_state_path="finance_agent.json",
|
||||
# tools=[#Add your functions here# ],
|
||||
# stopping_token="Stop!",
|
||||
# docs_folder="docs", # Enter your folder name
|
||||
# pdf_path="docs/finance_agent.pdf",
|
||||
# sop="Calculate the profit for a company.",
|
||||
# sop_list=["Calculate the profit for a company."],
|
||||
user_name="swarms_corp",
|
||||
# # docs="",
|
||||
retry_attempts=3,
|
||||
# context_length=1000,
|
||||
# tool_schema = dict
|
||||
context_length=200000,
|
||||
tool_system_prompt=None,
|
||||
)
|
||||
|
||||
# # Convert the agent object to a dictionary
|
||||
data = agent.to_dict()
|
||||
data = json.dumps(data)
|
||||
|
||||
|
||||
# Async
|
||||
async def send_data():
|
||||
response_status, response_data = await send_telemetry(data)
|
||||
print(response_status, response_data)
|
||||
|
||||
|
||||
# Run the async function
|
||||
asyncio.run(send_data())
|
@ -0,0 +1,40 @@
|
||||
import os
|
||||
from swarms import Agent, OpenAIChat
|
||||
from swarms.prompts.finance_agent_sys_prompt import (
|
||||
FINANCIAL_AGENT_SYS_PROMPT,
|
||||
)
|
||||
|
||||
# Get the OpenAI API key from the environment variable
|
||||
api_key = os.getenv("OPENAI_API_KEY")
|
||||
|
||||
# Create an instance of the OpenAIChat class
|
||||
model = OpenAIChat(
|
||||
openai_api_key=api_key,
|
||||
model_name="o1-preview",
|
||||
temperature=0.1,
|
||||
max_tokens=100,
|
||||
)
|
||||
|
||||
# Initialize the agent
|
||||
agent = Agent(
|
||||
agent_name="Financial-Analysis-Agent_sas_chicken_eej",
|
||||
system_prompt=FINANCIAL_AGENT_SYS_PROMPT,
|
||||
llm=model,
|
||||
max_loops=2,
|
||||
autosave=True,
|
||||
dashboard=False,
|
||||
verbose=True,
|
||||
dynamic_temperature_enabled=True,
|
||||
saved_state_path="finance_agent.json",
|
||||
user_name="swarms_corp",
|
||||
retry_attempts=1,
|
||||
context_length=200000,
|
||||
return_step_meta=False,
|
||||
# output_type="json",
|
||||
)
|
||||
|
||||
|
||||
out = agent.run(
|
||||
"How can I establish a ROTH IRA to buy stocks and get a tax break? What are the criteria"
|
||||
)
|
||||
print(out)
|
@ -0,0 +1,40 @@
|
||||
import os
|
||||
from swarms import Agent, OpenAIChat
|
||||
from swarms.prompts.finance_agent_sys_prompt import (
|
||||
FINANCIAL_AGENT_SYS_PROMPT,
|
||||
)
|
||||
|
||||
# Get the OpenAI API key from the environment variable
|
||||
api_key = os.getenv("OPENAI_API_KEY")
|
||||
|
||||
# Create an instance of the OpenAIChat class
|
||||
model = OpenAIChat(
|
||||
openai_api_key=api_key,
|
||||
model_name="o1-preview",
|
||||
temperature=0.1,
|
||||
max_tokens=100,
|
||||
)
|
||||
|
||||
# Initialize the agent
|
||||
agent = Agent(
|
||||
agent_name="Financial-Analysis-Agent_sas_chicken_eej",
|
||||
system_prompt=FINANCIAL_AGENT_SYS_PROMPT,
|
||||
llm=model,
|
||||
max_loops=2,
|
||||
autosave=True,
|
||||
dashboard=False,
|
||||
verbose=True,
|
||||
dynamic_temperature_enabled=True,
|
||||
saved_state_path="finance_agent.json",
|
||||
user_name="swarms_corp",
|
||||
retry_attempts=1,
|
||||
context_length=200000,
|
||||
return_step_meta=False,
|
||||
# output_type="json",
|
||||
)
|
||||
|
||||
|
||||
out = agent.run(
|
||||
"How can I establish a ROTH IRA to buy stocks and get a tax break? What are the criteria"
|
||||
)
|
||||
print(out)
|
@ -0,0 +1,40 @@
|
||||
import os
|
||||
|
||||
from swarms.prompts.finance_agent_sys_prompt import (
|
||||
FINANCIAL_AGENT_SYS_PROMPT,
|
||||
)
|
||||
from swarms.structs.agent import Agent
|
||||
from swarms import OpenAIChat
|
||||
|
||||
# Example usage:
|
||||
api_key = os.getenv("GROQ_API_KEY")
|
||||
|
||||
# Model
|
||||
model = OpenAIChat(
|
||||
openai_api_base="https://api.groq.com/openai/v1",
|
||||
openai_api_key=api_key,
|
||||
model_name="llama-3.1-70b-versatile",
|
||||
temperature=0.1,
|
||||
)
|
||||
|
||||
# Initialize the agent
|
||||
agent = Agent(
|
||||
agent_name="Financial-Analysis-Agent_sas_chicken_eej",
|
||||
system_prompt=FINANCIAL_AGENT_SYS_PROMPT,
|
||||
llm=model,
|
||||
max_loops=2,
|
||||
autosave=True,
|
||||
dashboard=False,
|
||||
verbose=True,
|
||||
dynamic_temperature_enabled=True,
|
||||
saved_state_path="finance_agent.json",
|
||||
user_name="swarms_corp",
|
||||
retry_attempts=1,
|
||||
context_length=200000,
|
||||
)
|
||||
|
||||
|
||||
out = agent.run(
|
||||
"How can I establish a ROTH IRA to buy stocks and get a tax break? What are the criteria"
|
||||
)
|
||||
print(out)
|
@ -0,0 +1,73 @@
|
||||
2024-08-23T16:57:09.831419-0400 Autosaving agent state.
|
||||
2024-08-23T16:57:09.832168-0400 Saving Agent Financial-Analysis-Agent_sas_chicken_eej state to: Financial-Analysis-Agent_sas_chicken_eej_state.json
|
||||
2024-08-23T16:57:09.833811-0400 Saved agent state to: Financial-Analysis-Agent_sas_chicken_eej_state.json
|
||||
2024-08-23T16:57:09.835655-0400 Function metrics: {
|
||||
"execution_time": 7.066652059555054,
|
||||
"memory_usage": -130.59375,
|
||||
"cpu_usage": -18.6,
|
||||
"io_operations": 1562,
|
||||
"function_calls": 1
|
||||
}
|
||||
|
||||
swarms [
|
||||
|
||||
|
||||
s_chicken_eej state to: Financial-Analysis-Agent_sas_chicken_eej_state.json
|
||||
2024-08-23T16:58:50.884436-0400 Saved agent state to: Financial-Analysis-Agent_sas_chicken_eej_state.json
|
||||
2024-08-23T16:58:50.887356-0400 Function metrics: {
|
||||
"execution_time": 12.482966899871826,
|
||||
"memory_usage": -323.140625,
|
||||
"cpu_usage": -11.099999999999998,
|
||||
"io_operations": 8723,
|
||||
"function_calls": 1
|
||||
}
|
||||
|
||||
s_chicken_eej state to: Financial-Analysis-Agent_sas_chicken_eej_state.json
|
||||
2024-08-23T16:58:50.884436-0400 Saved agent state to: Financial-Analysis-Agent_sas_chicken_eej_state.json
|
||||
2024-08-23T16:58:50.887356-0400 Function metrics: {
|
||||
"execution_time": 12.482966899871826,
|
||||
"memory_usage": -323.140625,
|
||||
"cpu_usage": -11.099999999999998,
|
||||
"io_operations": 8723,
|
||||
"function_calls": 1
|
||||
}
|
||||
en_eej_state.json
|
||||
2024-08-23T17:00:19.967511-0400 Saved agent state to: Financial-Analysis-Agent_sas_chicken_eej_state.json
|
||||
2024-08-23T17:00:19.969208-0400 Function metrics: {
|
||||
"execution_time": 8.775875091552734,
|
||||
"memory_usage": -70.046875,
|
||||
"cpu_usage": -16.2,
|
||||
"io_operations": 7530,
|
||||
"function_calls": 1
|
||||
}
|
||||
|
||||
Analysis-Agent_sas_chicken_eej_state.json
|
||||
2024-08-23T17:00:45.474628-0400 Function metrics: {
|
||||
"execution_time": 8.27669095993042,
|
||||
"memory_usage": -197.34375,
|
||||
"cpu_usage": -12.5,
|
||||
"io_operations": 7955,
|
||||
"function_calls": 1
|
||||
}
|
||||
|
||||
|
||||
Analysis-Agent_sas_chicken_eej_state.json
|
||||
2024-08-23T17:01:53.768837-0400 Function metrics: {
|
||||
"execution_time": 11.86063528060913,
|
||||
"memory_usage": -48.453125,
|
||||
"cpu_usage": -16.5,
|
||||
"io_operations": 5022,
|
||||
"function_calls": 1
|
||||
}
|
||||
|
||||
|
||||
|
||||
#############
|
||||
Analysis-Agent_sas_chicken_eej_state.json
|
||||
2024-08-23T17:03:39.670708-0400 Function metrics: {
|
||||
"execution_time": 8.982940912246704,
|
||||
"memory_usage": -321.171875,
|
||||
"cpu_usage": -12.5,
|
||||
"io_operations": 3118,
|
||||
"function_calls": 1
|
||||
}
|
@ -0,0 +1,73 @@
|
||||
2024-08-23T16:57:09.831419-0400 Autosaving agent state.
|
||||
2024-08-23T16:57:09.832168-0400 Saving Agent Financial-Analysis-Agent_sas_chicken_eej state to: Financial-Analysis-Agent_sas_chicken_eej_state.json
|
||||
2024-08-23T16:57:09.833811-0400 Saved agent state to: Financial-Analysis-Agent_sas_chicken_eej_state.json
|
||||
2024-08-23T16:57:09.835655-0400 Function metrics: {
|
||||
"execution_time": 7.066652059555054,
|
||||
"memory_usage": -130.59375,
|
||||
"cpu_usage": -18.6,
|
||||
"io_operations": 1562,
|
||||
"function_calls": 1
|
||||
}
|
||||
|
||||
swarms [
|
||||
|
||||
|
||||
s_chicken_eej state to: Financial-Analysis-Agent_sas_chicken_eej_state.json
|
||||
2024-08-23T16:58:50.884436-0400 Saved agent state to: Financial-Analysis-Agent_sas_chicken_eej_state.json
|
||||
2024-08-23T16:58:50.887356-0400 Function metrics: {
|
||||
"execution_time": 12.482966899871826,
|
||||
"memory_usage": -323.140625,
|
||||
"cpu_usage": -11.099999999999998,
|
||||
"io_operations": 8723,
|
||||
"function_calls": 1
|
||||
}
|
||||
|
||||
s_chicken_eej state to: Financial-Analysis-Agent_sas_chicken_eej_state.json
|
||||
2024-08-23T16:58:50.884436-0400 Saved agent state to: Financial-Analysis-Agent_sas_chicken_eej_state.json
|
||||
2024-08-23T16:58:50.887356-0400 Function metrics: {
|
||||
"execution_time": 12.482966899871826,
|
||||
"memory_usage": -323.140625,
|
||||
"cpu_usage": -11.099999999999998,
|
||||
"io_operations": 8723,
|
||||
"function_calls": 1
|
||||
}
|
||||
en_eej_state.json
|
||||
2024-08-23T17:00:19.967511-0400 Saved agent state to: Financial-Analysis-Agent_sas_chicken_eej_state.json
|
||||
2024-08-23T17:00:19.969208-0400 Function metrics: {
|
||||
"execution_time": 8.775875091552734,
|
||||
"memory_usage": -70.046875,
|
||||
"cpu_usage": -16.2,
|
||||
"io_operations": 7530,
|
||||
"function_calls": 1
|
||||
}
|
||||
|
||||
Analysis-Agent_sas_chicken_eej_state.json
|
||||
2024-08-23T17:00:45.474628-0400 Function metrics: {
|
||||
"execution_time": 8.27669095993042,
|
||||
"memory_usage": -197.34375,
|
||||
"cpu_usage": -12.5,
|
||||
"io_operations": 7955,
|
||||
"function_calls": 1
|
||||
}
|
||||
|
||||
|
||||
Analysis-Agent_sas_chicken_eej_state.json
|
||||
2024-08-23T17:01:53.768837-0400 Function metrics: {
|
||||
"execution_time": 11.86063528060913,
|
||||
"memory_usage": -48.453125,
|
||||
"cpu_usage": -16.5,
|
||||
"io_operations": 5022,
|
||||
"function_calls": 1
|
||||
}
|
||||
|
||||
|
||||
|
||||
#############
|
||||
Analysis-Agent_sas_chicken_eej_state.json
|
||||
2024-08-23T17:03:39.670708-0400 Function metrics: {
|
||||
"execution_time": 8.982940912246704,
|
||||
"memory_usage": -321.171875,
|
||||
"cpu_usage": -12.5,
|
||||
"io_operations": 3118,
|
||||
"function_calls": 1
|
||||
}
|
@ -1,140 +0,0 @@
|
||||
import uuid
|
||||
from typing import Any, List, Optional
|
||||
|
||||
from sqlalchemy import JSON, Column, String, create_engine
|
||||
from sqlalchemy.dialects.postgresql import UUID
|
||||
from sqlalchemy.ext.declarative import declarative_base
|
||||
from sqlalchemy.orm import Session
|
||||
from swarms.memory.base_vectordb import BaseVectorDatabase
|
||||
|
||||
|
||||
class PostgresDB(BaseVectorDatabase):
|
||||
"""
|
||||
A class representing a Postgres database.
|
||||
|
||||
Args:
|
||||
connection_string (str): The connection string for the Postgres database.
|
||||
table_name (str): The name of the table in the database.
|
||||
|
||||
Attributes:
|
||||
engine: The SQLAlchemy engine for connecting to the database.
|
||||
table_name (str): The name of the table in the database.
|
||||
VectorModel: The SQLAlchemy model representing the vector table.
|
||||
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self, connection_string: str, table_name: str, *args, **kwargs
|
||||
):
|
||||
"""
|
||||
Initializes a new instance of the PostgresDB class.
|
||||
|
||||
Args:
|
||||
connection_string (str): The connection string for the Postgres database.
|
||||
table_name (str): The name of the table in the database.
|
||||
|
||||
"""
|
||||
self.engine = create_engine(connection_string, *args, **kwargs)
|
||||
self.table_name = table_name
|
||||
self.VectorModel = self._create_vector_model()
|
||||
|
||||
def _create_vector_model(self):
|
||||
"""
|
||||
Creates the SQLAlchemy model for the vector table.
|
||||
|
||||
Returns:
|
||||
The SQLAlchemy model representing the vector table.
|
||||
|
||||
"""
|
||||
Base = declarative_base()
|
||||
|
||||
class VectorModel(Base):
|
||||
__tablename__ = self.table_name
|
||||
|
||||
id = Column(
|
||||
UUID(as_uuid=True),
|
||||
primary_key=True,
|
||||
default=uuid.uuid4,
|
||||
unique=True,
|
||||
nullable=False,
|
||||
)
|
||||
vector = Column(
|
||||
String
|
||||
) # Assuming vector is stored as a string
|
||||
namespace = Column(String)
|
||||
meta = Column(JSON)
|
||||
|
||||
return VectorModel
|
||||
|
||||
def add(
|
||||
self,
|
||||
vector: str,
|
||||
vector_id: Optional[str] = None,
|
||||
namespace: Optional[str] = None,
|
||||
meta: Optional[dict] = None,
|
||||
) -> None:
|
||||
"""
|
||||
Adds or updates a vector in the database.
|
||||
|
||||
Args:
|
||||
vector (str): The vector to be added or updated.
|
||||
vector_id (str, optional): The ID of the vector. If not provided, a new ID will be generated.
|
||||
namespace (str, optional): The namespace of the vector.
|
||||
meta (dict, optional): Additional metadata associated with the vector.
|
||||
|
||||
"""
|
||||
try:
|
||||
with Session(self.engine) as session:
|
||||
obj = self.VectorModel(
|
||||
id=vector_id,
|
||||
vector=vector,
|
||||
namespace=namespace,
|
||||
meta=meta,
|
||||
)
|
||||
session.merge(obj)
|
||||
session.commit()
|
||||
except Exception as e:
|
||||
print(f"Error adding or updating vector: {e}")
|
||||
|
||||
def query(
|
||||
self, query: Any, namespace: Optional[str] = None
|
||||
) -> List[Any]:
|
||||
"""
|
||||
Queries vectors from the database based on the given query and namespace.
|
||||
|
||||
Args:
|
||||
query (Any): The query or condition to filter the vectors.
|
||||
namespace (str, optional): The namespace of the vectors to be queried.
|
||||
|
||||
Returns:
|
||||
List[Any]: A list of vectors that match the query and namespace.
|
||||
|
||||
"""
|
||||
try:
|
||||
with Session(self.engine) as session:
|
||||
q = session.query(self.VectorModel)
|
||||
if namespace:
|
||||
q = q.filter_by(namespace=namespace)
|
||||
# Assuming 'query' is a condition or filter
|
||||
q = q.filter(query)
|
||||
return q.all()
|
||||
except Exception as e:
|
||||
print(f"Error querying vectors: {e}")
|
||||
return []
|
||||
|
||||
def delete_vector(self, vector_id):
|
||||
"""
|
||||
Deletes a vector from the database based on the given vector ID.
|
||||
|
||||
Args:
|
||||
vector_id: The ID of the vector to be deleted.
|
||||
|
||||
"""
|
||||
try:
|
||||
with Session(self.engine) as session:
|
||||
obj = session.get(self.VectorModel, vector_id)
|
||||
if obj:
|
||||
session.delete(obj)
|
||||
session.commit()
|
||||
except Exception as e:
|
||||
print(f"Error deleting vector: {e}")
|
@ -1,217 +0,0 @@
|
||||
from typing import Optional
|
||||
|
||||
import pinecone
|
||||
from attr import define, field
|
||||
|
||||
from swarms.memory.base_vectordb import BaseVectorDatabase
|
||||
from swarms.utils import str_to_hash
|
||||
|
||||
|
||||
@define
|
||||
class PineconeDB(BaseVectorDatabase):
|
||||
"""
|
||||
PineconeDB is a vector storage driver that uses Pinecone as the underlying storage engine.
|
||||
|
||||
Pinecone is a vector database that allows you to store, search, and retrieve high-dimensional vectors with
|
||||
blazing speed and low latency. It is a managed service that is easy to use and scales effortlessly, so you can
|
||||
focus on building your applications instead of managing your infrastructure.
|
||||
|
||||
Args:
|
||||
api_key (str): The API key for your Pinecone account.
|
||||
index_name (str): The name of the index to use.
|
||||
environment (str): The environment to use. Either "us-west1-gcp" or "us-east1-gcp".
|
||||
project_name (str, optional): The name of the project to use. Defaults to None.
|
||||
index (pinecone.Index, optional): The Pinecone index to use. Defaults to None.
|
||||
|
||||
Methods:
|
||||
upsert_vector(vector: list[float], vector_id: Optional[str] = None, namespace: Optional[str] = None, meta: Optional[dict] = None, **kwargs) -> str:
|
||||
Upserts a vector into the index.
|
||||
load_entry(vector_id: str, namespace: Optional[str] = None) -> Optional[BaseVectorStore.Entry]:
|
||||
Loads a single vector from the index.
|
||||
load_entries(namespace: Optional[str] = None) -> list[BaseVectorStore.Entry]:
|
||||
Loads all vectors from the index.
|
||||
query(query: str, count: Optional[int] = None, namespace: Optional[str] = None, include_vectors: bool = False, include_metadata=True, **kwargs) -> list[BaseVectorStore.QueryResult]:
|
||||
Queries the index for vectors similar to the given query string.
|
||||
create_index(name: str, **kwargs) -> None:
|
||||
Creates a new index.
|
||||
|
||||
Usage:
|
||||
>>> from swarms.memory.vector_stores.pinecone import PineconeDB
|
||||
>>> from swarms.utils.embeddings import USEEmbedding
|
||||
>>> from swarms.utils.hash import str_to_hash
|
||||
>>> from swarms.utils.dataframe import dataframe_to_hash
|
||||
>>> import pandas as pd
|
||||
>>>
|
||||
>>> # Create a new PineconeDB instance:
|
||||
>>> pv = PineconeDB(
|
||||
>>> api_key="your-api-key",
|
||||
>>> index_name="your-index-name",
|
||||
>>> environment="us-west1-gcp",
|
||||
>>> project_name="your-project-name"
|
||||
>>> )
|
||||
>>> # Create a new index:
|
||||
>>> pv.create_index("your-index-name")
|
||||
>>> # Create a new USEEmbedding instance:
|
||||
>>> use = USEEmbedding()
|
||||
>>> # Create a new dataframe:
|
||||
>>> df = pd.DataFrame({
|
||||
>>> "text": [
|
||||
>>> "This is a test",
|
||||
>>> "This is another test",
|
||||
>>> "This is a third test"
|
||||
>>> ]
|
||||
>>> })
|
||||
>>> # Embed the dataframe:
|
||||
>>> df["embedding"] = df["text"].apply(use.embed_string)
|
||||
>>> # Upsert the dataframe into the index:
|
||||
>>> pv.upsert_vector(
|
||||
>>> vector=df["embedding"].tolist(),
|
||||
>>> vector_id=dataframe_to_hash(df),
|
||||
>>> namespace="your-namespace"
|
||||
>>> )
|
||||
>>> # Query the index:
|
||||
>>> pv.query(
|
||||
>>> query="This is a test",
|
||||
>>> count=10,
|
||||
>>> namespace="your-namespace"
|
||||
>>> )
|
||||
>>> # Load a single entry from the index:
|
||||
>>> pv.load_entry(
|
||||
>>> vector_id=dataframe_to_hash(df),
|
||||
>>> namespace="your-namespace"
|
||||
>>> )
|
||||
>>> # Load all entries from the index:
|
||||
>>> pv.load_entries(
|
||||
>>> namespace="your-namespace"
|
||||
>>> )
|
||||
|
||||
|
||||
"""
|
||||
|
||||
api_key: str = field(kw_only=True)
|
||||
index_name: str = field(kw_only=True)
|
||||
environment: str = field(kw_only=True)
|
||||
project_name: Optional[str] = field(default=None, kw_only=True)
|
||||
index: pinecone.Index = field(init=False)
|
||||
|
||||
def __attrs_post_init__(self) -> None:
|
||||
"""Post init"""
|
||||
pinecone.init(
|
||||
api_key=self.api_key,
|
||||
environment=self.environment,
|
||||
project_name=self.project_name,
|
||||
)
|
||||
|
||||
self.index = pinecone.Index(self.index_name)
|
||||
|
||||
def add(
|
||||
self,
|
||||
vector: list[float],
|
||||
vector_id: Optional[str] = None,
|
||||
namespace: Optional[str] = None,
|
||||
meta: Optional[dict] = None,
|
||||
**kwargs,
|
||||
) -> str:
|
||||
"""Add a vector to the index.
|
||||
|
||||
Args:
|
||||
vector (list[float]): _description_
|
||||
vector_id (Optional[str], optional): _description_. Defaults to None.
|
||||
namespace (Optional[str], optional): _description_. Defaults to None.
|
||||
meta (Optional[dict], optional): _description_. Defaults to None.
|
||||
|
||||
Returns:
|
||||
str: _description_
|
||||
"""
|
||||
vector_id = vector_id if vector_id else str_to_hash(str(vector))
|
||||
|
||||
params = {"namespace": namespace} | kwargs
|
||||
|
||||
self.index.upsert([(vector_id, vector, meta)], **params)
|
||||
|
||||
return vector_id
|
||||
|
||||
def load_entries(self, namespace: Optional[str] = None):
|
||||
"""Load all entries from the index.
|
||||
|
||||
Args:
|
||||
namespace (Optional[str], optional): _description_. Defaults to None.
|
||||
|
||||
Returns:
|
||||
_type_: _description_
|
||||
"""
|
||||
# This is a hacky way to query up to 10,000 values from Pinecone. Waiting on an official API for fetching
|
||||
# all values from a namespace:
|
||||
# https://community.pinecone.io/t/is-there-a-way-to-query-all-the-vectors-and-or-metadata-from-a-namespace/797/5
|
||||
|
||||
results = self.index.query(
|
||||
self.embedding_driver.embed_string(""),
|
||||
top_k=10000,
|
||||
include_metadata=True,
|
||||
namespace=namespace,
|
||||
)
|
||||
|
||||
        # Collect every match into a list of entries instead of returning
        # only the last one built in the loop.
        entries = []
        for result in results["matches"]:
            entries.append(
                {
                    "id": result["id"],
                    "vector": result["values"],
                    "meta": result["metadata"],
                    "namespace": result["namespace"],
                }
            )
        return entries
|
||||
|
||||
def query(
|
||||
self,
|
||||
query: str,
|
||||
count: Optional[int] = None,
|
||||
namespace: Optional[str] = None,
|
||||
include_vectors: bool = False,
|
||||
# PineconeDBStorageDriver-specific params:
|
||||
include_metadata=True,
|
||||
**kwargs,
|
||||
):
|
||||
"""Query the index for vectors similar to the given query string.
|
||||
|
||||
Args:
|
||||
query (str): _description_
|
||||
count (Optional[int], optional): _description_. Defaults to None.
|
||||
namespace (Optional[str], optional): _description_. Defaults to None.
|
||||
include_vectors (bool, optional): _description_. Defaults to False.
|
||||
include_metadata (bool, optional): _description_. Defaults to True.
|
||||
|
||||
Returns:
|
||||
_type_: _description_
|
||||
"""
|
||||
vector = self.embedding_driver.embed_string(query)
|
||||
|
||||
params = {
|
||||
"top_k": count,
|
||||
"namespace": namespace,
|
||||
"include_values": include_vectors,
|
||||
"include_metadata": include_metadata,
|
||||
} | kwargs
|
||||
|
||||
results = self.index.query(vector, **params)
|
||||
|
||||
        # Build one entry per match; the original indexed the whole response
        # object instead of each match and returned only a single entry.
        entries = []
        for r in results["matches"]:
            entries.append(
                {
                    "id": r["id"],
                    "vector": r["values"],
                    "score": r["score"],
                    "meta": r["metadata"],
                    "namespace": namespace,
                }
            )
        return entries
|
||||
|
||||
def create_index(self, name: str, **kwargs) -> None:
|
||||
"""Create a new index.
|
||||
|
||||
Args:
|
||||
name (str): _description_
|
||||
"""
|
||||
params = {
|
||||
"name": name,
|
||||
"dimension": self.embedding_driver.dimensions,
|
||||
} | kwargs
|
||||
|
||||
pinecone.create_index(**params)
|