flow -> agent, developer swarm with prompts, maybe add the ability to create the classes

pull/209/head
Kye 1 year ago
parent 3d3dddaf0c
commit fee10575af

@ -1,4 +1,4 @@
name: Docs WorkFlow
name: Docs WorkAgent
on:
push:

@ -1,4 +1,4 @@
name: Welcome WorkFlow
name: Welcome WorkAgent
on:
issues:

@ -27,10 +27,10 @@ Run example in Collab: <a target="_blank" href="https://colab.research.google.co
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
### `Flow` Example
### `Agent` Example
- Reliable Structure that provides LLMs autonomy
- Extremely Customizable with stopping conditions, interactivity, dynamic temperature, loop intervals, and so much more
- Enterprise Grade + Production Grade: `Flow` is designed and optimized for automating real-world tasks at scale!
- Enterprise Grade + Production Grade: `Agent` is designed and optimized for automating real-world tasks at scale!
```python
@ -38,9 +38,9 @@ import os
from dotenv import load_dotenv
# Import the OpenAIChat model and the Flow struct
# Import the OpenAIChat model and the Agent struct
from swarms.models import OpenAIChat
from swarms.structs import Flow
from swarms.structs import Agent
# Load the environment variables
load_dotenv()
@ -56,10 +56,10 @@ llm = OpenAIChat(
## Initialize the workflow
flow = Flow(llm=llm, max_loops=1, dashboard=True)
agent = Agent(llm=llm, max_loops=1, dashboard=True)
# Run the workflow on a task
out = flow.run("Generate a 10,000 word blog on health and wellness.")
out = agent.run("Generate a 10,000 word blog on health and wellness.")
@ -70,11 +70,11 @@ out = flow.run("Generate a 10,000 word blog on health and wellness.")
### `SequentialWorkflow`
- A Sequential swarm of autonomous agents where each agent's outputs are fed into the next agent
- Save and Restore Workflow states!
- Integrate Flow's with various LLMs and Multi-Modality Models
- Integrate Agents with various LLMs and Multi-Modality Models
```python
from swarms.models import OpenAIChat, BioGPT, Anthropic
from swarms.structs import Flow
from swarms.structs import Agent
from swarms.structs.sequential_workflow import SequentialWorkflow
@ -83,7 +83,7 @@ api_key = (
"" # Your actual API key here
)
# Initialize the language flow
# Initialize the language model
llm = OpenAIChat(
openai_api_key=api_key,
temperature=0.5,
@ -95,16 +95,16 @@ biochat = BioGPT()
# Use Anthropic
anthropic = Anthropic()
# Initialize the agent with the language flow
agent1 = Flow(llm=llm, max_loops=1, dashboard=False)
# Initialize the agent with the language model
agent1 = Agent(llm=llm, max_loops=1, dashboard=False)
# Create another agent for a different task
agent2 = Flow(llm=llm, max_loops=1, dashboard=False)
agent2 = Agent(llm=llm, max_loops=1, dashboard=False)
# Create another agent for a different task
agent3 = Flow(llm=biochat, max_loops=1, dashboard=False)
agent3 = Agent(llm=biochat, max_loops=1, dashboard=False)
# agent4 = Flow(llm=anthropic, max_loops="auto")
# agent4 = Agent(llm=anthropic, max_loops="auto")
# Create the workflow
workflow = SequentialWorkflow(max_loops=1)
@ -127,10 +127,10 @@ for task in workflow.tasks:
```
## `Multi Modal Autonomous Agents`
- Run the flow with multiple modalities useful for various real-world tasks in manufacturing, logistics, and health.
- Run the agent with multiple modalities useful for various real-world tasks in manufacturing, logistics, and health.
```python
from swarms.structs import Flow
from swarms.structs import Agent
from swarms.models.gpt4_vision_api import GPT4VisionAPI
from swarms.prompts.multi_modal_autonomous_instruction_prompt import (
MULTI_MODAL_AUTO_AGENT_SYSTEM_PROMPT_1,
@ -147,14 +147,14 @@ task = (
img = "assembly_line.jpg"
## Initialize the workflow
flow = Flow(
agent = Agent(
llm=llm,
max_loops='auto',
sop=MULTI_MODAL_AUTO_AGENT_SYSTEM_PROMPT_1,
dashboard=True,
)
flow.run(task=task, img=img)
agent.run(task=task, img=img)

@ -1,7 +1,7 @@
# Reliable Enterprise-Grade Autonomous Agents in Less Than 5 lines of Code
========================================================================
Welcome to the walkthrough guide for beginners on using the "Flow" feature within the Swarms framework. This guide is designed to help you understand and utilize the capabilities of the Flow class for seamless and reliable interactions with autonomous agents.
Welcome to the walkthrough guide for beginners on using the "Agent" feature within the Swarms framework. This guide is designed to help you understand and utilize the capabilities of the Agent class for seamless and reliable interactions with autonomous agents.
## Official Swarms Links
=====================
@ -21,27 +21,27 @@ Now let's begin...
## [Table of Contents](https://github.com/kyegomez/swarms)
===========================================================================================================
1. Introduction to Swarms Flow Module
1. Introduction to Swarms Agent Module
- 1.1 What is Swarms?
- 1.2 Understanding the Flow Module
- 1.2 Understanding the Agent Module
2. Setting Up Your Development Environment
- 2.1 Installing Required Dependencies
- 2.2 API Key Setup
- 2.3 Creating Your First Flow
- 2.3 Creating Your First Agent
3. Creating Your First Flow
3. Creating Your First Agent
- 3.1 Importing Necessary Libraries
- 3.2 Defining Constants
- 3.3 Initializing the Flow Object
- 3.3 Initializing the Agent Object
- 3.4 Initializing the Language Model
- 3.5 Running Your Flow
- 3.6 Understanding Flow Options
- 3.5 Running Your Agent
- 3.6 Understanding Agent Options
4. Advanced Flow Concepts
4. Advanced Agent Concepts
- 4.1 Custom Stopping Conditions
- 4.2 Dynamic Temperature Handling
@ -50,10 +50,10 @@ Now let's begin...
- 4.5 Response Filtering
- 4.6 Interactive Mode
5. Saving and Loading Flows
5. Saving and Loading Agents
- 5.1 Saving Flow State
- 5.2 Loading a Saved Flow
- 5.1 Saving Agent State
- 5.2 Loading a Saved Agent
6. Troubleshooting and Tips
@ -62,7 +62,7 @@ Now let's begin...
7. Conclusion
## [1. Introduction to Swarms Flow Module](https://github.com/kyegomez/swarms)
## [1. Introduction to Swarms Agent Module](https://github.com/kyegomez/swarms)
===================================================================================================================================================
### [1.1 What is Swarms?](https://github.com/kyegomez/swarms)
@ -70,23 +70,23 @@ Now let's begin...
Swarms is a powerful framework designed to provide tools and capabilities for working with language models and automating various tasks. It allows developers to interact with language models seamlessly.
## 1.2 Understanding the Flow Feature
## 1.2 Understanding the Agent Feature
==================================
### [What is the Flow Feature?](https://github.com/kyegomez/swarms)
### [What is the Agent Feature?](https://github.com/kyegomez/swarms)
--------------------------------------------------------------------------------------------------------------------------
The Flow feature is a powerful component of the Swarms framework that allows developers to create a sequential, conversational interaction with AI language models. It enables developers to build multi-step conversations, generate long-form content, and perform complex tasks using AI. The Flow class provides autonomy to language models, enabling them to generate responses in a structured manner.
The Agent feature is a powerful component of the Swarms framework that allows developers to create a sequential, conversational interaction with AI language models. It enables developers to build multi-step conversations, generate long-form content, and perform complex tasks using AI. The Agent class provides autonomy to language models, enabling them to generate responses in a structured manner.
### [Key Concepts](https://github.com/kyegomez/swarms)
-------------------------------------------------------------------------------------------------
Before diving into the practical aspects, let's clarify some key concepts related to the Flow feature:
Before diving into the practical aspects, let's clarify some key concepts related to the Agent feature (a short code sketch tying them together follows this list):
- Flow: A Flow is an instance of the Flow class that represents an ongoing interaction with an AI language model. It consists of a series of steps and responses.
- Stopping Condition: A stopping condition is a criterion that, when met, allows the Flow to stop generating responses. This can be user-defined and can depend on the content of the responses.
- Agent: An Agent is an instance of the Agent class that represents an ongoing interaction with an AI language model. It consists of a series of steps and responses.
- Stopping Condition: A stopping condition is a criterion that, when met, allows the Agent to stop generating responses. This can be user-defined and can depend on the content of the responses.
- Loop Interval: The loop interval specifies the time delay between consecutive interactions with the AI model.
- Retry Mechanism: In case of errors or failures during AI model interactions, the Flow can be configured to make multiple retry attempts with a specified interval.
- Retry Mechanism: In case of errors or failures during AI model interactions, the Agent can be configured to make multiple retry attempts with a specified interval.
- Interactive Mode: Interactive mode allows developers to have a back-and-forth conversation with the AI model, making it suitable for real-time interactions.
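To make these concepts concrete, here is a minimal sketch (not taken from the Swarms source) of how they typically map onto `Agent` parameters. The `loop_interval` keyword is assumed for illustration; the other options (`stopping_condition`, `retry_attempts`, `retry_interval`, `interactive`, `dynamic_temperature`) appear later in this guide.
```python
from swarms.models import OpenAIChat
from swarms.structs import Agent

def stop_on_keyword(response: str) -> bool:
    # Stopping condition: halt once the model says it is done.
    return "done" in response.lower()

llm = OpenAIChat(openai_api_key="YOUR_API_KEY", temperature=0.5)

agent = Agent(
    llm=llm,
    max_loops=5,                         # upper bound on autonomous loops
    stopping_condition=stop_on_keyword,  # custom stopping condition
    loop_interval=1,                     # seconds between loops (assumed name)
    retry_attempts=3,                    # retries on failed LLM calls
    retry_interval=1,                    # seconds between retries
    dynamic_temperature=False,           # randomize temperature each loop when True
    interactive=False,                   # set True for back-and-forth mode
)
```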
## [2. Setting Up Your Development Environment](https://github.com/kyegomez/swarms)
@ -95,38 +95,38 @@ Before diving into the practical aspects, let's clarify some key concepts relate
### [2.1 Installing Required Dependencies](https://github.com/kyegomez/swarms)
------------------------------------------------------------------------------------------------------------------------------------------------
Before you can start using the Swarms Flow module, you need to set up your development environment. First, you'll need to install the necessary dependencies, including Swarms itself.
Before you can start using the Swarms Agent module, you need to set up your development environment. First, you'll need to install the necessary dependencies, including Swarms itself.
# Install Swarms and required libraries
`pip3 install --upgrade swarms`
## [2. Creating Your First Flow](https://github.com/kyegomez/swarms)
## [2. Creating Your First Agent](https://github.com/kyegomez/swarms)
-----------------------------------------------------------------------------------------------------------------------------
Now, let's create your first Flow. A Flow represents a chain-like structure that allows you to engage in multi-step conversations with language models. The Flow structure is what gives an LLM autonomy. It's the Mitochondria of an autonomous agent.
Now, let's create your first Agent. An Agent represents a chain-like structure that allows you to engage in multi-step conversations with language models. The Agent structure is what gives an LLM autonomy; it's the mitochondria of an autonomous agent.
# Import necessary modules
```python
from swarms.models import OpenAIChat # Zephr, Mistral
from swarms.structs import Flow
from swarms.structs import Agent
api_key = ""# Initialize the language model (LLM)
llm = OpenAIChat(openai_api_key=api_key, temperature=0.5, max_tokens=3000)# Initialize the Flow object
llm = OpenAIChat(openai_api_key=api_key, temperature=0.5, max_tokens=3000)# Initialize the Agent object
flow = Flow(llm=llm, max_loops=5)# Run the flow
out = flow.run("Create an financial analysis on the following metrics")
agent = Agent(llm=llm, max_loops=5)# Run the agent
out = agent.run("Create an financial analysis on the following metrics")
print(out)
```
### [3. Initializing the Flow Object](https://github.com/kyegomez/swarms)
### [3. Initializing the Agent Object](https://github.com/kyegomez/swarms)
----------------------------------------------------------------------------------------------------------------------------------------
Create a Flow object that will be the backbone of your conversational flow.
Create an Agent object that will be the backbone of your conversational agent.
```python
# Initialize the Flow object
flow = Flow(
# Initialize the Agent object
agent = Agent(
llm=llm,
max_loops=5,
stopping_condition=None, # You can define custom stopping conditions
@ -142,7 +142,7 @@ flow = Flow(
### [3.2 Initializing the Language Model](https://github.com/kyegomez/swarms)
----------------------------------------------------------------------------------------------------------------------------------------------
Initialize the language model (LLM) that your Flow will interact with. In this example, we're using OpenAI's GPT-3 as the LLM.
Initialize the language model (LLM) that your Agent will interact with. In this example, we're using OpenAI's GPT-3 as the LLM.
- You can also use `Mistral`, `Zephr`, or any other model!
@ -155,16 +155,16 @@ llm = OpenAIChat(
)
```
### [3.3 Running Your Flow](https://github.com/kyegomez/swarms)
### [3.3 Running Your Agent](https://github.com/kyegomez/swarms)
------------------------------------------------------------------------------------------------------------------
Now, you're ready to run your Flow and start interacting with the language model.
Now, you're ready to run your Agent and start interacting with the language model.
If you are using a multi-modality model, you can pass in the image path as an additional parameter
```python
# Run your Flow
out = flow.run(
# Run your Agent
out = agent.run(
"Generate a 10,000 word blog on health and wellness.",
# "img.jpg" , Image path for multi-modal models
)
@ -174,14 +174,14 @@ print(out)
This code will initiate a conversation with the language model, and you'll receive responses accordingly.
## [4. Advanced Flow Concepts](https://github.com/kyegomez/swarms)
## [4. Advanced Agent Concepts](https://github.com/kyegomez/swarms)
===========================================================================================================================
In this section, we'll explore advanced concepts that can enhance your experience with the Swarms Flow module.
In this section, we'll explore advanced concepts that can enhance your experience with the Swarms Agent module.
### [4.1 Custom Stopping Conditions](https://github.com/kyegomez/swarms)
You can define custom stopping conditions for your Flow. For example, you might want the Flow to stop when a specific word is mentioned in the response.
You can define custom stopping conditions for your Agent. For example, you might want the Agent to stop when a specific word is mentioned in the response.
# Custom stopping condition example
```python
@ -189,16 +189,16 @@ def stop_when_repeats(response: str) -> bool:
return "Stop" in response.lower()
```
# Set the stopping condition in your Flow
```flow.stopping_condition = stop_when_repeats```
# Set the stopping condition in your Agent
```agent.stopping_condition = stop_when_repeats```
### [4.2 Dynamic Temperature Handling](https://github.com/kyegomez/swarms)
----------------------------------------------------------------------------------------------------------------------------------------
Dynamic temperature handling allows you to adjust the temperature attribute of the language model during the conversation.
# Enable dynamic temperature handling in your Flow
`flow.dynamic_temperature = True`
# Enable dynamic temperature handling in your Agent
`agent.dynamic_temperature = True`
This feature randomly changes the temperature attribute for each loop, providing a variety of responses.
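Conceptually, dynamic temperature handling amounts to re-rolling the model's temperature before each loop. The sketch below only illustrates that idea; it is not the library's internal code.
```python
import random

def randomize_temperature(llm, low: float = 0.2, high: float = 1.0) -> None:
    # Pick a fresh temperature before the next loop so responses vary.
    llm.temperature = round(random.uniform(low, high), 2)

# Example: call randomize_temperature(llm) before each agent loop.
```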
@ -208,7 +208,7 @@ This feature randomly changes the temperature attribute for each loop, providing
You can provide feedback on responses generated by the language model using the `provide_feedback` method.
- Provide feedback on a response
`flow.provide_feedback("The response was helpful.")`
`agent.provide_feedback("The response was helpful.")`
This feedback can be valuable for improving the quality of responses.
@ -219,8 +219,8 @@ In case of errors or issues during conversation, you can implement a retry mecha
# Set the number of retry attempts and interval
```python
flow.retry_attempts = 3
flow.retry_interval = 1 # in seconds
agent.retry_attempts = 3
agent.retry_interval = 1 # in seconds
```
### [4.5 Response Filtering](https://github.com/kyegomez/swarms)
--------------------------------------------------------------------------------------------------------------------
@ -229,38 +229,38 @@ You can add response filters to filter out certain words or phrases from the res
# Add a response filter
```python
flow.add_response_filter("inappropriate_word")
agent.add_response_filter("inappropriate_word")
```
This helps in controlling the content generated by the language model.
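For intuition, a response filter can be as simple as masking registered words in the generated text. The sketch below mirrors the idea behind `add_response_filter`; it is not the library's actual implementation.
```python
def apply_response_filters(response: str, filters: list[str]) -> str:
    # Replace every registered filter word with a placeholder.
    for word in filters:
        response = response.replace(word, "[FILTERED]")
    return response

print(apply_response_filters("this reply contains an inappropriate_word", ["inappropriate_word"]))
```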
### [4.6 Interactive Mode](https://github.com/kyegomez/swarms)
----------------------------------------------------------------------------------------------------------------
Interactive mode allows you to have a back-and-forth conversation with the language model. When enabled, the Flow will prompt for user input after each response.
Interactive mode allows you to have a back-and-forth conversation with the language model. When enabled, the Agent will prompt for user input after each response.
# Enable interactive mode
`flow.interactive = True`
`agent.interactive = True`
This is useful for real-time conversations with the model.
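As a rough illustration, interactive mode automates the following loop: the model responds, the user is prompted for the next message, and that message becomes the next task. This sketch shows the pattern only, not the `Agent` class's internal loop.
```python
from swarms.models import OpenAIChat

llm = OpenAIChat(openai_api_key="YOUR_API_KEY", temperature=0.5)

task = "Draft a short product update email."
for _ in range(3):               # a few turns for demonstration
    response = llm(task)         # OpenAIChat instances are callable
    print(response)
    task = input("You: ")        # feed the user's reply back in as the next task
```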
## [5. Saving and Loading Flows](https://github.com/kyegomez/swarms)
## [5. Saving and Loading Agents](https://github.com/kyegomez/swarms)
===============================================================================================================================
### [5.1 Saving Flow State](https://github.com/kyegomez/swarms)
### [5.1 Saving Agent State](https://github.com/kyegomez/swarms)
------------------------------------------------------------------------------------------------------------------
You can save the state of your Flow, including the conversation history, for future use.
You can save the state of your Agent, including the conversation history, for future use.
# Save the Flow state to a file
`flow.save("path/to/flow_state.json")`
# Save the Agent state to a file
`agent.save("path/to/flow_state.json")`
### [5.2 Loading a Saved Flow](https://github.com/kyegomez/swarms)
### [5.2 Loading a Saved Agent](https://github.com/kyegomez/swarms)
------------------------------------------------------------------------------------------------------------------------
To continue a conversation or reuse a Flow, you can load a previously saved state.
To continue a conversation or reuse an Agent, you can load a previously saved state.
# Load a saved Flow state
`flow.load("path/to/flow_state.json")`
# Load a saved Agent state
`agent.load("path/to/flow_state.json")`
## [6. Troubleshooting and Tips](https://github.com/kyegomez/swarms)
===============================================================================================================================
@ -271,17 +271,17 @@ To continue a conversation or reuse a Flow, you can load a previously saved stat
You can analyze the feedback provided during the conversation to identify issues and improve the quality of interactions.
# Analyze feedback
`flow.analyze_feedback()`
`agent.analyze_feedback()`
### [6.2 Troubleshooting Common Issues](https://github.com/kyegomez/swarms)
------------------------------------------------------------------------------------------------------------------------------------------
If you encounter issues during conversation, refer to the troubleshooting section for guidance on resolving common problems.
# [7. Conclusion: Empowering Developers with Swarms Framework and Flow Structure for Automation](https://github.com/kyegomez/swarms)
# [7. Conclusion: Empowering Developers with Swarms Framework and Agent Structure for Automation](https://github.com/kyegomez/swarms)
================================================================================================================================================================================================================================================================
In a world where digital tasks continue to multiply and diversify, the need for automation has never been more critical. Developers find themselves at the forefront of this automation revolution, tasked with creating reliable solutions that can seamlessly handle an array of digital tasks. Enter the Swarms framework and the Flow structure, a dynamic duo that empowers developers to build autonomous agents capable of efficiently and effectively automating a wide range of digital tasks.
In a world where digital tasks continue to multiply and diversify, the need for automation has never been more critical. Developers find themselves at the forefront of this automation revolution, tasked with creating reliable solutions that can seamlessly handle an array of digital tasks. Enter the Swarms framework and the Agent structure, a dynamic duo that empowers developers to build autonomous agents capable of efficiently and effectively automating a wide range of digital tasks.
[The Automation Imperative](https://github.com/kyegomez/swarms)
---------------------------------------------------------------------------------------------------------------------------
@ -300,7 +300,7 @@ One of the standout features of Swarms is its seamless integration with state-of
By leveraging Swarms, developers can effortlessly incorporate these language models into their applications and workflows. For instance, they can build chatbots that provide intelligent responses to customer inquiries or generate lengthy documents with minimal manual intervention. This not only saves time but also enhances overall productivity.
[2. Multi-Step Conversational Flows](https://github.com/kyegomez/swarms)
[2. Multi-Step Conversational Agents](https://github.com/kyegomez/swarms)
---------------------------------------------------------------------------------------------------------------------------------------------
Swarms excels in orchestrating multi-step conversational flows. Developers can define intricate sequences of interactions, where the system generates responses, and users provide input at various stages. This functionality is a game-changer for building chatbots, virtual assistants, or any application requiring dynamic and context-aware conversations.
@ -324,58 +324,58 @@ Swarms encourages the collection of feedback on generated responses. Developers
Error handling is a critical aspect of any automation framework. Swarms simplifies this process by offering a retry mechanism. In case of errors or issues during conversations, developers can configure the framework to attempt generating responses again, ensuring robust and resilient automation.
[6. Saving and Loading Flows](https://github.com/kyegomez/swarms)
[6. Saving and Loading Agents](https://github.com/kyegomez/swarms)
-------------------------------------------------------------------------------------------------------------------------------
Developers can save the state of their conversational flows, allowing for seamless continuity and reusability. This feature is particularly beneficial when working on long-term projects or scenarios where conversations need to be resumed from a specific point.
[Unleashing the Potential of Automation with Swarms and Flow](https://github.com/kyegomez/swarms)
[Unleashing the Potential of Automation with Swarms and Agent](https://github.com/kyegomez/swarms)
===============================================================================================================================================================================================
The combined power of the Swarms framework and the Flow structure creates a synergy that empowers developers to automate a multitude of digital tasks. These tools provide versatility, customization, and extensibility, making them ideal for a wide range of applications. Let's explore some of the remarkable ways in which developers can leverage Swarms and Flow for automation:
The combined power of the Swarms framework and the Agent structure creates a synergy that empowers developers to automate a multitude of digital tasks. These tools provide versatility, customization, and extensibility, making them ideal for a wide range of applications. Let's explore some of the remarkable ways in which developers can leverage Swarms and Agent for automation:
[1. Customer Support and Service Automation](https://github.com/kyegomez/swarms)
-------------------------------------------------------------------------------------------------------------------------------------------------------------
Swarms and Flow enable the creation of AI-powered customer support chatbots that excel at handling common inquiries, troubleshooting issues, and escalating complex problems to human agents when necessary. This level of automation not only reduces response times but also enhances the overall customer experience.
Swarms and Agent enable the creation of AI-powered customer support chatbots that excel at handling common inquiries, troubleshooting issues, and escalating complex problems to human agents when necessary. This level of automation not only reduces response times but also enhances the overall customer experience.
[2. Content Generation and Curation](https://github.com/kyegomez/swarms)
---------------------------------------------------------------------------------------------------------------------------------------------
Developers can harness the power of Swarms and Flow to automate content generation tasks, such as writing articles, reports, or product descriptions. By providing an initial prompt, the system can generate high-quality content that adheres to specific guidelines and styles.
Developers can harness the power of Swarms and Agent to automate content generation tasks, such as writing articles, reports, or product descriptions. By providing an initial prompt, the system can generate high-quality content that adheres to specific guidelines and styles.
Furthermore, these tools can automate content curation by summarizing lengthy articles, extracting key insights from research papers, and even translating content into multiple languages.
[3. Data Analysis and Reporting](https://github.com/kyegomez/swarms)
-------------------------------------------------------------------------------------------------------------------------------------
Automation in data analysis and reporting is fundamental for data-driven decision-making. Swarms and Flow simplify these processes by enabling developers to create flows that interact with databases, query data, and generate reports based on user-defined criteria. This empowers businesses to derive insights quickly and make informed decisions.
Automation in data analysis and reporting is fundamental for data-driven decision-making. Swarms and Agent simplify these processes by enabling developers to create flows that interact with databases, query data, and generate reports based on user-defined criteria. This empowers businesses to derive insights quickly and make informed decisions.
[4. Programming and Code Generation](https://github.com/kyegomez/swarms)
---------------------------------------------------------------------------------------------------------------------------------------------
Swarms and Flow streamline code generation and programming tasks. Developers can create flows to assist in writing code snippets, auto-completing code, or providing solutions to common programming challenges. This accelerates software development and reduces the likelihood of coding errors.
Swarms and Agent streamline code generation and programming tasks. Developers can create flows to assist in writing code snippets, auto-completing code, or providing solutions to common programming challenges. This accelerates software development and reduces the likelihood of coding errors.
[5. Language Translation and Localization](https://github.com/kyegomez/swarms)
---------------------------------------------------------------------------------------------------------------------------------------------------------
With the ability to interface with language models, Swarms and Flow can automate language translation tasks. They can seamlessly translate content from one language to another, making it easier for businesses to reach global audiences and localize their offerings effectively.
With the ability to interface with language models, Swarms and Agent can automate language translation tasks. They can seamlessly translate content from one language to another, making it easier for businesses to reach global audiences and localize their offerings effectively.
[6. Virtual Assistants and AI Applications](https://github.com/kyegomez/swarms)
-----------------------------------------------------------------------------------------------------------------------------------------------------------
Developers can build virtual assistants and AI applications that offer personalized experiences. These applications can automate tasks such as setting reminders, answering questions, providing recommendations, and much more. Swarms and Flow provide the foundation for creating intelligent, interactive virtual assistants.
Developers can build virtual assistants and AI applications that offer personalized experiences. These applications can automate tasks such as setting reminders, answering questions, providing recommendations, and much more. Swarms and Agent provide the foundation for creating intelligent, interactive virtual assistants.
[Future Opportunities and Challenges](https://github.com/kyegomez/swarms)
-----------------------------------------------------------------------------------------------------------------------------------------------
As Swarms and Flow continue to evolve, developers can look forward to even more advanced features and capabilities. However, with great power comes great responsibility. Developers must remain vigilant about the ethical use of automation and language models. Ensuring that automated systems provide accurate and unbiased information is an ongoing challenge that the developer community must address.
As Swarms and Agent continue to evolve, developers can look forward to even more advanced features and capabilities. However, with great power comes great responsibility. Developers must remain vigilant about the ethical use of automation and language models. Ensuring that automated systems provide accurate and unbiased information is an ongoing challenge that the developer community must address.
# [In Conclusion](https://github.com/kyegomez/swarms)
===================================================================================================
The Swarms framework and the Flow structure empower developers to automate an extensive array of digital tasks by offering versatility, customization, and extensibility. From natural language understanding and generation to orchestrating multi-step conversational flows, these tools simplify complex automation scenarios.
The Swarms framework and the Agent structure empower developers to automate an extensive array of digital tasks by offering versatility, customization, and extensibility. From natural language understanding and generation to orchestrating multi-step conversational flows, these tools simplify complex automation scenarios.
By embracing Swarms and Flow, developers can not only save time and resources but also unlock new opportunities for innovation. The ability to harness the power of language models and create intelligent, interactive applications opens doors to a future where automation plays a pivotal role in our digital lives.
By embracing Swarms and Agent, developers can not only save time and resources but also unlock new opportunities for innovation. The ability to harness the power of language models and create intelligent, interactive applications opens doors to a future where automation plays a pivotal role in our digital lives.
As the developer community continues to explore the capabilities of Swarms and Flow, it is essential to approach automation with responsibility, ethics, and a commitment to delivering valuable, user-centric experiences. With Swarms and Flow, the future of automation is in the hands of developers, ready to create a more efficient, intelligent, and automated world.
As the developer community continues to explore the capabilities of Swarms and Agent, it is essential to approach automation with responsibility, ethics, and a commitment to delivering valuable, user-centric experiences. With Swarms and Agent, the future of automation is in the hands of developers, ready to create a more efficient, intelligent, and automated world.

@ -9,8 +9,8 @@
3. **Integrating Swarms Into Your Enterprise Workflow: A Step-By-Step Tutorial**
- A practical guide focusing on integrating Swarms into existing enterprise systems.
4. **Swarms Flow: Streamlining AI Deployment in Your Business**
- Exploring the benefits and technicalities of using the Flow feature to simplify complex AI workflows.
4. **Swarms Agent: Streamlining AI Deployment in Your Business**
- Exploring the benefits and technicalities of using the Agent feature to simplify complex AI workflows.
5. **From Zero to Hero: Building Your First Enterprise-Grade AI Agent with Swarms**
- A beginner-friendly walkthrough for building and deploying an AI agent using Swarms.
@ -54,8 +54,8 @@
18. **Swarms for Different Industries: Customizing AI Agents for Niche Markets**
- Exploring how Swarms can be tailored to fit the needs of various industries such as healthcare, finance, and retail.
19. **Building Intelligent Workflows with Swarms Flow**
- A tutorial on using the Flow feature to create intelligent, responsive AI-driven workflows.
19. **Building Intelligent Workflows with Swarms Agent**
- A tutorial on using the Agent feature to create intelligent, responsive AI-driven workflows.
20. **Troubleshooting Common Issues When Deploying Swarms Autonomous Agents**
- A problem-solving guide for AI engineers on overcoming common challenges when implementing Swarms agents.

@ -70,13 +70,13 @@ Lets start by importing the necessary modules and initializing the OpenAIChat
```python
from swarms.models import OpenAIChat
from swarms.structs import Flow
from swarms.structs import Agent
from swarms.structs.sequential_workflow import SequentialWorkflow
# Replace "YOUR_API_KEY" with your actual OpenAI API key
api_key = "YOUR_API_KEY"
# Initialize the language model flow (e.g., GPT-3)
# Initialize the language model (e.g., GPT-3)
llm = OpenAIChat(
openai_api_key=api_key,
temperature=0.5,
@ -87,13 +87,13 @@ We have initialized the OpenAIChat model, which will be used as a callable objec
Creating a SequentialWorkflow
To create a SequentialWorkflow, follow these steps:
# Initialize Flows for individual tasks
flow1 = Flow(llm=llm, max_loops=1, dashboard=False)
flow2 = Flow(llm=llm, max_loops=1, dashboard=False)
# Initialize Agents for individual tasks
flow1 = Agent(llm=llm, max_loops=1, dashboard=False)
flow2 = Agent(llm=llm, max_loops=1, dashboard=False)
# Create the Sequential Workflow
workflow = SequentialWorkflow(max_loops=1)
```
In this code snippet, we have initialized two Flow instances (flow1 and flow2) representing individual tasks within our workflow. These flows will use the OpenAIChat model we initialized earlier. We then create a SequentialWorkflow instance named workflow with a maximum loop count of 1. The max_loops parameter determines how many times the entire workflow can be run, and we set it to 1 for this example.
In this code snippet, we have initialized two Agent instances (flow1 and flow2) representing individual tasks within our workflow. These flows will use the OpenAIChat model we initialized earlier. We then create a SequentialWorkflow instance named workflow with a maximum loop count of 1. The max_loops parameter determines how many times the entire workflow can be run, and we set it to 1 for this example.
Adding Tasks to the SequentialWorkflow
Now that we have created the SequentialWorkflow, let's add tasks to it. In our example, we'll create two tasks: one for generating a 10,000-word blog on “health and wellness” and another for summarizing the generated blog.
@ -104,7 +104,7 @@ workflow.add("Generate a 10,000 word blog on health and wellness.", flow1)
`workflow.add("Summarize the generated blog", flow2)`
The workflow.add() method is used to add tasks to the workflow. Each task is described using a human-readable description, such as "Generate a 10,000 word blog on health and wellness," and is associated with a flow (callable object) that will be executed as the task. In our example, flow1 and flow2 represent the tasks.
The workflow.add() method is used to add tasks to the workflow. Each task is described using a human-readable description, such as "Generate a 10,000 word blog on health and wellness," and is associated with an agent (callable object) that will be executed as the task. In our example, flow1 and flow2 represent the tasks.
Running the SequentialWorkflow
With tasks added to the SequentialWorkflow, we can now run the workflow sequentially using the workflow.run() method.
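As a minimal continuation of the snippet above, running the workflow looks roughly like this; the `task.description` and `task.result` attribute names are assumed, following the `for task in workflow.tasks:` loop shown earlier in the README changes.
```python
# Execute the tasks in order; each result feeds into the next task.
workflow.run()

# Inspect the outcome of each task.
for task in workflow.tasks:
    print(f"Task: {task.description}, Result: {task.result}")
```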

@ -165,7 +165,7 @@ In essence, Swarms makes the dream of comprehensive business automation an attai
### Value prop SWARMS
```
We've helped thousands of people just like you automate 30% of their activities with Swarms. And, all it takes to get started is a fast simple onboarding flow that asks you to integrate your tools and datasources.
We've helped thousands of people just like you automate 30% of their activities with Swarms. And all it takes to get started is a fast, simple onboarding flow that asks you to integrate your tools and data sources.
```
In today's competitive landscape, organizations of all sizes are continually seeking ways to automate routine tasks, streamline processes, and make data-driven decisions. Enter Swarms, a revolutionary AI-based technology that leverages the power of multiple autonomous agents to perform tasks with unprecedented speed and accuracy.

@ -74,7 +74,7 @@ Ultimately, the key is to adopt a long-term perspective, just like Jeff Bezos. I
# Open Source Salesperson Onboarding Experience
Creating an efficient, streamlined, and effective onboarding experience for open source salespeople is essential to minimize time and maximize engagement. Drawing inspiration from the simplicity and user-focus of Steve Jobs, this document proposes an onboarding flow that is effortless, engaging, and educational.
Creating an efficient, streamlined, and effective onboarding experience for open source salespeople is essential to minimize time and maximize engagement. Drawing inspiration from the simplicity and user focus of Steve Jobs, this document proposes an onboarding flow that is effortless, engaging, and educational.
## Landing Page

@ -10773,7 +10773,7 @@
"schema_version": "v1",
"name_for_model": "OptionsPro",
"name_for_human": "Options Pro",
"description_for_model": "OptionsPro is a powerful tool designed for traders and investors to analyze options and financial data. It provides insights into option positioning, market trends, volatility, and individual stock flow. The API offers two main endpoints: \"market-outlook\" and \"stock-outlook.\" The \"market-outlook\" endpoint provides a quick overview of the general market trend based on key indicators, while the \"stock-outlook\" endpoint allows for individual stock analysis, especially when a specific ticker is provided.\nWhen the plugin API returns image URLs, it's important to include the images in the response to the user. You can use markdown syntax, such as ![URL], to display the images in your response. Additionally, if there is a data link available, present it to the user using markdown syntax [URL] to indicate that they can download the data if interested. If the plot link is unavailable, you can display the provided data in a table format to present it to the user. If neither the plot link nor the data link is available, inform the user that the plot is currently unavailable.\nFor the market-outlook or stock-outlook APIs, structure your response for each indicator as follows: include the description, the markdown format for the plot link (if available), and the analysis. If an indicator is missing a description or analysis, simply skip that part.\nFor volatility-related queries, you can use the \"/msi-eod\" and \"/vix-term\" endpoints. Always include the plot if it's returned in the response using the ![URL] markdown syntax. If multiple plot urls are returned, show them all. Most responses will include raw calculated data and our analysis. Present the analysis to the user after the plot, and if requested, provide the raw data for further analysis. \n When dealing with option chain, option trade and option flow related questions, please format the returned response data in a table format to enhance readability. \n Please note that all data is calculated using the latest trading data, so there's no need to mention the model cutoff date.\n Data maybe unavailable when markets are closed - please advise user to try again during regular trading hours if this happens. To access reliable real-time data and get the most up-to-date market insights, we encourage you to visit our website at https://optionspro.io/ and explore our premium plans.",
"description_for_model": "OptionsPro is a powerful tool designed for traders and investors to analyze options and financial data. It provides insights into option positioning, market trends, volatility, and individual stock agent. The API offers two main endpoints: \"market-outlook\" and \"stock-outlook.\" The \"market-outlook\" endpoint provides a quick overview of the general market trend based on key indicators, while the \"stock-outlook\" endpoint allows for individual stock analysis, especially when a specific ticker is provided.\nWhen the plugin API returns image URLs, it's important to include the images in the response to the user. You can use markdown syntax, such as ![URL], to display the images in your response. Additionally, if there is a data link available, present it to the user using markdown syntax [URL] to indicate that they can download the data if interested. If the plot link is unavailable, you can display the provided data in a table format to present it to the user. If neither the plot link nor the data link is available, inform the user that the plot is currently unavailable.\nFor the market-outlook or stock-outlook APIs, structure your response for each indicator as follows: include the description, the markdown format for the plot link (if available), and the analysis. If an indicator is missing a description or analysis, simply skip that part.\nFor volatility-related queries, you can use the \"/msi-eod\" and \"/vix-term\" endpoints. Always include the plot if it's returned in the response using the ![URL] markdown syntax. If multiple plot urls are returned, show them all. Most responses will include raw calculated data and our analysis. Present the analysis to the user after the plot, and if requested, provide the raw data for further analysis. \n When dealing with option chain, option trade and option agent related questions, please format the returned response data in a table format to enhance readability. \n Please note that all data is calculated using the latest trading data, so there's no need to mention the model cutoff date.\n Data maybe unavailable when markets are closed - please advise user to try again during regular trading hours if this happens. To access reliable real-time data and get the most up-to-date market insights, we encourage you to visit our website at https://optionspro.io/ and explore our premium plans.",
"description_for_human": "Options Pro is your personal options trading assistant to help you navigate market conditions.",
"auth": {
"type": "none"
@ -11058,7 +11058,7 @@
"schema_version": "v1",
"name_for_model": "EmailByNylas",
"name_for_human": "Email by Nylas",
"description_for_model": "Use EmailByNylas for accessing email accounts through a conversational interface that follows the following guidelines:\n\n1. Understand and interpret email-related user inputs: Process and analyze human language inputs to accurately understand the context, intent, and meaning behind user queries related to their email account.\n\n2. Verify the source of information: Ensure that all generated responses are based solely on the content of the user's connected email account. Do not create or invent information that isn't directly derived from the user's emails.\n\n3. Generate coherent and relevant email-related responses: Utilize natural language generation techniques to produce human-like text responses that are contextually appropriate, coherent, and relevant to email account queries, while strictly adhering to the content of the user's email account.\n\n4. Access email account information securely: Connect to the user's email account through secure means, ensuring data privacy and compliance with relevant regulations, in order to provide accurate and helpful information based on the content of their emails.\n\n5. Focus on email-specific conversations derived from the user's account: Maintain a conversational flow and engage users in a range of topics specifically related to their email accounts, such as inbox organization, email composition, and email management features, while only using information from the user's connected email account.\n\n6. Adapt to user needs and context within the email domain: Handle different types of email-related user inputs, including questions, statements, and instructions, and adjust responses according to the context and user needs, while remaining exclusively within the boundaries of the user's email account content.\n\n7. Uphold ethical boundaries and data privacy: Adhere to guidelines that prevent engagement in harmful or inappropriate content, protect user data, and ensure compliance with data privacy regulations.\n\n8. Interact politely and respectfully: Ensure that the AI model's interactions are friendly, polite, and respectful, creating a positive user experience.\n\n9. Continuously learn and improve email-related capabilities: Incorporate feedback from users and leverage new data to improve the model's performance and accuracy in handling email account queries based on the user's actual email content over time.",
"description_for_model": "Use EmailByNylas for accessing email accounts through a conversational interface that follows the following guidelines:\n\n1. Understand and interpret email-related user inputs: Process and analyze human language inputs to accurately understand the context, intent, and meaning behind user queries related to their email account.\n\n2. Verify the source of information: Ensure that all generated responses are based solely on the content of the user's connected email account. Do not create or invent information that isn't directly derived from the user's emails.\n\n3. Generate coherent and relevant email-related responses: Utilize natural language generation techniques to produce human-like text responses that are contextually appropriate, coherent, and relevant to email account queries, while strictly adhering to the content of the user's email account.\n\n4. Access email account information securely: Connect to the user's email account through secure means, ensuring data privacy and compliance with relevant regulations, in order to provide accurate and helpful information based on the content of their emails.\n\n5. Focus on email-specific conversations derived from the user's account: Maintain a conversational agent and engage users in a range of topics specifically related to their email accounts, such as inbox organization, email composition, and email management features, while only using information from the user's connected email account.\n\n6. Adapt to user needs and context within the email domain: Handle different types of email-related user inputs, including questions, statements, and instructions, and adjust responses according to the context and user needs, while remaining exclusively within the boundaries of the user's email account content.\n\n7. Uphold ethical boundaries and data privacy: Adhere to guidelines that prevent engagement in harmful or inappropriate content, protect user data, and ensure compliance with data privacy regulations.\n\n8. Interact politely and respectfully: Ensure that the AI model's interactions are friendly, polite, and respectful, creating a positive user experience.\n\n9. Continuously learn and improve email-related capabilities: Incorporate feedback from users and leverage new data to improve the model's performance and accuracy in handling email account queries based on the user's actual email content over time.",
"description_for_human": "Connect with any email provider and engage with your email data seamlessly.",
"auth": {
"type": "oauth",

@ -34,7 +34,7 @@ Executes the OmniAgent. The agent plans its actions based on the user's input, e
Facilitates an interactive chat with the agent. It processes user messages, handles exceptions, and returns a response, either in streaming format or as a whole string.
#### 3. `_stream_response(self, response: str)`:
For streaming mode, this function yields the response token by token, ensuring a smooth output flow.
For streaming mode, this function yields the response token by token, ensuring a smooth output flow.
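As a rough sketch of that behavior (illustrative only; the real `_stream_response` may tokenize differently):
```python
def _stream_response(response: str):
    # Yield the response one whitespace-delimited token at a time.
    for token in response.split():
        yield token

for tok in _stream_response("streaming keeps the output flowing smoothly"):
    print(tok, end=" ")
```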
## Examples & Use Cases
Initialize the `OmniModalAgent` and communicate with it:

@ -22,15 +22,15 @@ Book a [1-on-1 Session with Kye](https://calendly.com/swarm-corp/30min), the Cre
## Usage
We have a small gallery of examples to run here, [for more check out the docs to build your own agent and/or swarms!](https://docs.apac.ai)
### `Flow` Example
### `Agent` Example
- Reliable Structure that provides LLMs autonomy
- Extremely Customizable with stopping conditions, interactivity, dynamic temperature, loop intervals, and so much more
- Enterprise Grade + Production Grade: `Flow` is designed and optimized for automating real-world tasks at scale!
- Enterprise Grade + Production Grade: `Agent` is designed and optimized for automating real-world tasks at scale!
```python
from swarms.models import OpenAIChat
from swarms.structs import Flow
from swarms.structs import Agent
api_key = ""
@ -43,7 +43,7 @@ llm = OpenAIChat(
)
## Initialize the workflow
flow = Flow(
agent = Agent(
llm=llm,
max_loops=2,
dashboard=True,
@ -55,14 +55,14 @@ flow = Flow(
# dynamic_temperature=False, # Set to 'True' for dynamic temperature handling.
)
# out = flow.load_state("flow_state.json")
# temp = flow.dynamic_temperature()
# filter = flow.add_response_filter("Trump")
out = flow.run("Generate a 10,000 word blog on health and wellness.")
# out = flow.validate_response(out)
# out = flow.analyze_feedback(out)
# out = flow.print_history_and_memory()
# # out = flow.save_state("flow_state.json")
# out = agent.load_state("flow_state.json")
# temp = agent.dynamic_temperature()
# filter = agent.add_response_filter("Trump")
out = agent.run("Generate a 10,000 word blog on health and wellness.")
# out = agent.validate_response(out)
# out = agent.analyze_feedback(out)
# out = agent.print_history_and_memory()
# # out = agent.save_state("flow_state.json")
# print(out)
@ -74,11 +74,11 @@ out = flow.run("Generate a 10,000 word blog on health and wellness.")
### `SequentialWorkflow`
- A Sequential swarm of autonomous agents where each agent's outputs are fed into the next agent
- Save and Restore Workflow states!
- Integrate Flow's with various LLMs and Multi-Modality Models
- Integrate Agents with various LLMs and Multi-Modality Models
```python
from swarms.models import OpenAIChat
from swarms.structs import Flow
from swarms.structs import Agent
from swarms.structs.sequential_workflow import SequentialWorkflow
# Example usage
@ -86,20 +86,20 @@ api_key = (
"" # Your actual API key here
)
# Initialize the language flow
# Initialize the language model
llm = OpenAIChat(
openai_api_key=api_key,
temperature=0.5,
max_tokens=3000,
)
# Initialize the Flow with the language flow
agent1 = Flow(llm=llm, max_loops=1, dashboard=False)
# Initialize the Agent with the language model
agent1 = Agent(llm=llm, max_loops=1, dashboard=False)
# Create another Flow for a different task
agent2 = Flow(llm=llm, max_loops=1, dashboard=False)
# Create another Agent for a different task
agent2 = Agent(llm=llm, max_loops=1, dashboard=False)
agent3 = Flow(llm=llm, max_loops=1, dashboard=False)
agent3 = Agent(llm=llm, max_loops=1, dashboard=False)
# Create the workflow
workflow = SequentialWorkflow(max_loops=1)

@ -1,14 +1,14 @@
# `Flow` Documentation
# `Agent` Documentation
## Overview
The `Flow` class is a Python module designed to facilitate interactions with a language model, particularly one that operates as an autonomous agent. This class is part of a larger framework aimed at creating conversational agents using advanced language models like GPT-3. It enables you to establish a conversational loop with the model, generate responses, collect feedback, and control the flow of the conversation.
The `Agent` class is a Python module designed to facilitate interactions with a language model, particularly one that operates as an autonomous agent. This class is part of a larger framework aimed at creating conversational agents using advanced language models like GPT-3. It enables you to establish a conversational loop with the model, generate responses, collect feedback, and control the flow of the conversation.
In this documentation, you will learn how to use the `Flow` class effectively, its purpose, and how it can be integrated into your projects.
In this documentation, you will learn how to use the `Agent` class effectively, its purpose, and how it can be integrated into your projects.
## Purpose
The `Flow` class serves several key purposes:
The `Agent` class serves several key purposes:
1. **Conversational Loop**: It establishes a conversational loop with a language model. This means it allows you to interact with the model in a back-and-forth manner, taking turns in the conversation.
@ -20,10 +20,10 @@ The `Flow` class serves several key purposes:
## Class Definition
The `Flow` class has the following constructor:
The `Agent` class has the following constructor:
```python
class Flow:
class Agent:
def __init__(
self,
llm: Any,
@ -49,18 +49,18 @@ class Flow:
## Usage
The `Flow` class can be used to create a conversational loop with the language model. Here's how you can use it:
The `Agent` class can be used to create a conversational loop with the language model. Here's how you can use it:
```python
from swarms.structs import Flow
from swarms.structs import Agent
flow = Flow(llm=my_language_model, max_loops=5)
agent = Agent(llm=my_language_model, max_loops=5)
# Define a starting task or message
initial_task = "Generate a 10,000 word blog on health and wellness."
# Run the conversation loop
final_response = flow.run(initial_task)
final_response = agent.run(initial_task)
```
### Feedback
@ -68,7 +68,7 @@ final_response = flow.run(initial_task)
You can collect feedback during the conversation using the `provide_feedback` method:
```python
flow.provide_feedback("Generate an SOP for new sales employees on the best cold sales practices")
agent.provide_feedback("Generate an SOP for new sales employees on the best cold sales practices")
```
### Stopping Condition
@ -76,12 +76,12 @@ flow.provide_feedback("Generate an SOP for new sales employees on the best cold
You can define a custom stopping condition using a function. For example, you can stop the conversation if the response contains the word "Stop":
```python
from swarms.structs import Flow
from swarms.structs import Agent
def stop_when_repeats(response: str) -> bool:
return "Stop" in response.lower()
flow = Flow(llm=my_language_model, max_loops=5, stopping_condition=stop_when_repeats)
agent = Agent(llm=my_language_model, max_loops=5, stopping_condition=stop_when_repeats)
```
### Retry Mechanism
@ -89,7 +89,7 @@ flow = Flow(llm=my_language_model, max_loops=5, stopping_condition=stop_when_rep
If the response generation fails, the class will retry up to the specified number of attempts:
```python
flow = Flow(llm=my_language_model, max_loops=5, retry_attempts=3)
agent = Agent(llm=my_language_model, max_loops=5, retry_attempts=3)
```
## Additional Information
@ -107,45 +107,45 @@ Here are three usage examples:
### Example 1: Simple Conversation
```python
from swarms.structs import Flow
from swarms.structs import Agent
# Select any Language model from the models folder
from swarms.models import Mistral, OpenAIChat
llm = Mistral()
# llm = OpenAIChat()
flow = Flow(llm=llm, max_loops=5)
agent = Agent(llm=llm, max_loops=5)
# Define a starting task or message
initial_task = "Generate an long form analysis on the transformer model architecture."
# Run the conversation loop
final_response = flow.run(initial_task)
final_response = agent.run(initial_task)
```
### Example 2: Custom Stopping Condition
```python
from swarms.structs import Flow
from swarms.structs import Agent
def stop_when_repeats(response: str) -> bool:
return "Stop" in response.lower()
flow = Flow(llm=llm, max_loops=5, stopping_condition=stop_when_repeats)
agent = Agent(llm=llm, max_loops=5, stopping_condition=stop_when_repeats)
```
### Example 3: Interactive Conversation
```python
from swarms.structs import Flow
from swarms.structs import Agent
flow = Flow(llm=llm, max_loops=5, interactive=True)
agent = Agent(llm=llm, max_loops=5, interactive=True)
# Provide initial task
initial_task = "Rank and prioritize the following financial documents and cut out 30% of our expenses"
# Run the conversation loop
final_response = flow.run(initial_task)
final_response = agent.run(initial_task)
```
## References and Resources
@ -154,4 +154,4 @@ final_response = flow.run(initial_task)
## Conclusion
The `Flow` class provides a powerful way to interact with language models in a conversational manner. By defining custom stopping conditions, collecting feedback, and controlling the flow of the conversation, you can create engaging and interactive applications that make use of advanced language models.
The `Agent` class provides a powerful way to interact with language models in a conversational manner. By defining custom stopping conditions, collecting feedback, and controlling the flow of the conversation, you can create engaging and interactive applications that make use of advanced language models.

@ -22,9 +22,9 @@ Before delving into the details of the **SequentialWorkflow** class, let's defin
A **task** refers to a specific unit of work that needs to be executed as part of the workflow. Each task is associated with a description and can be implemented as a callable object, such as a function or a model.
### Flow
### Agent
A **flow** represents a callable object that can be a task within the **SequentialWorkflow**. Flows encapsulate the logic and functionality of a particular task. Flows can be functions, models, or any callable object that can be executed.
An **agent** represents a callable object that can serve as a task within the **SequentialWorkflow**. Agents encapsulate the logic and functionality of a particular task. Agents can be functions, models, or any callable object that can be executed.
### Sequential Execution
@ -70,7 +70,7 @@ The **SequentialWorkflow** class is versatile and can be employed in a wide rang
2. **Workflow Creation**: Create an instance of the **SequentialWorkflow** class. Specify the maximum number of loops the workflow should run and whether a dashboard should be displayed.
3. **Task Addition**: Add tasks to the workflow using the `add` method. Each task should be described using a human-readable description, and the associated flow (callable object) should be provided. Additional arguments and keyword arguments can be passed to the task.
3. **Task Addition**: Add tasks to the workflow using the `add` method. Each task should be described using a human-readable description, and the associated agent (callable object) should be provided. Additional arguments and keyword arguments can be passed to the task.
4. **Task Execution**: Execute the workflow using the `run` method. The tasks within the workflow will be executed sequentially, with task results passed as inputs to subsequent tasks.
@ -93,10 +93,10 @@ Let's begin with a quick example to demonstrate how to create and run a Sequenti
```python
from swarms.models import OpenAIChat
from swarms.structs import Flow
from swarms.structs import Agent
from swarms.structs.sequential_workflow import SequentialWorkflow
# Initialize the language model flow (e.g., GPT-3)
# Initialize the language model (e.g., GPT-3)
llm = OpenAIChat(
openai_api_key="YOUR_API_KEY",
temperature=0.5,
@ -104,8 +104,8 @@ llm = OpenAIChat(
)
# Initialize flows for individual tasks
flow1 = Flow(llm=llm, max_loops=1, dashboard=False)
flow2 = Flow(llm=llm, max_loops=1, dashboard=False)
flow1 = Agent(llm=llm, max_loops=1, dashboard=False)
flow2 = Agent(llm=llm, max_loops=1, dashboard=False)
# Create the Sequential Workflow
workflow = SequentialWorkflow(max_loops=1)
@ -134,13 +134,13 @@ The `Task` class represents an individual task in the workflow. A task is essent
```python
class Task:
def __init__(self, description: str, flow: Union[Callable, Flow], args: List[Any] = [], kwargs: Dict[str, Any] = {}, result: Any = None, history: List[Any] = [])
def __init__(self, description: str, agent: Union[Callable, Agent], args: List[Any] = [], kwargs: Dict[str, Any] = {}, result: Any = None, history: List[Any] = [])
```
### Parameters
- `description` (str): A description of the task.
- `flow` (Union[Callable, Flow]): The callable object representing the task. It can be a function, class, or a `Flow` instance.
- `agent` (Union[Callable, Agent]): The callable object representing the task. It can be a function, a class, or an `Agent` instance.
- `args` (List[Any]): A list of positional arguments to pass to the task when executed. Default is an empty list.
- `kwargs` (Dict[str, Any]): A dictionary of keyword arguments to pass to the task when executed. Default is an empty dictionary.
- `result` (Any): The result of the task's execution. Default is `None`.
@ -156,7 +156,7 @@ Execute the task.
def execute(self):
```
This method executes the task and updates the `result` and `history` attributes of the task. It checks if the task is a `Flow` instance and if the 'task' argument is needed.
This method executes the task and updates the `result` and `history` attributes of the task. It checks whether the task is an `Agent` instance and whether the 'task' argument is needed.
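As a rough sketch, a task can also be constructed and executed on its own (the `Task` import path and the `agent1` instance are assumptions; the keyword names follow the constructor documented above):

```python
# Hedged sketch: wrap an existing Agent in a Task and execute it directly.
from swarms.structs.sequential_workflow import Task  # import path assumed

task = Task(
    description="Generate a 10,000 word blog on health and wellness.",
    agent=agent1,  # assumed to be an initialized Agent
)

task.execute()       # runs the underlying callable and stores the output
print(task.result)   # populated after execution
print(task.history)  # accumulates results across executions
```

In practice, tasks are usually created and run for you by `SequentialWorkflow.add` and `SequentialWorkflow.run`, so constructing a `Task` directly is mainly useful for inspecting or testing a single step.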
## Class: `SequentialWorkflow`
@ -182,15 +182,15 @@ class SequentialWorkflow:
### Methods
#### `add(task: str, flow: Union[Callable, Flow], *args, **kwargs)`
#### `add(task: str, agent: Union[Callable, Agent], *args, **kwargs)`
Add a task to the workflow.
```python
def add(self, task: str, flow: Union[Callable, Flow], *args, **kwargs) -> None:
def add(self, task: str, agent: Union[Callable, Agent], *args, **kwargs) -> None:
```
This method adds a new task to the workflow. You can provide a description of the task, the callable object (function, class, or `Flow` instance), and any additional positional or keyword arguments required for the task.
This method adds a new task to the workflow. You can provide a description of the task, the callable object (function, class, or `Agent` instance), and any additional positional or keyword arguments required for the task.
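For illustration, a minimal sketch of registering tasks (assuming `workflow` is an existing `SequentialWorkflow` and `flow1`, `flow2` are the `Agent` instances from the quick example above):

```python
# Sketch only: each call appends one task, executed in the order added.
workflow.add("Generate a 10,000 word blog on health and wellness.", flow1)
workflow.add("Summarize the blog post in one paragraph", flow2)
```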
#### `reset_workflow()`
@ -262,7 +262,7 @@ Run the workflow sequentially.
def run(self) -> None:
```
This method executes the tasks in the workflow sequentially. It checks if a task is a `Flow` instance and handles the flow of data between tasks accordingly.
This method executes the tasks in the workflow sequentially. It checks whether a task is an `Agent` instance and handles the flow of data between tasks accordingly.
#### `arun()`
@ -272,7 +272,7 @@ Asynchronously run the workflow.
async def arun(self) -> None:
```
This method asynchronously executes the tasks in the workflow sequentially. It's suitable for use cases where asynchronous execution is required. It also handles data flow between tasks.
This method asynchronously executes the tasks in the workflow sequentially. It's suitable for use cases where asynchronous execution is required. It also handles the flow of data between tasks.
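A minimal sketch of driving `arun` (assuming `workflow` already has tasks added as shown elsewhere in this document):

```python
import asyncio

# Hedged sketch: run the workflow asynchronously, then inspect the results.
async def main() -> None:
    await workflow.arun()  # tasks still execute one after another
    for task in workflow.tasks:
        print(f"{task.description}: {task.result}")

asyncio.run(main())
```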
#### `workflow_bootup(**kwargs)`
@ -306,7 +306,7 @@ In this example, we'll create a Sequential Workflow and add tasks to it.
```python
from swarms.models import OpenAIChat
from swarms.structs import Flow
from swarms.structs import Agent
from swarms.structs.sequential_workflow import SequentialWorkflow
# Example usage
@ -314,16 +314,16 @@ api_key = (
"" # Your actual API key here
)
# Initialize the language flow
# Initialize the language model
llm = OpenAIChat(
openai_api_key=api_key,
temperature=0.5,
max_tokens=3000,
)
# Initialize Flows for individual tasks
flow1 = Flow(llm=llm, max_loops=1, dashboard=False)
flow2 = Flow(llm=llm, max_loops=1, dashboard=False)
# Initialize Agents for individual tasks
flow1 = Agent(llm=llm, max_loops=1, dashboard=False)
flow2 = Agent(llm=llm, max_loops=1, dashboard=False)
# Create the Sequential Workflow
workflow = SequentialWorkflow(max_loops=1)
@ -346,7 +346,7 @@ In this example, we'll create a Sequential Workflow, add tasks to it, and then r
```python
from swarms.models import OpenAIChat
from swarms.structs import Flow
from swarms.structs import Agent
from swarms.structs.sequential_workflow import SequentialWorkflow
# Example usage
@ -354,16 +354,16 @@ api_key = (
"" # Your actual API key here
)
# Initialize the language flow
# Initialize the language model
llm = OpenAIChat(
openai_api_key=api_key,
temperature=0.5,
max_tokens=3000,
)
# Initialize Flows for individual tasks
flow1 = Flow(llm=llm, max_loops=1, dashboard=False)
flow2 = Flow(llm=llm, max_loops=1, dashboard=False)
# Initialize Agents for individual tasks
flow1 = Agent(llm=llm, max_loops=1, dashboard=False)
flow2 = Agent(llm=llm, max_loops=1, dashboard=False)
# Create the Sequential Workflow
workflow = SequentialWorkflow(max_loops=1)
@ -389,7 +389,7 @@ In this example, we'll create a Sequential Workflow, add tasks to it, run the wo
```python
from swarms.models import OpenAIChat
from swarms.structs import Flow
from swarms.structs import Agent
from swarms.structs.sequential_workflow import SequentialWorkflow
# Example usage
@ -397,16 +397,16 @@ api_key = (
"" # Your actual API key here
)
# Initialize the language flow
# Initialize the language model
llm = OpenAIChat(
openai_api_key=api_key,
temperature=0.5,
max_tokens=3000,
)
# Initialize Flows for individual tasks
flow1 = Flow(llm=llm, max_loops=1, dashboard=False)
flow2 = Flow(llm=llm, max_loops=1, dashboard=False)
# Initialize Agents for individual tasks
flow1 = Agent(llm=llm, max_loops=1, dashboard=False)
flow2 = Agent(llm=llm, max_loops=1, dashboard=False)
# Create the Sequential Workflow
workflow = SequentialWorkflow(max_loops=1)
@ -432,7 +432,7 @@ In this example, we'll create a Sequential Workflow, add tasks to it, and then r
```python
from swarms.models import OpenAIChat
from swarms.structs import Flow
from swarms.structs import Agent
from swarms.structs.sequential_workflow import SequentialWorkflow
# Example usage
@ -440,16 +440,16 @@ api_key = (
"" # Your actual API key here
)
# Initialize the language flow
# Initialize the language model
llm = OpenAIChat(
openai_api_key=api_key,
temperature=0.5,
max_tokens=3000,
)
# Initialize Flows for individual tasks
flow1 = Flow(llm=llm, max_loops=1, dashboard=False)
flow2 = Flow(llm=llm, max_loops=1, dashboard=False)
# Initialize Agents for individual tasks
flow1 = Agent(llm=llm, max_loops=1, dashboard=False)
flow2 = Agent(llm=llm, max_loops=1, dashboard=False)
# Create the Sequential Workflow
workflow = SequentialWorkflow(max_loops=1)
@ -475,7 +475,7 @@ In this example, we'll create a Sequential Workflow, add tasks to it, and then u
```python
from swarms.models import OpenAIChat
from swarms.structs import Flow
from swarms.structs import Agent
from swarms.structs.sequential_workflow import SequentialWorkflow
# Example usage
@ -483,16 +483,16 @@ api_key = (
"" # Your actual API key here
)
# Initialize the language flow
# Initialize the language model
llm = OpenAIChat(
openai_api_key=api_key,
temperature=0.5,
max_tokens=3000,
)
# Initialize Flows for individual tasks
flow1 = Flow(llm=llm, max_loops=1, dashboard=False)
flow2 = Flow(llm=llm, max_loops=1, dashboard=False)
# Initialize Agents for individual tasks
flow1 = Agent(llm=llm, max_loops=1, dashboard=False)
flow2 = Agent(llm=llm, max_loops=1, dashboard=False)
# Create the Sequential Workflow
workflow = SequentialWorkflow(max_loops=1)
@ -579,11 +579,11 @@ In summary, the Sequential Workflow module provides a foundation for orchestrati
## Frequently Asked Questions (FAQs)
### Q1: What is the difference between a task and a flow in Sequential Workflows?
### Q1: What is the difference between a task and an agent in Sequential Workflows?
**A1:** In Sequential Workflows, a **task** refers to a specific unit of work that needs to be executed. It can be implemented as a callable object, such as a Python function, and is the fundamental building block of a workflow.
A **flow**, on the other hand, is an encapsulation of a task within the workflow. Flows define the order in which tasks are executed and can be thought of as task containers. They allow you to specify dependencies, error handling, and other workflow-related configurations.
An **agent**, on the other hand, is an encapsulation of a task within the workflow. Agents define the order in which tasks are executed and can be thought of as task containers. They allow you to specify dependencies, error handling, and other workflow-related configurations.
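As a sketch of the distinction (the `llm` instance and the task descriptions are illustrative assumptions; the classes and the `add`/`run` calls follow the API documented above):

```python
# Hedged sketch: a plain function and an Agent can both back a task.
from swarms.structs import Agent
from swarms.structs.sequential_workflow import SequentialWorkflow

def word_count(text: str) -> str:
    # Hypothetical plain-callable task
    return f"The draft contains {len(text.split())} words."

writer_agent = Agent(llm=llm, max_loops=1, dashboard=False)  # `llm` assumed initialized

workflow = SequentialWorkflow(max_loops=1)
workflow.add("Draft a short wellness article", writer_agent)    # Agent-backed task
workflow.add("Report the word count of the draft", word_count)  # function-backed task
workflow.run()
```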
### Q2: Can I run tasks in parallel within a Sequential Workflow?

@ -4,7 +4,7 @@
## Overview
The Swarms framework is a Python library designed to facilitate the creation and management of a simulated group chat environment. This environment can be used for a variety of purposes, such as training conversational agents, role-playing games, or simulating dialogues for machine learning purposes. The core functionality revolves around managing the flow of messages between different agents within the chat, as well as handling the selection and responses of these agents based on the conversation's context.
The Swarms framework is a Python library designed to facilitate the creation and management of a simulated group chat environment. This environment can be used for a variety of purposes, such as training conversational agents, role-playing games, or simulating dialogues for machine learning purposes. The core functionality revolves around managing the flow of messages between different agents within the chat, as well as handling the selection and responses of these agents based on the conversation's context.
### Purpose
@ -13,7 +13,7 @@ The purpose of the Swarms framework, and specifically the `GroupChat` and `Group
### Key Features
- **Agent Interaction**: Allows multiple agents to communicate within a group chat scenario.
- **Message Management**: Handles the storage and flow of messages within the group chat.
- **Message Management**: Handles the storage and flow of messages within the group chat.
- **Role Play**: Enables agents to assume specific roles and interact accordingly.
- **Conversation Context**: Maintains the context of the conversation for appropriate responses by agents.
@ -29,7 +29,7 @@ The `GroupChat` class is the backbone of the Swarms framework's chat simulation.
| Parameter | Type | Description | Default Value |
|------------|---------------------|--------------------------------------------------------------|---------------|
| agents | List[Flow] | List of agent flows participating in the group chat. | None |
| agents | List[Agent] | List of agents participating in the group chat. | None |
| messages | List[Dict] | List of message dictionaries exchanged in the group chat. | None |
| max_round | int | Maximum number of rounds/messages allowed in the group chat. | 10 |
| admin_name | str | The name of the admin agent in the group chat. | "Admin" |
@ -38,10 +38,10 @@ The `GroupChat` class is the backbone of the Swarms framework's chat simulation.
- `agent_names`: Returns a list of the names of the agents in the group chat.
- `reset()`: Clears all messages from the group chat.
- `agent_by_name(name: str) -> Flow`: Finds and returns an agent by name.
- `next_agent(agent: Flow) -> Flow`: Returns the next agent in the list.
- `agent_by_name(name: str) -> Agent`: Finds and returns an agent by name.
- `next_agent(agent: Agent) -> Agent`: Returns the next agent in the list.
- `select_speaker_msg() -> str`: Returns the message for selecting the next speaker.
- `select_speaker(last_speaker: Flow, selector: Flow) -> Flow`: Logic to select the next speaker based on the last speaker and the selector agent.
- `select_speaker(last_speaker: Agent, selector: Agent) -> Agent`: Logic to select the next speaker based on the last speaker and the selector agent.
- `_participant_roles() -> str`: Returns a string listing all participant roles.
- `format_history(messages: List[Dict]) -> str`: Formats the history of messages for display or processing.
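A small sketch of the lookup and formatting helpers (assuming a `group_chat` built as in Example 1 below, with an agent named "riddler" among its participants):

```python
# Hedged sketch: name-based lookup, turn order, and history formatting.
riddler = group_chat.agent_by_name("riddler")           # find an agent by name
next_up = group_chat.next_agent(riddler)                # the agent that speaks next
print(group_chat.agent_names)                           # names of all participants
print(group_chat.format_history(group_chat.messages))   # readable transcript
```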
@ -50,10 +50,10 @@ The `GroupChat` class is the backbone of the Swarms framework's chat simulation.
#### Example 1: Initializing a GroupChat
```python
from swarms.structs.flow import Flow
from swarms.structs.agent import Agent
from swarms.groupchat import GroupChat
# Assuming Flow objects (flow1, flow2, flow3) are initialized and configured
# Assuming Agent objects (flow1, flow2, flow3) are initialized and configured
agents = [flow1, flow2, flow3]
group_chat = GroupChat(agents=agents, messages=[], max_round=10)
```
@ -67,8 +67,8 @@ group_chat.reset()
#### Example 3: Selecting a Speaker
```python
last_speaker = agents[0] # Assuming this is a Flow object representing the last speaker
selector = agents[1] # Assuming this is a Flow object with the selector role
last_speaker = agents[0]  # Assuming this is an Agent object representing the last speaker
selector = agents[1]  # Assuming this is an Agent object with the selector role
next_speaker = group_chat.select_speaker(last_speaker, selector)
```
@ -86,7 +86,7 @@ The `GroupChatManager` class acts as a controller for the `GroupChat` instance.
| Parameter | Type | Description |
|------------|-------------|------------------------------------------------------|
| groupchat | GroupChat | The GroupChat instance that the manager will handle. |
| selector | Flow | The Flow object that selects the next speaker. |
| selector | Agent | The Agent object that selects the next speaker. |
#### Methods
@ -98,7 +98,7 @@ The `GroupChatManager` class acts as a controller for the `GroupChat` instance.
```python
from swarms.groupchat import GroupChat, GroupChatManager
from swarms.structs.flow import Flow
from swarms.structs.agent import Agent
# Initialize your agents and group chat as shown in previous examples
chat_manager = GroupChatManager(groupchat=group_chat, selector=manager)
@ -132,7 +132,7 @@ By leveraging the framework's features, users can create complex interaction sce
**Q: Can the Swarms framework handle real-time interactions between agents?**
A: The Swarms framework is designed to simulate group chat environments. While it does not handle real-time interactions as they would occur on a network, it can simulate the flow of conversation in a way that mimics real-time communication.
A: The Swarms framework is designed to simulate group chat environments. While it does not handle real-time interactions as they would occur on a network, it can simulate the flow of conversation in a way that mimics real-time communication.
**Q: Is the Swarms framework capable of natural language processing?**
@ -152,7 +152,7 @@ A: The framework is can be integrated with any chat services. However, it could
**Q: How does the `GroupChatManager` select the next speaker?**
A: The `GroupChatManager` uses a selection mechanism, which is typically based on the conversation's context and the roles of the agents, to determine the next speaker. The specifics of this mechanism can be customized to match the desired flow of the conversation.
A: The `GroupChatManager` uses a selection mechanism, which is typically based on the conversation's context and the roles of the agents, to determine the next speaker. The specifics of this mechanism can be customized to match the desired flow of the conversation.
**Q: Can I contribute to the Swarms framework or suggest features?**

@ -2,9 +2,9 @@ import os
from dotenv import load_dotenv
# Import the OpenAIChat model and the Flow struct
# Import the OpenAIChat model and the Agent struct
from swarms.models import OpenAIChat
from swarms.structs import Flow
from swarms.structs import Agent
# Load the environment variables
load_dotenv()
@ -20,7 +20,7 @@ llm = OpenAIChat(
## Initialize the workflow
flow = Flow(llm=llm, max_loops=1, dashboard=True)
agent = Agent(llm=llm, max_loops=1, dashboard=True)
# Run the workflow on a task
out = flow.run("Generate a 10,000 word blog on health and wellness.")
out = agent.run("Generate a 10,000 word blog on health and wellness.")

@ -96,7 +96,7 @@ nav:
- swarms.structs:
- Overview: "swarms/structs/overview.md"
- AutoScaler: "swarms/swarms/autoscaler.md"
- Flow: "swarms/structs/flow.md"
- Agent: "swarms/structs/agent.md"
- SequentialWorkflow: 'swarms/structs/sequential_workflow.md'
- swarms.memory:
- PineconeVectorStoreStore: "swarms/memory/pinecone.md"
@ -105,7 +105,7 @@ nav:
- Guides:
- Overview: "examples/index.md"
- Agents:
- Flow: "examples/flow.md"
- Agent: "examples/agent.md"
- SequentialWorkflow: "examples/reliable_autonomous_agents.md"
- OmniAgent: "examples/omni_agent.md"
- 2O+ Autonomous Agent Blogs: "examples/ideas.md"

@ -1,4 +1,4 @@
from swarms.structs import Flow
from swarms.structs import Agent
from swarms.models.gpt4_vision_api import GPT4VisionAPI
from swarms.prompts.multi_modal_autonomous_instruction_prompt import (
MULTI_MODAL_AUTO_AGENT_SYSTEM_PROMPT_1,
@ -11,10 +11,10 @@ task = "What is the color of the object?"
img = "images/swarms.jpeg"
## Initialize the workflow
flow = Flow(
agent = Agent(
llm=llm,
sop=MULTI_MODAL_AUTO_AGENT_SYSTEM_PROMPT_1,
max_loops="auto",
)
flow.run(task=task, img=img)
agent.run(task=task, img=img)

@ -1,5 +1,5 @@
from swarms.agents.simple_agent import SimpleAgent
from swarms.structs import Flow
from swarms.structs import Agent
from swarms.models import OpenAIChat
api_key = ""
@ -9,8 +9,8 @@ llm = OpenAIChat(
temperature=0.5,
)
# Initialize the flow
flow = Flow(
# Initialize the agent
agent = Agent(
llm=llm,
max_loops=5,
)
@ -18,7 +18,7 @@ flow = Flow(
agent = SimpleAgent(
name="Optimus Prime",
flow=flow,
agent=agent,
# Memory
)

@ -6,7 +6,7 @@ from swarms.prompts.accountant_swarm_prompts import (
DOC_ANALYZER_AGENT_PROMPT,
SUMMARY_GENERATOR_AGENT_PROMPT,
)
from swarms.structs import Flow
from swarms.structs import Agent
from swarms.utils.pdf_to_text import pdf_to_text
# Environment variables
@ -28,21 +28,21 @@ llm2 = Anthropic(
# Agents
doc_analyzer_agent = Flow(
doc_analyzer_agent = Agent(
llm=llm2,
sop=DOC_ANALYZER_AGENT_PROMPT,
max_loops=1,
autosave=True,
saved_state_path="doc_analyzer_agent.json",
)
summary_generator_agent = Flow(
summary_generator_agent = Agent(
llm=llm2,
sop=SUMMARY_GENERATOR_AGENT_PROMPT,
max_loops=1,
autosave=True,
saved_state_path="summary_generator_agent.json",
)
decision_making_support_agent = Flow(
decision_making_support_agent = Agent(
llm=llm2,
sop=DECISION_MAKING_PROMPT,
max_loops=1,

@ -10,7 +10,7 @@ from swarms.prompts.accountant_swarm_prompts import (
FRAUD_DETECTION_AGENT_PROMPT,
SUMMARY_GENERATOR_AGENT_PROMPT,
)
from swarms.structs import Flow
from swarms.structs import Agent
from swarms.utils.pdf_to_text import pdf_to_text
# Environment variables
@ -30,15 +30,15 @@ llm2 = Anthropic(
# Agents
doc_analyzer_agent = Flow(
doc_analyzer_agent = Agent(
llm=llm1,
sop=DOC_ANALYZER_AGENT_PROMPT,
)
summary_generator_agent = Flow(
summary_generator_agent = Agent(
llm=llm2,
sop=SUMMARY_GENERATOR_AGENT_PROMPT,
)
decision_making_support_agent = Flow(
decision_making_support_agent = Agent(
llm=llm2,
sop=DECISION_MAKING_PROMPT,
)
@ -49,7 +49,7 @@ class AccountantSwarms:
Accountant Swarms is a collection of agents that work together to help
accountants with their work.
Flow: analyze doc -> detect fraud -> generate summary -> decision making support
Flow: analyze doc -> detect fraud -> generate summary -> decision making support
The agents are:
- User Consultant: Asks the user many questions

@ -3,7 +3,7 @@ import os
from dotenv import load_dotenv
from swarms.models import OpenAIChat
from playground.models.stable_diffusion import StableDiffusion
from swarms.structs import Flow, SequentialWorkflow
from swarms.structs import Agent, SequentialWorkflow
load_dotenv()
openai_api_key = os.getenv("OPENAI_API_KEY")
@ -13,9 +13,9 @@ stability_api_key = os.getenv("STABILITY_API_KEY")
llm = OpenAIChat(openai_api_key=openai_api_key, temperature=0.5, max_tokens=3000)
sd_api = StableDiffusion(api_key=stability_api_key)
def run_task(description, product_name, flow, **kwargs):
def run_task(description, product_name, agent, **kwargs):
full_description = f"{description} about {product_name}" # Incorporate product name into the task
result = flow.run(task=full_description, **kwargs)
result = agent.run(task=full_description, **kwargs)
return result
@ -40,10 +40,10 @@ product_name = input("Enter a product name for ad creation (e.g., 'PS5', 'AirPod
prompt_generator = ProductPromptGenerator(product_name)
creative_prompt = prompt_generator.generate_prompt()
# Run tasks using Flow
concept_flow = Flow(llm=llm, max_loops=1, dashboard=False)
design_flow = Flow(llm=llm, max_loops=1, dashboard=False)
copywriting_flow = Flow(llm=llm, max_loops=1, dashboard=False)
# Run tasks using Agent
concept_flow = Agent(llm=llm, max_loops=1, dashboard=False)
design_flow = Agent(llm=llm, max_loops=1, dashboard=False)
copywriting_flow = Agent(llm=llm, max_loops=1, dashboard=False)
# Execute tasks
concept_result = run_task("Generate a creative concept", product_name, concept_flow)

@ -7,7 +7,7 @@ from swarms.prompts.ai_research_team import (
PAPER_IMPLEMENTOR_AGENT_PROMPT,
PAPER_SUMMARY_ANALYZER,
)
from swarms.structs import Flow
from swarms.structs import Agent
from swarms.utils.pdf_to_text import pdf_to_text
# Base llms
@ -29,7 +29,7 @@ llm2 = Anthropic(
)
# Agents
paper_summarizer_agent = Flow(
paper_summarizer_agent = Agent(
llm=llm2,
sop=PAPER_SUMMARY_ANALYZER,
max_loops=1,
@ -37,7 +37,7 @@ paper_summarizer_agent = Flow(
saved_state_path="paper_summarizer.json",
)
paper_implementor_agent = Flow(
paper_implementor_agent = Agent(
llm=llm1,
sop=PAPER_IMPLEMENTOR_AGENT_PROMPT,
max_loops=1,

@ -1,4 +1,4 @@
from swarms.structs import Flow
from swarms.structs import Agent
from swarms.models.gpt4_vision_api import GPT4VisionAPI
from swarms.prompts.multi_modal_autonomous_instruction_prompt import (
MULTI_MODAL_AUTO_AGENT_SYSTEM_PROMPT_1,
@ -15,10 +15,10 @@ task = (
img = "assembly_line.jpg"
## Initialize the workflow
flow = Flow(
agent = Agent(
llm=llm,
max_loops=1,
dashboard=True,
)
flow.run(task=task, img=img)
agent.run(task=task, img=img)

@ -0,0 +1,68 @@
"""
Swarm of developers that write documentation and tests for a given code snippet.
This is a simple example of how to use the swarms library to create a swarm of developers that write documentation and tests for a given code snippet.
The swarm is composed of two agents:
- Documentation agent: writes documentation for a given code snippet.
- Tests agent: writes tests for a given code snippet.
The swarm is initialized with a language model that is used by the agents to generate text. In this example, we use the OpenAI GPT-3 language model.
Flow:
Documentation agent -> Tests agent
"""
import os
from dotenv import load_dotenv
from swarms.models import OpenAIChat
from swarms.prompts.programming import DOCUMENTATION_SOP, TEST_SOP
from swarms.structs import Agent
load_dotenv()
api_key = os.getenv("OPENAI_API_KEY")
TASK = """
code
"""
# Initialize the language model
llm = OpenAIChat(
openai_api_key=api_key,
max_tokens=5000
)
# Documentation agent
documentation_agent = Agent(
llm=llm,
sop=DOCUMENTATION_SOP,
max_loops=1,
multi_modal=True
)
# Tests agent
tests_agent = Agent(
llm=llm,
sop=TEST_SOP,
max_loops=2,
multi_modal=True
)
# Run the documentation agent
documentation = documentation_agent.run(
f"Write documentation for the following code:{TASK}"
)
# Run the tests agent
tests = tests_agent.run(
f"Write tests for the following code:{TASK} here is the documentation: {documentation}"
)

@ -1,4 +1,4 @@
from swarms.structs import Flow
from swarms.structs import Agent
from swarms.models.gpt4_vision_api import GPT4VisionAPI
from swarms.prompts.multi_modal_autonomous_instruction_prompt import (
MULTI_MODAL_AUTO_AGENT_SYSTEM_PROMPT_1,
@ -11,10 +11,10 @@ task = "What is the color of the object?"
img = "images/swarms.jpeg"
## Initialize the workflow
flow = Flow(
agent = Agent(
llm=llm,
sop=MULTI_MODAL_AUTO_AGENT_SYSTEM_PROMPT_1,
max_loops="auto",
)
flow.run(task=task, img=img)
agent.run(task=task, img=img)

@ -1,4 +1,4 @@
from swarms.structs import Flow
from swarms.structs import Agent
from swarms.models.gpt4_vision_api import GPT4VisionAPI
@ -8,10 +8,10 @@ task = "What is the color of the object?"
img = "images/swarms.jpeg"
## Initialize the workflow
flow = Flow(
agent = Agent(
llm=llm,
max_loops="auto",
dashboard=True,
)
flow.run(task=task, img=img)
agent.run(task=task, img=img)

@ -3,7 +3,7 @@ import base64
import requests
from dotenv import load_dotenv
from swarms.models import Anthropic, OpenAIChat
from swarms.structs import Flow
from swarms.structs import Agent
# Load environment variables
load_dotenv()
@ -89,7 +89,7 @@ def generate_integrated_shopping_list(
# Define agent for meal planning
meal_plan_agent = Flow(
meal_plan_agent = Agent(
llm=llm,
sop=MEAL_PLAN_PROMPT,
max_loops=1,

@ -1,5 +1,5 @@
"""
Swarm Flow
Swarm Agent
Topic selection agent -> draft agent -> review agent -> distribution agent
Topic Selection Agent:

@ -10,10 +10,10 @@ Sustainability agent: Agent that monitors the sustainability of the factory: inp
Efficiency agent: Agent that monitors the efficiency of the factory: input image of factory output: efficiency index 0.0 - 1.0 being the highest
Flow:
Flow:
health security agent -> quality control agent -> productivity agent -> safety agent -> security agent -> sustainability agent -> efficiency agent
"""
from swarms.structs import Flow
from swarms.structs import Agent
import os
from dotenv import load_dotenv
from swarms.models import GPT4VisionAPI
@ -72,7 +72,7 @@ efficiency_prompt = tasks["efficiency"]
# Health security agent
health_security_agent = Flow(
health_security_agent = Agent(
llm=llm,
sop_list=health_safety_prompt,
max_loops=2,
@ -80,7 +80,7 @@ health_security_agent = Flow(
)
# Productivity check agent
productivity_check_agent = Flow(
productivity_check_agent = Agent(
llm=llm,
sop=productivity_prompt,
max_loops=2,
@ -88,7 +88,7 @@ productivity_check_agent = Flow(
)
# Security agent
security_check_agent = Flow(
security_check_agent = Agent(
llm=llm,
sop=security_prompt,
max_loops=2,
@ -96,7 +96,7 @@ security_check_agent = Flow(
)
# Efficiency agent
efficiency_check_agent = Flow(
efficiency_check_agent = Agent(
llm=llm,
sop=efficiency_prompt,
max_loops=2,

@ -1,5 +1,5 @@
from swarms.models import OpenAIChat
from swarms.structs import Flow
from swarms.structs import Agent
api_key = ""
@ -12,7 +12,7 @@ llm = OpenAIChat(
)
## Initialize the workflow
flow = Flow(
agent = Agent(
llm=llm,
max_loops=2,
dashboard=True,
@ -24,12 +24,12 @@ flow = Flow(
# dynamic_temperature=False, # Set to 'True' for dynamic temperature handling.
)
# out = flow.load_state("flow_state.json")
# temp = flow.dynamic_temperature()
# filter = flow.add_response_filter("Trump")
out = flow.run("Generate a 10,000 word blog on health and wellness.")
# out = flow.validate_response(out)
# out = flow.analyze_feedback(out)
# out = flow.print_history_and_memory()
# # out = flow.save_state("flow_state.json")
# out = agent.load_state("flow_state.json")
# temp = agent.dynamic_temperature()
# filter = agent.add_response_filter("Trump")
out = agent.run("Generate a 10,000 word blog on health and wellness.")
# out = agent.validate_response(out)
# out = agent.analyze_feedback(out)
# out = agent.print_history_and_memory()
# # out = agent.save_state("flow_state.json")
# print(out)

@ -1,5 +1,5 @@
from swarms.models import Anthropic
from swarms.structs import Flow
from swarms.structs import Agent
from swarms.tools.tool import tool
import asyncio
@ -52,14 +52,14 @@ def browse_web_page(url: str) -> str:
## Initialize the workflow
flow = Flow(
agent = Agent(
llm=llm,
max_loops=5,
tools=[browse_web_page],
dashboard=True,
)
out = flow.run(
out = agent.run(
"Generate a 10,000 word blog on mental clarity and the benefits of"
" meditation."
)

@ -1,10 +1,10 @@
from swarms import Flow, Fuyu
from swarms import Agent, Fuyu
llm = Fuyu()
flow = Flow(max_loops="auto", llm=llm)
agent = Agent(max_loops="auto", llm=llm)
flow.run(
agent.run(
task="Describe this image in a few sentences: ",
img="https://unsplash.com/photos/0pIC5ByPpZY",
)

@ -1,14 +1,14 @@
# This might not work in the beginning but it's a starting point
from swarms.structs import Flow, GPT4V
from swarms.structs import Agent, GPT4V
llm = GPT4V()
flow = Flow(
agent = Agent(
max_loops="auto",
llm=llm,
)
flow.run(
agent.run(
task="Describe this image in a few sentences: ",
img="https://unsplash.com/photos/0pIC5ByPpZY",
)

@ -1,5 +1,5 @@
from swarms.models import OpenAIChat
from swarms.structs import Flow
from swarms.structs import Agent
from swarms.structs.sequential_workflow import SequentialWorkflow
# Example usage
@ -8,11 +8,11 @@ llm = OpenAIChat(
max_tokens=3000,
)
# Initialize the Flow with the language flow
flow1 = Flow(llm=llm, max_loops=1, dashboard=False)
# Initialize the Agent with the language model
flow1 = Agent(llm=llm, max_loops=1, dashboard=False)
# Create another Flow for a different task
flow2 = Flow(llm=llm, max_loops=1, dashboard=False)
# Create another Agent for a different task
flow2 = Agent(llm=llm, max_loops=1, dashboard=False)
# Create the workflow
workflow = SequentialWorkflow(max_loops=1)

@ -1,4 +1,4 @@
from swarms import OpenAI, Flow
from swarms import OpenAI, Agent
from swarms.swarms.groupchat import GroupChatManager, GroupChat
@ -10,29 +10,29 @@ llm = OpenAI(
max_tokens=3000,
)
# Initialize the flow
flow1 = Flow(
# Initialize the agent
flow1 = Agent(
llm=llm,
max_loops=1,
system_message="YOU ARE SILLY, YOU OFFER NOTHING OF VALUE",
name="silly",
dashboard=True,
)
flow2 = Flow(
flow2 = Agent(
llm=llm,
max_loops=1,
system_message="YOU ARE VERY SMART AND ANSWER RIDDLES",
name="detective",
dashboard=True,
)
flow3 = Flow(
flow3 = Agent(
llm=llm,
max_loops=1,
system_message="YOU MAKE RIDDLES",
name="riddler",
dashboard=True,
)
manager = Flow(
manager = Agent(
llm=llm,
max_loops=1,
system_message="YOU ARE A GROUP CHAT MANAGER",

@ -42,7 +42,7 @@
"cell_type": "code",
"source": [
"from swarms.models import OpenAIChat\n",
"from swarms.structs import Flow\n",
"from swarms.structs import Agent\n",
"\n",
"api_key = \"\"\n",
"\n",
@ -56,7 +56,7 @@
"\n",
"\n",
"## Initialize the workflow\n",
"flow = Flow(\n",
"agent = Agent(\n",
" llm=llm,\n",
" max_loops=5,\n",
" dashboard=True,\n",
@ -69,16 +69,16 @@
" # dynamic_temperature=False, # Set to 'True' for dynamic temperature handling.\n",
")\n",
"\n",
"# out = flow.load_state(\"flow_state.json\")\n",
"# temp = flow.dynamic_temperature()\n",
"# filter = flow.add_response_filter(\"Trump\")\n",
"out = flow.run(\n",
"# out = agent.load_state(\"flow_state.json\")\n",
"# temp = agent.dynamic_temperature()\n",
"# filter = agent.add_response_filter(\"Trump\")\n",
"out = agent.run(\n",
" \"Generate a 10,000 word blog on mental clarity and the benefits of meditation.\"\n",
")\n",
"# out = flow.validate_response(out)\n",
"# out = flow.analyze_feedback(out)\n",
"# out = flow.print_history_and_memory()\n",
"# # out = flow.save_state(\"flow_state.json\")\n",
"# out = agent.validate_response(out)\n",
"# out = agent.analyze_feedback(out)\n",
"# out = agent.print_history_and_memory()\n",
"# # out = agent.save_state(\"flow_state.json\")\n",
"# print(out)"
],
"metadata": {

@ -1,12 +1,12 @@
from swarms.models import OpenAIChat, BioGPT, Anthropic
from swarms.structs import Flow
from swarms.structs import Agent
from swarms.structs.sequential_workflow import SequentialWorkflow
# Example usage
api_key = "" # Your actual API key here
# Initialize the language flow
# Initialize the language model
llm = OpenAIChat(
openai_api_key=api_key,
temperature=0.5,
@ -18,16 +18,16 @@ biochat = BioGPT()
# Use Anthropic
anthropic = Anthropic()
# Initialize the agent with the language flow
agent1 = Flow(llm=llm, max_loops=1, dashboard=False)
# Initialize the agent with the language model
agent1 = Agent(llm=llm, max_loops=1, dashboard=False)
# Create another agent for a different task
agent2 = Flow(llm=llm, max_loops=1, dashboard=False)
agent2 = Agent(llm=llm, max_loops=1, dashboard=False)
# Create another agent for a different task
agent3 = Flow(llm=biochat, max_loops=1, dashboard=False)
agent3 = Agent(llm=biochat, max_loops=1, dashboard=False)
# agent4 = Flow(llm=anthropic, max_loops="auto")
# agent4 = Agent(llm=anthropic, max_loops="auto")
# Create the workflow
workflow = SequentialWorkflow(max_loops=1)

@ -2,7 +2,7 @@ ONBOARDING_AGENT_PROMPT = """
Onboarding:
"As the Onboarding Agent, your role is critical in guiding new users, particularly tech-savvy entrepreneurs, through the initial stages of engaging with our advanced swarm technology services. Begin by welcoming users in a friendly, professional manner, setting a positive tone for the interaction. Your conversation should flow logically, starting with an introduction to our services and their potential benefits for the user's specific business context.
"As the Onboarding Agent, your role is critical in guiding new users, particularly tech-savvy entrepreneurs, through the initial stages of engaging with our advanced swarm technology services. Begin by welcoming users in a friendly, professional manner, setting a positive tone for the interaction. Your conversation should agent logically, starting with an introduction to our services and their potential benefits for the user's specific business context.
Inquire about their industry, delving into specifics such as the industry's current trends, challenges, and the role technology plays in their sector. Show expertise and understanding by using industry-specific terminology and referencing relevant technological advancements. Ask open-ended questions to encourage detailed responses, enabling you to gain a comprehensive understanding of their business needs and objectives.
@ -23,7 +23,7 @@ Conclude the onboarding process by summarizing the key points discussed, reaffir
DOC_ANALYZER_AGENT_PROMPT = """ As a Financial Document Analysis Agent equipped with advanced vision capabilities, your primary role is to analyze financial documents by meticulously scanning and interpreting the visual data they contain. Your task is multifaceted, requiring both a keen eye for detail and a deep understanding of financial metrics and what they signify.
When presented with a financial document, such as a balance sheet, income statement, or cash flow statement, begin by identifying the layout and structure of the document. Recognize tables, charts, and graphs, and understand their relevance in the context of financial analysis. Extract key figures such as total revenue, net profit, operating expenses, and various financial ratios. Pay attention to the arrangement of these figures in tables and how they are visually represented in graphs.
When presented with a financial document, such as a balance sheet, income statement, or cash flow statement, begin by identifying the layout and structure of the document. Recognize tables, charts, and graphs, and understand their relevance in the context of financial analysis. Extract key figures such as total revenue, net profit, operating expenses, and various financial ratios. Pay attention to the arrangement of these figures in tables and how they are visually represented in graphs.
Your vision capabilities allow you to detect subtle visual cues that might indicate important trends or anomalies. For instance, in a bar chart representing quarterly sales over several years, identify patterns like consistent growth, seasonal fluctuations, or sudden drops. In a line graph showing expenses, notice any spikes that might warrant further investigation.
@ -77,7 +77,7 @@ Actionable Decision-Making:
"As the Decision-Making Support Agent, your role is to assist users in making informed financial decisions based on the analysis provided by the Financial Document Analysis and Summary Generation Agents. You are to provide actionable advice and recommendations, grounded in the data but also considering broader business strategies and market conditions.
Begin by reviewing the financial summaries and analysis reports, understanding the key metrics and trends they highlight. Cross-reference this data with industry benchmarks, economic trends, and best practices to provide well-rounded advice. For instance, if the analysis indicates a strong cash flow position, you might recommend strategic investments or suggest areas for expansion.
Begin by reviewing the financial summaries and analysis reports, understanding the key metrics and trends they highlight. Cross-reference this data with industry benchmarks, economic trends, and best practices to provide well-rounded advice. For instance, if the analysis indicates a strong cash flow position, you might recommend strategic investments or suggest areas for expansion.
Address potential risks and opportunities. If the analysis reveals certain vulnerabilities, like over-reliance on a single revenue stream, advise on diversification strategies or risk mitigation tactics. Conversely, if there are untapped opportunities, such as emerging markets or technological innovations, highlight these as potential growth areas.

@ -12,7 +12,7 @@ space of possible neural network architectures, with the goal of finding archite
that perform well on a given task while minimizing the computational cost of training and inference.
Let's break this down step by step:
Next, please consider the gradient flow based on the ideal model architecture.
Next, please consider the gradient flow based on the ideal model architecture.
For example, how the gradient from the later stage affects the earlier stage.
Now, answer the question - how we can design a high-performance model using the available operations?
Based on the analysis, your task is to propose a model design with the given operations that prioritizes performance, without considering factors such as size and complexity.

@ -185,7 +185,7 @@ Your role involves content analysis, editorial precision, expert validation, leg
# Editor Review:
- Evaluate initial drafts for errors, gaps that require additional research.
- Provide guidance on better organizing structure and flow.
- Provide guidance on better organizing structure and flow.
- Assess tone, voice and brand alignment.
# Expert Review:
@ -199,7 +199,7 @@ Your role involves content analysis, editorial precision, expert validation, leg
# Quality Checklist: Scrutinize final draft against PositiveMed's standards:
- Medical accuracy - error-free facts/statistics, supported claims
- Logical flow - smooth transitions, complementary sections
- Logical flow - smooth transitions, complementary sections
- Reader value - insightful analysis beyond fluffy content
- Brand alignment - uplifting tone, inclusive messaging
- Strong conclusion - memorable takeaways, relevant next steps/resources for readers
@ -248,7 +248,7 @@ You don't have a word limit, you can write as you wish.
--------------------------- Your Responsibilities: -----------------------------
Outline Content:
- Organize research into logical sections and subsections for smooth flow.
- Organize research into logical sections and subsections for smooth flow.
- Ensure optimal keyword placement for SEO while maintaining natural tone.
- Structure content to focus on most valuable information upfront.

@ -104,7 +104,7 @@ Start with a basic prototype. Then, refine based on user feedback and testing. C
5.4 Accessibility is Paramount
Ensure your product is usable by everyone, including those with disabilities. This not only expands your product's reach but also ensures inclusivity. Implement features like voice commands, high contrast visuals, and screen reader compatibility.
5.5 Prioritize Functionality and User Flow
5.5 Prioritize Functionality and User Flow
A product can be aesthetically pleasing, but if it doesn't function well or is difficult to navigate, it will lose its value. Ensure seamless user flows and intuitive interactions.
5.6 Maintain Consistency

@ -0,0 +1,177 @@
TEST_SOP = """
Create 500 extensive and thorough tests for the code below using the guide, do not worry about your limits you do not have any
just write the best tests possible and return the test code in markdown format. Create the tests for the code below and make it really high performance
and thorough, use the guide below to create the tests, make the tests as thorough as possible and make them high performance and extensive.
######### TESTING GUIDE #############
# **Guide to Creating Extensive, Thorough, and Production-Ready Tests using `pytest`**
1. **Preparation**:
- Install pytest: `pip install pytest`.
- Structure your project so that tests are in a separate `tests/` directory.
- Name your test files with the prefix `test_` for pytest to recognize them.
2. **Writing Basic Tests**:
- Use clear function names prefixed with `test_` (e.g., `test_check_value()`).
- Use assert statements to validate results.
3. **Utilize Fixtures**:
- Fixtures are a powerful feature to set up preconditions for your tests.
- Use `@pytest.fixture` decorator to define a fixture.
- Pass fixture name as an argument to your test to use it.
4. **Parameterized Testing**:
- Use `@pytest.mark.parametrize` to run a test multiple times with different inputs.
- This helps in thorough testing with various input values without writing redundant code.
5. **Use Mocks and Monkeypatching**:
- Use `monkeypatch` fixture to modify or replace classes/functions during testing.
- Use `unittest.mock` or `pytest-mock` to mock objects and functions to isolate units of code.
6. **Exception Testing**:
- Test for expected exceptions using `pytest.raises(ExceptionType)`.
7. **Test Coverage**:
- Install pytest-cov: `pip install pytest-cov`.
- Run tests with `pytest --cov=my_module` to get a coverage report.
8. **Environment Variables and Secret Handling**:
- Store secrets and configurations in environment variables.
- Use libraries like `python-decouple` or `python-dotenv` to load environment variables.
- For tests, mock or set environment variables temporarily within the test environment.
9. **Grouping and Marking Tests**:
- Use `@pytest.mark` decorator to mark tests (e.g., `@pytest.mark.slow`).
- This allows for selectively running certain groups of tests.
10. **Logging and Reporting**:
- Use `pytest`'s inbuilt logging.
- Integrate with tools like `Allure` for more comprehensive reporting.
11. **Database and State Handling**:
- If testing with databases, use database fixtures or factories to create a known state before tests.
- Clean up and reset state post-tests to maintain consistency.
12. **Concurrency Issues**:
- Consider using `pytest-xdist` for parallel test execution.
- Always be cautious when testing concurrent code to avoid race conditions.
13. **Clean Code Practices**:
- Ensure tests are readable and maintainable.
- Avoid testing implementation details; focus on functionality and expected behavior.
14. **Regular Maintenance**:
- Periodically review and update tests.
- Ensure that tests stay relevant as your codebase grows and changes.
15. **Feedback Loop**:
- Use test failures as feedback for development.
- Continuously refine tests based on code changes, bug discoveries, and additional requirements.
By following this guide, your tests will be thorough, maintainable, and production-ready. Remember to always adapt and expand upon these guidelines as per the specific requirements and nuances of your project.
######### CREATE TESTS FOR THIS CODE: #######
"""
DOCUMENTATION_SOP = """
Create multi-page long and explicit professional pytorch-like documentation for the <MODULE> code below follow the outline for the <MODULE> library,
provide many examples and teach the user about the code, provide examples for every function, make the documentation 10,000 words,
provide many usage examples and note this is markdown docs, create the documentation for the code to document,
put the arguments and methods in a table in markdown to make it visually seamless
Now make the professional documentation for this code, provide the architecture and how the class works and why it works that way,
its purpose, provide args, their types, 3 ways of usage examples, in examples show all the code like imports main example etc
BE VERY EXPLICIT AND THOROUGH, MAKE IT DEEP AND USEFUL
########
Step 1: Understand the purpose and functionality of the module or framework
Read and analyze the description provided in the documentation to understand the purpose and functionality of the module or framework.
Identify the key features, parameters, and operations performed by the module or framework.
Step 2: Provide an overview and introduction
Start the documentation by providing a brief overview and introduction to the module or framework.
Explain the importance and relevance of the module or framework in the context of the problem it solves.
Highlight any key concepts or terminology that will be used throughout the documentation.
Step 3: Provide a class or function definition
Provide the class or function definition for the module or framework.
Include the parameters that need to be passed to the class or function and provide a brief description of each parameter.
Specify the data types and default values for each parameter.
Step 4: Explain the functionality and usage
Provide a detailed explanation of how the module or framework works and what it does.
Describe the steps involved in using the module or framework, including any specific requirements or considerations.
Provide code examples to demonstrate the usage of the module or framework.
Explain the expected inputs and outputs for each operation or function.
Step 5: Provide additional information and tips
Provide any additional information or tips that may be useful for using the module or framework effectively.
Address any common issues or challenges that developers may encounter and provide recommendations or workarounds.
Step 6: Include references and resources
Include references to any external resources or research papers that provide further information or background on the module or framework.
Provide links to relevant documentation or websites for further exploration.
Example Template for the given documentation:
# Module/Function Name: MultiheadAttention
class torch.nn.MultiheadAttention(embed_dim, num_heads, dropout=0.0, bias=True, add_bias_kv=False, add_zero_attn=False, kdim=None, vdim=None, batch_first=False, device=None, dtype=None):
Creates a multi-head attention module for joint information representation from the different subspaces.
Parameters:
- embed_dim (int): Total dimension of the model.
- num_heads (int): Number of parallel attention heads. The embed_dim will be split across num_heads.
- dropout (float): Dropout probability on attn_output_weights. Default: 0.0 (no dropout).
- bias (bool): If specified, adds bias to input/output projection layers. Default: True.
- add_bias_kv (bool): If specified, adds bias to the key and value sequences at dim=0. Default: False.
- add_zero_attn (bool): If specified, adds a new batch of zeros to the key and value sequences at dim=1. Default: False.
- kdim (int): Total number of features for keys. Default: None (uses kdim=embed_dim).
- vdim (int): Total number of features for values. Default: None (uses vdim=embed_dim).
- batch_first (bool): If True, the input and output tensors are provided as (batch, seq, feature). Default: False.
- device (torch.device): If specified, the tensors will be moved to the specified device.
- dtype (torch.dtype): If specified, the tensors will have the specified dtype.
def forward(query, key, value, key_padding_mask=None, need_weights=True, attn_mask=None, average_attn_weights=True, is_causal=False):
Forward pass of the multi-head attention module.
Parameters:
- query (Tensor): Query embeddings of shape (L, E_q) for unbatched input, (L, N, E_q) when batch_first=False, or (N, L, E_q) when batch_first=True.
- key (Tensor): Key embeddings of shape (S, E_k) for unbatched input, (S, N, E_k) when batch_first=False, or (N, S, E_k) when batch_first=True.
- value (Tensor): Value embeddings of shape (S, E_v) for unbatched input, (S, N, E_v) when batch_first=False, or (N, S, E_v) when batch_first=True.
- key_padding_mask (Optional[Tensor]): If specified, a mask indicating elements to be ignored in key for attention computation.
- need_weights (bool): If specified, returns attention weights in addition to attention outputs. Default: True.
- attn_mask (Optional[Tensor]): If specified, a mask preventing attention to certain positions.
- average_attn_weights (bool): If true, returns averaged attention weights per head. Otherwise, returns attention weights separately per head. Note that this flag only has an effect when need_weights=True. Default: True.
- is_causal (bool): If specified, applies a causal mask as the attention mask. Default: False.
Returns:
Tuple[Tensor, Optional[Tensor]]:
- attn_output (Tensor): Attention outputs of shape (L, E) for unbatched input, (L, N, E) when batch_first=False, or (N, L, E) when batch_first=True.
- attn_output_weights (Optional[Tensor]): Attention weights of shape (L, S) when unbatched or (N, L, S) when batched. Optional, only returned when need_weights=True.
# Implementation of the forward pass of the attention module goes here
return attn_output, attn_output_weights
```
# Usage example:
multihead_attn = nn.MultiheadAttention(embed_dim, num_heads)
attn_output, attn_output_weights = multihead_attn(query, key, value)
Note:
The above template includes the class or function definition, parameters, description, and usage example.
To replicate the documentation for any other module or framework, follow the same structure and provide the specific details for that module or framework.
############# DOCUMENT THE FOLLOWING CODE ########
"""

@ -1,5 +1,5 @@
from swarms.structs.flow import Flow
from swarms.structs.agent import Agent
from swarms.structs.sequential_workflow import SequentialWorkflow
from swarms.structs.autoscaler import AutoScaler
__all__ = ["Flow", "SequentialWorkflow", "AutoScaler"]
__all__ = ["Agent", "SequentialWorkflow", "AutoScaler"]

@ -136,10 +136,10 @@ def parse_done_token(response: str) -> bool:
return "<DONE>" in response
class Flow:
class Agent:
"""
Flow is the structure that provides autonomy to any llm in a reliable and effective fashion.
The flow structure is designed to be used with any llm and provides the following features:
Agent is the structure that provides autonomy to any llm in a reliable and effective fashion.
The agent structure is designed to be used with any llm and provides the following features:
Features:
* Interactive, AI generates, then user input
@ -164,11 +164,11 @@ class Flow:
run: Run the autonomous agent loop
run_concurrent: Run the autonomous agent loop concurrently
bulk_run: Run the autonomous agent loop in bulk
save: Save the flow history to a file
load: Load the flow history from a file
save: Save the agent history to a file
load: Load the agent history from a file
validate_response: Validate the response based on certain criteria
print_history_and_memory: Print the history and memory of the flow
step: Execute a single step in the flow interaction
print_history_and_memory: Print the history and memory of the agent
step: Execute a single step in the agent interaction
graceful_shutdown: Gracefully shutdown the system saving the state
run_with_timeout: Run the loop but stop if it takes longer than the timeout
analyze_feedback: Analyze the feedback for issues
@ -200,28 +200,28 @@ class Flow:
parse_tool_command: Parse the text for tool usage
dynamic_temperature: Dynamically change the temperature
_run: Generate a result using the provided keyword args.
from_llm_and_template: Create FlowStream from LLM and a string template.
from_llm_and_template_file: Create FlowStream from LLM and a template file.
save_state: Save the state of the flow
load_state: Load the state of the flow
run_async: Run the flow asynchronously
arun: Run the flow asynchronously
from_llm_and_template: Create an Agent from LLM and a string template.
from_llm_and_template_file: Create an Agent from LLM and a template file.
save_state: Save the state of the agent
load_state: Load the state of the agent
run_async: Run the agent asynchronously
arun: Run the agent asynchronously
run_code: Run the code in the response
Example:
>>> from swarms.models import OpenAIChat
>>> from swarms.structs import Flow
>>> from swarms.structs import Agent
>>> llm = OpenAIChat(
... openai_api_key=api_key,
... temperature=0.5,
... )
>>> flow = Flow(
>>> agent = Agent(
... llm=llm, max_loops=5,
... #system_prompt=SYSTEM_PROMPT,
... #retry_interval=1,
... )
>>> flow.run("Generate a 10,000 word blog")
>>> flow.save("path/flow.yaml")
>>> agent.run("Generate a 10,000 word blog")
>>> agent.save("path/agent.yaml")
"""
def __init__(
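As a quick illustration of the docstring above, here is a minimal sketch assuming the `Agent(llm=..., max_loops=..., stopping_condition=...)` constructor and the `save_state`/`load_state` methods shown in this diff; the task string and file name are placeholders.

```python
from swarms.models import OpenAIChat
from swarms.structs.agent import Agent

llm = OpenAIChat(openai_api_key="")  # your actual API key here

# Stop looping as soon as the model emits the done token parsed above
def stop_on_done(response: str) -> bool:
    return "<DONE>" in response

agent = Agent(llm=llm, max_loops=5, stopping_condition=stop_on_done)
agent.run("Draft a short product announcement and finish with <DONE>")

# Persist the configuration and memory, then restore them later
agent.save_state("announcement_agent.json")
agent.load_state("announcement_agent.json")
```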
@ -460,13 +460,13 @@ class Flow:
print(
colored(
f"""
Flow Dashboard
Agent Dashboard
--------------------------------------------
Flow loop is initializing for {self.max_loops} with the following configuration:
Agent loop is initializing for {self.max_loops} loops with the following configuration:
----------------------------------------
Flow Configuration:
Agent Configuration:
Name: {self.agent_name}
Description: {self.agent_description}
Standard Operating Procedure: {self.sop}
@ -523,7 +523,7 @@ class Flow:
Args:
task (str): The initial task to run
Flow:
Agent:
1. Generate a response
2. Check stopping condition
3. If stopping condition is met, stop
@ -620,7 +620,7 @@ class Flow:
# If autosave is enabled then save the state
if self.autosave:
save_path = self.saved_state_path or "flow_state.json"
print(colored(f"Autosaving flow state to {save_path}", "green"))
print(colored(f"Autosaving agent state to {save_path}", "green"))
self.save_state(save_path)
# If return history is enabled then return the response and history
@ -629,7 +629,7 @@ class Flow:
return response
except Exception as error:
print(f"Error running flow: {error}")
print(f"Error running agent: {error}")
raise
async def arun(self, task: str, **kwargs):
@ -639,7 +639,7 @@ class Flow:
Args:
task (str): The initial task to run
Flow:
Agent:
1. Generate a response
2. Check stopping condition
3. If stopping condition is met, stop
@ -702,7 +702,7 @@ class Flow:
if self.autosave:
save_path = self.saved_state_path or "flow_state.json"
print(colored(f"Autosaving flow state to {save_path}", "green"))
print(colored(f"Autosaving agent state to {save_path}", "green"))
self.save_state(save_path)
if self.return_history:
@ -770,32 +770,32 @@ class Flow:
return [self.run(**input_data) for input_data in inputs]
@staticmethod
def from_llm_and_template(llm: Any, template: str) -> "Flow":
"""Create FlowStream from LLM and a string template."""
return Flow(llm=llm, template=template)
def from_llm_and_template(llm: Any, template: str) -> "Agent":
"""Create AgentStream from LLM and a string template."""
return Agent(llm=llm, template=template)
@staticmethod
def from_llm_and_template_file(llm: Any, template_file: str) -> "Flow":
"""Create FlowStream from LLM and a template file."""
def from_llm_and_template_file(llm: Any, template_file: str) -> "Agent":
"""Create AgentStream from LLM and a template file."""
with open(template_file, "r") as f:
template = f.read()
return Flow(llm=llm, template=template)
return Agent(llm=llm, template=template)
def save(self, file_path) -> None:
with open(file_path, "w") as f:
json.dump(self.memory, f)
print(f"Saved flow history to {file_path}")
print(f"Saved agent history to {file_path}")
def load(self, file_path: str):
"""
Load the flow history from a file.
Load the agent history from a file.
Args:
file_path (str): The path to the file containing the saved flow history.
file_path (str): The path to the file containing the saved agent history.
"""
with open(file_path, "r") as f:
self.memory = json.load(f)
print(f"Loaded flow history from {file_path}")
print(f"Loaded agent history from {file_path}")
def validate_response(self, response: str) -> bool:
"""Validate the response based on certain criteria"""
@ -806,10 +806,10 @@ class Flow:
def print_history_and_memory(self):
"""
Prints the entire history and memory of the flow.
Prints the entire history and memory of the agent.
Each message is colored and formatted for better readability.
"""
print(colored("Flow History and Memory", "cyan", attrs=["bold"]))
print(colored("Agent History and Memory", "cyan", attrs=["bold"]))
print(colored("========================", "cyan", attrs=["bold"]))
for loop_index, history in enumerate(self.memory, start=1):
print(colored(f"\nLoop {loop_index}:", "yellow", attrs=["bold"]))
@ -820,12 +820,12 @@ class Flow:
else:
print(colored(f"{speaker}:", "blue") + f" {message_text}")
print(colored("------------------------", "cyan"))
print(colored("End of Flow History", "cyan", attrs=["bold"]))
print(colored("End of Agent History", "cyan", attrs=["bold"]))
def step(self, task: str, **kwargs):
"""
Executes a single step in the flow interaction, generating a response
Executes a single step in the agent interaction, generating a response
from the language model based on the given input text.
Args:
@ -842,7 +842,7 @@ class Flow:
# Generate the response using lm
response = self.llm(task, **kwargs)
# Update the flow's history with the new interaction
# Update the agent's history with the new interaction
if self.interactive:
self.memory.append(f"AI: {response}")
self.memory.append(f"Human: {task}")
@ -885,9 +885,9 @@ class Flow:
Example:
# Feature 2: Undo functionality
response = flow.run("Another task")
response = agent.run("Another task")
print(f"Response: {response}")
previous_state, message = flow.undo_last()
previous_state, message = agent.undo_last()
print(message)
"""
@ -907,8 +907,8 @@ class Flow:
Add a response filter to filter out certain words from the response
Example:
flow.add_response_filter("Trump")
flow.run("Generate a report on Trump")
agent.add_response_filter("Trump")
agent.run("Generate a report on Trump")
"""
@ -927,8 +927,8 @@ class Flow:
def filtered_run(self, task: str) -> str:
"""
# Feature 3: Response filtering
flow.add_response_filter("report")
response = flow.filtered_run("Generate a report on finance")
agent.add_response_filter("report")
response = agent.filtered_run("Generate a report on finance")
print(response)
"""
raw_response = self.run(task)
@ -954,7 +954,7 @@ class Flow:
Example:
# Feature 4: Streamed generation
response = flow.streamed_generation("Generate a report on finance")
response = agent.streamed_generation("Generate a report on finance")
print(response)
"""
@ -999,13 +999,13 @@ class Flow:
def save_state(self, file_path: str) -> None:
"""
Saves the current state of the flow to a JSON file, including the llm parameters.
Saves the current state of the agent to a JSON file, including the llm parameters.
Args:
file_path (str): The path to the JSON file where the state will be saved.
Example:
>>> flow.save_state('saved_flow.json')
>>> agent.save_state('saved_flow.json')
"""
state = {
"memory": self.memory,
@ -1021,18 +1021,18 @@ class Flow:
with open(file_path, "w") as f:
json.dump(state, f, indent=4)
saved = colored("Saved flow state to", "green")
saved = colored("Saved agent state to", "green")
print(f"{saved} {file_path}")
def load_state(self, file_path: str):
"""
Loads the state of the flow from a json file and restores the configuration and memory.
Loads the state of the agent from a json file and restores the configuration and memory.
Example:
>>> flow = Flow(llm=llm_instance, max_loops=5)
>>> flow.load_state('saved_flow.json')
>>> flow.run("Continue with the task")
>>> agent = Agent(llm=llm_instance, max_loops=5)
>>> agent.load_state('saved_flow.json')
>>> agent.run("Continue with the task")
"""
with open(file_path, "r") as f:
@ -1046,7 +1046,7 @@ class Flow:
self.retry_interval = state.get("retry_interval", 1)
self.interactive = state.get("interactive", False)
print(f"Flow state loaded from {file_path}")
print(f"Agent state loaded from {file_path}")
def retry_on_failure(
self, function, retries: int = 3, retry_delay: int = 1
@ -1098,7 +1098,7 @@ class Flow:
self.retry_interval = retry_interval
def reset(self):
"""Reset the flow"""
"""Reset the agent"""
self.memory = []
def run_code(self, code: str):

@ -6,7 +6,7 @@ from typing import Callable, Dict, List
from termcolor import colored
from swarms.structs.flow import Flow
from swarms.structs.agent import Agent
from swarms.utils.decorators import (
error_decorator,
log_decorator,
@ -19,7 +19,7 @@ class AutoScaler:
The AutoScaler is like a Kubernetes pod that autoscales an agent, worker, or boss!
Wraps around a structure like SequentialWorkflow
and or Flow and parallelizes them on multiple threads so they're split across devices
or an Agent, and parallelizes them on multiple threads so they're split across devices
and you can use them like that
Args:
@ -41,12 +41,12 @@ class AutoScaler:
Usage
```
from swarms.swarms import AutoScaler
from swarms.structs.flow import Flow
from swarms.structs.agent import Agent
@AutoScaler
flow = Flow()
agent = Agent()
flow.run("what is your name")
agent.run("what is your name")
```
"""
@ -61,7 +61,7 @@ class AutoScaler:
busy_threshold=0.7,
agent=None,
):
self.agent = agent or Flow
self.agent = agent or Agent
self.agents_pool = [self.agent() for _ in range(initial_agents)]
self.task_queue = queue.Queue()
self.scale_up_factor = scale_up_factor
@ -86,7 +86,7 @@ class AutoScaler:
with self.lock:
new_agents_counts = len(self.agents_pool) * self.scale_up_factor
for _ in range(new_agents_counts):
self.agents_pool.append(Flow())
self.agents_pool.append(Agent())
except Exception as error:
print(f"Error scaling up: {error} try again with a new task")

@ -1,5 +1,5 @@
from swarms.models import OpenAIChat
from swarms.structs.flow import Flow
from swarms.structs.agent import Agent
import concurrent.futures
from typing import Callable, List, Dict, Any, Sequence
@ -10,7 +10,7 @@ class Task:
self,
id: str,
task: str,
flows: Sequence[Flow],
flows: Sequence[Agent],
dependencies: List[str] = [],
):
self.id = id
@ -21,12 +21,12 @@ class Task:
def execute(self, parent_results: Dict[str, Any]):
args = [parent_results[dep] for dep in self.dependencies]
for flow in self.flows:
result = flow.run(self.task, *args)
for agent in self.flows:
result = agent.run(self.task, *args)
self.results.append(result)
args = [
result
] # The output of one flow becomes the input to the next
] # The output of one agent becomes the input to the next
class Workflow:
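To show the dependency chaining described above, here is a minimal sketch; the import path for this `Task` class is an assumption, as is the API key placeholder, and the tasks are illustrative only.

```python
from swarms.models import OpenAIChat
from swarms.structs.agent import Agent
from swarms.structs.workflow import Task  # module path assumed from this diff

llm = OpenAIChat(openai_api_key="")  # your actual API key here
agent_a = Agent(llm, max_loops=1)
agent_b = Agent(llm, max_loops=1)

# task2 depends on task1, so task1's result is passed in as an extra argument
task1 = Task("task1", "Summarize quantum field theory", [agent_a])
task2 = Task(
    "task2", "Write a blog intro from the summary", [agent_b], dependencies=["task1"]
)

task1.execute(parent_results={})
task2.execute(parent_results={"task1": task1.results[-1]})
print(task2.results[-1])
```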
@ -65,13 +65,13 @@ class Workflow:
# create flows
llm = OpenAIChat(openai_api_key="sk-")
flow1 = Flow(llm, max_loops=1)
flow2 = Flow(llm, max_loops=1)
flow3 = Flow(llm, max_loops=1)
flow4 = Flow(llm, max_loops=1)
flow1 = Agent(llm, max_loops=1)
flow2 = Agent(llm, max_loops=1)
flow3 = Agent(llm, max_loops=1)
flow4 = Agent(llm, max_loops=1)
# Create tasks with their respective Flows and task strings
# Create tasks with their respective Agents and task strings
task1 = Task("task1", "Generate a summary on Quantum field theory", [flow1])
task2 = Task(
"task2",

@ -19,7 +19,7 @@ from typing import Any, Callable, Dict, List, Optional, Union
from termcolor import colored
from swarms.structs.flow import Flow
from swarms.structs.agent import Agent
# Define a generic Task that can handle different types of callable objects
@ -31,7 +31,7 @@ class Task:
Args:
description (str): The description of the task.
flow (Union[Callable, Flow]): The model or flow to execute the task.
agent (Union[Callable, Agent]): The model or agent to execute the task.
args (List[Any]): Additional arguments to pass to the task execution.
kwargs (Dict[str, Any]): Additional keyword arguments to pass to the task execution.
result (Any): The result of the task execution.
@ -42,17 +42,17 @@ class Task:
Examples:
>>> from swarms.structs import Task, Flow
>>> from swarms.structs import Task, Agent
>>> from swarms.models import OpenAIChat
>>> flow = Flow(llm=OpenAIChat(openai_api_key=""), max_loops=1, dashboard=False)
>>> task = Task(description="What's the weather in miami", flow=flow)
>>> agent = Agent(llm=OpenAIChat(openai_api_key=""), max_loops=1, dashboard=False)
>>> task = Task(description="What's the weather in miami", agent=agent)
>>> task.execute()
>>> task.result
"""
description: str
flow: Union[Callable, Flow]
agent: Union[Callable, Agent]
args: List[Any] = field(default_factory=list)
kwargs: Dict[str, Any] = field(default_factory=dict)
result: Any = None
@ -63,10 +63,10 @@ class Task:
Execute the task.
Raises:
ValueError: If a Flow instance is used as a task and the 'task' argument is not provided.
ValueError: If an Agent instance is used as a task and the 'task' argument is not provided.
"""
if isinstance(self.flow, Flow):
# Add a prompt to notify the Flow of the sequential workflow
if isinstance(self.agent, Agent):
# Add a prompt to notify the Agent of the sequential workflow
if "prompt" in self.kwargs:
self.kwargs["prompt"] += (
f"\n\nPrevious output: {self.result}" if self.result else ""
@ -75,9 +75,9 @@ class Task:
self.kwargs["prompt"] = f"Main task: {self.description}" + (
f"\n\nPrevious output: {self.result}" if self.result else ""
)
self.result = self.flow.run(*self.args, **self.kwargs)
self.result = self.agent.run(*self.args, **self.kwargs)
else:
self.result = self.flow(*self.args, **self.kwargs)
self.result = self.agent(*self.args, **self.kwargs)
self.history.append(self.result)
@ -122,7 +122,7 @@ class SequentialWorkflow:
def add(
self,
flow: Union[Callable, Flow],
agent: Union[Callable, Agent],
task: Optional[str] = None,
img: Optional[str] = None,
*args,
@ -132,22 +132,22 @@ class SequentialWorkflow:
Add a task to the workflow.
Args:
flow (Union[Callable, Flow]): The model or flow to execute the task.
task (str): The task description or the initial input for the Flow.
agent (Union[Callable, Agent]): The model or agent to execute the task.
task (str): The task description or the initial input for the Agent.
img (str): The image to understand for the task.
*args: Additional arguments to pass to the task execution.
**kwargs: Additional keyword arguments to pass to the task execution.
"""
# If the flow is a Flow instance, we include the task in kwargs for Flow.run()
if isinstance(flow, Flow):
kwargs["task"] = task # Set the task as a keyword argument for Flow
# If the agent is an Agent instance, we include the task in kwargs for Agent.run()
if isinstance(agent, Agent):
kwargs["task"] = task # Set the task as a keyword argument for Agent
# Append the task to the tasks list
if self.img:
self.tasks.append(
Task(
description=task,
flow=flow,
agent=agent,
args=list(args),
kwargs=kwargs,
img=img,
@ -156,7 +156,7 @@ class SequentialWorkflow:
else:
self.tasks.append(
Task(
description=task, flow=flow, args=list(args), kwargs=kwargs
description=task, agent=agent, args=list(args), kwargs=kwargs
)
)
@ -319,7 +319,7 @@ class SequentialWorkflow:
task = Task(
description=task,
flow=kwargs["flow"],
agent=kwargs["agent"],
args=list(kwargs["args"]),
kwargs=kwargs["kwargs"],
)
@ -352,7 +352,7 @@ class SequentialWorkflow:
for task_state in state["tasks"]:
task = Task(
description=task_state["description"],
flow=task_state["flow"],
agent=task_state["agent"],
args=task_state["args"],
kwargs=task_state["kwargs"],
result=task_state["result"],
@ -365,7 +365,7 @@ class SequentialWorkflow:
Run the workflow.
Raises:
ValueError: If a Flow instance is used as a task and the 'task' argument is not provided.
ValueError: If an Agent instance is used as a task and the 'task' argument is not provided.
"""
try:
@ -374,30 +374,30 @@ class SequentialWorkflow:
for task in self.tasks:
# Check if the current task can be executed
if task.result is None:
# Check if the flow is a Flow and a 'task' argument is needed
if isinstance(task.flow, Flow):
# Check if the agent is an Agent and a 'task' argument is needed
if isinstance(task.agent, Agent):
# Ensure that 'task' is provided in the kwargs
if "task" not in task.kwargs:
raise ValueError(
"The 'task' argument is required for the"
" Flow flow execution in"
" Agent agent execution in"
f" '{task.description}'"
)
# Separate the 'task' argument from other kwargs
flow_task_arg = task.kwargs.pop("task")
task.result = task.flow.run(
task.result = task.agent.run(
flow_task_arg, *task.args, **task.kwargs
)
else:
# If it's not a Flow instance, call the flow directly
task.result = task.flow(*task.args, **task.kwargs)
# If it's not an Agent instance, call the agent directly
task.result = task.agent(*task.args, **task.kwargs)
# Pass the result as an argument to the next task if it exists
next_task_index = self.tasks.index(task) + 1
if next_task_index < len(self.tasks):
next_task = self.tasks[next_task_index]
if isinstance(next_task.flow, Flow):
# For Flow flows, 'task' should be a keyword argument
if isinstance(next_task.agent, Agent):
# For Agent instances, 'task' should be a keyword argument
next_task.kwargs["task"] = task.result
else:
# For other callables, the result is added to args
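To see that chaining behaviour end to end, here is a minimal sketch assuming the `add(agent, task=...)` signature from the hunk above; the word-count callable and the task strings are illustrative only, and the key is a placeholder.

```python
from swarms.models import OpenAIChat
from swarms.structs.agent import Agent
from swarms.structs.sequential_workflow import SequentialWorkflow

llm = OpenAIChat(openai_api_key="")  # your actual API key here
researcher = Agent(llm=llm, max_loops=1, dashboard=False)

def word_count(text: str) -> str:
    # A plain callable step: the previous task's result arrives as a positional arg
    return f"The draft contains {len(text.split())} words."

workflow = SequentialWorkflow(max_loops=1)
workflow.add(researcher, task="Draft three bullet points on solar energy")
workflow.add(word_count, task="Count the words in the draft")
workflow.run()

print(workflow.tasks[-1].result)  # output of the final callable step
```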
@ -413,7 +413,7 @@ class SequentialWorkflow:
colored(
(
f"Error initializing the Sequential workflow: {e} try"
" optimizing your inputs like the flow class and task"
" optimizing your inputs like the agent class and task"
" description"
),
"red",
@ -426,36 +426,36 @@ class SequentialWorkflow:
Asynchronously run the workflow.
Raises:
ValueError: If a Flow instance is used as a task and the 'task' argument is not provided.
ValueError: If an Agent instance is used as a task and the 'task' argument is not provided.
"""
for _ in range(self.max_loops):
for task in self.tasks:
# Check if the current task can be executed
if task.result is None:
# Check if the flow is a Flow and a 'task' argument is needed
if isinstance(task.flow, Flow):
# Check if the agent is an Agent and a 'task' argument is needed
if isinstance(task.agent, Agent):
# Ensure that 'task' is provided in the kwargs
if "task" not in task.kwargs:
raise ValueError(
"The 'task' argument is required for the Flow"
f" flow execution in '{task.description}'"
"The 'task' argument is required for the Agent"
f" agent execution in '{task.description}'"
)
# Separate the 'task' argument from other kwargs
flow_task_arg = task.kwargs.pop("task")
task.result = await task.flow.arun(
task.result = await task.agent.arun(
flow_task_arg, *task.args, **task.kwargs
)
else:
# If it's not a Flow instance, call the flow directly
task.result = await task.flow(*task.args, **task.kwargs)
# If it's not an Agent instance, call the agent directly
task.result = await task.agent(*task.args, **task.kwargs)
# Pass the result as an argument to the next task if it exists
next_task_index = self.tasks.index(task) + 1
if next_task_index < len(self.tasks):
next_task = self.tasks[next_task_index]
if isinstance(next_task.flow, Flow):
# For Flow flows, 'task' should be a keyword argument
if isinstance(next_task.agent, Agent):
# For Agent instances, 'task' should be a keyword argument
next_task.kwargs["task"] = task.result
else:
# For other callables, the result is added to args

@ -17,7 +17,7 @@ class AutoBlogGenSwarm:
"""
AutoBlogGenSwarm
Swarm Flow
Swarm pipeline:
Topic selection agent -> draft agent -> review agent -> distribution agent
Topic Selection Agent:

@ -16,9 +16,9 @@ class DialogueSimulator:
Usage:
------
>>> from swarms import DialogueSimulator
>>> from swarms.structs.flow import Flow
>>> agents = Flow()
>>> agents1 = Flow()
>>> from swarms.structs.agent import Agent
>>> agents = Agent()
>>> agents1 = Agent()
>>> model = DialogueSimulator([agents, agents1], max_iters=10, name="test")
>>> model.run("test")
"""

@ -1,7 +1,7 @@
import logging
from dataclasses import dataclass
from typing import Dict, List
from swarms.structs.flow import Flow
from swarms.structs.agent import Agent
logger = logging.getLogger(__name__)
@ -12,19 +12,19 @@ class GroupChat:
A group chat class that contains a list of agents and the maximum number of rounds.
Args:
agents: List[Flow]
agents: List[Agent]
messages: List[Dict]
max_round: int
admin_name: str
Usage:
>>> from swarms import GroupChat
>>> from swarms.structs.flow import Flow
>>> agents = Flow()
>>> from swarms.structs.agent import Agent
>>> agents = Agent()
"""
agents: List[Flow]
agents: List[Agent]
messages: List[Dict]
max_round: int = 10
admin_name: str = "Admin" # the name of the admin agent
@ -38,14 +38,14 @@ class GroupChat:
"""Reset the group chat."""
self.messages.clear()
def agent_by_name(self, name: str) -> Flow:
def agent_by_name(self, name: str) -> Agent:
"""Find an agent whose name is contained within the given 'name' string."""
for agent in self.agents:
if agent.name in name:
return agent
raise ValueError(f"No agent found with a name contained in '{name}'.")
def next_agent(self, agent: Flow) -> Flow:
def next_agent(self, agent: Agent) -> Agent:
"""Return the next agent in the list."""
return self.agents[
(self.agent_names.index(agent.name) + 1) % len(self.agents)
@ -61,7 +61,7 @@ class GroupChat:
Then select the next role from {self.agent_names} to play. Only return the role.
"""
def select_speaker(self, last_speaker: Flow, selector: Flow):
def select_speaker(self, last_speaker: Agent, selector: Agent):
"""Select the next speaker."""
selector.update_system_message(self.select_speaker_msg())
@ -112,18 +112,18 @@ class GroupChatManager:
Args:
groupchat: GroupChat
selector: Flow
selector: Agent
Usage:
>>> from swarms import GroupChatManager
>>> from swarms.structs.flow import Flow
>>> agents = Flow()
>>> from swarms.structs.agent import Agent
>>> agents = Agent()
>>> output = GroupChatManager(agents, lambda x: x)
"""
def __init__(self, groupchat: GroupChat, selector: Flow):
def __init__(self, groupchat: GroupChat, selector: Agent):
self.groupchat = groupchat
self.selector = selector
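A small sketch of the helpers above, mirroring the constructor arguments used by the group-chat tests later in this diff; whether `Agent` accepts a `name` keyword is taken from those tests, and the key is a placeholder.

```python
from swarms.models import OpenAIChat
from swarms.structs.agent import Agent
from swarms.swarms.groupchat import GroupChat, GroupChatManager

llm = OpenAIChat(openai_api_key="")  # your actual API key here
writer = Agent(name="Writer", llm=llm)
editor = Agent(name="Editor", llm=llm)

chat = GroupChat(agents=[writer, editor], messages=[], max_round=10)

print(chat.agent_by_name("Editor").name)  # "Editor"
print(chat.next_agent(writer).name)       # "Editor" (round-robin order)

# The manager drives the conversation, using `selector` to pick the next speaker
manager = GroupChatManager(groupchat=chat, selector=writer)
```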

@ -5,7 +5,7 @@ from typing import List
import tenacity
from langchain.output_parsers import RegexParser
from swarms.structs.flow import Flow
from swarms.structs.agent import Agent
from swarms.utils.logger import logger
@ -29,7 +29,7 @@ class MultiAgentCollaboration:
Multi-agent collaboration class.
Attributes:
agents (List[Flow]): The agents in the collaboration.
agents (List[Agent]): The agents in the collaboration.
selection_function (callable): The function that selects the next speaker.
Defaults to select_next_speaker.
max_iters (int): The maximum number of iterations. Defaults to 10.
@ -56,7 +56,7 @@ class MultiAgentCollaboration:
Usage:
>>> from swarms.models import OpenAIChat
>>> from swarms.structs import Flow
>>> from swarms.structs import Agent
>>> from swarms.swarms.multi_agent_collab import MultiAgentCollaboration
>>>
>>> # Initialize the language model
@ -66,14 +66,14 @@ class MultiAgentCollaboration:
>>>
>>>
>>> ## Initialize the workflow
>>> flow = Flow(llm=llm, max_loops=1, dashboard=True)
>>> agent = Agent(llm=llm, max_loops=1, dashboard=True)
>>>
>>> # Run the workflow on a task
>>> out = flow.run("Generate a 10,000 word blog on health and wellness.")
>>> out = agent.run("Generate a 10,000 word blog on health and wellness.")
>>>
>>> # Initialize the multi-agent collaboration
>>> swarm = MultiAgentCollaboration(
>>> agents=[flow],
>>> agents=[agent],
>>> max_iters=4,
>>> )
>>>
@ -87,7 +87,7 @@ class MultiAgentCollaboration:
def __init__(
self,
agents: List[Flow],
agents: List[Agent],
selection_function: callable = None,
max_iters: int = 10,
autosave: bool = True,
@ -117,7 +117,7 @@ class MultiAgentCollaboration:
agent.run(f"Name {name} and message: {message}")
self._step += 1
def inject_agent(self, agent: Flow):
def inject_agent(self, agent: Agent):
"""Injects an agent into the multi-agent collaboration."""
self.agents.append(agent)
@ -195,13 +195,13 @@ class MultiAgentCollaboration:
n += 1
def select_next_speaker_roundtable(
self, step: int, agents: List[Flow]
self, step: int, agents: List[Agent]
) -> int:
"""Selects the next speaker."""
return step % len(agents)
def select_next_speaker_director(
step: int, agents: List[Flow], director
step: int, agents: List[Agent], director
) -> int:
# if the step is even => director
# => director selects next speaker
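Mirroring the collaboration tests at the end of this diff, here is a minimal sketch of constructing the swarm, injecting a third agent, and calling the round-table selector shown above; the key is a placeholder.

```python
from swarms.models import OpenAIChat
from swarms.structs.agent import Agent
from swarms.swarms.multi_agent_collab import MultiAgentCollaboration

llm = OpenAIChat(openai_api_key="")  # your actual API key here
agent_a = Agent(llm=llm, max_loops=2)
agent_b = Agent(llm=llm, max_loops=2)

collab = MultiAgentCollaboration(agents=[agent_a, agent_b], max_iters=4)
collab.inject_agent(Agent(llm=llm, max_loops=2))  # the pool now holds three agents

# Round-table selection simply cycles through the pool by step index
print(collab.select_next_speaker_roundtable(5, collab.agents))  # 2
```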

@ -7,7 +7,7 @@ import pytest
from dotenv import load_dotenv
from swarms.models import OpenAIChat
from swarms.structs.flow import Flow, stop_when_repeats
from swarms.structs.agent import Agent, stop_when_repeats
from swarms.utils.logger import logger
load_dotenv()
@ -25,12 +25,12 @@ def mocked_llm():
@pytest.fixture
def basic_flow(mocked_llm):
return Flow(llm=mocked_llm, max_loops=5)
return Agent(llm=mocked_llm, max_loops=5)
@pytest.fixture
def flow_with_condition(mocked_llm):
return Flow(
return Agent(
llm=mocked_llm, max_loops=5, stopping_condition=stop_when_repeats
)
@ -93,7 +93,7 @@ def test_save_and_load(basic_flow, tmp_path):
basic_flow.memory.append(["Test1", "Test2"])
basic_flow.save(file_path)
new_flow = Flow(llm=mocked_llm, max_loops=5)
new_flow = Agent(llm=mocked_llm, max_loops=5)
new_flow.load(file_path)
assert new_flow.memory == [["Test1", "Test2"]]
@ -107,19 +107,19 @@ def test_env_variable_handling(monkeypatch):
# TODO: Add more tests, especially edge cases and exception cases. Implement parametrized tests for varied inputs.
# Test initializing the flow with different stopping conditions
# Test initializing the agent with different stopping conditions
def test_flow_with_custom_stopping_condition(mocked_llm):
def stopping_condition(x):
return "terminate" in x.lower()
flow = Flow(
agent = Agent(
llm=mocked_llm, max_loops=5, stopping_condition=stopping_condition
)
assert flow.stopping_condition("Please terminate now")
assert not flow.stopping_condition("Continue the process")
assert agent.stopping_condition("Please terminate now")
assert not agent.stopping_condition("Continue the process")
# Test calling the flow directly
# Test calling the agent directly
def test_flow_call(basic_flow):
response = basic_flow("Test call")
assert response == "Test call"
@ -187,14 +187,14 @@ def test_check_stopping_condition(flow_with_condition):
# Test without providing max loops (default value should be 5)
def test_default_max_loops(mocked_llm):
flow = Flow(llm=mocked_llm)
assert flow.max_loops == 5
agent = Agent(llm=mocked_llm)
assert agent.max_loops == 5
# Test creating flow from llm and template
# Test creating agent from llm and template
def test_from_llm_and_template(mocked_llm):
flow = Flow.from_llm_and_template(mocked_llm, "Test template")
assert isinstance(flow, Flow)
agent = Agent.from_llm_and_template(mocked_llm, "Test template")
assert isinstance(agent, Agent)
# Mocking the OpenAIChat for testing
@ -202,8 +202,8 @@ def test_from_llm_and_template(mocked_llm):
def test_mocked_openai_chat(MockedOpenAIChat):
llm = MockedOpenAIChat(openai_api_key=openai_api_key)
llm.return_value = MagicMock()
flow = Flow(llm=llm, max_loops=5)
flow.run("Mocked run")
agent = Agent(llm=llm, max_loops=5)
agent.run("Mocked run")
assert MockedOpenAIChat.called
@ -232,16 +232,16 @@ def test_different_retry_intervals(mocked_sleep, basic_flow):
assert response == "Test retry interval"
# Test invoking the flow with additional kwargs
# Test invoking the agent with additional kwargs
@patch("time.sleep", return_value=None)
def test_flow_call_with_kwargs(mocked_sleep, basic_flow):
response = basic_flow("Test call", param1="value1", param2="value2")
assert response == "Test call"
# Test initializing the flow with all parameters
# Test initializing the agent with all parameters
def test_flow_initialization_all_params(mocked_llm):
flow = Flow(
agent = Agent(
llm=mocked_llm,
max_loops=10,
stopping_condition=stop_when_repeats,
@ -252,11 +252,11 @@ def test_flow_initialization_all_params(mocked_llm):
param1="value1",
param2="value2",
)
assert flow.max_loops == 10
assert flow.loop_interval == 2
assert flow.retry_attempts == 4
assert flow.retry_interval == 2
assert flow.interactive
assert agent.max_loops == 10
assert agent.loop_interval == 2
assert agent.retry_attempts == 4
assert agent.retry_interval == 2
assert agent.interactive
# Test the stopping token is in the response
@ -268,30 +268,30 @@ def test_stopping_token_in_response(mocked_sleep, basic_flow):
@pytest.fixture
def flow_instance():
# Create an instance of the Flow class with required parameters for testing
# Create an instance of the Agent class with required parameters for testing
# You may need to adjust this based on your actual class initialization
llm = OpenAIChat(
openai_api_key=openai_api_key,
)
flow = Flow(
agent = Agent(
llm=llm,
max_loops=5,
interactive=False,
dashboard=False,
dynamic_temperature=False,
)
return flow
return agent
def test_flow_run(flow_instance):
# Test the basic run method of the Flow class
# Test the basic run method of the Agent class
response = flow_instance.run("Test task")
assert isinstance(response, str)
assert len(response) > 0
def test_flow_interactive_mode(flow_instance):
# Test the interactive mode of the Flow class
# Test the interactive mode of the Agent class
flow_instance.interactive = True
response = flow_instance.run("Test task")
assert isinstance(response, str)
@ -299,7 +299,7 @@ def test_flow_interactive_mode(flow_instance):
def test_flow_dashboard_mode(flow_instance):
# Test the dashboard mode of the Flow class
# Test the dashboard mode of the Agent class
flow_instance.dashboard = True
response = flow_instance.run("Test task")
assert isinstance(response, str)
@ -307,7 +307,7 @@ def test_flow_dashboard_mode(flow_instance):
def test_flow_autosave(flow_instance):
# Test the autosave functionality of the Flow class
# Test the autosave functionality of the Agent class
flow_instance.autosave = True
response = flow_instance.run("Test task")
assert isinstance(response, str)
@ -360,7 +360,7 @@ def test_flow_graceful_shutdown(flow_instance):
assert result is not None
# Add more test cases as needed to cover various aspects of your Flow class
# Add more test cases as needed to cover various aspects of your Agent class
def test_flow_max_loops(flow_instance):
@ -482,7 +482,7 @@ def test_flow_clear_conversation_log(flow_instance):
def test_flow_get_state(flow_instance):
# Test getting the current state of the Flow instance
# Test getting the current state of the Agent instance
state = flow_instance.get_state()
assert isinstance(state, dict)
assert "current_prompt" in state
@ -498,7 +498,7 @@ def test_flow_get_state(flow_instance):
def test_flow_load_state(flow_instance):
# Test loading the state into the Flow instance
# Test loading the state into the Agent instance
state = {
"current_prompt": "Loaded prompt",
"instructions": ["Step 1", "Step 2"],
@ -530,7 +530,7 @@ def test_flow_load_state(flow_instance):
def test_flow_save_state(flow_instance):
# Test saving the state of the Flow instance
# Test saving the state of the Agent instance
flow_instance.change_prompt("New prompt")
flow_instance.add_instruction("Step 1")
flow_instance.add_user_message("User message")
@ -604,7 +604,7 @@ def test_flow_contextual_intent_reset(flow_instance):
assert "New York" in response2
# Add more test cases as needed to cover various aspects of your Flow class
# Add more test cases as needed to cover various aspects of your Agent class
def test_flow_interruptible(flow_instance):
# Test interruptible mode
flow_instance.interruptible = True
@ -791,9 +791,9 @@ def test_flow_clear_context(flow_instance):
def test_flow_input_validation(flow_instance):
# Test input validation for invalid flow configurations
# Test input validation for invalid agent configurations
with pytest.raises(ValueError):
Flow(config=None) # Invalid config, should raise ValueError
Agent(config=None) # Invalid config, should raise ValueError
with pytest.raises(ValueError):
flow_instance.set_message_delimiter(
@ -850,7 +850,7 @@ def test_flow_conversation_persistence(flow_instance):
flow_instance.run("Message 2")
conversation = flow_instance.get_conversation()
new_flow_instance = Flow()
new_flow_instance = Agent()
new_flow_instance.load_conversation(conversation)
assert len(new_flow_instance.get_message_history()) == 2
assert "Message 1" in new_flow_instance.get_message_history()[0]
@ -918,7 +918,7 @@ def test_flow_multiple_event_listeners(flow_instance):
mock_second_response.assert_called_once()
# Add more test cases as needed to cover various aspects of your Flow class
# Add more test cases as needed to cover various aspects of your Agent class
def test_flow_error_handling(flow_instance):
# Test error handling and exceptions
with pytest.raises(ValueError):
@ -967,7 +967,7 @@ def test_flow_context_operations(flow_instance):
assert flow_instance.get_context("user_id") is None
# Add more test cases as needed to cover various aspects of your Flow class
# Add more test cases as needed to cover various aspects of your Agent class
def test_flow_long_messages(flow_instance):
@ -1017,7 +1017,7 @@ def test_flow_custom_logging(flow_instance):
def test_flow_performance(flow_instance):
# Test the performance of the Flow class by running a large number of messages
# Test the performance of the Agent class by running a large number of messages
num_messages = 1000
for i in range(num_messages):
flow_instance.run(f"Message {i}")
@ -1038,7 +1038,7 @@ def test_flow_complex_use_case(flow_instance):
assert flow_instance.get_context("user_id") is None
# Add more test cases as needed to cover various aspects of your Flow class
# Add more test cases as needed to cover various aspects of your Agent class
def test_flow_context_handling(flow_instance):
# Test context handling
flow_instance.add_context("user_id", "12345")
@ -1083,7 +1083,7 @@ def test_flow_custom_timeout(flow_instance):
assert execution_time >= 10 # Ensure the timeout was respected
# Add more test cases as needed to thoroughly cover your Flow class
# Add more test cases as needed to thoroughly cover your Agent class
def test_flow_interactive_run(flow_instance, capsys):
@ -1113,7 +1113,7 @@ def test_flow_interactive_run(flow_instance, capsys):
simulate_user_input(user_input)
# Assuming you have already defined your Flow class and created an instance for testing
# Assuming you have already defined your Agent class and created an instance for testing
def test_flow_agent_history_prompt(flow_instance):
@ -1154,34 +1154,34 @@ def test_flow_bulk_run(flow_instance):
def test_flow_from_llm_and_template():
# Test creating Flow instance from an LLM and a template
# Test creating Agent instance from an LLM and a template
llm_instance = mocked_llm # Replace with your LLM class
template = "This is a template for testing."
flow_instance = Flow.from_llm_and_template(llm_instance, template)
flow_instance = Agent.from_llm_and_template(llm_instance, template)
assert isinstance(flow_instance, Flow)
assert isinstance(flow_instance, Agent)
def test_flow_from_llm_and_template_file():
# Test creating Flow instance from an LLM and a template file
# Test creating Agent instance from an LLM and a template file
llm_instance = mocked_llm # Replace with your LLM class
template_file = "template.txt" # Create a template file for testing
flow_instance = Flow.from_llm_and_template_file(llm_instance, template_file)
flow_instance = Agent.from_llm_and_template_file(llm_instance, template_file)
assert isinstance(flow_instance, Flow)
assert isinstance(flow_instance, Agent)
def test_flow_save_and_load(flow_instance, tmp_path):
# Test saving and loading the flow state
# Test saving and loading the agent state
file_path = tmp_path / "flow_state.json"
# Save the state
flow_instance.save(file_path)
# Create a new instance and load the state
new_flow_instance = Flow(llm=mocked_llm, max_loops=5)
new_flow_instance = Agent(llm=mocked_llm, max_loops=5)
new_flow_instance.load(file_path)
# Ensure that the loaded state matches the original state
@ -1197,22 +1197,22 @@ def test_flow_validate_response(flow_instance):
assert flow_instance.validate_response(invalid_response) is False
# Add more test cases as needed for other methods and features of your Flow class
# Add more test cases as needed for other methods and features of your Agent class
# Finally, don't forget to run your tests using a testing framework like pytest
# Assuming you have already defined your Flow class and created an instance for testing
# Assuming you have already defined your Agent class and created an instance for testing
def test_flow_print_history_and_memory(capsys, flow_instance):
# Test printing the history and memory of the flow
# Test printing the history and memory of the agent
history = ["User: Hi", "AI: Hello"]
flow_instance.memory = [history]
flow_instance.print_history_and_memory()
captured = capsys.readouterr()
assert "Flow History and Memory" in captured.out
assert "Agent History and Memory" in captured.out
assert "Loop 1:" in captured.out
assert "User: Hi" in captured.out
assert "AI: Hello" in captured.out
@ -1227,6 +1227,6 @@ def test_flow_run_with_timeout(flow_instance):
assert response in ["Actual Response", "Timeout"]
# Add more test cases as needed for other methods and features of your Flow class
# Add more test cases as needed for other methods and features of your Agent class
# Finally, don't forget to run your tests using a testing framework like pytest

@ -5,7 +5,7 @@ from unittest.mock import patch
import pytest
from swarms.models import OpenAIChat
from swarms.structs.flow import Flow
from swarms.structs.agent import Agent
from swarms.structs.sequential_workflow import SequentialWorkflow, Task
# Mock the OpenAI API key using environment variables
@ -21,8 +21,8 @@ class MockOpenAIChat:
return "Mocked result"
# Mock Flow class for testing
class MockFlow:
# Mock Agent class for testing
class MockAgent:
def __init__(self, *args, **kwargs):
pass
@ -45,16 +45,16 @@ class MockSequentialWorkflow:
# Test Task class
def test_task_initialization():
description = "Sample Task"
flow = MockOpenAIChat()
task = Task(description=description, flow=flow)
agent = MockOpenAIChat()
task = Task(description=description, agent=agent)
assert task.description == description
assert task.flow == flow
assert task.agent == agent
def test_task_execute():
description = "Sample Task"
flow = MockOpenAIChat()
task = Task(description=description, flow=flow)
agent = MockOpenAIChat()
task = Task(description=description, agent=agent)
task.execute()
assert task.result == "Mocked result"
@ -78,7 +78,7 @@ def test_sequential_workflow_add_task():
workflow.add(task_description, task_flow)
assert len(workflow.tasks) == 1
assert workflow.tasks[0].description == task_description
assert workflow.tasks[0].flow == task_flow
assert workflow.tasks[0].agent == task_flow
def test_sequential_workflow_reset_workflow():
@ -169,8 +169,8 @@ def test_sequential_workflow_workflow_dashboard(capfd):
assert "Sequential Workflow Dashboard" in out
# Mock Flow class for async testing
class MockAsyncFlow:
# Mock Agent class for async testing
class MockAsyncAgent:
def __init__(self, *args, **kwargs):
pass
@ -183,7 +183,7 @@ class MockAsyncFlow:
async def test_sequential_workflow_arun():
workflow = SequentialWorkflow()
task_description = "Sample Task"
task_flow = MockAsyncFlow()
task_flow = MockAsyncAgent()
workflow.add(task_description, task_flow)
await workflow.arun()
assert workflow.tasks[0].result == "Mocked result"
@ -196,9 +196,9 @@ def test_real_world_usage_with_openai_key():
def test_real_world_usage_with_flow_and_openai_key():
# Initialize a flow with the language model
flow = Flow(llm=OpenAIChat())
assert isinstance(flow, Flow)
# Initialize an agent with the language model
agent = Agent(llm=OpenAIChat())
assert isinstance(agent, Agent)
def test_real_world_usage_with_sequential_workflow():

@ -1,11 +1,11 @@
from unittest.mock import patch
from swarms.structs.autoscaler import AutoScaler
from swarms.models import OpenAIChat
from swarms.structs import Flow
from swarms.structs import Agent
llm = OpenAIChat()
flow = Flow(
agent = Agent(
llm=llm,
max_loops=2,
dashboard=True,
@ -18,7 +18,7 @@ def test_autoscaler_initialization():
scale_up_factor=2,
idle_threshold=0.1,
busy_threshold=0.8,
agent=flow,
agent=agent,
)
assert isinstance(autoscaler, AutoScaler)
assert autoscaler.scale_up_factor == 2
@ -28,19 +28,19 @@ def test_autoscaler_initialization():
def test_autoscaler_add_task():
autoscaler = AutoScaler(agent=flow)
autoscaler = AutoScaler(agent=agent)
autoscaler.add_task("task1")
assert autoscaler.task_queue.qsize() == 1
def test_autoscaler_scale_up():
autoscaler = AutoScaler(initial_agents=5, scale_up_factor=2, agent=flow)
autoscaler = AutoScaler(initial_agents=5, scale_up_factor=2, agent=agent)
autoscaler.scale_up()
assert len(autoscaler.agents_pool) == 10
def test_autoscaler_scale_down():
autoscaler = AutoScaler(initial_agents=5, agent=flow)
autoscaler = AutoScaler(initial_agents=5, agent=agent)
autoscaler.scale_down()
assert len(autoscaler.agents_pool) == 4
@ -48,7 +48,7 @@ def test_autoscaler_scale_down():
@patch("swarms.swarms.AutoScaler.scale_up")
@patch("swarms.swarms.AutoScaler.scale_down")
def test_autoscaler_monitor_and_scale(mock_scale_down, mock_scale_up):
autoscaler = AutoScaler(initial_agents=5, agent=flow)
autoscaler = AutoScaler(initial_agents=5, agent=agent)
autoscaler.add_task("task1")
autoscaler.monitor_and_scale()
mock_scale_up.assert_called_once()
@ -56,9 +56,9 @@ def test_autoscaler_monitor_and_scale(mock_scale_down, mock_scale_up):
@patch("swarms.swarms.AutoScaler.monitor_and_scale")
@patch("swarms.swarms.flow.run")
@patch("swarms.swarms.agent.run")
def test_autoscaler_start(mock_run, mock_monitor_and_scale):
autoscaler = AutoScaler(initial_agents=5, agent=flow)
autoscaler = AutoScaler(initial_agents=5, agent=agent)
autoscaler.add_task("task1")
autoscaler.start()
mock_run.assert_called_once()
@ -66,6 +66,6 @@ def test_autoscaler_start(mock_run, mock_monitor_and_scale):
def test_autoscaler_del_agent():
autoscaler = AutoScaler(initial_agents=5, agent=flow)
autoscaler = AutoScaler(initial_agents=5, agent=agent)
autoscaler.del_agent()
assert len(autoscaler.agents_pool) == 4

@ -2,7 +2,7 @@ import pytest
from swarms.models import OpenAIChat
from swarms.models.anthropic import Anthropic
from swarms.structs.flow import Flow
from swarms.structs.agent import Agent
from swarms.swarms.groupchat import GroupChat, GroupChatManager
llm = OpenAIChat()
@ -21,12 +21,12 @@ class MockOpenAI:
# Create fixtures for agents and a sample message
@pytest.fixture
def agent1():
return Flow(name="Agent1", llm=llm)
return Agent(name="Agent1", llm=llm)
@pytest.fixture
def agent2():
return Flow(name="Agent2", llm=llm2)
return Agent(name="Agent2", llm=llm2)
@pytest.fixture
@ -155,7 +155,7 @@ def test_groupchat_manager_generate_reply():
# Test case to ensure GroupChat selects the next speaker correctly
def test_groupchat_select_speaker():
agent3 = Flow(name="agent3", llm=llm)
agent3 = Agent(name="agent3", llm=llm)
agents = [agent1, agent2, agent3]
groupchat = GroupChat(agents=agents, messages=[], max_round=10)
@ -175,7 +175,7 @@ def test_groupchat_select_speaker():
# Test case to ensure GroupChat handles underpopulated group correctly
def test_groupchat_underpopulated_group():
agent1 = Flow(name="agent1", llm=llm)
agent1 = Agent(name="agent1", llm=llm)
agents = [agent1]
groupchat = GroupChat(agents=agents, messages=[], max_round=10)

@ -2,7 +2,7 @@ import json
import os
import pytest
from unittest.mock import Mock
from swarms.structs import Flow
from swarms.structs import Agent
from swarms.models import OpenAIChat
from swarms.swarms.multi_agent_collab import (
MultiAgentCollaboration,
@ -11,8 +11,8 @@ from swarms.swarms.multi_agent_collab import (
)
# Sample agents for testing
agent1 = Flow(llm=OpenAIChat(), max_loops=2)
agent2 = Flow(llm=OpenAIChat(), max_loops=2)
agent1 = Agent(llm=OpenAIChat(), max_loops=2)
agent2 = Agent(llm=OpenAIChat(), max_loops=2)
agents = [agent1, agent2]
@ -43,7 +43,7 @@ def test_inject(collaboration):
def test_inject_agent(collaboration):
agent3 = Flow(llm=OpenAIChat(), max_loops=2)
agent3 = Agent(llm=OpenAIChat(), max_loops=2)
collaboration.inject_agent(agent3)
assert len(collaboration.agents) == 3
assert agent3 in collaboration.agents
