[REFACTOR][SequentialWorkflow] [Agent] [FEATS][AsyncWorkflow] [BUGFIX][ConcurrentWorkflow] [MISC][DOCS]

pull/362/head
Kye 12 months ago
parent e06cdd5fbb
commit a4c4c3b943

@ -200,9 +200,7 @@ task2 = Task(agent, "What's the weather in new york")
task3 = Task(agent, "What's the weather in london")
# Add tasks to the workflow
workflow.add(task1)
workflow.add(task2)
workflow.add(task3)
workflow.add(tasks=[task1, task2, task3])
# Run the workflow
workflow.run()
@ -413,9 +411,10 @@ print(out)
```python
import os
from swarms import Task, Agent, OpenAIChat
from dotenv import load_dotenv
from swarms.structs import Agent, OpenAIChat, Task
# Load the environment variables
load_dotenv()
@ -440,7 +439,13 @@ agent = Agent(
)
# Create a task
task = Task(agent, "Create a strategy to cut business costs by 40% this month")
task = Task(
    description=(
        "Generate a report on the top 3 biggest expenses for small"
        " businesses and how businesses can save 20%"
    ),
    agent=agent,
)
# Set the action and condition
task.set_action(my_action)

@ -1,90 +0,0 @@
`AbstractAgent` Class: A Deep Dive
========================
The `AbstractAgent` class is a fundamental building block in the design of AI systems. It encapsulates the behavior of an AI entity, allowing it to interact with other agents and perform actions. The class is designed to be flexible and extensible, enabling the creation of agents with diverse behaviors.
## Architecture
------------
The architecture of the `AbstractAgent` class is centered around three main components: the agent's name, tools, and memory.
- The `name` is a string that uniquely identifies the agent. This is crucial for communication between agents and for tracking their actions.
- The `tools` are a list of `Tool` objects that the agent uses to perform its tasks. These could include various AI models, data processing utilities, or any other resources that the agent needs to function. The `tools` method is used to initialize these tools.
- The `memory` is a `Memory` object that the agent uses to store and retrieve information. This could be used, for example, to remember past actions or to store the state of the environment. The `memory` method is used to initialize the memory.
The `AbstractAgent` class also includes several methods that define the agent's behavior. These methods are designed to be overridden in subclasses to implement specific behaviors.
## Methods
-------
### `reset`
The `reset` method is used to reset the agent's state. This could involve clearing the agent's memory, resetting its tools, or any other actions necessary to bring the agent back to its initial state. This method is abstract and must be overridden in subclasses.
### `run` and `_arun`
The `run` method is used to execute a task. The task is represented as a string, which could be a command, a query, or any other form of instruction that the agent can interpret. The `_arun` method is the asynchronous version of `run`, allowing tasks to be executed concurrently.
### `chat` and `_achat`
The `chat` method is used for communication between agents. It takes a list of messages as input, where each message is a dictionary. The `_achat` method is the asynchronous version of `chat`, allowing messages to be sent and received concurrently.
### `step` and `_astep`
The `step` method is used to advance the agent's state by one step in response to a message. The `_astep` method is the asynchronous version of `step`, allowing the agent's state to be updated concurrently.
## Usage Examples
--------------
### Example 1: Creating an Agent
```
from swarms.agents.base import AbstractAgent
agent = Agent(name="Agent1")
print(agent.name) # Output: Agent1
```
In this example, we create an instance of `AbstractAgent` named "Agent1" and print its name.
### Example 2: Initializing Tools and Memory
```
from swarms.agents.base import AbstractAgent
agent = Agent(name="Agent1")
tools = [Tool1(), Tool2(), Tool3()]
memory_store = Memory()
agent.tools(tools)
agent.memory(memory_store)
```
In this example, we initialize the tools and memory of "Agent1". The tools are a list of `Tool` instances, and the memory is a `Memory` instance.
### Example 3: Running an Agent
```
from swarms.agents.base import AbstractAgent
agent = Agent(name="Agent1")
task = "Task1"
agent.run(task)
```
In this example, we run "Agent1" with a task named "Task1".
Notes
-----
- The `AbstractAgent` class is an abstract class, which means it cannot be instantiated directly. Instead, it should be subclassed, and at least the `reset`, `run`, `chat`, and `step` methods should be overridden.
- The `run`, `chat`, and `step` methods are designed to be flexible and can be adapted to a wide range of tasks and behaviors. For example, the `run` method could be used to execute a machine learning model, the `chat` method could be used to send and receive messages in a chatbot, and the `step` method could be used to update the agent's state in a reinforcement learning environment.
- The `_arun`, `_achat`, and `_astep` methods are asynchronous versions of the `run`, `chat`, and `step` methods, respectively. They return a coroutine that can be awaited using the `await` keyword. This allows multiple tasks to be executed concurrently, improving the efficiency of the agent.
- The `tools` and `memory` methods are used to initialize the agent's tools and memory, respectively. These methods can be overridden in subclasses to initialize specific tools and memory structures.
- The `reset` method is used to reset the agent's state. This method can be overridden in subclasses to define specific reset behaviors. For example, in a reinforcement learning agent, the `reset` method might reinitialize the environment and clear any accumulated rewards.

@ -0,0 +1,124 @@
# swarms.agents
## 1. Introduction
`AbstractAgent` is an abstract class that serves as a foundation for implementing AI agents. An agent is an entity that can communicate with other agents and perform actions. The `AbstractAgent` class allows for customization in the implementation of the `receive` method, enabling different agents to define unique actions for receiving and processing messages.
`AbstractAgent` provides capabilities for managing tools and accessing memory, and has methods for running, chatting, and stepping through communication with other agents.
## 2. Class Definition
```python
from typing import Dict, List


class AbstractAgent:
    """An abstract class for AI agents.

    An agent can communicate with other agents and perform actions.
    Different agents can differ in what actions they perform in the `receive` method.

    Conceptually, an agent is composed of:
        Agent = llm + tools + memory
    """

    def __init__(self, name: str):
        """
        Args:
            name (str): Name of the agent.
        """
        self._name = name

    @property
    def name(self):
        """Get the name of the agent."""
        return self._name

    def tools(self, tools):
        """Initialize the agent's tools."""

    def memory(self, memory_store):
        """Initialize the agent's memory."""

    def reset(self):
        """(Abstract method) Reset the agent."""

    def run(self, task: str):
        """Run the agent once."""

    def _arun(self, task: str):
        """Run the agent asynchronously."""

    def chat(self, messages: List[Dict]):
        """Chat with the agent."""

    def _achat(self, messages: List[Dict]):
        """Chat with the agent asynchronously."""

    def step(self, message: str):
        """Step through the agent."""

    def _astep(self, message: str):
        """Step through the agent asynchronously."""
```
## 3. Functionality and Usage
The `AbstractAgent` class represents a generic AI agent and provides a set of methods to interact with it.
To create an instance of an agent, the `name` of the agent should be specified.
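For example, a minimal instantiation (assuming the class is imported from `swarms.agents.base`, the import path used elsewhere in these docs) looks like this:
```python
from swarms.agents.base import AbstractAgent

agent = AbstractAgent(name="agent_1")
print(agent.name)  # agent_1
```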
### Core Methods
#### 1. `reset`
The `reset` method allows the agent to be reset to its initial state.
```python
agent.reset()
```
#### 2. `run`
The `run` method allows the agent to perform a specific task.
```python
agent.run('some_task')
```
#### 3. `chat`
The `chat` method enables communication with the agent through a series of messages.
```python
messages = [{'id': 1, 'text': 'Hello, agent!'}, {'id': 2, 'text': 'How are you?'}]
agent.chat(messages)
```
#### 4. `step`
The `step` method allows the agent to process a single message.
```python
agent.step('Hello, agent!')
```
### Asynchronous Methods
The class also provides asynchronous variants of the core methods.
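As a minimal sketch, the underscore-prefixed variants can be implemented as coroutines in a subclass and awaited with `asyncio`. The `EchoAgent` subclass below is hypothetical and only illustrates the pattern:
```python
import asyncio

from swarms.agents.base import AbstractAgent


class EchoAgent(AbstractAgent):
    # Hypothetical subclass: implements the async run variant as a coroutine
    async def _arun(self, task: str):
        return f"{self.name} handled: {task}"


async def main():
    agent = EchoAgent(name="agent_1")
    # Await two tasks concurrently
    results = await asyncio.gather(
        agent._arun("summarize the report"),
        agent._arun("draft a follow-up email"),
    )
    print(results)


asyncio.run(main())
```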
### Additional Functionality
Additional functionalities for agent initialization and management of tools and memory are also provided.
```python
agent.tools(some_tools)
agent.memory(some_memory_store)
```
## 4. Additional Information and Tips
When implementing a new agent using the `AbstractAgent` class, ensure that the `receive` method is overridden to define the specific behavior of the agent upon receiving messages.
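As a rough sketch, a subclass might override `receive` along with the core methods. Note that `receive` does not appear in the class definition above, so the signature used here is an assumption:
```python
from swarms.agents.base import AbstractAgent


class LoggingAgent(AbstractAgent):
    """Hypothetical agent that records every incoming message."""

    def __init__(self, name: str):
        super().__init__(name)
        self.inbox = []

    def receive(self, sender: str, message: str):
        # Assumed signature: store the message for later processing
        self.inbox.append((sender, message))

    def run(self, task: str):
        return f"{self.name} is working on: {task}"
```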
## 5. References and Resources
For further exploration and understanding of AI agents and agent communication, refer to the relevant literature and research on this topic.

@ -0,0 +1,120 @@
# The Module/Class Name: Message
In the swarms.agents framework, the `Message` class is used to represent a message with a timestamp and optional metadata.
## Overview and Introduction
The `Message` class is a fundamental component that enables the representation of messages within an agent system. Messages contain essential information such as the sender, content, timestamp, and optional metadata.
## Class Definition
### Constructor: `__init__`
The constructor of the `Message` class takes three parameters:
1. `sender` (str): The sender of the message.
2. `content` (str): The content of the message.
3. `metadata` (dict or None): Optional metadata associated with the message.
### Methods
1. `__repr__(self)`: Returns a string representation of the `Message` object, including the timestamp, sender, and content.
```python
import datetime


class Message:
    """
    Represents a message with a timestamp and optional metadata.

    Usage
    --------------
    mes = Message(
        sender="Kye",
        content="message"
    )

    print(mes)
    """

    def __init__(self, sender, content, metadata=None):
        self.timestamp = datetime.datetime.now()
        self.sender = sender
        self.content = content
        self.metadata = metadata or {}

    def __repr__(self):
        """
        __repr__ returns the string representation of the Message object.

        Returns:
            (str) A string containing the timestamp, sender, and content of the message.
        """
        return f"{self.timestamp} - {self.sender}: {self.content}"
```
## Functionality and Usage
The `Message` class represents a message in the agent system. Upon initialization, the `timestamp` is set to the current date and time, and the `metadata` is set to an empty dictionary if no metadata is provided.
### Usage Example 1
Creating a `Message` object and displaying its string representation.
```python
mes = Message(
sender = "Kye",
content = "Hello! How are you?"
)
print(mes)
```
Output:
```
2023-09-20 13:45:00 - Kye: Hello! How are you?
```
### Usage Example 2
Creating a `Message` object with metadata.
```python
metadata = {"priority": "high", "category": "urgent"}
mes_with_metadata = Message(
sender = "Alice",
content = "Important update",
metadata = metadata
)
print(mes_with_metadata)
```
Output:
```
2023-09-20 13:46:00 - Alice: Important update
```
### Usage Example 3
Creating a `Message` object without providing metadata.
```python
mes_no_metadata = Message(
sender = "Bob",
content = "Reminder: Meeting at 2PM"
)
print(mes_no_metadata)
```
Output:
```
2023-09-20 13:47:00 - Bob: Reminder: Meeting at 2PM
```
## Additional Information and Tips
When creating a new `Message` object, ensure that the required parameters `sender` and `content` are provided. The `timestamp` will automatically be assigned the current date and time. Optional `metadata` can be included to provide additional context or information associated with the message.
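For instance, the attributes assigned in the constructor can be read back directly; a small sketch:
```python
mes = Message(
    sender="Kye",
    content="Hello! How are you?",
    metadata={"priority": "high"},
)

print(mes.sender)                # Kye
print(mes.metadata["priority"])  # high
print(mes.timestamp)             # datetime the message was created
```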
## References and Resources
For further information on the `Message` class and its usage, refer to the official swarms.agents documentation and relevant tutorials related to message handling and communication within the agent system.

@ -0,0 +1,79 @@
# Module/Class Name: OmniModalAgent
The `OmniModalAgent` class is a module built around a large language model (LLM) together with plans, tasks, and tools. It is designed to be a multi-modal chatbot that uses various AI-based capabilities to fulfill user requests.
It has the following architecture:
1. Language Model (LLM).
2. Chat Planner - Plans
3. Task Executor - Tasks
4. Tools - Tools
![OmniModalAgent](https://source.unsplash.com/random)
---
### Usage
```python
from swarms import OmniModalAgent, OpenAIChat

llm = OpenAIChat()
agent = OmniModalAgent(llm)
response = agent.run("Hello, how are you? Create an image of how you are doing!")
```
---
---
### Initialization
The constructor of `OmniModalAgent` class takes two main parameters:
- `llm`: A `BaseLanguageModel` that represents the language model
- `tools`: A List of `BaseTool` instances that are used by the agent for fulfilling different requests.
```python
def __init__(
    self,
    llm: BaseLanguageModel,
    # tools: List[BaseTool]
):
```
---
### Methods
The class has two main methods:
1. `run`: This method takes an input string and executes various plans and tasks using the provided tools. Ultimately, it generates a response based on the user's input and returns it.
- Parameters:
- `input`: A string representing the user's input text.
- Returns:
- A string representing the response.
Usage:
```python
response = agent.run("Hello, how are you? Create an image of how your are doing!")
```
2. `chat`: This method is used to simulate a chat dialog with the agent. It takes the user's message and returns the response (or streams the response word by word if required).
- Parameters:
- `msg` (optional): A string representing the message to send to the agent.
- `streaming` (optional): A boolean specifying whether to stream the response.
- Returns:
- A string representing the response from the agent.
Usage:
```python
response = agent.chat("Hello")
```
---
### Streaming Response
The class provides a method `_stream_response` that can be used to get the response token by token (i.e. word by word). It yields individual tokens from the response.
Usage:
```python
for token in agent._stream_response(response):
    print(token)
```

@ -0,0 +1,113 @@
# ToolAgent Documentation
### Overview and Introduction
The `ToolAgent` class represents an intelligent agent capable of performing a specific task using a pre-trained model and tokenizer. It leverages the Transformer models of the Hugging Face `transformers` library to generate outputs that adhere to a specific JSON schema. This provides developers with a flexible tool for creating bots, text generators, and conversational AI agents. The `ToolAgent` operates based on a JSON schema provided by you, the user. Using the schema, the agent applies the provided model and tokenizer to generate structured text data that matches the specified format.
The primary objective of the `ToolAgent` class is to improve the efficiency of developers and AI practitioners by simplifying the generation of structured outputs, without requiring them to manage the complexities of the model and tokenizer directly.
### Class Definition
The `ToolAgent` class has the following definition:
```python
class ToolAgent(AbstractLLM):
    def __init__(
        self,
        name: str,
        description: str,
        model: Any,
        tokenizer: Any,
        json_schema: Any,
        *args,
        **kwargs,
    )

    def run(self, task: str, *args, **kwargs)

    def __call__(self, task: str, *args, **kwargs)
```
### Arguments
The `ToolAgent` class takes the following arguments:
| Argument | Type | Description |
| --- | --- | --- |
| name | str | The name of the tool agent. |
| description | str | A description of the tool agent. |
| model | Any | The model used by the tool agent (e.g., `transformers.AutoModelForCausalLM`). |
| tokenizer | Any | The tokenizer used by the tool agent (e.g., `transformers.AutoTokenizer`). |
| json_schema | Any | The JSON schema used by the tool agent. |
| *args | - | Variable-length arguments. |
| **kwargs | - | Keyword arguments. |
### Methods
`ToolAgent` exposes the following methods:
#### `run(self, task: str, *args, **kwargs) -> Any`
- Description: Runs the tool agent for a specific task.
- Parameters:
- `task` (str): The task to be performed by the tool agent.
- `*args`: Variable-length argument list.
- `**kwargs`: Arbitrary keyword arguments.
- Returns: The output of the tool agent.
- Raises: Exception if an error occurs during the execution of the tool agent.
#### `__call__(self, task: str, *args, **kwargs) -> Any`
- Description: Calls the tool agent to perform a specific task.
- Parameters:
- `task` (str): The task to be performed by the tool agent.
- `*args`: Variable-length argument list.
- `**kwargs`: Arbitrary keyword arguments.
- Returns: The output of the tool agent.
### Usage Example
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

from swarms import ToolAgent

# Creating a model and tokenizer
model = AutoModelForCausalLM.from_pretrained("databricks/dolly-v2-12b")
tokenizer = AutoTokenizer.from_pretrained("databricks/dolly-v2-12b")

# Defining a JSON schema
json_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "number"},
        "is_student": {"type": "boolean"},
        "courses": {
            "type": "array",
            "items": {"type": "string"},
        },
    },
}

# Defining a task
task = "Generate a person's information based on the following schema:"

# Creating the ToolAgent instance (the class definition above lists `name`
# and `description` as required arguments)
agent = ToolAgent(
    name="Person Information Generator",
    description="Generates structured person data that matches the JSON schema",
    model=model,
    tokenizer=tokenizer,
    json_schema=json_schema,
)

# Running the tool agent
generated_data = agent.run(task)

# Accessing and printing the generated data
print(generated_data)
```
### Additional Information and Tips
When using the `ToolAgent`, it is important to ensure compatibility between the provided model, tokenizer, and the JSON schema. Additionally, any errors encountered during the execution of the tool agent are propagated as exceptions. Handling such exceptions appropriately can improve the robustness of the tool agent usage.
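For example, since `run` propagates errors as exceptions, a simple pattern is to wrap the call in `try`/`except`. This sketch reuses the `agent` and `task` from the usage example above:
```python
try:
    generated_data = agent.run(task)
    print(generated_data)
except Exception as error:
    # Log the failure and decide whether to retry, fall back, or re-raise
    print(f"ToolAgent failed on task {task!r}: {error}")
```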
### References and Resources
For further exploration and understanding of the underlying Transformer-based models and tokenizers, refer to the Hugging Face `transformers` library documentation and examples. Additionally, for JSON schema modeling, you can refer to the official JSON Schema specification and examples.
This documentation provides a comprehensive guide on using the `ToolAgent` class from `swarms` library, and it is recommended to refer back to this document when utilizing the `ToolAgent` for developing your custom conversational agents or text generation tools.

@ -0,0 +1,78 @@
# WorkerClass Documentation
## Overview
The Worker class represents an autonomous agent that can perform tasks through function calls or by running a chat. It can be used to create applications that demand effective user interactions like search engines, human-like conversational bots, or digital assistants.
The `Worker` class is part of the `swarms.agents` codebase. This module is largely used in Natural Language Processing (NLP) projects where the agent undertakes conversations and other language-specific operations.
## Class Definition
The class `Worker` has the following arguments:
| Argument | Type | Default Value | Description |
|-----------------------|---------------|----------------------------------|----------------------------------------------------|
| name | str | "Worker" | Name of the agent. |
| role | str | "Worker in a swarm" | Role of the agent. |
| external_tools | list | None | List of external tools available to the agent. |
| human_in_the_loop | bool | False | Determines whether human interaction is required. |
| temperature | float | 0.5 | Temperature for the autonomous agent. |
| llm | None | None | Language model. |
| openai_api_key | str | None | OpenAI API key. |
| tools | List[Any] | None | List of tools available to the agent. |
| embedding_size | int | 1536 | Size of the word embeddings. |
| search_kwargs | dict | {"k": 8} | Search parameters. |
| args | Multiple | | Additional arguments that can be passed. |
| kwargs | Multiple | | Additional keyword arguments that can be passed. |
## Usage
#### Example 1: Creating and Running an Agent
```python
from swarms import Worker
worker = Worker(
    name="My Worker",
    role="Worker",
    external_tools=[MyTool1(), MyTool2()],
    human_in_the_loop=False,
    temperature=0.5,
    llm=some_language_model,
    openai_api_key="my_key",
)
worker.run("What's the weather in Miami?")
```
#### Example 2: Receiving and Sending Messages
```python
worker.receieve("User", "Hello there!")
worker.receieve("User", "Can you tell me something about history?")
worker.send()
```
#### Example 3: Setting up Tools
```python
external_tools = [MyTool1(), MyTool2()]
worker = Worker(
    name="My Worker",
    role="Worker",
    external_tools=external_tools,
    human_in_the_loop=False,
    temperature=0.5,
)
```
## Additional Information and Tips
- The class allows the setting up of tools for the worker to operate effectively. It provides setup facilities for essential computing infrastructure, such as the agent's memory and language model.
- By setting the `human_in_the_loop` parameter to True, interactions with the worker can be made more user-centric.
- The `openai_api_key` argument can be provided for leveraging the OpenAI infrastructure and services.
- A qualified language model can be passed as an instance of the `llm` object, which can be useful when integrating with state-of-the-art text generation engines.
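For example, a minimal sketch of passing an `OpenAIChat` instance as the `llm` (the environment-variable lookup for the API key is illustrative):
```python
import os

from swarms import OpenAIChat, Worker

llm = OpenAIChat(openai_api_key=os.getenv("OPENAI_API_KEY"))

worker = Worker(
    name="My Worker",
    role="Worker",
    human_in_the_loop=False,
    temperature=0.5,
    llm=llm,
)
```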
## References and Resources
- [OpenAI APIs](https://openai.com)
- [Models and Languages at HuggingFace](https://huggingface.co/models)
- [Deep Learning and Language Modeling at the Allen Institute for AI](https://allenai.org)

@ -60,12 +60,12 @@ nav:
- Contributing: "contributing.md"
- Swarms:
- Overview: "swarms/index.md"
- swarms.workers:
- Overview: "swarms/workers/index.md"
- AbstractWorker: "swarms/workers/abstract_worker.md"
- swarms.agents:
- AbstractAgent: "swarms/agents/abstract_agent.md"
- OmniModalAgent: "swarms/agents/omni_agent.md"
- Agents:
- WorkerAgent: "swarms/utils/workeragent.md"
- OmniAgent: "swarms/utils/omni_agent.md"
- AbstractAgent: "swarms/utils/abstractagent.md"
- ToolAgent: "swarms/utils/toolagent.md"
- swarms.models:
- Language:
- BaseLLM: "swarms/models/base_llm.md"

@ -12,15 +12,18 @@ agent = Agent(llm=llm, max_loops=1)
# Create a workflow
workflow = ConcurrentWorkflow(max_workers=5)
task = (
    "Generate a report on how small businesses spend money and how"
    " can they cut 40 percent of their costs"
)
# Create tasks
task1 = Task(agent, "What's the weather in miami")
task2 = Task(agent, "What's the weather in new york")
task3 = Task(agent, "What's the weather in london")
task1 = Task(agent, task)
task2 = Task(agent, task)
task3 = Task(agent, task)
# Add tasks to the workflow
workflow.add(task1)
workflow.add(task2)
workflow.add(task3)
workflow.add(tasks=[task1, task2, task3])
# Run the workflow
workflow.run()

@ -1,6 +1,4 @@
from swarms.models import OpenAIChat
from swarms.structs import Agent
from swarms.structs.sequential_workflow import SequentialWorkflow
from swarms import OpenAIChat, Agent, Task, SequentialWorkflow
# Example usage
llm = OpenAIChat(
@ -9,21 +7,38 @@ llm = OpenAIChat(
)
# Initialize the Agent with the language agent
flow1 = Agent(llm=llm, max_loops=1, dashboard=False)
agent1 = Agent(
    agent_name="John the writer",
    llm=llm,
    max_loops=0,
    dashboard=False,
)
task1 = Task(
    agent=agent1,
    description="Write a 1000 word blog about the future of AI",
)
# Create another Agent for a different task
flow2 = Agent(llm=llm, max_loops=1, dashboard=False)
agent2 = Agent("Summarizer", llm=llm, max_loops=1, dashboard=False)
task2 = Task(
    agent=agent2,
    description="Summarize the generated blog",
)
# Create the workflow
workflow = SequentialWorkflow(max_loops=1)
# Add tasks to the workflow
workflow.add(
"Generate a 10,000 word blog on health and wellness.", flow1
workflow = SequentialWorkflow(
    name="Blog Generation Workflow",
    description=(
        "A workflow to generate and summarize a blog about the future"
        " of AI"
    ),
    max_loops=1,
    autosave=True,
    dashboard=False,
)
# Suppose the next task takes the output of the first task as input
workflow.add("Summarize the generated blog", flow2)
# Add tasks to the workflow
workflow.add(tasks=[task1, task2])
# Run the workflow
workflow.run()

@ -1,8 +1,8 @@
from swarms.structs import Task, Agent
from swarms.models import OpenAIChat
from dotenv import load_dotenv
import os
from dotenv import load_dotenv
from swarms.structs import Agent, OpenAIChat, Task
# Load the environment variables
load_dotenv()
@ -27,7 +27,13 @@ agent = Agent(
)
# Create a task
task = Task(description="What's the weather in miami", agent=agent)
task = Task(
    description=(
        "Generate a report on the top 3 biggest expenses for small"
        " businesses and how businesses can save 20%"
    ),
    agent=agent,
)
# Set the action and condition
task.set_action(my_action)

@ -9,20 +9,11 @@ from scripts.auto_tests_docs.docs import DOCUMENTATION_WRITER_SOP
from swarms import OpenAIChat
##########
from swarms.structs.task import Task
from swarms.structs.swarm_net import SwarmNetwork
from swarms.structs.nonlinear_workflow import NonlinearWorkflow
from swarms.structs.recursive_workflow import RecursiveWorkflow
from swarms.structs.groupchat import GroupChat, GroupChatManager
from swarms.structs.base_workflow import BaseWorkflow
from swarms.structs.concurrent_workflow import ConcurrentWorkflow
from swarms.structs.base import BaseStructure
from swarms.structs.schemas import (
Artifact,
ArtifactUpload,
StepInput,
TaskInput,
)
from swarms.agents.base import AbstractAgent
from swarms.structs.message import Message
from swarms.agents.omni_modal_agent import OmniModalAgent
from swarms.agents.tool_agent import ToolAgent
from swarms.agents.worker_agent import WorkerAgent
####################
load_dotenv()
@ -49,14 +40,14 @@ def process_documentation(cls):
# Process with OpenAI model (assuming the model's __call__ method takes this input and returns processed content)
processed_content = model(
DOCUMENTATION_WRITER_SOP(input_content, "swarms.structs")
DOCUMENTATION_WRITER_SOP(input_content, "swarms.agents")
)
# doc_content = f"# {cls.__name__}\n\n{processed_content}\n"
doc_content = f"{processed_content}\n"
# Create the directory if it doesn't exist
dir_path = "docs/swarms/structs"
dir_path = "docs/swarms/agents"
os.makedirs(dir_path, exist_ok=True)
# Write the processed documentation to a Markdown file
@ -69,19 +60,11 @@ def process_documentation(cls):
def main():
classes = [
Task,
SwarmNetwork,
NonlinearWorkflow,
RecursiveWorkflow,
GroupChat,
GroupChatManager,
BaseWorkflow,
ConcurrentWorkflow,
BaseStructure,
Artifact,
ArtifactUpload,
StepInput,
TaskInput,
AbstractAgent,
Message,
OmniModalAgent,
ToolAgent,
WorkerAgent,
]
threads = []
for cls in classes:
@ -95,7 +78,7 @@ def main():
for thread in threads:
thread.join()
print("Documentation generated in 'swarms.structs' directory.")
print("Documentation generated in 'swarms.agents' directory.")
if __name__ == "__main__":

@ -28,4 +28,4 @@ def generate_file_list(directory, output_file):
# Use the function to generate the file list
generate_file_list("docs/swarms/structs", "file_list.txt")
generate_file_list("docs/swarms/agents", "file_list.txt")

@ -1,13 +1,35 @@
from swarms.agents.message import Message
from swarms.agents.base import AbstractAgent
from swarms.agents.tool_agent import ToolAgent
from swarms.agents.simple_agent import SimpleAgent
from swarms.agents.omni_modal_agent import OmniModalAgent
from swarms.agents.simple_agent import SimpleAgent
from swarms.agents.stopping_conditions import (
check_cancelled,
check_complete,
check_done,
check_end,
check_error,
check_exit,
check_failure,
check_finished,
check_stopped,
check_success,
)
from swarms.agents.tool_agent import ToolAgent
from swarms.agents.worker_agent import WorkerAgent
__all__ = [
"Message",
"AbstractAgent",
"ToolAgent",
"SimpleAgent",
"OmniModalAgent",
"check_done",
"check_finished",
"check_complete",
"check_success",
"check_failure",
"check_error",
"check_stopped",
"check_cancelled",
"check_exit",
"check_end",
"WorkerAgent",
]

@ -10,7 +10,7 @@ from langchain_experimental.autonomous_agents.hugginggpt.task_planner import (
)
from transformers import load_tool
from swarms.agents.message import Message
from swarms.structs.message import Message
class OmniModalAgent:

@ -0,0 +1,38 @@
def check_done(s):
    return "<DONE>" in s

def check_finished(s):
    return "finished" in s

def check_complete(s):
    return "complete" in s

def check_success(s):
    return "success" in s

def check_failure(s):
    return "failure" in s

def check_error(s):
    return "error" in s

def check_stopped(s):
    return "stopped" in s

def check_cancelled(s):
    return "cancelled" in s

def check_exit(s):
    return "exit" in s

def check_end(s):
    return "end" in s

@ -1,7 +1,3 @@
"""
Tool Agent
"""
from swarms.tools.format_tools import Jsonformer
from typing import Any
from swarms.models.base_llm import AbstractLLM

@ -1,7 +1,7 @@
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from swarms.agents.message import Message
from swarms.structs.message import Message
class Mistral:

@ -32,6 +32,7 @@ from swarms.structs.block_wrapper import block
from swarms.structs.graph_workflow import GraphWorkflow
from swarms.structs.step import Step
from swarms.structs.plan import Plan
from swarms.structs.message import Message
__all__ = [
@ -66,4 +67,5 @@ __all__ = [
"GraphWorkflow",
"Step",
"Plan",
"Message",
]

@ -1,9 +1,7 @@
import asyncio
import inspect
import json
import logging
import random
import re
import time
import uuid
from typing import Any, Callable, Dict, List, Optional, Tuple
@ -13,23 +11,19 @@ from termcolor import colored
from swarms.memory.base_vectordb import VectorDatabase
from swarms.prompts.agent_system_prompts import (
AGENT_SYSTEM_PROMPT_3,
agent_system_prompt_2,
)
from swarms.prompts.multi_modal_autonomous_instruction_prompt import (
MULTI_MODAL_AUTO_AGENT_SYSTEM_PROMPT_1,
)
from swarms.prompts.tools import (
SCENARIOS,
)
from swarms.tools.tool import BaseTool
from swarms.tools.tool_func_doc_scraper import scrape_tool_func_docs
from swarms.utils.code_interpreter import SubprocessCodeInterpreter
from swarms.utils.data_to_text import data_to_text
from swarms.utils.logger import logger
from swarms.utils.parse_code import (
extract_code_from_markdown,
)
from swarms.utils.pdf_to_text import pdf_to_text
from swarms.utils.token_count_tiktoken import limit_tokens_from_string
from swarms.utils.data_to_text import data_to_text
# Utils
@ -112,26 +106,17 @@ class Agent:
filtered_run: Run the agent with filtered responses
interactive_run: Run the agent in interactive mode
streamed_generation: Stream the generation of the response
get_llm_params: Get the llm parameters
save_state: Save the state
load_state: Load the state
get_llm_init_params: Get the llm init parameters
get_tool_description: Get the tool description
find_tool_by_name: Find a tool by name
extract_tool_commands: Extract the tool commands
execute_tools: Execute the tools
parse_and_execute_tools: Parse and execute the tools
truncate_history: Truncate the history
add_task_to_memory: Add the task to the memory
add_message_to_memory: Add the message to the memory
add_message_to_memory_and_truncate: Add the message to the memory and truncate
parse_tool_docs: Parse the tool docs
print_dashboard: Print the dashboard
loop_count_print: Print the loop count
streaming: Stream the content
_history: Generate the history
_dynamic_prompt_setup: Setup the dynamic prompt
agent_system_prompt_2: Agent system prompt 2
run_async: Run the agent asynchronously
run_async_concurrent: Run the agent asynchronously and concurrently
run_async_concurrent: Run the agent asynchronously and concurrently
@ -182,7 +167,7 @@ class Agent:
pdf_path: Optional[str] = None,
list_of_pdf: Optional[str] = None,
tokenizer: Optional[Any] = None,
memory: Optional[VectorDatabase] = None,
long_term_memory: Optional[VectorDatabase] = None,
preset_stopping_token: Optional[bool] = False,
traceback: Any = None,
traceback_handlers: Any = None,
@ -210,9 +195,7 @@ class Agent:
self.context_length = context_length
self.sop = sop
self.sop_list = sop_list
self.sop_list = []
self.tools = tools or []
self.tool_docs = []
self.tools = tools
self.system_prompt = system_prompt
self.agent_name = agent_name
self.agent_description = agent_description
@ -226,7 +209,7 @@ class Agent:
self.pdf_path = pdf_path
self.list_of_pdf = list_of_pdf
self.tokenizer = tokenizer
self.memory = memory
self.long_term_memory = long_term_memory
self.preset_stopping_token = preset_stopping_token
self.traceback = traceback
self.traceback_handlers = traceback_handlers
@ -256,13 +239,7 @@ class Agent:
if preset_stopping_token:
self.stopping_token = "<DONE>"
# If tools exist then add the tool docs usage to the sop
if self.tools:
self.sop_list.append(
self.tools_prompt_prep(self.tool_docs, SCENARIOS)
)
# self.short_memory_test = Conversation(time_enabled=True)
# self.short_memory = Conversation(time_enabled=True)
# If the docs exist then ingest the docs
if self.docs:
@ -316,93 +293,6 @@ class Agent:
"""Format the template with the provided kwargs using f-string interpolation."""
return template.format(**kwargs)
def get_llm_init_params(self) -> str:
"""Get LLM init params"""
init_signature = inspect.signature(self.llm.__init__)
params = init_signature.parameters
params_str_list = []
for name, param in params.items():
if name == "self":
continue
if hasattr(self.llm, name):
value = getattr(self.llm, name)
else:
value = self.llm.__dict__.get(name, "Unknown")
params_str_list.append(
f" {name.capitalize().replace('_', ' ')}: {value}"
)
return "\n".join(params_str_list)
def get_tool_description(self):
"""Get the tool description"""
if self.tools:
try:
tool_descriptions = []
for tool in self.tools:
description = f"{tool.name}: {tool.description}"
tool_descriptions.append(description)
return "\n".join(tool_descriptions)
except Exception as error:
print(
f"Error getting tool description: {error} try"
" adding a description to the tool or removing"
" the tool"
)
else:
return "No tools available"
def find_tool_by_name(self, name: str):
"""Find a tool by name"""
for tool in self.tools:
if tool.name == name:
return tool
def extract_tool_commands(self, text: str):
"""
Extract the tool commands from the text
Example:
```json
{
"tool": "tool_name",
"params": {
"tool1": "inputs",
"param2": "value2"
}
}
```
"""
# Regex to find JSON like strings
pattern = r"```json(.+?)```"
matches = re.findall(pattern, text, re.DOTALL)
json_commands = []
for match in matches:
try:
json_commands = json.loads(match)
json_commands.append(json_commands)
except Exception as error:
print(f"Error parsing JSON command: {error}")
def execute_tools(self, tool_name, params):
"""Execute the tool with the provided params"""
tool = self.tool_find_by_name(tool_name)
if tool:
# Execute the tool with the provided parameters
tool_result = tool.run(**params)
print(tool_result)
def parse_and_execute_tools(self, response: str):
"""Parse and execute the tools"""
json_commands = self.extract_tool_commands(response)
for command in json_commands:
tool_name = command.get("tool")
params = command.get("parmas", {})
self.execute_tools(tool_name, params)
def truncate_history(self):
"""
Take the history and truncate it to fit into the model context length
@ -446,12 +336,6 @@ class Agent:
self.short_memory[-1].append(message)
self.truncate_history()
def parse_tool_docs(self):
"""Parse the tool docs"""
for tool in self.tools:
docs = self.tool_docs.append(scrape_tool_func_docs(tool))
return str(docs)
def print_dashboard(self, task: str):
"""Print dashboard"""
model_config = self.get_llm_init_params()
@ -578,14 +462,11 @@ class Agent:
combined_prompt = f"{dynamic_prompt}\n{task}"
return combined_prompt
def agent_system_prompt_2(self):
"""Agent system prompt 2"""
return agent_system_prompt_2(self.agent_name)
def run(
self,
task: Optional[str] = None,
img: Optional[str] = None,
*args,
**kwargs,
):
"""
@ -638,9 +519,7 @@ class Agent:
self.dynamic_temperature()
# Preparing the prompt
task = self.agent_history_prompt(
AGENT_SYSTEM_PROMPT_3, response
)
task = self.agent_history_prompt(history=response)
attempt = 0
while attempt < self.retry_attempts:
@ -663,10 +542,6 @@ class Agent:
if self.code_interpreter:
self.run_code(response)
# If there are any tools then parse and execute them
if self.tools:
self.parse_and_execute_tools(response)
# If interactive mode is enabled then print the response and get user input
if self.interactive:
print(f"AI: {response}")
@ -712,7 +587,7 @@ class Agent:
return response
except Exception as error:
print(f"Error running agent: {error}")
logger.error(f"Error running agent: {error}")
raise
def __call__(self, task: str, img: str = None, *args, **kwargs):
@ -740,8 +615,7 @@ class Agent:
def agent_history_prompt(
self,
system_prompt: str = AGENT_SYSTEM_PROMPT_3,
history=None,
history: str = None,
):
"""
Generate the agent history prompt
@ -754,7 +628,7 @@ class Agent:
str: The agent history prompt
"""
if self.sop:
system_prompt = system_prompt or self.system_prompt
system_prompt = self.system_prompt
agent_history_prompt = f"""
SYSTEM_PROMPT: {system_prompt}
@ -767,7 +641,7 @@ class Agent:
"""
return agent_history_prompt
else:
system_prompt = system_prompt or self.system_prompt
system_prompt = self.system_prompt
agent_history_prompt = f"""
SYSTEM_PROMPT: {system_prompt}
@ -777,7 +651,7 @@ class Agent:
"""
return agent_history_prompt
def agent_memory_prompt(self, query, prompt):
def long_term_memory_prompt(self, query: str, prompt: str):
"""
Generate the agent long term memory prompt
@ -788,17 +662,13 @@ class Agent:
Returns:
str: The agent history prompt
"""
context_injected_prompt = prompt
if self.memory:
ltr = self.memory.query(query)
ltr = self.long_term_memory.query(query)
context_injected_prompt = f"""{prompt}
return f"""{prompt}
################ CONTEXT ####################
{ltr}
"""
return context_injected_prompt
async def run_concurrent(self, tasks: List[str], **kwargs):
"""
Run a batch of tasks concurrently and handle an infinite level of task inputs.
@ -1045,45 +915,6 @@ class Agent:
print()
return response
def get_llm_params(self):
"""
Extracts and returns the parameters of the llm object for serialization.
It assumes that the llm object has an __init__ method
with parameters that can be used to recreate it.
"""
if not hasattr(self.llm, "__init__"):
return None
init_signature = inspect.signature(self.llm.__init__)
params = init_signature.parameters
llm_params = {}
for name, param in params.items():
if name == "self":
continue
if hasattr(self.llm, name):
value = getattr(self.llm, name)
if isinstance(
value,
(
str,
int,
float,
bool,
list,
dict,
tuple,
type(None),
),
):
llm_params[name] = value
else:
llm_params[name] = str(
value
) # For non-serializable objects, save their string representation.
return llm_params
def save_state(self, file_path: str) -> None:
"""
Saves the current state of the agent to a JSON file, including the llm parameters.
@ -1277,85 +1108,40 @@ class Agent:
text = limit_tokens_from_string(text, num_limits)
return text
def tools_prompt_prep(
self, docs: str = None, scenarios: str = SCENARIOS
):
"""
Tools prompt prep
def ingest_docs(self, docs: List[str], *args, **kwargs):
"""Ingest the docs into the memory
Args:
docs (str, optional): _description_. Defaults to None.
scenarios (str, optional): _description_. Defaults to None.
docs (List[str]): _description_
Returns:
_type_: _description_
"""
PROMPT = f"""
# Task
You will be provided with a list of APIs. These APIs will have a
description and a list of parameters and return types for each tool. Your
task involves creating varied, complex, and detailed user scenarios
that require to call API calls. You must select what api to call based on
the context of the task and the scenario.
For instance, given the APIs: SearchHotels, BookHotel, CancelBooking,
GetNFLNews. Given that GetNFLNews is explicitly provided, your scenario
should articulate something akin to:
"The user wants to see if the Broncos won their last game (GetNFLNews).
They then want to see if that qualifies them for the playoffs and who
they will be playing against (GetNFLNews). The Broncos did make it into
the playoffs, so the user wants watch the game in person. They want to
look for hotels where the playoffs are occurring (GetNBANews +
SearchHotels). After looking at the options, the user chooses to book a
3-day stay at the cheapest 4-star option (BookHotel)."
13
This scenario exemplifies a scenario using 5 API calls. The scenario is
complex, detailed, and concise as desired. The scenario also includes two
APIs used in tandem, the required API, GetNBANews to search for the
playoffs location and SearchHotels to find hotels based on the returned
location. Usage of multiple APIs in tandem is highly desirable and will
receive a higher score. Ideally each scenario should contain one or more
instances of multiple APIs being used in tandem.
Note that this scenario does not use all the APIs given and re-uses the "
GetNBANews" API. Re-using APIs is allowed, but each scenario should
involve as many different APIs as the user demands. Note that API usage is also included
in the scenario, but exact parameters ar necessary. You must use a
different combination of APIs for each scenario. All APIs must be used in
at least one scenario. You can only use the APIs provided in the APIs
section.
Note that API calls are not explicitly mentioned and their uses are
included in parentheses. This behaviour should be mimicked in your
response.
Output the tool usage in a strict json format with the function name and input to
the function. For example, Deliver your response in this format:
{scenarios}
# APIs
{docs}
# Response
"""
return PROMPT
for doc in docs:
data = data_to_text(doc)
def ingest_docs(self, docs: List[str], *args, **kwargs):
"""Ingest the docs into the memory
return self.short_memory.append(data)
def ingest_pdf(self, pdf: str):
"""Ingest the pdf into the memory
Args:
docs (List[str]): _description_
pdf (str): _description_
Returns:
_type_: _description_
"""
for doc in docs:
data = data_to_text(doc)
text = pdf_to_text(pdf)
return self.short_memory.append(text)
return self.short_memory.append(data)
def receieve_mesage(self, name: str, message: str):
"""Receieve a message"""
message = f"{name}: {message}"
return self.short_memory.append(message)
def send_agent_message(
self, agent_name: str, message: str, *args, **kwargs
):
"""Send a message to the agent"""
message = f"{agent_name}: {message}"
return self.run(message, *args, **kwargs)

@ -0,0 +1,103 @@
import asyncio
from dataclasses import dataclass, field
from typing import Any, Callable, List, Optional
from swarms.structs.task import Task
from swarms.utils.logger import logger
@dataclass
class AsyncWorkflow:
    """
    Represents an asynchronous workflow to run tasks.

    Attributes:
        name (str): The name of the workflow.
        description (str): The description of the workflow.
        max_loops (int): The maximum number of loops to run the workflow.
        autosave (bool): Flag indicating whether to autosave the results.
        dashboard (bool): Flag indicating whether to display a dashboard.
        task_pool (List[Any]): The list of tasks in the workflow.
        results (List[Any]): The list of results from running the tasks.
        loop (Optional[asyncio.AbstractEventLoop]): The event loop to use.
        stopping_condition (Optional[Callable]): The stopping condition for the workflow.

    Methods:
        add(task: Any = None, tasks: List[Any] = None) -> None:
            Add a task or a list of tasks to the workflow.
        delete(task: Any = None, tasks: List[Task] = None) -> None:
            Delete a task (or several tasks) from the workflow.
        run() -> List[Any]:
            Run the workflow and return the results.
    """

    name: str = "Async Workflow"
    description: str = "A workflow to run asynchronous tasks"
    max_loops: int = 1
    autosave: bool = True
    dashboard: bool = False
    task_pool: List[Any] = field(default_factory=list)
    results: List[Any] = field(default_factory=list)
    loop: Optional[asyncio.AbstractEventLoop] = None
    stopping_condition: Optional[Callable] = None

    async def add(self, task: Any = None, tasks: List[Any] = None):
        """Add a task or a list of tasks to the workflow."""
        try:
            if tasks:
                self.task_pool.extend(tasks)
            if task:
                self.task_pool.append(task)
            if not task and not tasks:
                raise ValueError(
                    "Either task or tasks must be provided"
                )
        except Exception as error:
            logger.error(f"[ERROR][AsyncWorkflow] {error}")

    async def delete(
        self, task: Any = None, tasks: List[Task] = None
    ):
        """Delete a task (or several tasks) from the workflow."""
        try:
            if task:
                self.task_pool.remove(task)
            elif tasks:
                for task in tasks:
                    self.task_pool.remove(task)
        except Exception as error:
            logger.error(f"[ERROR][AsyncWorkflow] {error}")

    async def run(self):
        """Run the workflow and return the results."""
        if self.loop is None:
            self.loop = asyncio.get_event_loop()
        for i in range(self.max_loops):
            logger.info(
                f"[INFO][AsyncWorkflow] Loop {i + 1}/{self.max_loops}"
            )
            futures = [
                asyncio.ensure_future(task.execute())
                for task in self.task_pool
            ]
            self.results = await asyncio.gather(*futures)
            # if self.autosave:
            #     self.save()
            # if self.dashboard:
            #     self.display()

            # Stop early if a stopping condition is provided and it is
            # satisfied by the collected results
            if self.stopping_condition and self.stopping_condition(
                self.results
            ):
                break
        return self.results
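# --- Usage sketch (illustrative, not part of the module above) ---
# Assuming each Task in the pool exposes an awaitable `execute()` (the run
# method wraps each call in `asyncio.ensure_future`), a minimal driver
# could look like:
#
#     async def main():
#         workflow = AsyncWorkflow(name="Report Workflow", max_loops=1)
#         await workflow.add(tasks=[task1, task2])
#         return await workflow.run()
#
#     results = asyncio.run(main())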

@ -1,6 +1,6 @@
import concurrent.futures
from dataclasses import dataclass, field
from typing import Dict, List, Optional
from typing import Dict, List, Optional, Callable
from swarms.structs.base import BaseStructure
from swarms.structs.task import Task
@ -33,6 +33,7 @@ class ConcurrentWorkflow(BaseStructure):
"""
task_pool: List[Dict] = field(default_factory=list)
max_loops: int = 1
max_workers: int = 5
autosave: bool = False
saved_state_filepath: Optional[str] = (
@ -41,6 +42,7 @@ class ConcurrentWorkflow(BaseStructure):
print_results: bool = False
return_results: bool = False
use_processes: bool = False
stopping_condition: Optional[Callable] = None
def add(self, task: Task = None, tasks: List[Task] = None):
"""Adds a task to the workflow.
@ -66,7 +68,7 @@ class ConcurrentWorkflow(BaseStructure):
logger.warning(f"[ERROR][ConcurrentWorkflow] {error}")
raise error
def run(self):
def run(self, *args, **kwargs):
"""
Executes the tasks in parallel using a ThreadPoolExecutor.
@ -77,6 +79,8 @@ class ConcurrentWorkflow(BaseStructure):
Returns:
List[Any]: A list of the results of each task, if return_results is True. Otherwise, returns None.
"""
loop_count = 0
while loop_count < self.max_loops:
with concurrent.futures.ThreadPoolExecutor(
max_workers=self.max_workers
) as executor:
@ -86,7 +90,9 @@ class ConcurrentWorkflow(BaseStructure):
}
results = []
for future in concurrent.futures.as_completed(futures):
for future in concurrent.futures.as_completed(
futures
):
task = futures[future]
try:
result = future.result()
@ -99,6 +105,12 @@ class ConcurrentWorkflow(BaseStructure):
f"Task {task} generated an exception: {e}"
)
loop_count += 1
if self.stopping_condition and self.stopping_condition(
results
):
break
return results if self.return_results else None
def list_tasks(self):

@ -31,7 +31,11 @@ class RecursiveWorkflow(BaseStructure):
>>> workflow.run()
"""
def __init__(self, stop_token: str = "<DONE>"):
def __init__(
self,
stop_token: str = "<DONE>",
stopping_conditions: callable = None,
):
self.stop_token = stop_token
self.task_pool = []

@ -1,11 +1,9 @@
import concurrent.futures
import json
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List, Optional, Union
from typing import Any, Dict, List, Optional
from termcolor import colored
from swarms.structs.agent import Agent
from swarms.structs.task import Task
from swarms.utils.logger import logger
@ -14,7 +12,7 @@ from swarms.utils.logger import logger
@dataclass
class SequentialWorkflow:
"""
SequentialWorkflow class for running a sequence of tasks using N number of autonomous agents.
SequentialWorkflow class for running a sequence of task_pool using N number of autonomous agents.
Args:
max_loops (int): The maximum number of times to run the workflow.
@ -22,7 +20,7 @@ class SequentialWorkflow:
Attributes:
tasks (List[Task]): The list of tasks to execute.
task_pool (List[Task]): The list of task_pool to execute.
max_loops (int): The maximum number of times to run the workflow.
dashboard (bool): Whether to display the dashboard for the workflow.
@ -35,13 +33,13 @@ class SequentialWorkflow:
>>> workflow.add("What's the weather in miami", llm)
>>> workflow.add("Create a report on these metrics", llm)
>>> workflow.run()
>>> workflow.tasks
>>> workflow.task_pool
"""
name: str = None
description: str = None
tasks: List[Task] = field(default_factory=list)
task_pool: List[Task] = field(default_factory=list)
max_loops: int = 1
autosave: bool = False
saved_state_filepath: Optional[str] = (
@ -52,9 +50,8 @@ class SequentialWorkflow:
def add(
self,
agent: Union[Callable, Agent],
task: Optional[str] = None,
tasks: Optional[List[str]] = None,
task: Optional[Task] = None,
tasks: Optional[List[Task]] = None,
*args,
**kwargs,
) -> None:
@ -69,26 +66,28 @@ class SequentialWorkflow:
**kwargs: Additional keyword arguments to pass to the task execution.
"""
try:
# If the agent is a Agent instance, we include the task in kwargs for Agent.run()
if isinstance(agent, Agent):
kwargs["task"] = (
task # Set the task as a keyword argument for Agent
)
# Append the task to the tasks list
self.tasks.append(
Task(
description=task,
agent=agent,
args=list(args),
kwargs=kwargs,
)
)
# If the agent is a Task instance, we include the task in kwargs for Agent.run()
# Append the task to the task_pool list
if task:
self.task_pool.append(task)
logger.info(
f"[INFO][SequentialWorkflow] Added task {task} to"
" workflow"
)
elif tasks:
for task in tasks:
self.task_pool.append(task)
logger.info(
"[INFO][SequentialWorkflow] Added task"
f" {task} to workflow"
)
else:
if task and tasks is not None:
# Add the task and list of tasks to the task_pool at the same time
self.task_pool.append(task)
for task in tasks:
self.task_pool.append(task)
except Exception as error:
logger.error(
colored(
@ -99,8 +98,12 @@ class SequentialWorkflow:
def reset_workflow(self) -> None:
"""Resets the workflow by clearing the results of each task."""
try:
for task in self.tasks:
for task in self.task_pool:
task.result = None
logger.info(
f"[INFO][SequentialWorkflow] Reset task {task} in"
" workflow"
)
except Exception as error:
logger.error(
colored(f"Error resetting workflow: {error}", "red"),
@ -115,7 +118,8 @@ class SequentialWorkflow:
"""
try:
return {
task.description: task.result for task in self.tasks
task.description: task.result
for task in self.task_pool
}
except Exception as error:
logger.error(
@ -124,14 +128,10 @@ class SequentialWorkflow:
),
)
def remove_task(self, task: str) -> None:
"""Remove tasks from sequential workflow"""
def remove_task(self, task: Task) -> None:
"""Remove task_pool from sequential workflow"""
try:
self.tasks = [
task
for task in self.tasks
if task.description != task
]
self.task_pool.remove(task)
logger.info(
f"[INFO][SequentialWorkflow] Removed task {task} from"
" workflow"
@ -144,129 +144,6 @@ class SequentialWorkflow:
),
)
def update_task(self, task: str, **updates) -> None:
"""
Updates the arguments of a task in the workflow.
Args:
task (str): The description of the task to update.
**updates: The updates to apply to the task.
Raises:
ValueError: If the task is not found in the workflow.
Examples:
>>> from swarms.models import OpenAIChat
>>> from swarms.structs import SequentialWorkflow
>>> llm = OpenAIChat(openai_api_key="")
>>> workflow = SequentialWorkflow(max_loops=1)
>>> workflow.add("What's the weather in miami", llm)
>>> workflow.add("Create a report on these metrics", llm)
>>> workflow.update_task("What's the weather in miami", max_tokens=1000)
>>> workflow.tasks[0].kwargs
{'max_tokens': 1000}
"""
try:
for task in self.tasks:
if task.description == task:
task.kwargs.update(updates)
break
else:
raise ValueError(
f"Task {task} not found in workflow."
)
print(
f"[INFO][SequentialWorkflow] Updated task {task} in"
" workflow"
)
except Exception as error:
logger.error(
colored(
f"Error updating task in workflow: {error}", "red"
),
)
def delete_task(self, task: str) -> None:
"""
Delete a task from the workflow.
Args:
task (str): The description of the task to delete.
Raises:
ValueError: If the task is not found in the workflow.
Examples:
>>> from swarms.models import OpenAIChat
>>> from swarms.structs import SequentialWorkflow
>>> llm = OpenAIChat(openai_api_key="")
>>> workflow = SequentialWorkflow(max_loops=1)
>>> workflow.add("What's the weather in miami", llm)
>>> workflow.add("Create a report on these metrics", llm)
>>> workflow.delete_task("What's the weather in miami")
>>> workflow.tasks
[Task(description='Create a report on these metrics', agent=Agent(llm=OpenAIChat(openai_api_key=''), max_loops=1, dashboard=False), args=[], kwargs={}, result=None, history=[])]
"""
try:
for task in self.tasks:
if task.description == task:
self.tasks.remove(task)
break
else:
raise ValueError(
f"Task {task} not found in workflow."
)
print(
f"[INFO][SequentialWorkflow] Deleted task {task} from"
" workflow"
)
except Exception as error:
logger.error(
colored(
f"Error deleting task from workflow: {error}",
"red",
),
)
def concurrent_run(self):
"""
Concurrently run the workflow using a pool of workers.
Examples:
>>> from swarms.models import OpenAIChat
>>> from swarms.structs import SequentialWorkflow
>>> llm = OpenAIChat(openai_api_key="")
>>> workflow = SequentialWorkflow(max_loops=1)
"""
try:
with concurrent.futures.ThreadPoolExecutor() as executor:
futures_to_task = {
executor.submit(task.run): task
for task in self.tasks
}
results = []
for future in concurrent.futures.as_completed(
futures_to_task
):
task = futures_to_task[future]
try:
result = future.result()
except Exception as error:
print(f"Error running workflow: {error}")
else:
results.append(result)
print(
f"Task {task} completed successfully with"
f" result: {result}"
)
except Exception as error:
print(colored(f"Error running workflow: {error}", "red"))
def save_workflow_state(
self,
filepath: Optional[str] = "sequential_workflow_state.json",
@ -293,7 +170,7 @@ class SequentialWorkflow:
with open(filepath, "w") as f:
# Saving the state as a json for simplicuty
state = {
"tasks": [
"task_pool": [
{
"description": task.description,
"args": task.args,
@ -301,13 +178,13 @@ class SequentialWorkflow:
"result": task.result,
"history": task.history,
}
for task in self.tasks
for task in self.task_pool
],
"max_loops": self.max_loops,
}
json.dump(state, f, indent=4)
print(
logger.info(
"[INFO][SequentialWorkflow] Saved workflow state to"
f" {filepath}"
)
@ -357,7 +234,7 @@ class SequentialWorkflow:
--------------------------------
Name: {self.name}
Description: {self.description}
Tasks: {len(self.tasks)}
task_pool: {len(self.task_pool)}
Max Loops: {self.max_loops}
Autosave: {self.autosave}
Autosave Filepath: {self.saved_state_filepath}
@ -382,38 +259,6 @@ class SequentialWorkflow:
)
)
def add_objective_to_workflow(self, task: str, **kwargs) -> None:
"""Adds an objective to the workflow."""
try:
print(
colored(
"""
Adding Objective to Workflow...""",
"green",
attrs=["bold", "underline"],
)
)
task = Task(
description=task,
agent=kwargs["agent"],
args=list(kwargs["args"]),
kwargs=kwargs["kwargs"],
)
self.tasks.append(task)
print(
f"[INFO][SequentialWorkflow] Added task {task} to"
" workflow"
)
except Exception as error:
logger.error(
colored(
f"Error adding objective to workflow: {error}",
"red",
)
)
def load_workflow_state(
self, filepath: str = None, **kwargs
) -> None:
@ -440,8 +285,8 @@ class SequentialWorkflow:
with open(filepath, "r") as f:
state = json.load(f)
self.max_loops = state["max_loops"]
self.tasks = []
for task_state in state["tasks"]:
self.task_pool = []
for task_state in state["task_pool"]:
task = Task(
description=task_state["description"],
agent=task_state["agent"],
@ -450,7 +295,7 @@ class SequentialWorkflow:
result=task_state["result"],
history=task_state["history"],
)
self.tasks.append(task)
self.task_pool.append(task)
print(
"[INFO][SequentialWorkflow] Loaded workflow state"
@ -474,114 +319,35 @@ class SequentialWorkflow:
"""
try:
self.workflow_bootup()
for _ in range(self.max_loops):
for task in self.tasks:
loops = 0
while loops < self.max_loops:
for i in range(len(self.task_pool)):
task = self.task_pool[i]
# Check if the current task can be executed
if task.result is None:
# Check if the agent is a Agent and a 'task' argument is needed
if isinstance(task.agent, Agent):
# Ensure that 'task' is provided in the kwargs
if "task" not in task.kwargs:
raise ValueError(
"The 'task' argument is required"
" for the Agent agent execution"
f" in '{task.description}'"
)
# Separate the 'task' argument from other kwargs
flow_task_arg = task.kwargs.pop("task")
task.result = task.agent.run(
flow_task_arg,
*task.args,
**task.kwargs,
)
else:
# If it's not a Agent instance, call the agent directly
task.result = task.agent(
*task.args, **task.kwargs
)
# Pass the result as an argument to the next task if it exists
next_task_index = self.tasks.index(task) + 1
if next_task_index < len(self.tasks):
next_task = self.tasks[next_task_index]
if isinstance(next_task.agent, Agent):
# For Agent flows, 'task' should be a keyword argument
next_task.kwargs["task"] = task.result
else:
# For other callable flows, the result is added to args
next_task.args.insert(0, task.result)
# Autosave the workflow state
if self.autosave:
self.save_workflow_state(
"sequential_workflow_state.json"
)
except Exception as e:
logger.error(
colored(
(
"Error initializing the Sequential workflow:"
f" {e} try optimizing your inputs like the"
" agent class and task description"
),
"red",
attrs=["bold", "underline"],
)
)
# Get the inputs for the current task
task.context(task)
async def arun(self) -> None:
"""
Asynchronously run the workflow.
result = task.execute()
Raises:
ValueError: If an Agent instance is used as a task and the 'task' argument is not provided.
# Pass the inputs to the next task
if i < len(self.task_pool) - 1:
next_task = self.task_pool[i + 1]
next_task.description = result
"""
try:
for _ in range(self.max_loops):
for task in self.tasks:
# Check if the current task can be executed
if task.result is None:
# Check if the agent is an Agent and a 'task' argument is needed
if isinstance(task.agent, Agent):
# Ensure that 'task' is provided in the kwargs
if "task" not in task.kwargs:
raise ValueError(
"The 'task' argument is required"
" for the Agent agent execution"
f" in '{task.description}'"
)
# Separate the 'task' argument from other kwargs
flow_task_arg = task.kwargs.pop("task")
task.result = await task.agent.arun(
flow_task_arg,
*task.args,
**task.kwargs,
)
else:
# If it's not an Agent instance, call the agent directly
task.result = await task.agent(
*task.args, **task.kwargs
)
# Pass the result as an argument to the next task if it exists
next_task_index = self.tasks.index(task) + 1
if next_task_index < len(self.tasks):
next_task = self.tasks[next_task_index]
if isinstance(next_task.agent, Agent):
# For Agent flows, 'task' should be a keyword argument
next_task.kwargs["task"] = task.result
else:
# For other callable flows, the result is added to args
next_task.args.insert(0, task.result)
# Execute the current task
task.execute()
# Autosave the workflow state
if self.autosave:
self.save_workflow_state(
"sequential_workflow_state.json"
)
self.workflow_shutdown()
loops += 1
except Exception as e:
print(
logger.error(
colored(
(
"Error initializing the Sequential workflow:"

@ -12,6 +12,7 @@ from typing import (
from swarms.structs.agent import Agent
from swarms.utils.logger import logger
from swarms.structs.conversation import Conversation
@dataclass
@ -57,9 +58,7 @@ class Task:
"""
agent: Union[Callable, Agent]
description: str
args: List[Any] = field(default_factory=list)
kwargs: Dict[str, Any] = field(default_factory=dict)
description: str = None
result: Any = None
history: List[Any] = field(default_factory=list)
schedule_time: datetime = None
@ -69,8 +68,10 @@ class Task:
condition: Callable = None
priority: int = 0
dependencies: List["Task"] = field(default_factory=list)
args: List[Any] = field(default_factory=list)
kwargs: Dict[str, Any] = field(default_factory=dict)
def execute(self, task: str, img: str = None, *args, **kwargs):
def execute(self, *args, **kwargs):
"""
Execute the task by calling the agent or model with the arguments and
keyword arguments. You can add images to the agent by passing the
@ -86,8 +87,10 @@ class Task:
>>> task.result
"""
logger.info(f"[INFO][Task] Executing task: {task}")
task = self.description or task
logger.info(
f"[INFO][Task] Executing task: {self.description}"
)
task = self.description
try:
if isinstance(self.agent, Agent):
if self.condition is None or self.condition():
@ -109,11 +112,11 @@ class Task:
except Exception as error:
logger.error(f"[ERROR][Task] {error}")
def run(self, task: str, *args, **kwargs):
self.execute(task, *args, **kwargs)
def run(self, *args, **kwargs):
self.execute(*args, **kwargs)
def __call__(self, task: str, *args, **kwargs):
self.execute(task, *args, **kwargs)
def __call__(self, *args, **kwargs):
self.execute(*args, **kwargs)
def handle_scheduled_task(self):
"""
@ -206,3 +209,51 @@ class Task:
logger.error(
f"[ERROR][Task][check_dependency_completion] {error}"
)
def context(
self,
task: "Task" = None,
context: List["Task"] = None,
*args,
**kwargs,
):
"""
Set the context for the task.
Args:
task (Task, optional): A single previous task whose description and result are folded into this task's context.
context (List[Task], optional): Previous tasks to fold in, in order.
"""
# For a sequential workflow, add the context of each previous task in order
new_context = Conversation(time_enabled=True, *args, **kwargs)
if context:
for task in context:
description = (
task.description
if task.description is not None
else ""
)
result = (
task.result if task.result is not None else ""
)
# Add the context of the task to the conversation
new_context.add(
task.agent.agent_name, f"{description} {result}"
)
elif task:
description = (
task.description
if task.description is not None
else ""
)
result = task.result if task.result is not None else ""
new_context.add(
task.agent.agent_name, f"{description} {result}"
)
prompt = new_context.return_history_as_string()
# Add to history
self.history.append(prompt)
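A hedged usage sketch of the new `context` helper, assuming three already-constructed `Task` objects whose agents expose an `agent_name` (as the method requires):

```python
# Give task3 the accumulated context of the two tasks that ran before it
task3.context(context=[task1, task2])

# The stitched "<description> <result>" history string is appended to task3.history
print(task3.history[-1])
```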

@ -1,6 +1,12 @@
import re
import json
from typing import List, Any
import re
from typing import Any, List
from swarms.prompts.tools import (
SCENARIOS,
)
from swarms.tools.tool import BaseTool
from swarms.tools.tool_func_doc_scraper import scrape_tool_func_docs
def tool_find_by_name(tool_name: str, tools: List[Any]):
@ -55,3 +61,79 @@ def execute_tools(tool_name, params):
# Execute the tool with the provided parameters
tool_result = tool.run(**params)
print(tool_result)
def parse_tool_docs(tools: List[BaseTool]):
"""Parse the tool docs"""
tool_docs = []
for tool in tools:
tool_docs.append(scrape_tool_func_docs(tool))
# append() returns None, so stringify the collected list after the loop
return str(tool_docs)
def tools_prompt_prep(docs: str = None, scenarios: str = SCENARIOS):
"""
Prepare the tool-usage prompt from scraped tool docs and example scenarios.
Args:
docs (str, optional): Scraped tool/API documentation to embed in the prompt. Defaults to None.
scenarios (str, optional): Example output scenarios appended to the prompt. Defaults to SCENARIOS.
Returns:
str: The assembled prompt.
"""
PROMPT = f"""
# Task
You will be provided with a list of APIs. These APIs will have a
description and a list of parameters and return types for each tool. Your
task involves creating varied, complex, and detailed user scenarios
that require API calls. You must select which API to call based on
the context of the task and the scenario.
For instance, given the APIs: SearchHotels, BookHotel, CancelBooking,
GetNFLNews. Given that GetNFLNews is explicitly provided, your scenario
should articulate something akin to:
"The user wants to see if the Broncos won their last game (GetNFLNews).
They then want to see if that qualifies them for the playoffs and who
they will be playing against (GetNFLNews). The Broncos did make it into
the playoffs, so the user wants watch the game in person. They want to
look for hotels where the playoffs are occurring (GetNBANews +
SearchHotels). After looking at the options, the user chooses to book a
3-day stay at the cheapest 4-star option (BookHotel)."
13
This example illustrates a scenario using 5 API calls. The scenario is
complex, detailed, and concise as desired. The scenario also includes two
APIs used in tandem, the required API, GetNFLNews, to search for the
playoffs location and SearchHotels to find hotels based on the returned
location. Usage of multiple APIs in tandem is highly desirable and will
receive a higher score. Ideally each scenario should contain one or more
instances of multiple APIs being used in tandem.
Note that this scenario does not use all the APIs given and re-uses the
"GetNFLNews" API. Re-using APIs is allowed, but each scenario should
involve as many different APIs as the user demands. Note that API usage is also included
in the scenario, but exact parameters are not necessary. You must use a
different combination of APIs for each scenario. All APIs must be used in
at least one scenario. You can only use the APIs provided in the APIs
section.
Note that API calls are not explicitly mentioned and their uses are
included in parentheses. This behaviour should be mimicked in your
response.
Output the tool usage in a strict JSON format with the function name and input to
the function. Deliver your response in this format:
{scenarios}
# APIs
{docs}
# Response
"""
return PROMPT
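A hedged sketch of how these helpers might be wired together; `tools` is assumed to be a list of `BaseTool` instances you have already defined:

```python
# Scrape the docstrings of the registered tools and embed them in the prompt
docs = parse_tool_docs(tools)

# SCENARIOS supplies the examples block by default
prompt = tools_prompt_prep(docs=docs, scenarios=SCENARIOS)
print(prompt)
```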

@ -0,0 +1,33 @@
import inspect
def get_cls_init_params(cls) -> str:
"""
Get the initialization parameters of a class.
Args:
cls: The class to retrieve the initialization parameters from.
Returns:
str: A string representation of the initialization parameters.
"""
init_signature = inspect.signature(cls.__init__)
params = init_signature.parameters
params_str_list = []
for name, param in params.items():
if name == "self":
continue
if name == "kwargs":
value = "Any keyword arguments"
elif hasattr(cls, name):
value = getattr(cls, name)
else:
value = cls.__dict__.get(name, "Unknown")
params_str_list.append(
f" {name.capitalize().replace('_', ' ')}: {value}"
)
return "\n".join(params_str_list)

@ -0,0 +1,21 @@
import re
def get_file_extension(s):
"""
Get the file extension from a given string.
Args:
s (str): The input string.
Returns:
str or None: The file extension if found, or None if not found.
Raises:
ValueError: If the input is not a string.
"""
if not isinstance(s, str):
raise ValueError("Input must be a string")
match = re.search(r"\.(pdf|csv|txt|docx|xlsx)$", s, re.IGNORECASE)
return match.group()[1:] if match else None
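A few usage examples of the helper (file names are made up):

```python
print(get_file_extension("quarterly_report.docx"))  # "docx"
print(get_file_extension("archive.zip"))            # None - extension not in the allowed set
print(get_file_extension("scan.PDF"))               # "PDF" - case is preserved from the input
get_file_extension(42)                              # raises ValueError("Input must be a string")
```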