sequential workflow tests, prototype with documentation

Former-commit-id: 310230a417
jojo-group-chat
Kye 1 year ago
parent 0ef45c36f6
commit 70a20ad7a7

@ -264,7 +264,7 @@ Denote the social media's by using the social media name in HTML like tags
{{ARTICLE}}
"""
llm = OpenAIChat(openai_api_key="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")
llm = OpenAIChat(openai_api_key="")
def get_review_prompt(article):

@ -0,0 +1,577 @@
# `SequentialWorkflow` Documentation
The **SequentialWorkflow** class is a Python module designed to execute a series of tasks in a fixed, sequential order. It is part of the `swarms.structs` package and is particularly useful for orchestrating callable objects, such as functions or models, in a predefined sequence. This documentation provides an in-depth understanding of the **SequentialWorkflow** class, including its purpose, architecture, usage, and examples.
## Purpose and Relevance
The **SequentialWorkflow** class is essential for managing and executing a series of tasks or processes, where each task may depend on the outcome of the previous one. It is commonly used in various application scenarios, including but not limited to:
1. **Natural Language Processing (NLP) Workflows:** In NLP workflows, multiple language models are employed sequentially to process and generate text. Each model may depend on the results of the previous one, making sequential execution crucial.
2. **Data Analysis Pipelines:** Data analysis often involves a series of tasks such as data preprocessing, transformation, and modeling steps. These tasks must be performed sequentially to ensure data consistency and accuracy.
3. **Task Automation:** In task automation scenarios, there is a need to execute a series of automated tasks in a specific order. Sequential execution ensures that each task is performed in a predefined sequence, maintaining the workflow's integrity.
By providing a structured approach to managing these tasks, the **SequentialWorkflow** class helps developers streamline their workflow execution and improve code maintainability.
## Key Concepts and Terminology
Before delving into the details of the **SequentialWorkflow** class, let's define some key concepts and terminology that will be used throughout the documentation:
### Task
A **task** refers to a specific unit of work that needs to be executed as part of the workflow. Each task is associated with a description and can be implemented as a callable object, such as a function or a model.
### Flow
A **flow** represents a callable object that can be a task within the **SequentialWorkflow**. Flows encapsulate the logic and functionality of a particular task. Flows can be functions, models, or any callable object that can be executed.
### Sequential Execution
Sequential execution refers to the process of running tasks one after the other in a predefined order. In a **SequentialWorkflow**, tasks are executed sequentially, meaning that each task starts only after the previous one has completed.
### Workflow
A **workflow** is a predefined sequence of tasks that need to be executed in a specific order. It represents the overall process or pipeline that the **SequentialWorkflow** manages.
### Dashboard (Optional)
A **dashboard** is an optional feature of the **SequentialWorkflow** that provides real-time monitoring and visualization of the workflow's progress. It displays information such as the current task being executed, task results, and other relevant metadata.
### Max Loops
**Max Loops** is the maximum number of times the entire workflow is run. This parameter lets developers control how many passes the workflow makes over its task sequence.
### Autosaving
**Autosaving** is a feature that allows the **SequentialWorkflow** to automatically save its state to a file at specified intervals. This feature helps in resuming a workflow from where it left off, even after interruptions.
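For instance, here is a minimal sketch of enabling both controls when constructing a workflow (the file name is illustrative; the parameters themselves are documented later in this page):
```python
from swarms.structs.sequential_workflow import SequentialWorkflow

# Run the full task sequence up to twice and write the workflow state to JSON as it runs
workflow = SequentialWorkflow(
    max_loops=2,
    autosave=True,
    saved_state_filepath="my_workflow_state.json",
)
```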
Now that we have a clear understanding of the key concepts and terminology, let's explore the architecture and usage of the **SequentialWorkflow** class in more detail.
## Architecture of SequentialWorkflow
The architecture of the **SequentialWorkflow** class is designed to provide a structured and flexible way to define, manage, and execute a sequence of tasks. It comprises the following core components:
1. **Task**: The **Task** class represents an individual unit of work within the workflow. Each task has a description, which serves as a human-readable identifier for the task. Tasks can be implemented as callable objects, allowing for great flexibility in defining their functionality.
2. **Workflow**: The **SequentialWorkflow** class itself represents the workflow. It manages a list of tasks in the order they should be executed. Workflows can be run sequentially or asynchronously, depending on the use case.
3. **Task Execution**: Task execution is the process of running each task in the workflow. Tasks are executed one after another in the order they were added to the workflow. Task results can be passed as inputs to subsequent tasks.
4. **Dashboard (Optional)**: The **SequentialWorkflow** optionally includes a dashboard feature. The dashboard provides a visual interface for monitoring the progress of the workflow. It displays information about the current task, task results, and other relevant metadata.
5. **State Management**: The **SequentialWorkflow** supports state management, allowing developers to save and load the state of the workflow to and from JSON files. This feature is valuable for resuming workflows after interruptions or for sharing workflow configurations.
## Usage of SequentialWorkflow
The **SequentialWorkflow** class is versatile and can be employed in a wide range of applications. Its usage typically involves the following steps:
1. **Initialization**: Begin by initializing any callable objects or flows that will serve as tasks in the workflow. These callable objects can include functions, models, or any other Python objects that can be executed.
2. **Workflow Creation**: Create an instance of the **SequentialWorkflow** class. Specify the maximum number of loops the workflow should run and whether a dashboard should be displayed.
3. **Task Addition**: Add tasks to the workflow using the `add` method. Each task should be described using a human-readable description, and the associated flow (callable object) should be provided. Additional arguments and keyword arguments can be passed to the task.
4. **Task Execution**: Execute the workflow using the `run` method. The tasks within the workflow will be executed sequentially, with task results passed as inputs to subsequent tasks.
5. **Accessing Results**: After running the workflow, you can access the results of each task using the `get_task_results` method or by directly accessing the `result` attribute of each task.
6. **Optional Features**: Optionally, you can enable features such as autosaving of the workflow state and utilize the dashboard for real-time monitoring.
## Installation
Before using the Sequential Workflow library, you need to install it. You can install it via pip:
```bash
pip3 install --upgrade swarms
```
## Quick Start
Let's begin with a quick example to demonstrate how to create and run a Sequential Workflow. In this example, we'll create a workflow that generates a 10,000-word blog on "health and wellness" using an AI model and then summarizes the generated content.
```python
from swarms.models import OpenAIChat
from swarms.structs import Flow
from swarms.structs.sequential_workflow import SequentialWorkflow
# Initialize the language model flow (e.g., GPT-3)
llm = OpenAIChat(
    openai_api_key="YOUR_API_KEY",
    temperature=0.5,
    max_tokens=3000,
)
# Initialize flows for individual tasks
flow1 = Flow(llm=llm, max_loops=1, dashboard=False)
flow2 = Flow(llm=llm, max_loops=1, dashboard=False)
# Create the Sequential Workflow
workflow = SequentialWorkflow(max_loops=1)
# Add tasks to the workflow
workflow.add("Generate a 10,000 word blog on health and wellness.", flow1)
workflow.add("Summarize the generated blog", flow2)
# Run the workflow
workflow.run()
# Output the results
for task in workflow.tasks:
    print(f"Task: {task.description}, Result: {task.result}")
```
This quick example demonstrates the basic usage of the Sequential Workflow. It creates two tasks and executes them sequentially.
## Class: `Task`
### Description
The `Task` class represents an individual task in the workflow. A task is essentially a callable object, such as a function or a class, that can be executed sequentially. Tasks can have arguments and keyword arguments.
### Class Definition
```python
class Task:
    def __init__(
        self,
        description: str,
        flow: Union[Callable, Flow],
        args: List[Any] = [],
        kwargs: Dict[str, Any] = {},
        result: Any = None,
        history: List[Any] = [],
    ):
```
### Parameters
- `description` (str): A description of the task.
- `flow` (Union[Callable, Flow]): The callable object representing the task. It can be a function, class, or a `Flow` instance.
- `args` (List[Any]): A list of positional arguments to pass to the task when executed. Default is an empty list.
- `kwargs` (Dict[str, Any]): A dictionary of keyword arguments to pass to the task when executed. Default is an empty dictionary.
- `result` (Any): The result of the task's execution. Default is `None`.
- `history` (List[Any]): A list to store the historical results of the task. Default is an empty list.
### Methods
#### `execute()`
Execute the task.
```python
def execute(self):
```
This method executes the task and updates the `result` and `history` attributes of the task. It checks if the task is a `Flow` instance and if the 'task' argument is needed.
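As a quick, hedged illustration, a `Task` can also be executed directly with a plain Python function as its flow (the function and inputs below are purely illustrative):
```python
from swarms.structs.sequential_workflow import Task

# A plain callable used as the task's flow
def word_count(text: str) -> int:
    return len(text.split())

task = Task(description="Count words", flow=word_count, args=["health and wellness tips"])
task.execute()
print(task.result)   # 4
print(task.history)  # [4]
```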
## Class: `SequentialWorkflow`
### Description
The `SequentialWorkflow` class is responsible for managing a sequence of tasks and executing them in a sequential order. It provides methods for adding tasks, running the workflow, and managing the state of the tasks.
### Class Definition
```python
class SequentialWorkflow:
    def __init__(
        self,
        max_loops: int = 1,
        autosave: bool = False,
        saved_state_filepath: Optional[str] = "sequential_workflow_state.json",
        restore_state_filepath: Optional[str] = None,
        dashboard: bool = False,
        tasks: List[Task] = [],
    ):
```
### Parameters
- `max_loops` (int): The maximum number of times to run the workflow sequentially. Default is `1`.
- `autosave` (bool): Whether to enable autosaving of the workflow state. Default is `False`.
- `saved_state_filepath` (Optional[str]): The file path to save the workflow state when autosave is enabled. Default is `"sequential_workflow_state.json"`.
- `restore_state_filepath` (Optional[str]): The file path to restore the workflow state when initializing. Default is `None`.
- `dashboard` (bool): Whether to display a dashboard with workflow information. Default is `False`.
- `tasks` (List[Task]): A list of `Task` instances representing the tasks in the workflow. Default is an empty list.
### Methods
#### `add(task: str, flow: Union[Callable, Flow], *args, **kwargs)`
Add a task to the workflow.
```python
def add(self, task: str, flow: Union[Callable, Flow], *args, **kwargs) -> None:
```
This method adds a new task to the workflow. You can provide a description of the task, the callable object (function, class, or `Flow` instance), and any additional positional or keyword arguments required for the task.
#### `reset_workflow()`
Reset the workflow by clearing the results of each task.
```python
def reset_workflow(self) -> None:
```
This method clears the results of each task in the workflow, allowing you to start fresh without reinitializing the workflow.
#### `get_task_results()`
Get the results of each task in the workflow.
```python
def get_task_results(self) -> Dict[str, Any]:
```
This method returns a dictionary containing the results of each task in the workflow, where the keys are task descriptions, and the values are the corresponding results.
#### `remove_task(task_description: str)`
Remove a task from the workflow.
```python
def remove_task(self, task_description: str) -> None:
```
This method removes a specific task from the workflow based on its description.
#### `update_task(task_description: str, **updates)`
Update the arguments of a task in the workflow.
```python
def update_task(self, task_description: str, **updates) -> None:
```
This method allows you to update the arguments and keyword arguments of a task in the workflow. You specify the task's description and provide the updates as keyword arguments.
#### `save_workflow_state(filepath: Optional[str] = "sequential_workflow_state.json", **kwargs)`
Save the workflow state to a JSON file.
```python
def save_workflow_state(self, filepath: Optional[str] = "sequential_workflow_state.json", **kwargs) -> None:
```
This method saves the current state of the workflow, including the results and history of each task, to a JSON file. You can specify the file path for saving the state.
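A minimal sketch of saving state, using a plain callable as the flow so the snippet runs without an API key (the task description and file name are illustrative):
```python
from swarms.structs.sequential_workflow import SequentialWorkflow

workflow = SequentialWorkflow(max_loops=1)
workflow.add("Produce a placeholder result", lambda: "hello from the workflow")
workflow.run()

# Persist task descriptions, results, and history to a JSON file
workflow.save_workflow_state("my_workflow_state.json")
```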
#### `load_workflow_state(filepath: str = None, **kwargs)`
Load the workflow state from a JSON file and restore the workflow state.
```python
def load_workflow_state(self, filepath: str = None, **kwargs) -> None:
```
This method loads a previously saved workflow state from a JSON file and restores the state, allowing you to continue the workflow from where it was saved. You can specify the file path for loading the state.
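A short sketch of restoring that state into a fresh workflow instance, assuming `my_workflow_state.json` was produced earlier and contains the fields the loader expects:
```python
from swarms.structs.sequential_workflow import SequentialWorkflow

restored = SequentialWorkflow(max_loops=1)
restored.load_workflow_state("my_workflow_state.json")
print(restored.get_task_results())
```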
#### `run()`
Run the workflow sequentially.
```python
def run(self) -> None:
```
This method executes the tasks in the workflow sequentially. It checks if a task is a `Flow` instance and handles the flow of data between tasks accordingly.
#### `arun()`
Asynchronously run the workflow.
```python
async def arun(self) -> None:
```
This method asynchronously executes the tasks in the workflow sequentially. It's suitable for use cases where asynchronous execution is required. It also handles data flow between tasks.
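As a brief sketch, using a `workflow` built as in the Quick Start example and assuming its underlying flows support asynchronous execution:
```python
import asyncio

async def main() -> None:
    # Execute the same task sequence without blocking the event loop
    await workflow.arun()

asyncio.run(main())
```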
#### `workflow_bootup(**kwargs)`
Display a bootup message for the workflow.
```python
def workflow_bootup(self, **kwargs) -> None:
```
This method displays a bootup message when the workflow is initialized. You can customize the message by providing additional keyword arguments.
#### `workflow_dashboard(**kwargs)`
Display a dashboard for the workflow.
```python
def workflow_dashboard(self, **kwargs) -> None:
```
This method displays a dashboard with information about the workflow, such as the number of tasks, maximum loops, and autosave settings. You can customize the dashboard by providing additional keyword arguments.
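A short sketch exercising both display helpers on a freshly constructed workflow:
```python
from swarms.structs.sequential_workflow import SequentialWorkflow

workflow = SequentialWorkflow(max_loops=1, dashboard=True)
workflow.workflow_bootup()     # prints the initialization banner
workflow.workflow_dashboard()  # prints task count, max loops, and autosave settings
```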
## Examples
Let's explore some examples to illustrate how to use the Sequential Workflow library effectively.
### Example 1: Adding Tasks to a Sequential Workflow
In this example, we'll create a Sequential Workflow and add tasks to it.
```python
from swarms.models import OpenAIChat
from swarms.structs import Flow
from swarms.structs.sequential_workflow import SequentialWorkflow
# Example usage
api_key = (
    ""  # Your actual API key here
)
# Initialize the language flow
llm = OpenAIChat(
    openai_api_key=api_key,
    temperature=0.5,
    max_tokens=3000,
)
# Initialize Flows for individual tasks
flow1 = Flow(llm=llm, max_loops=1, dashboard=False)
flow2 = Flow(llm=llm, max_loops=1, dashboard=False)
# Create the Sequential Workflow
workflow = SequentialWorkflow(max_loops=1)
# Add tasks to the workflow
workflow.add("Generate a 10,000 word blog on health and wellness.", flow1)
workflow.add("Summarize the generated blog", flow2)
# Output the list of tasks in the workflow
print("Tasks in the workflow:")
for task in workflow.tasks:
    print(f"Task: {task.description}")
```
In this example, we create a Sequential Workflow and add two tasks to it.
### Example 2: Resetting a Sequential Workflow
In this example, we'll create a Sequential Workflow, add tasks to it, and then reset it.
```python
from swarms.models import OpenAIChat
from swarms.structs import Flow
from swarms.structs.sequential_workflow import SequentialWorkflow
# Example usage
api_key = (
    ""  # Your actual API key here
)
# Initialize the language flow
llm = OpenAIChat(
    openai_api_key=api_key,
    temperature=0.5,
    max_tokens=3000,
)
# Initialize Flows for individual tasks
flow1 = Flow(llm=llm, max_loops=1, dashboard=False)
flow2 = Flow(llm=llm, max_loops=1, dashboard=False)
# Create the Sequential Workflow
workflow = SequentialWorkflow(max_loops=1)
# Add tasks to the workflow
workflow.add("Generate a 10,000 word blog on health and wellness.", flow1)
workflow.add("Summarize the generated blog", flow2)
# Reset the workflow
workflow.reset_workflow()
# Output the list of tasks in the workflow after resetting
print("Tasks in the workflow after resetting:")
for task in workflow.tasks:
    print(f"Task: {task.description}")
```
In this example, we create a Sequential Workflow, add two tasks to it, and then reset the workflow, clearing all task results.
### Example 3: Getting Task Results from a Sequential Workflow
In this example, we'll create a Sequential Workflow, add tasks to it, run the workflow, and then retrieve the results of each task.
```python
from swarms.models import OpenAIChat
from swarms.structs import Flow
from swarms.structs.sequential_workflow import SequentialWorkflow
# Example usage
api_key = (
    ""  # Your actual API key here
)
# Initialize the language flow
llm = OpenAIChat(
    openai_api_key=api_key,
    temperature=0.5,
    max_tokens=3000,
)
# Initialize Flows for individual tasks
flow1 = Flow(llm=llm, max_loops=1, dashboard=False)
flow2 = Flow(llm=llm, max_loops=1, dashboard=False)
# Create the Sequential Workflow
workflow = SequentialWorkflow(max_loops=1)
# Add tasks to the workflow
workflow.add("Generate a 10,000 word blog on health and wellness.", flow1)
workflow.add("Summarize the generated blog", flow2)
# Run the workflow
workflow.run()
# Get and display the results of each task in the workflow
results = workflow.get_task_results()
for task_description, result in results.items():
    print(f"Task: {task_description}, Result: {result}")
```
In this example, we create a Sequential Workflow, add two tasks to it, run the workflow, and then retrieve and display the results of each task.
### Example 4: Removing a Task from a Sequential Workflow
In this example, we'll create a Sequential Workflow, add tasks to it, and then remove a specific task from the workflow.
```python
from swarms.models import OpenAIChat
from swarms.structs import Flow
from swarms.structs.sequential_workflow import SequentialWorkflow
# Example usage
api_key = (
    ""  # Your actual API key here
)
# Initialize the language flow
llm = OpenAIChat(
    openai_api_key=api_key,
    temperature=0.5,
    max_tokens=3000,
)
# Initialize Flows for individual tasks
flow1 = Flow(llm=llm, max_loops=1, dashboard=False)
flow2 = Flow(llm=llm, max_loops=1, dashboard=False)
# Create the Sequential Workflow
workflow = SequentialWorkflow(max_loops=1)
# Add tasks to the workflow
workflow.add("Generate a 10,000 word blog on health and wellness.", flow1)
workflow.add("Summarize the generated blog", flow2)
# Remove a specific task from the workflow
workflow.remove_task("Generate a 10,000 word blog on health and wellness.")
# Output the list of tasks in the workflow after removal
print("Tasks in the workflow after removing a task:")
for task in workflow.tasks:
    print(f"Task: {task.description}")
```
In this example, we create a Sequential Workflow, add two tasks to it, and then remove a specific task from the workflow.
### Example 5: Updating Task Arguments in a Sequential Workflow
In this example, we'll create a Sequential Workflow, add tasks to it, and then update the arguments of a specific task in the workflow.
```python
from swarms.models import OpenAIChat
from swarms.structs import Flow
from swarms.structs.sequential_workflow import SequentialWorkflow
# Example usage
api_key = (
    ""  # Your actual API key here
)
# Initialize the language flow
llm = OpenAIChat(
    openai_api_key=api_key,
    temperature=0.5,
    max_tokens=3000,
)
# Initialize Flows for individual tasks
flow1 = Flow(llm=llm, max_loops=1, dashboard=False)
flow2 = Flow(llm=llm, max_loops=1, dashboard=False)
# Create the Sequential Workflow
workflow = SequentialWorkflow(max_loops=1)
# Add tasks to the workflow
workflow.add("Generate a 10,000 word blog on health and wellness.", flow1)
workflow.add("Summarize the generated blog", flow2)
# Update the arguments of a specific task in the workflow
workflow.update_task("Generate a 10,000 word blog on health and wellness.", max_loops=2)
# Output the list of tasks in the workflow after updating task arguments
print("Tasks in the workflow after updating task arguments:")
for task in workflow.tasks:
    print(f"Task: {task.description}, Args: {task.args}, Kwargs: {task.kwargs}")
```
In this example, we create a Sequential Workflow, add two tasks to it, and then update the arguments of a specific task in the workflow.
These examples demonstrate various operations and use cases for working with a Sequential Workflow.
# Why `SequentialWorkflow`?
## Enhancing Autonomous Agent Development
The development of autonomous agents, whether they are conversational AI, robotic systems, or any other AI-driven application, often involves complex workflows that require a sequence of tasks to be executed in a specific order. Managing and orchestrating these tasks efficiently is crucial for building reliable and effective agents. The Sequential Workflow module serves as a valuable tool for AI engineers in achieving this goal.
## Reliability and Coordination
One of the primary challenges in autonomous agent development is ensuring that tasks are executed in the correct sequence and that the results of one task can be used as inputs for subsequent tasks. The Sequential Workflow module simplifies this process by allowing AI engineers to define and manage workflows in a structured and organized manner.
By using the Sequential Workflow module, AI engineers can achieve the following benefits:
### 1. Improved Reliability
Reliability is a critical aspect of autonomous agents. The ability to handle errors gracefully and recover from failures is essential for building robust systems. The Sequential Workflow module offers a systematic approach to task execution, making it easier to handle errors, retry failed tasks, and ensure that the agent continues to operate smoothly.
### 2. Task Coordination
Coordinating tasks in the correct order is essential for achieving the desired outcome. The Sequential Workflow module enforces task sequencing, ensuring that each task is executed only when its dependencies are satisfied. This eliminates the risk of executing tasks out of order, which can lead to incorrect results.
### 3. Code Organization
Managing complex workflows can become challenging without proper organization. The Sequential Workflow module encourages AI engineers to structure their code in a modular and maintainable way. Each task can be encapsulated as a separate unit, making it easier to understand, modify, and extend the agent's behavior.
### 4. Workflow Visualization
Visualization is a powerful tool for understanding and debugging workflows. The Sequential Workflow module can be extended to include a visualization dashboard, allowing AI engineers to monitor the progress of tasks, track results, and identify bottlenecks or performance issues.
## TODO: Future Features
While the Sequential Workflow module offers significant advantages, there are opportunities for further enhancement. Here is a list of potential features and improvements that can be added to make it even more versatile and adaptable for various AI engineering tasks:
### 1. Asynchronous Support
Adding support for asynchronous task execution can improve the efficiency of workflows, especially when dealing with tasks that involve waiting for external events or resources.
### 2. Context Managers
Introducing context manager support for tasks can simplify resource management, such as opening and closing files, database connections, or network connections within a task's context.
### 3. Workflow History
Maintaining a detailed history of workflow execution, including timestamps, task durations, and input/output data, can facilitate debugging and performance analysis.
### 4. Parallel Processing
Enhancing the module to support parallel processing with a pool of workers can significantly speed up the execution of tasks, especially for computationally intensive workflows.
### 5. Error Handling Strategies
Providing built-in error handling strategies, such as retries, fallbacks, and custom error handling functions, can make the module more robust in handling unexpected failures.
## Conclusion
The Sequential Workflow module is a valuable tool for AI engineers working on autonomous agents and complex AI-driven applications. It offers a structured and reliable approach to defining and executing workflows, ensuring that tasks are performed in the correct sequence. By using this module, AI engineers can enhance the reliability, coordination, and maintainability of their agents.
As the field of AI continues to evolve, the demand for efficient workflow management tools will only increase. The Sequential Workflow module is a step towards meeting these demands and empowering AI engineers to create more reliable and capable autonomous agents. With future enhancements and features, it has the potential to become an indispensable asset in the AI engineer's toolkit.
In summary, the Sequential Workflow module provides a foundation for orchestrating complex tasks and workflows, enabling AI engineers to focus on designing intelligent agents that can perform tasks with precision and reliability.

@ -1,7 +1,7 @@
from swarms.models import OpenAIChat
from swarms.structs import Flow
api_key = "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"
api_key = ""
# Initialize the language model, this model can be swapped out with Anthropic, ETC, Huggingface Models like Mistral, ETC
llm = OpenAIChat(

@ -0,0 +1,37 @@
from swarms.models import OpenAIChat
from swarms.structs import Flow
from swarms.structs.sequential_workflow import SequentialWorkflow
# Example usage
api_key = (
    ""  # Your actual API key here
)
# Initialize the language flow
llm = OpenAIChat(
    openai_api_key=api_key,
    temperature=0.5,
    max_tokens=3000,
)
# Initialize the Flow with the language flow
flow1 = Flow(llm=llm, max_loops=1, dashboard=False)
# Create another Flow for a different task
flow2 = Flow(llm=llm, max_loops=1, dashboard=False)
# Create the workflow
workflow = SequentialWorkflow(max_loops=1)
# Add tasks to the workflow
workflow.add("Generate a 10,000 word blog on health and wellness.", flow1)
# Suppose the next task takes the output of the first task as input
workflow.add("Summarize the generated blog", flow2)
# Run the workflow
workflow.run()
# Output the results
for task in workflow.tasks:
    print(f"Task: {task.description}, Result: {task.result}")

@ -1,8 +1,14 @@
"""
TODO:
- add a method that scrapes all the methods from the llm object and outputs them as a string
- Add tools
- Add open interpreter style conversation
- Add memory vector database retrieval
- add batch processing
- add async processing for run and batch run
- add plan module
- concurrent
-
"""
import json
@ -14,8 +20,15 @@ import inspect
import random
# Prompts
DYNAMIC_STOP_PROMPT = """
When you have finished the task from the Human, output a special token: <DONE>
This will enable you to leave the autonomous loop.
"""
# Constants
FLOW_SYSTEM_PROMPT = """
FLOW_SYSTEM_PROMPT = f"""
You are an autonomous agent granted autonomy from a Flow structure.
Your role is to engage in multi-step conversations with your self or the user,
generate long-form content like blogs, screenplays, or SOPs,
@ -23,19 +36,15 @@ and accomplish tasks. You can have internal dialogues with yourself or can inter
to aid in these complex tasks. Your responses should be coherent, contextually relevant, and tailored to the task at hand.
When you have finished the task, and you feel as if you are done: output a special token: <DONE>
This will enable you to leave the flow loop.
{DYNAMIC_STOP_PROMPT}
"""
DYNAMIC_STOP_PROMPT = """
When you have finished the task, and you feel as if you are done: output a special token: <DONE>
This will enable you to leave the flow loop.
"""
# Utility functions
# Custome stopping condition
# Custom stopping condition
def stop_when_repeats(response: str) -> bool:
# Stop if the word stop appears in the response
return "Stop" in response.lower()
@ -182,6 +191,7 @@ class Flow:
def print_dashboard(self, task: str):
"""Print dashboard"""
model_config = self.get_llm_init_params()
print(colored("Initializing Agent Dashboard...", "yellow"))
dashboard = print(
colored(
@ -195,6 +205,8 @@ class Flow:
----------------------------------------
Flow Configuration:
Name: {self.name}
System Prompt: {self.system_message}
Task: {task}
Max Loops: {self.max_loops}
Stopping Condition: {self.stopping_condition}
@ -202,14 +214,35 @@ class Flow:
Retry Attempts: {self.retry_attempts}
Retry Interval: {self.retry_interval}
Interactive: {self.interactive}
Dashboard: {self.dashboard}
Dynamic Temperature: {self.dynamic_temperature}
Autosave: {self.autosave}
Saved State: {self.saved_state}
----------------------------------------
""",
"green",
)
)
print(dashboard)
# print(dashboard)
def activate_autonomous_agent(self):
"""Print the autonomous agent activation message"""
try:
print(colored("Initializing Autonomous Agent...", "yellow"))
# print(colored("Loading modules...", "yellow"))
# print(colored("Modules loaded successfully.", "green"))
print(colored("Autonomous Agent Activated.", "cyan", attrs=["bold"]))
print(colored("All systems operational. Executing task...", "green"))
except Exception as error:
print(
colored(
"Error activating autonomous agent. Try optimizing your parameters...",
"red",
)
)
print(error)
def run(self, task: str, **kwargs):
"""
@ -235,6 +268,11 @@ class Flow:
# history = [f"Human: {task}"]
# self.memory.append(history)
# print(colored(">>> Autonomous Agent Activated", "cyan", attrs=["bold"]))
self.activate_autonomous_agent()
# if self.autosave:
response = task
history = [f"Human: {task}"]
@ -284,7 +322,10 @@ class Flow:
return response # , history
def __call__(self, task: str, save: bool = True, **kwargs):
async def arun(self, task: str, **kwargs):
"""Async run"""
pass
"""
Run the autonomous agent loop
@ -298,15 +339,17 @@ class Flow:
4. If stopping condition is not met, generate a response
5. Repeat until stopping condition is met or max_loops is reached
Example:
>>> out = flow.run("Generate a 10,000 word blog on health and wellness.")
"""
# Start with a new history or continue from the last saved state
if not self.memory or not self.memory[-1]:
history = [f"Human: {task}"]
else:
history = self.memory[-1]
# Restore from saved state if provided, otherwise start with a new history
# if self.saved_state:
# self.load_state(self.saved_state)
# history = self.memory[-1]
# print(f"Loaded state from {self.saved_state}")
# else:
# history = [f"Human: {task}"]
# self.memory.append(history)
print(colored(">>> Autonomous Agent Activated", "cyan", attrs=["bold"]))
response = task
history = [f"Human: {task}"]
@ -315,12 +358,9 @@ class Flow:
if self.dashboard:
self.print_dashboard(task)
# Start or continue the loop process
for i in range(len(history), self.max_loops):
for i in range(self.max_loops):
print(colored(f"\nLoop {i+1} of {self.max_loops}", "blue"))
print("\n")
response = history[-1].split(": ", 1)[-1] # Get the last response
if self._check_stopping_condition(response) or parse_done_token(response):
break
@ -332,8 +372,8 @@ class Flow:
while attempt < self.retry_attempts:
try:
response = self.llm(
self.agent_history_prompt(FLOW_SYSTEM_PROMPT, response)
** kwargs,
self.agent_history_prompt(FLOW_SYSTEM_PROMPT, response),
**kwargs,
)
# print(f"Next query: {response}")
# break
@ -355,8 +395,8 @@ class Flow:
time.sleep(self.loop_interval)
self.memory.append(history)
# if save:
# self.save_state("flow_history.json")
# if self.autosave:
# self.save_state("flow_state.json")
return response # , history

@ -1,99 +1,398 @@
"""
Sequential Workflow
TODO:
- Add a method to update the arguments of a task
- Add a method to get the results of each task
- Add a method to get the results of a specific task
- Add a method to get the results of the workflow
- Add a method to get the results of the workflow as a dataframe
from swarms.models import OpenAIChat, Mistral
from swarms.structs import SequentialWorkflow
- Add a method to run the workflow in parallel with a pool of workers and a queue and a dashboard
- Add a dashboard to visualize the workflow
- Add async support
- Add context manager support
- Add workflow history
"""
import json
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List, Optional, Union
from termcolor import colored
from pydantic import BaseModel, validator
llm = OpenAIChat(openai_api_key="")
mistral = Mistral()
from swarms.structs.flow import Flow
# Max loops will run over the sequential pipeline twice
workflow = SequentialWorkflow(max_loops=2)
workflow.add("What's the weather in miami", llm)
# Define a generic Task that can handle different types of callable objects
@dataclass
class Task:
"""
Task class for running a task in a sequential workflow.
workflow.add("Create a report on these metrics", mistral)
workflow.run()
Examples:
>>> from swarms.structs import Task, Flow
>>> from swarms.models import OpenAIChat
>>> flow = Flow(llm=OpenAIChat(openai_api_key=""), max_loops=1, dashboard=False)
>>> task = Task(description="What's the weather in miami", flow=flow)
>>> task.execute()
>>> task.result
"""
from dataclasses import dataclass, field
from typing import List, Any, Dict, Callable, Union
from swarms.models import OpenAIChat
from swarms.structs import Flow
# Define a generic Task that can handle different types of callable objects
@dataclass
class Task:
"""
description: str
model: Union[Callable, Flow]
flow: Union[Callable, Flow]
args: List[Any] = field(default_factory=list)
kwargs: Dict[str, Any] = field(default_factory=dict)
result: Any = None
history: List[Any] = field(default_factory=list)
def execute(self):
if isinstance(self.model, Flow):
self.result = self.model.run(*self.args, **self.kwargs)
"""
Execute the task.
Raises:
ValueError: If a Flow instance is used as a task and the 'task' argument is not provided.
"""
if isinstance(self.flow, Flow):
# Add a prompt to notify the Flow of the sequential workflow
if "prompt" in self.kwargs:
self.kwargs["prompt"] += (
f"\n\nPrevious output: {self.result}" if self.result else ""
)
else:
self.kwargs["prompt"] = f"Main task: {self.description}" + (
f"\n\nPrevious output: {self.result}" if self.result else ""
)
self.result = self.flow.run(*self.args, **self.kwargs)
else:
self.result = self.model(*self.args, **self.kwargs)
self.result = self.flow(*self.args, **self.kwargs)
self.history.append(self.result)
# SequentialWorkflow class definition using dataclasses
@dataclass
class SequentialWorkflow:
"""
SequentialWorkflow class for running a sequence of tasks using N number of autonomous agents.
Args:
max_loops (int): The maximum number of times to run the workflow.
dashboard (bool): Whether to display the dashboard for the workflow.
Attributes:
tasks (List[Task]): The list of tasks to execute.
max_loops (int): The maximum number of times to run the workflow.
dashboard (bool): Whether to display the dashboard for the workflow.
Examples:
>>> from swarms.models import OpenAIChat
>>> from swarms.structs import SequentialWorkflow
>>> llm = OpenAIChat(openai_api_key="")
>>> workflow = SequentialWorkflow(max_loops=1)
>>> workflow.add("What's the weather in miami", llm)
>>> workflow.add("Create a report on these metrics", llm)
>>> workflow.run()
>>> workflow.tasks
"""
tasks: List[Task] = field(default_factory=list)
max_loops: int = 1
autosave: bool = False
saved_state_filepath: Optional[str] = "sequential_workflow_state.json"
restore_state_filepath: Optional[str] = None
dashboard: bool = False
def add(
self, description: str, model: Union[Callable, Flow], *args, **kwargs
) -> None:
def add(self, task: str, flow: Union[Callable, Flow], *args, **kwargs) -> None:
"""
Add a task to the workflow.
Args:
task (str): The task description or the initial input for the Flow.
flow (Union[Callable, Flow]): The model or flow to execute the task.
*args: Additional arguments to pass to the task execution.
**kwargs: Additional keyword arguments to pass to the task execution.
"""
# If the flow is a Flow instance, we include the task in kwargs for Flow.run()
if isinstance(flow, Flow):
kwargs["task"] = task # Set the task as a keyword argument for Flow
# Append the task to the tasks list
self.tasks.append(
Task(description=description, model=model, args=list(args), kwargs=kwargs)
Task(description=task, flow=flow, args=list(args), kwargs=kwargs)
)
def run(self) -> None:
for _ in range(self.max_loops):
for task in self.tasks:
# Check if the current task can be executed
if task.result is None:
task.execute()
# Pass the result as an argument to the next task if it exists
next_task_index = self.tasks.index(task) + 1
if next_task_index < len(self.tasks):
next_task = self.tasks[next_task_index]
next_task.args.insert(0, task.result)
def reset_workflow(self) -> None:
"""Resets the workflow by clearing the results of each task."""
for task in self.tasks:
task.result = None
def get_task_results(self) -> Dict[str, Any]:
"""
Returns the results of each task in the workflow.
# Example usage
api_key = "" # Your actual API key here
Returns:
Dict[str, Any]: The results of each task in the workflow
"""
return {task.description: task.result for task in self.tasks}
# Initialize the language model
llm = OpenAIChat(
openai_api_key=api_key,
temperature=0.5,
max_tokens=3000,
)
def remove_task(self, task_description: str) -> None:
self.tasks = [
task for task in self.tasks if task.description != task_description
]
# Initialize the Flow with the language model
flow1 = Flow(llm=llm, max_loops=5, dashboard=True)
def update_task(self, task_description: str, **updates) -> None:
"""
Updates the arguments of a task in the workflow.
# Create another Flow for a different task
flow2 = Flow(llm=llm, max_loops=5, dashboard=True)
Args:
task_description (str): The description of the task to update.
**updates: The updates to apply to the task.
# Create the workflow
workflow = SequentialWorkflow(max_loops=1)
Raises:
ValueError: If the task is not found in the workflow.
# Add tasks to the workflow
workflow.add("Generate a 10,000 word blog on health and wellness.", flow1)
Examples:
>>> from swarms.models import OpenAIChat
>>> from swarms.structs import SequentialWorkflow
>>> llm = OpenAIChat(openai_api_key="")
>>> workflow = SequentialWorkflow(max_loops=1)
>>> workflow.add("What's the weather in miami", llm)
>>> workflow.add("Create a report on these metrics", llm)
>>> workflow.update_task("What's the weather in miami", max_tokens=1000)
>>> workflow.tasks[0].kwargs
{'max_tokens': 1000}
# Suppose the next task takes the output of the first task as input
workflow.add("Summarize the generated blog", flow2)
"""
for task in self.tasks:
if task.description == task_description:
task.kwargs.update(updates)
break
else:
raise ValueError(f"Task {task_description} not found in workflow.")
# Run the workflow
workflow.run()
def save_workflow_state(
self, filepath: Optional[str] = "sequential_workflow_state.json", **kwargs
) -> None:
"""
Saves the workflow state to a json file.
Args:
filepath (str): The path to save the workflow state to.
Examples:
>>> from swarms.models import OpenAIChat
>>> from swarms.structs import SequentialWorkflow
>>> llm = OpenAIChat(openai_api_key="")
>>> workflow = SequentialWorkflow(max_loops=1)
>>> workflow.add("What's the weather in miami", llm)
>>> workflow.add("Create a report on these metrics", llm)
>>> workflow.save_workflow_state("sequential_workflow_state.json")
"""
filepath = filepath or self.saved_state_filepath
with open(filepath, "w") as f:
# Saving the state as a json for simplicity
state = {
"tasks": [
{
"description": task.description,
"args": task.args,
"kwargs": task.kwargs,
"result": task.result,
"history": task.history,
}
for task in self.tasks
],
"max_loops": self.max_loops,
}
json.dump(state, f, indent=4)
def workflow_bootup(self, **kwargs) -> None:
bootup = print(
colored(
f"""
Sequential Workflow Initializing...""",
"green",
attrs=["bold", "underline"],
)
)
def workflow_dashboard(self, **kwargs) -> None:
"""
Displays a dashboard for the workflow.
Args:
**kwargs: Additional keyword arguments to pass to the dashboard.
Examples:
>>> from swarms.models import OpenAIChat
>>> from swarms.structs import SequentialWorkflow
>>> llm = OpenAIChat(openai_api_key="")
>>> workflow = SequentialWorkflow(max_loops=1)
>>> workflow.add("What's the weather in miami", llm)
>>> workflow.add("Create a report on these metrics", llm)
>>> workflow.workflow_dashboard()
"""
dashboard = print(
colored(
f"""
Sequential Workflow Dashboard
--------------------------------
Tasks: {len(self.tasks)}
Max Loops: {self.max_loops}
Autosave: {self.autosave}
Autosave Filepath: {self.saved_state_filepath}
Restore Filepath: {self.restore_state_filepath}
--------------------------------
Metadata:
kwargs: {kwargs}
""",
"cyan",
attrs=["bold", "underline"],
)
)
def load_workflow_state(self, filepath: str = None, **kwargs) -> None:
"""
Loads the workflow state from a json file and restores the workflow state.
Args:
filepath (str): The path to load the workflow state from.
Examples:
>>> from swarms.models import OpenAIChat
>>> from swarms.structs import SequentialWorkflow
>>> llm = OpenAIChat(openai_api_key="")
>>> workflow = SequentialWorkflow(max_loops=1)
>>> workflow.add("What's the weather in miami", llm)
>>> workflow.add("Create a report on these metrics", llm)
>>> workflow.save_workflow_state("sequential_workflow_state.json")
>>> workflow.load_workflow_state("sequential_workflow_state.json")
"""
filepath = filepath or self.restore_state_filepath
with open(filepath, "r") as f:
state = json.load(f)
self.max_loops = state["max_loops"]
self.tasks = []
for task_state in state["tasks"]:
task = Task(
description=task_state["description"],
flow=task_state["flow"],
args=task_state["args"],
kwargs=task_state["kwargs"],
result=task_state["result"],
history=task_state["history"],
)
self.tasks.append(task)
def run(self) -> None:
"""
Run the workflow.
Raises:
ValueError: If a Flow instance is used as a task and the 'task' argument is not provided.
"""
try:
self.workflow_bootup()
for _ in range(self.max_loops):
for task in self.tasks:
# Check if the current task can be executed
if task.result is None:
# Check if the flow is a Flow and a 'task' argument is needed
if isinstance(task.flow, Flow):
# Ensure that 'task' is provided in the kwargs
if "task" not in task.kwargs:
raise ValueError(
f"The 'task' argument is required for the Flow flow execution in '{task.description}'"
)
# Separate the 'task' argument from other kwargs
flow_task_arg = task.kwargs.pop("task")
task.result = task.flow.run(
flow_task_arg, *task.args, **task.kwargs
)
else:
# If it's not a Flow instance, call the flow directly
task.result = task.flow(*task.args, **task.kwargs)
# Pass the result as an argument to the next task if it exists
next_task_index = self.tasks.index(task) + 1
if next_task_index < len(self.tasks):
next_task = self.tasks[next_task_index]
if isinstance(next_task.flow, Flow):
# For Flow flows, 'task' should be a keyword argument
next_task.kwargs["task"] = task.result
else:
# For other callable flows, the result is added to args
next_task.args.insert(0, task.result)
# Autosave the workflow state
if self.autosave:
self.save_workflow_state("sequential_workflow_state.json")
except Exception as e:
print(
colored(
f"Error initializing the Sequential workflow: {e} try optimizing your inputs like the flow class and task description",
"red",
attrs=["bold", "underline"],
)
)
async def arun(self) -> None:
"""
Asynchronously run the workflow.
Raises:
ValueError: If a Flow instance is used as a task and the 'task' argument is not provided.
"""
for _ in range(self.max_loops):
for task in self.tasks:
# Check if the current task can be executed
if task.result is None:
# Check if the flow is a Flow and a 'task' argument is needed
if isinstance(task.flow, Flow):
# Ensure that 'task' is provided in the kwargs
if "task" not in task.kwargs:
raise ValueError(
f"The 'task' argument is required for the Flow flow execution in '{task.description}'"
)
# Separate the 'task' argument from other kwargs
flow_task_arg = task.kwargs.pop("task")
task.result = await task.flow.arun(
flow_task_arg, *task.args, **task.kwargs
)
else:
# If it's not a Flow instance, call the flow directly
task.result = await task.flow(*task.args, **task.kwargs)
# Pass the result as an argument to the next task if it exists
next_task_index = self.tasks.index(task) + 1
if next_task_index < len(self.tasks):
next_task = self.tasks[next_task_index]
if isinstance(next_task.flow, Flow):
# For Flow flows, 'task' should be a keyword argument
next_task.kwargs["task"] = task.result
else:
# For other callable flows, the result is added to args
next_task.args.insert(0, task.result)
# Output the results
for task in workflow.tasks:
print(f"Task: {task.description}, Result: {task.result}")
# Autosave the workflow state
if self.autosave:
self.save_workflow_state("sequential_workflow_state.json")

@ -0,0 +1,306 @@
import asyncio
import os
from unittest.mock import patch
import pytest
from swarms.models import OpenAIChat
from swarms.structs.flow import Flow
from swarms.structs.sequential_workflow import SequentialWorkflow, Task
# Mock the OpenAI API key using environment variables
os.environ["OPENAI_API_KEY"] = "mocked_api_key"
# Mock OpenAIChat class for testing
class MockOpenAIChat:
def __init__(self, *args, **kwargs):
pass
def run(self, *args, **kwargs):
return "Mocked result"
# Mock Flow class for testing
class MockFlow:
def __init__(self, *args, **kwargs):
pass
def run(self, *args, **kwargs):
return "Mocked result"
# Mock SequentialWorkflow class for testing
class MockSequentialWorkflow:
def __init__(self, *args, **kwargs):
pass
def add(self, *args, **kwargs):
pass
def run(self):
pass
# Test Task class
def test_task_initialization():
description = "Sample Task"
flow = MockOpenAIChat()
task = Task(description=description, flow=flow)
assert task.description == description
assert task.flow == flow
def test_task_execute():
description = "Sample Task"
flow = MockOpenAIChat()
task = Task(description=description, flow=flow)
task.execute()
assert task.result == "Mocked result"
# Test SequentialWorkflow class
def test_sequential_workflow_initialization():
workflow = SequentialWorkflow()
assert isinstance(workflow, SequentialWorkflow)
assert len(workflow.tasks) == 0
assert workflow.max_loops == 1
assert workflow.autosave == False
assert workflow.saved_state_filepath == "sequential_workflow_state.json"
assert workflow.restore_state_filepath == None
assert workflow.dashboard == False
def test_sequential_workflow_add_task():
workflow = SequentialWorkflow()
task_description = "Sample Task"
task_flow = MockOpenAIChat()
workflow.add(task_description, task_flow)
assert len(workflow.tasks) == 1
assert workflow.tasks[0].description == task_description
assert workflow.tasks[0].flow == task_flow
def test_sequential_workflow_reset_workflow():
workflow = SequentialWorkflow()
task_description = "Sample Task"
task_flow = MockOpenAIChat()
workflow.add(task_description, task_flow)
workflow.reset_workflow()
assert workflow.tasks[0].result == None
def test_sequential_workflow_get_task_results():
workflow = SequentialWorkflow()
task_description = "Sample Task"
task_flow = MockOpenAIChat()
workflow.add(task_description, task_flow)
workflow.run()
results = workflow.get_task_results()
assert len(results) == 1
assert task_description in results
assert results[task_description] == "Mocked result"
def test_sequential_workflow_remove_task():
workflow = SequentialWorkflow()
task1_description = "Task 1"
task2_description = "Task 2"
task1_flow = MockOpenAIChat()
task2_flow = MockOpenAIChat()
workflow.add(task1_description, task1_flow)
workflow.add(task2_description, task2_flow)
workflow.remove_task(task1_description)
assert len(workflow.tasks) == 1
assert workflow.tasks[0].description == task2_description
def test_sequential_workflow_update_task():
workflow = SequentialWorkflow()
task_description = "Sample Task"
task_flow = MockOpenAIChat()
workflow.add(task_description, task_flow)
workflow.update_task(task_description, max_tokens=1000)
assert workflow.tasks[0].kwargs["max_tokens"] == 1000
def test_sequential_workflow_save_workflow_state():
workflow = SequentialWorkflow()
task_description = "Sample Task"
task_flow = MockOpenAIChat()
workflow.add(task_description, task_flow)
workflow.save_workflow_state("test_state.json")
assert os.path.exists("test_state.json")
os.remove("test_state.json")
def test_sequential_workflow_load_workflow_state():
workflow = SequentialWorkflow()
task_description = "Sample Task"
task_flow = MockOpenAIChat()
workflow.add(task_description, task_flow)
workflow.save_workflow_state("test_state.json")
workflow.load_workflow_state("test_state.json")
assert len(workflow.tasks) == 1
assert workflow.tasks[0].description == task_description
os.remove("test_state.json")
def test_sequential_workflow_run():
workflow = SequentialWorkflow()
task_description = "Sample Task"
task_flow = MockOpenAIChat()
workflow.add(task_description, task_flow)
workflow.run()
assert workflow.tasks[0].result == "Mocked result"
def test_sequential_workflow_workflow_bootup(capfd):
workflow = SequentialWorkflow()
workflow.workflow_bootup()
out, _ = capfd.readouterr()
assert "Sequential Workflow Initializing..." in out
def test_sequential_workflow_workflow_dashboard(capfd):
workflow = SequentialWorkflow()
workflow.workflow_dashboard()
out, _ = capfd.readouterr()
assert "Sequential Workflow Dashboard" in out
# Mock Flow class for async testing
class MockAsyncFlow:
def __init__(self, *args, **kwargs):
pass
async def arun(self, *args, **kwargs):
return "Mocked result"
# Test async execution in SequentialWorkflow
@pytest.mark.asyncio
async def test_sequential_workflow_arun():
workflow = SequentialWorkflow()
task_description = "Sample Task"
task_flow = MockAsyncFlow()
workflow.add(task_description, task_flow)
await workflow.arun()
assert workflow.tasks[0].result == "Mocked result"
def test_real_world_usage_with_openai_key():
# Initialize the language model
llm = OpenAIChat()
assert isinstance(llm, OpenAIChat)
def test_real_world_usage_with_flow_and_openai_key():
# Initialize a flow with the language model
flow = Flow(llm=OpenAIChat())
assert isinstance(flow, Flow)
def test_real_world_usage_with_sequential_workflow():
# Initialize a sequential workflow
workflow = SequentialWorkflow()
assert isinstance(workflow, SequentialWorkflow)
def test_real_world_usage_add_tasks():
# Create a sequential workflow and add tasks
workflow = SequentialWorkflow()
task1_description = "Task 1"
task2_description = "Task 2"
task1_flow = OpenAIChat()
task2_flow = OpenAIChat()
workflow.add(task1_description, task1_flow)
workflow.add(task2_description, task2_flow)
assert len(workflow.tasks) == 2
assert workflow.tasks[0].description == task1_description
assert workflow.tasks[1].description == task2_description
def test_real_world_usage_run_workflow():
# Create a sequential workflow, add a task, and run the workflow
workflow = SequentialWorkflow()
task_description = "Sample Task"
task_flow = OpenAIChat()
workflow.add(task_description, task_flow)
workflow.run()
assert workflow.tasks[0].result is not None
def test_real_world_usage_dashboard_display():
# Create a sequential workflow, add tasks, and display the dashboard
workflow = SequentialWorkflow()
task1_description = "Task 1"
task2_description = "Task 2"
task1_flow = OpenAIChat()
task2_flow = OpenAIChat()
workflow.add(task1_description, task1_flow)
workflow.add(task2_description, task2_flow)
with patch("builtins.print") as mock_print:
workflow.workflow_dashboard()
mock_print.assert_called()
def test_real_world_usage_async_execution():
# Create a sequential workflow, add an async task, and run the workflow asynchronously
workflow = SequentialWorkflow()
task_description = "Sample Task"
async_task_flow = OpenAIChat()
async def async_run_workflow():
await workflow.arun()
workflow.add(task_description, async_task_flow)
asyncio.run(async_run_workflow())
assert workflow.tasks[0].result is not None
def test_real_world_usage_multiple_loops():
# Create a sequential workflow with multiple loops, add a task, and run the workflow
workflow = SequentialWorkflow(max_loops=3)
task_description = "Sample Task"
task_flow = OpenAIChat()
workflow.add(task_description, task_flow)
workflow.run()
assert workflow.tasks[0].result is not None
def test_real_world_usage_autosave_state():
# Create a sequential workflow with autosave, add a task, run the workflow, and check if state is saved
workflow = SequentialWorkflow(autosave=True)
task_description = "Sample Task"
task_flow = OpenAIChat()
workflow.add(task_description, task_flow)
workflow.run()
assert workflow.tasks[0].result is not None
assert os.path.exists("sequential_workflow_state.json")
os.remove("sequential_workflow_state.json")
def test_real_world_usage_load_state():
# Create a sequential workflow, add a task, save state, load state, and run the workflow
workflow = SequentialWorkflow()
task_description = "Sample Task"
task_flow = OpenAIChat()
workflow.add(task_description, task_flow)
workflow.run()
workflow.save_workflow_state("test_state.json")
workflow.load_workflow_state("test_state.json")
workflow.run()
assert workflow.tasks[0].result is not None
os.remove("test_state.json")
def test_real_world_usage_update_task_args():
# Create a sequential workflow, add a task, and update task arguments
workflow = SequentialWorkflow()
task_description = "Sample Task"
task_flow = OpenAIChat()
workflow.add(task_description, task_flow)
workflow.update_task(task_description, max_tokens=1000)
assert workflow.tasks[0].kwargs["max_tokens"] == 1000
def test_real_world_usage_remove_task():
# Create a sequential workflow, add tasks, remove a task, and run the workflow
workflow = SequentialWorkflow()
task1_description = "Task 1"
task2_description = "Task 2"
task1_flow = OpenAIChat()
task2_flow = OpenAIChat()
workflow.add(task1_description, task1_flow)
workflow.add(task2_description, task2_flow)
workflow.remove_task(task1_description)
workflow.run()
assert len(workflow.tasks) == 1
assert workflow.tasks[0].description == task2_description
def test_real_world_usage_with_environment_variables():
# Ensure that the OpenAI API key is set using environment variables
assert "OPENAI_API_KEY" in os.environ
assert os.environ["OPENAI_API_KEY"] == "mocked_api_key"
del os.environ["OPENAI_API_KEY"] # Clean up after the test
def test_real_world_usage_no_openai_key():
# Ensure that an exception is raised when the OpenAI API key is not set
with pytest.raises(ValueError):
llm = OpenAIChat() # API key not provided, should raise an exception