# ConcurrentWorkflow Documentation
## Overview
The `ConcurrentWorkflow` class is designed to facilitate the concurrent execution of multiple agents, each tasked with solving a specific query or problem. This class is particularly useful in scenarios where multiple agents need to work in parallel, allowing for efficient resource utilization and faster completion of tasks. The workflow manages the execution, handles streaming callbacks, and provides optional dashboard monitoring for real-time progress tracking.
Full Path: `swarms.structs.concurrent_workflow`
### Key Features
| Feature | Description |
|---------------------------|-----------------------------------------------------------------------------------------------|
| Concurrent Execution | Runs multiple agents simultaneously using Python's `ThreadPoolExecutor` |
| Dashboard Monitoring | Optional real-time dashboard for tracking agent status and progress |
| Streaming Support | Full support for streaming callbacks during agent execution |
| Error Handling | Comprehensive error handling with logging and status tracking |
| Batch Processing | Supports running multiple tasks sequentially |
| Resource Management | Automatic cleanup of resources and connections |
| Flexible Output Types | Multiple output format options for conversation history |
| Agent Status Tracking | Real-time tracking of agent execution states (pending, running, completed, error) |
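As the table notes, concurrent execution is built on Python's `ThreadPoolExecutor`. The sketch below is purely illustrative of that fan-out pattern; it is not the library's internal code, and it assumes each agent exposes a `run(task)` method, as the `Agent` objects in the examples below do.
```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def fan_out(agents, task):
    """Illustrative sketch: run each agent's `run` method in its own thread."""
    results = {}
    with ThreadPoolExecutor(max_workers=len(agents)) as executor:
        # Submit one job per agent; all jobs run concurrently.
        futures = {executor.submit(agent.run, task): agent for agent in agents}
        for future in as_completed(futures):
            agent = futures[future]
            results[getattr(agent, "agent_name", repr(agent))] = future.result()
    return results
```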
## Class Definition
| `_cache` | `dict` | The cache for storing agent outputs. |
| `_progress_bar` | `tqdm` | The progress bar for tracking execution. |
## Constructor
### ConcurrentWorkflow.\_\_init\_\_
Initializes the `ConcurrentWorkflow` class with the provided parameters.
#### Parameters
| Parameter | Type | Default Value | Description |
|-----------------------|-------------------------------|----------------------------------------|-----------------------------------------------------------|
| `id` | `str` | `swarm_id()` | Unique identifier for the workflow instance. |
| `name` | `str` | `"ConcurrentWorkflow"` | The name of the workflow. |
| `description` | `str` | `"Execution of multiple agents concurrently"` | A brief description of the workflow. |
| `agents` | `List[Union[Agent, Callable]]` | `None` | A list of agents or callables to be executed concurrently. |
| `auto_save` | `bool` | `True` | Flag indicating whether to automatically save metadata. |
| `output_type` | `str` | `"dict-all-except-first"` | The type of output format. |
| `max_loops` | `int` | `1` | Maximum number of loops for each agent. |
| `auto_generate_prompts` | `bool` | `False` | Flag indicating whether to auto-generate prompts for agents. |
| `show_dashboard` | `bool` | `False` | Flag indicating whether to show real-time dashboard. |
#### Raises
- `ValueError` : If no agents are provided or if the agents list is empty.
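For orientation, here is a minimal construction sketch that uses only the parameters documented above; the two `Agent` definitions are assumptions modeled on the usage examples later in this page.
```python
from swarms import Agent, ConcurrentWorkflow

# Assumed agents; see the usage examples below for fuller Agent configuration.
agents = [
    Agent(agent_name="Researcher", system_prompt="You research topics.", model_name="gpt-4", max_loops=1),
    Agent(agent_name="Writer", system_prompt="You write summaries.", model_name="gpt-4", max_loops=1),
]

workflow = ConcurrentWorkflow(
    name="Example Workflow",
    description="Two agents answering the same task concurrently",
    agents=agents,
    auto_save=True,
    output_type="dict-all-except-first",
    show_dashboard=False,
)

# Passing no agents would raise ValueError, per the Raises section above.
```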
## Methods
### ConcurrentWorkflow.fix_agents
Configures agents for dashboard mode by disabling print statements when dashboard is enabled.
#### Returns
- `List[Union[Agent, Callable]]` : The configured list of agents.
```python
agents = workflow.fix_agents()
```
### ConcurrentWorkflow.reliability_check
Validates workflow configuration and ensures agents are properly set up.
#### Raises
- `ValueError` : If no agents are provided or if the agents list is empty.
```python
workflow.reliability_check()
```
### ConcurrentWorkflow.activate_auto_prompt_engineering
Enables automatic prompt generation for all agents in the workflow.
```python
workflow.activate_auto_prompt_engineering()
```
### ConcurrentWorkflow.display_agent_dashboard
Displays real-time dashboard showing agent status and progress.
#### Parameters
| Parameter | Type | Default Value | Description |
|-------------|---------|----------------------------|-----------------------------------------------------------|
| `title` | `str` | `"ConcurrentWorkflow Dashboard"` | Title for the dashboard. |
| `is_final` | `bool` | `False` | Whether this is the final dashboard display. |
```python
workflow.display_agent_dashboard("Execution Progress", is_final=False)
```
### ConcurrentWorkflow.run_with_dashboard
Executes agents with real-time dashboard monitoring and streaming support.
#### Parameters
| Parameter | Type | Description |
|-----------------------|-----------------------------------|-----------------------------------------------------------|
| `task` | `str` | The task to execute. |
| `img` | `Optional[str]` | Optional image for processing. |
| `imgs` | `Optional[List[str]]` | Optional list of images for processing. |
| `streaming_callback` | `Optional[Callable[[str, str, bool], None]]` | Callback for streaming agent outputs. |
#### Returns
- `Any` : The formatted conversation history based on `output_type`.
```python
result = workflow.run_with_dashboard(
    task="Analyze this data",
    streaming_callback=lambda agent, chunk, done: print(f"{agent}: {chunk}"),
)
```
### ConcurrentWorkflow._run
Executes agents concurrently without dashboard monitoring.
#### Parameters
| Parameter | Type | Description |
|-----------------------|-----------------------------------|-----------------------------------------------------------|
| `task` | `str` | The task to execute. |
| `img` | `Optional[str]` | Optional image for processing. |
| `imgs` | `Optional[List[str]]` | Optional list of images for processing. |
| `streaming_callback` | `Optional[Callable[[str, str, bool], None]]` | Callback for streaming agent outputs. |
#### Returns
- `Any` : The formatted conversation history based on `output_type`.
```python
result = workflow._run(
    task="Process this task",
    streaming_callback=lambda agent, chunk, done: print(f"{agent}: {chunk}"),
)
```
### ConcurrentWorkflow._run_agent_with_streaming
Runs a single agent with streaming callback support.
#### Parameters
| Parameter | Type | Description |
|-----------------------|-----------------------------------|-----------------------------------------------------------|
| `agent` | `Union[Agent, Callable]` | The agent or callable to execute. |
| `task` | `str` | The task to execute. |
| `img` | `Optional[str]` | Optional image for processing. |
| `imgs` | `Optional[List[str]]` | Optional list of images for processing. |
| `streaming_callback` | `Optional[Callable[[str, str, bool], None]]` | Callback for streaming outputs. |
#### Returns
- `str` : The output from the agent execution.
```python
output = workflow._run_agent_with_streaming(
    agent=my_agent,
    task="Analyze data",
    streaming_callback=lambda agent, chunk, done: print(f"{agent}: {chunk}"),
)
```
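If you need more than side-effect printing, a streaming callback can also accumulate output per agent. This is a minimal user-side sketch, assuming the callback receives `(agent_name, chunk, is_final)`, matching the `Callable[[str, str, bool], None]` type in the tables above and the callback used in Example 2 below.
```python
from collections import defaultdict

buffers = defaultdict(list)  # agent name -> list of streamed chunks

def collecting_callback(agent_name: str, chunk: str, is_final: bool) -> None:
    """Accumulate streamed chunks per agent and report when an agent finishes."""
    if chunk:
        buffers[agent_name].append(chunk)
    if is_final:
        full_output = "".join(buffers[agent_name])
        print(f"{agent_name} finished ({len(full_output)} characters streamed)")

# Usage sketch: pass it wherever a streaming_callback parameter is accepted.
# result = workflow.run(task="...", streaming_callback=collecting_callback)
```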
### ConcurrentWorkflow.cleanup
Cleans up resources and connections used by the workflow.
```python
workflow.cleanup()
```
### ConcurrentWorkflow.run
Main execution method that runs all agents concurrently.
#### Parameters
| Parameter | Type | Description |
|-----------------------|----------------------------------------------|-----------------------------------------------------------|
| `task` | `str` | The task to execute. |
| `img` | `Optional[str]` | Optional image for processing. |
| `imgs` | `Optional[List[str]]` | Optional list of images for processing. |
| `streaming_callback` | `Optional[Callable[[str, str, bool], None]]` | Callback for streaming agent outputs. |
#### Returns
- `Any` : The formatted conversation history based on `output_type`.
#### Raises
- `ValueError` : If an invalid device is specified.
- `Exception` : If any other error occurs during execution.
```python
result = workflow.run(
    task="What are the benefits of renewable energy?",
    streaming_callback=lambda agent, chunk, done: print(f"{agent}: {chunk}"),
)
```
### ConcurrentWorkflow.batch_run
Executes the workflow on multiple tasks sequentially.
#### Parameters
| Parameter | Type | Description |
|-----------------------|-----------------------------------|-----------------------------------------------------------|
| `tasks` | `List[str]` | List of tasks to execute. |
| `imgs` | `Optional[List[str]]` | Optional list of images corresponding to tasks. |
| `streaming_callback` | `Optional[Callable[[str, str, bool], None]]` | Callback for streaming outputs. |
#### Returns
- `List[Any]` : List of results for each task.
```python
results = workflow.batch_run(
    tasks=["Task 1", "Task 2", "Task 3"],
    streaming_callback=lambda agent, chunk, done: print(f"{agent}: {chunk}"),
)
```
## Usage Examples
### Example 1: Basic Concurrent Execution
```python
from swarms import Agent, ConcurrentWorkflow
# Initialize agents
agents = [
    Agent(
        agent_name="Research-Agent",
        system_prompt="You are a research specialist focused on gathering information.",
        model_name="gpt-4",
        max_loops=1,
    ),
    Agent(
        agent_name="Analysis-Agent",
        system_prompt="You are an analysis expert who synthesizes information.",
        model_name="gpt-4",
        max_loops=1,
    ),
    Agent(
        agent_name="Summary-Agent",
        system_prompt="You are a summarization expert who creates concise reports.",
        model_name="gpt-4",
        max_loops=1,
    ),
]
# Initialize workflow
workflow = ConcurrentWorkflow(
    name="Research Analysis Workflow",
    description="Concurrent execution of research, analysis, and summarization tasks",
    agents=agents,
    auto_save=True,
    output_type="dict-all-except-first",
    show_dashboard=False,
)
# Run workflow
task = "What are the benefits of using Python for data analysi s?"
task = "What are the environmental impacts of electric vehicle s?"
result = workflow.run(task)
print(result)
```
### Example 2: Dashboard Monitoring with Streaming
```python
import time
def streaming_callback(agent_name: str, chunk: str, is_final: bool):
    """Handle streaming output from agents."""
    if chunk:
        print(f"[{agent_name}] {chunk}", end="", flush=True)
    if is_final:
        print(f"\n[{agent_name}] Completed\n")
# Initialize workflow with dashboard
workflow = ConcurrentWorkflow(
    name="Monitored Workflow",
    agents=agents,
    show_dashboard=True,  # Enable real-time dashboard
    output_type="dict-all-except-first",
)
# Run with streaming and dashboard
task = "Analyze the future of artificial intelligence in healthcare"
result = workflow.run(
    task=task,
    streaming_callback=streaming_callback,
)
print("Final Result:", result)
```
### Example 3: Batch Processing Multiple Tasks
```python
# Define multiple tasks
tasks = [
    "What are the benefits of renewable energy adoption?",
    "How does blockchain technology impact supply chains?",
    "What are the challenges of implementing remote work policies?",
    "Analyze the growth of e-commerce in developing countries",
]
# Initialize workflow for batch processing
workflow = ConcurrentWorkflow(
    name="Batch Analysis Workflow",
    agents=agents,
    output_type="dict-all-except-first",
    show_dashboard=False,
)
# Process all tasks
results = workflow.batch_run(tasks=tasks)
# Display results
for i, (task, result) in enumerate(zip(tasks, results)):
    print(f"\n{'='*50}")
    print(f"Task {i+1}: {task}")
    print(f"{'='*50}")
    print(f"Result: {result}")
```
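If a batch includes image inputs, `batch_run` also accepts an `imgs` list. The sketch below assumes images pair element-wise with tasks, as the parameter description above suggests, and uses hypothetical file paths.
```python
# Hypothetical image paths; element-wise pairing with tasks is an assumption
# based on the `imgs` parameter description in the batch_run section above.
image_tasks = [
    "Describe the chart in this image",
    "Summarize the diagram in this image",
]
image_paths = [
    "/path/to/chart.png",
    "/path/to/diagram.png",
]

batch_results = workflow.batch_run(tasks=image_tasks, imgs=image_paths)
for task, result in zip(image_tasks, batch_results):
    print(f"{task}\n{result}\n")
```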
### Example 4: Auto-Prompt Engineering
```python
# Initialize agents without specific prompts
agents = [
    Agent(
        agent_name="Creative-Agent",
        model_name="gpt-4",
        max_loops=1,
    ),
    Agent(
        agent_name="Technical-Agent",
        model_name="gpt-4",
        max_loops=1,
    ),
]
# Initialize workflow with auto-prompt engineering
workflow = ConcurrentWorkflow(
    name="Auto-Prompt Workflow",
    agents=agents,
    auto_generate_prompts=True,  # Enable auto-prompt generation
    output_type="dict-all-except-first",
)
# Activate auto-prompt engineering (can also be done in init)
workflow.activate_auto_prompt_engineering()
# Run workflow
task = "Design a mobile app for fitness tracking"
result = workflow.run(task)
print(result)
```
### Example 5: Error Handling and Cleanup
```python
import logging
# Set up logging
logging.basicConfig(level=logging.INFO)
# Initialize workflow
workflow = ConcurrentWorkflow(
    name="Reliable Workflow",
    agents=agents,
    output_type="dict-all-except-first",
)
# Run workflow with proper error handling
try:
    task = "Generate a comprehensive report on quantum computing applications"
    result = workflow.run(task)
    print("Workflow completed successfully!")
    print(result)
except Exception as e:
    logging.error(f"Workflow failed: {str(e)}")
finally:
    # Always clean up resources
    workflow.cleanup()
    print("Resources cleaned up")
```
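If you prefer not to repeat the `try/finally` pattern, cleanup can be wrapped in a small user-side context manager. This is a sketch of application code rather than a `ConcurrentWorkflow` feature; it relies only on the `run` and `cleanup` methods documented above and assumes an `agents` list like the one in Example 1.
```python
from contextlib import contextmanager

@contextmanager
def managed_workflow(workflow):
    """Yield the workflow and guarantee cleanup() runs afterwards."""
    try:
        yield workflow
    finally:
        workflow.cleanup()

# Usage sketch:
with managed_workflow(ConcurrentWorkflow(name="Scoped Workflow", agents=agents)) as wf:
    print(wf.run("Summarize recent trends in battery storage"))
```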
### Example 6: Working with Images
```python
# Initialize agents capable of image processing
vision_agents = [
    Agent(
        agent_name="Image-Analysis-Agent",
        system_prompt="You are an expert at analyzing images and extracting insights.",
        model_name="gpt-4-vision-preview",
        max_loops=1,
    ),
    Agent(
        agent_name="Content-Description-Agent",
        system_prompt="You specialize in creating detailed descriptions of visual content.",
        model_name="gpt-4-vision-preview",
        max_loops=1,
    ),
]
# Initialize workflow for image processing
workflow = ConcurrentWorkflow(
    name="Image Analysis Workflow",
    agents=vision_agents,
    output_type="dict-all-except-first",
    show_dashboard=True,
)
# Run with image input
task = "Analyze this image and provide insights about its content"
image_path = "/path/to/image.jpg"
result = workflow.run(
    task=task,
    img=image_path,
    streaming_callback=lambda agent, chunk, done: print(f"{agent}: {chunk}"),
)
print(result)
```
### Example 7: Custom Callable Agents
```python
from typing import Optional
def custom_analysis_agent(task: str, img: Optional[str] = None, **kwargs) -> str:
    """Custom analysis function that can be used as an agent."""
    # Custom logic here
    return f"Custom analysis result for: {task}"

def sentiment_analysis_agent(task: str, img: Optional[str] = None, **kwargs) -> str:
    """Sentiment analysis function."""
    # Custom sentiment analysis logic
    return f"Sentiment analysis for: {task}"
# Mix of Agent objects and callable functions
mixed_agents = [
    Agent(
        agent_name="GPT-Agent",
        system_prompt="You are a helpful assistant.",
        model_name="gpt-4",
        max_loops=1,
    ),
    custom_analysis_agent,  # Callable function
    sentiment_analysis_agent,  # Another callable function
]
# Initialize workflow with mixed agent types
workflow = ConcurrentWorkflow(
    name="Mixed Agents Workflow",
    agents=mixed_agents,
    output_type="dict-all-except-first",
)
# Run workflow
task = "Analyze customer feedback and provide insights"
result = workflow.run(task)
print(result)
```
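The callable agents above share one shape: they take the task (plus an optional image and extra keyword arguments) and return a string. The `Protocol` below is an illustrative sketch of that contract, not a type exported by the library, and `keyword_extractor` is a hypothetical example.
```python
from typing import Optional, Protocol

class CallableAgent(Protocol):
    """Illustrative contract for callables passed in the `agents` list."""

    def __call__(self, task: str, img: Optional[str] = None, **kwargs) -> str:
        ...

def keyword_extractor(task: str, img: Optional[str] = None, **kwargs) -> str:
    """Hypothetical callable conforming to the contract above."""
    return f"Keywords for: {task}"

agent_under_test: CallableAgent = keyword_extractor  # satisfies the protocol structurally
```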