Merge branch 'yikes' of github.com:twilwa/swarms into yikes

pull/57/head
yikes 2 years ago
commit e413b4a5d8

.DS_Store (vendored binary file, not shown)

@ -0,0 +1,30 @@
name: Linting and Formatting
on:
  push:
    branches:
      - main
jobs:
  lint_and_format:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
      - name: Set up Python
        uses: actions/setup-python@v3
        with:
          python-version: 3.x
      - name: Install dependencies
        run: pip install -r requirements.txt
      - name: Find Python files
        run: find swarms -name "*.py" -type f -exec autopep8 --in-place --aggressive --aggressive {} +
      - name: Push changes
        uses: ad-m/github-push-action@master
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}

@ -0,0 +1,42 @@
name: Continuous Integration
on:
  push:
    branches:
      - main
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: 3.x
      - name: Install dependencies
        run: pip install -r requirements.txt
      - name: Run unit tests
        run: pytest tests/unit
      - name: Run integration tests
        run: pytest tests/integration
      - name: Run code coverage
        run: pytest --cov=swarms tests/
      - name: Run linters
        run: pylint swarms
      - name: Build documentation
        run: make docs
      - name: Validate documentation
        run: sphinx-build -b linkcheck docs build/docs
      - name: Run performance tests
        run: pytest tests/performance

@ -0,0 +1,28 @@
name: Documentation Tests
on:
  push:
    branches:
      - master
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
      - name: Set up Python
        uses: actions/setup-python@v3
        with:
          python-version: 3.x
      - name: Install dependencies
        run: pip install -r requirements.txt
      - name: Build documentation
        run: make docs
      - name: Validate documentation
        run: sphinx-build -b linkcheck docs build/docs

@ -0,0 +1,25 @@
name: Linting
on:
  push:
    branches:
      - master
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Set up Python
        uses: actions/setup-python@v3
        with:
          python-version: 3.x
      - name: Install dependencies
        run: pip install -r requirements.txt
      - name: Run linters
        run: pylint swarms

@ -0,0 +1,27 @@
name: Pull Request Checks
on:
  pull_request:
    branches:
      - master
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: 3.x
      - name: Install dependencies
        run: pip install -r requirements.txt
      - name: Run tests and checks
        run: |
          pytest tests/unit
          pylint swarms

@ -0,0 +1,25 @@
name: Unit Tests
on:
  push:
    branches:
      - master
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Set up Python
        uses: actions/setup-python@v3
        with:
          python-version: 3.x
      - name: Install dependencies
        run: pip install -r requirements.txt
      - name: Run unit tests
        run: pytest tests/

@ -0,0 +1,90 @@
`AbstractAgent` Class: A Deep Dive
========================
The `AbstractAgent` class is a fundamental building block in the design of AI systems. It encapsulates the behavior of an AI entity, allowing it to interact with other agents and perform actions. The class is designed to be flexible and extensible, enabling the creation of agents with diverse behaviors.
## Architecture
------------
The architecture of the `AbstractAgent` class is centered around three main components: the agent's name, tools, and memory.
- The `name` is a string that uniquely identifies the agent. This is crucial for communication between agents and for tracking their actions.
- The `tools` are a list of `Tool` objects that the agent uses to perform its tasks. These could include various AI models, data processing utilities, or any other resources that the agent needs to function. The `tools` method is used to initialize these tools.
- The `memory` is a `Memory` object that the agent uses to store and retrieve information. This could be used, for example, to remember past actions or to store the state of the environment. The `memory` method is used to initialize the memory.
The `AbstractAgent` class also includes several methods that define the agent's behavior. These methods are designed to be overridden in subclasses to implement specific behaviors.
## Methods
-------
### `reset`
The `reset` method is used to reset the agent's state. This could involve clearing the agent's memory, resetting its tools, or any other actions necessary to bring the agent back to its initial state. This method is abstract and must be overridden in subclasses.
### `run` and `_arun`
The `run` method is used to execute a task. The task is represented as a string, which could be a command, a query, or any other form of instruction that the agent can interpret. The `_arun` method is the asynchronous version of `run`, allowing tasks to be executed concurrently.
### `chat` and `_achat`
The `chat` method is used for communication between agents. It takes a list of messages as input, where each message is a dictionary. The `_achat` method is the asynchronous version of `chat`, allowing messages to be sent and received concurrently.
### `step` and `_astep`
The `step` method is used to advance the agent's state by one step in response to a message. The `_astep` method is the asynchronous version of `step`, allowing the agent's state to be updated concurrently.
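The asynchronous variants can be understood through plain `asyncio`; the coroutine below is a hypothetical stand-in for `_arun`-style methods, not the swarms API:

```python
import asyncio

async def _arun(name: str, task: str) -> str:
    # Simulate non-blocking work; a real implementation would await I/O
    # such as a model call.
    await asyncio.sleep(0)
    return f"{name} finished {task}"

async def main():
    # Because the async methods return coroutines, several tasks can be
    # awaited concurrently with asyncio.gather.
    return await asyncio.gather(
        _arun("Agent1", "Task1"),
        _arun("Agent2", "Task2"),
    )

results = asyncio.run(main())
print(results)  # ['Agent1 finished Task1', 'Agent2 finished Task2']
```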
## Usage Examples
--------------
### Example 1: Creating an Agent
```
from swarms.agents.base import AbstractAgent
agent = AbstractAgent(name="Agent1")
print(agent.name) # Output: Agent1
```
In this example, we create an instance of `AbstractAgent` named "Agent1" and print its name.
### Example 2: Initializing Tools and Memory
```
from swarms.agents.base import AbstractAgent
agent = AbstractAgent(name="Agent1")
tools = [Tool1(), Tool2(), Tool3()]  # illustrative Tool instances defined elsewhere
memory_store = Memory()  # an illustrative Memory instance defined elsewhere
agent.tools(tools)
agent.memory(memory_store)
```
In this example, we initialize the tools and memory of "Agent1". The tools are a list of `Tool` instances, and the memory is a `Memory` instance.
### Example 3: Running an Agent
```
from swarms.agents.base import AbstractAgent
agent = AbstractAgent(name="Agent1")
task = "Task1"
agent.run(task)
```
In this example, we run "Agent1" with a task named "Task1".
Notes
-----
- The `AbstractAgent` class is designed as an abstract base class: while Python does not prevent instantiating it directly, it is meant to be subclassed, with at least the `reset`, `run`, `chat`, and `step` methods overridden.
- The `run`, `chat`, and `step` methods are designed to be flexible and can be adapted to a wide range of tasks and behaviors. For example, the `run` method could be used to execute a machine learning model, the `chat` method could be used to send and receive messages in a chatbot, and the `step` method could be used to update the agent's state in a reinforcement learning environment.
- The `_arun`, `_achat`, and `_astep` methods are asynchronous versions of the `run`, `chat`, and `step` methods, respectively. They return a coroutine that can be awaited using the `await` keyword. This allows multiple tasks to be executed concurrently, improving the efficiency of the agent.
- The `tools` and `memory` methods are used to initialize the agent's tools and memory, respectively. These methods can be overridden in subclasses to initialize specific tools and memory structures.
- The `reset` method is used to reset the agent's state. This method can be overridden in subclasses to define specific reset behaviors. For example, in a reinforcement learning agent, the `reset` method might reinitialize the environment and clear the agent's accumulated state.
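The notes above can be made concrete with a minimal subclass. This is an illustrative sketch only: the `AbstractAgent` interface is reproduced here as a stub so the example is self-contained, and `EchoAgent` is a hypothetical name, not part of the library.

```python
from typing import Dict, List

# Stub reproducing the AbstractAgent interface described above
# (illustrative only; the real class lives in swarms.agents.base).
class AbstractAgent:
    def __init__(self, name: str):
        self._name = name

    @property
    def name(self):
        return self._name

    def reset(self):
        """(Abstract method) Reset the agent."""

    def run(self, task: str):
        """Run the agent once"""

    def chat(self, messages: List[Dict]):
        """Chat with the agent"""

    def step(self, message: str):
        """Step through the agent"""

# A hypothetical subclass overriding the abstract hooks.
class EchoAgent(AbstractAgent):
    def __init__(self, name: str):
        super().__init__(name)
        self.history: List[str] = []

    def reset(self):
        # Bring the agent back to its initial state.
        self.history.clear()

    def run(self, task: str):
        self.history.append(task)
        return f"{self.name} completed: {task}"

    def chat(self, messages: List[Dict]):
        # Each message is a dictionary; here we expect a "content" key.
        return [self.run(m["content"]) for m in messages]

    def step(self, message: str):
        return self.run(message)

agent = EchoAgent(name="Agent1")
print(agent.run("Task1"))  # Agent1 completed: Task1
```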

@ -0,0 +1,200 @@
# Module/Class Name: Workflow
===========================
The `Workflow` class is a part of the `swarms` library and is used to create and execute a workflow of tasks. It provides a way to define a sequence of tasks and execute them in order, with the output of each task being used as the input for the next task.
## Overview and Introduction
-------------------------
The `Workflow` class is designed to simplify the execution of a series of tasks by providing a structured way to define and execute them. It allows for sequential execution of tasks, with the output of each task being passed as input to the next task. This makes it easy to create complex workflows and automate multi-step processes.
## Class Definition: Workflow
The `Workflow` class is a powerful tool provided by the `swarms` library that allows users to create and execute a sequence of tasks in a structured and automated manner. It simplifies the process of defining and executing complex workflows by providing a clear and intuitive interface.
## Why Use Workflows?
------------------
Workflows are essential in many domains, including data processing, automation, and task management. They enable the automation of multi-step processes, where the output of one task serves as the input for the next task. By using workflows, users can streamline their work, reduce manual effort, and ensure consistent and reliable execution of tasks.
The `Workflow` class provides a way to define and execute workflows in a flexible and efficient manner. It allows users to define the sequence of tasks, specify dependencies between tasks, and execute them in order. This makes it easier to manage complex processes and automate repetitive tasks.
## How Does it Work?
-----------------
The `Workflow` class consists of two main components: the `Task` class and the `Workflow` class itself. Let's explore each of these components in detail.
### Task Class
The `Task` class represents an individual task within a workflow. Each task is defined by a string description. It contains attributes such as `parents`, `children`, `output`, and `structure`.
The `parents` attribute is a list that stores references to the parent tasks of the current task. Similarly, the `children` attribute is a list that stores references to the child tasks of the current task. These attributes allow for the definition of task dependencies and the establishment of the workflow's structure.
The `output` attribute stores the output of the task, which is generated when the task is executed. Initially, the output is set to `None`, indicating that the task has not been executed yet.
The `structure` attribute refers to the `Workflow` object that the task belongs to. This attribute is set when the task is added to the workflow.
The `Task` class also provides methods such as `add_child` and `execute`. The `add_child` method allows users to add child tasks to the current task, thereby defining the workflow's structure. The `execute` method is responsible for executing the task by running the associated agent's `run` method with the task as input. It returns the response generated by the agent's `run` method.
### Workflow Class
The `Workflow` class is the main class that orchestrates the execution of tasks in a workflow. It takes an agent object as input, which is responsible for executing the tasks. The agent object should have a `run` method that accepts a task as input and returns a response.
The `Workflow` class provides methods such as `add`, `run`, and `context`. The `add` method allows users to add tasks to the workflow. It returns the newly created task object, which can be used to define task dependencies. The `run` method executes the workflow by running each task in order. It returns the last task in the workflow. The `context` method returns a dictionary containing the context information for a given task, including the parent output, parent task, and child task.
The `Workflow` class also has attributes such as `tasks` and `parallel`. The `tasks` attribute is a list that stores references to all the tasks in the workflow. The `parallel` attribute is a boolean flag that determines whether the tasks should be executed in parallel or sequentially.
When executing the workflow, the `run` method iterates over the tasks in the workflow and executes each task in order. If the `parallel` flag is set to `True`, the tasks are executed in parallel using a `ThreadPoolExecutor`. Otherwise, the tasks are executed sequentially.
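The sequential-versus-parallel behavior described above can be sketched with the standard library alone; `execute` here is a hypothetical placeholder for `Task.execute`, not the swarms API:

```python
from concurrent.futures import ThreadPoolExecutor

def execute(task: str) -> str:
    # Placeholder for Task.execute(), which would call agent.run(task).
    return f"output of {task}"

tasks = ["Task 1", "Task 2", "Task 3"]

# Sequential execution: each task runs in order.
sequential = [execute(t) for t in tasks]

# Parallel execution: tasks are submitted to a thread pool, mirroring
# what Workflow.run does when parallel=True.
with ThreadPoolExecutor() as pool:
    parallel = list(pool.map(execute, tasks))

assert sequential == parallel
```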
## Benefits and Use Cases
----------------------
The `Workflow` class provides several benefits and use cases:
- Automation: Workflows automate multi-step processes, reducing manual effort and increasing efficiency. By defining the sequence of tasks and their dependencies, users can automate repetitive tasks and ensure consistent execution.
- Flexibility: Workflows can be easily customized and modified to suit specific needs. Users can add, remove, or rearrange tasks as required, allowing for dynamic and adaptable workflows.
- Error Handling: Workflows provide a structured approach to error handling. If an error occurs during the execution of a task, the workflow can be designed to handle the error gracefully and continue with the remaining tasks.
- Collaboration: Workflows facilitate collaboration by providing a shared structure for task execution. Multiple users can contribute to the workflow by adding or modifying tasks, enabling teamwork and coordination.
- Reproducibility: Workflows ensure reproducibility by defining a clear sequence of tasks. By following the same workflow, users can achieve consistent results and easily reproduce previous analyses or processes.
Overall, the `Workflow` class is a valuable tool for managing and executing complex processes. It simplifies the creation and execution of multi-step task sequences, making complex processes easier to automate and maintain.
## Class Parameters
----------------
- `agent` (Any): The agent object that will be used to execute the tasks. It should have a `run` method that takes a task as input and returns a response.
- `parallel` (bool): If `True`, the tasks will be executed in parallel using a `ThreadPoolExecutor`. Default: `False`.
## Class Methods
-------------
### `add(task: str) -> Task`
Adds a new task to the workflow.
- `task` (str): The task to be added.
Returns:
- `Task`: The newly created task object.
### `run(*args) -> Task`
Executes the workflow by running each task in order.
Returns:
- `Task`: The last task in the workflow.
### `context(task: Task) -> Dict[str, Any]`
Returns a dictionary containing the context information for a given task. The context includes the parent output, parent task, and child task.
- `task` (Task): The task for which the context information is required.
Returns:
- `Dict[str, Any]`: A dictionary containing the context information.
## Task Class
----------
The `Task` class is a nested class within the `Workflow` class. It represents an individual task in the workflow.
### Task Parameters
- `task` (str): The task description.
### Task Methods
### `add_child(child: 'Workflow.Task')`
Adds a child task to the current task.
- `child` ('Workflow.Task'): The child task to be added.
### `execute() -> Any`
Executes the task by running the associated agent's `run` method with the task as input.
Returns:
- `Any`: The response from the agent's `run` method.
## Functionality and Usage
-----------------------------------
To use the `Workflow` class, follow these steps:
1. Create an instance of the `Workflow` class, providing an agent object that has a `run` method. This agent will be responsible for executing the tasks in the workflow.
```
from swarms import Workflow
# Create an instance of the Workflow class
workflow = Workflow(agent=my_agent)
```
2. Add tasks to the workflow using the `add` method. Each task should be a string description.
```
# Add tasks to the workflow
task1 = workflow.add("Task 1")
task2 = workflow.add("Task 2")
task3 = workflow.add("Task 3")
```
3. Define the sequence of tasks by adding child tasks to each task using the `add_child` method.
```
# Define the sequence of tasks
task1.add_child(task2)
task2.add_child(task3)
```
4. Execute the workflow using the `run` method. This will run each task in order, with the output of each task being passed as input to the next task.
```
# Execute the workflow
workflow.run()
```
5. Access the output of each task using the `output` attribute of the task object.
```
# Access the output of each task
output1 = task1.output
output2 = task2.output
output3 = task3.output
```
6. Optionally, you can run the tasks in parallel by setting the `parallel` parameter to `True` when creating the `Workflow` object.
```
# Create a parallel workflow
parallel_workflow = Workflow(agent=my_agent, parallel=True)
```
7. You can also access the context information for a task using the `context` method. This method returns a dictionary containing the parent output, parent task, and child task for the given task.
```
# Access the context information for a task
context = workflow.context(task2)
parent_output = context["parent_output"]
parent_task = context["parent"]
child_task = context["child"]
```
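Putting the steps together, the following self-contained sketch models the `Task`/`Workflow` interface described above with a toy agent. It is an illustration of the pattern under the stated assumptions, not the swarms implementation:

```python
from typing import Any, List, Optional

class Task:
    """Minimal Task modeled on the description above."""
    def __init__(self, task: str, structure: "Workflow"):
        self.task = task
        self.parents: List["Task"] = []
        self.children: List["Task"] = []
        self.output: Optional[Any] = None
        self.structure = structure

    def add_child(self, child: "Task"):
        self.children.append(child)
        child.parents.append(self)

    def execute(self) -> Any:
        # Prepend the parent's output so each task feeds the next.
        prompt = self.task
        if self.parents and self.parents[0].output is not None:
            prompt = f"{self.parents[0].output}\n{prompt}"
        self.output = self.structure.agent.run(prompt)
        return self.output

class Workflow:
    """Minimal sequential Workflow modeled on the description above."""
    def __init__(self, agent: Any, parallel: bool = False):
        self.agent = agent
        self.parallel = parallel
        self.tasks: List[Task] = []

    def add(self, task: str) -> Task:
        t = Task(task, structure=self)
        self.tasks.append(t)
        return t

    def run(self) -> Task:
        for t in self.tasks:
            t.execute()
        return self.tasks[-1]

class ToyAgent:
    """A stand-in agent: any object with a run(task) -> str method works."""
    def run(self, task: str) -> str:
        return f"done({task.splitlines()[-1]})"

workflow = Workflow(agent=ToyAgent())
task1 = workflow.add("Task 1")
task2 = workflow.add("Task 2")
task1.add_child(task2)
workflow.run()
print(task1.output)  # done(Task 1)
print(task2.output)  # done(Task 2)
```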

@ -0,0 +1,258 @@
# AbstractWorker Class
====================
The `AbstractWorker` class is an abstract class for AI workers. An AI worker can communicate with other workers and perform actions. Different workers can differ in what actions they perform in the `receive` method.
## Class Definition
----------------
```
class AbstractWorker:
    """(In preview) An abstract class for AI worker.

    A worker can communicate with other workers and perform actions.
    Different workers can differ in what actions they perform in the `receive` method.
    """
```
## Initialization
--------------
The `AbstractWorker` class is initialized with a single parameter:
- `name` (str): The name of the worker.
```
def __init__(
    self,
    name: str,
):
    """
    Args:
        name (str): name of the worker.
    """
    self._name = name
```
## Properties
----------
The `AbstractWorker` class has a single property:
- `name`: Returns the name of the worker.
```
@property
def name(self):
    """Get the name of the worker."""
    return self._name
```
## Methods
-------
The `AbstractWorker` class has several methods:
### `run`
The `run` method is used to run the worker agent once. It takes a single parameter:
- `task` (str): The task to be run.
```
def run(
    self,
    task: str
):
    """Run the worker agent once"""
```
### `send`
The `send` method is used to send a message to another worker. It takes three parameters:
- `message` (Union[Dict, str]): The message to be sent.
- `recipient` (AbstractWorker): The recipient of the message.
- `request_reply` (Optional[bool]): If set to `True`, the method will request a reply from the recipient.
```
def send(
    self,
    message: Union[Dict, str],
    recipient: "AbstractWorker",
    request_reply: Optional[bool] = None
):
    """(Abstract method) Send a message to another worker."""
```
### `a_send`
The `a_send` method is the asynchronous version of the `send` method. It takes the same parameters as the `send` method.
```
async def a_send(
    self,
    message: Union[Dict, str],
    recipient: "AbstractWorker",
    request_reply: Optional[bool] = None
):
    """(Abstract async method) Send a message to another worker."""
```
### `receive`
The `receive` method is used to receive a message from another worker. It takes three parameters:
- `message` (Union[Dict, str]): The message to be received.
- `sender` (AbstractWorker): The sender of the message.
- `request_reply` (Optional[bool]): If set to `True`, the method will request a reply from the sender.
```
def receive(
    self,
    message: Union[Dict, str],
    sender: "AbstractWorker",
    request_reply: Optional[bool] = None
):
    """(Abstract method) Receive a message from another worker."""
```
### `a_receive`
The `a_receive` method is the asynchronous version of the `receive` method. It takes the same parameters as the `receive` method.
```
async def a_receive(
    self,
    message: Union[Dict, str],
    sender: "AbstractWorker",
    request_reply: Optional[bool] = None
):
    """(Abstract async method) Receive a message from another worker."""
```
### `reset`
The `reset` method is used to reset the worker.
```
def reset(self):
    """(Abstract method) Reset the worker."""
```
### `generate_reply`
The `generate_reply` method is used to generate a reply based on the received messages. It takes two parameters:
- `messages` (Optional[List[Dict]]): A list of messages received.
- `sender` (AbstractWorker): The sender of the messages.
The method returns a string, a dictionary, or `None`. If `None` is returned, no reply is generated.
```
def generate_reply(
    self,
    messages: Optional[List[Dict]] = None,
    sender: Optional["AbstractWorker"] = None,
    **kwargs,
) -> Union[str, Dict, None]:
    """(Abstract method) Generate a reply based on the received messages.

    Args:
        messages (list[dict]): a list of messages received.
        sender: sender of an Agent instance.

    Returns:
        str or dict or None: the generated reply. If None, no reply is generated.
    """
```
### `a_generate_reply`
The `a_generate_reply` method is the asynchronous version of the `generate_reply` method. It
takes the same parameters as the `generate_reply` method.
```
async def a_generate_reply(
    self,
    messages: Optional[List[Dict]] = None,
    sender: Optional["AbstractWorker"] = None,
    **kwargs,
) -> Union[str, Dict, None]:
    """(Abstract async method) Generate a reply based on the received messages.

    Args:
        messages (list[dict]): a list of messages received.
        sender: sender of an Agent instance.

    Returns:
        str or dict or None: the generated reply. If None, no reply is generated.
    """
```
Usage Examples
--------------
### Example 1: Creating an AbstractWorker
```
from swarms.worker.base import AbstractWorker
worker = AbstractWorker(name="Worker1")
print(worker.name) # Output: Worker1
```
In this example, we create an instance of `AbstractWorker` named "Worker1" and print its name.
### Example 2: Sending a Message
```
from swarms.worker.base import AbstractWorker
worker1 = AbstractWorker(name="Worker1")
worker2 = AbstractWorker(name="Worker2")
message = {"content": "Hello, Worker2!"}
worker1.send(message, worker2)
```
In this example, "Worker1" sends a message to "Worker2". The message is a dictionary with a single key-value pair.
### Example 3: Receiving a Message
```
from swarms.worker.base import AbstractWorker
worker1 = AbstractWorker(name="Worker1")
worker2 = AbstractWorker(name="Worker2")
message = {"content": "Hello, Worker2!"}
worker1.send(message, worker2)
worker2.receive(message, worker1)
```
In this example, "Worker1" sends a message to "Worker2", and "Worker2" receives it. Because `send` and `receive` are abstract methods, a concrete subclass must implement the actual delivery, storage, and any return value.
Notes
-----
- The `AbstractWorker` class is designed as an abstract base class: while Python does not prevent instantiating it directly, it is meant to be subclassed, with at least the `send`, `receive`, `reset`, and `generate_reply` methods overridden.
- The `send` and `receive` methods are abstract methods, which means they must be implemented in any subclass of `AbstractWorker`.
- The `a_send`, `a_receive`, and `a_generate_reply` methods are asynchronous methods, which means they return a coroutine that can be awaited using the `await` keyword.
- The `generate_reply` method is used to generate a reply based on the received messages. The exact implementation of this method will depend on the specific requirements of your application.
- The `reset` method is used to reset the state of the worker. The exact implementation of this method will depend on the specific requirements of your application.
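As a concrete illustration of these notes, here is a minimal hypothetical subclass. The `AbstractWorker` interface is reproduced as a stub so the sketch is self-contained, and `MailboxWorker` is an invented name, not part of the library:

```python
from typing import Dict, List, Optional, Union

# Stub reproducing the AbstractWorker interface described above
# (illustrative only; the real class lives in swarms.worker.base).
class AbstractWorker:
    def __init__(self, name: str):
        self._name = name

    @property
    def name(self):
        return self._name

    def send(self, message: Union[Dict, str], recipient: "AbstractWorker",
             request_reply: Optional[bool] = None):
        """(Abstract method) Send a message to another worker."""

    def receive(self, message: Union[Dict, str], sender: "AbstractWorker",
                request_reply: Optional[bool] = None):
        """(Abstract method) Receive a message from another worker."""

    def generate_reply(self, messages: Optional[List[Dict]] = None,
                       sender: Optional["AbstractWorker"] = None,
                       **kwargs) -> Union[str, Dict, None]:
        """(Abstract method) Generate a reply based on received messages."""

# A hypothetical concrete worker: receive stores messages in an inbox,
# and generate_reply acknowledges the last one.
class MailboxWorker(AbstractWorker):
    def __init__(self, name: str):
        super().__init__(name)
        self.inbox: List[Dict] = []

    def send(self, message, recipient, request_reply=None):
        return recipient.receive(message, sender=self, request_reply=request_reply)

    def receive(self, message, sender, request_reply=None):
        if isinstance(message, str):
            message = {"content": message}
        self.inbox.append({"from": sender.name, **message})
        if request_reply:
            return self.generate_reply(self.inbox, sender)

    def generate_reply(self, messages=None, sender=None, **kwargs):
        last = messages[-1]["content"] if messages else ""
        return f"{self.name} acknowledges: {last}"

worker1 = MailboxWorker(name="Worker1")
worker2 = MailboxWorker(name="Worker2")
reply = worker1.send({"content": "Hello, Worker2!"}, worker2, request_reply=True)
print(reply)  # Worker2 acknowledges: Hello, Worker2!
```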

@ -19,3 +19,4 @@ node = Worker(
task = "What were the winning boston marathon times for the past 5 years (ending in 2022)? Generate a table of the year, name, country of origin, and times."
response = node.run(task)
print(response)

@ -77,20 +77,26 @@ nav:
  - Hiring: "hiring.md"
  - Swarms:
    - Overview: "swarms/index.md"
    - swarms.swarms:
      - AutoScaler: "swarms/swarms/autoscaler.md"
    - swarms.workers:
      - Overview: "swarms/workers/index.md"
      - AbstractWorker: "swarms/workers/abstract_worker.md"
    - swarms.agents:
      - AbstractAgent: "swarms/agents/abstract_agent.md"
      - OmniModalAgent: "swarms/agents/omni_agent.md"
    - swarms.models:
      - Overview: "swarms/models/index.md"
      - HuggingFaceLLM: "swarms/models/hf.md"
      - Anthropic: "swarms/models/anthropic.md"
    - swarms.structs:
      - Overview: "swarms/structs/overview.md"
      - Workflow: "swarms/structs/workflow.md"
  - Examples:
    - Overview: "examples/index.md"
    - Agents:
      - OmniAgent: "examples/omni_agent.md"
    - Applications:
      - CustomerSupport:
        - Overview: "applications/customer_support.md"

@ -0,0 +1,18 @@
from swarms.agents.base import agent
from swarms.structs.nonlinear_worfklow import NonLinearWorkflow, Task

prompt = "develop a feedforward network in pytorch"
prompt2 = "Develop a self attention using pytorch"

task1 = Task("task1", prompt)
task2 = Task("task2", prompt2, parents=[task1])

# create the workflow
workflow = NonLinearWorkflow(agent)

# add tasks to tree
workflow.add(task1)
workflow.add(task2)

# run
workflow.run()

@ -0,0 +1,8 @@
from swarms.structs.workflow import Workflow, StringTask
from langchain.llms import OpenAIChat
llm = OpenAIChat()
workflow = Workflow(llm)

@ -44,7 +44,7 @@ nodes = [
messages = [
    {
        "role": "system",
        "context": "Create an a small feedforward in pytorch",
    }
]

@ -4,7 +4,7 @@ build-backend = "poetry.core.masonry.api"
[tool.poetry]
name = "swarms"
version = "1.7.8"
description = "Swarms - Pytorch"
license = "MIT"
authors = ["Kye Gomez <kye@apac.ai>"]
@ -63,3 +63,11 @@ types-pytz = "^2023.3.0.0"
black = "^23.1.0"
types-chardet = "^5.0.4.6"
mypy-protobuf = "^3.0.0"
[tool.autopep8]
max_line_length = 120
ignore = "E501,W6" # or ["E501", "W6"]
in-place = true
recursive = true
aggressive = 3

@ -50,6 +50,7 @@ torchmetrics
transformers
webdataset
yapf
autopep8
mkdocs

@ -1,23 +1,23 @@
# swarms
from swarms.logo import logo2

print(logo2)

# worker
from swarms import workers
from swarms.workers.worker import Worker

# boss
# from swarms.boss.boss_node import Boss

# models
from swarms import models

# structs
from swarms import structs

# swarms
from swarms import swarms
from swarms.swarms.orchestrate import Orchestrator

# agents
from swarms import agents

@ -8,8 +8,7 @@
from swarms.agents.omni_modal_agent import OmniModalAgent

# utils
from swarms.agents.message import Message
from swarms.agents.stream_response import stream
from swarms.agents.base import AbstractAgent

@ -1,133 +0,0 @@
from __future__ import annotations

from typing import List, Optional

from langchain.chains.llm import LLMChain

from swarms.agents.utils.Agent import AgentOutputParser
from swarms.agents.utils.human_input import HumanInputRun
from swarms.memory.base_memory import BaseChatMessageHistory, ChatMessageHistory
from swarms.memory.document import Document
from swarms.models.base import AbstractModel
from swarms.models.prompts.agent_prompt_auto import (
    MessageFormatter,
    PromptConstructor,
)
from swarms.models.prompts.agent_prompt_generator import FINISH_NAME
from swarms.models.prompts.base import (
    AIMessage,
    HumanMessage,
    SystemMessage,
)
from swarms.tools.base import BaseTool


class Agent:
    """Base Agent class"""

    def __init__(
        self,
        ai_name: str,
        chain: LLMChain,
        memory,
        output_parser: AgentOutputParser,
        tools: List[BaseTool],
        feedback_tool: Optional[HumanInputRun] = None,
        chat_history_memory: Optional[BaseChatMessageHistory] = None,
    ):
        self.ai_name = ai_name
        self.chain = chain
        self.memory = memory
        self.next_action_count = 0
        self.output_parser = output_parser
        self.tools = tools
        self.feedback_tool = feedback_tool
        self.chat_history_memory = chat_history_memory or ChatMessageHistory()

    @classmethod
    def integrate(
        cls,
        ai_name: str,
        ai_role: str,
        memory,
        tools: List[BaseTool],
        llm: AbstractModel,
        human_in_the_loop: bool = False,
        output_parser: Optional[AgentOutputParser] = None,
        chat_history_memory: Optional[BaseChatMessageHistory] = None,
    ) -> Agent:
        prompt_constructor = PromptConstructor(ai_name=ai_name,
                                               ai_role=ai_role,
                                               tools=tools)
        message_formatter = MessageFormatter()
        human_feedback_tool = HumanInputRun() if human_in_the_loop else None
        chain = LLMChain(llm=llm, prompt_constructor=prompt_constructor, message_formatter=message_formatter)
        return cls(
            ai_name,
            memory,
            chain,
            output_parser or AgentOutputParser(),
            tools,
            feedback_tool=human_feedback_tool,
            chat_history_memory=chat_history_memory,
        )

    def run(self, goals: List[str]) -> str:
        user_input = (
            "Determine which next command to use, and respond using the format specified above:"
        )
        loop_count = 0
        while True:
            loop_count += 1

            # Send message to AI, get response
            assistant_reply = self.chain.run(
                goals=goals,
                messages=self.chat_history_memory.messages,
                memory=self.memory,
                user_input=user_input,
            )
            print(assistant_reply)
            self.chat_history_memory.add_message(HumanMessage(content=user_input))
            self.chat_history_memory.add_message(AIMessage(content=assistant_reply))

            # Get command name and arguments
            action = self.output_parser.parse(assistant_reply)
            tools = {t.name: t for t in self.tools}
            if action.name == FINISH_NAME:
                return action.args["response"]
            if action.name in tools:
                tool = tools[action.name]
                try:
                    observation = tool.run(action.args)
                except Exception as error:
                    observation = (
                        f"Validation Error in args: {str(error)}, args: {action.args}"
                    )
                except Exception as e:
                    observation = (
                        f"Error: {str(e)}, {type(e).__name__}, args: {action.args}"
                    )
                result = f"Command {tool.name} returned: {observation}"
            elif action.name == "ERROR":
                result = f"Error: {action.args}. "
            else:
                result = (
                    f"""Unknown command '{action.name}'.
                    Please refer to the 'COMMANDS' list for available
                    commands and only respond in the specified JSON format."""
                )

            memory_to_add = (
                f"Assistant Reply: {assistant_reply} " f"\nResult: {result} "
            )
            if self.feedback_tool is not None:
                feedback = f"\n{self.feedback_tool.run('Input: ')}"
                if feedback in {"q", "stop"}:
                    print("EXITING")
                    return "EXITING"
                memory_to_add += feedback

            self.memory.add_documents([Document(page_content=memory_to_add)])
            self.chat_history_memory.add_message(SystemMessage(content=result))

@ -7,6 +7,7 @@ import openai
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)


class OpenAI:
    def __init__(
        self,
@ -111,7 +112,7 @@ class OpenAI:
        initial_prompt,
        rejected_solutions=None
    ):
        if (isinstance(state, str)):
            state_text = state
        else:
            state_text = '\n'.join(state)
@ -134,7 +135,6 @@ class OpenAI:
        # print(f"Generated thoughts: {thoughts}")
        return thoughts

    def generate_solution(self,
                          initial_prompt,
                          state,
@ -169,7 +169,7 @@ class OpenAI:
if self.evaluation_strategy == 'value': if self.evaluation_strategy == 'value':
state_values = {} state_values = {}
for state in states: for state in states:
if (type(state) == str): if (isinstance(state, str)):
state_text = state state_text = state
else: else:
state_text = '\n'.join(state) state_text = '\n'.join(state)
@ -193,6 +193,8 @@ class OpenAI:
else: else:
raise ValueError("Invalid evaluation strategy. Choose 'value' or 'vote'.") raise ValueError("Invalid evaluation strategy. Choose 'value' or 'vote'.")
class AoTAgent: class AoTAgent:
def __init__( def __init__(
self, self,

@ -1,28 +1,64 @@
from typing import Dict, List, Optional, Union
class AbstractAgent:
"""(In preview) An abstract class for AI agent.
An agent can communicate with other agents and perform actions.
Different agents can differ in what actions they perform in the `receive` method.
Agents are full and completed:
Agents = llm + tools + memory
"""
class AbsractAgent:
def __init__( def __init__(
self, self,
llm, name: str,
temperature # tools: List[Tool],
) -> None: #memory: Memory
):
"""
Args:
name (str): name of the agent.
"""
# a dictionary of conversations, default value is list
self._name = name
@property
def name(self):
"""Get the name of the agent."""
return self._name
def tools(self, tools):
"""init tools"""
def memory(self, memory_store):
"""init memory"""
pass pass
#single query def reset(self):
"""(Abstract method) Reset the agent."""
def run(self, task: str): def run(self, task: str):
pass """Run the agent once"""
# conversational back and forth def _arun(self, taks: str):
def chat(self, message: str): """Run Async run"""
message_historys = []
message_historys.append(message)
reply = self.run(message) def chat(self, messages: List[Dict]):
message_historys.append(reply) """Chat with the agent"""
return message_historys def _achat(
self,
messages: List[Dict]
):
"""Asynchronous Chat"""
def step(self, message): def step(self, message: str):
pass """Step through the agent"""
def reset(self): def _astep(self, message: str):
pass """Asynchronous step"""

File diff suppressed because it is too large

@@ -3,6 +3,7 @@ from typing import Any, Dict, List
 from swarms.memory.base_memory import BaseChatMemory, get_prompt_input_key
 from swarms.memory.base import VectorStoreRetriever

 class AgentMemory(BaseChatMemory):
     retriever: VectorStoreRetriever
     """VectorStoreRetriever object to connect to."""

@@ -1,5 +1,6 @@
 import datetime

 class Message:
     """
     Represents a message with timestamp and optional metadata.

@@ -3,5 +3,3 @@
 # from .GroundingDINO.groundingdino.util import box_ops, SLConfig
 # from .GroundingDINO.groundingdino.util.utils import clean_state_dict, get_phrases_from_posmap
 # from .segment_anything.segment_anything import build_sam, SamPredictor, SamAutomaticMaskGenerator

@@ -11,4 +11,3 @@
 # Copied from DETR (https://github.com/facebookresearch/detr)
 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
 # ------------------------------------------------------------------------

@@ -27,7 +27,7 @@ from torch.nn.init import constant_, xavier_uniform_
 try:
     from groundingdino import _C
-except:
+except BaseException:
     warnings.warn("Failed to load custom C++ ops. Running on CPU mode Only!")
@@ -241,7 +241,6 @@ class MultiScaleDeformableAttention(nn.Module):
         level_start_index: Optional[torch.Tensor] = None,
         **kwargs
     ) -> torch.Tensor:
         """Forward Function of MultiScaleDeformableAttention

         Args:
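The hunk above narrows a bare `except:` to `except BaseException:` — a change this commit repeats across many files. The two are equivalent in what they catch, but the explicit form satisfies linters (pycodestyle E722). A short illustrative sketch, using a hypothetical `safe_int` helper that is not from the repo:

```python
# Catching Exception (narrower still than BaseException) is usually what
# you want: it lets KeyboardInterrupt and SystemExit propagate normally.
def safe_int(value):
    try:
        return int(value)
    except Exception:  # explicit class, unlike a bare `except:`
        return None


print(safe_int("42"))
print(safe_int("not a number"))
```

Here the commit only renames the bare clauses rather than narrowing them, so behavior is unchanged; switching to `except Exception` would be a further, behavior-affecting step.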

@@ -1,6 +1,7 @@
 from transformers import AutoTokenizer, BertModel, RobertaModel
 import os

 def get_tokenlizer(text_encoder_type):
     if not isinstance(text_encoder_type, str):
         # print("text_encoder_type is not a str")

@@ -170,7 +170,7 @@ class SLConfig(object):
         elif isinstance(b, list):
             try:
                 _ = int(k)
-            except:
+            except BaseException:
                 raise TypeError(
                     f"b is a list, " f"index {k} should be an int when input but {type(k)}"
                 )

@@ -268,6 +268,7 @@ def get_embedder(multires, i=0):
     }
     embedder_obj = Embedder(**embed_kwargs)

     def embed(x, eo=embedder_obj):
         return eo.embed(x)
     return embed, embedder_obj.out_dim

@@ -243,7 +243,7 @@ class COCOVisualizer:
             for ann in anns:
                 c = (np.random.random((1, 3)) * 0.6 + 0.4).tolist()[0]
                 if "segmentation" in ann:
-                    if type(ann["segmentation"]) == list:
+                    if isinstance(ann["segmentation"], list):
                         # polygon
                         for seg in ann["segmentation"]:
                             poly = np.array(seg).reshape((int(len(seg) / 2), 2))
@@ -252,7 +252,7 @@ class COCOVisualizer:
                     else:
                         # mask
                         t = self.imgs[ann["image_id"]]
-                        if type(ann["segmentation"]["counts"]) == list:
+                        if isinstance(ann["segmentation"]["counts"], list):
                             rle = maskUtils.frPyObjects(
                                 [ann["segmentation"]], t["height"], t["width"]
                             )
@@ -267,7 +267,7 @@ class COCOVisualizer:
                         for i in range(3):
                             img[:, :, i] = color_mask[i]
                         ax.imshow(np.dstack((img, m * 0.5)))
-                if "keypoints" in ann and type(ann["keypoints"]) == list:
+                if "keypoints" in ann and isinstance(ann["keypoints"], list):
                     # turn skeleton into zero-based index
                     sks = np.array(self.loadCats(ann["category_id"])[0]["skeleton"]) - 1
                     kp = np.array(ann["keypoints"])
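Alongside the bare-except cleanups, this commit systematically replaces `type(x) == list` with `isinstance(x, list)`, as in the hunks above. The difference is observable with any `list` subclass; a small stand-alone demonstration:

```python
# isinstance accepts subclasses; an exact type() equality check does not.
class MyList(list):
    pass


seg = MyList([[1.0, 2.0]])
print(type(seg) == list)      # exact-type check fails for the subclass
print(isinstance(seg, list))  # subclass-aware check succeeds
```

For the annotation dicts here the two usually agree, but `isinstance` is the idiomatic (and flake8 E721-clean) form and stays correct if a caller ever passes a `list` subclass.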

@@ -24,14 +24,14 @@ def create_positive_map_from_span(tokenized, token_span, max_text_len=256):
                 beg_pos = tokenized.char_to_token(beg + 1)
                 if beg_pos is None:
                     beg_pos = tokenized.char_to_token(beg + 2)
-            except:
+            except BaseException:
                 beg_pos = None
             if end_pos is None:
                 try:
                     end_pos = tokenized.char_to_token(end - 2)
                     if end_pos is None:
                         end_pos = tokenized.char_to_token(end - 3)
-                except:
+                except BaseException:
                     end_pos = None
             if beg_pos is None or end_pos is None:
                 continue

@@ -3,4 +3,3 @@
 # This source code is licensed under the license found in the
 # LICENSE file in the root directory of this source tree.

@@ -3,4 +3,3 @@
 # This source code is licensed under the license found in the
 # LICENSE file in the root directory of this source tree.

@@ -1,3 +1,4 @@
+from swarms.agents.message import Message
 import os
 import random
 import torch
@@ -36,7 +37,6 @@ import matplotlib.pyplot as plt
 import wget

 # prompts
 VISUAL_AGENT_PREFIX = """
 Worker Multi-Modal Agent is designed to be able to assist with
@@ -239,6 +239,7 @@ def get_new_image_name(org_img_name, func_name="update"):
     new_file_name = f'{this_new_uuid}_{func_name}_{recent_prev_file_name}_{most_org_file_name}.png'
     return os.path.join(head, new_file_name)

 class InstructPix2Pix:
     def __init__(self, device):
         print(f"Initializing InstructPix2Pix to {device}")
@@ -604,6 +605,7 @@ class PoseText2Image:
                f"Output Image: {updated_image_path}")
         return updated_image_path

 class SegText2Image:
     def __init__(self, device):
         print(f"Initializing SegText2Image to {device}")
@@ -815,10 +817,8 @@ class Segmenting:
         if not os.path.exists(self.model_checkpoint_path):
             wget.download(url, out=self.model_checkpoint_path)

     def show_mask(self, mask: np.ndarray, image: np.ndarray,
                   random_color: bool = False, transparency=1) -> np.ndarray:
         """Visualize a mask on top of an image.

         Args:
             mask (np.ndarray): A 2D array of shape (H, W).
@@ -839,7 +839,6 @@ class Segmenting:
         image = cv2.addWeighted(image, 0.7, mask_image.astype('uint8'), transparency, 0)
         return image

     def show_box(self, box, ax, label):
@@ -848,7 +847,6 @@ class Segmenting:
         ax.add_patch(plt.Rectangle((x0, y0), w, h, edgecolor='green', facecolor=(0, 0, 0, 0), lw=2))
         ax.text(x0, y0, label)

     def get_mask_with_boxes(self, image_pil, image, boxes_filt):
         size = image_pil.size
@@ -916,7 +914,6 @@ class Segmenting:
             image, p.astype(int), radius=3, color=(255, 0, 0), thickness=-1)
         return image

     def segment_image_with_click(self, img, is_positive: bool):
         self.sam_predictor.set_image(img)
@@ -971,7 +968,6 @@ class Segmenting:
             multimask_output=False,
         )
         img = self.show_mask(masks[0], img, random_color=False, transparency=0.3)
         img = self.show_points(input_point, input_label, img)
@@ -1016,6 +1012,7 @@ class Segmenting:
         )
         return updated_image_path

 class Text2Box:
     def __init__(self, device):
         print(f"Initializing ObjectDetection to {device}")
@@ -1035,6 +1032,7 @@ class Text2Box:
         config_url = "https://raw.githubusercontent.com/IDEA-Research/GroundingDINO/main/groundingdino/config/GroundingDINO_SwinT_OGC.py"
         if not os.path.exists(self.model_config_path):
             wget.download(config_url, out=self.model_config_path)

     def load_image(self, image_path):
         # load image
         image_pil = Image.open(image_path).convert("RGB")  # load image
@@ -1169,13 +1167,16 @@ class Inpainting:
         self.inpaint = StableDiffusionInpaintPipeline.from_pretrained(
             "runwayml/stable-diffusion-inpainting", revision=self.revision, torch_dtype=self.torch_dtype, safety_checker=StableDiffusionSafetyChecker.from_pretrained('CompVis/stable-diffusion-safety-checker')).to(device)

     def __call__(self, prompt, image, mask_image, height=512, width=512, num_inference_steps=50):
         update_image = self.inpaint(prompt=prompt, image=image.resize((width, height)),
                                     mask_image=mask_image.resize((width, height)), height=height, width=width, num_inference_steps=num_inference_steps).images[0]
         return update_image

 class InfinityOutPainting:
     template_model = True  # Add this line to show this is a template model.

     def __init__(self, ImageCaptioning, Inpainting, VisualQuestionAnswering):
         self.llm = OpenAI(temperature=0)
         self.ImageCaption = ImageCaptioning
@@ -1272,15 +1273,14 @@ class InfinityOutPainting:
         return updated_image_path

 class ObjectSegmenting:
     template_model = True  # Add this line to show this is a template model.

     def __init__(self, Text2Box: Text2Box, Segmenting: Segmenting):
         # self.llm = OpenAI(temperature=0)
         self.grounding = Text2Box
         self.sam = Segmenting

     @prompts(name="Segment the given object",
              description="useful when you only want to segment the certain objects in the picture"
                          "according to the given text"
@@ -1341,7 +1341,6 @@ class ObjectSegmenting:
         for mask in masks:
             image = self.sam.show_mask(mask[0].cpu().numpy(), image, random_color=True, transparency=0.3)

         Image.fromarray(merged_mask)
         return merged_mask
@@ -1349,6 +1348,7 @@ class ObjectSegmenting:

 class ImageEditing:
     template_model = True

     def __init__(self, Text2Box: Text2Box, Segmenting: Segmenting, Inpainting: Inpainting):
         print("Initializing ImageEditing")
         self.sam = Segmenting
@@ -1408,11 +1408,13 @@ class ImageEditing:
                f"Output Image: {updated_image_path}")
         return updated_image_path

 class BackgroundRemoving:
     '''
     using to remove the background of the given picture
     '''
     template_model = True

     def __init__(self, VisualQuestionAnswering: VisualQuestionAnswering, Text2Box: Text2Box, Segmenting: Segmenting):
         self.vqa = VisualQuestionAnswering
         self.obj_segmenting = ObjectSegmenting(Text2Box, Segmenting)
@@ -1578,10 +1580,7 @@ class MultiModalVisualAgent:
         self.memory.clear()

-###### usage
-from swarms.agents.message import Message
+# usage
 class MultiModalAgent:
@@ -1619,6 +1618,7 @@ class MultiModalAgent:
     """

     def __init__(
         self,
         load_dict,
@@ -1641,7 +1641,6 @@ class MultiModalAgent:
         self.language = language
         self.history = []

     def run_text(
         self,
         text: str = None,
@@ -1762,5 +1761,3 @@ class MultiModalAgent:
             self.agent.clear_memory()
         except Exception as e:
             return f"Error cleaning memory: {str(e)}"

@@ -34,18 +34,22 @@ max_length = {
     "ada": 2049
 }

 def count_tokens(model_name, text):
     return len(encodings[model_name].encode(text))

 def get_max_context_length(model_name):
     return max_length[model_name]

 def get_token_ids_for_task_parsing(model_name):
     text = '''{"task": "text-classification", "token-classification", "text2text-generation", "summarization", "translation", "question-answering", "conversational", "text-generation", "sentence-similarity", "tabular-classification", "object-detection", "image-classification", "image-to-image", "image-to-text", "text-to-image", "visual-question-answering", "document-question-answering", "image-segmentation", "text-to-speech", "text-to-video", "automatic-speech-recognition", "audio-to-audio", "audio-classification", "canny-control", "hed-control", "mlsd-control", "normal-control", "openpose-control", "canny-text-to-image", "depth-text-to-image", "hed-text-to-image", "mlsd-text-to-image", "normal-text-to-image", "openpose-text-to-image", "seg-text-to-image", "args", "text", "path", "dep", "id", "<GENERATED>-"}'''
     res = encodings[model_name].encode(text)
     res = list(set(res))
     return res

 def get_token_ids_for_choose_model(model_name):
     text = '''{"id": "reason"}'''
     res = encodings[model_name].encode(text)
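The `count_tokens`/`get_max_context_length` helpers above support a simple budget calculation: how many completion tokens remain after the prompt. A sketch of that arithmetic follows; the whitespace `FakeEncoding` is a stand-in for the real `encodings` dict (which appears to wrap model-specific tokenizers), so the numbers are illustrative only.

```python
# Context-budget arithmetic behind count_tokens/get_max_context_length,
# with a trivial whitespace tokenizer standing in for real encodings.
class FakeEncoding:
    def encode(self, text):
        return text.split()


encodings = {"gpt-3.5-turbo": FakeEncoding()}
max_length = {"gpt-3.5-turbo": 4096}


def count_tokens(model_name, text):
    return len(encodings[model_name].encode(text))


def get_max_context_length(model_name):
    return max_length[model_name]


prompt = "plan the tasks for this request"
budget = get_max_context_length("gpt-3.5-turbo") - count_tokens("gpt-3.5-turbo", prompt)
print(budget)  # 4096 - 6 whitespace tokens = 4090
```

This mirrors how `convert_chat_to_completion` later in this commit derives a default `max_tokens` value, clamped to at least 1.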

@@ -56,7 +56,6 @@ from transformers import (
 )

 # logs
 warnings.filterwarnings("ignore")
 parser = argparse.ArgumentParser()
@@ -295,7 +294,6 @@ def load_pipes(local_deployment):
             model.load_state_dict(torch.load(f"{local_fold}/lllyasviel/ControlNet/annotator/ckpts/mlsd_large_512_fp32.pth"), strict=True)
             return MLSDdetector(model)

         hed_network = Network(f"{local_fold}/lllyasviel/ControlNet/annotator/ckpts/network-bsds500.pth")
         controlnet_sd_pipes = {
@@ -356,6 +354,7 @@ def load_pipes(local_deployment):
     pipes = {**standard_pipes, **other_pipes, **controlnet_sd_pipes}
     return pipes

 pipes = load_pipes(local_deployment)
 end = time.time()
@@ -363,10 +362,12 @@ during = end - start
 print(f"[ ready ] {during}s")

 @app.route('/running', methods=['GET'])
 def running():
     return jsonify({"running": True})

 @app.route('/status/<path:model_id>', methods=['GET'])
 def status(model_id):
     disabled_models = ["microsoft/trocr-base-printed", "microsoft/trocr-base-handwritten"]
@@ -377,6 +378,7 @@ def status(model_id):
     print(f"[ check {model_id} ] failed")
     return jsonify({"loaded": False})

 @app.route('/models/<path:model_id>', methods=['POST'])
 def models(model_id):
     while "using" in pipes[model_id] and pipes[model_id]["using"]:
@@ -392,7 +394,7 @@ def models(model_id):
     if "device" in pipes[model_id]:
         try:
             pipe.to(pipes[model_id]["device"])
-        except:
+        except BaseException:
             pipe.device = torch.device(pipes[model_id]["device"])
             pipe.model.to(pipes[model_id]["device"])
@@ -621,7 +623,7 @@ def models(model_id):
         try:
             pipe.to("cpu")
             torch.cuda.empty_cache()
-        except:
+        except BaseException:
             pipe.device = torch.device("cpu")
             pipe.model.to("cpu")
             torch.cuda.empty_cache()

@@ -57,18 +57,22 @@ max_length = {
     "ada": 2049
 }

 def count_tokens(model_name, text):
     return len(encodings[model_name].encode(text))

 def get_max_context_length(model_name):
     return max_length[model_name]

 def get_token_ids_for_task_parsing(model_name):
     text = '''{"task": "text-classification", "token-classification", "text2text-generation", "summarization", "translation", "question-answering", "conversational", "text-generation", "sentence-similarity", "tabular-classification", "object-detection", "image-classification", "image-to-image", "image-to-text", "text-to-image", "visual-question-answering", "document-question-answering", "image-segmentation", "text-to-speech", "text-to-video", "automatic-speech-recognition", "audio-to-audio", "audio-classification", "canny-control", "hed-control", "mlsd-control", "normal-control", "openpose-control", "canny-text-to-image", "depth-text-to-image", "hed-text-to-image", "mlsd-text-to-image", "normal-text-to-image", "openpose-text-to-image", "seg-text-to-image", "args", "text", "path", "dep", "id", "<GENERATED>-"}'''
     res = encodings[model_name].encode(text)
     res = list(set(res))
     return res

 def get_token_ids_for_choose_model(model_name):
     text = '''{"id": "reason"}'''
     res = encodings[model_name].encode(text)
@@ -76,13 +80,7 @@ def get_token_ids_for_choose_model(model_name):
     return res

 #########
 parser = argparse.ArgumentParser()
 parser.add_argument("--config", type=str, default="swarms/agents/workers/multi_modal_workers/omni_agent/config.yml")
 parser.add_argument("--mode", type=str, default="cli")
@@ -183,7 +181,7 @@ if inference_mode!="huggingface":
         r = requests.get(Model_Server + "/running")
         if r.status_code != 200:
             raise ValueError(message)
-    except:
+    except BaseException:
         raise ValueError(message)
@@ -222,6 +220,7 @@ elif "HUGGINGFACE_ACCESS_TOKEN" in os.environ and os.getenv("HUGGINGFACE_ACCESS_
 else:
     raise ValueError(f"Incorrect HuggingFace token. Please check your {args.config} file.")

 def convert_chat_to_completion(data):
     messages = data.pop('messages', [])
     tprompt = ""
@@ -243,6 +242,7 @@ def convert_chat_to_completion(data):
     data['max_tokens'] = data.get('max_tokens', max(get_max_context_length(LLM) - count_tokens(LLM_encoding, final_prompt), 1))
     return data

 def send_request(data):
     api_key = data.pop("api_key")
     api_type = data.pop("api_type")
@@ -269,6 +269,7 @@ def send_request(data):
     else:
         return response.json()["choices"][0]["message"]["content"].strip()

 def replace_slot(text, entries):
     for key, value in entries.items():
         if not isinstance(value, str):
@@ -276,6 +277,7 @@ def replace_slot(text, entries):
         text = text.replace("{{" + key + "}}", value.replace('"', "'").replace('\n', ""))
     return text

 def find_json(s):
     s = s.replace("\'", "\"")
     start = s.find("{")
@@ -284,21 +286,24 @@ def find_json(s):
     res = res.replace("\n", "")
     return res

 def field_extract(s, field):
     try:
         field_rep = re.compile(f'{field}.*?:.*?"(.*?)"', re.IGNORECASE)
         extracted = field_rep.search(s).group(1).replace("\"", "\'")
-    except:
+    except BaseException:
         field_rep = re.compile(f'{field}:\ *"(.*?)"', re.IGNORECASE)
         extracted = field_rep.search(s).group(1).replace("\"", "\'")
     return extracted

 def get_id_reason(choose_str):
     reason = field_extract(choose_str, "reason")
     id = field_extract(choose_str, "id")
     choose = {"id": id, "reason": reason}
     return id.strip(), reason.strip(), choose

 def record_case(success, **args):
     if success:
         f = open("logs/log_success.jsonl", "a")
@@ -308,6 +313,7 @@ def record_case(success, **args):
     f.write(json.dumps(log) + "\n")
     f.close()

 def image_to_bytes(img_url):
     img_byte = io.BytesIO()
     img_url.split(".")[-1]
@@ -315,6 +321,7 @@ def image_to_bytes(img_url):
     img_data = img_byte.getvalue()
     return img_data

 def resource_has_dep(command):
     args = command["args"]
     for _, v in args.items():
@@ -322,6 +329,7 @@ def resource_has_dep(command):
             return True
     return False

 def fix_dep(tasks):
     for task in tasks:
         args = task["args"]
@@ -335,6 +343,7 @@ def fix_dep(tasks):
             task["dep"] = [-1]
     return tasks

 def unfold(tasks):
     flag_unfold_task = False
     try:
@@ -361,6 +370,7 @@ def unfold(tasks):
     return tasks

 def chitchat(messages, api_key, api_type, api_endpoint):
     data = {
         "model": LLM,
@@ -371,6 +381,7 @@ def chitchat(messages, api_key, api_type, api_endpoint):
     }
     return send_request(data)

 def parse_task(context, input, api_key, api_type, api_endpoint):
     demos_or_presteps = parse_task_demos_or_presteps
     messages = json.loads(demos_or_presteps)
@@ -404,6 +415,7 @@ def parse_task(context, input, api_key, api_type, api_endpoint):
     }
     return send_request(data)

 def choose_model(input, task, metas, api_key, api_type, api_endpoint):
     prompt = replace_slot(choose_model_prompt, {
         "input": input,
@@ -454,6 +466,7 @@ def response_results(input, results, api_key, api_type, api_endpoint):
     }
     return send_request(data)

 def huggingface_model_inference(model_id, data, task):
     task_url = f"https://api-inference.huggingface.co/models/{model_id}"  # InferenceApi does not yet support some tasks
     inference = InferenceApi(repo_id=model_id, token=config["huggingface"]["token"])
@@ -586,6 +599,7 @@ def huggingface_model_inference(model_id, data, task):
         result = {"generated audio": f"/audios/{name}.{type}"}
     return result

 def local_model_inference(model_id, data, task):
     task_url = f"{Model_Server}/models/{model_id}"
@@ -732,6 +746,7 @@ def get_model_status(model_id, url, headers, queue = None):
         queue.put((model_id, False, None))
     return False

 def get_avaliable_models(candidates, topk=5):
     all_available_models = {"local": [], "huggingface": []}
     threads = []
@@ -766,6 +781,7 @@ def get_avaliable_models(candidates, topk=5):
     return all_available_models

 def collect_result(command, choose, inference_result):
     result = {"task": command}
     result["inference result"] = inference_result
@@ -945,6 +961,7 @@ def run_task(input, command, results, api_key, api_type, api_endpoint):
     results[id] = collect_result(command, choose, inference_result)
     return True

 def chat_huggingface(messages, api_key, api_type, api_endpoint, return_planning=False, return_results=False):
     start = time.time()
     context = messages[:-1]
@@ -1032,6 +1049,7 @@ def chat_huggingface(messages, api_key, api_type, api_endpoint, return_planning
logger.info(f"response: {response}") logger.info(f"response: {response}")
return answer return answer
def test(): def test():
# single round examples # single round examples
inputs = [ inputs = [
@ -1055,6 +1073,7 @@ def test():
] ]
chat_huggingface(messages, API_KEY, API_TYPE, API_ENDPOINT, return_planning=False, return_results=False) chat_huggingface(messages, API_KEY, API_TYPE, API_ENDPOINT, return_planning=False, return_results=False)
def cli(): def cli():
messages = [] messages = []
print("Welcome to Jarvis! A collaborative system that consists of an LLM as the controller and numerous expert models as collaborative executors. Jarvis can plan tasks, schedule Hugging Face models, generate friendly responses based on your requests, and help you with many things. Please enter your request (`exit` to exit).") print("Welcome to Jarvis! A collaborative system that consists of an LLM as the controller and numerous expert models as collaborative executors. Jarvis can plan tasks, schedule Hugging Face models, generate friendly responses based on your requests, and help you with many things. Please enter your request (`exit` to exit).")
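The `get_model_status` / `get_avaliable_models` pair above fans out one thread per candidate model and collects each probe's result through a shared queue. A minimal sketch of that fan-out pattern — the function names and the "even-length id is loaded" stand-in probe are illustrative, not the repo's API:

```python
import threading
import queue

def check_status(model_id, results):
    # Placeholder for a real HTTP status probe; here even-length ids count as "loaded".
    loaded = len(model_id) % 2 == 0
    results.put((model_id, loaded))

def available_models(candidates):
    results = queue.Queue()
    threads = [threading.Thread(target=check_status, args=(c, results)) for c in candidates]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # Drain the queue: one result per candidate, keep only the loaded ones.
    return [model_id for model_id, ok in (results.get() for _ in candidates) if ok]

print(available_models(["gpt2", "bert-base", "t5"]))
```

Using a queue rather than a shared list sidesteps locking: `queue.Queue.put` is thread-safe, so each worker can report independently.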

@@ -10,5 +10,3 @@ class Replicator:
    def run(self, task):
        pass

@@ -30,6 +30,7 @@ class Step:
        self.args = args
        self.tool = tool

class Plan:
    def __init__(
        self,
@@ -44,9 +45,6 @@ class Plan:
        return str(self)

class OmniModalAgent:
    """
    OmniModalAgent
@@ -72,6 +70,7 @@ class OmniModalAgent:
    agent = OmniModalAgent(llm)
    response = agent.run("Hello, how are you? Create an image of how your are doing!")
    """
    def __init__(
        self,
        llm: BaseLanguageModel,
@@ -105,7 +104,6 @@ class OmniModalAgent:
        # self.task_executor = TaskExecutor
        self.history = []

    def run(
        self,
        input: str
@@ -203,5 +201,3 @@ class OmniModalAgent:
        """
        for token in response.split():
            yield token
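`OmniModalAgent` ends by yielding its response token-by-token via `response.split()`. That streaming shape is just a generator over whitespace-split words, which can be exercised on its own:

```python
def stream_tokens(response: str):
    # Yield whitespace-separated "tokens" one at a time, mirroring the
    # generator at the end of OmniModalAgent.run.
    for token in response.split():
        yield token

# Consuming the generator lazily, as a chat UI would:
for tok in stream_tokens("hello swarm world"):
    print(tok)
```

Note that splitting on whitespace is a rough stand-in for real model tokenization; it only simulates streaming for display purposes.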

@@ -132,12 +132,6 @@ class SalesConversationChain(LLMChain):
         return cls(prompt=prompt, llm=llm, verbose=verbose)

 # Set up a knowledge base
 def setup_knowledge_base(product_catalog: str = None):
     """
@@ -186,8 +180,6 @@ def get_tools(product_catalog):
     return tools

 class CustomPromptTemplateForTools(StringPromptTemplate):
     # The template to use
     template: str
@@ -238,7 +230,7 @@ class SalesConvoOutputParser(AgentOutputParser):
         regex = r"Action: (.*?)[\n]*Action Input: (.*)"
         match = re.search(regex, text)
         if not match:
-            ## TODO - this is not entirely reliable, sometimes results in an error.
+            # TODO - this is not entirely reliable, sometimes results in an error.
             return AgentFinish(
                 {
                     "output": "I apologize, I was unable to find the answer to your question. Is there anything else I can help with?"
@@ -405,7 +397,7 @@ class ProfitPilot(Chain, BaseModel):
         tool_names = [tool.name for tool in tools]
         # WARNING: this output parser is NOT reliable yet
-        ## It makes assumptions about output from LLM which can break and throw an error
+        # It makes assumptions about output from LLM which can break and throw an error
         output_parser = SalesConvoOutputParser(ai_prefix=kwargs["salesperson_name"])
         sales_agent_with_tools = LLMSingleActionAgent(
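`SalesConvoOutputParser` extracts the tool call with the regex shown above — and, as the repo's own TODO admits, falls back to a canned reply when the LLM output doesn't match. The exact pattern can be checked in isolation (the helper name and sample strings are illustrative):

```python
import re

ACTION_REGEX = r"Action: (.*?)[\n]*Action Input: (.*)"

def parse_action(text):
    # Returns (action, action_input) on a match, or None when the LLM output
    # carries no tool call -- the case the parser's TODO warns about.
    match = re.search(ACTION_REGEX, text)
    if not match:
        return None
    return match.group(1).strip(), match.group(2).strip()

print(parse_action("Action: ProductSearch\nAction Input: gym memberships"))
```

Because `.` does not match newlines by default, the lazy `(.*?)` group stops at the end of the `Action:` line and `[\n]*` bridges to the `Action Input:` line.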

@@ -17,4 +17,3 @@ class ErrorArtifact(BaseArtifact):
        from griptape.schemas import ErrorArtifactSchema
        return dict(ErrorArtifactSchema().dump(self))

@@ -5,6 +5,7 @@ import json
from typing import Optional
from pydantic import BaseModel, Field, StrictStr

class Artifact(BaseModel):
    """
@@ -63,5 +64,3 @@ class Artifact(BaseModel):
        )
        return _obj

@@ -14,6 +14,7 @@ logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(

# ---------- Boss Node ----------
class Boss:
    """
    The Boss class is responsible for creating and executing tasks using the BabyAGI model.
@@ -37,6 +38,7 @@ class Boss:
    # Run the Boss to process the objective
    boss.run()
    """
    def __init__(
        self,
        objective: str,

@@ -28,4 +28,3 @@ class PegasusEmbedding:
        except Exception as e:
            logging.error(f"Failed to generate embeddings. Error: {e}")
            raise

@@ -12,6 +12,7 @@ from swarms.swarms.swarms import HierarchicalSwarm
logging.basicConfig(level=logging.DEBUG, format='%(asctime)s - %(levelname)s - %(message)s')

class HiveMind:
    def __init__(
        self,

@@ -10,9 +10,11 @@ from swarms.memory.schemas import Task as APITask
class Step(APIStep):
    additional_properties: Optional[Dict[str, str]] = None

class Task(APITask):
    steps: List[Step] = []

class NotFoundException(Exception):
    """
    Exception raised when a resource is not found.
@@ -23,6 +25,7 @@ class NotFoundException(Exception):
        self.item_id = item_id
        super().__init__(f"{item_name} with {item_id} not found.")

class TaskDB(ABC):
    async def create_task(
        self,

@@ -6,6 +6,7 @@ from typing import Union, List
import oceandb
from oceandb.utils.embedding_function import MultiModalEmbeddingFunction

class OceanDB:
    def __init__(self):
        try:

@@ -1,6 +1,7 @@
import requests
import os

class Anthropic:
    """Anthropic large language models."""

@@ -1,5 +1,6 @@
from abc import ABC, abstractmethod

class AbstractModel(ABC):
    # abstract base class for language models
    def __init__():
@@ -12,4 +13,3 @@ class AbstractModel(ABC):
    def chat(self, prompt, history):
        pass

@@ -13,6 +13,7 @@ class Mistral:
    result = model.run(task)
    print(result)
    """
    def __init__(
        self,
        ai_name: str = "Node Model Agent",
@@ -151,4 +152,3 @@ class Mistral:
        """
        for token in response.split():
            yield token

@@ -1,5 +1,6 @@
from transformers import AutoTokenizer, AutoModelForCausalLM

class Petals:
    """Petals Bloom models."""

@@ -3,11 +3,13 @@ import re
from abc import abstractmethod
from typing import Dict, NamedTuple

class AgentAction(NamedTuple):
    """Action returned by AgentOutputParser."""
    name: str
    args: Dict

class BaseAgentOutputParser:
    """Base Output parser for Agent."""
@@ -15,6 +17,7 @@ class BaseAgentOutputParser:
    def parse(self, text: str) -> AgentAction:
        """Return AgentAction"""

class AgentOutputParser(BaseAgentOutputParser):
    """Output parser for Agent."""

@@ -1,6 +1,7 @@
import json
from typing import List

class PromptGenerator:
    """A class for generating custom prompt strings."""
@@ -75,4 +76,3 @@ class PromptGenerator:
        )
        return prompt_string

@@ -2,6 +2,7 @@ import time
from typing import Any, List
from swarms.models.prompts.agent_prompt_generator import get_prompt

class TokenUtils:
    @staticmethod
    def count_tokens(text: str) -> int:

@@ -27,6 +27,7 @@ def generate_report_prompt(question, research_summary):
        " in depth, with facts and numbers if available, a minimum of 1,200 words and with markdown syntax and apa format. "\
        "Write all source urls at the end of the report in apa format"

def generate_search_queries_prompt(question):
    """ Generates the search queries prompt for the given question.
    Args: question (str): The question to generate the search queries prompt for
@@ -69,6 +70,7 @@ def generate_outline_report_prompt(question, research_summary):
        ' The research report should be detailed, informative, in-depth, and a minimum of 1,200 words.' \
        ' Use appropriate Markdown syntax to format the outline and ensure readability.'

def generate_concepts_prompt(question, research_summary):
    """ Generates the concepts prompt for the given question.
    Args: question (str): The question to generate the concepts prompt for
@@ -96,6 +98,7 @@ def generate_lesson_prompt(concept):
    return prompt

def get_report_by_type(report_type):
    report_type_mapping = {
        'research_report': generate_report_prompt,

@@ -10,6 +10,7 @@ from swarms.utils.serializable import Serializable
 if TYPE_CHECKING:
     from langchain.prompts.chat import ChatPromptTemplate

 def get_buffer_string(
     messages: Sequence[BaseMessage], human_prefix: str = "Human", ai_prefix: str = "AI"
 ) -> str:
@@ -95,7 +96,7 @@ class BaseMessageChunk(BaseMessage):
         for k, v in right.items():
             if k not in merged:
                 merged[k] = v
-            elif type(merged[k]) != type(v):
+            elif not isinstance(merged[k], type(v)):
                 raise ValueError(
                     f'additional_kwargs["{k}"] already exists in this message,'
                     " but with a different type."
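The change from `type(merged[k]) != type(v)` to `not isinstance(merged[k], type(v))` makes the chunk-merge tolerate subclasses while still rejecting genuinely mismatched types. A reduced sketch of that merge logic (simplified from the class above; string concatenation stands in for the full chunk-merging rules):

```python
def merge_kwargs(left: dict, right: dict) -> dict:
    # Merge right into a copy of left: new keys are added, type mismatches
    # raise, and string values are concatenated (as streamed chunks are).
    merged = left.copy()
    for k, v in right.items():
        if k not in merged:
            merged[k] = v
        elif not isinstance(merged[k], type(v)):
            raise ValueError(f'additional_kwargs["{k}"] already exists with a different type.')
        elif isinstance(merged[k], str):
            merged[k] += v
    return merged

print(merge_kwargs({"a": "foo"}, {"a": "bar", "b": 1}))
```

With the old `type(...) != type(...)` comparison, a `bool` value meeting an `int` (or any subclass meeting its base) would have raised; `isinstance` accepts it.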

@@ -11,6 +11,7 @@ class Message:
    The base abstract Message class.
    Messages are the inputs and outputs of ChatModels.
    """
    def __init__(self, content: str, role: str, additional_kwargs: Dict = None):
        self.content = content
        self.role = role
@@ -25,6 +26,7 @@ class HumanMessage(Message):
    """
    A Message from a human.
    """
    def __init__(self, content: str, role: str = "Human", additional_kwargs: Dict = None, example: bool = False):
        super().__init__(content, role, additional_kwargs)
        self.example = example
@@ -37,6 +39,7 @@ class AIMessage(Message):
    """
    A Message from an AI.
    """
    def __init__(self, content: str, role: str = "AI", additional_kwargs: Dict = None, example: bool = False):
        super().__init__(content, role, additional_kwargs)
        self.example = example
@@ -50,6 +53,7 @@ class SystemMessage(Message):
    A Message for priming AI behavior, usually passed in as the first of a sequence
    of input messages.
    """
    def __init__(self, content: str, role: str = "System", additional_kwargs: Dict = None):
        super().__init__(content, role, additional_kwargs)
@@ -61,6 +65,7 @@ class FunctionMessage(Message):
    """
    A Message for passing the result of executing a function back to a model.
    """
    def __init__(self, content: str, role: str = "Function", name: str, additional_kwargs: Dict = None):
        super().__init__(content, role, additional_kwargs)
        self.name = name
@@ -73,6 +78,7 @@ class ChatMessage(Message):
    """
    A Message that can be assigned an arbitrary speaker (i.e. role).
    """
    def __init__(self, content: str, role: str, additional_kwargs: Dict = None):
        super().__init__(content, role, additional_kwargs)

@@ -21,6 +21,7 @@ def character(character_name, topic, word_limit):
    """
    return prompt

def debate_monitor(game_description, word_limit, character_names):
    prompt = f"""

@@ -1,6 +1,5 @@
SALES_ASSISTANT_PROMPT = """You are a sales assistant helping your sales agent to determine which stage of a sales conversation should the agent move to, or stay at.
Following '===' is the conversation history.
Use this conversation history to make your decision.
@@ -55,4 +54,3 @@ conversation_stages = {'1' : "Introduction: Start the conversation by introducin
'5': "Solution presentation: Based on the prospect's needs, present your product/service as the solution that can address their pain points.",
'6': "Objection handling: Address any objections that the prospect may have regarding your product/service. Be prepared to provide evidence or testimonials to support your claims.",
'7': "Close: Ask for the sale by proposing a next step. This could be a demo, a trial or a meeting with decision-makers. Ensure to summarize what has been discussed and reiterate the benefits."}

@@ -9,7 +9,6 @@ conversation_stages = {
}

SALES_AGENT_TOOLS_PROMPT = """
Never forget your name is {salesperson_name}. You work as a {salesperson_role}.
You work at company named {company_name}. {company_name}'s business is the following: {company_business}.

@@ -2,6 +2,7 @@ from typing import List, Dict, Any, Union
from concurrent.futures import Executor, ThreadPoolExecutor, as_completed
from graphlib import TopologicalSorter

class Task:
    def __init__(
        self,
@@ -19,3 +20,89 @@ class Task:
    def execute(self):
        raise NotImplementedError
class NonLinearWorkflow:
    """
    NonLinearWorkflow constructs a non-sequential DAG of tasks to be executed by agents

    Architecture:
    NonLinearWorkflow = Task + Agent + Executor

    ASCII Diagram:
    +-------------------+
    | NonLinearWorkflow |
    +-------------------+
    |                   |
    |                   |
    |                   |
    |                   |
    |                   |
    |                   |
    |                   |
    |                   |
    +-------------------+
    """
    def __init__(
        self,
        agents,
        iters_per_task
    ):
        """A workflow is a collection of tasks that can be executed in parallel or sequentially."""
        super().__init__()
        self.executor = ThreadPoolExecutor()
        self.agents = agents
        self.tasks = []

    def add(self, task: Task):
        """Add a task to the workflow"""
        assert isinstance(
            task,
            Task
        ), "Input must be an instance of Task"
        self.tasks.append(task)
        return task

    def run(self):
        """Run the workflow"""
        ordered_tasks = self.order_tasks()
        exit_loop = False

        while not self.is_finished() and not exit_loop:
            futures_list = {}
            for task in ordered_tasks:
                if task.can_execute:
                    future = self.executor.submit(self.agents.run, task.task_string)
                    futures_list[future] = task
            for future in as_completed(futures_list):
                if isinstance(future.result(), Exception):
                    exit_loop = True
                    break
        return self.output_tasks()

    def output_tasks(self) -> List[Task]:
        """Output tasks from the workflow"""
        return [task for task in self.tasks if not task.children]

    def to_graph(self) -> Dict[str, set[str]]:
        """Convert the workflow to a graph"""
        graph = {
            task.id: set(child.id for child in task.children) for task in self.tasks
        }
        return graph

    def order_tasks(self) -> List[Task]:
        """Order the tasks using topological sorting"""
        task_order = TopologicalSorter(
            self.to_graph()
        ).static_order()
        return [
            self.find_task(task_id) for task_id in task_order
        ]
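`order_tasks` leans on `graphlib.TopologicalSorter` from the standard library: given a `{node: predecessors}` mapping, `static_order()` yields every node only after the nodes it depends on. A self-contained example with an illustrative mini-pipeline (the task names are made up, not the repo's):

```python
from graphlib import TopologicalSorter

# graphlib's convention: each key maps to the set of nodes it depends on.
graph = {
    "report": {"analyze"},
    "analyze": {"fetch", "clean"},
    "clean": {"fetch"},
    "fetch": set(),
}

order = list(TopologicalSorter(graph).static_order())
print(order)  # dependencies always precede dependents
```

Only the relative order is guaranteed: among nodes with no mutual dependency, `static_order()` may pick any order, so assertions should compare indices rather than the full list.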

@@ -3,13 +3,12 @@ from __future__ import annotations
 import json
 import pprint
 import uuid
-from abc import ABC
+from abc import ABC, abstractmethod
 from enum import Enum
-from typing import Any, Optional
+from typing import Any, List, Optional, Union
-from swarms.artifacts.main import Artifact
 from pydantic import BaseModel, Field, StrictStr, conlist
+from swarms.artifacts.main import Artifact
 from swarms.artifacts.error_artifact import ErrorArtifact
@@ -20,37 +19,37 @@ class BaseTask(ABC):
         FINISHED = 3
     def __init__(self):
-        self.id = uuid.uuid4().hex
-        self.state = self.State.PENDING
-        self.parent_ids = []
-        self.child_ids = []
-        self.output = None
-        self.structure = None
+        self.id: str = uuid.uuid4().hex
+        self.state: BaseTask.State = self.State.PENDING
+        self.parent_ids: List[str] = []
+        self.child_ids: List[str] = []
+        self.output: Optional[Union[Artifact, ErrorArtifact]] = None
+        self.structure: Optional['Structure'] = None
     @property
-    # @abstractmethod
-    def input(self):
+    @abstractmethod
+    def input(self) -> Any:
         pass
     @property
-    def parents(self):
+    def parents(self) -> List[BaseTask]:
         return [self.structure.find_task(parent_id) for parent_id in self.parent_ids]
     @property
-    def children(self):
+    def children(self) -> List[BaseTask]:
         return [self.structure.find_task(child_id) for child_id in self.child_ids]
-    def __rshift__(self, child):
+    def __rshift__(self, child: BaseTask) -> BaseTask:
         return self.add_child(child)
-    def __lshift__(self, child):
+    def __lshift__(self, child: BaseTask) -> BaseTask:
         return self.add_parent(child)
-    def preprocess(self, structure):
+    def preprocess(self, structure: 'Structure') -> BaseTask:
         self.structure = structure
         return self
-    def add_child(self, child):
+    def add_child(self, child: BaseTask) -> BaseTask:
         if self.structure:
             child.structure = self.structure
         elif child.structure:
@@ -70,7 +69,7 @@ class BaseTask(ABC):
         return child
-    def add_parent(self, parent):
+    def add_parent(self, parent: BaseTask) -> BaseTask:
         if self.structure:
             parent.structure = self.structure
         elif parent.structure:
@@ -90,22 +89,22 @@ class BaseTask(ABC):
         return parent
-    def is_pending(self):
+    def is_pending(self) -> bool:
         return self.state == self.State.PENDING
-    def is_finished(self):
+    def is_finished(self) -> bool:
         return self.state == self.State.FINISHED
-    def is_executing(self):
+    def is_executing(self) -> bool:
         return self.state == self.State.EXECUTING
-    def before_run(self):
+    def before_run(self) -> None:
         pass
-    def after_run(self):
+    def after_run(self) -> None:
         pass
-    def execute(self):
+    def execute(self) -> Optional[Union[Artifact, ErrorArtifact]]:
         try:
             self.state = self.State.EXECUTING
             self.before_run()
@@ -117,30 +116,19 @@ class BaseTask(ABC):
             self.state = self.State.FINISHED
         return self.output
-    def can_execute(self):
+    def can_execute(self) -> bool:
         return self.state == self.State.PENDING and all(parent.is_finished() for parent in self.parents)
-    def reset(self):
+    def reset(self) -> BaseTask:
         self.state = self.State.PENDING
         self.output = None
         return self
-    # @abstractmethod
-    def run(self):
+    @abstractmethod
+    def run(self) -> Optional[Union[Artifact, ErrorArtifact]]:
         pass
 class Task(BaseModel):
     input: Optional[StrictStr] = Field(
         None,
@@ -154,66 +142,37 @@ class Task(BaseModel):
         ...,
         description="ID of the task"
     )
-    artifacts: conlist(Artifact) = Field(
+    artifacts: conlist(Artifact, min_items=1) = Field(
         ...,
         description="A list of artifacts that the task has been produced"
     )
-    __properties = ["input", "additional_input", "task_id", "artifact"]
     class Config:
-        #pydantic config
         allow_population_by_field_name = True
         validate_assignment = True
     def to_str(self) -> str:
-        """Returns the str representation of the model using alias"""
         return pprint.pformat(self.dict(by_alias=True))
     def to_json(self) -> str:
-        """Returns the JSON representation of the model using alias"""
-        return json.dumps(self.to_dict())
+        return json.dumps(self.dict(by_alias=True, exclude_none=True))
     @classmethod
-    def from_json(cls, json_str: str) -> Task:
-        """Create an instance of Task from a json string"""
-        return cls.from_dict(json.loads(json_str))
+    def from_json(cls, json_str: str) -> 'Task':
+        return cls.parse_raw(json_str)
-    def to_dict(self):
-        """Returns the dict representation of the model using alias"""
-        _dict = self.dict(by_alias=True, exclude={}, exclude_none=True)
-        _items =[]
-        if self.artifacts:
-            for _item in self.artifacts:
-                if _item:
-                    _items.append(_item.to_dict())
-            _dict["artifacts"] = _items
-        #set to None if additional input is None
-        # and __fields__set contains the field
-        if self.additional_input is None and "additional_input" in self.__fields__set__:
-            _dict["additional_input"] = None
+    def to_dict(self) -> dict:
+        _dict = self.dict(by_alias=True, exclude_none=True)
+        if self.artifacts:
+            _dict["artifacts"] = [artifact.dict(by_alias=True, exclude_none=True) for artifact in self.artifacts]
         return _dict
     @classmethod
-    def from_dict(cls, obj: dict) -> Task:
-        """Create an instance of Task from dict"""
+    def from_dict(cls, obj: dict) -> 'Task':
         if obj is None:
             return None
         if not isinstance(obj, dict):
-            return Task.parse_obj(obj)
-        _obj = Task.parse_obj(
-            {
-                "input": obj.get("input"),
-                "additional_input": obj.get("additional_input"),
-                "task_id": obj.get("task_id"),
-                "artifacts": [
-                    Artifact.from_dict(_item) for _item in obj.get("artifacts")
-                ]
-                if obj.get("artifacts") is not None
-                else None,
-            }
-        )
+            raise ValueError("Input must be a dictionary.")
+        if 'artifacts' in obj:
+            obj['artifacts'] = [Artifact.parse_obj(artifact) for artifact in obj['artifacts']]
+        return cls.parse_obj(obj)
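The rewrite above replaces hand-rolled `to_dict`/`from_dict` plumbing with pydantic's own parsing. The round-trip contract it preserves — serialize dropping `None` fields, deserialize back to an equal object — can be sketched without pydantic, using a hypothetical `MiniTask` dataclass (not the repo's model):

```python
import json
from dataclasses import dataclass, field, asdict
from typing import List, Optional

@dataclass
class MiniTask:
    task_id: str
    input: Optional[str] = None
    artifacts: List[dict] = field(default_factory=list)

    def to_json(self) -> str:
        # Serialize, dropping None values as the pydantic version does
        # with exclude_none=True.
        return json.dumps({k: v for k, v in asdict(self).items() if v is not None})

    @classmethod
    def from_json(cls, json_str: str) -> "MiniTask":
        obj = json.loads(json_str)
        if not isinstance(obj, dict):
            raise ValueError("Input must be a dictionary.")
        return cls(**obj)

task = MiniTask(task_id="t1", artifacts=[{"artifact_id": "a1"}])
print(MiniTask.from_json(task.to_json()) == task)
```

The dropped `None` field survives the round trip because the dataclass default restores it, which is exactly why `exclude_none=True` is safe in the pydantic version.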

@ -1,24 +1,8 @@
from __future__ import annotations from __future__ import annotations
import concurrent.futures from concurrent.futures import ThreadPoolExecutor
from typing import Any, Dict, List, Optional from typing import Any, Dict, List, Optional
from swarms.artifacts.error_artifact import ErrorArtifact
from swarms.structs.task import BaseTask
class StringTask(BaseTask):
def __init__(self, task):
super().__init__()
self.task = task
def execute(self) -> Any:
prompt = self.task.replace(
"{{ parent_input }}", self.parents[0].output if self.parents else ""
)
response = self.structure.llm(prompt)
self.output = response
return response
class Workflow: class Workflow:
""" """
@ -41,21 +25,34 @@ class Workflow:
""" """
def __init__( class Task:
self, def __init__(self, task: str):
llm, self.task = task
parallel: bool = False self.parents = []
): self.children = []
self.llm = llm self.output = None
self.tasks: List[BaseTask] = [] self.structure = None
self.parallel = parallel
def add_child(self, child: 'Workflow.Task'):
self.children.append(child)
child.parents.append(self)
child.structure = self.structure
def execute(self) -> Any:
prompt = self.task.replace(
"{{ parent_input }}", self.parents[0].output if self.parents else ""
)
response = self.structure.agent.run(prompt)
self.output = response
return response
def __init__(self, agent, parallel: bool = False):
self.agent = agent
self.tasks: List[Workflow.Task] = []
self.parallel = parallel
-    def add(
-        self,
-        task: BaseTask
-    ) -> BaseTask:
-        task = StringTask(task)
+    def add(self, task: str) -> Task:
+        task = self.Task(task)
        if self.last_task():
            self.last_task().add_child(task)
@@ -64,48 +61,35 @@ class Workflow:
        self.tasks.append(task)
        return task

-    def first_task(self) -> Optional[BaseTask]:
+    def first_task(self) -> Optional[Task]:
        return self.tasks[0] if self.tasks else None

-    def last_task(self) -> Optional[BaseTask]:
+    def last_task(self) -> Optional[Task]:
        return self.tasks[-1] if self.tasks else None

-    def run(self, *args) -> BaseTask:
-        self._execution_args = args
+    def run(self, *args) -> Task:
        [task.reset() for task in self.tasks]
        if self.parallel:
-            with concurrent.futures.ThreadPoolExecutor() as executor:
+            with ThreadPoolExecutor() as executor:
                list(executor.map(self.__run_from_task, [self.first_task]))
        else:
            self.__run_from_task(self.first_task())
-        self._execution_args = ()
        return self.last_task()

-    def context(self, task: BaseTask) -> Dict[str, Any]:
-        context = super().context(task)
-        context.update(
-            {
-                "parent_output": task.parents[0].output.to_text() \
-                if task.parents and task.parents[0].output else None,
-                "parent": task.parents[0] if task.parents else None,
-                "child": task.children[0] if task.children else None
-            }
-        )
-        return context
+    def context(self, task: Task) -> Dict[str, Any]:
+        return {
+            "parent_output": task.parents[0].output if task.parents and task.parents[0].output else None,
+            "parent": task.parents[0] if task.parents else None,
+            "child": task.children[0] if task.children else None
+        }

-    def __run_from_task(self, task: Optional[BaseTask]) -> None:
+    def __run_from_task(self, task: Optional[Task]) -> None:
        if task is None:
            return
        else:
-            if isinstance(task.execute(), ErrorArtifact):
+            if isinstance(task.execute(), Exception):
                return
            else:
                self.__run_from_task(next(iter(task.children), None))
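The hunk above touches `add`, `run`, and `__run_from_task` without showing the full class. To see the control flow end to end, here is a stripped-down, runnable sketch of the sequential path; the `Task` and `Workflow` classes below are illustrative stand-ins (stubbed `execute`, no agent), not the library's actual API:

```python
from typing import List, Optional

class Task:
    """Minimal stand-in for the workflow's task node (illustrative only)."""
    def __init__(self, prompt: str):
        self.prompt = prompt
        self.children: List["Task"] = []
        self.output: Optional[str] = None

    def add_child(self, task: "Task") -> None:
        self.children.append(task)

    def execute(self) -> str:
        # A real task would call the agent; here we just echo the prompt.
        self.output = f"done:{self.prompt}"
        return self.output

class Workflow:
    def __init__(self):
        self.tasks: List[Task] = []

    def add(self, prompt: str) -> Task:
        # Each new task becomes a child of the previous one, forming a chain.
        task = Task(prompt)
        if self.tasks:
            self.tasks[-1].add_child(task)
        self.tasks.append(task)
        return task

    def run(self) -> Optional[Task]:
        self.__run_from_task(self.tasks[0] if self.tasks else None)
        return self.tasks[-1] if self.tasks else None

    def __run_from_task(self, task: Optional[Task]) -> None:
        # Walk the child chain recursively, mirroring the sequential branch above.
        if task is None:
            return
        task.execute()
        self.__run_from_task(next(iter(task.children), None))

wf = Workflow()
wf.add("a")
wf.add("b")
last = wf.run()
print(last.output)  # done:b
```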

@@ -5,6 +5,7 @@ from time import sleep
from swarms.utils.decorators import error_decorator, log_decorator, timing_decorator
from swarms.workers.worker import Worker

class AutoScaler:
    """
    The AutoScaler is like a Kubernetes pod that autoscales an agent, worker, or boss!
@@ -91,4 +92,3 @@ class AutoScaler:
        if self.agents_pool:
            agent_to_remove = self.agents_pool.pop()
            del agent_to_remove
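The scale-down hunk above pops an agent from the pool. A minimal self-contained sketch of that pattern, with a lock and an empty-pool guard; `scale_up` and the placeholder agents are assumptions for illustration, not the repository's full class:

```python
from threading import Lock

class AutoScaler:
    """Illustrative sketch of pool-based scale up/down (not the library's full class)."""
    def __init__(self, initial_agents: int = 1):
        self.agents_pool = [object() for _ in range(initial_agents)]
        self.lock = Lock()

    def scale_up(self, n: int = 1) -> None:
        with self.lock:
            self.agents_pool.extend(object() for _ in range(n))

    def scale_down(self) -> None:
        # Guard against popping from an empty pool.
        with self.lock:
            if self.agents_pool:
                self.agents_pool.pop()

scaler = AutoScaler(initial_agents=2)
scaler.scale_up(3)
scaler.scale_down()
print(len(scaler.agents_pool))  # 4
```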

@@ -1,5 +1,6 @@
from abc import ABC, abstractmethod

class AbstractSwarm(ABC):
    # TODO: Pass in an abstract LLM class that can utilize HF or Anthropic models; move away from OpenAI
    # TODO: Add a universal communication layer, an Ocean vectorstore instance
@@ -19,5 +20,3 @@ class AbstractSwarm(ABC):
    @abstractmethod
    def run(self):
        pass

@@ -1,6 +1,7 @@
from typing import List
from swarms.workers.worker import Worker

class DialogueSimulator:
    def __init__(self, agents: List[Worker]):
        self.agents = agents

@@ -29,6 +29,7 @@ class GodMode:
    """
    def __init__(
        self,
        llms

@@ -72,7 +72,6 @@ class GroupChat:
    )

class GroupChatManager(Worker):
    def __init__(
        self,
@@ -85,9 +84,9 @@ class GroupChatManager(Worker):
    ):
        super().__init__(
            ai_name=ai_name,
-            max_consecutive_auto_reply=max_consecutive_auto_reply,
-            human_input_mode=human_input_mode,
-            system_message=system_message,
+            # max_consecutive_auto_reply=max_consecutive_auto_reply,
+            # human_input_mode=human_input_mode,
+            # system_message=system_message,
            **kwargs
        )
        self.register_reply(

@@ -3,14 +3,18 @@ import tenacity
from langchain.output_parsers import RegexParser

# utils
class BidOutputParser(RegexParser):
    def get_format_instructions(self) -> str:
        return "Your response should be an integer delimited by angled brackets, like this: <int>"

bid_parser = BidOutputParser(
    regex=r"<(\d+)>", output_keys=["bid"], default_output_key="bid"
)

def select_next_speaker(
    step: int,
    agents,
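The `BidOutputParser` above extracts an integer bid with the regex `<(\d+)>`. A dependency-free sketch of the same extraction using only `re` (the `parse_bid` helper and its `default` parameter are illustrative, not the library's API):

```python
import re

BID_REGEX = r"<(\d+)>"

def parse_bid(text: str, default: int = 0) -> int:
    """Extract an integer bid delimited by angle brackets, e.g. '<7>'."""
    match = re.search(BID_REGEX, text)
    return int(match.group(1)) if match else default

print(parse_bid("I bid <7> on this turn."))  # 7
print(parse_bid("no bid here"))              # 0
```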

@@ -7,6 +7,7 @@ def select_speaker(step: int, agents: List[Worker]) -> int:
    # This function selects the speaker in a round-robin fashion
    return step % len(agents)
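The round-robin selection above is just a modulus over the agent list; a tiny runnable demonstration:

```python
def select_speaker(step: int, agents: list) -> int:
    # Round-robin: cycle through agent indices as steps advance.
    return step % len(agents)

agents = ["a", "b", "c"]
print([select_speaker(s, agents) for s in range(5)])  # [0, 1, 2, 0, 1]
```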
class MultiAgentDebate:
    """
    MultiAgentDebate
@@ -15,6 +16,7 @@ class MultiAgentDebate:
    """
    def __init__(
        self,
        agents: List[Worker],

@@ -15,6 +15,7 @@ class TaskStatus(Enum):
    COMPLETED = 3
    FAILED = 4

class Orchestrator:
    """
    The Orchestrator takes in an agent, worker, or boss as input
@@ -88,6 +89,7 @@ class Orchestrator:
    print(orchestrator.retrieve_result(id(task)))
    ```
    """
    def __init__(
        self,
        agent,
@@ -121,8 +123,8 @@ class Orchestrator:
        self.embed_func = embed_func if embed_func else self.embed

    # @abstractmethod
    def assign_task(
        self,
        agent_id: int,
@@ -170,8 +172,8 @@ class Orchestrator:
        embedding = openai(input)
        return embedding

    # @abstractmethod
    def retrieve_results(self, agent_id: int) -> Any:
        """Retrieve results from a specific agent"""
@@ -202,8 +204,8 @@ class Orchestrator:
        logging.error(f"Failed to update the vector database. Error: {e}")
        raise

    # @abstractmethod
    def get_vector_db(self):
        """Retrieve the vector database"""
        return self.collection
@@ -291,9 +293,6 @@ class Orchestrator:
        objective=f"chat with agent {receiver_id} about {message}"
    )

    def add_agents(
        self,
        num_agents: int
@@ -311,4 +310,3 @@ class Orchestrator:
        self.executor = ThreadPoolExecutor(
            max_workers=self.agents.qsize()
        )

@@ -13,6 +13,7 @@ class TaskStatus(Enum):
    COMPLETED = 3
    FAILED = 4

class ScalableGroupChat:
    """
    This class enables a scalable group chat, like Telegram; it takes a Worker as input
@@ -26,6 +27,7 @@ class ScalableGroupChat:
    -> every worker can communicate without restrictions in parallel
    """
    def __init__(
        self,
        worker_count: int = 5,
@@ -61,7 +63,6 @@ class ScalableGroupChat:
        return embedding

    def retrieve_results(
        self,
        agent_id: int
@@ -95,13 +96,12 @@ class ScalableGroupChat:
        logging.error(f"Failed to update the vector database. Error: {e}")
        raise

    # @abstractmethod
    def get_vector_db(self):
        """Retrieve the vector database"""
        return self.collection

    def append_to_db(
        self,
        result: str
@@ -118,8 +118,6 @@ class ScalableGroupChat:
        logging.error(f"Failed to append the agent output to database. Error: {e}")
        raise

    def chat(
        self,
        sender_id: int,
@@ -158,5 +156,3 @@ class ScalableGroupChat:
        self.run(
            objective=f"chat with agent {receiver_id} about {message}"
        )

@@ -1,12 +1,14 @@
from swarms.workers.worker import Worker
from queue import Queue, PriorityQueue

class SimpleSwarm:
    def __init__(
        self,
-        num_workers,
-        openai_api_key,
-        ai_name
+        num_workers: int = None,
+        openai_api_key: str = None,
+        ai_name: str = None,
+        rounds: int = 1,
    ):
        """
@@ -37,14 +39,14 @@ class SimpleSwarm:
        """
        self.workers = [
-            Worker(openai_api_key, ai_name) for _ in range(num_workers)
+            Worker(ai_name, ai_name, openai_api_key) for _ in range(num_workers)
        ]
        self.task_queue = Queue()
        self.priority_queue = PriorityQueue()

    def distribute(
        self,
-        task,
+        task: str = None,
        priority=None
    ):
        """Distribute a task to the workers"""
@@ -78,7 +80,6 @@ class SimpleSwarm:
        return responses

    def run_old(self, task):
        responses = []
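`SimpleSwarm` above keeps two queues: a FIFO `task_queue` and a `PriorityQueue` for prioritized work. A minimal runnable sketch of that two-queue distribution scheme (the `next_task` drain order is an assumption about intent, not code from the repository):

```python
from queue import Queue, PriorityQueue

class SimpleSwarm:
    """Sketch of the two-queue distribution scheme (illustrative, no real workers)."""
    def __init__(self):
        self.task_queue = Queue()
        self.priority_queue = PriorityQueue()

    def distribute(self, task: str, priority: int = None) -> None:
        # Prioritized tasks go to the priority queue; the rest are FIFO.
        if priority is not None:
            self.priority_queue.put((priority, task))
        else:
            self.task_queue.put(task)

    def next_task(self) -> str:
        # Drain priority work first (lowest number wins), then fall back to FIFO.
        if not self.priority_queue.empty():
            return self.priority_queue.get()[1]
        return self.task_queue.get()

swarm = SimpleSwarm()
swarm.distribute("low")
swarm.distribute("urgent", priority=0)
print(swarm.next_task())  # urgent
print(swarm.next_task())  # low
```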

@@ -1,3 +1,17 @@
+import interpreter
+from transformers import (
+    BlipForQuestionAnswering,
+    BlipProcessor,
+)
+from PIL import Image
+import torch
+from swarms.utils.logger import logger
+from pydantic import Field
+from langchain.tools.file_management.write import WriteFileTool
+from langchain.tools.file_management.read import ReadFileTool
+from langchain.tools import BaseTool
+from langchain.text_splitter import RecursiveCharacterTextSplitter
+from langchain.chains.qa_with_sources.loading import BaseCombineDocumentsChain
import asyncio
import os
@@ -13,16 +27,6 @@ from langchain.docstore.document import Document
ROOT_DIR = "./data/"

-from langchain.chains.qa_with_sources.loading import BaseCombineDocumentsChain
-from langchain.text_splitter import RecursiveCharacterTextSplitter
-from langchain.tools import BaseTool
-from langchain.tools.file_management.read import ReadFileTool
-from langchain.tools.file_management.write import WriteFileTool
-from pydantic import Field
-from swarms.utils.logger import logger

@contextmanager
def pushd(new_dir):
@@ -34,6 +38,7 @@ def pushd(new_dir):
    finally:
        os.chdir(prev_dir)
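The `pushd` context manager above only shows its `finally` clause in the hunk; a complete, runnable version of the standard pattern it follows (save the cwd, `chdir` in, always restore on exit):

```python
import os
import tempfile
from contextlib import contextmanager

@contextmanager
def pushd(new_dir: str):
    """Temporarily chdir into new_dir, restoring the previous cwd on exit."""
    prev_dir = os.getcwd()
    os.chdir(new_dir)
    try:
        yield
    finally:
        # Restore the original working directory even if the body raises.
        os.chdir(prev_dir)

before = os.getcwd()
with pushd(tempfile.mkdtemp()):
    pass  # work happens inside the temporary directory
assert os.getcwd() == before  # cwd is restored afterwards
```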
@tool
def process_csv(
    llm, csv_file_path: str, instructions: str, output_path: Optional[str] = None
@@ -84,10 +89,12 @@ async def async_load_playwright(url: str) -> str:
        await browser.close()
    return results

def run_async(coro):
    event_loop = asyncio.get_event_loop()
    return event_loop.run_until_complete(coro)

@tool
def browse_web_page(url: str) -> str:
    """Verbose way to scrape a whole webpage. Likely to cause issues parsing."""
@@ -126,8 +133,6 @@ class WebpageQATool(BaseTool):
    async def _arun(self, url: str, question: str) -> str:
        raise NotImplementedError

-import interpreter

@tool
def compile(task: str):
@@ -153,16 +158,7 @@ def compile(task: str):
    os.environ["INTERPRETER_CLI_DEBUG"] = "True"

# mm model workers
-import torch
-from PIL import Image
-from transformers import (
-    BlipForQuestionAnswering,
-    BlipProcessor,
-)

@tool
@@ -195,5 +191,3 @@ def VQAinference(self, inputs):
    )
    return answer

@@ -10,6 +10,7 @@ from langchain.llms.base import BaseLLM
from langchain.agents.agent import AgentExecutor
from langchain.agents import load_tools

class ToolScope(Enum):
    GLOBAL = "global"
    SESSION = "session"

@@ -3,6 +3,7 @@ from swarms.tools.base import Tool, ToolException
from typing import Any, List
from codeinterpreterapi import CodeInterpreterSession, File, ToolException

class CodeInterpreter(Tool):
    def __init__(self, name: str, description: str):
        super().__init__(name, description, self.run)
@@ -51,6 +52,7 @@ class CodeInterpreter(Tool):
    # terminate the session
    await session.astop()
    """

tool = CodeInterpreter("Code Interpreter", "A tool to interpret code and generate useful outputs.")

@@ -42,7 +42,6 @@ def verify(func):
    return wrapper

class SyscallTimeoutException(Exception):
    def __init__(self, pid: int, *args) -> None:
        super().__init__(f"deadline exceeded while waiting syscall for {pid}", *args)
@@ -132,8 +131,6 @@ class SyscallTracer:
        return exitcode, reason

class StdoutTracer:
    def __init__(
        self,
@@ -196,7 +193,6 @@ class StdoutTracer:
        return (exitcode, output)

class Terminal(BaseToolSet):
    def __init__(self):
        self.sessions: Dict[str, List[SyscallTracer]] = {}
@@ -242,7 +238,6 @@ class Terminal(BaseToolSet):

#############

@tool(
    name="Terminal",
    description="Executes commands in a terminal."
@@ -281,8 +276,6 @@ def terminal_execute(self, commands: str, get_session: SessionGetter) -> str:
    return output

"""
write protocol:
@@ -291,7 +284,6 @@ write protocol:
"""

class WriteCommand:
    separator = "\n"
@@ -329,15 +321,13 @@ class CodeWriter:
        return WriteCommand.from_str(command).with_mode("a").execute()

"""
read protocol:
<filepath>|<start line>-<end line>
"""

class Line:
    def __init__(self, content: str, line_number: int, depth: int):
        self.__content: str = content
@@ -500,10 +490,6 @@ class CodeReader:
        return SummaryCommand.from_str(command).execute()

"""
patch protocol:
@@ -563,7 +549,6 @@ test.py|11,16|11,16|_titles
"""

class Position:
    separator = ","
@@ -664,11 +649,6 @@ class CodePatcher:
        return written, deleted

class CodeEditor(BaseToolSet):
    @tool(
        name="CodeEditor.READ",
@@ -825,6 +805,7 @@ def code_editor_read(self, inputs: str) -> str:
    )
    return output

@tool(
    name="CodeEditor.SUMMARY",
    description="Summary code. "
@@ -845,6 +826,7 @@ def code_editor_summary(self, inputs: str) -> str:
    )
    return output

@tool(
    name="CodeEditor.APPEND",
    description="Append code to the existing file. "
@@ -867,6 +849,7 @@ def code_editor_append(self, inputs: str) -> str:
    )
    return output

@tool(
    name="CodeEditor.WRITE",
    description="Write code to create a new tool. "
@@ -890,6 +873,7 @@ def code_editor_write(self, inputs: str) -> str:
    )
    return output

@tool(
    name="CodeEditor.PATCH",
    description="Patch the code to correct the error if an error occurs or to improve it. "
@@ -920,6 +904,7 @@ def code_editor_patch(self, patches: str) -> str:
    )
    return output

@tool(
    name="CodeEditor.DELETE",
    description="Delete code in file for a new start. "

@@ -20,6 +20,3 @@ class ExitConversation(BaseToolSet):
        logger.debug("\nProcessed ExitConversation.")
        return message

@@ -223,7 +223,6 @@ class VisualQuestionAnswering(BaseToolSet):
        return answer

class ImageCaptioning(BaseHandler):
    def __init__(self, device):
        print("Initializing ImageCaptioning to %s" % device)
@@ -256,8 +255,3 @@ class ImageCaptioning(BaseHandler):
        )
        return IMAGE_PROMPT.format(filename=filename, description=description)

@@ -35,4 +35,3 @@ class RequestsGet(BaseToolSet):
        )
        return content

@@ -38,7 +38,6 @@ class SpeechToText:
        subprocess.run(["pip", "install", "pytube"])
        subprocess.run(["pip", "install", "pydub"])

    def download_youtube_video(self):
        audio_file = f'video.{self.audio_format}'
@@ -121,5 +120,3 @@ class SpeechToText:
        return transcription
    except KeyError:
        print("The key 'segments' is not found in the result.")

@@ -13,6 +13,7 @@ def log_decorator(func):
        return result
    return wrapper

def error_decorator(func):
    def wrapper(*args, **kwargs):
        try:
@@ -22,6 +23,7 @@ def error_decorator(func):
            raise
    return wrapper

def timing_decorator(func):
    def wrapper(*args, **kwargs):
        start_time = time.time()
@@ -31,6 +33,7 @@ def timing_decorator(func):
        return result
    return wrapper

def retry_decorator(max_retries=5):
    def decorator(func):
        @functools.wraps(func)
@@ -44,16 +47,20 @@ def retry_decorator(max_retries=5):
        return wrapper
    return decorator

def singleton_decorator(cls):
    instances = {}
    def wrapper(*args, **kwargs):
        if cls not in instances:
            instances[cls] = cls(*args, **kwargs)
        return instances[cls]
    return wrapper

def synchronized_decorator(func):
    func.__lock__ = threading.Lock()
    def wrapper(*args, **kwargs):
        with func.__lock__:
            return func(*args, **kwargs)
@@ -67,6 +74,7 @@ def deprecated_decorator(func):
        return func(*args, **kwargs)
    return wrapper

def validate_inputs_decorator(validator):
    def decorator(func):
        @functools.wraps(func)
@@ -76,4 +84,3 @@ def validate_inputs_decorator(validator):
            return func(*args, **kwargs)
        return wrapper
    return decorator
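The `retry_decorator` hunk above elides its loop body. A minimal, self-contained sketch of how such a decorator is typically written; the retry loop shown here is an assumption about intent, not the repository's exact body:

```python
import functools

def retry_decorator(max_retries: int = 5):
    """Retry the wrapped function up to max_retries times before giving up."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            last_error = None
            for _ in range(max_retries):
                try:
                    return func(*args, **kwargs)
                except Exception as error:
                    last_error = error
            # All attempts failed: surface the last exception.
            raise last_error
        return wrapper
    return decorator

calls = {"n": 0}

@retry_decorator(max_retries=3)
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ValueError("transient")
    return "ok"

print(flaky())      # ok
print(calls["n"])   # 3
```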

@@ -1,3 +1,12 @@
+import pandas as pd
+from swarms.models.prompts.prebuild.multi_modal_prompts import DATAFRAME_PROMPT
+import requests
+from typing import Dict
+from enum import Enum
+from pathlib import Path
+import shutil
+import boto3
+from abc import ABC, abstractmethod, abstractstaticmethod
import os
import random
import uuid
@@ -13,7 +22,7 @@ def seed_everything(seed):
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
-    except:
+    except BaseException:
        pass
    return seed
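The `seed_everything` hunk above only shows its torch-seeding tail; a complete, runnable sketch of the usual shape of such a helper, with the torch/numpy parts made optional (the exact body here is an assumption, not the repository's code):

```python
import os
import random

def seed_everything(seed: int) -> int:
    """Seed Python's RNG sources; numpy/torch seeding is attempted only if available."""
    random.seed(seed)
    os.environ["PYTHONHASHSEED"] = str(seed)  # env values must be strings
    try:
        import numpy as np
        np.random.seed(seed)
    except ImportError:
        pass
    try:
        import torch
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
    except ImportError:
        pass
    return seed

seed_everything(42)
a = random.random()
seed_everything(42)
assert random.random() == a  # same seed, same stream
```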
@@ -75,16 +84,10 @@ def get_new_dataframe_name(org_img_name, func_name="update"):
        this_new_uuid, func_name, recent_prev_file_name, most_org_file_name
    )
    return os.path.join(head, new_file_name)

-#########=======================> utils end
+# =======================> utils end

-#########=======================> ANSI BEGINNING
+# =======================> ANSI BEGINNING

class Code:
@@ -205,9 +208,6 @@ def dim_multiline(message: str) -> str:

# ================================> upload base
-from abc import ABC, abstractmethod, abstractstaticmethod

STATIC_DIR = "static"
@@ -227,9 +227,6 @@ class AbstractUploader(ABC):

# ========================= upload s3
-import boto3

class S3Uploader(AbstractUploader):
    def __init__(self, accessKey: str, secretKey: str, region: str, bucket: str):
        self.accessKey = accessKey
@@ -261,9 +258,8 @@ class S3Uploader(AbstractUploader):

# ========================= upload s3

# ========================> upload/static
-import shutil
-from pathlib import Path

class StaticUploader(AbstractUploader):
@@ -277,8 +273,6 @@ class StaticUploader(AbstractUploader):
        server = os.environ.get("SERVER", "http://localhost:8000")
        return StaticUploader(server, path, endpoint)

    def get_url(self, uploaded_path: str) -> str:
        return f"{self.server}/{uploaded_path}"
@@ -291,14 +285,8 @@ class StaticUploader(AbstractUploader):
        return f"{self.server}/{endpoint_path}"

# ========================> handlers/base
-import uuid
-from enum import Enum
-from typing import Dict
-import requests

# from env import settings
@@ -391,18 +379,12 @@ class FileHandler:
            return handler.handle(local_filename)
        except Exception as e:
            raise e

-########################### => base end
+# => base end

-#############===========================>
-from swarms.models.prompts.prebuild.multi_modal_prompts import DATAFRAME_PROMPT
-import pandas as pd

class CsvToDataframe(BaseHandler):
    def handle(self, filename: str):
        df = pd.read_csv(filename)
@@ -417,7 +399,3 @@ class CsvToDataframe(BaseHandler):
    )
    return DATAFRAME_PROMPT.format(filename=filename, description=description)
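`CsvToDataframe.handle` above loads a CSV, builds a short description, and interpolates it into `DATAFRAME_PROMPT`. A dependency-free sketch of the same idea using the stdlib `csv` module; the `DATAFRAME_PROMPT` string and `describe_csv` helper below are hypothetical stand-ins, not the library's templates:

```python
import csv
import io

# Hypothetical stand-in for the real DATAFRAME_PROMPT template.
DATAFRAME_PROMPT = "{filename}: {description}"

def describe_csv(text: str, filename: str = "data.csv") -> str:
    """Summarize CSV text (header + row count) for a prompt, like CsvToDataframe.handle."""
    rows = list(csv.reader(io.StringIO(text)))
    header, body = rows[0], rows[1:]
    description = f"{len(body)} rows, columns: {', '.join(header)}"
    return DATAFRAME_PROMPT.format(filename=filename, description=description)

print(describe_csv("a,b\n1,2\n3,4"))  # data.csv: 2 rows, columns: a, b
```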

@@ -6,6 +6,7 @@ from pathlib import Path
from swarms.utils.main import AbstractUploader

class StaticUploader(AbstractUploader):
    def __init__(self, server: str, path: Path, endpoint: str):
        self.server = server

@@ -1 +1,2 @@
from swarms.workers.worker import Worker
from swarms.workers.base import AbstractWorker

@@ -0,0 +1,96 @@
from typing import Dict, List, Optional, Union
class AbstractWorker:
"""(In preview) An abstract class for AI worker.
An worker can communicate with other workers and perform actions.
Different workers can differ in what actions they perform in the `receive` method.
"""
def __init__(
self,
name: str,
):
"""
Args:
name (str): name of the worker.
"""
# a dictionary of conversations, default value is list
self._name = name
@property
def name(self):
"""Get the name of the worker."""
return self._name
def run(
self,
task: str
):
"""Run the worker agent once"""
def send(
self,
message: Union[Dict, str],
recipient, # add AbstractWorker
request_reply: Optional[bool] = None
):
"""(Abstract method) Send a message to another worker."""
async def a_send(
self,
message: Union[Dict, str],
recipient, # add AbstractWorker
request_reply: Optional[bool] = None
):
"""(Aabstract async method) Send a message to another worker."""
def receive(
self,
message: Union[Dict, str],
sender, # add AbstractWorker
request_reply: Optional[bool] = None
):
"""(Abstract method) Receive a message from another worker."""
async def a_receive(
self,
message: Union[Dict, str],
sender, # add AbstractWorker
request_reply: Optional[bool] = None
):
"""(Abstract async method) Receive a message from another worker."""
def reset(self):
"""(Abstract method) Reset the worker."""
def generate_reply(
self,
messages: Optional[List[Dict]] = None,
sender=None, # Optional["AbstractWorker"] = None,
**kwargs,
) -> Union[str, Dict, None]:
"""(Abstract method) Generate a reply based on the received messages.
Args:
messages (list[dict]): a list of messages received.
sender: sender of an Agent instance.
Returns:
str or dict or None: the generated reply. If None, no reply is generated.
"""
async def a_generate_reply(
self,
messages: Optional[List[Dict]] = None,
sender=None, # Optional["AbstractWorker"] = None,
**kwargs,
) -> Union[str, Dict, None]:
"""(Abstract async method) Generate a reply based on the received messages.
Args:
messages (list[dict]): a list of messages received.
sender: sender of an Agent instance.
Returns:
str or dict or None: the generated reply. If None, no reply is generated.
"""

@@ -5,7 +5,7 @@ from langchain.embeddings import OpenAIEmbeddings
from langchain.tools.human.tool import HumanInputRun
from langchain.vectorstores import FAISS
from langchain_experimental.autonomous_agents import AutoGPT
from typing import Dict, List, Optional, Union
from swarms.agents.message import Message
from swarms.tools.autogpt import (
    ReadFileTool,
@@ -21,6 +21,8 @@ from swarms.utils.decorators import error_decorator, log_decorator, timing_decor
ROOT_DIR = "./data/"

# main
class Worker:
    """
    Useful for when you need to spawn an autonomous agent instance as a worker to accomplish complex tasks,
@@ -54,6 +56,7 @@ class Worker:
    llm + tools + memory
    """
    def __init__(
        self,
        ai_name: str = "Autobot Swarm Worker",
@@ -139,7 +142,6 @@ class Worker:
        if external_tools is not None:
            self.tools.extend(external_tools)

    def setup_memory(self):
        """
        Set up memory for the worker.
@@ -158,7 +160,6 @@ class Worker:
        except Exception as error:
            raise RuntimeError(f"Error setting up memory; perhaps try tuning the embedding size: {error}")

    def setup_agent(self):
        """
        Set up the autonomous agent.
@@ -303,3 +304,11 @@ class Worker:
        """
        for token in response.split():
            yield token
@staticmethod
def _message_to_dict(message: Union[Dict, str]):
"""Convert a message"""
if isinstance(message, str):
return {"content": message}
else:
return message
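The `_message_to_dict` helper added above normalizes a message to dict form. A standalone version with a quick check of both branches (the free-function name `message_to_dict` is just for this sketch):

```python
from typing import Dict, Union

def message_to_dict(message: Union[Dict, str]) -> Dict:
    """Normalize a message to dict form, as in Worker._message_to_dict above."""
    if isinstance(message, str):
        return {"content": message}
    # Already a dict: pass through unchanged.
    return message

print(message_to_dict("hi"))  # {'content': 'hi'}
print(message_to_dict({"content": "x", "role": "user"}))
```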
