Merge branch 'kyegomez:master' into master

pull/425/head
evelynmitchell 1 year ago committed by GitHub
commit 7cfb845a5d

@@ -33,6 +33,7 @@ IFTTTKey="your_iftttkey_here"
BRAVE_API_KEY="your_brave_api_key_here"
SPOONACULAR_KEY="your_spoonacular_key_here"
HF_API_KEY="your_huggingface_api_key_here"
USE_TELEMETRY=True
REDIS_HOST=

.gitignore

@@ -21,6 +21,7 @@ Cargo.lock
Cargo.lock
swarms/agents/.DS_Store
logs
_build
conversation.txt
stderr_log.txt

@@ -12,6 +12,7 @@ Orchestrate swarms of agents for production-grade applications.
</div>
Individual agents are barely being deployed into production because of five suffocating challenges: short memory, single-task threading, hallucinations, high cost, and a lack of collaboration. With multi-agent collaboration, you can effectively eliminate all of these issues. Swarms provides you with simple, reliable, and agile primitives to build your own swarm for your specific use case. Swarms is now used in production by RBC, John Deere, and many AI startups. To learn more about the unparalleled benefits of multi-agent collaboration, check out this GitHub repository for research papers, or book a call with me!
----
@@ -21,7 +22,7 @@ Orchestrate swarms of agents for production-grade applications.
---
## Usage
With Swarms, you can create structures such as Agents, Swarms, and Workflows that are composed of different types of tasks. Let's build a simple creative agent that will dynamically create a 10,000-word blog on health and wellness.
Run the example in Colab: <a target="_blank" href="https://colab.research.google.com/github/kyegomez/swarms/blob/master/playground/swarms_example.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
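The full agent example is elided from this diff; here is a minimal sketch of the pattern it refers to, assuming `OPENAI_API_KEY` is set in the environment (the `max_loops=1` choice is illustrative):

```python
from swarms import Agent, OpenAIChat

# Initialize the LLM (reads OPENAI_API_KEY from the environment)
llm = OpenAIChat()

# Initialize the agent with a single reasoning loop
agent = Agent(llm=llm, max_loops=1)

# Run the agent on the task from this section
agent.run("Generate a 10,000 word blog on health and wellness.")
```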
@@ -67,41 +68,60 @@ agent.run("Generate a 10,000 word blog on health and wellness.")
### `ToolAgent`
ToolAgent is an agent that outputs JSON using any model from huggingface. It takes an example schema and performs a task, outputting JSON. It is versatile, easy to use, and customizable.
ToolAgent is an agent that can use tools through JSON function calling. It takes in any open-source model from Hugging Face and is extremely modular and plug-and-play. We need help adding general support for all models soon.
```python
# Import necessary libraries
from pydantic import BaseModel, Field
from transformers import AutoModelForCausalLM, AutoTokenizer
from swarms import ToolAgent
from swarms.utils.json_utils import base_model_to_json
# Load the pre-trained model and tokenizer
model = AutoModelForCausalLM.from_pretrained("databricks/dolly-v2-12b")
model = AutoModelForCausalLM.from_pretrained(
"databricks/dolly-v2-12b",
load_in_4bit=True,
device_map="auto",
)
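# Note: load_in_4bit quantizes the 12B model so it fits in far less GPU
# memory; it requires the `bitsandbytes` package and a CUDA GPU (an
# assumption about your environment, not something this snippet checks).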
tokenizer = AutoTokenizer.from_pretrained("databricks/dolly-v2-12b")
# Define a JSON schema for person's information
json_schema = {
"type": "object",
"properties": {
"name": {"type": "string"},
"age": {"type": "number"},
"is_student": {"type": "boolean"},
"courses": {"type": "array", "items": {"type": "string"}},
},
}
# Initialize the schema for the person's information
class Schema(BaseModel):
name: str = Field(..., title="Name of the person")
age: int = Field(..., title="Age of the person")
is_student: bool = Field(
..., title="Whether the person is a student"
)
courses: list[str] = Field(
..., title="List of courses the person is taking"
)
# Convert the schema to a JSON string
tool_schema = base_model_to_json(Schema)
# Define the task to generate a person's information
task = "Generate a person's information based on the following schema:"
task = (
"Generate a person's information based on the following schema:"
)
# Create an instance of the ToolAgent class
agent = ToolAgent(model=model, tokenizer=tokenizer, json_schema=json_schema)
agent = ToolAgent(
name="dolly-function-agent",
description="Ana gent to create a child data",
model=model,
tokenizer=tokenizer,
json_schema=tool_schema,
)
# Run the agent to generate the person's information
generated_data = agent.run(task)
# Print the generated data
print(generated_data)
print(f"Generated data: {generated_data}")
```

@@ -0,0 +1,70 @@
# Swarm Ecosystem
Welcome to the Swarm Ecosystem, a comprehensive suite of tools and frameworks designed to empower developers to orchestrate swarms of autonomous agents for a variety of applications. Dive into our ecosystem below:
| Project | Description | Link |
| ------- | ----------- | ---- |
| **Swarms Framework** | A Python-based framework that enables the creation, deployment, and scaling of reliable swarms of autonomous agents aimed at automating complex workflows. | [Swarms Framework](https://github.com/kyegomez/swarms) |
| **Swarms Cloud** | A cloud-based service offering Swarms-as-a-Service with guaranteed 100% uptime, cutting-edge performance, and enterprise-grade reliability for seamless scaling and management of swarms. | [Swarms Cloud](https://github.com/kyegomez/swarms-cloud) |
| **Swarms Core** | Provides backend utilities focusing on concurrency, multi-threading, and advanced execution strategies, developed in Rust for maximum efficiency and performance. | [Swarms Core](https://github.com/kyegomez/swarms-core) |
| **Swarm Foundation Models** | A dedicated repository for the creation, optimization, and training of groundbreaking swarming models. Features innovative models like PSO with transformers, ant colony optimizations, and more, aiming to surpass traditional architectures like Transformers and SSMs. Open for community contributions and ideas. | [Swarm Foundation Models](https://github.com/kyegomez/swarms-pytorch) |
| **Swarm Platform** | The Swarms dashboard platform. | [Swarm Platform](https://swarms.world/) |
| **Swarms JS** | The Swarms framework in JavaScript. Orchestrate agents and enable multi-agent collaboration between them! | [Swarm JS](https://github.com/kyegomez/swarms-js) |
----
## 🫶 Contributions:
The easiest way to contribute is to pick any issue with the `good first issue` tag 💪. Read the Contributing guidelines [here](/CONTRIBUTING.md). Bug Report? [File here](https://github.com/swarms/gateway/issues) | Feature Request? [File here](https://github.com/swarms/gateway/issues)
Swarms is an open-source project, and contributions are VERY welcome. If you want to contribute, you can create new features, fix bugs, or improve the infrastructure. Please refer to the [CONTRIBUTING.md](https://github.com/kyegomez/swarms/blob/master/CONTRIBUTING.md) and our [contributing board](https://github.com/users/kyegomez/projects/1) to participate in Roadmap discussions!
<a href="https://github.com/kyegomez/swarms/graphs/contributors">
<img src="https://contrib.rocks/image?repo=kyegomez/swarms" />
</a>
<a href="https://github.com/kyegomez/swarms/graphs/contributors">
<img src="https://contrib.rocks/image?repo=kyegomez/swarms-cloud" />
</a>
<a href="https://github.com/kyegomez/swarms/graphs/contributors">
<img src="https://contrib.rocks/image?repo=kyegomez/swarms-platform" />
</a>
<a href="https://github.com/kyegomez/swarms/graphs/contributors">
<img src="https://contrib.rocks/image?repo=kyegomez/swarms-js" />
</a>
----
## Community
Join our growing community around the world for real-time support, ideas, and discussions on Swarms 😊
- View our official [Blog](https://swarms.apac.ai)
- Chat live with us on [Discord](https://discord.gg/kS3rwKs3ZC)
- Follow us on [Twitter](https://twitter.com/kyegomez)
- Connect with us on [LinkedIn](https://www.linkedin.com/company/the-swarm-corporation)
- Visit us on [YouTube](https://www.youtube.com/channel/UC9yXyitkbU_WSy7bd_41SqQ)
- [Join the Swarms community on Discord!](https://discord.gg/AJazBmhKnr)
- Join our Swarms Community Gathering every Thursday at 1pm NYC time to unlock the potential of autonomous agents in automating your daily tasks. [Sign up here](https://lu.ma/5p2jnc2v)
---
## Discovery Call
Book a discovery call to learn how Swarms can lower your operating costs by 40% with swarms of autonomous agents at light speed. [Click here to book a time that works for you!](https://calendly.com/swarm-corp/30min?month=2023-11)
## Accelerate Backlog
Help us accelerate our backlog by supporting us financially! Note: we're an open-source corporation, so at the moment all of our revenue comes from donations ;)
<a href="https://polar.sh/kyegomez"><img src="https://polar.sh/embed/fund-our-backlog.svg?org=kyegomez" /></a>
---

@@ -1,15 +1,20 @@
from swarms import Agent, OpenAIChat
from swarms import Agent, Anthropic
## Initialize the workflow
agent = Agent(
llm=OpenAIChat(),
agent_name="Transcript Generator",
agent_description=(
"Generate a transcript for a youtube video on what swarms"
" are!"
),
llm=Anthropic(),
max_loops="auto",
autosave=True,
dashboard=False,
streaming_on=True,
verbose=True,
stopping_token="<DONE>",
interactive=True,
)
# Run the workflow on a task

@@ -0,0 +1,34 @@
from swarms import Agent, Anthropic, tool
# Tool
@tool # Wrap the function with the tool decorator
def search_api(query: str, max_results: int = 10):
"""
Search the web for the query and return the top `max_results` results.
"""
return f"Search API: {query} -> {max_results} results"
## Initialize the workflow
agent = Agent(
agent_name="Youtube Transcript Generator",
agent_description=(
"Generate a transcript for a youtube video on what swarms"
" are!"
),
llm=Anthropic(),
max_loops="auto",
autosave=True,
dashboard=False,
streaming_on=True,
verbose=True,
stopping_token="<DONE>",
tools=[search_api],
)
# Run the workflow on a task
agent(
"Generate a transcript for a youtube video on what swarms are!"
" Output a <DONE> token when done."
)

@@ -1,9 +1,8 @@
# Import necessary libraries
from pydantic import BaseModel, Field
from transformers import AutoModelForCausalLM, AutoTokenizer
from pydantic import BaseModel
# from swarms import ToolAgent
from swarms.utils.json_utils import base_model_schema_to_json
from swarms import ToolAgent
from swarms.utils.json_utils import base_model_to_json
# Load the pre-trained model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
@@ -14,32 +13,37 @@ model = AutoModelForCausalLM.from_pretrained(
tokenizer = AutoTokenizer.from_pretrained("databricks/dolly-v2-12b")
# Initialize the schema for the person's information
class Schema(BaseModel):
name: str
age: int
is_student: bool
courses: list[str]
json_schema = str(base_model_schema_to_json(Schema))
print(json_schema)
# # Define the task to generate a person's information
# task = (
# "Generate a person's information based on the following schema:"
# )
# # Create an instance of the ToolAgent class
# agent = ToolAgent(
# name="dolly-function-agent",
# description="Ana gent to create a child data",
# model=model,
# tokenizer=tokenizer,
# json_schema=json_schema,
# )
# # Run the agent to generate the person's information
# generated_data = agent.run(task)
# # Print the generated data
# print(f"Generated data: {generated_data}")
name: str = Field(..., title="Name of the person")
age: int = Field(..., title="Age of the person")
is_student: bool = Field(
..., title="Whether the person is a student"
)
courses: list[str] = Field(
..., title="List of courses the person is taking"
)
# Convert the schema to a JSON string
tool_schema = base_model_to_json(Schema)
# Define the task to generate a person's information
task = (
"Generate a person's information based on the following schema:"
)
# Create an instance of the ToolAgent class
agent = ToolAgent(
name="dolly-function-agent",
description="Ana gent to create a child data",
model=model,
tokenizer=tokenizer,
json_schema=tool_schema,
)
# Run the agent to generate the person's information
generated_data = agent.run(task)
# Print the generated data
print(f"Generated data: {generated_data}")

@@ -1,46 +0,0 @@
import pytest
def test_create_youtube_account():
# Arrange
# Act
# Assert
def test_install_video_editing_software():
# Arrange
# Act
# Assert
def test_write_script():
# Arrange
# Act
# Assert
def test_gather_footage():
# Arrange
# Act
# Assert
def test_edit_video():
# Arrange
# Act
# Assert
def test_export_video():
# Arrange
# Act
# Assert
def test_upload_video_to_youtube():
# Arrange
# Act
# Assert
def test_optimize_video_for_search():
# Arrange
# Act
# Assert
def test_share_video():
# Arrange
# Act
# Assert

@@ -29,17 +29,17 @@ class vLLMLM(AbstractLLM):
model_name: str = "acebook/opt-13b",
tensor_parallel_size: int = 4,
*args,
**kwargs
**kwargs,
):
super().__init__(*args, **kwargs)
self.model_name = model_name
self.tensor_parallel_size = tensor_parallel_size
self.llm = LLM(
model_name=self.model_name,
tensor_parallel_size=self.tensor_parallel_size,
)
def run(self, task: str, *args, **kwargs):
"""
Runs the LLM model to generate output for the given task.
@@ -54,8 +54,8 @@ class vLLMLM(AbstractLLM):
"""
return self.llm.generate(task)
# Initializing the agent with the vLLMLM instance and other parameters
model = vLLMLM(
"facebook/opt-13b",
@@ -86,4 +86,4 @@ agent = Agent(
docs_folder="docs",
),
stopping_condition="finish",
)
)

@@ -0,0 +1,26 @@
"""
Boss selects what agent to use
B -> W1, W2, W3
"""
from typing import List, Optional
from pydantic import BaseModel, Field
from swarms.utils.json_utils import str_to_json
class HierarchicalSwarm(BaseModel):
class Config:
arbitrary_types_allowed = True
agents: Optional[List[str]] = Field(
None, title="List of agents in the hierarchical swarm"
)
task: Optional[str] = Field(
None, title="Task to be done by the agents"
)
all_agents = HierarchicalSwarm()
agents_schema = HierarchicalSwarm.model_json_schema()
agents_schema = str_to_json(agents_schema)
print(agents_schema)
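For orientation, a sketch of instantiating the model this schema describes; the agent names and task below are invented, and `model_dump_json` is the standard pydantic v2 serializer:

```python
# Hypothetical instance of the schema defined above
swarm = HierarchicalSwarm(
    agents=["W1", "W2", "W3"],
    task="Summarize the quarterly report",
)

# Serialize the instance with the standard pydantic v2 API
print(swarm.model_dump_json(indent=2))
```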

@@ -5,7 +5,7 @@ build-backend = "poetry.core.masonry.api"
[tool.poetry]
name = "swarms"
version = "4.3.0"
version = "4.3.3"
description = "Swarms - Pytorch"
license = "MIT"
authors = ["Kye Gomez <kye@apac.ai>"]
@@ -24,49 +24,41 @@ classifiers = [
[tool.poetry.dependencies]
python = "^3.9"
torch = "2.1.1"
transformers = "4.37.1"
openai = "0.28.0"
langchain = "0.0.333"
asyncio = "3.4.3"
python = ">=3.9,<4.0"
torch = ">=2.1.1,<3.0"
transformers = "*"
asyncio = ">=3.4.3,<4.0"
einops = "0.7.0"
google-generativeai = "0.3.1"
langchain-experimental = "0.0.10"
opencv-python-headless = "4.8.1.78"
langchain-community = "0.0.29"
faiss-cpu = "1.7.4"
backoff = "2.2.1"
datasets = "*"
optimum = "1.15.0"
diffusers = "*"
langchain = "0.1.7"
toml = "*"
pypdf = "4.0.1"
accelerate = "*"
anthropic = "*"
sentencepiece = "0.1.98"
httpx = "0.24.1"
tiktoken = "0.4.0"
ratelimit = "2.2.1"
loguru = "0.7.2"
huggingface-hub = "*"
pydantic = "*"
pydantic = "2.6.4"
tenacity = "8.2.2"
Pillow = "9.4.0"
chromadb = "*"
chromadb = "0.4.24"
termcolor = "2.2.0"
torchvision = "0.16.1"
rich = "13.5.2"
sqlalchemy = "*"
bitsandbytes = "*"
pgvector = "*"
cohere = "*"
sentence-transformers = "*"
peft = "*"
psutil = "*"
ultralytics = "*"
timm = "*"
supervision = "*"
roboflow = "*"
[tool.poetry.dev-dependencies]
black = "23.3.0"
@@ -81,8 +73,6 @@ types-chardet = "^5.0.4.6"
mypy-protobuf = "^3.0.0"
[tool.ruff]
line-length = 70
# Enable Pyflakes (`F`) and a subset of the pycodestyle (`E`) codes by default.

@@ -1,24 +1,19 @@
torch==2.1.1
transformers
pandas==2.2.1
langchain==0.0.333
langchain-experimental==0.0.10
pandas
langchain==0.1.7
langchain-experimental
httpx==0.24.1
Pillow==9.4.0
faiss-cpu==1.7.4
openai==0.28.0
datasets==2.14.5
pydantic==1.10.12
bitsandbytes
pydantic==2.6.4
huggingface-hub
google-generativeai==0.3.1
sentencepiece==0.1.98
requests_mock
pypdf==4.0.1
accelerate==0.22.0
loguru==0.7.2
chromadb
optimum
diffusers
toml
tiktoken==0.4.0
colored
@@ -26,25 +21,13 @@ addict
backoff==2.2.1
ratelimit==2.2.1
termcolor==2.2.0
diffusers
einops==0.7.0
opencv-python-headless==4.8.1.78
numpy
openai==0.28.0
opencv-python==4.9.0.80
langchain-community
timm
cohere==4.53
torchvision==0.16.1
rich==13.5.2
mkdocs
mkdocs-material
mkdocs-glightbox
pre-commit==3.6.2
peft
psutil
ultralytics
supervision
anthropic
pinecone-client
roboflow
black

@@ -64,10 +64,3 @@ out = swarmnet.run_single_agent(
agent2.id, "Generate a 10,000 word blog on health and wellness."
)
print(out)
# # Run all the agents in the swarm network on a task
# out = swarmnet.run_many_agents(
# f"Summarize the blog and create a social media post: {out}"
# )
# print(out)

@@ -1,16 +1,18 @@
# from swarms.telemetry.main import Telemetry # noqa: E402, F403
from swarms.telemetry.bootup import bootup # noqa: E402, F403
import os
from swarms.telemetry.bootup import bootup # noqa: E402, F403
from swarms.telemetry.sentry_active import activate_sentry
os.environ["WANDB_SILENT"] = "true"
bootup()
activate_sentry()
from swarms.agents import * # noqa: E402, F403
from swarms.artifacts import * # noqa: E402, F403
from swarms.chunkers import * # noqa: E402, F403
from swarms.loaders import * # noqa: E402, F403
from swarms.memory import * # noqa: E402, F403
from swarms.models import * # noqa: E402, F403
from swarms.prompts import * # noqa: E402, F403
from swarms.structs import * # noqa: E402, F403
@@ -18,4 +20,3 @@ from swarms.telemetry import * # noqa: E402, F403
from swarms.tokenizers import * # noqa: E402, F403
from swarms.tools import * # noqa: E402, F403
from swarms.utils import * # noqa: E402, F403
from swarms.memory import * # noqa: E402, F403

@@ -1,4 +1,5 @@
from typing import Dict, List
from abc import abstractmethod
from typing import Dict, List, Union, Optional
class AbstractAgent:
@@ -36,7 +37,8 @@ class AbstractAgent:
def reset(self):
"""(Abstract method) Reset the agent."""
def run(self, task: str):
@abstractmethod
def run(self, task: str, *args, **kwargs):
"""Run the agent once"""
def _arun(self, task: str):
@@ -53,3 +55,65 @@ class AbstractAgent:
def _astep(self, message: str):
"""Asynchronous step"""
def send(
self,
message: Union[Dict, str],
recipient, # add AbstractWorker
request_reply: Optional[bool] = None,
):
"""(Abstract method) Send a message to another worker."""
async def a_send(
self,
message: Union[Dict, str],
recipient, # add AbstractWorker
request_reply: Optional[bool] = None,
):
"""(Aabstract async method) Send a message to another worker."""
def receive(
self,
message: Union[Dict, str],
sender, # add AbstractWorker
request_reply: Optional[bool] = None,
):
"""(Abstract method) Receive a message from another worker."""
async def a_receive(
self,
message: Union[Dict, str],
sender, # add AbstractWorker
request_reply: Optional[bool] = None,
):
"""(Abstract async method) Receive a message from another worker."""
def generate_reply(
self,
messages: Optional[List[Dict]] = None,
sender=None, # Optional["AbstractWorker"] = None,
**kwargs,
) -> Union[str, Dict, None]:
"""(Abstract method) Generate a reply based on the received messages.
Args:
messages (list[dict]): a list of messages received.
sender: sender of an Agent instance.
Returns:
str or dict or None: the generated reply. If None, no reply is generated.
"""
async def a_generate_reply(
self,
messages: Optional[List[Dict]] = None,
sender=None, # Optional["AbstractWorker"] = None,
**kwargs,
) -> Union[str, Dict, None]:
"""(Abstract async method) Generate a reply based on the received messages.
Args:
messages (list[dict]): a list of messages received.
sender: sender of an Agent instance.
Returns:
str or dict or None: the generated reply. If None, no reply is generated.
"""

@@ -1,70 +0,0 @@
import os
import multion
from dotenv import load_dotenv
from swarms.models.base_llm import AbstractLLM
# Load environment variables
load_dotenv()
# Multion key
MULTION_API_KEY = os.getenv("MULTION_API_KEY")
class MultiOnAgent(AbstractLLM):
"""
Represents a multi-on agent that performs browsing tasks.
Args:
max_steps (int): The maximum number of steps to perform during browsing.
starting_url (str): The starting URL for browsing.
Attributes:
max_steps (int): The maximum number of steps to perform during browsing.
starting_url (str): The starting URL for browsing.
"""
def __init__(
self,
multion_api_key: str = MULTION_API_KEY,
max_steps: int = 4,
starting_url: str = "https://www.google.com",
*args,
**kwargs,
):
super().__init__(*args, **kwargs)
self.multion_api_key = multion_api_key
self.max_steps = max_steps
self.starting_url = starting_url
def run(self, task: str, *args, **kwargs):
"""
Runs a browsing task.
Args:
task (str): The task to perform during browsing.
*args: Additional positional arguments.
**kwargs: Additional keyword arguments.
Returns:
dict: The response from the browsing task.
"""
multion.login(
use_api=True,
multion_api_key=str(self.multion_api_key),
*args,
**kwargs,
)
response = multion.browse(
{
"cmd": task,
"url": self.starting_url,
"maxSteps": self.max_steps,
},
*args,
**kwargs,
)
return response.result, response.status, response.lastUrl

@@ -9,11 +9,11 @@ from langchain_experimental.autonomous_agents.hugginggpt.task_planner import (
load_chat_planner,
)
from transformers import load_tool
from swarms.utils.loguru_logger import logger
from swarms.structs.agent import Agent
from swarms.structs.message import Message
class OmniModalAgent:
class OmniModalAgent(Agent):
"""
OmniModalAgent
LLM -> Plans -> Tasks -> Tools -> Response
@@ -42,9 +42,13 @@ class OmniModalAgent:
def __init__(
self,
llm: BaseLanguageModel,
# tools: List[BaseTool]
verbose: bool = False,
*args,
**kwargs,
):
super().__init__(llm=llm, *args, **kwargs)
self.llm = llm
self.verbose = verbose
print("Loading tools...")
self.tools = [
@@ -67,79 +71,29 @@ class OmniModalAgent:
]
]
# Load the chat planner and response generator
self.chat_planner = load_chat_planner(llm)
self.response_generator = load_response_generator(llm)
# self.task_executor = TaskExecutor
self.task_executor = TaskExecutor
self.history = []
def run(self, input: str) -> str:
def run(self, task: str) -> str:
"""Run the OmniAgent"""
plan = self.chat_planner.plan(
inputs={
"input": input,
"hf_tools": self.tools,
}
)
self.task_executor = TaskExecutor(plan)
self.task_executor.run()
response = self.response_generator.generate(
{"task_execution": self.task_executor}
)
return response
def chat(self, msg: str = None, streaming: bool = False):
"""
Run chat
Args:
msg (str, optional): Message to send to the agent. Defaults to None.
language (str, optional): Language to use. Defaults to None.
streaming (bool, optional): Whether to stream the response. Defaults to False.
Returns:
str: Response from the agent
Usage:
--------------
agent = MultiModalAgent()
agent.chat("Hello")
"""
# add users message to the history
self.history.append(Message("User", msg))
# process msg
try:
response = self.agent.run(msg)
# add agent's response to the history
self.history.append(Message("Agent", response))
# if streaming is = True
if streaming:
return self._stream_response(response)
else:
response
plan = self.chat_planner.plan(
inputs={
"input": task,
"hf_tools": self.tools,
}
)
self.task_executor = TaskExecutor(plan)
self.task_executor.run()
response = self.response_generator.generate(
{"task_execution": self.task_executor}
)
return response
except Exception as error:
error_message = f"Error processing message: {str(error)}"
# add error to history
self.history.append(Message("Agent", error_message))
return error_message
def _stream_response(self, response: str = None):
"""
Yield the response token by token (word by word)
Usage:
--------------
for token in _stream_response(response):
print(token)
"""
yield from response.split()
logger.error(f"Error running the agent: {error}")
return f"Error running the agent: {error}"

@@ -1,10 +1,10 @@
from typing import Any
from typing import Any, Optional, Callable
from swarms.models.base_llm import AbstractLLM
from swarms.structs.agent import Agent
from swarms.tools.format_tools import Jsonformer
class ToolAgent(AbstractLLM):
class ToolAgent(Agent):
"""
Represents a tool agent that performs a specific task using a model and tokenizer.
@@ -67,16 +67,23 @@ class ToolAgent(AbstractLLM):
tokenizer: Any = None,
json_schema: Any = None,
max_number_tokens: int = 500,
parsing_function: Optional[Callable] = None,
*args,
**kwargs,
):
super().__init__(*args, **kwargs)
super().__init__(
agent_name=name,
agent_description=description,
sop=f"{name} {description} {str(json_schema)}" * args,
**kwargs,
)
self.name = name
self.description = description
self.model = model
self.tokenizer = tokenizer
self.json_schema = json_schema
self.max_number_tokens = max_number_tokens
self.parsing_function = parsing_function
def run(self, task: str, *args, **kwargs):
"""
@@ -104,7 +111,11 @@ class ToolAgent(AbstractLLM):
**kwargs,
)
out = self.toolagent()
if self.parsing_function:
out = self.parsing_function(self.toolagent())
else:
out = self.toolagent()
return out
except Exception as error:
print(f"[Error] [ToolAgent] {error}")

@@ -4,22 +4,18 @@ from swarms.memory.base_vectordb import AbstractVectorDatabase
from swarms.memory.chroma_db import ChromaDB
from swarms.memory.dict_internal_memory import DictInternalMemory
from swarms.memory.dict_shared_memory import DictSharedMemory
from swarms.memory.lanchain_chroma import LangchainChromaVectorMemory
from swarms.memory.short_term_memory import ShortTermMemory
from swarms.memory.sqlite import SQLiteDB
from swarms.memory.visual_memory import VisualShortTermMemory
from swarms.memory.weaviate_db import WeaviateDB
__all__ = [
"AbstractVectorDatabase",
"AbstractDatabase",
"ShortTermMemory",
"SQLiteDB",
"WeaviateDB",
"VisualShortTermMemory",
"ActionSubtaskEntry",
"ChromaDB",
"DictInternalMemory",
"DictSharedMemory",
"LangchainChromaVectorMemory",
]

@@ -47,8 +47,6 @@ class ChromaDB:
limit_tokens: Optional[int] = 1000,
n_results: int = 2,
embedding_function: Callable = None,
data_loader: Callable = None,
multimodal: bool = False,
docs_folder: str = None,
verbose: bool = False,
*args,
@@ -75,22 +73,12 @@ class ChromaDB:
**kwargs,
)
# Data loader
if data_loader:
self.data_loader = data_loader
else:
self.data_loader = None
# Embedding model
if embedding_function:
self.embedding_function = embedding_function
else:
self.embedding_function = None
# If multimodal set the embedding model to OpenCLIP
if multimodal:
self.embedding_function = None
# Create ChromaDB client
self.client = chromadb.Client()
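For orientation, a minimal sketch of constructing the slimmed-down wrapper after this change; argument values are illustrative and limited to parameters visible in the signature above:

```python
from swarms.memory import ChromaDB

# In-memory Chroma collection; docs_folder (optional) points at local
# documents for the wrapper to ingest
memory = ChromaDB(
    n_results=2,
    docs_folder="docs",
    verbose=True,
)
```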

@@ -1,56 +1,51 @@
from swarms.models.anthropic import Anthropic # noqa: E402
from swarms.models.base_embedding_model import BaseEmbeddingModel
from swarms.models.base_llm import AbstractLLM # noqa: E402
from swarms.models.base_multimodal_model import BaseMultiModalModel  # noqa: E402
from swarms.models.biogpt import BioGPT # noqa: E402
from swarms.models.clipq import CLIPQ # noqa: E402
# from swarms.models.dalle3 import Dalle3
# from swarms.models.distilled_whisperx import DistilWhisperModel # noqa: E402
# from swarms.models.whisperx_model import WhisperX # noqa: E402
# from swarms.models.kosmos_two import Kosmos # noqa: E402
# from swarms.models.cog_agent import CogAgent # noqa: E402
## Function calling models
from swarms.models.fire_function import FireFunctionCaller
from swarms.models.fuyu import Fuyu # noqa: E402
from swarms.models.gemini import Gemini # noqa: E402
from swarms.models.gigabind import Gigabind # noqa: E402
from swarms.models.gpt4_vision_api import GPT4VisionAPI # noqa: E402
from swarms.models.huggingface import HuggingfaceLLM # noqa: E402
from swarms.models.idefics import Idefics # noqa: E402
from swarms.models.kosmos_two import Kosmos # noqa: E402
from swarms.models.layoutlm_document_qa import LayoutLMDocumentQA  # noqa: E402
from swarms.models.llava import LavaMultiModal # noqa: E402
from swarms.models.mistral import Mistral # noqa: E402
from swarms.models.mixtral import Mixtral # noqa: E402
from swarms.models.mpt import MPT7B # noqa: E402
from swarms.models.nougat import Nougat # noqa: E402
from swarms.models.openai_models import (
AzureOpenAI,
OpenAI,
OpenAIChat,
)  # noqa: E402
from swarms.models.openai_tts import OpenAITTS # noqa: E402
from swarms.models.petals import Petals # noqa: E402
from swarms.models.popular_llms import (
AnthropicChat as Anthropic,
)
from swarms.models.popular_llms import (
AzureOpenAILLM as AzureOpenAI,
)
from swarms.models.popular_llms import (
CohereChat as Cohere,
)
from swarms.models.popular_llms import (
MosaicMLChat as MosaicML,
)
from swarms.models.popular_llms import (
OpenAIChatLLM as OpenAIChat,
)
from swarms.models.popular_llms import (
OpenAILLM as OpenAI,
)
from swarms.models.popular_llms import (
ReplicateLLM as Replicate,
)
from swarms.models.qwen import QwenVLMultiModal # noqa: E402
from swarms.models.roboflow_model import RoboflowMultiModal
# from swarms.models.roboflow_model import RoboflowMultiModal
from swarms.models.sam_supervision import SegmentAnythingMarkGenerator
from swarms.models.sampling_params import SamplingParams, SamplingType
from swarms.models.timm import TimmModel # noqa: E402
# from swarms.models.modelscope_pipeline import ModelScopePipeline
# from swarms.models.modelscope_llm import (
# ModelScopeAutoModel,
# ) # noqa: E402
from swarms.models.together import TogetherLLM # noqa: E402
############## Types
from swarms.models.types import ( # noqa: E402
AudioModality,
ImageModality,
@@ -58,61 +53,54 @@ from swarms.models.types import ( # noqa: E402
TextModality,
VideoModality,
)
from swarms.models.ultralytics_model import UltralyticsModel  # noqa: E402
# from swarms.models.ultralytics_model import UltralyticsModel
from swarms.models.vilt import Vilt # noqa: E402
from swarms.models.wizard_storytelling import WizardLLMStoryTeller  # noqa: E402
# from swarms.models.vllm import vLLM # noqa: E402
from swarms.models.zephyr import Zephyr # noqa: E402
from swarms.models.zeroscope import ZeroscopeTTV # noqa: E402
__all__ = [
"AbstractLLM",
"Anthropic",
"Petals",
"Mistral",
"OpenAI",
"AzureOpenAI",
"OpenAIChat",
"Zephyr",
"BaseEmbeddingModel",
"BaseMultiModalModel",
"Idefics",
"Vilt",
"Nougat",
"LayoutLMDocumentQA",
"BioGPT",
"HuggingfaceLLM",
"MPT7B",
"WizardLLMStoryTeller",
# "Dalle3",
# "DistilWhisperModel",
"CLIPQ",
"Cohere",
"FireFunctionCaller",
"Fuyu",
"GPT4VisionAPI",
# "vLLM",
"OpenAITTS",
"Gemini",
"Gigabind",
"HuggingfaceLLM",
"Idefics",
"Kosmos",
"LayoutLMDocumentQA",
"LavaMultiModal",
"Replicate",
"MPT7B",
"Mistral",
"Mixtral",
"ZeroscopeTTV",
"MosaicML",
"Nougat",
"OpenAI",
"OpenAIChat",
"OpenAITTS",
"Petals",
"QwenVLMultiModal",
"SamplingParams",
"SamplingType",
"SegmentAnythingMarkGenerator",
"TextModality",
"ImageModality",
"AudioModality",
"TimmModel",
"TogetherLLM",
"Vilt",
"VideoModality",
"WizardLLMStoryTeller",
"Zephyr",
"ZeroscopeTTV",
"AudioModality",
"ImageModality",
"MultimodalData",
"TogetherLLM",
"TimmModel",
"UltralyticsModel",
"LavaMultiModal",
"QwenVLMultiModal",
"CLIPQ",
"Kosmos",
"Fuyu",
"BaseEmbeddingModel",
"RoboflowMultiModal",
"SegmentAnythingMarkGenerator",
"SamplingType",
"SamplingParams",
"FireFunctionCaller",
]

@@ -1,575 +0,0 @@
import contextlib
import datetime
import functools
import importlib
import re
import warnings
from importlib.metadata import version
from typing import (
Any,
AsyncIterator,
Callable,
Dict,
Iterator,
List,
Mapping,
Optional,
Set,
Tuple,
Union,
)
from langchain.callbacks.manager import (
AsyncCallbackManagerForLLMRun,
CallbackManagerForLLMRun,
)
from langchain.llms.base import LLM
from langchain.schema.language_model import BaseLanguageModel
from langchain.schema.output import GenerationChunk
from langchain.schema.prompt import PromptValue
from langchain.utils import get_from_dict_or_env
from packaging.version import parse
from pydantic import Field, SecretStr, root_validator
from requests import HTTPError, Response
def xor_args(*arg_groups: Tuple[str, ...]) -> Callable:
"""Validate specified keyword args are mutually exclusive."""
def decorator(func: Callable) -> Callable:
@functools.wraps(func)
def wrapper(*args: Any, **kwargs: Any) -> Any:
"""Validate exactly one arg in each group is not None."""
counts = [
sum(
1
for arg in arg_group
if kwargs.get(arg) is not None
)
for arg_group in arg_groups
]
invalid_groups = [
i for i, count in enumerate(counts) if count != 1
]
if invalid_groups:
invalid_group_names = [
", ".join(arg_groups[i]) for i in invalid_groups
]
raise ValueError(
"Exactly one argument in each of the following"
" groups must be defined:"
f" {', '.join(invalid_group_names)}"
)
return func(*args, **kwargs)
return wrapper
return decorator
def raise_for_status_with_text(response: Response) -> None:
"""Raise an error with the response text."""
try:
response.raise_for_status()
except HTTPError as e:
raise ValueError(response.text) from e
@contextlib.contextmanager
def mock_now(dt_value): # type: ignore
"""Context manager for mocking out datetime.now() in unit tests.
Example:
with mock_now(datetime.datetime(2011, 2, 3, 10, 11)):
assert datetime.datetime.now() == datetime.datetime(2011, 2, 3, 10, 11)
"""
class MockDateTime(datetime.datetime):
"""Mock datetime.datetime.now() with a fixed datetime."""
@classmethod
def now(cls): # type: ignore
# Create a copy of dt_value.
return datetime.datetime(
dt_value.year,
dt_value.month,
dt_value.day,
dt_value.hour,
dt_value.minute,
dt_value.second,
dt_value.microsecond,
dt_value.tzinfo,
)
real_datetime = datetime.datetime
datetime.datetime = MockDateTime
try:
yield datetime.datetime
finally:
datetime.datetime = real_datetime
def guard_import(
module_name: str,
*,
pip_name: Optional[str] = None,
package: Optional[str] = None,
) -> Any:
"""Dynamically imports a module and raises a helpful exception if the module is not
installed."""
try:
module = importlib.import_module(module_name, package)
except ImportError:
raise ImportError(
f"Could not import {module_name} python package. Please"
" install it with `pip install"
f" {pip_name or module_name}`."
)
return module
def check_package_version(
package: str,
lt_version: Optional[str] = None,
lte_version: Optional[str] = None,
gt_version: Optional[str] = None,
gte_version: Optional[str] = None,
) -> None:
"""Check the version of a package."""
imported_version = parse(version(package))
if lt_version is not None and imported_version >= parse(
lt_version
):
raise ValueError(
f"Expected {package} version to be < {lt_version}."
f" Received {imported_version}."
)
if lte_version is not None and imported_version > parse(
lte_version
):
raise ValueError(
f"Expected {package} version to be <= {lte_version}."
f" Received {imported_version}."
)
if gt_version is not None and imported_version <= parse(
gt_version
):
raise ValueError(
f"Expected {package} version to be > {gt_version}."
f" Received {imported_version}."
)
if gte_version is not None and imported_version < parse(
gte_version
):
raise ValueError(
f"Expected {package} version to be >= {gte_version}."
f" Received {imported_version}."
)
def get_pydantic_field_names(pydantic_cls: Any) -> Set[str]:
"""Get field names, including aliases, for a pydantic class.
Args:
pydantic_cls: Pydantic class."""
all_required_field_names = set()
for field in pydantic_cls.__fields__.values():
all_required_field_names.add(field.name)
if field.has_alias:
all_required_field_names.add(field.alias)
return all_required_field_names
def build_extra_kwargs(
extra_kwargs: Dict[str, Any],
values: Dict[str, Any],
all_required_field_names: Set[str],
) -> Dict[str, Any]:
"""Build extra kwargs from values and extra_kwargs.
Args:
extra_kwargs: Extra kwargs passed in by user.
values: Values passed in by user.
all_required_field_names: All required field names for the pydantic class.
"""
for field_name in list(values):
if field_name in extra_kwargs:
raise ValueError(f"Found {field_name} supplied twice.")
if field_name not in all_required_field_names:
warnings.warn(
f"""WARNING! {field_name} is not default parameter.
{field_name} was transferred to model_kwargs.
Please confirm that {field_name} is what you intended."""
)
extra_kwargs[field_name] = values.pop(field_name)
invalid_model_kwargs = all_required_field_names.intersection(
extra_kwargs.keys()
)
if invalid_model_kwargs:
raise ValueError(
f"Parameters {invalid_model_kwargs} should be specified"
" explicitly. Instead they were passed in as part of"
" `model_kwargs` parameter."
)
return extra_kwargs
def convert_to_secret_str(value: Union[SecretStr, str]) -> SecretStr:
"""Convert a string to a SecretStr if needed."""
if isinstance(value, SecretStr):
return value
return SecretStr(value)
class _AnthropicCommon(BaseLanguageModel):
client: Any = None #: :meta private:
async_client: Any = None #: :meta private:
model: str = Field(default="claude-2", alias="model_name")
"""Model name to use."""
max_tokens_to_sample: int = Field(default=256, alias="max_tokens")
"""Denotes the number of tokens to predict per generation."""
temperature: Optional[float] = None
"""A non-negative float that tunes the degree of randomness in generation."""
top_k: Optional[int] = None
"""Number of most likely tokens to consider at each step."""
top_p: Optional[float] = None
"""Total probability mass of tokens to consider at each step."""
streaming: bool = False
"""Whether to stream the results."""
default_request_timeout: Optional[float] = None
"""Timeout for requests to Anthropic Completion API. Default is 600 seconds."""
anthropic_api_url: Optional[str] = None
anthropic_api_key: Optional[SecretStr] = None
HUMAN_PROMPT: Optional[str] = None
AI_PROMPT: Optional[str] = None
count_tokens: Optional[Callable[[str], int]] = None
model_kwargs: Dict[str, Any] = Field(default_factory=dict)
@root_validator(pre=True)
def build_extra(cls, values: Dict) -> Dict:
extra = values.get("model_kwargs", {})
all_required_field_names = get_pydantic_field_names(cls)
values["model_kwargs"] = build_extra_kwargs(
extra, values, all_required_field_names
)
return values
@root_validator()
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that api key and python package exists in environment."""
values["anthropic_api_key"] = convert_to_secret_str(
get_from_dict_or_env(
values, "anthropic_api_key", "ANTHROPIC_API_KEY"
)
)
# Get custom api url from environment.
values["anthropic_api_url"] = get_from_dict_or_env(
values,
"anthropic_api_url",
"ANTHROPIC_API_URL",
default="https://api.anthropic.com",
)
try:
import anthropic
check_package_version("anthropic", gte_version="0.3")
values["client"] = anthropic.Anthropic(
base_url=values["anthropic_api_url"],
api_key=values[
"anthropic_api_key"
].get_secret_value(),
timeout=values["default_request_timeout"],
)
values["async_client"] = anthropic.AsyncAnthropic(
base_url=values["anthropic_api_url"],
api_key=values[
"anthropic_api_key"
].get_secret_value(),
timeout=values["default_request_timeout"],
)
values["HUMAN_PROMPT"] = anthropic.HUMAN_PROMPT
values["AI_PROMPT"] = anthropic.AI_PROMPT
values["count_tokens"] = values["client"].count_tokens
except ImportError:
raise ImportError(
"Could not import anthropic python package. "
"Please it install it with `pip install anthropic`."
)
return values
@property
def _default_params(self) -> Mapping[str, Any]:
"""Get the default parameters for calling Anthropic API."""
d = {
"max_tokens_to_sample": self.max_tokens_to_sample,
"model": self.model,
}
if self.temperature is not None:
d["temperature"] = self.temperature
if self.top_k is not None:
d["top_k"] = self.top_k
if self.top_p is not None:
d["top_p"] = self.top_p
return {**d, **self.model_kwargs}
@property
def _identifying_params(self) -> Mapping[str, Any]:
"""Get the identifying parameters."""
return {**{}, **self._default_params}
def _get_anthropic_stop(
self, stop: Optional[List[str]] = None
) -> List[str]:
if not self.HUMAN_PROMPT or not self.AI_PROMPT:
raise NameError(
"Please ensure the anthropic package is loaded"
)
if stop is None:
stop = []
# Never want model to invent new turns of Human / Assistant dialog.
stop.extend([self.HUMAN_PROMPT])
return stop
class Anthropic(LLM, _AnthropicCommon):
"""Anthropic large language models.
To use, you should have the ``anthropic`` python package installed, and the
environment variable ``ANTHROPIC_API_KEY`` set with your API key, or pass
it as a named parameter to the constructor.
Example:
.. code-block:: python
import anthropic
from langchain.llms import Anthropic
model = Anthropic(model="<model_name>", anthropic_api_key="my-api-key")
# Simplest invocation, automatically wrapped with HUMAN_PROMPT
# and AI_PROMPT.
response = model("What are the biggest risks facing humanity?")
# Or if you want to use the chat mode, build a few-shot-prompt, or
# put words in the Assistant's mouth, use HUMAN_PROMPT and AI_PROMPT:
raw_prompt = "What are the biggest risks facing humanity?"
prompt = f"{anthropic.HUMAN_PROMPT} {prompt}{anthropic.AI_PROMPT}"
response = model(prompt)
"""
class Config:
"""Configuration for this pydantic object."""
allow_population_by_field_name = True
arbitrary_types_allowed = True
@root_validator()
def raise_warning(cls, values: Dict) -> Dict:
"""Raise warning that this class is deprecated."""
warnings.warn(
"There may be an updated version of"
f" {cls.__name__} available."
)
return values
@property
def _llm_type(self) -> str:
"""Return type of llm."""
return "anthropic-llm"
def _wrap_prompt(self, prompt: str) -> str:
if not self.HUMAN_PROMPT or not self.AI_PROMPT:
raise NameError(
"Please ensure the anthropic package is loaded"
)
if prompt.startswith(self.HUMAN_PROMPT):
return prompt # Already wrapped.
# Guard against common errors in specifying wrong number of newlines.
corrected_prompt, n_subs = re.subn(
r"^\n*Human:", self.HUMAN_PROMPT, prompt
)
if n_subs == 1:
return corrected_prompt
# As a last resort, wrap the prompt ourselves to emulate instruct-style.
return (
f"{self.HUMAN_PROMPT} {prompt}{self.AI_PROMPT} Sure, here"
" you go:\n"
)
def _call(
self,
prompt: str,
stop: Optional[List[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None,
**kwargs: Any,
) -> str:
r"""Call out to Anthropic's completion endpoint.
Args:
prompt: The prompt to pass into the model.
stop: Optional list of stop words to use when generating.
Returns:
The string generated by the model.
Example:
.. code-block:: python
prompt = "What are the biggest risks facing humanity?"
prompt = f"\n\nHuman: {prompt}\n\nAssistant:"
response = model(prompt)
"""
if self.streaming:
completion = ""
for chunk in self._stream(
prompt=prompt,
stop=stop,
run_manager=run_manager,
**kwargs,
):
completion += chunk.text
return completion
stop = self._get_anthropic_stop(stop)
params = {**self._default_params, **kwargs}
response = self.client.completions.create(
prompt=self._wrap_prompt(prompt),
stop_sequences=stop,
**params,
)
return response.completion
def convert_prompt(self, prompt: PromptValue) -> str:
return self._wrap_prompt(prompt.to_string())
async def _acall(
self,
prompt: str,
stop: Optional[List[str]] = None,
run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
**kwargs: Any,
) -> str:
"""Call out to Anthropic's completion endpoint asynchronously."""
if self.streaming:
completion = ""
async for chunk in self._astream(
prompt=prompt,
stop=stop,
run_manager=run_manager,
**kwargs,
):
completion += chunk.text
return completion
stop = self._get_anthropic_stop(stop)
params = {**self._default_params, **kwargs}
response = await self.async_client.completions.create(
prompt=self._wrap_prompt(prompt),
stop_sequences=stop,
**params,
)
return response.completion
def _stream(
self,
prompt: str,
stop: Optional[List[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None,
**kwargs: Any,
) -> Iterator[GenerationChunk]:
r"""Call Anthropic completion_stream and return the resulting generator.
Args:
prompt: The prompt to pass into the model.
stop: Optional list of stop words to use when generating.
Returns:
A generator representing the stream of tokens from Anthropic.
Example:
.. code-block:: python
prompt = "Write a poem about a stream."
prompt = f"\n\nHuman: {prompt}\n\nAssistant:"
generator = anthropic.stream(prompt)
for token in generator:
yield token
"""
stop = self._get_anthropic_stop(stop)
params = {**self._default_params, **kwargs}
for token in self.client.completions.create(
prompt=self._wrap_prompt(prompt),
stop_sequences=stop,
stream=True,
**params,
):
chunk = GenerationChunk(text=token.completion)
yield chunk
if run_manager:
run_manager.on_llm_new_token(chunk.text, chunk=chunk)
async def _astream(
self,
prompt: str,
stop: Optional[List[str]] = None,
run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
**kwargs: Any,
) -> AsyncIterator[GenerationChunk]:
r"""Call Anthropic completion_stream and return the resulting generator.
Args:
prompt: The prompt to pass into the model.
stop: Optional list of stop words to use when generating.
Returns:
A generator representing the stream of tokens from Anthropic.
Example:
.. code-block:: python
prompt = "Write a poem about a stream."
prompt = f"\n\nHuman: {prompt}\n\nAssistant:"
generator = anthropic.stream(prompt)
for token in generator:
yield token
"""
stop = self._get_anthropic_stop(stop)
params = {**self._default_params, **kwargs}
async for token in await self.async_client.completions.create(
prompt=self._wrap_prompt(prompt),
stop_sequences=stop,
stream=True,
**params,
):
chunk = GenerationChunk(text=token.completion)
yield chunk
if run_manager:
await run_manager.on_llm_new_token(
chunk.text, chunk=chunk
)
def get_num_tokens(self, text: str) -> int:
"""Calculate number of tokens."""
if not self.count_tokens:
raise NameError(
"Please ensure the anthropic package is loaded"
)
return self.count_tokens(text)

@@ -1,223 +0,0 @@
from __future__ import annotations
import logging
import os
from typing import Any, Callable, Mapping
import openai
from langchain_core.pydantic_v1 import (
Field,
SecretStr,
root_validator,
)
from langchain_core.utils import (
convert_to_secret_str,
get_from_dict_or_env,
)
from langchain_openai.llms.base import BaseOpenAI
logger = logging.getLogger(__name__)
class AzureOpenAI(BaseOpenAI):
"""Azure-specific OpenAI large language models.
To use, you should have the ``openai`` python package installed, and the
environment variable ``OPENAI_API_KEY`` set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example:
.. code-block:: python
from swarms import AzureOpenAI
openai = AzureOpenAI(model_name="gpt-3.5-turbo-instruct")
"""
azure_endpoint: str | None = None
"""Your Azure endpoint, including the resource.
Automatically inferred from env var `AZURE_OPENAI_ENDPOINT` if not provided.
Example: `https://example-resource.azure.openai.com/`
"""
deployment_name: str | None = Field(
default=None, alias="azure_deployment"
)
"""A model deployment.
If given sets the base client URL to include `/deployments/{azure_deployment}`.
Note: this means you won't be able to use non-deployment endpoints.
"""
openai_api_version: str = Field(default="", alias="api_version")
"""Automatically inferred from env var `OPENAI_API_VERSION` if not provided."""
openai_api_key: SecretStr | None = Field(
default=None, alias="api_key"
)
"""Automatically inferred from env var `AZURE_OPENAI_API_KEY` if not provided."""
azure_ad_token: SecretStr | None = None
"""Your Azure Active Directory token.
Automatically inferred from env var `AZURE_OPENAI_AD_TOKEN` if not provided.
For more:
https://www.microsoft.com/en-us/security/business/identity-access/microsoft-entra-id.
""" # noqa: E501
azure_ad_token_provider: Callable[[], str] | None = None
"""A function that returns an Azure Active Directory token.
Will be invoked on every request.
"""
openai_api_type: str = ""
"""Legacy, for openai<1.0.0 support."""
validate_base_url: bool = True
"""For backwards compatibility. If legacy val openai_api_base is passed in, try to
infer if it is a base_url or azure_endpoint and update accordingly.
"""
@classmethod
def get_lc_namespace(cls) -> list[str]:
"""Get the namespace of the langchain object."""
return ["langchain", "llms", "openai"]
@root_validator()
def validate_environment(cls, values: dict) -> dict:
"""Validate that api key and python package exists in environment."""
if values["n"] < 1:
raise ValueError("n must be at least 1.")
if values["streaming"] and values["n"] > 1:
raise ValueError("Cannot stream results when n > 1.")
if values["streaming"] and values["best_of"] > 1:
raise ValueError(
"Cannot stream results when best_of > 1."
)
# Check OPENAI_KEY for backwards compatibility.
# TODO: Remove OPENAI_API_KEY support to avoid possible conflict when using
# other forms of azure credentials.
openai_api_key = (
values["openai_api_key"]
or os.getenv("AZURE_OPENAI_API_KEY")
or os.getenv("OPENAI_API_KEY")
)
values["openai_api_key"] = (
convert_to_secret_str(openai_api_key)
if openai_api_key
else None
)
values["azure_endpoint"] = values[
"azure_endpoint"
] or os.getenv("AZURE_OPENAI_ENDPOINT")
azure_ad_token = values["azure_ad_token"] or os.getenv(
"AZURE_OPENAI_AD_TOKEN"
)
values["azure_ad_token"] = (
convert_to_secret_str(azure_ad_token)
if azure_ad_token
else None
)
values["openai_api_base"] = values[
"openai_api_base"
] or os.getenv("OPENAI_API_BASE")
values["openai_proxy"] = get_from_dict_or_env(
values,
"openai_proxy",
"OPENAI_PROXY",
default="",
)
values["openai_organization"] = (
values["openai_organization"]
or os.getenv("OPENAI_ORG_ID")
or os.getenv("OPENAI_ORGANIZATION")
)
values["openai_api_version"] = values[
"openai_api_version"
] or os.getenv("OPENAI_API_VERSION")
values["openai_api_type"] = get_from_dict_or_env(
values,
"openai_api_type",
"OPENAI_API_TYPE",
default="azure",
)
# For backwards compatibility. Before openai v1, no distinction was made
# between azure_endpoint and base_url (openai_api_base).
openai_api_base = values["openai_api_base"]
if openai_api_base and values["validate_base_url"]:
if "/openai" not in openai_api_base:
values["openai_api_base"] = (
values["openai_api_base"].rstrip("/") + "/openai"
)
raise ValueError(
"As of openai>=1.0.0, Azure endpoints should be"
" specified via the `azure_endpoint` param not"
" `openai_api_base` (or alias `base_url`)."
)
if values["deployment_name"]:
raise ValueError(
"As of openai>=1.0.0, if `deployment_name` (or"
" alias `azure_deployment`) is specified then"
" `openai_api_base` (or alias `base_url`) should"
" not be. Instead use `deployment_name` (or alias"
" `azure_deployment`) and `azure_endpoint`."
)
values["deployment_name"] = None
client_params = {
"api_version": values["openai_api_version"],
"azure_endpoint": values["azure_endpoint"],
"azure_deployment": values["deployment_name"],
"api_key": (
values["openai_api_key"].get_secret_value()
if values["openai_api_key"]
else None
),
"azure_ad_token": (
values["azure_ad_token"].get_secret_value()
if values["azure_ad_token"]
else None
),
"azure_ad_token_provider": values[
"azure_ad_token_provider"
],
"organization": values["openai_organization"],
"base_url": values["openai_api_base"],
"timeout": values["request_timeout"],
"max_retries": values["max_retries"],
"default_headers": values["default_headers"],
"default_query": values["default_query"],
"http_client": values["http_client"],
}
values["client"] = openai.AzureOpenAI(
**client_params
).completions
values["async_client"] = openai.AsyncAzureOpenAI(
**client_params
).completions
return values
@property
def _identifying_params(self) -> Mapping[str, Any]:
return {
**{"deployment_name": self.deployment_name},
**super()._identifying_params,
}
@property
def _invocation_params(self) -> dict[str, Any]:
openai_params = {"model": self.deployment_name}
return {**openai_params, **super()._invocation_params}
@property
def _llm_type(self) -> str:
"""Return type of llm."""
return "azure"
@property
def lc_attributes(self) -> dict[str, Any]:
return {
"openai_api_type": self.openai_api_type,
"openai_api_version": self.openai_api_version,
}

@@ -1,258 +0,0 @@
import logging
from typing import Any, Callable, Dict, List, Optional
from langchain.callbacks.manager import (
AsyncCallbackManagerForLLMRun,
CallbackManagerForLLMRun,
)
from langchain.llms.base import LLM
from langchain.llms.utils import enforce_stop_tokens
from langchain.load.serializable import Serializable
from langchain.utils import get_from_dict_or_env
from pydantic import Extra, Field, root_validator
from tenacity import (
before_sleep_log,
retry,
retry_if_exception_type,
stop_after_attempt,
wait_exponential,
)
logger = logging.getLogger(__name__)
def _create_retry_decorator(llm) -> Callable[[Any], Any]:
import cohere
min_seconds = 4
max_seconds = 10
# Wait 2^x * 1 second between each retry starting with
# 4 seconds, then up to 10 seconds, then 10 seconds afterwards
return retry(
reraise=True,
stop=stop_after_attempt(llm.max_retries),
wait=wait_exponential(
multiplier=1, min=min_seconds, max=max_seconds
),
retry=retry_if_exception_type(cohere.error.CohereError),
before_sleep=before_sleep_log(logger, logging.WARNING),
)
def completion_with_retry(llm, **kwargs: Any) -> Any:
"""Use tenacity to retry the completion call."""
retry_decorator = _create_retry_decorator(llm)
@retry_decorator
def _completion_with_retry(**kwargs: Any) -> Any:
return llm.client.generate(**kwargs)
return _completion_with_retry(**kwargs)
def acompletion_with_retry(llm, **kwargs: Any) -> Any:
"""Use tenacity to retry the completion call."""
retry_decorator = _create_retry_decorator(llm)
@retry_decorator
async def _completion_with_retry(**kwargs: Any) -> Any:
return await llm.async_client.generate(**kwargs)
return _completion_with_retry(**kwargs)
class BaseCohere(Serializable):
"""Base class for Cohere models."""
client: Any #: :meta private:
async_client: Any #: :meta private:
model: Optional[str] = Field(
default=None, description="Model name to use."
)
"""Model name to use."""
temperature: float = 0.75
"""A non-negative float that tunes the degree of randomness in generation."""
cohere_api_key: Optional[str] = None
stop: Optional[List[str]] = None
streaming: bool = Field(default=False)
"""Whether to stream the results."""
user_agent: str = "langchain"
"""Identifier for the application making the request."""
@root_validator()
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that api key and python package exists in environment."""
try:
import cohere
except ImportError:
raise ImportError(
"Could not import cohere python package. "
"Please install it with `pip install cohere`."
)
else:
cohere_api_key = get_from_dict_or_env(
values, "cohere_api_key", "COHERE_API_KEY"
)
client_name = values["user_agent"]
values["client"] = cohere.Client(
cohere_api_key, client_name=client_name
)
values["async_client"] = cohere.AsyncClient(
cohere_api_key, client_name=client_name
)
return values
class Cohere(LLM, BaseCohere):
"""Cohere large language models.
To use, you should have the ``cohere`` python package installed, and the
environment variable ``COHERE_API_KEY`` set with your API key, or pass
it as a named parameter to the constructor.
Example:
.. code-block:: python
from langchain.llms import Cohere
cohere = Cohere(model="gptd-instruct-tft", cohere_api_key="my-api-key")
"""
max_tokens: int = 256
"""Denotes the number of tokens to predict per generation."""
k: int = 0
"""Number of most likely tokens to consider at each step."""
p: int = 1
"""Total probability mass of tokens to consider at each step."""
frequency_penalty: float = 0.0
"""Penalizes repeated tokens according to frequency. Between 0 and 1."""
presence_penalty: float = 0.0
"""Penalizes repeated tokens. Between 0 and 1."""
truncate: Optional[str] = None
"""Specify how the client handles inputs longer than the maximum token
length: Truncate from START, END or NONE"""
max_retries: int = 10
"""Maximum number of retries to make when generating."""
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
@property
def _default_params(self) -> Dict[str, Any]:
"""Get the default parameters for calling Cohere API."""
return {
"max_tokens": self.max_tokens,
"temperature": self.temperature,
"k": self.k,
"p": self.p,
"frequency_penalty": self.frequency_penalty,
"presence_penalty": self.presence_penalty,
"truncate": self.truncate,
}
@property
def lc_secrets(self) -> Dict[str, str]:
return {"cohere_api_key": "COHERE_API_KEY"}
@property
def _identifying_params(self) -> Dict[str, Any]:
"""Get the identifying parameters."""
return {**{"model": self.model}, **self._default_params}
@property
def _llm_type(self) -> str:
"""Return type of llm."""
return "cohere"
def _invocation_params(
self, stop: Optional[List[str]], **kwargs: Any
) -> dict:
params = self._default_params
if self.stop is not None and stop is not None:
raise ValueError(
"`stop` found in both the input and default params."
)
elif self.stop is not None:
params["stop_sequences"] = self.stop
else:
params["stop_sequences"] = stop
return {**params, **kwargs}
def _process_response(
self, response: Any, stop: Optional[List[str]]
) -> str:
text = response.generations[0].text
# If stop tokens are provided, Cohere's endpoint returns them.
# In order to make this consistent with other endpoints, we strip them.
if stop:
text = enforce_stop_tokens(text, stop)
return text
def _call(
self,
prompt: str,
stop: Optional[List[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None,
**kwargs: Any,
) -> str:
"""Call out to Cohere's generate endpoint.
Args:
prompt: The prompt to pass into the model.
stop: Optional list of stop words to use when generating.
Returns:
The string generated by the model.
Example:
.. code-block:: python
response = cohere("Tell me a joke.")
"""
params = self._invocation_params(stop, **kwargs)
response = completion_with_retry(
self, model=self.model, prompt=prompt, **params
)
_stop = params.get("stop_sequences")
return self._process_response(response, _stop)
async def _acall(
self,
prompt: str,
stop: Optional[List[str]] = None,
run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
**kwargs: Any,
) -> str:
"""Async call out to Cohere's generate endpoint.
Args:
prompt: The prompt to pass into the model.
stop: Optional list of stop words to use when generating.
Returns:
The string generated by the model.
Example:
.. code-block:: python
response = await cohere("Tell me a joke.")
"""
params = self._invocation_params(stop, **kwargs)
response = await acompletion_with_retry(
self, model=self.model, prompt=prompt, **params
)
_stop = params.get("stop_sequences")
return self._process_response(response, _stop)
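For context, a minimal usage sketch of this wrapper. It assumes the `cohere` package is installed and a real API key; the key value and the `swarms.models` import path below are placeholders:

```python
import asyncio

from swarms.models import Cohere  # assumed import path

# Placeholder key; client construction happens in validate_environment
llm = Cohere(model="command", cohere_api_key="my-api-key", max_tokens=256)

# Synchronous call; LLM.__call__ routes through _call above
print(llm("Tell me a joke."))

# The async path routes through _acall
async def main() -> None:
    print(await llm.apredict("Tell me a joke."))

asyncio.run(main())
```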

@ -1 +0,0 @@
# Base implementation for the diffusers library

@ -1,114 +0,0 @@
import tempfile
from enum import Enum
from typing import Any, Dict, Union
from langchain.utils import get_from_dict_or_env
from pydantic import model_validator
from swarms.tools.tool import BaseTool
def _import_elevenlabs() -> Any:
try:
import elevenlabs
except ImportError as e:
raise ImportError(
"Cannot import elevenlabs, please install `pip install"
" elevenlabs`."
) from e
return elevenlabs
class ElevenLabsModel(str, Enum):
"""Models available for Eleven Labs Text2Speech."""
MULTI_LINGUAL = "eleven_multilingual_v1"
MONO_LINGUAL = "eleven_monolingual_v1"
class ElevenLabsText2SpeechTool(BaseTool):
"""Tool that queries the Eleven Labs Text2Speech API.
In order to set this up, follow instructions at:
https://docs.elevenlabs.io/welcome/introduction
Attributes:
model (ElevenLabsModel): The model to use for text to speech.
Defaults to ElevenLabsModel.MULTI_LINGUAL.
name (str): The name of the tool. Defaults to "eleven_labs_text2speech".
description (str): The description of the tool.
Defaults to "A wrapper around Eleven Labs Text2Speech. Useful for when you need to convert text to speech. It supports multiple languages, including English, German, Polish, Spanish, Italian, French, Portuguese, and Hindi."
Usage:
>>> from swarms.models import ElevenLabsText2SpeechTool
>>> stt = ElevenLabsText2SpeechTool()
>>> speech_file = stt.run("Hello world!")
>>> stt.play(speech_file)
>>> stt.stream_speech("Hello world!")
"""
model: Union[ElevenLabsModel, str] = ElevenLabsModel.MULTI_LINGUAL
name: str = "eleven_labs_text2speech"
description: str = (
"A wrapper around Eleven Labs Text2Speech. Useful for when"
" you need to convert text to speech. It supports multiple"
" languages, including English, German, Polish, Spanish,"
" Italian, French, Portuguese, and Hindi. "
)
@model_validator(mode="before")
@classmethod
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that api key exists in environment."""
_ = get_from_dict_or_env(
values, "eleven_api_key", "ELEVEN_API_KEY"
)
return values
def _run(
self,
task: str,
) -> str:
"""Use the tool."""
elevenlabs = _import_elevenlabs()
try:
speech = elevenlabs.generate(text=task, model=self.model)
with tempfile.NamedTemporaryFile(
mode="bx", suffix=".wav", delete=False
) as f:
f.write(speech)
return f.name
except Exception as e:
raise RuntimeError(
f"Error while running ElevenLabsText2SpeechTool: {e}"
)
def play(self, speech_file: str) -> None:
"""Play the text as speech."""
elevenlabs = _import_elevenlabs()
with open(speech_file, mode="rb") as f:
speech = f.read()
elevenlabs.play(speech)
def stream_speech(self, query: str) -> None:
"""Stream the text as speech as it is generated.
Play the text in your speakers."""
elevenlabs = _import_elevenlabs()
speech_stream = elevenlabs.generate(
text=query, model=self.model, stream=True
)
elevenlabs.stream(speech_stream)
def save(self, speech_file: str, path: str) -> None:
"""Save the speech file to a path."""
raise NotImplementedError(
"Saving not implemented for this tool."
)
def __str__(self):
return "ElevenLabsText2SpeechTool"

@ -1,82 +0,0 @@
import importlib
import inspect
import pkgutil
class ModelRegistry:
"""
A registry for storing and querying models.
Attributes:
models (dict): A dictionary of model names and corresponding model classes.
Methods:
__init__(): Initializes the ModelRegistry object and retrieves all available models.
_get_all_models(): Retrieves all available models from the models package.
query(text): Queries the models based on the given text and returns a dictionary of matching models.
"""
def __init__(self):
self.models = self._get_all_models()
def _get_all_models(self):
"""
Retrieves all available models from the models package.
Returns:
dict: A dictionary of model names and corresponding model classes.
"""
models = {}
# Iterate the package the models live in, not the local dict
# (assumed here to be the `swarms.models` package).
import swarms.models as models_package
for _, modname, _ in pkgutil.iter_modules(
models_package.__path__
):
module = importlib.import_module(
f"swarms.models.{modname}"
)
for name, obj in inspect.getmembers(module):
if inspect.isclass(obj):
models[name] = obj
return models
def query(self, text):
"""
Queries the models based on the given text and returns a dictionary of matching models.
Args:
text (str): The text to search for in the model names.
Returns:
dict: A dictionary of matching model names and corresponding model classes.
"""
return {
name: model
for name, model in self.models.items()
if text in name
}
def run_model(
self, model_name: str, task: str, img: str, *args, **kwargs
):
"""
Runs the specified model for the given task and image.
Args:
model_name (str): The name of the model to run.
task (str): The task to perform using the model.
img (str): The image to process.
*args: Additional positional arguments to pass to the model's run method.
**kwargs: Additional keyword arguments to pass to the model's run method.
Returns:
The result of running the model.
Raises:
ValueError: If the specified model is not found in the model registry.
"""
if model_name not in self.models:
raise ValueError(f"Model {model_name} not found")
# Get the model
model = self.models[model_name]
# Run the model
return model.run(task, img, *args, **kwargs)
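A sketch of how this registry could be queried and dispatched against; the model names and task below are purely illustrative:

```python
# Hypothetical usage of the registry: substring query, then dispatch.
registry = ModelRegistry()

# All registered classes whose name contains "GPT" (names are illustrative)
print(list(registry.query("GPT").keys()))

# Dispatch by name; raises ValueError for unknown models
result = registry.run_model(
    "Gemini",  # hypothetical registered model name
    task="Describe this image",
    img="photo.png",
)
```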

@ -1,83 +0,0 @@
from typing import Optional
from modelscope import AutoModelForCausalLM, AutoTokenizer
from swarms.models.base_llm import AbstractLLM
class ModelScopeAutoModel(AbstractLLM):
"""
ModelScopeAutoModel is a class that represents a model for generating text using the ModelScope framework.
Args:
model_name (str): The name or path of the pre-trained model.
tokenizer_name (str, optional): The name or path of the tokenizer to use. Defaults to None.
device (str, optional): The device to use for model inference. Defaults to "cuda".
device_map (str, optional): The device mapping for multi-GPU setups. Defaults to "auto".
max_new_tokens (int, optional): The maximum number of new tokens to generate. Defaults to 500.
skip_special_tokens (bool, optional): Whether to skip special tokens during decoding. Defaults to True.
*args: Additional positional arguments.
**kwargs: Additional keyword arguments.
Attributes:
tokenizer (AutoTokenizer): The tokenizer used for tokenizing input text.
model (AutoModelForCausalLM): The pre-trained model for generating text.
Methods:
run(task, *args, **kwargs): Generates text based on the given task.
Examples:
>>> from swarms.models import ModelScopeAutoModel
>>> mp = ModelScopeAutoModel(
... model_name="gpt2",
... )
>>> mp.run("Generate a 10,000 word blog on health and wellness.")
"""
def __init__(
self,
model_name: str,
tokenizer_name: Optional[str] = None,
device: str = "cuda",
device_map: str = "auto",
max_new_tokens: int = 500,
skip_special_tokens: bool = True,
*args,
**kwargs,
):
super().__init__(*args, **kwargs)
self.model_name = model_name
self.tokenizer_name = tokenizer_name
self.device = device
self.device_map = device_map
self.max_new_tokens = max_new_tokens
self.skip_special_tokens = skip_special_tokens
self.tokenizer = AutoTokenizer.from_pretrained(
self.tokenizer_name
)
self.model = AutoModelForCausalLM.from_pretrained(
self.model_name, device_map=device_map, *args, **kwargs
)
def run(self, task: str, *args, **kwargs):
"""
Run the model on the given task.
Parameters:
task (str): The input task to be processed.
*args: Additional positional arguments.
**kwargs: Additional keyword arguments.
Returns:
str: The generated output from the model.
"""
text = self.tokenizer(task, return_tensors="pt")
outputs = self.model.generate(
**text, max_new_tokens=self.max_new_tokens, **kwargs
)
return self.tokenizer.decode(
outputs[0], skip_special_tokens=self.skip_special_tokens
)

@ -1,58 +0,0 @@
from modelscope.pipelines import pipeline
from swarms.models.base_llm import AbstractLLM
class ModelScopePipeline(AbstractLLM):
"""
A class representing a ModelScope pipeline.
Args:
type_task (str): The type of task for the pipeline.
model_name (str): The name of the model for the pipeline.
*args: Variable length argument list.
**kwargs: Arbitrary keyword arguments.
Attributes:
type_task (str): The type of task for the pipeline.
model_name (str): The name of the model for the pipeline.
model: The pipeline model.
Methods:
run: Runs the pipeline for a given task.
Examples:
>>> from swarms.models import ModelScopePipeline
>>> mp = ModelScopePipeline(
... type_task="text-generation",
... model_name="gpt2",
... )
>>> mp.run("Generate a 10,000 word blog on health and wellness.")
"""
def __init__(
self, type_task: str, model_name: str, *args, **kwargs
):
super().__init__(*args, **kwargs)
self.type_task = type_task
self.model_name = model_name
self.model = pipeline(
self.type_task, model=self.model_name, *args, **kwargs
)
def run(self, task: str, *args, **kwargs):
"""
Runs the pipeline for a given task.
Args:
task (str): The task to be performed by the pipeline.
*args: Variable length argument list.
**kwargs: Arbitrary keyword arguments.
Returns:
The result of running the pipeline on the given task.
"""
return self.model(task, *args, **kwargs)

@ -2,7 +2,7 @@ import os
import supervision as sv
from tqdm import tqdm
from ultralytics_example import YOLO
from ultralytics import YOLO
from swarms.models.base_llm import AbstractLLM
from swarms.utils.download_weights_from_url import (

@ -5,7 +5,7 @@ import warnings
from typing import Any, Callable, Literal, Sequence
import numpy as np
from pydantic import BaseModel, Extra, Field, root_validator
from pydantic import model_validator, ConfigDict, BaseModel, Field
from tenacity import (
AsyncRetrying,
before_sleep_log,
@ -179,7 +179,7 @@ class OpenAIEmbeddings(BaseModel, Embeddings):
"""
client: Any #: :meta private:
client: Any = None #: :meta private:
model: str = "text-embedding-ada-002"
deployment: str = model # to support Azure OpenAI Service custom deployment names
openai_api_version: str | None = None
@ -218,13 +218,10 @@ class OpenAIEmbeddings(BaseModel, Embeddings):
"""Whether to show a progress bar when embedding."""
model_kwargs: dict[str, Any] = Field(default_factory=dict)
"""Holds any model parameters valid for `create` call not explicitly specified."""
model_config = ConfigDict(extra="forbid")
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
@root_validator(pre=True)
@model_validator(mode="before")
@classmethod
def build_extra(cls, values: dict[str, Any]) -> dict[str, Any]:
"""Build extra kwargs from additional params that were passed in."""
all_required_field_names = get_pydantic_field_names(cls)
@ -255,7 +252,8 @@ class OpenAIEmbeddings(BaseModel, Embeddings):
values["model_kwargs"] = extra
return values
@root_validator()
@model_validator(mode="before")
@classmethod
def validate_environment(cls, values: dict) -> dict:
"""Validate that api key and python package exists in environment."""
values["openai_api_key"] = get_from_dict_or_env(

@ -1,262 +0,0 @@
from typing import Any, Dict, List, Optional, Union
import openai
import requests
from pydantic import BaseModel, validator
from tenacity import (
retry,
stop_after_attempt,
wait_random_exponential,
)
from termcolor import colored
class FunctionSpecification(BaseModel):
"""
Defines the specification for a function including its parameters and metadata.
Attributes:
-----------
name: str
The name of the function.
description: str
A brief description of what the function does.
parameters: Dict[str, Any]
The parameters required by the function, with their details.
required: Optional[List[str]]
List of required parameter names.
Methods:
--------
validate_params(params: Dict[str, Any]) -> None:
Validates the parameters against the function's specification.
Example:
# Example Usage
def get_current_weather(location: str, format: str) -> str:
'''
Example function to get current weather.
Args:
location (str): The city and state, e.g. San Francisco, CA.
format (str): The temperature unit, e.g. celsius or fahrenheit.
Returns:
str: Weather information.
'''
# Implementation goes here
return "Sunny, 23°C"
weather_function_spec = FunctionSpecification(
name="get_current_weather",
description="Get the current weather",
parameters={
"location": {"type": "string", "description": "The city and state"},
"format": {
"type": "string",
"enum": ["celsius", "fahrenheit"],
"description": "The temperature unit",
},
},
required=["location", "format"],
)
# Validating parameters for the function
params = {"location": "San Francisco, CA", "format": "celsius"}
weather_function_spec.validate_params(params)
# Calling the function
print(get_current_weather(**params))
"""
name: str
description: str
parameters: Dict[str, Any]
required: Optional[List[str]] = None
@validator("parameters")
def check_parameters(cls, params):
if not isinstance(params, dict):
raise ValueError("Parameters must be a dictionary.")
return params
def validate_params(self, params: Dict[str, Any]) -> None:
"""
Validates the parameters against the function's specification.
Args:
params (Dict[str, Any]): The parameters to validate.
Raises:
ValueError: If any required parameter is missing or if any parameter is invalid.
"""
for key, value in params.items():
if key in self.parameters:
param_spec = self.parameters[key]
# Perform specific validation based on param_spec,
# e.g. type checking, range validation, etc.
else:
raise ValueError(f"Unexpected parameter: {key}")
for req_param in self.required or []:
if req_param not in params:
raise ValueError(
f"Missing required parameter: {req_param}"
)
class OpenAIFunctionCaller:
def __init__(
self,
openai_api_key: str,
model: str = "text-davinci-003",
max_tokens: int = 3000,
temperature: float = 0.5,
top_p: float = 1.0,
n: int = 1,
stream: bool = False,
stop: Optional[str] = None,
echo: bool = False,
frequency_penalty: float = 0.0,
presence_penalty: float = 0.0,
logprobs: Optional[int] = None,
best_of: int = 1,
logit_bias: Optional[Dict[str, float]] = None,
user: Optional[str] = None,
messages: List[Dict] = None,
timeout_sec: Union[float, None] = None,
):
self.openai_api_key = openai_api_key
self.model = model
self.max_tokens = max_tokens
self.temperature = temperature
self.top_p = top_p
self.n = n
self.stream = stream
self.stop = stop
self.echo = echo
self.frequency_penalty = frequency_penalty
self.presence_penalty = presence_penalty
self.logprobs = logprobs
self.best_of = best_of
self.logit_bias = logit_bias
self.user = user
self.messages = messages if messages is not None else []
self.timeout_sec = timeout_sec
def add_message(self, role: str, content: str):
self.messages.append({"role": role, "content": content})
@retry(
wait=wait_random_exponential(multiplier=1, max=40),
stop=stop_after_attempt(3),
)
def chat_completion_request(
self,
messages,
tools=None,
tool_choice=None,
):
headers = {
"Content-Type": "application/json",
"Authorization": "Bearer " + self.openai_api_key,
}
json_data = {"model": self.model, "messages": messages}
if tools is not None:
json_data.update({"tools": tools})
if tool_choice is not None:
json_data.update({"tool_choice": tool_choice})
try:
response = requests.post(
"https://api.openai.com/v1/chat/completions",
headers=headers,
json=json_data,
)
return response
except Exception as e:
print("Unable to generate ChatCompletion response")
print(f"Exception: {e}")
return e
def pretty_print_conversation(self, messages):
role_to_color = {
"system": "red",
"user": "green",
"assistant": "blue",
"tool": "magenta",
}
for message in messages:
if message["role"] == "system":
print(
colored(
f"system: {message['content']}\n",
role_to_color[message["role"]],
)
)
elif message["role"] == "user":
print(
colored(
f"user: {message['content']}\n",
role_to_color[message["role"]],
)
)
elif message["role"] == "assistant" and message.get(
"function_call"
):
print(
colored(
f"assistant: {message['function_call']}\n",
role_to_color[message["role"]],
)
)
elif message["role"] == "assistant" and not message.get(
"function_call"
):
print(
colored(
f"assistant: {message['content']}\n",
role_to_color[message["role"]],
)
)
elif message["role"] == "tool":
print(
colored(
(
f"function ({message['name']}):"
f" {message['content']}\n"
),
role_to_color[message["role"]],
)
)
def call(self, task: str, *args, **kwargs) -> Dict:
return openai.Completion.create(
engine=self.model,
prompt=task,
max_tokens=self.max_tokens,
temperature=self.temperature,
top_p=self.top_p,
n=self.n,
stream=self.stream,
stop=self.stop,
echo=self.echo,
frequency_penalty=self.frequency_penalty,
presence_penalty=self.presence_penalty,
logprobs=self.logprobs,
best_of=self.best_of,
logit_bias=self.logit_bias,
user=self.user,
messages=self.messages,
timeout_sec=self.timeout_sec,
*args,
**kwargs,
)
def run(self, task: str, *args, **kwargs) -> str:
response = self.call(task, *args, **kwargs)
return response["choices"][0]["text"].strip()
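A minimal sketch of driving this caller; it assumes a valid OpenAI key in the environment, and the prompt is illustrative:

```python
import os

# Placeholder key pulled from the environment; prompt is illustrative
caller = OpenAIFunctionCaller(
    openai_api_key=os.environ["OPENAI_API_KEY"],
    model="text-davinci-003",
    max_tokens=256,
)
print(caller.run("Write a haiku about swarms."))
```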

File diff suppressed because it is too large

@ -5,7 +5,7 @@ from typing import Any, Callable
from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain.llms import BaseLLM
from langchain.pydantic_v1 import BaseModel, root_validator
from langchain.pydantic_v1 import BaseModel
from langchain.schema import Generation, LLMResult
from langchain.utils import get_from_dict_or_env
from tenacity import (
@ -15,6 +15,7 @@ from tenacity import (
stop_after_attempt,
wait_exponential,
)
from pydantic import model_validator
logger = logging.getLogger(__name__)
@ -104,7 +105,8 @@ class GooglePalm(BaseLLM, BaseModel):
"""Number of chat completions to generate for each prompt. Note that the API may
not return the full n completions if duplicates are generated."""
@root_validator()
@model_validator(mode="before")
@classmethod
def validate_environment(cls, values: dict) -> dict:
"""Validate api key, python package exists."""
google_api_key = get_from_dict_or_env(

@ -1 +0,0 @@
"""Phi by Microsoft written by Kye"""

@ -0,0 +1,48 @@
from langchain_community.chat_models.azure_openai import (
AzureChatOpenAI,
)
from langchain_community.chat_models.openai import (
ChatOpenAI as OpenAIChat,
)
from langchain_community.llms import (
Anthropic,
Cohere,
MosaicML,
OpenAI,
Replicate,
)
class AnthropicChat(Anthropic):
def __call__(self, *args, **kwargs):
return self.invoke(*args, **kwargs)
class CohereChat(Cohere):
def __call__(self, *args, **kwargs):
return self.invoke(*args, **kwargs)
class MosaicMLChat(MosaicML):
def __call__(self, *args, **kwargs):
return self.invoke(*args, **kwargs)
class OpenAILLM(OpenAI):
def __call__(self, *args, **kwargs):
return self.invoke(*args, **kwargs)
class ReplicateLLM(Replicate):
def __call__(self, *args, **kwargs):
return self.invoke(*args, **kwargs)
class AzureOpenAILLM(AzureChatOpenAI):
def __call__(self, *args, **kwargs):
return self.invoke(*args, **kwargs)
class OpenAIChatLLM(OpenAIChat):
def __call__(self, *args, **kwargs):
return self.invoke(*args, **kwargs)
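These thin subclasses keep the legacy callable interface while delegating to LangChain's newer `invoke` API. A usage sketch, with a placeholder key and model name:

```python
# The __call__ shims let older call sites keep invoking models directly.
llm = OpenAIChatLLM(openai_api_key="sk-placeholder", model_name="gpt-4")

# Equivalent to llm.invoke("Hello!")
print(llm("Hello!"))
```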

@ -10,7 +10,7 @@ import torch
from cachetools import TTLCache
from diffusers import StableDiffusionXLPipeline
from PIL import Image
from pydantic import validator
from pydantic import field_validator
from termcolor import colored
@ -72,7 +72,8 @@ class SSD1B:
arbitrary_types_allowed = True
@validator("max_retries", "time_seconds")
@field_validator("max_retries", "time_seconds")
@classmethod
def must_be_positive(cls, value):
if value <= 0:
raise ValueError("Must be positive")
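The hunk above is part of the Pydantic v2 migration: `@validator` becomes `@field_validator` plus an explicit `@classmethod`. A self-contained sketch of the same pattern on a hypothetical model:

```python
from pydantic import BaseModel, ValidationError, field_validator

class RetryConfig(BaseModel):  # illustrative model, not from the codebase
    max_retries: int = 3
    time_seconds: int = 60

    @field_validator("max_retries", "time_seconds")
    @classmethod
    def must_be_positive(cls, value: int) -> int:
        if value <= 0:
            raise ValueError("Must be positive")
        return value

try:
    RetryConfig(max_retries=0)
except ValidationError as e:
    print(e)  # max_retries: Value error, Must be positive
```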

@ -1,44 +0,0 @@
from unittest.mock import MagicMock
from swarms.models.fire_function import FireFunctionCaller
def test_fire_function_caller_run(mocker):
# Create mock model and tokenizer
model = MagicMock()
tokenizer = MagicMock()
mocker.patch.object(FireFunctionCaller, "model", model)
mocker.patch.object(FireFunctionCaller, "tokenizer", tokenizer)
# Create mock task and arguments
task = "Add 2 and 3"
args = (2, 3)
kwargs = {}
# Create mock generated_ids and decoded output
generated_ids = [1, 2, 3]
decoded_output = "5"
model.generate.return_value = generated_ids
tokenizer.batch_decode.return_value = [decoded_output]
# Create FireFunctionCaller instance
fire_function_caller = FireFunctionCaller()
# Patch print so the decoded output can be asserted on below
mock_print = mocker.patch("builtins.print")
# Run the function
fire_function_caller.run(task, *args, **kwargs)
# Assert model.generate was called with the correct inputs
model.generate.assert_called_once_with(
tokenizer.apply_chat_template.return_value,
max_new_tokens=fire_function_caller.max_tokens,
*args,
**kwargs,
)
# Assert tokenizer.batch_decode was called with the correct inputs
tokenizer.batch_decode.assert_called_once_with(generated_ids)
# Assert the decoded output was printed
mock_print.assert_any_call(decoded_output)

@ -1,97 +0,0 @@
import torch
from swarms.models.base_llm import AbstractLLM
if torch.cuda.is_available() or torch.cuda.device_count() > 0:
# Download vllm with pip
try:
from vllm import LLM, SamplingParams
except ImportError as error:
print(f"[ERROR] [vLLM] {error}")
raise error
else:
from swarms.models.huggingface import HuggingfaceLLM as LLM
SamplingParams = None
class vLLM(AbstractLLM):
"""vLLM model
Args:
model_name (str, optional): Name or path of the model to load. Defaults to "facebook/opt-13b".
tensor_parallel_size (int, optional): Number of GPUs to shard the model across. Defaults to 4.
trust_remote_code (bool, optional): Whether to trust custom code from the model repo. Defaults to False.
revision (str, optional): Model revision to load. Defaults to None.
temperature (float, optional): Sampling temperature. Defaults to 0.5.
top_p (float, optional): Nucleus sampling probability mass. Defaults to 0.95.
*args: Additional positional arguments forwarded to the engine.
**kwargs: Additional keyword arguments forwarded to the engine.
Methods:
run: Run the vLLM model on a task.
Raises:
error: Re-raises any exception encountered while loading or generating.
Examples:
>>> from swarms.models.vllm import vLLM
>>> vllm = vLLM()
>>> vllm.run("Hello world!")
"""
def __init__(
self,
model_name: str = "facebook/opt-13b",
tensor_parallel_size: int = 4,
trust_remote_code: bool = False,
revision: str = None,
temperature: float = 0.5,
top_p: float = 0.95,
*args,
**kwargs,
):
super().__init__(*args, **kwargs)
self.model_name = model_name
self.tensor_parallel_size = tensor_parallel_size
self.trust_remote_code = trust_remote_code
self.revision = revision
self.top_p = top_p
# LLM engine (vllm.LLM takes the weights path via `model=`)
self.llm = LLM(
model=self.model_name,
tensor_parallel_size=self.tensor_parallel_size,
trust_remote_code=self.trust_remote_code,
revision=self.revision,
*args,
**kwargs,
)
# Sampling parameters
self.sampling_params = SamplingParams(
temperature=temperature, top_p=top_p, *args, **kwargs
)
def run(self, task: str = None, *args, **kwargs):
"""Run the vLLM model
Args:
task (str, optional): The prompt to generate from. Defaults to None.
Raises:
error: Re-raises any exception from the underlying engine.
Returns:
list: The generation outputs produced by the engine.
"""
try:
return self.llm.generate(
task, self.sampling_params, *args, **kwargs
)
except Exception as error:
print(f"[ERROR] [vLLM] [run] {error}")
raise error

@ -1,11 +1,31 @@
import datetime
from pydantic import BaseModel, Field
time = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
class Thoughts(BaseModel):
text: str = Field(..., title="Thoughts")
reasoning: str = Field(..., title="Reasoning")
plan: str = Field(..., title="Plan")
class Command(BaseModel):
name: str = Field(..., title="Command Name")
args: dict = Field({}, title="Command Arguments")
class ResponseFormat(BaseModel):
thoughts: Thoughts = Field(..., title="Thoughts")
command: Command = Field(..., title="Command")
response_json = ResponseFormat.model_json_schema()
def worker_tools_sop_promp(name: str, memory: str, time=time):
out = """
You are {name},
out = f"""
You are {name},
Your decisions must always be made independently without seeking user assistance.
Play to your strengths as an LLM and pursue simple strategies with no legal complications.
If you have completed all your tasks, make sure to use the 'finish' command.
@ -29,7 +49,7 @@ def worker_tools_sop_promp(name: str, memory: str, time=time):
1. Internet access for searches and information gathering.
2. Long Term memory management.
3. GPT-3.5 powered Agents for delegation of simple tasks.
3. Agents for delegation of simple tasks.
4. File output.
Performance Evaluation:
@ -39,29 +59,18 @@ def worker_tools_sop_promp(name: str, memory: str, time=time):
3. Reflect on past decisions and strategies to refine your approach.
4. Every command has a cost, so be smart and efficient. Aim to complete tasks in the least number of steps.
You should only respond in JSON format as described below
Response Format:
{
'thoughts': {
'text': 'thoughts',
'reasoning': 'reasoning',
'plan': '- short bulleted - list that conveys - long-term plan',
'criticism': 'constructive self-criticism',
'speak': 'thoughts summary to say to user'
},
'command': {
'name': 'command name',
'args': {
'arg name': 'value'
}
}
}
You should only respond in JSON format, as described in the Response Format below. Respond only in markdown within 6 backticks; the JSON will be inside the markdown code block.
```
{response_json}
```
Ensure the response can be parsed by Python json.loads
System: The current time and date is {time}
System: This reminds you of these events from your past:
[{memory}]
Human: Determine which next command to use, and respond using the format specified above:
""".format(name=name, time=time, memory=memory)
"""
return str(out)
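The prompt now embeds the JSON schema generated from the Pydantic models above rather than a hand-written template. A sketch of rendering the prompt and validating a model reply back through the same models:

```python
import json

# Render the SOP prompt with the generated schema embedded
prompt = worker_tools_sop_promp("worker-01", memory="[]")

# A well-formed reply validates back into the Pydantic models
reply = (
    '{"thoughts": {"text": "t", "reasoning": "r", "plan": "p"},'
    ' "command": {"name": "finish", "args": {}}}'
)
parsed = ResponseFormat(**json.loads(reply))
print(parsed.command.name)  # finish
```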

@ -6,7 +6,7 @@ import random
import sys
import time
import uuid
from typing import Any, Callable, Dict, List, Optional, Tuple
from typing import Any, Callable, Dict, List, Optional, Tuple, Union
import yaml
from loguru import logger
@ -174,7 +174,7 @@ class Agent:
agent_name: str = "swarm-worker-01",
agent_description: str = None,
system_prompt: str = AGENT_SYSTEM_PROMPT_3,
tools: List[BaseTool] = None,
tools: Union[List[BaseTool]] = None,
dynamic_temperature_enabled: Optional[bool] = False,
sop: Optional[str] = None,
sop_list: Optional[List[str]] = None,

@ -7,7 +7,7 @@ from pydantic import BaseModel, Field
class TaskInput(BaseModel):
__root__: Any = Field(
task: Any = Field(
...,
description=(
"The input parameters for the task. Any value is allowed."
@ -57,7 +57,7 @@ class ArtifactUpload(BaseModel):
class StepInput(BaseModel):
__root__: Any = Field(
step: Any = Field(
...,
description=(
"Input parameters for the task step. Any value is"
@ -68,7 +68,7 @@ class StepInput(BaseModel):
class StepOutput(BaseModel):
__root__: Any = Field(
step: Any = Field(
...,
description=(
"Output that the task step has produced. Any value is"

@ -18,25 +18,7 @@ from swarms.telemetry.user_utils import (
get_system_info,
get_user_device_data,
)
# # Capture data from the user's device
# posthog.capture(
# "User Device Data",
# str(get_user_device_data()),
# )
# # Capture system information
# posthog.capture(
# "System Information",
# str(system_info()),
# )
# # Capture the user's unique identifier
# posthog.capture(
# "User Unique Identifier",
# str(generate_unique_identifier()),
# )
from swarms.telemetry.sentry_active import activate_sentry
__all__ = [
"log_all_calls",
@ -54,4 +36,5 @@ __all__ = [
"get_package_mismatches",
"system_info",
"get_user_device_data",
"activate_sentry",
]

@ -0,0 +1,20 @@
import os
from dotenv import load_dotenv
import sentry_sdk
load_dotenv()
os.environ["USE_TELEMETRY"] = "True"
use_telemetry = os.getenv("USE_TELEMETRY")
def activate_sentry():
if use_telemetry == "True":
sentry_sdk.init(
dsn="https://5d72dd59551c02f78391d2ea5872ddd4@o4504578305490944.ingest.us.sentry.io/4506951704444928",
traces_sample_rate=1.0,
profiles_sample_rate=1.0,
enable_tracing=True,
debug=True,
)

@ -3,7 +3,6 @@ from swarms.tokenizers.anthropic_tokenizer import (
import_optional_dependency,
)
from swarms.tokenizers.base_tokenizer import BaseTokenizer
from swarms.tokenizers.cohere_tokenizer import CohereTokenizer
from swarms.tokenizers.openai_tokenizers import OpenAITokenizer
from swarms.tokenizers.r_tokenizers import (
HuggingFaceTokenizer,
@ -19,5 +18,4 @@ __all__ = [
"OpenAITokenizer",
"import_optional_dependency",
"AnthropicTokenizer",
"CohereTokenizer",
]

@ -1,36 +0,0 @@
from __future__ import annotations
from dataclasses import dataclass
from cohere import Client
@dataclass
class CohereTokenizer:
"""
A tokenizer class for Cohere models.
"""
model: str
client: Client
DEFAULT_MODEL: str = "command"
DEFAULT_MAX_TOKENS: int = 2048
max_tokens: int = DEFAULT_MAX_TOKENS
def count_tokens(self, text: str | list) -> int:
"""
Count the number of tokens in the given text.
Args:
text (str | list): The input text to tokenize.
Returns:
int: The number of tokens in the text.
Raises:
ValueError: If the input text is not a string.
"""
if isinstance(text, str):
return len(self.client.tokenize(text=text).tokens)
else:
raise ValueError("Text must be a string.")
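For reference, a sketch of how this (now removed) tokenizer was constructed; it needs a live `cohere.Client`, and the key below is a placeholder:

```python
from cohere import Client

client = Client("my-cohere-api-key")  # placeholder key
tokenizer = CohereTokenizer(model="command", client=client)
print(tokenizer.count_tokens("Hello world!"))  # number of tokens as an int
```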

@ -1,3 +1,4 @@
from swarms.tools.tool import BaseTool, Tool, StructuredTool, tool
from swarms.tools.code_executor import CodeExecutor
from swarms.tools.exec_tool import (
AgentAction,
@ -6,13 +7,12 @@ from swarms.tools.exec_tool import (
execute_tool_by_name,
preprocess_json_input,
)
from swarms.tools.tool import BaseTool, StructuredTool, Tool, tool
from swarms.tools.tool_utils import (
execute_tools,
extract_tool_commands,
parse_and_execute_tools,
tool_find_by_name,
scrape_tool_func_docs,
tool_find_by_name,
)
__all__ = [

@ -1,953 +1,6 @@
"""Base implementation for tools or skills."""
from __future__ import annotations
import asyncio
import inspect
import warnings
from abc import abstractmethod
from functools import partial
from inspect import signature
from typing import Any, Awaitable, Callable, Dict, Union
from langchain.callbacks.base import BaseCallbackManager
from langchain.callbacks.manager import (
AsyncCallbackManager,
AsyncCallbackManagerForToolRun,
CallbackManager,
CallbackManagerForToolRun,
Callbacks,
)
from langchain.load.serializable import Serializable
from langchain.schema.runnable import (
Runnable,
RunnableConfig,
RunnableSerializable,
)
from pydantic import (
BaseModel,
Extra,
Field,
create_model,
root_validator,
validate_arguments,
)
class SchemaAnnotationError(TypeError):
"""Raised when 'args_schema' is missing or has an incorrect type annotation."""
def _create_subset_model(
name: str, model: BaseModel, field_names: list
) -> type[BaseModel]:
"""Create a pydantic model with only a subset of model's fields."""
fields = {}
for field_name in field_names:
field = model.__fields__[field_name]
fields[field_name] = (field.outer_type_, field.field_info)
return create_model(name, **fields) # type: ignore
def _get_filtered_args(
inferred_model: type[BaseModel],
func: Callable,
) -> dict:
"""Get the arguments from a function's signature."""
schema = inferred_model.schema()["properties"]
valid_keys = signature(func).parameters
return {
k: schema[k]
for k in valid_keys
if k not in ("run_manager", "callbacks")
}
class _SchemaConfig:
"""Configuration for the pydantic model."""
extra: Any = Extra.forbid
arbitrary_types_allowed: bool = True
def create_schema_from_function(
model_name: str,
func: Callable,
) -> type[BaseModel]:
"""Create a pydantic schema from a function's signature.
Args:
model_name: Name to assign to the generated pydantic schema
func: Function to generate the schema from
Returns:
A pydantic model with the same arguments as the function
"""
# https://docs.pydantic.dev/latest/usage/validation_decorator/
validated = validate_arguments(func, config=_SchemaConfig) # type: ignore
inferred_model = validated.model # type: ignore
if "run_manager" in inferred_model.__fields__:
del inferred_model.__fields__["run_manager"]
if "callbacks" in inferred_model.__fields__:
del inferred_model.__fields__["callbacks"]
# Pydantic adds placeholder virtual fields we need to strip
valid_properties = _get_filtered_args(inferred_model, func)
return _create_subset_model(
f"{model_name}Schema", inferred_model, list(valid_properties)
)
class ToolException(Exception):
"""An optional exception that tool throws when execution error occurs.
When this exception is thrown, the agent will not stop working,
but will handle the exception according to the handle_tool_error
variable of the tool, and the processing result will be returned
to the agent as observation, and printed in red on the console.
"""
class BaseTool(RunnableSerializable[Union[str, Dict], Any]):
"""Interface swarms tools must implement."""
def __init_subclass__(cls, **kwargs: Any) -> None:
"""Create the definition of the new tool class."""
super().__init_subclass__(**kwargs)
args_schema_type = cls.__annotations__.get(
"args_schema", None
)
if args_schema_type is not None:
if (
args_schema_type is None
or args_schema_type == BaseModel
):
# Throw errors for common mis-annotations.
# TODO: Use get_args / get_origin and fully
# specify valid annotations.
typehint_mandate = """
class ChildTool(BaseTool):
...
args_schema: Type[BaseModel] = SchemaClass
..."""
name = cls.__name__
raise SchemaAnnotationError(
f"Tool definition for {name} must include valid"
" type annotations for argument 'args_schema' to"
" behave as expected.\nExpected annotation of"
" 'Type[BaseModel]' but got"
f" '{args_schema_type}'.\nExpected class looks"
f" like:\n{typehint_mandate}"
)
name: str
"""The unique name of the tool that clearly communicates its purpose."""
description: str
"""Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
"""
args_schema: type[BaseModel] | None = None
"""Pydantic model class to validate and parse the tool's input arguments."""
return_direct: bool = False
"""Whether to return the tool's output directly. Setting this to True means
that after the tool is called, the AgentExecutor will stop looping.
"""
verbose: bool = False
"""Whether to log the tool's progress."""
callbacks: Callbacks = Field(default=None, exclude=True)
"""Callbacks to be called during tool execution."""
callback_manager: BaseCallbackManager | None = Field(
default=None, exclude=True
)
"""Deprecated. Please use callbacks instead."""
tags: list[str] | None = None
"""Optional list of tags associated with the tool. Defaults to None
These tags will be associated with each call to this tool,
and passed as arguments to the handlers defined in `callbacks`.
You can use these to eg identify a specific instance of a tool with its use case.
"""
metadata: dict[str, Any] | None = None
"""Optional metadata associated with the tool. Defaults to None
This metadata will be associated with each call to this tool,
and passed as arguments to the handlers defined in `callbacks`.
You can use these to eg identify a specific instance of a tool with its use case.
"""
handle_tool_error: (
bool | str | Callable[[ToolException], str] | None
) = False
"""Handle the content of the ToolException thrown."""
class Config(Serializable.Config):
"""Configuration for this pydantic object."""
arbitrary_types_allowed = True
@property
def is_single_input(self) -> bool:
"""Whether the tool only accepts a single input."""
keys = {k for k in self.args if k != "kwargs"}
return len(keys) == 1
@property
def args(self) -> dict:
if self.args_schema is not None:
return self.args_schema.schema()["properties"]
else:
schema = create_schema_from_function(self.name, self._run)
return schema.schema()["properties"]
# --- Runnable ---
@property
def input_schema(self) -> type[BaseModel]:
"""The tool's input schema."""
if self.args_schema is not None:
return self.args_schema
else:
return create_schema_from_function(self.name, self._run)
def invoke(
self,
input: str | dict,
config: RunnableConfig | None = None,
**kwargs: Any,
) -> Any:
config = config or {}
return self.run(
input,
callbacks=config.get("callbacks"),
tags=config.get("tags"),
metadata=config.get("metadata"),
run_name=config.get("run_name"),
**kwargs,
)
async def ainvoke(
self,
input: str | dict,
config: RunnableConfig | None = None,
**kwargs: Any,
) -> Any:
config = config or {}
return await self.arun(
input,
callbacks=config.get("callbacks"),
tags=config.get("tags"),
metadata=config.get("metadata"),
run_name=config.get("run_name"),
**kwargs,
)
# --- Tool ---
def _parse_input(
self,
tool_input: str | dict,
) -> str | dict[str, Any]:
"""Convert tool input to pydantic model."""
input_args = self.args_schema
if isinstance(tool_input, str):
if input_args is not None:
key_ = next(iter(input_args.__fields__.keys()))
input_args.validate({key_: tool_input})
return tool_input
else:
if input_args is not None:
result = input_args.parse_obj(tool_input)
return {
k: v
for k, v in result.dict().items()
if k in tool_input
}
return tool_input
@root_validator(skip_on_failure=True)
def raise_deprecation(cls, values: dict) -> dict:
"""Raise deprecation warning if callback_manager is used."""
if values.get("callback_manager") is not None:
warnings.warn(
(
"callback_manager is deprecated. Please use"
" callbacks instead."
),
DeprecationWarning,
)
values["callbacks"] = values.pop("callback_manager", None)
return values
@abstractmethod
def _run(
self,
*args: Any,
**kwargs: Any,
) -> Any:
"""Use the tool.
Add run_manager: Optional[CallbackManagerForToolRun] = None
to child implementations to enable tracing,
"""
async def _arun(
self,
*args: Any,
**kwargs: Any,
) -> Any:
"""Use the tool asynchronously.
Add run_manager: Optional[AsyncCallbackManagerForToolRun] = None
to child implementations to enable tracing,
"""
return await asyncio.get_running_loop().run_in_executor(
None,
partial(self._run, **kwargs),
*args,
)
def _to_args_and_kwargs(
self, tool_input: str | dict
) -> tuple[tuple, dict]:
# For backwards compatibility, if run_input is a string,
# pass as a positional argument.
if isinstance(tool_input, str):
return (tool_input,), {}
else:
return (), tool_input
def run(
self,
tool_input: str | dict,
verbose: bool | None = None,
start_color: str | None = "green",
color: str | None = "green",
callbacks: Callbacks = None,
*,
tags: list[str] | None = None,
metadata: dict[str, Any] | None = None,
run_name: str | None = None,
**kwargs: Any,
) -> Any:
"""Run the tool."""
parsed_input = self._parse_input(tool_input)
if not self.verbose and verbose is not None:
verbose_ = verbose
else:
verbose_ = self.verbose
callback_manager = CallbackManager.configure(
callbacks,
self.callbacks,
verbose_,
tags,
self.tags,
metadata,
self.metadata,
)
# TODO: maybe also pass through run_manager is _run supports kwargs
new_arg_supported = signature(self._run).parameters.get(
"run_manager"
)
run_manager = callback_manager.on_tool_start(
{"name": self.name, "description": self.description},
(
tool_input
if isinstance(tool_input, str)
else str(tool_input)
),
color=start_color,
name=run_name,
**kwargs,
)
try:
tool_args, tool_kwargs = self._to_args_and_kwargs(
parsed_input
)
observation = (
self._run(
*tool_args, run_manager=run_manager, **tool_kwargs
)
if new_arg_supported
else self._run(*tool_args, **tool_kwargs)
)
except ToolException as e:
if not self.handle_tool_error:
run_manager.on_tool_error(e)
raise e
elif isinstance(self.handle_tool_error, bool):
if e.args:
observation = e.args[0]
else:
observation = "Tool execution error"
elif isinstance(self.handle_tool_error, str):
observation = self.handle_tool_error
elif callable(self.handle_tool_error):
observation = self.handle_tool_error(e)
else:
raise ValueError(
"Got unexpected type of `handle_tool_error`."
" Expected bool, str or callable. Received:"
f" {self.handle_tool_error}"
)
run_manager.on_tool_end(
str(observation),
color="red",
name=self.name,
**kwargs,
)
return observation
except (Exception, KeyboardInterrupt) as e:
run_manager.on_tool_error(e)
raise e
else:
run_manager.on_tool_end(
str(observation),
color=color,
name=self.name,
**kwargs,
)
return observation
async def arun(
self,
tool_input: str | dict,
verbose: bool | None = None,
start_color: str | None = "green",
color: str | None = "green",
callbacks: Callbacks = None,
*,
tags: list[str] | None = None,
metadata: dict[str, Any] | None = None,
run_name: str | None = None,
**kwargs: Any,
) -> Any:
"""Run the tool asynchronously."""
parsed_input = self._parse_input(tool_input)
if not self.verbose and verbose is not None:
verbose_ = verbose
else:
verbose_ = self.verbose
callback_manager = AsyncCallbackManager.configure(
callbacks,
self.callbacks,
verbose_,
tags,
self.tags,
metadata,
self.metadata,
)
new_arg_supported = signature(self._arun).parameters.get(
"run_manager"
)
run_manager = await callback_manager.on_tool_start(
{"name": self.name, "description": self.description},
(
tool_input
if isinstance(tool_input, str)
else str(tool_input)
),
color=start_color,
name=run_name,
**kwargs,
)
try:
# We then call the tool on the tool input to get an observation
tool_args, tool_kwargs = self._to_args_and_kwargs(
parsed_input
)
observation = (
await self._arun(
*tool_args, run_manager=run_manager, **tool_kwargs
)
if new_arg_supported
else await self._arun(*tool_args, **tool_kwargs)
)
except ToolException as e:
if not self.handle_tool_error:
await run_manager.on_tool_error(e)
raise e
elif isinstance(self.handle_tool_error, bool):
if e.args:
observation = e.args[0]
else:
observation = "Tool execution error"
elif isinstance(self.handle_tool_error, str):
observation = self.handle_tool_error
elif callable(self.handle_tool_error):
observation = self.handle_tool_error(e)
else:
raise ValueError(
"Got unexpected type of `handle_tool_error`."
" Expected bool, str or callable. Received:"
f" {self.handle_tool_error}"
)
await run_manager.on_tool_end(
str(observation),
color="red",
name=self.name,
**kwargs,
)
return observation
except (Exception, KeyboardInterrupt) as e:
await run_manager.on_tool_error(e)
raise e
else:
await run_manager.on_tool_end(
str(observation),
color=color,
name=self.name,
**kwargs,
)
return observation
def __call__(
self, tool_input: str, callbacks: Callbacks = None
) -> str:
"""Make tool callable."""
return self.run(tool_input, callbacks=callbacks)
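The `__init_subclass__` guard above is why subclasses must annotate `args_schema` at the class level. A minimal custom tool sketched against this interface; the names and behavior are illustrative only:

```python
from pydantic import BaseModel

class SearchInput(BaseModel):  # illustrative schema
    query: str

class SearchTool(BaseTool):
    name: str = "search"
    description: str = "Searches a toy in-memory index."
    args_schema: type[BaseModel] = SearchInput

    def _run(self, query: str) -> str:
        return f"results for {query!r}"

# Dict input is validated against SearchInput, then passed to _run
print(SearchTool().run({"query": "swarms"}))
```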
class Tool(BaseTool):
"""Tool that takes in function or coroutine directly."""
description: str = ""
func: Callable[..., str] | None
"""The function to run when the tool is called."""
coroutine: Callable[..., Awaitable[str]] | None = None
"""The asynchronous version of the function."""
# --- Runnable ---
async def ainvoke(
self,
input: str | dict,
config: RunnableConfig | None = None,
**kwargs: Any,
) -> Any:
if not self.coroutine:
# If the tool does not implement async, fall back to default implementation
return await asyncio.get_running_loop().run_in_executor(
None, partial(self.invoke, input, config, **kwargs)
)
return await super().ainvoke(input, config, **kwargs)
# --- Tool ---
@property
def args(self) -> dict:
"""The tool's input arguments."""
if self.args_schema is not None:
return self.args_schema.schema()["properties"]
# For backwards compatibility, if the function signature is ambiguous,
# assume it takes a single string input.
return {"tool_input": {"type": "string"}}
def _to_args_and_kwargs(
self, tool_input: str | dict
) -> tuple[tuple, dict]:
"""Convert tool input to pydantic model."""
args, kwargs = super()._to_args_and_kwargs(tool_input)
# For backwards compatibility. The tool must be run with a single input
all_args = list(args) + list(kwargs.values())
if len(all_args) != 1:
raise ToolException(
"Too many arguments to single-input tool"
f" {self.name}. Args: {all_args}"
)
return tuple(all_args), {}
def _run(
self,
*args: Any,
run_manager: CallbackManagerForToolRun | None = None,
**kwargs: Any,
) -> Any:
"""Use the tool."""
if self.func:
new_argument_supported = signature(
self.func
).parameters.get("callbacks")
return (
self.func(
*args,
callbacks=(
run_manager.get_child()
if run_manager
else None
),
**kwargs,
)
if new_argument_supported
else self.func(*args, **kwargs)
)
raise NotImplementedError("Tool does not support sync")
async def _arun(
self,
*args: Any,
run_manager: AsyncCallbackManagerForToolRun | None = None,
**kwargs: Any,
) -> Any:
"""Use the tool asynchronously."""
if self.coroutine:
new_argument_supported = signature(
self.coroutine
).parameters.get("callbacks")
return (
await self.coroutine(
*args,
callbacks=(
run_manager.get_child()
if run_manager
else None
),
**kwargs,
)
if new_argument_supported
else await self.coroutine(*args, **kwargs)
)
else:
return await asyncio.get_running_loop().run_in_executor(
None,
partial(self._run, run_manager=run_manager, **kwargs),
*args,
)
# TODO: this is for backwards compatibility, remove in future
def __init__(
self,
name: str,
func: Callable | None,
description: str,
**kwargs: Any,
) -> None:
"""Initialize tool."""
super().__init__(
name=name, func=func, description=description, **kwargs
)
@classmethod
def from_function(
cls,
func: Callable | None,
name: str, # We keep these required to support backwards compatibility
description: str,
return_direct: bool = False,
args_schema: type[BaseModel] | None = None,
coroutine: (Callable[..., Awaitable[Any]])
| None = None, # This is last for compatibility, but should be after func
**kwargs: Any,
) -> Tool:
"""Initialize tool from a function."""
if func is None and coroutine is None:
raise ValueError(
"Function and/or coroutine must be provided"
)
return cls(
name=name,
func=func,
coroutine=coroutine,
description=description,
return_direct=return_direct,
args_schema=args_schema,
**kwargs,
)
class StructuredTool(BaseTool):
"""Tool that can operate on any number of inputs."""
description: str = ""
args_schema: type[BaseModel] = Field(
..., description="The tool schema."
)
"""The input arguments' schema."""
func: Callable[..., Any] | None
"""The function to run when the tool is called."""
coroutine: Callable[..., Awaitable[Any]] | None = None
"""The asynchronous version of the function."""
# --- Runnable ---
async def ainvoke(
self,
input: str | dict,
config: RunnableConfig | None = None,
**kwargs: Any,
) -> Any:
if not self.coroutine:
# If the tool does not implement async, fall back to default implementation
return await asyncio.get_running_loop().run_in_executor(
None, partial(self.invoke, input, config, **kwargs)
)
return await super().ainvoke(input, config, **kwargs)
# --- Tool ---
@property
def args(self) -> dict:
"""The tool's input arguments."""
return self.args_schema.schema()["properties"]
def _run(
self,
*args: Any,
run_manager: CallbackManagerForToolRun | None = None,
**kwargs: Any,
) -> Any:
"""Use the tool."""
if self.func:
new_argument_supported = signature(
self.func
).parameters.get("callbacks")
return (
self.func(
*args,
callbacks=(
run_manager.get_child()
if run_manager
else None
),
**kwargs,
)
if new_argument_supported
else self.func(*args, **kwargs)
)
raise NotImplementedError("Tool does not support sync")
async def _arun(
self,
*args: Any,
run_manager: AsyncCallbackManagerForToolRun | None = None,
**kwargs: Any,
) -> str:
"""Use the tool asynchronously."""
if self.coroutine:
new_argument_supported = signature(
self.coroutine
).parameters.get("callbacks")
return (
await self.coroutine(
*args,
callbacks=(
run_manager.get_child()
if run_manager
else None
),
**kwargs,
)
if new_argument_supported
else await self.coroutine(*args, **kwargs)
)
return await asyncio.get_running_loop().run_in_executor(
None,
partial(self._run, run_manager=run_manager, **kwargs),
*args,
)
@classmethod
def from_function(
cls,
func: Callable | None = None,
coroutine: Callable[..., Awaitable[Any]] | None = None,
name: str | None = None,
description: str | None = None,
return_direct: bool = False,
args_schema: type[BaseModel] | None = None,
infer_schema: bool = True,
**kwargs: Any,
) -> StructuredTool:
"""Create tool from a given function.
A classmethod that helps to create a tool from a function.
Args:
func: The function from which to create a tool
coroutine: The async function from which to create a tool
name: The name of the tool. Defaults to the function name
description: The description of the tool. Defaults to the function docstring
return_direct: Whether to return the result directly or as a callback
args_schema: The schema of the tool's input arguments
infer_schema: Whether to infer the schema from the function's signature
**kwargs: Additional arguments to pass to the tool
Returns:
The tool
Examples:
.. code-block:: python
def add(a: int, b: int) -> int:
\"\"\"Add two numbers\"\"\"
return a + b
tool = StructuredTool.from_function(add)
tool.run(1, 2) # 3
"""
if func is not None:
source_function = func
elif coroutine is not None:
source_function = coroutine
else:
raise ValueError(
"Function and/or coroutine must be provided"
)
name = name or source_function.__name__
description = description or source_function.__doc__
if description is None:
raise ValueError(
"Function must have a docstring if description not"
" provided."
)
# Description example:
# search_api(query: str) - Searches the API for the query.
sig = signature(source_function)
description = f"{name}{sig} - {description.strip()}"
_args_schema = args_schema
if _args_schema is None and infer_schema:
_args_schema = create_schema_from_function(
f"{name}Schema", source_function
)
return cls(
name=name,
func=func,
coroutine=coroutine,
args_schema=_args_schema,
description=description,
return_direct=return_direct,
**kwargs,
)
def tool(
*args: str | Callable | Runnable,
return_direct: bool = False,
args_schema: type[BaseModel] | None = None,
infer_schema: bool = True,
) -> Callable:
"""Make tools out of functions, can be used with or without arguments.
Args:
*args: The arguments to the tool.
return_direct: Whether to return directly from the tool rather
than continuing the agent loop.
args_schema: optional argument schema for user to specify
infer_schema: Whether to infer the schema of the arguments from
the function's signature. This also makes the resultant tool
accept a dictionary input to its `run()` function.
Requires:
- Function must be of type (str) -> str
- Function must have a docstring
Examples:
.. code-block:: python
@tool
def search_api(query: str) -> str:
# Searches the API for the query.
return
@tool("search", return_direct=True)
def search_api(query: str) -> str:
# Searches the API for the query.
return
"""
def _make_with_name(tool_name: str) -> Callable:
def _make_tool(dec_func: Callable | Runnable) -> BaseTool:
if isinstance(dec_func, Runnable):
runnable = dec_func
if (
runnable.input_schema.schema().get("type")
!= "object"
):
raise ValueError(
"Runnable must have an object schema."
)
async def ainvoke_wrapper(
callbacks: Callbacks | None = None,
**kwargs: Any,
) -> Any:
return await runnable.ainvoke(
kwargs, {"callbacks": callbacks}
)
def invoke_wrapper(
callbacks: Callbacks | None = None,
**kwargs: Any,
) -> Any:
return runnable.invoke(
kwargs, {"callbacks": callbacks}
)
coroutine = ainvoke_wrapper
func = invoke_wrapper
schema: type[BaseModel] | None = runnable.input_schema
description = repr(runnable)
elif inspect.iscoroutinefunction(dec_func):
coroutine = dec_func
func = None
schema = args_schema
description = None
else:
coroutine = None
func = dec_func
schema = args_schema
description = None
if infer_schema or args_schema is not None:
return StructuredTool.from_function(
func,
coroutine,
name=tool_name,
description=description,
return_direct=return_direct,
args_schema=schema,
infer_schema=infer_schema,
)
# If someone doesn't want a schema applied, we must treat it as
# a simple string->string function
if func.__doc__ is None:
raise ValueError(
"Function must have a docstring if description"
" not provided and infer_schema is False."
)
return Tool(
name=tool_name,
func=func,
description=f"{tool_name} tool",
return_direct=return_direct,
coroutine=coroutine,
)
return _make_tool
if (
len(args) == 2
and isinstance(args[0], str)
and isinstance(args[1], Runnable)
):
return _make_with_name(args[0])(args[1])
elif len(args) == 1 and isinstance(args[0], str):
# if the argument is a string, then we use the string as the tool name
# Example usage: @tool("search", return_direct=True)
return _make_with_name(args[0])
elif len(args) == 1 and callable(args[0]):
# if the argument is a function, then we use the function name as the tool name
# Example usage: @tool
return _make_with_name(args[0].__name__)(args[0])
elif len(args) == 0:
# if there are no arguments, then we use the function name as the tool name
# Example usage: @tool(return_direct=True)
def _partial(func: Callable[[str], str]) -> BaseTool:
return _make_with_name(func.__name__)(func)
return _partial
else:
raise ValueError("Too many arguments for tool decorator")
from langchain.tools import (
BaseTool,
Tool,
StructuredTool,
tool,
) # noqa F401
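A short sketch of the decorator in action; with `infer_schema=True` (the default) the result is a `StructuredTool` whose `run()` accepts a dict matching the inferred schema:

```python
@tool
def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b

# The docstring becomes the description; the schema comes from the signature
print(multiply.run({"a": 6, "b": 7}))  # 42
```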

@ -0,0 +1,65 @@
from typing import Any, List, Union
from pydantic import BaseModel
from swarms.tools.tool import BaseTool
from swarms.utils.loguru_logger import logger
class OmniTool(BaseModel):
"""
A class representing an OmniTool.
Attributes:
tools (Union[List[BaseTool], List[BaseModel], List[Any]]): A list of tools.
verbose (bool): A flag indicating whether to enable verbose mode.
Methods:
transform_models_to_tools(): Transforms models to tools.
__call__(*args, **kwargs): Calls the tools.
"""
tools: Union[List[BaseTool], List[BaseModel], List[Any]]
verbose: bool = False
def transform_models_to_tools(self):
"""
Transforms models to tools.
"""
for i, tool in enumerate(self.tools):
if isinstance(tool, BaseModel):
tool_json = tool.model_dump_json()
# Assuming BaseTool has a method to load from json
self.tools[i] = BaseTool.load_from_json(tool_json)
def __call__(self, *args, **kwargs):
"""
Calls the tools.
Args:
*args: Variable length argument list.
**kwargs: Arbitrary keyword arguments.
Returns:
Tuple: A tuple containing the arguments and keyword arguments.
"""
try:
self.transform_models_to_tools()
logger.info(f"Number of tools: {len(self.tools)}")
try:
for tool in self.tools:
logger.info(f"Running tool: {tool}")
tool(*args, **kwargs)
except Exception as e:
logger.error(
f"Error occurred while running tools: {e}"
)
return args, kwargs
except Exception as error:
logger.error(
f"Error occurred while running tools: {error}"
)
return args, kwargs
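A sketch of driving `OmniTool` with plain callables; note that `__call__` runs every tool with the same arguments and returns the `(args, kwargs)` pair rather than the tools' outputs:

```python
def greet(name: str) -> None:
    print(f"Hello, {name}!")

def shout(name: str) -> None:
    print(f"HELLO, {name.upper()}!")

# Plain functions are accepted under the List[Any] branch of `tools`
omni = OmniTool(tools=[greet, shout], verbose=True)
omni("swarms")  # each tool receives the same arguments
```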

@ -3,7 +3,7 @@ import json
from pydantic import BaseModel
def base_model_schema_to_json(model: BaseModel):
def base_model_to_json(model: BaseModel, indent: int = 3):
"""
Converts the JSON schema of a base model to a formatted JSON string.
@ -13,7 +13,8 @@ def base_model_schema_to_json(model: BaseModel):
Returns:
str: The JSON schema of the base model as a formatted JSON string.
"""
return json.dumps(model.model_json_schema(), indent=2)
out = model.model_json_schema()
return str_to_json(out, indent=indent)
def extract_json_from_str(response: str):
@ -34,17 +35,16 @@ def extract_json_from_str(response: str):
return json.loads(response[json_start : json_end + 1])
def base_model_to_json(base_model_instance: BaseModel) -> str:
def str_to_json(response: str, indent: int = 3):
"""
Convert a Pydantic base model instance to a JSON string.
Converts a string representation of JSON to a JSON object.
Args:
base_model_instance (BaseModel): Instance of the Pydantic base model.
response (str): The string representation of JSON.
indent (int, optional): The number of spaces to use for indentation in the JSON output. Defaults to 3.
Returns:
str: JSON string representation of the base model instance.
"""
model_dict = base_model_instance.dict()
json_string = json.dumps(model_dict)
return json_string
str: The JSON object as a string.
"""
return json.dumps(response, indent=indent)
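With this change, `base_model_to_json` now takes a model class and returns its formatted JSON schema via `str_to_json`, instead of serializing an instance. A sketch with an illustrative model:

```python
from pydantic import BaseModel, Field

class Person(BaseModel):  # illustrative schema
    name: str = Field(..., title="Name of the person")
    age: int = Field(..., title="Age of the person")

# Prints the JSON schema for Person, indented with 3 spaces
print(base_model_to_json(Person, indent=3))
```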

@ -1,7 +1,7 @@
from abc import ABC
from typing import Any, Dict, List, Literal, TypedDict, Union, cast
from pydantic import BaseModel, PrivateAttr
from pydantic import ConfigDict, BaseModel, PrivateAttr
class BaseSerialized(TypedDict):
@ -65,8 +65,7 @@ class Serializable(BaseModel, ABC):
"""
return {}
class Config:
extra = "ignore"
model_config = ConfigDict(extra="ignore")
_lc_kwargs = PrivateAttr(default_factory=dict)

@ -1,93 +0,0 @@
from typing import Dict, List, Optional, Union
class AbstractWorker:
"""(In preview) An abstract class for AI worker.
A worker can communicate with other workers and perform actions.
Different workers can differ in what actions they perform in the `receive` method.
"""
def __init__(
self,
name: str,
):
"""
Args:
name (str): name of the worker.
"""
# a dictionary of conversations, default value is list
self._name = name
@property
def name(self):
"""Get the name of the worker."""
return self._name
def run(self, task: str):
"""Run the worker agent once"""
def send(
self,
message: Union[Dict, str],
recipient, # add AbstractWorker
request_reply: Optional[bool] = None,
):
"""(Abstract method) Send a message to another worker."""
async def a_send(
self,
message: Union[Dict, str],
recipient, # add AbstractWorker
request_reply: Optional[bool] = None,
):
"""(Abstract async method) Send a message to another worker."""
def receive(
self,
message: Union[Dict, str],
sender, # add AbstractWorker
request_reply: Optional[bool] = None,
):
"""(Abstract method) Receive a message from another worker."""
async def a_receive(
self,
message: Union[Dict, str],
sender, # add AbstractWorker
request_reply: Optional[bool] = None,
):
"""(Abstract async method) Receive a message from another worker."""
def reset(self):
"""(Abstract method) Reset the worker."""
def generate_reply(
self,
messages: Optional[List[Dict]] = None,
sender=None, # Optional["AbstractWorker"] = None,
**kwargs,
) -> Union[str, Dict, None]:
"""(Abstract method) Generate a reply based on the received messages.
Args:
messages (list[dict]): a list of messages received.
sender: sender of an Agent instance.
Returns:
str or dict or None: the generated reply. If None, no reply is generated.
"""
async def a_generate_reply(
self,
messages: Optional[List[Dict]] = None,
sender=None, # Optional["AbstractWorker"] = None,
**kwargs,
) -> Union[str, Dict, None]:
"""(Abstract async method) Generate a reply based on the received messages.
Args:
messages (list[dict]): a list of messages received.
sender: sender of an Agent instance.
Returns:
str or dict or None: the generated reply. If None, no reply is generated.
"""
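For reference, a minimal concrete worker against this (now removed) interface; the echo behavior is purely illustrative:

```python
class EchoWorker(AbstractWorker):
    """Toy worker that echoes incoming messages."""

    def receive(self, message, sender, request_reply=None):
        print(f"{self.name} received from {sender.name}: {message}")

alice = EchoWorker("alice")
bob = EchoWorker("bob")
bob.receive("ping", alice)  # bob received from alice: ping
```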