Commit a59a39c43f (parent 1b64ef07fb) by Kye, branch pull/453/head

@ -152,11 +152,11 @@ The Agent class provides built-in support for long-term memory, allowing agents
```python
from swarms.memory import AbstractVectorDatabase
from swarms.memory import BaseVectorDatabase
from swarms import Agent
class CustomMemory(AbstractVectorDatabase):
class CustomMemory(BaseVectorDatabase):
    def __init__(self, *args, **kwargs):
@ -196,7 +196,7 @@ class MyCustomAgent(Agent):
```
In the example above, we define a new `CustomMemory` class that inherits from the `AbstractVectorDatabase` class provided by the Agent class framework. Within the `CustomMemory` class, you can implement specialized memory management logic, such as custom indexing, retrieval, and storage mechanisms.
In the example above, we define a new `CustomMemory` class that inherits from the `BaseVectorDatabase` class provided by the Agent class framework. Within the `CustomMemory` class, you can implement specialized memory management logic, such as custom indexing, retrieval, and storage mechanisms.
Next, within the `MyCustomAgent` class, we initialize an instance of the `CustomMemory` class and assign it to the `self.long_term_memory` attribute. This custom memory instance can then be utilized within the overridden `run` method, where you can query the memory and process the results as needed.
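As a hedged sketch of what that override might look like (the `query` method name and the exact `Agent.run` signature are assumptions based on the surrounding examples, not a definitive API):
```python
from swarms import Agent


class MyCustomAgent(Agent):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Attach the custom vector database as long-term memory
        self.long_term_memory = CustomMemory()

    def run(self, task, *args, **kwargs):
        # Retrieve context relevant to the task from long-term memory
        context = self.long_term_memory.query(task)
        # Prepend the retrieved context before delegating to the base logic
        return super().run(f"Context: {context}\nTask: {task}", *args, **kwargs)
```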

@ -167,10 +167,10 @@ nav:
- groupchatmanager: "swarms/structs/groupchatmanager.md"
- MajorityVoting: "swarms/structs/majorityvoting.md"
- swarms.memory:
- Building Custom Vector Memory Databases with the AbstractVectorDatabase Class: "swarms/memory/diy_memory.md"
- Building Custom Vector Memory Databases with the BaseVectorDatabase Class: "swarms/memory/diy_memory.md"
- ShortTermMemory: "swarms/memory/short_term_memory.md"
- Guides:
- Building Custom Vector Memory Databases with the AbstractVectorDatabase Class: "swarms/memory/diy_memory.md"
- Building Custom Vector Memory Databases with the BaseVectorDatabase Class: "swarms/memory/diy_memory.md"
- How to Create A Custom Language Model: "swarms/models/custom_model.md"
- Deploying Azure OpenAI in Production, A Comprehensive Guide: "swarms/models/azure_openai.md"
- DIY Build Your Own Agent: "diy_your_own_agent.md"

@ -12,7 +12,7 @@ The primary objective of the `ToolAgent` class is to amplify the efficiency of d
The `ToolAgent` class has the following definition:
```python
class ToolAgent(AbstractLLM):
class ToolAgent(BaseLLM):
def __init__(
self,
name: str,

@ -1073,9 +1073,9 @@ agent.run(task=task, img=img)
### Swarms Compliant Model Interface
```python
from swarms import AbstractLLM
from swarms import BaseLLM
class vLLMLM(AbstractLLM):
class vLLMLM(BaseLLM):
def __init__(self, model_name='default_model', tensor_parallel_size=1, *args, **kwargs):
super().__init__(*args, **kwargs)
self.model_name = model_name

@ -1,20 +1,20 @@
# Building Custom Vector Memory Databases with the AbstractVectorDatabase Class
# Building Custom Vector Memory Databases with the BaseVectorDatabase Class
In the age of large language models (LLMs) and AI-powered applications, efficient memory management has become a crucial component. Vector databases, which store and retrieve data in high-dimensional vector spaces, have emerged as powerful tools for handling the vast amounts of data generated and consumed by AI systems. However, integrating vector databases into your applications can be a daunting task, requiring in-depth knowledge of their underlying architectures and APIs.
Enter the `AbstractVectorDatabase` class, a powerful abstraction layer designed to simplify the process of creating and integrating custom vector memory databases into your AI applications. By inheriting from this class, developers can build tailored vector database solutions that seamlessly integrate with their existing systems, enabling efficient storage, retrieval, and manipulation of high-dimensional data.
Enter the `BaseVectorDatabase` class, a powerful abstraction layer designed to simplify the process of creating and integrating custom vector memory databases into your AI applications. By inheriting from this class, developers can build tailored vector database solutions that seamlessly integrate with their existing systems, enabling efficient storage, retrieval, and manipulation of high-dimensional data.
In this comprehensive guide, we'll explore the `AbstractVectorDatabase` class in detail, covering its core functionality and diving deep into the process of creating custom vector memory databases using popular solutions like PostgreSQL, Pinecone, Chroma, FAISS, and more. Whether you're a seasoned AI developer or just starting to explore the world of vector databases, this guide will provide you with the knowledge and tools necessary to build robust, scalable, and efficient memory solutions for your AI applications.
In this comprehensive guide, we'll explore the `BaseVectorDatabase` class in detail, covering its core functionality and diving deep into the process of creating custom vector memory databases using popular solutions like PostgreSQL, Pinecone, Chroma, FAISS, and more. Whether you're a seasoned AI developer or just starting to explore the world of vector databases, this guide will provide you with the knowledge and tools necessary to build robust, scalable, and efficient memory solutions for your AI applications.
## Understanding the AbstractVectorDatabase Class
## Understanding the BaseVectorDatabase Class
Before we dive into the implementation details, let's take a closer look at the `AbstractVectorDatabase` class and its core functionality.
Before we dive into the implementation details, let's take a closer look at the `BaseVectorDatabase` class and its core functionality.
The `AbstractVectorDatabase` class is an abstract base class that defines the interface for interacting with a vector database. It serves as a blueprint for creating concrete implementations of vector databases, ensuring a consistent and standardized approach to database operations across different systems.
The `BaseVectorDatabase` class is an abstract base class that defines the interface for interacting with a vector database. It serves as a blueprint for creating concrete implementations of vector databases, ensuring a consistent and standardized approach to database operations across different systems.
The class provides a set of abstract methods that define the essential functionality required for working with vector databases, such as connecting to the database, executing queries, and performing CRUD (Create, Read, Update, Delete) operations.
Here's a breakdown of the abstract methods defined in the `AbstractVectorDatabase` class:
Here's a breakdown of the abstract methods defined in the `BaseVectorDatabase` class:
1. `connect()`: This method establishes a connection to the vector database.
@ -34,22 +34,22 @@ Here's a breakdown of the abstract methods defined in the `AbstractVectorDatabas
9. `delete(message)`: This method deletes a record from the vector database.
By inheriting from the `AbstractVectorDatabase` class and implementing these abstract methods, developers can create concrete vector database implementations tailored to their specific needs and requirements.
By inheriting from the `BaseVectorDatabase` class and implementing these abstract methods, developers can create concrete vector database implementations tailored to their specific needs and requirements.
## Creating a Custom Vector Memory Database
Now that we have a solid understanding of the `AbstractVectorDatabase` class, let's dive into the process of creating a custom vector memory database by inheriting from this class. Throughout this guide, we'll explore various vector database solutions, including PostgreSQL, Pinecone, Chroma, FAISS, and more, showcasing how to integrate them seamlessly into your AI applications.
Now that we have a solid understanding of the `BaseVectorDatabase` class, let's dive into the process of creating a custom vector memory database by inheriting from this class. Throughout this guide, we'll explore various vector database solutions, including PostgreSQL, Pinecone, Chroma, FAISS, and more, showcasing how to integrate them seamlessly into your AI applications.
### Step 1: Inherit from the AbstractVectorDatabase Class
### Step 1: Inherit from the BaseVectorDatabase Class
The first step in creating a custom vector memory database is to inherit from the `AbstractVectorDatabase` class. This will provide your custom implementation with the foundational structure and interface defined by the abstract class.
The first step in creating a custom vector memory database is to inherit from the `BaseVectorDatabase` class. This will provide your custom implementation with the foundational structure and interface defined by the abstract class.
```python
from abc import ABC, abstractmethod
from swarms import AbstractVectorDatabase
from swarms import BaseVectorDatabase
class MyCustomVectorDatabase(AbstractVectorDatabase):
class MyCustomVectorDatabase(BaseVectorDatabase):
    def __init__(self, *args, **kwargs):
@ -59,17 +59,17 @@ class MyCustomVectorDatabase(AbstractVectorDatabase):
```
In the example above, we define a new class `MyCustomVectorDatabase` that inherits from the `AbstractVectorDatabase` class. Within the `__init__` method, you can add any custom initialization logic specific to your vector database implementation.
In the example above, we define a new class `MyCustomVectorDatabase` that inherits from the `BaseVectorDatabase` class. Within the `__init__` method, you can add any custom initialization logic specific to your vector database implementation.
### Step 2: Implement the Abstract Methods
The next step is to implement the abstract methods defined in the `AbstractVectorDatabase` class. These methods provide the core functionality for interacting with your vector database, such as connecting, querying, and performing CRUD operations.
The next step is to implement the abstract methods defined in the `BaseVectorDatabase` class. These methods provide the core functionality for interacting with your vector database, such as connecting, querying, and performing CRUD operations.
```python
from swarms import AbstractVectorDatabase
from swarms import BaseVectorDatabase
class MyCustomVectorDatabase(AbstractVectorDatabase):
class MyCustomVectorDatabase(BaseVectorDatabase):
    def __init__(self, *args, **kwargs):
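        super().__init__(*args, **kwargs)
        self.records = {}

    # Hedged sketch, not the guide's definitive code: a toy in-memory store
    # stands in for a real vector backend, and any method names truncated
    # from this excerpt are assumptions based on the breakdown above.
    def connect(self):
        pass  # nothing to connect to for an in-memory store

    def add(self, message):
        self.records[len(self.records)] = message

    def query(self, query):
        # Naive substring matching stands in for real vector similarity search
        return [v for v in self.records.values() if query in v]

    def delete(self, message):
        self.records = {k: v for k, v in self.records.items() if v != message}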
@ -146,7 +146,7 @@ PostgreSQL is a powerful open-source relational database management system that
```python
import psycopg2
from swarms import AbstractVectorDatabase
from swarms import BaseVectorDatabase
class PostgreSQLVectorDatabase(MyCustomVectorDatabase):
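    # Hedged sketch, not the guide's definitive code: the DSN and the
    # `memories` table are assumptions made for illustration only.
    def __init__(self, dsn="postgresql://user:password@localhost/vectors", *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.dsn = dsn
        self.conn = None

    def connect(self):
        self.conn = psycopg2.connect(self.dsn)

    def add(self, message):
        with self.conn.cursor() as cur:
            cur.execute("INSERT INTO memories (content) VALUES (%s)", (message,))
        self.conn.commit()

    def query(self, query):
        with self.conn.cursor() as cur:
            cur.execute("SELECT content FROM memories WHERE content ILIKE %s", (f"%{query}%",))
            return [row[0] for row in cur.fetchall()]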
@ -209,7 +209,7 @@ Pinecone is a managed vector database service that provides efficient storage, r
```python
import pinecone
from swarms import AbstractVectorDatabase
from swarms import BaseVectorDatabase
class PineconeVectorDatabase(MyCustomVectorDatabase):
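    # Hedged sketch using the classic pinecone client imported above; the
    # index name and the (id, vector, metadata) layout are assumptions.
    def __init__(self, api_key, environment, index_name="memories", *args, **kwargs):
        super().__init__(*args, **kwargs)
        pinecone.init(api_key=api_key, environment=environment)
        self.index = pinecone.Index(index_name)

    def add(self, id, vector, metadata=None):
        # Upsert a single (id, embedding, metadata) tuple into the index
        self.index.upsert(vectors=[(id, vector, metadata or {})])

    def query(self, vector, top_k=5):
        return self.index.query(vector=vector, top_k=top_k, include_metadata=True)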
@ -263,7 +263,7 @@ Chroma is an open-source vector database library that provides efficient storage
```python
from chromadb.client import Client
from swarms import AbstractVectorDatabase
from swarms import BaseVectorDatabase
class ChromaVectorDatabase(MyCustomVectorDatabase):
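    # Hedged sketch: the collection name and Client() defaults are
    # assumptions; Chroma embeds raw documents itself by default.
    def __init__(self, collection_name="memories", *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.client = Client()
        self.collection = self.client.get_or_create_collection(collection_name)

    def add(self, id, document):
        self.collection.add(ids=[id], documents=[document])

    def query(self, text, n_results=5):
        return self.collection.query(query_texts=[text], n_results=n_results)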
@ -467,7 +467,7 @@ By following these best practices and considering potential challenges, you can
# Conclusion
In this comprehensive guide, we've explored the `AbstractVectorDatabase` class and its role in simplifying the process of creating custom vector memory databases. We've covered the core functionality of the class, walked through the step-by-step process of inheriting and extending its functionality, and provided examples of integrating popular vector database solutions like PostgreSQL, Pinecone, Chroma, and FAISS.
In this comprehensive guide, we've explored the `BaseVectorDatabase` class and its role in simplifying the process of creating custom vector memory databases. We've covered the core functionality of the class, walked through the step-by-step process of inheriting and extending its functionality, and provided examples of integrating popular vector database solutions like PostgreSQL, Pinecone, Chroma, and FAISS.
Building custom vector memory databases empowers developers to create tailored and efficient data management solutions that seamlessly integrate with their AI applications. By leveraging the power of vector databases, you can unlock new possibilities in data storage, retrieval, and manipulation, enabling your AI systems to handle vast amounts of high-dimensional data with ease.
@ -475,4 +475,4 @@ Remember, the journey of building custom vector memory databases is an iterative
As you embark on this journey, keep in mind the importance of scalability, performance, data quality, security, and compliance. Foster an environment of collaboration, knowledge sharing, and community engagement to ensure that your custom vector memory databases are robust, reliable, and capable of meeting the ever-evolving demands of the AI landscape.
So, dive in, leverage the power of the `AbstractVectorDatabase` class, and create the custom vector memory databases that will drive the future of AI-powered applications.
So, dive in, leverage the power of the `BaseVectorDatabase` class, and create the custom vector memory databases that will drive the future of AI-powered applications.

@ -19,13 +19,13 @@
## 1. Introduction <a name="introduction"></a>
The Language Model Interface (`AbstractLLM`) is a flexible and extensible framework for working with various language models. This documentation provides a comprehensive guide to the interface, its attributes, methods, and usage examples. Whether you're using a pre-trained language model or building your own, this interface can help streamline the process of text generation, chatbots, summarization, and more.
The Language Model Interface (`BaseLLM`) is a flexible and extensible framework for working with various language models. This documentation provides a comprehensive guide to the interface, its attributes, methods, and usage examples. Whether you're using a pre-trained language model or building your own, this interface can help streamline the process of text generation, chatbots, summarization, and more.
## 2. Abstract Language Model <a name="abstract-language-model"></a>
### Initialization <a name="initialization"></a>
The `AbstractLLM` class provides a common interface for language models. It can be initialized with various parameters to customize model behavior. Here are the initialization parameters:
The `BaseLLM` class provides a common interface for language models. It can be initialized with various parameters to customize model behavior. Here are the initialization parameters:
| Parameter | Description | Default Value |
|------------------------|-------------------------------------------------------------------------------------------------|---------------|
@ -82,7 +82,7 @@ The `AbstractLLM` class provides a common interface for language models. It can
### Methods <a name="methods"></a>
The `AbstractLLM` class defines several methods for working with language models:
The `BaseLLM` class defines several methods for working with language models:
- `run(task: Optional[str] = None, *args, **kwargs) -> str`: Generate text using the language model. This method is abstract and must be implemented by subclasses.
@ -156,18 +156,18 @@ get_generation_time() -> float`: Get the time taken for text generation.
## 3. Implementation <a name="implementation"></a>
The `AbstractLLM` class serves as the base for implementing specific language models. Subclasses of `AbstractLLM` should implement the `run` method to define how text is generated for a given task. This design allows flexibility in integrating different language models while maintaining a common interface.
The `BaseLLM` class serves as the base for implementing specific language models. Subclasses of `BaseLLM` should implement the `run` method to define how text is generated for a given task. This design allows flexibility in integrating different language models while maintaining a common interface.
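To make that concrete, here is a minimal sketch of such a subclass (the echo logic is a stand-in for a real model call, and the constructor keyword arguments are assumed from the parameter table above):
```python
from swarms.models import BaseLLM


class EchoLLM(BaseLLM):
    """A toy subclass that satisfies the interface by echoing the task."""

    def run(self, task=None, *args, **kwargs):
        # A real implementation would invoke a language model here
        return f"echo: {task}"


model = EchoLLM(model_name="echo", max_tokens=50)
print(model.run("Hello, world"))
```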
## 4. Usage Examples <a name="usage-examples"></a>
To demonstrate how to use the `AbstractLLM` interface, let's create an example using a hypothetical language model. We'll initialize an instance of the model and generate text for a simple task.
To demonstrate how to use the `BaseLLM` interface, let's create an example using a hypothetical language model. We'll initialize an instance of the model and generate text for a simple task.
```python
# Import the AbstractLLM class
from swarms.models import AbstractLLM
# Import the BaseLLM class
from swarms.models import BaseLLM
# Create an instance of the language model
language_model = AbstractLLM(
language_model = BaseLLM(  # hypothetical: a real model would subclass BaseLLM and implement run()
model_name="my_language_model",
max_tokens=50,
temperature=0.7,
@ -188,7 +188,7 @@ In this example, we've created an instance of our hypothetical language model, c
## 5. Additional Features <a name="additional-features"></a>
The `AbstractLLM` interface provides additional features for customization and control:
The `BaseLLM` interface provides additional features for customization and control:
- `batch_run`: Generate text for a batch of tasks efficiently.
- `arun` and `abatch_run`: Asynchronous versions of `run` and `batch_run` for concurrent text generation.
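As a hedged sketch of how these variants might be invoked (the exact signatures are assumed from the method names; `EchoLLM` is the toy subclass sketched in Section 3):
```python
import asyncio

llm = EchoLLM(model_name="echo")

# Batch generation, assumed to accept a list of tasks
outputs = llm.batch_run(["Summarize A", "Summarize B"])

# Asynchronous generation via the assumed async variant
output = asyncio.run(llm.arun("Summarize C"))
```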
@ -199,7 +199,7 @@ These features enhance the flexibility and utility of the interface in various a
## 6. Performance Metrics <a name="performance-metrics"></a>
The `AbstractLLM` class offers methods for tracking performance metrics:
The `BaseLLM` class offers methods for tracking performance metrics:
- `_tokens_per_second`: Calculate tokens generated per second.
- `_num_tokens`: Calculate the number of tokens in a text.
@ -224,4 +224,4 @@ The `track_resource_utilization` method is a placeholder for tracking and report
## 9. Conclusion <a name="conclusion"></a>
The Language Model Interface (`AbstractLLM`) is a versatile framework for working with language models. Whether you're using pre-trained models or developing your own, this interface provides a consistent and extensible foundation. By following the provided guidelines and examples, you can integrate and customize language models for various natural language processing tasks.
The Language Model Interface (`BaseLLM`) is a versatile framework for working with language models. Whether you're using pre-trained models or developing your own, this interface provides a consistent and extensible foundation. By following the provided guidelines and examples, you can integrate and customize language models for various natural language processing tasks.

@ -1,6 +1,6 @@
# How to Create A Custom Language Model
When working with advanced language models, there might come a time when you need a custom solution tailored to your specific needs. Inheriting from an `AbstractLLM` in a Python framework allows developers to create custom language model classes with ease. This developer guide will take you through the process step by step.
When working with advanced language models, there might come a time when you need a custom solution tailored to your specific needs. Inheriting from a `BaseLLM` in a Python framework allows developers to create custom language model classes with ease. This developer guide will take you through the process step by step.
### Prerequisites
@ -9,22 +9,22 @@ Before you begin, ensure that you have:
- A working knowledge of Python programming.
- Basic understanding of object-oriented programming (OOP) in Python.
- Familiarity with language models and natural language processing (NLP).
- The appropriate Python framework installed, with access to `AbstractLLM`.
- The appropriate Python framework installed, with access to `BaseLLM`.
### Step-by-Step Guide
#### Step 1: Understand `AbstractLLM`
#### Step 1: Understand `BaseLLM`
The `AbstractLLM` is an abstract base class that defines a set of methods and properties which your custom language model (LLM) should implement. Abstract classes in Python are not designed to be instantiated directly but are meant to be subclassed.
The `BaseLLM` is an abstract base class that defines a set of methods and properties which your custom language model (LLM) should implement. Abstract classes in Python are not designed to be instantiated directly but are meant to be subclassed.
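A quick, hedged illustration of that point: assuming `run` is declared with `@abstractmethod`, instantiating the base class directly raises a `TypeError`.
```python
from swarms import BaseLLM

try:
    BaseLLM()  # abstract classes cannot be instantiated directly
except TypeError as error:
    print(error)
```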
#### Step 2: Create a New Class
Start by defining a new class that inherits from `AbstractLLM`. This class will implement the required methods defined in the abstract base class.
Start by defining a new class that inherits from `BaseLLM`. This class will implement the required methods defined in the abstract base class.
```python
from swarms import AbstractLLM
from swarms import BaseLLM
class vLLMLM(AbstractLLM):
class vLLMLM(BaseLLM):
pass
```
@ -33,7 +33,7 @@ class vLLMLM(AbstractLLM):
Implement the `__init__` method to initialize your custom LLM. You'll want to initialize the base class as well and define any additional parameters for your model.
```python
class vLLMLM(AbstractLLM):
class vLLMLM(BaseLLM):
def __init__(self, model_name='default_model', tensor_parallel_size=1, *args, **kwargs):
super().__init__(*args, **kwargs)
self.model_name = model_name
@ -43,10 +43,10 @@ class vLLMLM(AbstractLLM):
#### Step 4: Implement Required Methods
Implement the `run` method or any other abstract methods required by `AbstractLLM`. This is where you define how your model processes input and returns output.
Implement the `run` method or any other abstract methods required by `BaseLLM`. This is where you define how your model processes input and returns output.
```python
class vLLMLM(AbstractLLM):
class vLLMLM(BaseLLM):
# ... existing code ...
def run(self, task, *args, **kwargs):
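        # Hedged sketch: delegate generation to a vLLM engine. The vllm
        # import and the sampling defaults are assumptions for illustration.
        from vllm import LLM, SamplingParams

        if not hasattr(self, "_engine"):
            self._engine = LLM(model=self.model_name)
        params = SamplingParams(max_tokens=256)
        outputs = self._engine.generate([task], params)
        return outputs[0].outputs[0].text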
@ -73,9 +73,9 @@ Depending on the requirements, you might need to integrate additional components
Write comprehensive docstrings for your class and its methods. Good documentation is crucial for maintaining the code and for other developers who might use your model.
```python
class vLLMLM(AbstractLLM):
class vLLMLM(BaseLLM):
"""
A custom language model class that extends AbstractLLM.
A custom language model class that extends BaseLLM.
... more detailed docstring ...
"""
@ -96,7 +96,7 @@ Use a version control system like Git to track changes to your model. This makes
### Conclusion
By following this guide, you should now have a custom model that extends the `AbstractLLM`. Remember that the key to a successful custom LLM is understanding the base functionalities, implementing necessary changes, and testing thoroughly. Keep iterating and improving based on feedback and performance metrics.
By following this guide, you should now have a custom model that extends the `BaseLLM`. Remember that the key to a successful custom LLM is understanding the base functionalities, implementing necessary changes, and testing thoroughly. Keep iterating and improving based on feedback and performance metrics.
### Further Reading
@ -104,4 +104,4 @@ By following this guide, you should now have a custom model that extends the `Ab
- In-depth tutorials on object-oriented programming in Python.
- Advanced NLP techniques and optimization strategies for language models.
This guide provides the fundamental steps to create custom models using `AbstractLLM`. For detailed implementation and advanced customization, it's essential to dive deeper into the specific functionalities and capabilities of the language model framework you are using.
This guide provides the fundamental steps to create custom models using `BaseLLM`. For detailed implementation and advanced customization, it's essential to dive deeper into the specific functionalities and capabilities of the language model framework you are using.

@ -0,0 +1,176 @@
from typing import List
from pydantic import BaseModel, Field
from swarms.structs.agent import Agent
from swarms.structs.base_swarm import BaseSwarm
from swarms.utils.loguru_logger import logger
from swarms.models.popular_llms import Anthropic, OpenAIChat
from swarms.models.base_llm import BaseLLM
from swarms.memory.base_vectordb import BaseVectorDatabase
boss_sys_prompt = (
"You're the Swarm Orchestrator, like a project manager of a"
" bustling hive. When a task arises, you tap into your network of"
" worker agents who are ready to jump into action. Whether it's"
" organizing data, handling logistics, or crunching numbers, you"
" delegate tasks strategically to maximize efficiency. Picture"
" yourself as the conductor of a well-oiled machine,"
" orchestrating the workflow seamlessly to achieve optimal"
" results with your team of dedicated worker agents."
)
class AgentSchema(BaseModel):
name: str = Field(
...,
title="Name of the agent",
description="Name of the agent",
)
    system_prompt: str = Field(
        ...,
        title="System prompt for the agent",
        description="System prompt for the agent",
    )
rules: str = Field(
...,
title="Rules",
description="Rules for the agent",
)
llm: str = Field(
...,
title="Language model",
description="Language model for the agent: `GPT4` or `Claude",
)
# tools: List[ToolSchema] = Field(
# ...,
# title="Tools available to the agent",
# description="Either `browser` or `terminal`",
# )
# task: str = Field(
# ...,
# title="Task assigned to the agent",
# description="Task assigned to the agent",
# )
# TODO: Add more fields here such as the agent's language model, tools, etc.
class HassSchema(BaseModel):
plan: str = Field(
...,
title="Plan to solve the input problem",
description="List of steps to solve the problem",
)
agents: List[AgentSchema] = Field(
...,
title="List of agents to use for the problem",
description="List of agents to use for the problem",
)
# Rules for the agents
rules: str = Field(
...,
title="Rules for the agents",
description="Rules for the agents",
)
class HiearchicalSwarm(BaseSwarm):
def __init__(
self,
director: Agent = None,
    subordinates: List[Agent] = None,
    workers: List[Agent] = None,
director_sys_prompt: str = boss_sys_prompt,
director_name: str = "Swarm Orchestrator",
director_agent_creation_schema: BaseModel = HassSchema,
director_llm: BaseLLM = Anthropic,
communication_protocol: BaseVectorDatabase = None,
*args,
**kwargs,
):
super().__init__(*args, **kwargs)
self.director = director
        self.subordinates = subordinates or []
        self.workers = workers or []
self.director_sys_prompt = director_sys_prompt
self.director_name = director_name
self.director_agent_creation_schema = (
director_agent_creation_schema
)
self.director_llm = director_llm
self.communication_protocol = communication_protocol
def create_director(self, *args, **kwargs):
"""
Create the director agent based on the provided schema.
"""
name = self.director_name
system_prompt = self.director_sys_prompt
director_llm = self.director_llm
        if director_llm == Anthropic:
            director_llm = Anthropic(*args, **kwargs)
        elif director_llm == OpenAIChat:
            director_llm = OpenAIChat(*args, **kwargs)
logger.info(
f"Creating Director Agent: {name} with system prompt:"
f" {system_prompt}"
)
director = Agent(
agent_name=name,
system_prompt=system_prompt,
llm=director_llm,
max_loops=1,
autosave=True,
dashboard=False,
verbose=True,
stopping_token="<DONE>",
)
return director
    def create_worker_agents(
        self,
        agents: List[AgentSchema],
    ) -> List[Agent]:
"""
Create and initialize agents based on the provided AgentSchema objects.
Args:
agents (List[AgentSchema]): A list of AgentSchema objects containing agent information.
Returns:
List[Agent]: The initialized Agent objects.
"""
agent_list = []
for agent in agents:
name = agent.name
system_prompt = agent.system_prompt
logger.info(
f"Creating agent: {name} with system prompt:"
f" {system_prompt}"
)
out = Agent(
agent_name=name,
system_prompt=system_prompt,
# llm=Anthropic(
# anthropic_api_key=os.getenv("ANTHROPIC_API_KEY")
# ),
max_loops=1,
autosave=True,
dashboard=False,
verbose=True,
stopping_token="<DONE>",
)
# network.add_agent(out)
agent_list.append(out)
return agent_list
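# Hedged usage sketch (illustrative only, not part of this module): it
# exercises just the constructor and the methods defined above.
# swarm = HiearchicalSwarm(director_llm=Anthropic)
# director = swarm.create_director()
# workers = swarm.create_worker_agents(agents=[...])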

@ -5,10 +5,7 @@ from dotenv import load_dotenv
from swarms import (
Conversation,
OpenAIChat,
detect_markdown,
extract_code_from_markdown,
)
from swarms.tools.code_executor import CodeExecutor
conv = Conversation(
autosave=False,
@ -46,16 +43,6 @@ def interactive_conversation(llm, iters: int = 10):
f"Assistant: {out}",
)
# Code Interpreter
if detect_markdown(out):
code = extract_code_from_markdown(out)
if code:
print(f"Code: {code}")
executor = CodeExecutor()
out = executor.run(code)
conv.add("assistant", out)
# print(f"Assistant: {out}")
conv.display_conversation()
# conv.export_conversation("conversation.txt")

@ -1,7 +1,7 @@
from dataclasses import dataclass
from typing import List
from swarms import JSON, AbstractLLM, AbstractVectorDatabase, Agent
from swarms import JSON, BaseLLM, BaseVectorDatabase, Agent
@dataclass
@ -10,13 +10,13 @@ class YourAgent(Agent):
Represents an agent in the swarm protocol.
Attributes:
llm (AbstractLLM): The low-level module for the agent.
long_term_memory (AbstractVectorDatabase): The long-term memory for the agent.
llm (BaseLLM): The low-level module for the agent.
long_term_memory (BaseVectorDatabase): The long-term memory for the agent.
tool_schema (List[JSON]): The schema for the tools used by the agent.
"""
llm: AbstractLLM
long_term_memory: AbstractVectorDatabase
llm: BaseLLM
long_term_memory: BaseVectorDatabase
tool_schema: JSON
tool_schemas: List[JSON]

@ -1,98 +0,0 @@
swarms
pip install swarms
swarms is the most pythonic way of writing cognitive systems. Leveraging pydantic models as output schemas combined with langchain in the backend allows for a seamless integration of llms into your apps. It utilizes OpenAI Functions or LlamaCpp grammars (json-schema-mode) for efficient structured output. In the backend it compiles the swarms syntax into langchain runnables so you can easily invoke, stream or batch process your pipelines.
Open in GitHub Codespaces
from pydantic import BaseModel, Field
from swarms import Anthropic
from swarms import Agent
# Initialize the schema for the person's information
class Schema(BaseModel):
name: str = Field(..., title="Name of the person")
agent: int = Field(..., title="Age of the person")
is_student: bool = Field(..., title="Whether the person is a student")
courses: list[str] = Field(
..., title="List of courses the person is taking"
)
# Convert the schema to a JSON string
tool_schema = Schema(
name="Tool Name",
agent=1,
is_student=True,
courses=["Course1", "Course2"],
)
# Define the task to generate a person's information
task = "Generate a person's information based on the following schema:"
# Initialize the agent
agent = Agent(
agent_name="Person Information Generator",
system_prompt=(
"Generate a person's information based on the following schema:"
),
# Set the tool schema to the JSON string -- this is the key difference
tool_schema=tool_schema,
llm=Anthropic(),
max_loops=3,
autosave=True,
dashboard=False,
streaming_on=True,
verbose=True,
interactive=True,
# Set the output type to the tool schema which is a BaseModel
output_type=tool_schema, # or dict, or str
metadata_output_type="json",
# List of schemas that the agent can handle
list_tool_schemas = [tool_schema],
function_calling_format_type = "OpenAI",
function_calling_type = "json" # or soon yaml
)
# Run the agent to generate the person's information
generated_data = agent.run(task)
# Print the generated data
print(f"Generated data: {generated_data}")
Features
🐍 pythonic
🔀 easy swap between openai or local models
🔄 dynamic output types (pydantic models, or primitives)
👁️ vision llm support
🧠 langchain_core as backend
📝 jinja templating for prompts
🏗️ reliable structured output
🔁 auto retry parsing
🔧 langsmith support
🔄 sync, async, streaming, parallel, fallbacks
📦 gguf download from huggingface
✅ type hints for all functions and mypy support
🗣️ chat router component
🧩 composable with langchain LCEL
🛠️ easy error handling
🚦 enums and literal support
📐 custom parsing types
Documentation
Checkout the docs here 👈
Also highly recommend to try and run the examples in the ./examples folder.
Contribution
You want to contribute? Thanks, that's great! For more information checkout the Contributing Guide. Please run the dev setup to get started:
git clone https://github.com/kyegomez/swarms.git && cd swarms
./dev_setup.sh
About
⛓️ build cognitive systems, pythonic

@ -1,29 +1,31 @@
from swarms import Agent
from swarms.models.base_llm import AbstractLLM
from swarms.models.base_llm import BaseLLM
class ExampleLLM(AbstractLLM):
def __init__():
# Define a custom LLM class
class ExampleLLM(BaseLLM):
def __init__(self):
pass
def run(self, task: str, *args, **kwargs):
# Your LLM logic here
pass
## Initialize the workflow
# Initialize the workflow
agent = Agent(
llm=ExampleLLM(),
max_loops="auto",
autosave=True,
dashboard=False,
streaming_on=True,
verbose=True,
stopping_token="<DONE>",
interactive=True,
llm=ExampleLLM(), # Instantiate the ExampleLLM class
max_loops="auto", # Set the maximum number of loops to "auto"
autosave=True, # Enable autosave feature
dashboard=False, # Disable the dashboard
streaming_on=True, # Enable streaming
verbose=True, # Enable verbose mode
stopping_token="<DONE>", # Set the stopping token to "<DONE>"
interactive=True, # Enable interactive mode
)
# Run the workflow on a task
agent(
"Generate a transcript for a youtube video on what swarms are!"
" Output a <DONE> token when done."
"Generate a transcript for a youtube video on what swarms are!" # Specify the task
" Output a <DONE> token when done." # Specify the stopping condition
)

@ -1,48 +0,0 @@
from swarms import Agent, OpenAI
from swarms.structs.groupchat import GroupChat, GroupChatManager
api_key = ""
llm = OpenAI(
openai_api_key=api_key,
temperature=0.5,
max_tokens=3000,
)
# Initialize the agent
flow1 = Agent(
llm=llm,
max_loops=1,
system_message="YOU ARE SILLY, YOU OFFER NOTHING OF VALUE",
name="silly",
dashboard=True,
)
flow2 = Agent(
llm=llm,
max_loops=1,
system_message="YOU ARE VERY SMART AND ANSWER RIDDLES",
name="detective",
dashboard=True,
)
flow3 = Agent(
llm=llm,
max_loops=1,
system_message="YOU MAKE RIDDLES",
name="riddler",
dashboard=True,
)
manager = Agent(
llm=llm,
max_loops=1,
system_message="YOU ARE A GROUP CHAT MANAGER",
name="manager",
dashboard=True,
)
# Example usage:
agents = [flow1, flow2, flow3]
group_chat = GroupChat(agents=agents, messages=[], max_round=10)
chat_manager = GroupChatManager(groupchat=group_chat, selector=manager)
chat_history = chat_manager("Write me a riddle")

@ -1,18 +0,0 @@
import os
from swarms import OpenAIChat, Agent
from dotenv import load_dotenv
# Load environment variables
load_dotenv()
# Create a chat instance
llm = OpenAIChat(
api_key=os.getenv("OPENAI_API_KEY"),
)
# Create an agent
agent = Agent(
agent_name="GPT-3",
llm=llm,
)

@ -1,6 +1,4 @@
from swarms.structs.message_pool import MessagePool
from swarms import Agent, OpenAIChat
from swarms.memory.chroma_db import ChromaDB
from swarms import Agent, OpenAIChat, MessagePool
# Agents
@ -13,7 +11,6 @@ agent1 = Agent(
),
agent_name="Steve",
agent_description="A Minecraft player agent",
long_term_memory=ChromaDB(),
max_steps=1,
)
@ -26,7 +23,6 @@ agent2 = Agent(
),
agent_name="Bob",
agent_description="A Minecraft builder agent",
long_term_memory=ChromaDB(),
max_steps=1,
)
@ -39,7 +35,6 @@ agent3 = Agent(
),
agent_name="Alex",
agent_description="A Minecraft explorer agent",
long_term_memory=ChromaDB(),
max_steps=1,
)
@ -52,7 +47,6 @@ agent4 = Agent(
),
agent_name="Ender",
agent_description="A Minecraft adventurer agent",
long_term_memory=ChromaDB(),
max_steps=1,
)
@ -65,7 +59,6 @@ moderator = Agent(
),
agent_name="Admin",
agent_description="A Minecraft moderator agent",
long_term_memory=ChromaDB(),
max_steps=1,
)

@ -58,4 +58,4 @@ workflow = MultiProcessWorkflow(
# Run the workflow
results = workflow.run("What")
results = workflow.run("What is the best way to market a new product?")

@ -0,0 +1,39 @@
from typing import Annotated
from swarms import create_openai_tool
from openai import OpenAI
# Create an instance of the OpenAI client
client = OpenAI()
# Define the user messages for the chat conversation
messages = [
{
"role": "user",
"content": "What's the weather like in San Francisco, Tokyo, and Paris?",
}
]
# Define the BMI calculator tool using the create_openai_tool decorator
@create_openai_tool(
name="BMI Calculator",
description="Calculate the Body Mass Index (BMI)",
)
def calculate_bmi(
weight: Annotated[float, "Weight in kilograms"],
height: Annotated[float, "Height in meters"],
) -> Annotated[float, "Body Mass Index"]:
"""Calculate the Body Mass Index (BMI) given a person's weight and height."""
return weight / (height**2)
# Create a chat completion request using the OpenAI client
response = client.chat.completions.create(
model="gpt-3.5-turbo-0125",
messages=messages,
    tools=[calculate_bmi],  # wrapped in a list; assumes the decorator returns a single OpenAI tool schema
tool_choice="auto", # auto is default, but we'll be explicit
)
# Print the generated response from the chat completion
print(response.choices[0].message.content)

@ -3,7 +3,7 @@ import pkgutil
from typing import Any
import swarms.models
from swarms.models.base_llm import AbstractLLM
from swarms.models.base_llm import BaseLLM
from swarms.structs.conversation import Conversation
@ -28,13 +28,13 @@ def get_llm_by_name(name: str):
# Run the language model in a loop for n iterations
def SimpleAgent(
llm: AbstractLLM = None, iters: Any = "automatic", *args, **kwargs
llm: BaseLLM = None, iters: Any = "automatic", *args, **kwargs
):
"""
A simple agent that interacts with a language model.
Args:
llm (AbstractLLM): The language model to use for generating responses.
llm (BaseLLM): The language model to use for generating responses.
iters (Any): The number of iterations or "automatic" to run indefinitely.
*args: Additional positional arguments to pass to the language model.
**kwargs: Additional keyword arguments to pass to the language model.

@ -1,7 +1,7 @@
from typing import Any, Optional, Callable
from swarms.structs.agent import Agent
from swarms.tools.format_tools import Jsonformer
from swarms.tools.json_former import Jsonformer
from swarms.utils.loguru_logger import logger

@ -1,6 +1,6 @@
from swarms.memory.action_subtask import ActionSubtaskEntry
from swarms.memory.base_db import AbstractDatabase
from swarms.memory.base_vectordb import AbstractVectorDatabase
from swarms.memory.base_vectordb import BaseVectorDatabase
from swarms.memory.dict_internal_memory import DictInternalMemory
from swarms.memory.dict_shared_memory import DictSharedMemory
from swarms.memory.short_term_memory import ShortTermMemory
@ -8,7 +8,7 @@ from swarms.memory.visual_memory import VisualShortTermMemory
__all__ = [
"AbstractDatabase",
"AbstractVectorDatabase",
"BaseVectorDatabase",
"ActionSubtaskEntry",
"DictInternalMemory",
"DictSharedMemory",

@ -1,7 +1,7 @@
from abc import ABC
class AbstractVectorDatabase(ABC):
class BaseVectorDatabase(ABC):
"""
Abstract base class for a vector database.

@ -9,14 +9,14 @@ from dotenv import load_dotenv
from swarms.utils.data_to_text import data_to_text
from swarms.utils.markdown_message import display_markdown_message
from swarms.memory.base_vectordb import AbstractVectorDatabase
from swarms.memory.base_vectordb import BaseVectorDatabase
# Load environment variables
load_dotenv()
# Results storage using local ChromaDB
class ChromaDB(AbstractVectorDatabase):
class ChromaDB(BaseVectorDatabase):
"""
ChromaDB database

@ -7,7 +7,7 @@ from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma
from swarms.models.popular_llms import OpenAIChat
from swarms.memory.base_vectordb import AbstractVectorDatabase
from swarms.memory.base_vectordb import BaseVectorDatabase
def synchronized_mem(method):
@ -31,7 +31,7 @@ def synchronized_mem(method):
return wrapper
class LangchainChromaVectorMemory(AbstractVectorDatabase):
class LangchainChromaVectorMemory(BaseVectorDatabase):
"""
A class representing a vector memory for storing and retrieving text entries.

@ -5,10 +5,10 @@ from sqlalchemy import JSON, Column, String, create_engine
from sqlalchemy.dialects.postgresql import UUID
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import Session
from swarms.memory.base_vectordb import AbstractVectorDatabase
from swarms.memory.base_vectordb import BaseVectorDatabase
class PostgresDB(AbstractVectorDatabase):
class PostgresDB(BaseVectorDatabase):
"""
A class representing a Postgres database.

@ -3,12 +3,12 @@ from typing import Optional
import pinecone
from attr import define, field
from swarms.memory.base_vectordb import AbstractVectorDatabase
from swarms.memory.base_vectordb import BaseVectorDatabase
from swarms.utils.hash import str_to_hash
@define
class PineconeDB(AbstractVectorDatabase):
class PineconeDB(BaseVectorDatabase):
"""
PineconeDB is a vector storage driver that uses Pinecone as the underlying storage engine.

@ -1,7 +1,7 @@
from typing import List
from httpx import RequestError
from swarms.memory.base_vectordb import AbstractVectorDatabase
from swarms.memory.base_vectordb import BaseVectorDatabase
try:
from sentence_transformers import SentenceTransformer
@ -21,7 +21,7 @@ except ImportError:
print("pip install qdrant-client")
class Qdrant(AbstractVectorDatabase):
class Qdrant(BaseVectorDatabase):
"""
Qdrant class for managing collections and performing vector operations using QdrantClient.

@ -1,6 +1,6 @@
from typing import Any, List, Optional, Tuple
from swarms.memory.base_vectordb import AbstractVectorDatabase
from swarms.memory.base_vectordb import BaseVectorDatabase
try:
import sqlite3
@ -8,7 +8,7 @@ except ImportError:
raise ImportError("Please install sqlite3 to use the SQLiteDB class.")
class SQLiteDB(AbstractVectorDatabase):
class SQLiteDB(BaseVectorDatabase):
"""
A reusable class for SQLite database operations with methods for adding,
deleting, updating, and querying data.

@ -4,7 +4,7 @@ Weaviate API Client
from typing import Any, Dict, List, Optional
from swarms.memory.base_vectordb import AbstractVectorDatabase
from swarms.memory.base_vectordb import BaseVectorDatabase
try:
import weaviate
@ -12,7 +12,7 @@ except ImportError:
print("pip install weaviate-client")
class WeaviateDB(AbstractVectorDatabase):
class WeaviateDB(BaseVectorDatabase):
"""
Weaviate API Client

@ -1,5 +1,5 @@
from swarms.models.base_embedding_model import BaseEmbeddingModel
from swarms.models.base_llm import AbstractLLM # noqa: E402
from swarms.models.base_llm import BaseLLM # noqa: E402
from swarms.models.base_multimodal_model import BaseMultiModalModel
from swarms.models.fire_function import FireFunctionCaller
from swarms.models.fuyu import Fuyu # noqa: E402
@ -46,7 +46,7 @@ from swarms.models.vilt import Vilt # noqa: E402
from swarms.models.openai_embeddings import OpenAIEmbeddings
__all__ = [
"AbstractLLM",
"BaseLLM",
"Anthropic",
"AzureOpenAI",
"BaseEmbeddingModel",

@ -20,7 +20,7 @@ def count_tokens(text: str) -> int:
return len(text.split())
class AbstractLLM(ABC):
class BaseLLM(ABC):
"""Abstract Language Model that defines the interface for all language models
Args:

@ -2,14 +2,14 @@ import wave
from abc import abstractmethod
from typing import Optional
from swarms.models.base_llm import AbstractLLM
from swarms.models.base_llm import BaseLLM
class BaseTTSModel(AbstractLLM):
class BaseTTSModel(BaseLLM):
"""Base class for all TTS models.
Args:
AbstractLLM (_type_): _description_
BaseLLM (_type_): _description_
model_name (_type_): _description_
voice (_type_): _description_
chunk_size (_type_): _description_

@ -5,10 +5,10 @@ from typing import List, Optional
from diffusers.utils import export_to_video
from swarms.models.base_llm import AbstractLLM
from swarms.models.base_llm import BaseLLM
class BaseTextToVideo(AbstractLLM):
class BaseTextToVideo(BaseLLM):
"""BaseTextToVideo class represents prebuilt text-to-video models."""
def __init__(self, *args, **kwargs):

@ -3,10 +3,10 @@ from typing import Any
from transformers import AutoModelForCausalLM, AutoTokenizer
from swarms.models.base_llm import AbstractLLM
from swarms.models.base_llm import BaseLLM
class FireFunctionCaller(AbstractLLM):
class FireFunctionCaller(BaseLLM):
"""
A class that represents a caller for the FireFunction model.

@ -11,10 +11,10 @@ from transformers import (
BitsAndBytesConfig,
)
from swarms.models.base_llm import AbstractLLM
from swarms.models.base_llm import BaseLLM
class HuggingfaceLLM(AbstractLLM):
class HuggingfaceLLM(BaseLLM):
"""
A class for running inference on a given model.

@ -3,15 +3,15 @@ from abc import abstractmethod
import torch
from termcolor import colored
from swarms.models.base_llm import AbstractLLM
from swarms.models.base_llm import BaseLLM
from transformers.pipelines import pipeline
class HuggingfacePipeline(AbstractLLM):
class HuggingfacePipeline(BaseLLM):
"""HuggingfacePipeline
Args:
AbstractLLM (AbstractLLM): [description]
BaseLLM (BaseLLM): [description]
task (str, optional): [description]. Defaults to "text-generation".
model_name (str, optional): [description]. Defaults to None.
use_fp8 (bool, optional): [description]. Defaults to False.

@ -12,10 +12,10 @@ from transformers import (
BitsAndBytesConfig,
TextStreamer,
)
from swarms.models.base_llm import AbstractLLM
from swarms.models.base_llm import BaseLLM
class LlamaFunctionCaller(AbstractLLM):
class LlamaFunctionCaller(BaseLLM):
"""
A class to manage and execute Llama functions.

@ -1,11 +1,11 @@
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from swarms.models.base_llm import AbstractLLM
from swarms.models.base_llm import BaseLLM
from swarms.structs.message import Message
class Mistral(AbstractLLM):
class Mistral(BaseLLM):
"""
Mistral is an all-new llm

@ -2,10 +2,10 @@ from typing import Optional
from transformers import AutoModelForCausalLM, AutoTokenizer
from swarms.models.base_llm import AbstractLLM
from swarms.models.base_llm import BaseLLM
class Mixtral(AbstractLLM):
class Mixtral(BaseLLM):
"""Mixtral model.
Args:

@ -1,4 +1,4 @@
from swarms.models.base_llm import AbstractLLM
from swarms.models.base_llm import BaseLLM
from pydantic import BaseModel
from typing import List, Dict
import openai
@ -9,7 +9,7 @@ class OpenRouterRequest(BaseModel):
messages: List[Dict[str, str]] = []
class OpenRouterChat(AbstractLLM):
class OpenRouterChat(BaseLLM):
"""
A class representing an OpenRouter chat model.

@ -5,7 +5,7 @@ import sys
import requests
from dotenv import load_dotenv
from swarms.models.base_llm import AbstractLLM
from swarms.models.base_llm import BaseLLM
try:
import wave
@ -26,7 +26,7 @@ def openai_api_key_env():
return openai_api_key
class OpenAITTS(AbstractLLM):
class OpenAITTS(BaseLLM):
"""OpenAI TTS model
Attributes:

@ -5,7 +5,7 @@ from typing import Optional
import requests
from dotenv import load_dotenv
from swarms.models.base_llm import AbstractLLM
from swarms.models.base_llm import BaseLLM
# Load environment variables
load_dotenv()
@ -16,7 +16,7 @@ def together_api_key_env():
return os.getenv("TOGETHER_API_KEY")
class TogetherLLM(AbstractLLM):
class TogetherLLM(BaseLLM):
"""
A client for the Together AI inference API.

@ -84,6 +84,7 @@ from swarms.structs.yaml_model import (
pydantic_type_to_yaml_schema,
YamlModel,
)
from swarms.structs.message_pool import MessagePool
__all__ = [
"Agent",
@ -155,4 +156,5 @@ __all__ = [
"create_yaml_schema_from_dict",
"pydantic_type_to_yaml_schema",
"YamlModel",
"MessagePool",
]

@ -13,7 +13,7 @@ import yaml
from loguru import logger
from termcolor import colored
from swarms.memory.base_vectordb import AbstractVectorDatabase
from swarms.memory.base_vectordb import BaseVectorDatabase
from swarms.prompts.agent_system_prompts import AGENT_SYSTEM_PROMPT_3
from swarms.prompts.multi_modal_autonomous_instruction_prompt import (
MULTI_MODAL_AUTO_AGENT_SYSTEM_PROMPT_1,
@ -25,16 +25,16 @@ from swarms.utils.data_to_text import data_to_text
from swarms.utils.parse_code import extract_code_from_markdown
from swarms.utils.pdf_to_text import pdf_to_text
from swarms.tools.exec_tool import execute_tool_by_name
from swarms.tools.code_executor import CodeExecutor
from swarms.prompts.worker_prompt import tool_usage_worker_prompt
from pydantic import BaseModel
from swarms.tools.pydantic_to_json import (
pydantic_to_functions,
multi_pydantic_to_functions,
base_model_to_openai_function,
multi_base_model_to_openai_function,
)
from swarms.structs.schemas import Step, ManySteps
from swarms.telemetry.user_utils import get_user_device_data
from swarms.structs.yaml_model import YamlModel
from swarms.tools.code_interpreter import SubprocessCodeInterpreter
# Utils
@ -113,7 +113,7 @@ class Agent:
pdf_path (str): The path to the pdf
list_of_pdf (str): The list of pdf
tokenizer (Any): The tokenizer
memory (AbstractVectorDatabase): The memory
memory (BaseVectorDatabase): The memory
preset_stopping_token (bool): Enable preset stopping token
traceback (Any): The traceback
traceback_handlers (Any): The traceback handlers
@ -198,7 +198,7 @@ class Agent:
pdf_path: Optional[str] = None,
list_of_pdf: Optional[str] = None,
tokenizer: Optional[Any] = None,
long_term_memory: Optional[AbstractVectorDatabase] = None,
long_term_memory: Optional[BaseVectorDatabase] = None,
preset_stopping_token: Optional[bool] = False,
traceback: Optional[Any] = None,
traceback_handlers: Optional[Any] = None,
@ -639,7 +639,7 @@ class Agent:
return json.loads(json_str)
def pydantic_model_to_json_str(self, model: BaseModel):
return str(pydantic_to_functions(model))
return str(base_model_to_openai_function(model))
def dict_to_json_str(self, dictionary: dict):
"""Convert a dictionary to a JSON string"""
@ -659,14 +659,14 @@ class Agent:
self, tool_schema: BaseModel = None, *args, **kwargs
):
"""Convert a tool schema to a string"""
out = pydantic_to_functions(tool_schema)
out = base_model_to_openai_function(tool_schema)
return str(out)
def tool_schemas_to_str(
self, tool_schemas: List[BaseModel] = None, *args, **kwargs
):
"""Convert a list of tool schemas to a string"""
out = multi_pydantic_to_functions(tool_schemas)
out = multi_base_model_to_openai_function(tool_schemas)
return str(out)
def str_to_pydantic_model(self, string: str, model: BaseModel):
@ -790,8 +790,9 @@ class Agent:
)
# Execute the code
# execution = execute_command(extracted_code)
execution = CodeExecutor().run(extracted_code)
execution = SubprocessCodeInterpreter(
debug_mode=True
).run(extracted_code)
# Add the execution to the memory
self.short_memory.add(

@ -1,7 +1,7 @@
from swarms.structs.agent import Agent
from typing import Union
from swarms.models.popular_llms import OpenAIChat
from swarms.models.base_llm import AbstractLLM
from swarms.models.base_llm import BaseLLM
from swarms.prompts.meta_system_prompt import (
meta_system_prompt_generator,
)
@ -12,13 +12,13 @@ meta_prompter_llm = OpenAIChat(
def meta_system_prompt(
agent: Union[Agent, AbstractLLM], system_prompt: str
agent: Union[Agent, BaseLLM], system_prompt: str
) -> str:
"""
Generates a meta system prompt for the given agent using the provided system prompt.
Args:
agent (Union[Agent, AbstractLLM]): The agent or LLM (Language Learning Model) for which the meta system prompt is generated.
agent (Union[Agent, BaseLLM]): The agent or LLM (large language model) for which the meta system prompt is generated.
system_prompt (str): The system prompt used to generate the meta system prompt.
Returns:

@ -4,12 +4,12 @@ from typing import (
Sequence,
Union,
)
from swarms.models.base_llm import AbstractLLM
from swarms.models.base_llm import BaseLLM
from swarms.models.base_multimodal_model import BaseMultiModalModel
from swarms.structs.agent import Agent
# Unified type for agent
AgentType = Union[Agent, Callable, Any, AbstractLLM, BaseMultiModalModel]
AgentType = Union[Agent, Callable, Any, BaseLLM, BaseMultiModalModel]
# List of agents
AgentListType = Sequence[AgentType]

@ -4,10 +4,11 @@ from swarms.structs.agent import Agent
from swarms.structs.conversation import Conversation
from swarms.utils.loguru_logger import logger
from swarms.utils.try_except_wrapper import try_except_wrapper
from swarms.structs.base_workflow import BaseWorkflow
@dataclass
class SequentialWorkflow:
class SequentialWorkflow(BaseWorkflow):
name: str = "Sequential Workflow"
description: str = None
objective: str = None
@ -22,6 +23,7 @@ class SequentialWorkflow:
# ) # List to store tasks
def __post_init__(self):
super().__init__()
self.conversation = Conversation(
time_enabled=True,
autosave=True,
@ -51,35 +53,6 @@ class SequentialWorkflow:
def reset_workflow(self) -> None:
self.conversation = {}
# @try_except_wrapper
# WITH TASK POOL
# def run(self):
# if not self.agent_pool:
# raise ValueError("No agents have been added to the workflow.")
# self.workflow_bootup()
# loops = 0
# prompt = None # Initialize prompt to None; will be updated with the output of each agent
# while loops < self.max_loops:
# for i, agent in enumerate(self.agent_pool):
# task = (
# self.task_pool[i] if prompt is None else prompt
# ) # Use initial task or the output from the previous agent
# logger.info(
# f"Agent: {agent.agent_name} {i+1} is executing the task"
# )
# logger.info("\n")
# output = agent.run(task)
# if output is None:
# logger.error(
# f"Agent {i+1} returned None for task: {task}"
# )
# raise ValueError(f"Agent {i+1} returned None.")
# self.conversation.add(agent.agent_name, output)
# prompt = output # Update prompt with current agent's output to pass to the next agent
# logger.info(f"Prompt: {prompt}")
# loops += 1
# return self.conversation.return_history_as_string()
@try_except_wrapper
def run(self):
if not self.agent_pool:

@ -1,5 +1,4 @@
from swarms.tools.tool import BaseTool, Tool, StructuredTool, tool
from swarms.tools.code_executor import CodeExecutor
from swarms.tools.exec_tool import (
AgentAction,
AgentOutputParser,
@ -16,23 +15,24 @@ from swarms.tools.tool_utils import (
)
from swarms.tools.pydantic_to_json import (
_remove_a_key,
pydantic_to_functions,
multi_pydantic_to_functions,
base_model_to_openai_function,
multi_base_model_to_openai_function,
function_to_str,
functions_to_str,
)
from swarms.tools.openai_func_calling_schema import (
OpenAIFunctionCallSchema,
OpenAIFunctionCallSchema as OpenAIFunctionCallSchemaBaseModel,
)
from swarms.tools.py_func_to_openai_func_str import (
get_parameter_json_schema,
get_required_params,
get_parameters,
get_openai_function_schema,
get_load_param_if_needed_function,
get_openai_function_schema_from_func,
load_basemodels_if_needed,
serialize_to_str,
get_load_param_if_needed_function,
get_parameters,
get_required_params,
Function,
ToolFunction,
)
from swarms.tools.openai_tool_creator_decorator import create_openai_tool
__all__ = [
@ -40,7 +40,6 @@ __all__ = [
"Tool",
"StructuredTool",
"tool",
"CodeExecutor",
"AgentAction",
"AgentOutputParser",
"BaseAgentOutputParser",
@ -52,16 +51,17 @@ __all__ = [
"scrape_tool_func_docs",
"tool_find_by_name",
"_remove_a_key",
"pydantic_to_functions",
"multi_pydantic_to_functions",
"base_model_to_openai_function",
"multi_base_model_to_openai_function",
"function_to_str",
"functions_to_str",
"OpenAIFunctionCallSchema",
"get_parameter_json_schema",
"get_required_params",
"get_parameters",
"get_openai_function_schema",
"get_load_param_if_needed_function",
"OpenAIFunctionCallSchemaBaseModel",
"get_openai_function_schema_from_func",
"load_basemodels_if_needed",
"serialize_to_str",
"get_load_param_if_needed_function",
"get_parameters",
"get_required_params",
"Function",
"ToolFunction",
"create_openai_tool",
]

@ -1,97 +0,0 @@
import os
import subprocess
import tempfile
class CodeExecutor:
"""
A class for executing code snippets.
Args:
code (str, optional): The code snippet to be executed. Defaults to None.
Methods:
is_python_code(code: str = None) -> bool:
Checks if the given code is Python code.
run_python(code: str = None) -> str:
Executes the given Python code and returns the output.
run(code: str = None) -> str:
Executes the given code and returns the output.
__call__() -> str:
Executes the code and returns the output.
"""
def __init__(self):
self.code = None
def run_python(self, code: str = None) -> str:
"""
Executes the given Python code and returns the output.
Args:
code (str, optional): The Python code to be executed. Defaults to None.
Returns:
str: The output of the code execution.
"""
code = code or self.code
try:
# Create a temporary file
with tempfile.NamedTemporaryFile(
suffix=".py", delete=False
) as temp:
temp.write(code.encode())
temp_filename = temp.name
# Execute the temporary file
output = subprocess.check_output(
f"python {temp_filename}",
shell=True,
)
# Delete the temporary file
os.remove(temp_filename)
return output.decode("utf-8")
except subprocess.CalledProcessError as error:
return error.output.decode("utf-8")
except Exception as error:
return str(error)
    def run(self, code: str = None) -> str:
        """
        Executes the given code as a shell command and returns the output.

        Args:
            code (str, optional): The shell command to execute. Defaults to None.

        Returns:
            str: The output of the command execution.
        """
try:
output = subprocess.check_output(
code,
shell=True,
)
return output.decode("utf-8")
except subprocess.CalledProcessError as e:
return e.output.decode("utf-8")
except Exception as e:
return str(e)
    def __call__(self, task: str, *args, **kwargs) -> str:
        """
        Executes the given task as a shell command and returns the output.

        Args:
            task (str): The command to execute.

        Returns:
            str: The output of the command execution.
        """
        return self.run(task, *args, **kwargs)
# model = CodeExecutor()
# out = model.run("echo Hello")  # use a terminating command; "python3" alone would block on stdin
# print(out)
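With `CodeExecutor` deleted, callers need a local replacement. A minimal standalone sketch of the same subprocess pattern (the helper name and command are illustrative):

```python
import subprocess


def run_shell(command: str) -> str:
    # Execute a shell command and return its stdout;
    # on failure, return the captured output of the failed process.
    try:
        return subprocess.check_output(command, shell=True).decode("utf-8")
    except subprocess.CalledProcessError as error:
        return error.output.decode("utf-8")


print(run_shell("echo Hello, world!"))
```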

@ -1,112 +0,0 @@
import logging
import os
import subprocess
import tempfile
import traceback
from typing import Tuple
async def execute_code_async(code: str) -> Tuple[str, str]:
"""
This function takes a string of code as input, adds some documentation to it,
and then attempts to execute the code asynchronously. If the code execution is successful,
the function returns the new code and an empty string. If the code execution
fails, the function returns the new code and the error message.
Args:
code (str): The original code.
Returns:
Tuple[str, str]: The new code with added documentation and the error message (if any).
"""
# Validate the input
if not isinstance(code, str):
raise ValueError("The code must be a string.")
# Add some documentation to the code
documentation = """
'''
This code has been prepared for deployment in an execution sandbox.
'''
"""
# Combine the documentation and the original code
new_code = documentation + "\n" + code
    # Attempt to execute the code
    out = None  # ensure "out" is defined for the error path below
    error_message = ""
    try:
        # Use a secure environment to execute the code (e.g., a Docker container)
        # This is just a placeholder and would require additional setup and dependencies
        # exec_in_docker(new_code)
        out = exec(new_code)  # exec() always returns None
        return out
        # logging.info("Code executed successfully.")
    except Exception:
        error_message = traceback.format_exc()
        logging.error("Code execution failed. Error: %s", error_message)

    # Return the result and the error message on failure
    return out, error_message
def execute_code_in_sandbox(code: str, language: str = "python"):
"""
Execute code in a specified language using subprocess and return the results or errors.
Args:
code (str): The code to be executed.
language (str): The programming language of the code. Currently supports 'python' only.
Returns:
dict: A dictionary containing either the result or any errors.
"""
result = {"output": None, "errors": None}
try:
if language == "python":
# Write the code to a temporary file
with tempfile.NamedTemporaryFile(
delete=False, suffix=".py", mode="w"
) as tmp:
tmp.write(code)
tmp_path = tmp.name
# Execute the code in a separate process
process = subprocess.run(
["python", tmp_path],
capture_output=True,
text=True,
timeout=10,
)
# Capture the output and errors
result["output"] = process.stdout
result["errors"] = process.stderr
else:
# Placeholder for other languages; each would need its own implementation
raise NotImplementedError(
f"Execution for {language} not implemented."
)
except subprocess.TimeoutExpired:
result["errors"] = "Execution timed out."
except Exception as e:
result["errors"] = str(e)
finally:
# Ensure the temporary file is removed after execution
if "tmp_path" in locals():
os.remove(tmp_path)
return result
# # Example usage
# import json
#
# code_to_execute = """
# print("Hello, world!")
# """

# execution_result = execute_code_in_sandbox(code_to_execute)
# print(json.dumps(execution_result, indent=4))
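For reference, a usage sketch of the removed sandbox helper; the expected result shown in the comment is inferred from the code above:

```python
result = execute_code_in_sandbox('print("Hello, world!")')
print(result)  # expected to resemble {"output": "Hello, world!\n", "errors": ""}
```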

@ -1,40 +0,0 @@
import concurrent.futures
from typing import Any, Callable, Dict, List
from inspect import iscoroutinefunction
import asyncio
# Helper function to run an asynchronous function in a synchronous way
def run_async_function_in_sync(func: Callable, *args, **kwargs) -> Any:
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    try:
        coroutine = func(*args, **kwargs)
        return loop.run_until_complete(coroutine)
    finally:
        loop.close()  # release the loop created for this worker thread
# Main omni function for parallel execution
def omni_parallel_function_caller(
function_calls: List[Dict[str, Any]]
) -> List[Any]:
results = []
with concurrent.futures.ThreadPoolExecutor() as executor:
future_to_call = {}
for call in function_calls:
func = call["function"]
args = call.get("args", ())
kwargs = call.get("kwargs", {})
            if iscoroutinefunction(func):
                # Run the coroutine to completion in a worker thread
                # (ThreadPoolExecutor uses threads, not separate processes)
                future = executor.submit(
                    run_async_function_in_sync, func, *args, **kwargs
                )
else:
# Directly execute synchronous function in a thread
future = executor.submit(func, *args, **kwargs)
future_to_call[future] = call
for future in concurrent.futures.as_completed(future_to_call):
results.append(future.result())
return results
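A usage sketch mixing one synchronous and one asynchronous callable; the functions are illustrative:

```python
import asyncio


def square(x: int) -> int:
    return x * x


async def cube(x: int) -> int:
    await asyncio.sleep(0)  # yield once, to exercise the async path
    return x ** 3


results = omni_parallel_function_caller(
    [
        {"function": square, "args": (3,)},
        {"function": cube, "kwargs": {"x": 2}},
    ]
)
print(results)  # e.g. [9, 8]; ordering follows completion, not submission
```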

@ -9,7 +9,7 @@ from swarms.tools.logits_processor import (
OutputNumbersTokens,
StringStoppingCriteria,
)
from swarms.models.base_llm import AbstractLLM
from swarms.models.base_llm import BaseLLM
GENERATION_MARKER = "|GENERATION|"
@ -47,7 +47,7 @@ class Jsonformer:
max_number_tokens: int = 6,
temperature: float = 1.0,
max_string_token_length: int = 10,
llm: AbstractLLM = None,
llm: BaseLLM = None,
):
self.model = model
self.tokenizer = tokenizer

@ -0,0 +1,81 @@
from functools import wraps
from swarms.tools.py_func_to_openai_func_str import (
get_openai_function_schema_from_func,
)
from swarms.utils.loguru_logger import logger
def create_openai_tool(
name: str = None,
description: str = None,
return_dict: bool = True,
verbose: bool = True,
return_string: bool = False,
return_yaml: bool = False,
):
"""
A decorator function that generates an OpenAI function schema.
Args:
name (str, optional): The name of the OpenAI function. Defaults to None.
description (str, optional): The description of the OpenAI function. Defaults to None.
*args: Variable length argument list.
**kwargs: Arbitrary keyword arguments.
Returns:
dict: The generated OpenAI function schema.
"""
def decorator(func):
@wraps(func)
def wrapper(*args, **kwargs):
try:
# Log the function call
logger.info(f"Creating Tool: {func.__name__}")
# Assert that the arguments are of the correct type
assert isinstance(name, str), "name must be a string"
assert isinstance(
description, str
), "description must be a string"
assert isinstance(
return_dict, bool
), "return_dict must be a boolean"
assert isinstance(
verbose, bool
), "verbose must be a boolean"
# Call the function
func(*args, **kwargs)
# Get the openai function schema
schema = get_openai_function_schema_from_func(
func, name=name, description=description
)
# Return the schema
if return_dict:
return schema
elif return_string is True:
return str(schema)
elif return_yaml is True:
# schema = YamlModel().dict_to_yaml(schema)
return schema
else:
return schema
except AssertionError as e:
# Log the assertion error
logger.error(f"Assertion error: {str(e)}")
raise
except Exception as e:
# Log the exception
logger.error(f"Exception occurred: {str(e)}")
raise
return wrapper
return decorator
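A usage sketch of the new decorator. Note that, as written above, calling the decorated function runs it for side effects but returns the generated schema rather than the function's own return value:

```python
@create_openai_tool(
    name="add_numbers",
    description="Add two integers and return the sum.",
)
def add_numbers(a: int, b: int) -> int:
    return a + b


schema = add_numbers(1, 2)  # yields the OpenAI function schema dict, not 3
print(schema)
```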

@ -368,7 +368,7 @@ def get_missing_annotations(
return missing, unannotated_with_default
def get_openai_function_schema(
def get_openai_function_schema_from_func(
function: Callable[..., Any],
*,
name: Optional[str] = None,
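A sketch of calling the renamed helper directly; the annotated function is illustrative, and the `description` keyword mirrors its use in the decorator above:

```python
from swarms.tools.py_func_to_openai_func_str import (
    get_openai_function_schema_from_func,
)


def get_weather(city: str, unit: str = "celsius") -> str:
    """Return a short weather summary for a city."""
    return f"Sunny in {city} (reported in {unit})."


schema = get_openai_function_schema_from_func(
    get_weather,
    name="get_weather",
    description="Look up the current weather for a city.",
)
```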

@ -14,8 +14,9 @@ def _remove_a_key(d: dict, remove_key: str) -> None:
_remove_a_key(d[key], remove_key)
def pydantic_to_functions(
def base_model_to_openai_function(
pydantic_type: type[BaseModel],
output_str: bool = False,
) -> dict[str, Any]:
"""
Convert a Pydantic model to a dictionary representation of functions.
@ -57,21 +58,37 @@ def pydantic_to_functions(
_remove_a_key(parameters, "title")
_remove_a_key(parameters, "additionalProperties")
return {
"function_call": {
"name": pydantic_type.__class__.__name__.lower(),
},
"functions": [
{
if output_str:
out = {
"function_call": {
"name": pydantic_type.__class__.__name__.lower(),
"description": schema["description"],
"parameters": parameters,
},
],
}
"functions": [
{
"name": pydantic_type.__class__.__name__.lower(),
"description": schema["description"],
"parameters": parameters,
},
],
}
return str(out)
else:
return {
"function_call": {
"name": pydantic_type.__class__.__name__.lower(),
},
"functions": [
{
"name": pydantic_type.__class__.__name__.lower(),
"description": schema["description"],
"parameters": parameters,
},
],
}
def multi_pydantic_to_functions(
def multi_base_model_to_openai_function(
pydantic_types: List[BaseModel] = None,
) -> dict[str, Any]:
"""
@ -85,7 +102,7 @@ def multi_pydantic_to_functions(
"""
functions: list[dict[str, Any]] = [
pydantic_to_functions(pydantic_type)["functions"][0]
base_model_to_openai_function(pydantic_type)["functions"][0]
for pydantic_type in pydantic_types
]
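A sketch of the renamed conversion helper with a small Pydantic model; the model itself is illustrative:

```python
from pydantic import BaseModel, Field

from swarms.tools.pydantic_to_json import base_model_to_openai_function


class Search(BaseModel):
    """Search the web for a query."""

    query: str = Field(..., description="The search query")


schema = base_model_to_openai_function(Search)  # dict form
schema_str = base_model_to_openai_function(Search, output_str=True)  # same schema, stringified
```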
