[FEAT][YamlModel] [CLEANUP][Docs]

pull/442/head
Kye 9 months ago
parent 72eeeeeee6
commit 3cedc09c84

.gitignore vendored

@@ -21,7 +21,7 @@ Cargo.lock
.DS_STORE
Cargo.lock
swarms/agents/.DS_Store
artifacts_two
logs
_build
conversation.txt

@@ -0,0 +1,323 @@
# **Equipping Autonomous Agents with Tools**
Tools play a crucial role in enhancing the capabilities of AI agents. Swarms, a powerful open-source framework, provides a robust and flexible environment for building and integrating tools with AI agents. In this comprehensive guide, we'll explore the process of creating tools in Swarms, including the 3-step process, tool decorator, adding types and doc strings, and integrating them into the Agent class.
## **Introduction to Swarms**
Swarms is a Python-based framework that simplifies the development and deployment of AI agents. It provides a seamless integration with large language models (LLMs) and offers a wide range of tools and utilities to streamline the agent development process. One of the core features of Swarms is the ability to create and integrate custom tools, which can significantly extend the capabilities of AI agents.
Learn more here: [GitHub - kyegomez/swarms: Build, Deploy, and Scale Reliable Swarms of Autonomous Agents](https://github.com/kyegomez/swarms)
And, join our community for real-time support and conversations with friends! [Join the Agora Discord Server!](https://discord.gg/A8DrG5nj)
## **Installation**
First, install swarms with the following command. If you have any questions, please ask us in the Discord!
```bash
pip3 install -U swarms
```
## **Necessary Imports**
Before we dive into the process of creating tools in Swarms, let's familiarize ourselves with the necessary imports:
```python
from swarms import Agent, Anthropic, tool
import subprocess
```
- These imports provide access to the core components of the Swarms framework, including the `Agent` class, the `Anthropic` language model, and the `tool` decorator for creating custom tools.
- `import subprocess`: This import allows us to interact with the system's terminal and execute shell commands, which can be useful for certain types of tools.
With these imports in place, we're ready to explore the process of creating tools in Swarms.
### **The 3-Step Process**
Creating tools in Swarms follows a straightforward 3-step process:
1. Define the tool function.
2. Decorate the function with `@tool` and document it with type hints and a docstring.
3. Add the tool to the `Agent` instance.
Let's go through each step in detail, accompanied by code examples.
### **Step 1: Define the Tool Function**
The first step in creating a tool is to define a Python function that encapsulates the desired functionality. This function will serve as the core logic for your tool. Here's an example of a tool function that allows you to execute code in the terminal:
```python
def terminal(code: str) -> str:
    """
    Run code in the terminal.

    Args:
        code (str): The code to run in the terminal.

    Returns:
        str: The output of the code.
    """
    out = subprocess.run(
        code, shell=True, capture_output=True, text=True
    ).stdout
    return str(out)
```
In this example, the `terminal` function takes a string `code` as input and uses the `subprocess` module to execute the provided code in the system's terminal. The output of the code is captured and returned as a string.
### **Let's break down the components of this function:**
- **Function Signature:** The function signature `def terminal(code: str) -> str:` defines the function name (`terminal`), the parameter name and type (`code: str`), and the return type (`-> str`). This adheres to Python's type hinting conventions.
- **Docstring**: The multiline string enclosed in triple quotes (`"""` ... `"""`) is a docstring, which provides a brief description of the function, its parameters, and its return value. Docstrings are essential for documenting your code and making it easier for the agent to understand and use your tools.
- **Function Body**: The body of the function contains the actual logic for executing the code in the terminal. It uses the `subprocess.run` function to execute the provided `code` in the shell, capturing the output (`capture_output=True`), and returning the output as text (`text=True`). The `stdout` attribute of the `CompletedProcess` object contains the captured output, which is converted to a string and returned.
This is a simple example, but it demonstrates the key components of a tool function: a well-defined signature with type hints, a descriptive docstring, and the core logic encapsulated within the function body.
### **Step 2: Decorate the Function with `@tool`**
After defining the tool function, the next step is to decorate it with the `@tool` decorator provided by Swarms. This decorator registers the function as a tool, allowing it to be used by AI agents within the Swarms framework.
Here's how you would decorate the `terminal` function from the previous example:
```python
@tool
def terminal(code: str) -> str:
    """
    Run code in the terminal.

    Args:
        code (str): The code to run in the terminal.

    Returns:
        str: The output of the code.
    """
    out = subprocess.run(
        code, shell=True, capture_output=True, text=True
    ).stdout
    return str(out)
```
The `@tool` decorator is placed directly above the function definition. This decorator performs the necessary registration and configuration steps to integrate the tool with the Swarms framework.
### **Step 3: Add the Tool to the `Agent` Instance**
The final step in creating a tool is to add it to the `Agent` instance. The `Agent` class is a core component of the Swarms framework and represents an AI agent capable of interacting with humans and other agents, as well as utilizing the available tools.
Here's an example of how to create an `Agent` instance and add the `terminal` tool:
```python
# Model
llm = Anthropic(
    temperature=0.1,
)

# Agent
agent = Agent(
    agent_name="Devin",
    system_prompt=(
        "Autonomous agent that can interact with humans and other"
        " agents. Be Helpful and Kind. Use the tools provided to"
        " assist the user. Return all code in markdown format."
    ),
    llm=llm,
    max_loops="auto",
    autosave=True,
    dashboard=False,
    streaming_on=True,
    verbose=True,
    stopping_token="<DONE>",
    interactive=True,
    tools=[terminal],
    code_interpreter=True,
)
```
In this example, we first create an instance of the `Anthropic` language model, which will be used by the agent for natural language understanding and generation. We then create the `Agent` instance, configure its behavior, and pass the `terminal` tool in the `tools` list so the agent can invoke it.
## **The Necessity of Documentation**
Before creating tools, it's essential to understand the importance of documentation. Clear and concise documentation ensures that your code is easily understandable and maintainable, not only for yourself but also for other developers who may work with your codebase in the future.
Effective documentation serves several purposes:
1. Code Clarity: Well-documented code is easier to read and understand, making it more accessible for both developers and non-technical stakeholders.
2. Collaboration: When working in a team or contributing to open-source projects, proper documentation facilitates collaboration and knowledge sharing.
3. Onboarding: Comprehensive documentation can significantly streamline the onboarding process for new team members or contributors, reducing the time required to familiarize themselves with the codebase.
4. Future Maintenance: As projects evolve and requirements change, well-documented code becomes invaluable for future maintenance and updates.
In the context of creating tools in Swarms, documentation plays a vital role in ensuring that your tools are easily discoverable, understandable, and usable by other developers and AI agents.
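As a small illustration (generic Python, not Swarms-specific), a docstring written this way travels with the function object at runtime, which is how frameworks, editors, and agents can discover what a function does:

```python
def greet(name: str) -> str:
    """Return a friendly greeting for the given name."""
    return f"Hello, {name}!"


# The docstring is attached to the function object, so tooling
# (help(), IDEs, or an agent framework) can read it at runtime.
print(greet.__doc__)
```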
## **Type Handling**
Python is a dynamically-typed language, which means that variables can hold values of different types during runtime. While this flexibility can be advantageous in certain scenarios, it can also lead to potential errors and inconsistencies, especially in larger codebases.
Type hints, introduced in Python 3.5, provide a way to explicitly annotate the expected types of variables, function parameters, and return values. By incorporating type hints into your code, you can:
1. Improve Code Readability: Type hints make it easier for developers to understand the expected data types, reducing the risk of introducing bugs due to type-related errors.
2. Enable Static Type Checking: With tools like mypy, you can perform static type checking, catching potential type-related issues before running the code.
3. Enhance Code Completion and Tooling: Modern IDEs and code editors can leverage type hints to provide better code completion, refactoring capabilities, and inline documentation.
In the context of creating tools in Swarms, type hints are crucial for ensuring that your tools are used correctly by AI agents and other developers. By clearly defining the expected input and output types, you can reduce the likelihood of runtime errors and improve the overall reliability of your tools.
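As a quick sketch (again generic Python, not Swarms-specific), type hints document intent at the call site and let a static checker such as mypy flag misuse before the code ever runs:

```python
def add_numbers(a: int, b: int) -> int:
    """Add two integers and return their sum."""
    return a + b


print(add_numbers(2, 3))  # 5

# A static checker such as mypy would reject this call before runtime:
# add_numbers("2", "3")  # error: incompatible type "str"; expected "int"
```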
Now, let's continue with other tool examples!
### **Additional Tool Examples**
To further illustrate the process of creating tools in Swarms, let's explore a few more examples of tool functions with varying functionalities.
**Browser Tool**
```python
@tool
def browser(query: str) -> str:
    """
    Search the query in the browser with the `browser` tool.

    Args:
        query (str): The query to search in the browser.

    Returns:
        str: The search results.
    """
    import webbrowser

    url = f"https://www.google.com/search?q={query}"
    webbrowser.open(url)
    return f"Searching for {query} in the browser."
```
The `browser` tool allows the agent to perform web searches by opening the provided `query` in the default web browser. It leverages the `webbrowser` module to construct the search URL and open it in the browser. The tool returns a string indicating that the search is being performed.
**File Creation Tool**
```python
@tool
def create_file(file_path: str, content: str) -> str:
    """
    Create a file using the file editor tool.

    Args:
        file_path (str): The path to the file.
        content (str): The content to write to the file.

    Returns:
        str: The result of the file creation operation.
    """
    with open(file_path, "w") as file:
        file.write(content)
    return f"File {file_path} created successfully."
```
The `create_file` tool allows the agent to create a new file at the specified `file_path` with the provided `content`. It uses Python's built-in `open` function in write mode (`"w"`) to create the file and write the content to it. The tool returns a string indicating the successful creation of the file.
**File Editor Tool**
```python
@tool
def file_editor(file_path: str, mode: str, content: str) -> str:
    """
    Edit a file using the file editor tool.

    Args:
        file_path (str): The path to the file.
        mode (str): The mode to open the file in.
        content (str): The content to write to the file.

    Returns:
        str: The result of the file editing operation.
    """
    with open(file_path, mode) as file:
        file.write(content)
    return f"File {file_path} edited successfully."
```
The `file_editor` tool is similar to the `create_file` tool but provides more flexibility by allowing the agent to specify the mode in which the file should be opened (e.g., `"w"` for write, `"a"` for append). It writes the provided `content` to the file at the specified `file_path` and returns a string indicating the successful editing of the file.
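To see the difference between the two modes, here is a small standalone sketch of the same logic (without the `@tool` decorator, and using a temporary directory so it is safe to run):

```python
import os
import tempfile


def create_file(file_path: str, content: str) -> str:
    # Write mode ("w") creates the file or overwrites an existing one.
    with open(file_path, "w") as file:
        file.write(content)
    return f"File {file_path} created successfully."


def file_editor(file_path: str, mode: str, content: str) -> str:
    # The caller chooses the mode, e.g. "w" to overwrite or "a" to append.
    with open(file_path, mode) as file:
        file.write(content)
    return f"File {file_path} edited successfully."


path = os.path.join(tempfile.mkdtemp(), "notes.txt")
create_file(path, "first line\n")
file_editor(path, "a", "second line\n")  # append keeps the first line
with open(path) as f:
    print(f.read())
```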
These examples demonstrate the versatility of tools in Swarms and how they can be designed to perform a wide range of tasks, from executing terminal commands to interacting with files and performing web searches.
### **Plugging Tools into the Agent**
After defining and decorating your tool functions, the next step is to integrate them into the `Agent` instance. This process involves passing the tools as a list to the `tools` parameter when creating the `Agent` instance.
```python
# Model
llm = Anthropic(temperature=0.1)

# Tools
tools = [terminal, browser, create_file, file_editor]

# Agent
agent = Agent(
    agent_name="Devin",
    system_prompt=(
        "Autonomous agent that can interact with humans and other"
        " agents. Be Helpful and Kind. Use the tools provided to"
        " assist the user. Return all code in markdown format."
    ),
    llm=llm,
    max_loops="auto",
    autosave=True,
    dashboard=False,
    streaming_on=True,
    verbose=True,
    stopping_token="<DONE>",
    interactive=True,
    tools=tools,
    code_interpreter=True,
)
```
In this example, we create a list `tools` containing the `terminal`, `browser`, `create_file`, and `file_editor` tools. This list is then passed to the `tools` parameter when creating the `Agent` instance.
Once the `Agent` instance is created with the provided tools, it can utilize these tools to perform various tasks and interact with external systems. The agent can call these tools as needed, passing the required arguments and receiving the corresponding return values.
### **Using Tools in Agent Interactions**
After integrating the tools into the `Agent` instance, you can utilize them in your agent's interactions with humans or other agents. Here's an example of how an agent might use the `terminal` tool:
```python
out = agent("Run the command 'ls' in the terminal.")
print(out)
```
In this example, the human user instructs the agent to run the `"ls"` command in the terminal. The agent processes this request and utilizes the `terminal` tool to execute the command, capturing and returning the output.
Similarly, the agent can leverage other tools, such as the `browser` tool for web searches or the `file_editor` tool for creating and modifying files, based on the user's instructions.
### **Conclusion**
Creating tools in Swarms is a powerful way to extend the capabilities of AI agents and enable them to interact with external systems and perform a wide range of tasks. By following the 3-step process of defining the tool function, decorating it with `@tool`, and adding it to the `Agent` instance, you can seamlessly integrate custom tools into your AI agent's workflow.
Throughout this blog post, we explored the importance of documentation and type handling, which are essential for maintaining code quality, facilitating collaboration, and ensuring the correct usage of your tools by other developers and AI agents.
We also covered the necessary imports and provided detailed code examples for various types of tools, such as executing terminal commands, performing web searches, and creating and editing files. These examples demonstrated the flexibility and versatility of tools in Swarms, allowing you to tailor your tools to meet your specific project requirements.
By leveraging the power of tools in Swarms, you can empower your AI agents with diverse capabilities, enabling them to tackle complex tasks, interact with external systems, and provide more comprehensive and intelligent solutions.

@@ -67,6 +67,15 @@ You can install `swarms` with pip in a
poetry install --extras "desktop"
```
!!! example "NPM install |WIP|"

    === "headless"

        Get started with the NPM implementation of Swarms with this command:

        ```bash
        npm install swarms-js
        ```
## Documentation

@@ -0,0 +1,251 @@
# YamlModel: A Pydantic Model for YAML Data
### Introduction
The `YamlModel` class, derived from `BaseModel` in Pydantic, offers a convenient way to work with YAML data in your Python applications. It provides methods for serialization (converting to YAML), deserialization (creating an instance from YAML), and schema generation. This documentation will delve into the functionalities of `YamlModel` and guide you through its usage with illustrative examples.
### Purpose and Functionality
The primary purpose of `YamlModel` is to streamline the interaction between your Python code and YAML data. It accomplishes this by:
* **Serialization:** Transforming a `YamlModel` instance into a YAML string representation using the `to_yaml()` method.
* **Deserialization:** Constructing a `YamlModel` instance from a provided YAML string using the `from_yaml()` class method.
* **JSON to YAML Conversion:** Facilitating the conversion of JSON data to YAML format through the `json_to_yaml()` static method.
* **Saving to YAML File:** Enabling the storage of `YamlModel` instances as YAML files using the `save_to_yaml()` method.
* (Future Implementation) **Schema Generation:** The `create_yaml_schema()` class method (not yet implemented but included for future reference) will generate a YAML schema that reflects the structure of the `YamlModel` class and its fields.
### Class Definition and Arguments
The `YamlModel` class inherits from Pydantic's `BaseModel` class. You can define your custom YAML models by creating subclasses of `YamlModel` and specifying your data fields within the class definition. Here's the breakdown of the `YamlModel` class and its methods:
```python
class YamlModel(BaseModel):
    """
    A Pydantic model class for working with YAML data.
    """

    def to_yaml(self):
        """
        Serialize the Pydantic model instance to a YAML string.
        """
        return yaml.safe_dump(self.dict(), sort_keys=False)

    @classmethod
    def from_yaml(cls, yaml_str: str):
        """
        Create an instance of the class from a YAML string.

        Args:
            yaml_str (str): The YAML string to parse.

        Returns:
            cls: An instance of the class with attributes populated from the YAML data.
                Returns None if there was an error loading the YAML data.
        """
        # ...

    @staticmethod
    def json_to_yaml(json_str: str):
        """
        Convert a JSON string to a YAML string.
        """
        # ...

    def save_to_yaml(self, filename: str):
        """
        Save the Pydantic model instance as a YAML file.
        """
        # ...

    # TODO: Implement a method to create a YAML schema from the model fields
    # @classmethod
    # def create_yaml_schema(cls):
    #     # ...
```
**Arguments:**
* `self` (implicit): Refers to the current instance of the `YamlModel` class.
* `yaml_str` (str): The YAML string used for deserialization in the `from_yaml()` method.
* `json_str` (str): The JSON string used for conversion to YAML in the `json_to_yaml()` method.
* `filename` (str): The filename (including path) for saving the YAML model instance in the `save_to_yaml()` method.
### Detailed Method Descriptions
**1. to_yaml()**
This method transforms an instance of the `YamlModel` class into a YAML string representation. It utilizes the `yaml.safe_dump()` function from the `PyYAML` library to ensure secure YAML data generation. The `sort_keys=False` argument guarantees that the order of keys in the resulting YAML string remains consistent with the order of fields defined in your `YamlModel` subclass.
**Example:**
```python
class User(YamlModel):
    name: str
    age: int
    is_active: bool


user = User(name="Bob", age=30, is_active=True)
yaml_string = user.to_yaml()
print(yaml_string)
```
This code will output a YAML string representation of the `user` object, resembling:
```yaml
name: Bob
age: 30
is_active: true
```
**2. from_yaml(cls, yaml_str)** (Class Method)
The `from_yaml()` class method is responsible for constructing a `YamlModel` instance from a provided YAML string.
* **Arguments:**
* `cls` (class): The class representing the desired YAML model (the subclass of `YamlModel` that matches the structure of the YAML data).
* `yaml_str` (str): The YAML string containing the data to be parsed and used for creating the model instance.
* **Returns:**
* `cls` (instance): An instance of the specified class (`cls`) populated with the data extracted from the YAML string. If an error occurs during parsing, it returns `None`.
* **Error Handling:**
The `from_yaml()` method employs `yaml.safe_load()` for secure YAML parsing. It incorporates a `try-except` block to handle potential `ValueError` exceptions that might arise during the parsing process. If an error is encountered, it logs the error message and returns `None`.
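Based on that description, the elided method body might look like the following minimal sketch (an assumption for illustration; the actual Swarms implementation may differ):

```python
import yaml
from pydantic import BaseModel


class YamlModel(BaseModel):
    @classmethod
    def from_yaml(cls, yaml_str: str):
        try:
            # safe_load avoids executing arbitrary YAML tags.
            data = yaml.safe_load(yaml_str)
            return cls(**data)
        except ValueError as error:
            # Pydantic validation errors subclass ValueError.
            print(f"Error loading YAML: {error}")
            return None


class User(YamlModel):
    name: str
    age: int


user = User.from_yaml("name: Alice\nage: 25\n")
print(user.name)  # Alice
```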
**Example:**
```python
class User(YamlModel):
    name: str
    age: int
    is_active: bool


yaml_string = """
name: Alice
age: 25
is_active: false
"""

user = User.from_yaml(yaml_string)
print(user.name)  # Output: Alice
```
**3. json_to_yaml(json_str)** (Static Method)
This static method in the `YamlModel` class serves the purpose of converting a JSON string into a YAML string representation.
* **Arguments:**
* `json_str` (str): The JSON string that needs to be converted to YAML format.
* **Returns:**
* `str`: The converted YAML string representation of the provided JSON data.
* **Functionality:**
The `json_to_yaml()` method leverages the `json.loads()` function to parse the JSON string into a Python dictionary. Subsequently, it utilizes `yaml.dump()` to generate the corresponding YAML string representation from the parsed dictionary.
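The conversion itself can be sketched in a couple of lines (assuming a plain `json.loads` → `yaml.dump` pipeline, as described):

```python
import json

import yaml


def json_to_yaml(json_str: str) -> str:
    # Parse the JSON into Python objects, then dump those objects as YAML.
    data = json.loads(json_str)
    return yaml.dump(data, sort_keys=False)


print(json_to_yaml('{"name": "Charlie", "age": 42, "is_active": true}'))
```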
**Example:**
```python
json_string = '{"name": "Charlie", "age": 42, "is_active": true}'
yaml_string = YamlModel.json_to_yaml(json_string)
print(yaml_string)
```
This code snippet will convert the JSON data to a YAML string, likely resembling:
```yaml
name: Charlie
age: 42
is_active: true
```
**4. save_to_yaml(self, filename)**
The `save_to_yaml()` method facilitates the storage of a `YamlModel` instance as a YAML file.
* **Arguments:**
* `self` (implicit): Refers to the current instance of the `YamlModel` class that you intend to save.
* `filename` (str): The desired filename (including path) for the YAML file.
* **Functionality:**
The `save_to_yaml()` method employs the previously explained `to_yaml()` method to generate a YAML string representation of the `self` instance. It then opens the specified file in write mode (`"w"`) and writes the YAML string content to the file.
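A standalone sketch of that behavior (using a temporary directory, and `yaml.safe_dump` standing in for the model's `to_yaml()` output):

```python
import os
import tempfile

import yaml


def save_to_yaml(data: dict, filename: str) -> None:
    # Serialize to a YAML string and write it to the given path.
    with open(filename, "w") as f:
        f.write(yaml.safe_dump(data, sort_keys=False))


path = os.path.join(tempfile.mkdtemp(), "employee.yaml")
save_to_yaml({"name": "David", "department": "Engineering"}, path)
print(open(path).read())
```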
**Example:**
```python
class Employee(YamlModel):
    name: str
    department: str
    salary: float


employee = Employee(name="David", department="Engineering", salary=95000.00)
employee.save_to_yaml("employee.yaml")
```
This code will create a YAML file named "employee.yaml" containing the serialized representation of the `employee` object.
### More Usage Examples
```python
class User(YamlModel):
    name: str
    age: int
    is_active: bool


# Create an instance of the User model
user = User(name="Alice", age=30, is_active=True)

# Serialize the User instance to YAML and print it
yaml_string = user.to_yaml()
print(yaml_string)
```
This code snippet demonstrates the creation of a `User` instance and its subsequent serialization to a YAML string using the `to_yaml()` method. The printed output will likely resemble:
```yaml
name: Alice
age: 30
is_active: true
```
### Converting JSON to YAML
```python
# Convert JSON string to YAML and print
json_string = '{"name": "Bob", "age": 25, "is_active": false}'
yaml_string = YamlModel.json_to_yaml(json_string)
print(yaml_string)
```
This example showcases the conversion of a JSON string containing user data into a YAML string representation using the `json_to_yaml()` static method. The resulting YAML string might look like:
```yaml
name: Bob
age: 25
is_active: false
```
### Saving User Instance to YAML File
```python
# Save the User instance to a YAML file
user.save_to_yaml("user.yaml")
```
This code demonstrates the utilization of the `save_to_yaml()` method to store the `user` instance as a YAML file named "user.yaml". The contents of the file will mirror the serialized YAML string representation of the user object.
## Additional Considerations
* Ensure you have the `PyYAML` library installed (`pip install pyyaml`) to leverage the YAML parsing and serialization functionalities within `YamlModel`.
* Remember that the `create_yaml_schema()` method is not yet implemented but serves as a placeholder for future enhancements.
* For complex data structures within your YAML models, consider leveraging Pydantic's data validation and nested model capabilities for robust data management.
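For instance, nested Pydantic models compose naturally, and the nesting carries through to the YAML output (a generic sketch using `yaml.safe_dump` directly):

```python
import yaml
from pydantic import BaseModel


class Address(BaseModel):
    city: str
    country: str


class Person(BaseModel):
    name: str
    address: Address  # nested model; validated recursively


person = Person(name="Eve", address=Address(city="Oslo", country="Norway"))
print(yaml.safe_dump(person.dict(), sort_keys=False))
```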
## Conclusion
The `YamlModel` class in Pydantic offers a streamlined approach to working with YAML data in your Python projects. By employing the provided methods (`to_yaml()`, `from_yaml()`, `json_to_yaml()`, and `save_to_yaml()`), you can efficiently convert between Python objects and YAML representations, facilitating data persistence and exchange. This comprehensive documentation empowers you to effectively utilize `YamlModel` for your YAML data processing requirements.

@@ -1,8 +1,8 @@
site_name: Swarms Docs
site_name: Swarms Documentation
plugins:
- glightbox
- search
copyright: "&copy; APAC Corp, Inc."
copyright: "&copy; TGSC, Corporation."
extra_css:
- docs/assets/css/extra.css
extra:
@@ -57,20 +57,23 @@ markdown_extensions:
nav:
- Home:
- Overview: "index.md"
- Contributing: "contributing.md"
- Limitations of Individual Agents: "limits_of_individual_agents.md"
- Why Swarms: "why_swarms.md"
- The Swarms Bounty System: "swarms_bounty_system.md"
- Build an Agent: "diy_your_own_agent.md"
- Agent with tools: "examples/tools_agent.md"
- Swarms Cloud API:
- Overview: "swarms_cloud/main.md"
- Migrate from OpenAI to Swarms in 3 lines of code: "swarms_cloud/migrate_openai.md"
- Contributors:
- Contributing: "contributing.md"
- Why Swarms: "why_swarms.md"
- The Swarms Bounty System: "swarms_bounty_system.md"
- Swarms Framework [PY]:
- Overview: "swarms/index.md"
- DIY Build Your Own Agent: "diy_your_own_agent.md"
- Agents with Tools: "examples/tools_agent.md"
- swarms.agents:
- Agents:
- WorkerAgent: "swarms/agents/workeragent.md"
- OmniAgent: "swarms/agents/omni_agent.md"
- AbstractAgent: "swarms/agents/abstractagent.md"
- ToolAgent: "swarms/agents/toolagent.md"
- swarms.models:
@@ -98,7 +101,7 @@ nav:
- DistilWhisperModel: "swarms/models/distilled_whisperx.md"
- swarms.structs:
- Foundational Structures:
- agent: "swarms/structs/agent.md"
- Agent: "swarms/structs/agent.md"
- basestructure: "swarms/structs/basestructure.md"
- artifactupload: "swarms/structs/artifactupload.md"
- taskinput: "swarms/structs/taskinput.md"
@@ -106,6 +109,7 @@ nav:
- artifact: "swarms/structs/artifact.md"
- task: "swarms/structs/task.md"
- Task Queue Base: "swarms/structs/taskqueuebase.md"
- YamlModel: "swarms/structs/yaml_model.md"
- Workflows:
- recursiveworkflow: "swarms/structs/recursiveworkflow.md"
- concurrentworkflow: "swarms/structs/concurrentworkflow.md"
@@ -121,30 +125,13 @@ nav:
- MajorityVoting: "swarms/structs/majorityvoting.md"
- swarms.memory:
- Building Custom Vector Memory Databases with the AbstractVectorDatabase Class: "swarms/memory/diy_memory.md"
- Vector Databases:
- Weaviate: "swarms/memory/weaviate.md"
- PineconeDB: "swarms/memory/pinecone.md"
- PGVectorStore: "swarms/memory/pg.md"
- ShortTermMemory: "swarms/memory/short_term_memory.md"
- swarms.utils:
- Misc:
- pdf_to_text: "swarms/utils/pdf_to_text.md"
- load_model_torch: "swarms/utils/load_model_torch.md"
- metrics_decorator: "swarms/utils/metrics_decorator.md"
- prep_torch_inference: "swarms/utils/prep_torch_inference.md"
- find_image_path: "swarms/utils/find_image_path.md"
- print_class_parameters: "swarms/utils/print_class_parameters.md"
- extract_code_from_markdown: "swarms/utils/extract_code_from_markdown.md"
- check_device: "swarms/utils/check_device.md"
- display_markdown_message: "swarms/utils/display_markdown_message.md"
- phoenix_tracer: "swarms/utils/phoenix_tracer.md"
- limit_tokens_from_string: "swarms/utils/limit_tokens_from_string.md"
- math_eval: "swarms/utils/math_eval.md"
- Guides:
- Building Custom Vector Memory Databases with the AbstractVectorDatabase Class: "swarms/memory/diy_memory.md"
- How to Create A Custom Language Model: "swarms/models/custom_model.md"
- Deploying Azure OpenAI in Production: A Comprehensive Guide: "swarms/models/azure_openai.md"
- DIY Build Your Own Agent: "diy_your_own_agent.md"
- Equipping Autonomous Agents with Tools: "examples/tools_agent.md"
- Overview: "examples/index.md"
- Agents:
- Agent: "examples/flow.md"
@@ -177,7 +164,6 @@ nav:
- Research: "corporate/research.md"
- Demos: "corporate/demos.md"
- Checklist: "corporate/checklist.md"
- Organization:
- FrontEnd Member Onboarding: "corporate/front_end_contributors.md"

@@ -2,7 +2,7 @@ import json
import logging
import threading
from swarms.structs.base import BaseStructure
from swarms.structs.base_structure import BaseStructure
class ShortTermMemory(BaseStructure):

@@ -6,7 +6,7 @@ from swarms.structs.agent_process import (
)
from swarms.structs.auto_swarm import AutoSwarm, AutoSwarmRouter
from swarms.structs.autoscaler import AutoScaler
from swarms.structs.base import BaseStructure
from swarms.structs.base_structure import BaseStructure
from swarms.structs.base_swarm import BaseSwarm
from swarms.structs.base_workflow import BaseWorkflow
from swarms.structs.block_wrapper import block
@@ -80,6 +80,13 @@ from swarms.structs.utils import (
)
from swarms.structs.agent_rearrange import AgentRearrange
from swarms.structs.yaml_model import (
get_type_name,
create_yaml_schema_from_dict,
pydantic_type_to_yaml_schema,
YamlModel,
)
__all__ = [
"Agent",
@@ -149,4 +156,8 @@ __all__ = [
"find_token_in_text",
"parse_tasks",
"AgentRearrange",
"get_type_name",
"create_yaml_schema_from_dict",
"pydantic_type_to_yaml_schema",
"YamlModel",
]

@@ -7,7 +7,7 @@ from typing import Callable, Dict, List, Optional
from termcolor import colored
from swarms.structs.agent import Agent
from swarms.structs.base import BaseStructure
from swarms.structs.base_structure import BaseStructure
from swarms.utils.decorators import (
error_decorator,
log_decorator,

@@ -2,7 +2,6 @@ import asyncio
import concurrent.futures
import json
import os
from abc import ABC
from concurrent.futures import ThreadPoolExecutor
from datetime import datetime
from typing import Any, Dict, List, Optional
@@ -13,9 +12,10 @@ try:
import gzip
except ImportError as error:
print(f"Error importing gzip: {error}")
from pydantic import BaseModel
class BaseStructure(ABC):
class BaseStructure(BaseModel):
"""Base structure.
@@ -59,26 +59,17 @@ class BaseStructure(ABC):
run_with_resources_batched: _description_
Examples:
>>> base_structure = BaseStructure()
>>> base_structure
BaseStructure(name=None, description=None, save_metadata=True, save_artifact_path='./artifacts', save_metadata_path='./metadata', save_error_path='./errors')
"""
def __init__(
self,
name: Optional[str] = None,
description: Optional[str] = None,
save_metadata: bool = True,
save_artifact_path: Optional[str] = "./artifacts",
save_metadata_path: Optional[str] = "./metadata",
save_error_path: Optional[str] = "./errors",
*args,
**kwargs,
):
self.name = name
self.description = description
self.save_metadata = save_metadata
self.save_artifact_path = save_artifact_path
self.save_metadata_path = save_metadata_path
self.save_error_path = save_error_path
name: Optional[str] = None
description: Optional[str] = None
save_metadata: bool = True
save_artifact_path: Optional[str] = "./artifacts"
save_metadata_path: Optional[str] = "./metadata"
save_error_path: Optional[str] = "./errors"
def run(self, *args, **kwargs):
"""Run the structure."""
@ -430,3 +421,8 @@ class BaseStructure(ABC):
return self.run_batched(
batched_data, batch_size, *args, **kwargs
)
# x = BaseStructure()
# print(x)
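The hunk above swaps BaseStructure's hand-written `__init__` for pydantic field declarations. A minimal standalone sketch of that pattern (the class name here is illustrative, not part of swarms):

```python
from typing import Optional

from pydantic import BaseModel


class StructureSketch(BaseModel):
    # Declaring typed fields with defaults replaces the old __init__ body;
    # pydantic generates __init__, validation, and a readable repr for free.
    name: Optional[str] = None
    description: Optional[str] = None
    save_metadata: bool = True
    save_artifact_path: Optional[str] = "./artifacts"


structure = StructureSketch(name="demo")
print(structure.name, structure.save_metadata)
```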

@ -4,7 +4,7 @@ from typing import Any, Dict, List, Optional
from termcolor import colored
from swarms.structs.agent import Agent
from swarms.structs.base import BaseStructure
from swarms.structs.base_structure import BaseStructure
from swarms.structs.task import Task
from swarms.utils.loguru_logger import logger

@ -1,6 +1,6 @@
from typing import Any, Dict, Optional
from swarms.structs.base import BaseStructure
from swarms.structs.base_structure import BaseStructure
class BlocksDict(BaseStructure):

@ -1,6 +1,6 @@
from typing import Any, List, Optional
from swarms.structs.base import BaseStructure
from swarms.structs.base_structure import BaseStructure
class BlocksList(BaseStructure):

@ -2,7 +2,7 @@ import concurrent.futures
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional
from swarms.structs.base import BaseStructure
from swarms.structs.base_structure import BaseStructure
from swarms.structs.task import Task
from swarms.utils.logger import logger

@ -5,7 +5,7 @@ from typing import Optional
from termcolor import colored
from swarms.memory.base_db import AbstractDatabase
from swarms.structs.base import BaseStructure
from swarms.structs.base_structure import BaseStructure
from typing import Any

@ -1,6 +1,6 @@
import logging
from swarms.structs.base import BaseStructure
from swarms.structs.base_structure import BaseStructure
class GraphWorkflow(BaseStructure):

@ -1,7 +1,7 @@
import multiprocessing as mp
from typing import List, Optional
from swarms.structs.base import BaseStructure
from swarms.structs.base_structure import BaseStructure
class LoadBalancer(BaseStructure):

@ -1,4 +1,4 @@
from swarms.structs.base import BaseStructure
from swarms.structs.base_structure import BaseStructure
from swarms.structs.task import Task
from swarms.utils.logger import logger # noqa: F401

@ -1,7 +1,7 @@
import logging
from typing import List
from swarms.structs.base import BaseStructure
from swarms.structs.base_structure import BaseStructure
from swarms.structs.task import Task
logging.basicConfig(level=logging.INFO)

@ -7,7 +7,7 @@ from typing import List, Optional
# from fastapi import FastAPI
from swarms.structs.agent import Agent
from swarms.structs.base import BaseStructure
from swarms.structs.base_structure import BaseStructure
from swarms.utils.logger import logger # noqa: F401

@ -0,0 +1,207 @@
from pydantic import BaseModel
import yaml
import json
from swarms.utils.loguru_logger import logger
from typing import Any, Dict
from typing import Type
from dataclasses import is_dataclass, fields
def get_type_name(typ: Type) -> str:
"""Map Python types to simple string representations."""
if hasattr(typ, "__name__"):
return typ.__name__
return str(typ)
def create_yaml_schema_from_dict(
data: Dict[str, Any], model_class: Type
) -> str:
"""
Generate a YAML schema based on a dictionary and a class (can be a Pydantic model, regular class, or dataclass).
Args:
data: The dictionary with key-value pairs where keys are attribute names and values are example data.
model_class: The class which the data should conform to, used for obtaining type information.
Returns:
A string containing the YAML schema.
Example usage:
>>> data = {'name': 'Alice', 'age': 30, 'is_active': True}
>>> print(create_yaml_schema_from_dict(data, User))
"""
schema = {}
if is_dataclass(model_class):
    from dataclasses import MISSING  # sentinel marking "no default set"

    for field in fields(model_class):
        schema[field.name] = {
            "type": get_type_name(field.type),
            "default": (
                field.default if field.default is not MISSING else None
            ),
            "description": field.metadata.get(
                "description", "No description provided"
            ),
        }
elif isinstance(model_class, type) and issubclass(model_class, BaseModel):
for field_name, model_field in model_class.__fields__.items():
field_info = model_field.field_info
schema[field_name] = {
"type": get_type_name(model_field.outer_type_),
"default": field_info.default,
"description": (
field_info.description
or "No description provided."
),
}
else:
# Fallback for regular classes (non-dataclass, non-Pydantic)
for attr_name, attr_value in data.items():
attr_type = type(attr_value)
schema[attr_name] = {
"type": get_type_name(attr_type),
"description": "No description provided",
}
return yaml.safe_dump(schema, sort_keys=False)
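Assuming PyYAML and the stdlib `dataclasses` module, the dataclass branch above can be exercised with a standalone sketch (the `Point` class is illustrative, not from the codebase):

```python
from dataclasses import MISSING, dataclass, field, fields

import yaml


@dataclass
class Point:
    x: int = 0
    y: int = field(default=0, metadata={"description": "vertical offset"})


schema = {}
for f in fields(Point):
    schema[f.name] = {
        # f.type is the annotation object unless string annotations are in use
        "type": f.type.__name__ if hasattr(f.type, "__name__") else str(f.type),
        "default": None if f.default is MISSING else f.default,
        "description": f.metadata.get("description", "No description provided"),
    }

print(yaml.safe_dump(schema, sort_keys=False))
```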
def pydantic_type_to_yaml_schema(pydantic_type):
"""
Map Pydantic types to YAML schema types.
Args:
pydantic_type (type): The Pydantic type to be mapped.
Returns:
str: The corresponding YAML schema type.
"""
type_mapping = {
int: "integer",
float: "number",
str: "string",
bool: "boolean",
list: "array",
dict: "object",
}
# For more complex types or generics, you would expand this mapping
# typing generics (e.g. List[int]) expose their runtime class via __origin__;
# plain classes fall through unchanged, so no extra None check is needed
base_type = getattr(pydantic_type, "__origin__", pydantic_type)
return type_mapping.get(base_type, "string")
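The `__origin__` lookup is what lets typing generics collapse to their runtime class. A standalone sketch of the same mapping:

```python
from typing import Dict, List

type_mapping = {
    int: "integer",
    float: "number",
    str: "string",
    bool: "boolean",
    list: "array",
    dict: "object",
}


def to_yaml_type(tp) -> str:
    # List[int].__origin__ is list; plain classes have no __origin__
    base = getattr(tp, "__origin__", tp)
    return type_mapping.get(base, "string")


print(to_yaml_type(List[int]))       # array
print(to_yaml_type(Dict[str, int]))  # object
print(to_yaml_type(bytes))           # string (fallback)
```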
class YamlModel(BaseModel):
"""
A Pydantic model class for working with YAML data.
Example usage:
# Example usage with an extended YamlModel
>>> class User(YamlModel):
name: str
age: int
is_active: bool
# Create an instance of the User model
>>> user = User(name="Alice", age=30, is_active=True)
# Serialize the User instance to YAML and print it
>>> print(user.to_yaml())
# Convert JSON to YAML and print
>>> json_string = '{"name": "Bob", "age": 25, "is_active": false}'
>>> print(YamlModel.json_to_yaml(json_string))
# Save the User instance to a YAML file
>>> user.save_to_yaml('user.yaml')
"""
def to_yaml(self):
"""
Serialize the Pydantic model instance to a YAML string.
"""
return yaml.safe_dump(self.dict(), sort_keys=False)
@classmethod
def from_yaml(cls, yaml_str: str):
"""
Create an instance of the class from a YAML string.
Args:
yaml_str (str): The YAML string to parse.
Returns:
cls: An instance of the class with attributes populated from the YAML data.
Returns None if there was an error loading the YAML data.
"""
try:
data = yaml.safe_load(yaml_str)
return cls(**data)
except (ValueError, yaml.YAMLError) as error:
logger.error(f"Error loading YAML data: {error}")
return None
@staticmethod
def json_to_yaml(json_str: str):
"""
Convert a JSON string to a YAML string.
"""
data = json.loads(json_str)  # JSON string -> dict
return yaml.safe_dump(data, sort_keys=False)
def save_to_yaml(self, filename: str):
"""
Save the Pydantic model instance as a YAML file.
"""
yaml_data = self.to_yaml()
with open(filename, "w") as file:
file.write(yaml_data)
# TODO: Implement a method to create a YAML schema from the model fields
# @classmethod
# def create_yaml_schema(cls):
# """
# Generate a YAML schema based on the fields of the given BaseModel Class.
# Args:
# cls: The class for which the YAML schema is generated.
# Returns:
# A YAML representation of the schema.
# """
# schema = {}
# for field_name, model_field in cls.model_fields.items(): # Use model_fields
# field_type = model_field.type_ # Assuming type_ for field type access
# field_info = model_field # FieldInfo object already
# schema[field_name] = {
# 'type': pydantic_type_to_yaml_schema(field_type),
# 'description': field_info.description or "No description provided."
# }
# if field_info is not None: # Check for default value directly
# schema[field_name]['default'] = field_info.default
# return yaml.safe_dump(schema, sort_keys=False)
def create_yaml_schema_from_dict(
data: Dict[str, Any], model_class: Type
) -> str:
"""
Generate a YAML schema based on a dictionary and a class (can be a Pydantic model, regular class, or dataclass).
Args:
data: The dictionary with key-value pairs where keys are attribute names and values are example data.
model_class: The class which the data should conform to, used for obtaining type information.
Returns:
A string containing the YAML schema.
Example usage:
>>> data = {'name': 'Alice', 'age': 30, 'is_active': True}
"""
return create_yaml_schema_from_dict(data, model_class)
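Independent of the swarms dependency, the `to_yaml` / `from_yaml` / `json_to_yaml` trio above can be sketched with stdlib dataclasses and PyYAML (a pattern sketch, not the class's actual implementation):

```python
import json
from dataclasses import asdict, dataclass

import yaml


@dataclass
class User:
    name: str
    age: int
    is_active: bool


def to_yaml(obj) -> str:
    # safe_dump with sort_keys=False preserves field declaration order
    return yaml.safe_dump(asdict(obj), sort_keys=False)


def from_yaml(cls, yaml_str: str):
    return cls(**yaml.safe_load(yaml_str))


def json_to_yaml(json_str: str) -> str:
    return yaml.safe_dump(json.loads(json_str), sort_keys=False)


user = User(name="Alice", age=30, is_active=True)
dumped = to_yaml(user)
restored = from_yaml(User, dumped)
print(dumped)
```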

@ -3,7 +3,7 @@ from datetime import datetime
import pytest
from swarms.structs.base import BaseStructure
from swarms.structs.base_structure import BaseStructure
class TestBaseStructure:

@ -0,0 +1,97 @@
from pydantic import BaseModel
from dataclasses import dataclass
from swarms import (
create_yaml_schema_from_dict,
YamlModel,
)
@dataclass
class TestDataClass:
name: str
age: int
is_active: bool
class TestPydanticModel(BaseModel):
name: str
age: int
is_active: bool
def test_create_yaml_schema_from_dict_dataclass():
    import yaml

    data = {"name": "Alice", "age": 30, "is_active": True}
    result = create_yaml_schema_from_dict(data, TestDataClass)
    # Compare parsed structures rather than raw strings so the test is not
    # sensitive to YAML indentation or key ordering
    schema = yaml.safe_load(result)
    assert schema == {
        "name": {
            "type": "str",
            "default": None,
            "description": "No description provided",
        },
        "age": {
            "type": "int",
            "default": None,
            "description": "No description provided",
        },
        "is_active": {
            "type": "bool",
            "default": None,
            "description": "No description provided",
        },
    }
def test_create_yaml_schema_from_dict_pydantic():
    import yaml

    data = {"name": "Alice", "age": 30, "is_active": True}
    result = create_yaml_schema_from_dict(data, TestPydanticModel)
    schema = yaml.safe_load(result)
    # Only the inferred types are asserted here; the default reported for
    # required pydantic fields varies across pydantic versions
    assert schema["name"]["type"] == "str"
    assert schema["age"]["type"] == "int"
    assert schema["is_active"]["type"] == "bool"
def test_create_yaml_schema_from_dict_regular_class():
    import yaml

    class TestRegularClass:
        def __init__(self, name, age, is_active):
            self.name = name
            self.age = age
            self.is_active = is_active

    data = {"name": "Alice", "age": 30, "is_active": True}
    result = create_yaml_schema_from_dict(data, TestRegularClass)
    # Regular classes fall back to inferring types from the example data,
    # so no default key is emitted
    schema = yaml.safe_load(result)
    assert schema == {
        "name": {"type": "str", "description": "No description provided"},
        "age": {"type": "int", "description": "No description provided"},
        "is_active": {"type": "bool", "description": "No description provided"},
    }
class User(YamlModel):
name: str
age: int
is_active: bool
def test_yaml_model():
# Create an instance of the User model
user = User(name="Alice", age=30, is_active=True)
assert user.name == "Alice"
assert user.age == 30
assert user.is_active is True