pull/488/head
Kye Gomez 7 months ago
parent d19bc667d5
commit e704aa48d2

@ -1,71 +0,0 @@
## BingChat User Guide
Welcome to the BingChat user guide! This document provides a step-by-step tutorial on how to leverage the BingChat class, an interface to Microsoft's Bing Chat built on the EdgeGPT library.
### Table of Contents
1. [Installation & Prerequisites](#installation)
2. [Setting Up BingChat](#setup)
3. [Interacting with BingChat](#interacting)
4. [Generating Images](#images)
5. [Managing Cookies](#cookies)
### Installation & Prerequisites <a name="installation"></a>
Before initializing the BingChat model, ensure you have the necessary dependencies installed:
```shell
pip install EdgeGPT
```
Additionally, you must have a `cookies.json` file which is necessary for authenticating with EdgeGPT.
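The exact structure of `cookies.json` depends on how you exported your browser cookies. As a quick sanity check (a hedged sketch using only the standard library, not part of the BingChat API), you can confirm the file exists and parses as JSON before initializing the class:
```python
import json
from pathlib import Path

# Placeholder path; point this at your exported cookies file
cookies_path = Path("./path/to/cookies.json")
cookies = json.loads(cookies_path.read_text())
print(f"Loaded {len(cookies)} cookie entries")
```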
### Setting Up BingChat <a name="setup"></a>
To start, import the BingChat class:
```python
from bing_chat import BingChat
```
Initialize BingChat with the path to your `cookies.json`:
```python
chat = BingChat(cookies_path="./path/to/cookies.json")
```
### Interacting with BingChat <a name="interacting"></a>
You can obtain text responses from the EdgeGPT model by simply calling the instantiated object:
```python
response = chat("Hello, my name is ChatGPT")
print(response)
```
You can also specify the conversation style:
```python
from bing_chat import ConversationStyle
response = chat("Tell me a joke", style=ConversationStyle.creative)
print(response)
```
### Generating Images <a name="images"></a>
BingChat allows you to generate images based on text prompts:
```python
image_path = chat.create_img("Sunset over mountains", auth_cookie="YOUR_AUTH_COOKIE")
print(f"Image saved at: {image_path}")
```
Ensure you provide the required `auth_cookie` for image generation.
### Managing Cookies <a name="cookies"></a>
You can set a directory path for managing cookies using the `set_cookie_dir_path` method:
```python
BingChat.set_cookie_dir_path("./path/to/cookies_directory")
```
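To tie the sections together, here is a minimal end-to-end sketch based only on the calls shown above; the cookie path and `YOUR_AUTH_COOKIE` are placeholders you must replace with your own values.
```python
from bing_chat import BingChat, ConversationStyle

# Set up the client with your exported cookies
chat = BingChat(cookies_path="./path/to/cookies.json")

# Text interaction with a chosen conversation style
answer = chat("Summarize the benefits of morning walks", style=ConversationStyle.creative)
print(answer)

# Image generation (requires the image auth cookie)
image_path = chat.create_img("Sunset over mountains", auth_cookie="YOUR_AUTH_COOKIE")
print(f"Image saved at: {image_path}")
```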

@ -1,382 +0,0 @@
# Reliable Enterprise-Grade Autonomous Agents in Less Than 5 lines of Code
========================================================================
Welcome to the walkthrough guide for beginners on using the "Agent" feature within the Swarms framework. This guide is designed to help you understand and utilize the capabilities of the Agent class for seamless and reliable interactions with autonomous agents.
## Official Swarms Links
=====================
- [Swarms website](https://www.swarms.world/)
- [Swarms GitHub](https://github.com/kyegomez/swarms)
- [Swarms docs](https://swarms.apac.ai/en/latest/)
- [Swarms community Discord](https://discord.gg/39j5kwTuW4)
- [Book a call with The Swarm Corporation if you're interested in high-performance custom swarms](https://calendly.com/swarm-corp/30min)
Now let's begin...
## [Table of Contents](https://github.com/kyegomez/swarms)
===========================================================================================================
1. Introduction to Swarms Agent Module
- 1.1 What is Swarms?
- 1.2 Understanding the Agent Module
2. Setting Up Your Development Environment
- 2.1 Installing Required Dependencies
- 2.2 API Key Setup
3. Creating Your First Agent
- 3.1 Initializing the Agent Object
- 3.2 Initializing the Language Model
- 3.3 Running Your Agent
4. Advanced Agent Concepts
- 4.1 Custom Stopping Conditions
- 4.2 Dynamic Temperature Handling
- 4.3 Providing Feedback on Responses
- 4.4 Retry Mechanism
- 4.5 Response Filtering
- 4.6 Interactive Mode
5. Saving and Loading Agents
- 5.1 Saving Agent State
- 5.2 Loading a Saved Agent
6. Troubleshooting and Tips
- 6.1 Analyzing Feedback
- 6.2 Troubleshooting Common Issues
7. Conclusion
## [1. Introduction to Swarms Agent Module](https://github.com/kyegomez/swarms)
===================================================================================================================================================
### [1.1 What is Swarms?](https://github.com/kyegomez/swarms)
-------------------------------------------------------------------------------------------------------------
Swarms is a powerful framework designed to provide tools and capabilities for working with language models and automating various tasks. It allows developers to interact with language models seamlessly.
## 1.2 Understanding the Agent Feature
==================================
### [What is the Agent Feature?](https://github.com/kyegomez/swarms)
--------------------------------------------------------------------------------------------------------------------------
The Agent feature is a powerful component of the Swarms framework that allows developers to create a sequential, conversational interaction with AI language models. It enables developers to build multi-step conversations, generate long-form content, and perform complex tasks using AI. The Agent class provides autonomy to language models, enabling them to generate responses in a structured manner.
### [Key Concepts](https://github.com/kyegomez/swarms)
-------------------------------------------------------------------------------------------------
Before diving into the practical aspects, let's clarify some key concepts related to the Agent feature:
- Agent: An Agent is an instance of the Agent class that represents an ongoing interaction with an AI language model. It consists of a series of steps and responses.
- Stopping Condition: A stopping condition is a criterion that, when met, allows the Agent to stop generating responses. This can be user-defined and can depend on the content of the responses.
- Loop Interval: The loop interval specifies the time delay between consecutive interactions with the AI model.
- Retry Mechanism: In case of errors or failures during AI model interactions, the Agent can be configured to make multiple retry attempts with a specified interval.
- Interactive Mode: Interactive mode allows developers to have a back-and-forth conversation with the AI model, making it suitable for real-time interactions.
## [2. Setting Up Your Development Environment](https://github.com/kyegomez/swarms)
=============================================================================================================================================================
### [2.1 Installing Required Dependencies](https://github.com/kyegomez/swarms)
------------------------------------------------------------------------------------------------------------------------------------------------
Before you can start using the Swarms Agent module, you need to set up your development environment. First, you'll need to install the necessary dependencies, including Swarms itself.
```shell
# Install Swarms and required libraries
pip3 install --upgrade swarms
```
## [3. Creating Your First Agent](https://github.com/kyegomez/swarms)
-----------------------------------------------------------------------------------------------------------------------------
Now, let's create your first Agent. An Agent represents a chain-like structure that allows you to engage in multi-step conversations with language models. The Agent structure is what gives an LLM autonomy; it's the mitochondria of an autonomous agent.
```python
# Import necessary modules
from swarms.models import OpenAIChat  # also available: Zephr, Mistral
from swarms.structs import Agent

# Initialize the language model (LLM)
api_key = ""  # your OpenAI API key
llm = OpenAIChat(
    openai_api_key=api_key, temperature=0.5, max_tokens=3000
)

# Initialize the Agent object and run it
agent = Agent(llm=llm, max_loops=5)
out = agent.run("Create a financial analysis on the following metrics")
print(out)
```
### [3.1 Initializing the Agent Object](https://github.com/kyegomez/swarms)
----------------------------------------------------------------------------------------------------------------------------------------
Create an Agent object that will be the backbone of your conversational agent.
```python
# Initialize the Agent object
agent = Agent(
    llm=llm,
    max_loops=5,
    stopping_condition=None,  # You can define custom stopping conditions
    loop_interval=1,
    retry_attempts=3,
    retry_interval=1,
    interactive=False,  # Set to True for interactive mode
    dashboard=False,  # Set to True for a dashboard view
    dynamic_temperature=False,  # Enable dynamic temperature handling
)
```
### [3.2 Initializing the Language Model](https://github.com/kyegomez/swarms)
----------------------------------------------------------------------------------------------------------------------------------------------
Initialize the language model (LLM) that your Agent will interact with. In this example, we're using the `OpenAIChat` wrapper around OpenAI's chat models as the LLM.
- You can also use `Mistral`, `Zephr`, or any other supported model; see the sketch after the code block below.
```python
# Initialize the language model (LLM)
llm = OpenAIChat(
    openai_api_key=api_key,
    temperature=0.5,
    max_tokens=3000,
)
```
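Below is a hedged sketch of swapping in a different backend. The `Mistral` class and its import path follow the comment in the first code block, but the exact class names and constructor arguments may differ across swarms versions, so treat this as an illustration rather than a definitive recipe.
```python
# Assumed import path and constructor arguments; adjust to your swarms version.
from swarms.models import Mistral
from swarms.structs import Agent

llm = Mistral(temperature=0.5, max_tokens=3000)
agent = Agent(llm=llm, max_loops=5)
```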
### [3.3 Running Your Agent](https://github.com/kyegomez/swarms)
------------------------------------------------------------------------------------------------------------------
Now, you're ready to run your Agent and start interacting with the language model.
If you are using a multi-modal model, you can pass in the image path as an additional argument:
```python
# Run your Agent
out = agent.run(
    "Generate a 10,000 word blog on health and wellness.",
    # "img.jpg",  # image path for multi-modal models
)
print(out)
```
This code will initiate a conversation with the language model, and you'll receive responses accordingly.
## [4. Advanced Agent Concepts](https://github.com/kyegomez/swarms)
===========================================================================================================================
In this section, we'll explore advanced concepts that can enhance your experience with the Swarms Agent module.
### [4.1 Custom Stopping Conditions](https://github.com/kyegomez/swarms)
You can define custom stopping conditions for your Agent. For example, you might want the Agent to stop when a specific word is mentioned in the response.
```python
# Custom stopping condition example
def stop_when_repeats(response: str) -> bool:
    return "stop" in response.lower()


# Set the stopping condition in your Agent
agent.stopping_condition = stop_when_repeats
```
### [4.2 Dynamic Temperature Handling](https://github.com/kyegomez/swarms)
----------------------------------------------------------------------------------------------------------------------------------------
Dynamic temperature handling allows you to adjust the temperature attribute of the language model during the conversation.
```python
# Enable dynamic temperature handling in your Agent
agent.dynamic_temperature = True
```
This feature randomly changes the temperature attribute for each loop, providing a variety of responses.
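For intuition, here is a minimal, self-contained sketch of the idea. It is not the library's internal implementation, and the temperature range is an arbitrary choice.
```python
import random


# Conceptual sketch only: dynamic temperature handling boils down to
# drawing a fresh temperature before each loop instead of reusing a
# fixed value.
def pick_temperature(low: float = 0.2, high: float = 1.0) -> float:
    return random.uniform(low, high)


for loop in range(3):
    print(f"Loop {loop}: temperature = {pick_temperature():.2f}")
```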
### [4.3 Providing Feedback on Responses](https://github.com/kyegomez/swarms)
----------------------------------------------------------------------------------------------------------------------------------------------
You can provide feedback on responses generated by the language model using the `provide_feedback` method.
```python
# Provide feedback on a response
agent.provide_feedback("The response was helpful.")
```
This feedback can be valuable for improving the quality of responses.
### [4.4 Retry Mechanism](https://github.com/kyegomez/swarms)
--------------------------------------------------------------------------------------------------------------
In case of errors or issues during conversation, you can implement a retry mechanism to attempt generating a response again.
```python
# Set the number of retry attempts and interval
agent.retry_attempts = 3
agent.retry_interval = 1  # in seconds
```
### [4.5 Response Filtering](https://github.com/kyegomez/swarms)
--------------------------------------------------------------------------------------------------------------------
You can add response filters to filter out certain words or phrases from the responses.
```python
# Add a response filter
agent.add_response_filter("inappropriate_word")
```
This helps in controlling the content generated by the language model.
### [4.6 Interactive Mode](https://github.com/kyegomez/swarms)
----------------------------------------------------------------------------------------------------------------
Interactive mode allows you to have a back-and-forth conversation with the language model. When enabled, the Agent will prompt for user input after each response.
```python
# Enable interactive mode
agent.interactive = True
```
This is useful for real-time conversations with the model.
## [5. Saving and Loading Agents](https://github.com/kyegomez/swarms)
===============================================================================================================================
### [5.1 Saving Agent State](https://github.com/kyegomez/swarms)
------------------------------------------------------------------------------------------------------------------
You can save the state of your Agent, including the conversation history, for future use.
```python
# Save the Agent state to a file
agent.save("path/to/flow_state.json")
```
### [5.2 Loading a Saved Agent](https://github.com/kyegomez/swarms)
------------------------------------------------------------------------------------------------------------------------
To continue a conversation or reuse an Agent, you can load a previously saved state.
```python
# Load a saved Agent state
agent.load("path/to/flow_state.json")
```
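Putting the two calls together, here is a short round-trip sketch: save the state, then restore it into a fresh Agent built with the same LLM. The file path is a placeholder.
```python
# Persist the current Agent state
agent.save("path/to/flow_state.json")

# Later, or in another process: rebuild an Agent and restore the saved state
restored_agent = Agent(llm=llm, max_loops=5)
restored_agent.load("path/to/flow_state.json")

out = restored_agent.run("Continue where we left off.")
print(out)
```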
## [6. Troubleshooting and Tips](https://github.com/kyegomez/swarms)
===============================================================================================================================
### [6.1 Analyzing Feedback](https://github.com/kyegomez/swarms)
--------------------------------------------------------------------------------------------------------------------
You can analyze the feedback provided during the conversation to identify issues and improve the quality of interactions.
```python
# Analyze feedback
agent.analyze_feedback()
```
### [6.2 Troubleshooting Common Issues](https://github.com/kyegomez/swarms)
------------------------------------------------------------------------------------------------------------------------------------------
If you encounter issues during conversation, refer to the troubleshooting section for guidance on resolving common problems.
## [7. Conclusion: Empowering Developers with Swarms Framework and Agent Structure for Automation](https://github.com/kyegomez/swarms)
================================================================================================================================================================================================================================================================
In a world where digital tasks continue to multiply and diversify, the need for automation has never been more critical. Developers find themselves at the forefront of this automation revolution, tasked with creating reliable solutions that can seamlessly handle an array of digital tasks. Enter the Swarms framework and the Agent structure, a dynamic duo that empowers developers to build autonomous agents capable of efficiently and effectively automating a wide range of digital tasks.
[The Automation Imperative](https://github.com/kyegomez/swarms)
---------------------------------------------------------------------------------------------------------------------------
Automation is the driving force behind increased efficiency, productivity, and scalability across various industries. From mundane data entry and content generation to complex data analysis and customer support, the possibilities for automation are vast. Developers play a pivotal role in realizing these possibilities, and they require robust tools and frameworks to do so effectively.
[Swarms Framework: A Developer's Swiss Army Knife](https://github.com/kyegomez/swarms)
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
The Swarms framework emerges as a comprehensive toolkit designed to empower developers in their automation endeavors. It equips developers with the tools and capabilities needed to create autonomous agents capable of interacting with language models, orchestrating multi-step workflows, and handling error scenarios gracefully. Let's explore why the Swarms framework is a game-changer for developers:
[1. Language Model Integration](https://github.com/kyegomez/swarms)
-----------------------------------------------------------------------------------------------------------------------------------
One of the standout features of Swarms is its seamless integration with state-of-the-art language models, such as GPT-3. These language models have the ability to understand and generate human-like text, making them invaluable for tasks like content creation, translation, code generation, and more.
By leveraging Swarms, developers can effortlessly incorporate these language models into their applications and workflows. For instance, they can build chatbots that provide intelligent responses to customer inquiries or generate lengthy documents with minimal manual intervention. This not only saves time but also enhances overall productivity.
[2. Multi-Step Conversational Agents](https://github.com/kyegomez/swarms)
---------------------------------------------------------------------------------------------------------------------------------------------
Swarms excels in orchestrating multi-step conversational flows. Developers can define intricate sequences of interactions, where the system generates responses, and users provide input at various stages. This functionality is a game-changer for building chatbots, virtual assistants, or any application requiring dynamic and context-aware conversations.
These conversational flows can be tailored to handle a wide range of scenarios, from customer support interactions to data analysis. By providing a structured framework for conversations, Swarms empowers developers to create intelligent and interactive systems that mimic human-like interactions.
[3. Customization and Extensibility](https://github.com/kyegomez/swarms)
---------------------------------------------------------------------------------------------------------------------------------------------
Every development project comes with its unique requirements and challenges. Swarms acknowledges this by offering a high degree of customization and extensibility. Developers can define custom stopping conditions, implement dynamic temperature handling for language models, and even add response filters to control the generated content.
Moreover, Swarms supports an interactive mode, allowing developers to engage in real-time conversations with the language model. This feature is invaluable for rapid prototyping, testing, and fine-tuning the behavior of autonomous agents.
[4. Feedback-Driven Improvement](https://github.com/kyegomez/swarms)
-------------------------------------------------------------------------------------------------------------------------------------
Swarms encourages the collection of feedback on generated responses. Developers and users alike can provide feedback to improve the quality and accuracy of interactions over time. This iterative feedback loop ensures that applications built with Swarms continually improve, becoming more reliable and capable of autonomously handling complex tasks.
[5. Handling Errors and Retries](https://github.com/kyegomez/swarms)
-------------------------------------------------------------------------------------------------------------------------------------
Error handling is a critical aspect of any automation framework. Swarms simplifies this process by offering a retry mechanism. In case of errors or issues during conversations, developers can configure the framework to attempt generating responses again, ensuring robust and resilient automation.
[6. Saving and Loading Agents](https://github.com/kyegomez/swarms)
-------------------------------------------------------------------------------------------------------------------------------
Developers can save the state of their conversational flows, allowing for seamless continuity and reusability. This feature is particularly beneficial when working on long-term projects or scenarios where conversations need to be resumed from a specific point.
[Unleashing the Potential of Automation with Swarms and Agent](https://github.com/kyegomez/swarms)
===============================================================================================================================================================================================
The combined power of the Swarms framework and the Agent structure creates a synergy that empowers developers to automate a multitude of digital tasks. These tools provide versatility, customization, and extensibility, making them ideal for a wide range of applications. Let's explore some of the remarkable ways in which developers can leverage Swarms and Agent for automation:
[1. Customer Support and Service Automation](https://github.com/kyegomez/swarms)
-------------------------------------------------------------------------------------------------------------------------------------------------------------
Swarms and Agent enable the creation of AI-powered customer support chatbots that excel at handling common inquiries, troubleshooting issues, and escalating complex problems to human agents when necessary. This level of automation not only reduces response times but also enhances the overall customer experience.
[2. Content Generation and Curation](https://github.com/kyegomez/swarms)
---------------------------------------------------------------------------------------------------------------------------------------------
Developers can harness the power of Swarms and Agent to automate content generation tasks, such as writing articles, reports, or product descriptions. By providing an initial prompt, the system can generate high-quality content that adheres to specific guidelines and styles.
Furthermore, these tools can automate content curation by summarizing lengthy articles, extracting key insights from research papers, and even translating content into multiple languages.
[3. Data Analysis and Reporting](https://github.com/kyegomez/swarms)
-------------------------------------------------------------------------------------------------------------------------------------
Automation in data analysis and reporting is fundamental for data-driven decision-making. Swarms and Agent simplify these processes by enabling developers to create flows that interact with databases, query data, and generate reports based on user-defined criteria. This empowers businesses to derive insights quickly and make informed decisions.
[4. Programming and Code Generation](https://github.com/kyegomez/swarms)
---------------------------------------------------------------------------------------------------------------------------------------------
Swarms and Agent streamline code generation and programming tasks. Developers can create flows to assist in writing code snippets, auto-completing code, or providing solutions to common programming challenges. This accelerates software development and reduces the likelihood of coding errors.
[5. Language Translation and Localization](https://github.com/kyegomez/swarms)
---------------------------------------------------------------------------------------------------------------------------------------------------------
With the ability to interface with language models, Swarms and Agent can automate language translation tasks. They can seamlessly translate content from one language to another, making it easier for businesses to reach global audiences and localize their offerings effectively.
[6. Virtual Assistants and AI Applications](https://github.com/kyegomez/swarms)
-----------------------------------------------------------------------------------------------------------------------------------------------------------
Developers can build virtual assistants and AI applications that offer personalized experiences. These applications can automate tasks such as setting reminders, answering questions, providing recommendations, and much more. Swarms and Agent provide the foundation for creating intelligent, interactive virtual assistants.
[Future Opportunities and Challenges](https://github.com/kyegomez/swarms)
-----------------------------------------------------------------------------------------------------------------------------------------------
As Swarms and Agent continue to evolve, developers can look forward to even more advanced features and capabilities. However, with great power comes great responsibility. Developers must remain vigilant about the ethical use of automation and language models. Ensuring that automated systems provide accurate and unbiased information is an ongoing challenge that the developer community must address.
# [In Conclusion](https://github.com/kyegomez/swarms)
===================================================================================================
The Swarms framework and the Agent structure empower developers to automate an extensive array of digital tasks by offering versatility, customization, and extensibility. From natural language understanding and generation to orchestrating multi-step conversational flows, these tools simplify complex automation scenarios.
By embracing Swarms and Agent, developers can not only save time and resources but also unlock new opportunities for innovation. The ability to harness the power of language models and create intelligent, interactive applications opens doors to a future where automation plays a pivotal role in our digital lives.
As the developer community continues to explore the capabilities of Swarms and Agent, it is essential to approach automation with responsibility, ethics, and a commitment to delivering valuable, user-centric experiences. With Swarms and Agent, the future of automation is in the hands of developers, ready to create a more efficient, intelligent, and automated world.

@ -1,3 +1,3 @@
This section of the documentation is dedicated to examples highlighting Swarms functionality.
We try to keep all examples up to date, but if you think there is a bug please [submit a pull request](https://github.com/kyegomez/swarms-docs/tree/main/docs/examples). We are also more than happy to include new examples :)
We try to keep all examples up to date, but if you think there is a bug please [submit a pull request](https://github.com/kyegomez/swarms-docs/tree/main/docs/examples). We are also more than happy to include new examples)

@ -1,117 +0,0 @@
## ChatGPT User Guide with Abstraction
Welcome to the ChatGPT user guide! This document will walk you through the Reverse Engineered ChatGPT API, its usage, and how to leverage the abstraction in `revgpt.py` for seamless integration.
### Table of Contents
1. [Installation](#installation)
2. [Initial Setup and Configuration](#initial-setup)
3. [Using the Abstract Class from `revgpt.py`](#using-abstract-class)
4. [V1 Standard ChatGPT](#v1-standard-chatgpt)
5. [V3 Official Chat API](#v3-official-chat-api)
6. [Credits & Disclaimers](#credits-disclaimers)
### Installation <a name="installation"></a>
To kickstart your journey with ChatGPT, first, install the ChatGPT package:
```shell
python -m pip install --upgrade revChatGPT
```
**Supported Python Versions:**
- Minimum: Python 3.9
- Recommended: Python 3.11+
### Initial Setup and Configuration <a name="initial-setup"></a>
1. **Account Setup:** Register on [OpenAI's ChatGPT](https://chat.openai.com/).
2. **Authentication:** Obtain your access token from OpenAI's platform.
3. **Environment Variables:** Configure your environment with the necessary variables. A sketch of loading these variables is shown below.
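As a minimal sketch of that setup step, assuming you keep the values in a `.env` file alongside your script (the variable names below match the code samples later in this guide):
```python
# .env contents (placeholders):
#   ACCESS_TOKEN=<your ChatGPT access token>
#   OPENAI_API_KEY=<your OpenAI API key>
import os

from dotenv import load_dotenv

load_dotenv()  # reads the .env file in the current working directory

if not os.getenv("ACCESS_TOKEN"):
    raise RuntimeError("ACCESS_TOKEN is not set")
```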
### Using the Abstract Class from `revgpt.py` <a name="using-abstract-class"></a>
The abstraction provided in `revgpt.py` is designed to simplify your interactions with ChatGPT.
1. **Import the Necessary Modules:**
```python
import os

from dotenv import load_dotenv
from revgpt import AbstractChatGPT
```
2. **Load Environment Variables:**
```python
load_dotenv()
```
3. **Initialize the ChatGPT Abstract Class:**
```python
config = {}  # placeholder: any extra keyword arguments your setup needs
chat = AbstractChatGPT(api_key=os.getenv("ACCESS_TOKEN"), **config)
```
4. **Start Interacting with ChatGPT:**
```python
response = chat.ask("Hello, ChatGPT!")
print(response)
```
With the abstract class, you can seamlessly switch between different versions or models of ChatGPT without changing much of your code.
### V1 Standard ChatGPT <a name="v1-standard-chatgpt"></a>
If you wish to use V1 specifically:
1. Import the model:
```python
from swarms.models.revgptV1 import RevChatGPTModelv1
```
2. Initialize:
```python
model = RevChatGPTModelv1(access_token=os.getenv("ACCESS_TOKEN"), **config)
```
3. Interact:
```python
response = model.run("What's the weather like?")
print(response)
```
### V3 Official Chat API <a name="v3-official-chat-api"></a>
For users looking to integrate the official V3 API:
1. Import the model:
```python
from swarms.models.revgptV4 import RevChatGPTModelv4
```
2. Initialize:
```python
model = RevChatGPTModelv4(access_token=os.getenv("OPENAI_API_KEY"), **config)
```
3. Interact:
```python
response = model.run("Tell me a fun fact!")
print(response)
```
### Credits & Disclaimers <a name="credits-disclaimers"></a>
- This project is not an official OpenAI product and is not affiliated with OpenAI. Use at your own discretion.
- Many thanks to all the contributors who have made this project possible.
- Special acknowledgment to [virtualharby](https://www.youtube.com/@virtualharby) for the motivating music!
---
By following this guide, you should now have a clear understanding of how to use the Reverse Engineered ChatGPT API and its abstraction. Happy coding!

@ -1,338 +0,0 @@
# Tutorial: Understanding and Utilizing Worker Examples
## Table of Contents
1. Introduction
2. Code Overview
- Import Statements
- Initializing API Key and Language Model
- Creating Swarm Tools
- Appending Tools to a List
- Initializing a Worker Node
3. Understanding the `hf_agent` Tool
4. Understanding the `omni_agent` Tool
5. Understanding the `compile` Tool
6. Running a Swarm
7. Full Code
8. Conclusion
## 1. Introduction
The provided code showcases a system built around a worker node that utilizes various AI models and tools to perform tasks. This tutorial will break down the code step by step, explaining its components, how they work together, and how to utilize its modularity for various tasks.
## 2. Code Overview
### Import Statements
The code begins with import statements, bringing in necessary modules and classes. Key imports include the `OpenAIChat` class, which represents a language model, and several custom agents and tools from the `swarms` package.
```python
import interpreter # Assuming this is a custom module
from swarms.agents.hf_agents import HFAgent
from swarms.agents.omni_modal_agent import OmniModalAgent
from swarms.models import OpenAIChat
from swarms.tools.autogpt import tool
from swarms.workers import Worker
```
### Initializing API Key and Language Model
Here, an API key is initialized, and a language model (`OpenAIChat`) is created. This model is capable of generating human-like text based on the provided input.
```python
# Initialize API Key
api_key = "YOUR_OPENAI_API_KEY"

# Initialize the language model
llm = OpenAIChat(
    openai_api_key=api_key,
    temperature=0.5,
)
```
### Creating Swarm Tools
The code defines three tools: `hf_agent`, `omni_agent`, and `compile`. These tools encapsulate specific functionalities and can be invoked to perform tasks.
### Appending Tools to a List
All defined tools are appended to a list called `tools`. This list is later used when initializing a worker node, allowing the node to access and utilize these tools.
```python
# Append tools to a list
tools = [hf_agent, omni_agent, compile]
```
### Initializing a Worker Node
A worker node is initialized using the `Worker` class. The worker node is equipped with the language model, a name, API key, and the list of tools. It's set up to perform tasks without human intervention.
```python
# Initialize a single Worker node with previously defined tools in addition to its predefined tools
node = Worker(
    llm=llm,
    ai_name="Optimus Prime",
    openai_api_key=api_key,
    ai_role="Worker in a swarm",
    external_tools=tools,
    human_in_the_loop=False,
    temperature=0.5,
)
```
## 3. Understanding the `hf_agent` Tool
The `hf_agent` tool utilizes an OpenAI model (`text-davinci-003`) to perform tasks. It takes a task as input and returns a response. This tool is suitable for multi-modal tasks like generating images, videos, speech, etc. The tool's primary rule is not to be used for simple tasks like generating summaries.
```python
@tool
def hf_agent(task: str = None):
    # Create an HFAgent instance with the specified model and API key
    agent = HFAgent(model="text-davinci-003", api_key=api_key)
    # Run the agent with the provided task and optional text input
    response = agent.run(task, text="¡Este es un API muy agradable!")
    return response
```
## 4. Understanding the `omni_agent` Tool
The `omni_agent` tool is more versatile and leverages the `llm` (language model) to interact with Huggingface models for various tasks. It's intended for multi-modal tasks such as document-question-answering, image-captioning, summarization, and more. The tool's rule is also not to be used for simple tasks.
```python
@tool
def omni_agent(task: str = None):
    # Create an OmniModalAgent instance with the provided language model
    agent = OmniModalAgent(llm)
    # Run the agent with the provided task
    response = agent.run(task)
    return response
```
## 5. Understanding the `compile` Tool
The `compile` tool allows the execution of code locally, supporting various programming languages like Python, JavaScript, and Shell. It provides a natural language interface to your computer's capabilities. Users can chat with this tool in a terminal-like interface to perform tasks such as creating and editing files, controlling a browser, and more.
```python
import os


@tool
def compile(task: str):
    # Use the interpreter module to chat with the local interpreter
    task = interpreter.chat(task, return_messages=True)
    interpreter.chat()
    interpreter.reset(task)

    # Set environment variables for the interpreter (values must be strings)
    os.environ["INTERPRETER_CLI_AUTO_RUN"] = "True"
    os.environ["INTERPRETER_CLI_FAST_MODE"] = "True"
    os.environ["INTERPRETER_CLI_DEBUG"] = "True"
```
## 6. Running a Swarm
After defining tools and initializing the worker node, a specific task is provided as input to the worker node. The node then runs the task, and the response is printed to the console.
```python
# Specify the task
task = "What were the winning Boston Marathon times for the past 5 years (ending in 2022)? Generate a table of the year, name, country of origin, and times."
# Run the node on the task
response = node.run(task)
# Print the response
print(response)
```
## 7. Full Code
The full code example of the stacked swarm:
```python
import os

import interpreter

from swarms.agents.hf_agents import HFAgent
from swarms.agents.omni_modal_agent import OmniModalAgent
from swarms.models import OpenAIChat
from swarms.tools.autogpt import tool
from swarms.workers import Worker

# Initialize API Key
api_key = ""

# Initialize the language model.
# This model can be swapped out for Anthropic, Huggingface models like Mistral, etc.
llm = OpenAIChat(
    openai_api_key=api_key,
    temperature=0.5,
)


# Wrap a function with the tool decorator to make it a tool,
# then add a docstring for tool documentation
@tool
def hf_agent(task: str = None):
    """
    A tool that uses an OpenAI model to call and respond to a task by searching for a model on Huggingface.
    It first downloads the model, then uses it.

    Rules: Don't call this tool for simple tasks like generating a summary;
    only call it for multi-modal tasks like generating images, videos, speech, etc.
    """
    agent = HFAgent(model="text-davinci-003", api_key=api_key)
    response = agent.run(task, text="¡Este es un API muy agradable!")
    return response


# Wrap a function with the tool decorator to make it a tool
@tool
def omni_agent(task: str = None):
    """
    A tool that uses an OpenAI model to utilize and call Huggingface models and guide them to perform a task.

    Rules: Don't call this tool for simple tasks like generating a summary;
    only call it for multi-modal tasks like generating images, videos, speech.

    Tasks the omni agent is good for:
    --------------
    document-question-answering
    image-captioning
    image-question-answering
    image-segmentation
    speech-to-text
    summarization
    text-classification
    text-question-answering
    translation
    huggingface-tools/text-to-image
    huggingface-tools/text-to-video
    text-to-speech
    huggingface-tools/text-download
    huggingface-tools/image-transformation
    """
    agent = OmniModalAgent(llm)
    response = agent.run(task)
    return response


# Code Interpreter
@tool
def compile(task: str):
    """
    Open Interpreter lets LLMs run code (Python, Javascript, Shell, and more) locally.
    You can chat with Open Interpreter through a ChatGPT-like interface in your terminal
    by running $ interpreter after installing.

    This provides a natural-language interface to your computer's general-purpose capabilities:
    Create and edit photos, videos, PDFs, etc.
    Control a Chrome browser to perform research
    Plot, clean, and analyze large datasets
    ...etc.

    ⚠️ Note: You'll be asked to approve code before it's run.

    Rules: Only use when asked to generate code or an application of some kind.
    """
    task = interpreter.chat(task, return_messages=True)
    interpreter.chat()
    interpreter.reset(task)

    # Environment variable values must be strings
    os.environ["INTERPRETER_CLI_AUTO_RUN"] = "True"
    os.environ["INTERPRETER_CLI_FAST_MODE"] = "True"
    os.environ["INTERPRETER_CLI_DEBUG"] = "True"


# Append tools to a list
tools = [hf_agent, omni_agent, compile]

# Initialize a single Worker node with previously defined tools in addition to its
# predefined tools
node = Worker(
    llm=llm,
    ai_name="Optimus Prime",
    openai_api_key=api_key,
    ai_role="Worker in a swarm",
    external_tools=tools,
    human_in_the_loop=False,
    temperature=0.5,
)

# Specify task
task = "What were the winning Boston Marathon times for the past 5 years (ending in 2022)? Generate a table of the year, name, country of origin, and times."

# Run the node on the task
response = node.run(task)

# Print the response
print(response)
```
## 8. Conclusion
In this extensive tutorial, we've embarked on a journey to explore a sophisticated system designed to harness the power of AI models and tools for a myriad of tasks. We've peeled back the layers of code, dissected its various components, and gained a profound understanding of how these elements come together to create a versatile, modular, and powerful swarm-based AI system.
## What We've Learned
Throughout this tutorial, we've covered the following key aspects:
### Code Structure and Components
We dissected the code into its fundamental building blocks:
- **Import Statements:** We imported necessary modules and libraries, setting the stage for our system's functionality.
- **Initializing API Key and Language Model:** We learned how to set up the essential API key and initialize the language model, a core component for text generation and understanding.
- **Creating Swarm Tools:** We explored how to define tools, encapsulating specific functionalities that our system can leverage.
- **Appending Tools to a List:** We aggregated our tools into a list, making them readily available for use.
- **Initializing a Worker Node:** We created a worker node equipped with tools, a name, and configuration settings.
### Tools and Their Functions
We dove deep into the purpose and functionality of three crucial tools:
- **`hf_agent`:** We understood how this tool employs an OpenAI model for multi-modal tasks, and its use cases beyond simple summarization.
- **`omni_agent`:** We explored the versatility of this tool, guiding Huggingface models to perform a wide range of multi-modal tasks.
- **`compile`:** We saw how this tool allows the execution of code in multiple languages, providing a natural language interface for various computational tasks.
### Interactive Examples
We brought the code to life through interactive examples, showcasing how to initialize the language model, generate text, perform document-question-answering, and execute code—all with practical, real-world scenarios.
## A Recap: The Worker Node's Role
At the heart of this system lies the "Worker Node," a versatile entity capable of wielding the power of AI models and tools to accomplish tasks. The Worker Node's role is pivotal in the following ways:
1. **Task Execution:** It is responsible for executing tasks, harnessing the capabilities of the defined tools to generate responses or perform actions.
2. **Modularity:** The Worker Node benefits from the modularity of the system. It can easily access and utilize a variety of tools, allowing it to adapt to diverse tasks and requirements.
3. **Human in the Loop:** While the example here is configured to operate without human intervention, the Worker Node can be customized to incorporate human input or approval when needed.
4. **Integration:** It can be extended to integrate with other AI models, APIs, or services, expanding its functionality and versatility.
## The Road Ahead: Future Features and Enhancements
As we conclude this tutorial, let's peek into the future of this system. While the current implementation is already powerful, there is always room for growth and improvement. Here are some potential future features and enhancements to consider:
### 1. Enhanced Natural Language Understanding
- **Semantic Understanding:** Improve the system's ability to understand context and nuances in natural language, enabling more accurate responses.
### 2. Multimodal Capabilities
- **Extended Multimodal Support:** Expand the `omni_agent` tool to support additional types of multimodal tasks, such as video generation or audio processing.
### 3. Customization and Integration
- **User-defined Tools:** Allow users to define their own custom tools, opening up endless possibilities for tailoring the system to specific needs.
### 4. Collaborative Swarms
- **Swarm Collaboration:** Enable multiple Worker Nodes to collaborate on complex tasks, creating a distributed, intelligent swarm system.
### 5. User-Friendly Interfaces
- **Graphical User Interface (GUI):** Develop a user-friendly GUI for easier interaction and task management, appealing to a wider audience.
### 6. Continuous Learning
- **Active Learning:** Implement mechanisms for the system to learn and adapt over time, improving its performance with each task.
### 7. Security and Privacy
- **Enhanced Security:** Implement robust security measures to safeguard sensitive data and interactions within the system.
### 8. Community and Collaboration
- **Open Source Community:** Foster an open-source community around the system, encouraging contributions and innovation from developers worldwide.
### 9. Integration with Emerging Technologies
- **Integration with Emerging AI Models:** Keep the system up-to-date by seamlessly integrating with new and powerful AI models as they emerge in the industry.
## In Conclusion
In this tutorial, we've journeyed through a complex AI system, unraveling its inner workings, and understanding its potential. We've witnessed how code can transform into a powerful tool, capable of handling a vast array of tasks, from generating creative stories to executing code snippets.
As we conclude, we stand at the threshold of an exciting future for AI and technology. This system, with its modular design and the potential for continuous improvement, embodies the spirit of innovation and adaptability. Whether you're a developer, a researcher, or an enthusiast, the possibilities are boundless, and the journey is just beginning.
Embrace this knowledge, explore the system, and embark on your own quest to shape the future of AI. With each line of code, you have the power to transform ideas into reality and unlock new horizons of innovation. The future is yours to create, and the tools are at your fingertips.

@ -1,391 +0,0 @@
# The Swarms Tool System: Functions, Pydantic BaseModels as Tools, and Radical Customization
This guide provides an in-depth look at the Swarms Tool System, focusing on its functions, the use of Pydantic BaseModels as tools, and the extensive customization options available. Aimed at developers, this documentation highlights how the Swarms framework works and offers detailed examples of creating and customizing tools and agents, specifically for accounting tasks.
The Swarms Tool System is a flexible and extensible component of the Swarms framework that allows for the creation, registration, and utilization of various tools. These tools can perform a wide range of tasks and are integrated into agents to provide specific functionalities. The system supports multiple ways to define tools, including using Pydantic BaseModels, functions, and dictionaries.
Star, fork, and contribute to the swarms framework on GitHub: https://github.com/kyegomez/swarms

And join our community on the Agora Discord server!
## Architecture
The architecture of the Swarms Tool System is designed to be highly modular. It consists of the following main components:

- **Agents**: The primary entities that execute tasks.
- **Tools**: Functions or classes that perform specific operations.
- **Schemas**: Definitions of input and output data formats using Pydantic BaseModels.
## Key Concepts

### Tools
Tools are the core functional units within the Swarms framework. They can be defined in various ways:

- **Pydantic BaseModels**: Tools can be defined using Pydantic BaseModels to ensure data validation and serialization.
- **Functions**: Tools can be simple or complex functions.
- **Dictionaries**: Tools can be represented as dictionaries for flexibility.

### Agents
Agents utilize tools to perform tasks. They are configured with a set of tools and schemas, and they execute the tools based on the input they receive.
## Installation
```shell
pip3 install -U swarms pydantic
```
## Tool Definition

### Using Pydantic BaseModels
Pydantic BaseModels provide a structured way to define tool inputs and outputs. They ensure data validation and serialization, making them ideal for complex data handling.

Example: define Pydantic BaseModels for accounting tasks:
```python
from pydantic import BaseModel


class CalculateTax(BaseModel):
    income: float


class GenerateInvoice(BaseModel):
    client_name: str
    amount: float
    date: str


class SummarizeExpenses(BaseModel):
    expenses: list[dict]
```

Define tool functions using these models:
```python
def calculate_tax(data: CalculateTax) -> dict:
    tax_rate = 0.3  # Example tax rate
    tax = data.income * tax_rate
    return {"income": data.income, "tax": tax}


def generate_invoice(data: GenerateInvoice) -> dict:
    invoice = {
        "client_name": data.client_name,
        "amount": data.amount,
        "date": data.date,
        "invoice_id": "INV12345",
    }
    return invoice


def summarize_expenses(data: SummarizeExpenses) -> dict:
    total_expenses = sum(expense["amount"] for expense in data.expenses)
    return {"total_expenses": total_expenses}
```
### Using Functions Directly
Tools can also be defined directly as functions without using Pydantic models. This approach is suitable for simpler tasks where complex validation is not required.

Example:
```python
def basic_tax_calculation(income: float) -> dict:
    tax_rate = 0.25
    tax = income * tax_rate
    return {"income": income, "tax": tax}
```
### Using Dictionaries
Tools can be represented as dictionaries, providing maximum flexibility. This method is useful when the tool's functionality is more dynamic or when integrating with external systems.

Example:
```python
basic_tool_schema = {
    "name": "basic_tax_tool",
    "description": "A basic tax calculation tool",
    "parameters": {
        "type": "object",
        "properties": {
            "income": {"type": "number", "description": "Income amount"}
        },
        "required": ["income"],
    },
}


def basic_tax_tool(income: float) -> dict:
    tax_rate = 0.2
    tax = income * tax_rate
    return {"income": income, "tax": tax}
```
## Tool Registration
Tools need to be registered with the agent for it to utilize them. This can be done by specifying the tools in the `tools` parameter during agent initialization.

Example:
```python
from pydantic import BaseModel

from swarms import Agent
from llama_hosted import llama3Hosted


# Define Pydantic BaseModels for accounting tasks
class CalculateTax(BaseModel):
    income: float


class GenerateInvoice(BaseModel):
    client_name: str
    amount: float
    date: str


class SummarizeExpenses(BaseModel):
    expenses: list[dict]


# Define tool functions using these models
def calculate_tax(data: CalculateTax) -> dict:
    tax_rate = 0.3
    tax = data.income * tax_rate
    return {"income": data.income, "tax": tax}


def generate_invoice(data: GenerateInvoice) -> dict:
    invoice = {
        "client_name": data.client_name,
        "amount": data.amount,
        "date": data.date,
        "invoice_id": "INV12345",
    }
    return invoice


def summarize_expenses(data: SummarizeExpenses) -> dict:
    total_expenses = sum(expense["amount"] for expense in data.expenses)
    return {"total_expenses": total_expenses}


# Function to generate a tool schema for demonstration purposes
def create_tool_schema():
    return {
        "name": "execute",
        "description": "Executes code on the user's machine",
        "parameters": {
            "type": "object",
            "properties": {
                "language": {
                    "type": "string",
                    "description": "Programming language",
                    "enum": ["python", "java"],
                },
                "code": {"type": "string", "description": "Code to execute"},
            },
            "required": ["language", "code"],
        },
    }


# Initialize the agent with the tools
agent = Agent(
    agent_name="Accounting Agent",
    system_prompt="This agent assists with various accounting tasks.",
    sop_list=["Provide accurate and timely accounting services."],
    llm=llama3Hosted(),
    max_loops="auto",
    interactive=True,
    verbose=True,
    tool_schema=BaseModel,
    list_base_models=[
        CalculateTax,
        GenerateInvoice,
        SummarizeExpenses,
    ],
    output_type=str,
    metadata_output_type="json",
    function_calling_format_type="OpenAI",
    function_calling_type="json",
    tools=[
        calculate_tax,
        generate_invoice,
        summarize_expenses,
    ],
    list_tool_schemas_json=create_tool_schema(),
)
```
## Running the Agent
The agent can execute tasks using the `run` method. This method takes a prompt and determines the appropriate tool to use based on the input.

Example:
```python
# Example task: Calculate tax for an income
result = agent.run("Calculate the tax for an income of $50,000.")
print(f"Result: {result}")

# Example task: Generate an invoice
invoice_data = agent.run("Generate an invoice for John Doe for $1500 on 2024-06-01.")
print(f"Invoice Data: {invoice_data}")

# Example task: Summarize expenses
expenses = [
    {"amount": 200.0, "description": "Office supplies"},
    {"amount": 1500.0, "description": "Software licenses"},
    {"amount": 300.0, "description": "Travel expenses"},
]
summary = agent.run("Summarize these expenses: " + str(expenses))
print(f"Expenses Summary: {summary}")
```
## Customizing Tools
Custom tools can be created to extend the functionality of the Swarms framework. This can include integrating external APIs, performing complex calculations, or handling specialized data formats.

Example: Custom Accounting Tool
```python
from pydantic import BaseModel


class CustomAccountingTool(BaseModel):
    data: dict


def custom_accounting_tool(data: CustomAccountingTool) -> dict:
    # Custom logic for the accounting tool
    result = {
        "status": "success",
        "data_processed": len(data.data),
    }
    return result


# Register the custom tool with the agent
agent = Agent(
    agent_name="Accounting Agent",
    system_prompt="This agent assists with various accounting tasks.",
    sop_list=["Provide accurate and timely accounting services."],
    llm=llama3Hosted(),
    max_loops="auto",
    interactive=True,
    verbose=True,
    tool_schema=BaseModel,
    list_base_models=[
        CalculateTax,
        GenerateInvoice,
        SummarizeExpenses,
        CustomAccountingTool,
    ],
    output_type=str,
    metadata_output_type="json",
    function_calling_format_type="OpenAI",
    function_calling_type="json",
    tools=[
        calculate_tax,
        generate_invoice,
        summarize_expenses,
        custom_accounting_tool,
    ],
    list_tool_schemas_json=create_tool_schema(),
)
```
## Advanced Customization
Advanced customization involves modifying the core components of the Swarms framework. This includes extending existing classes, adding new methods, or integrating third-party libraries.

Example: Extending the Agent Class
```python
from swarms import Agent


class AdvancedAccountingAgent(Agent):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

    def custom_behavior(self):
        print("Executing custom behavior")

    def another_custom_method(self):
        print("Another custom method")


# Initialize the advanced agent
advanced_agent = AdvancedAccountingAgent(
    agent_name="Advanced Accounting Agent",
    system_prompt="This agent performs advanced accounting tasks.",
    sop_list=["Provide advanced accounting services."],
    llm=llama3Hosted(),
    max_loops="auto",
    interactive=True,
    verbose=True,
    tool_schema=BaseModel,
    list_base_models=[
        CalculateTax,
        GenerateInvoice,
        SummarizeExpenses,
        CustomAccountingTool,
    ],
    output_type=str,
    metadata_output_type="json",
    function_calling_format_type="OpenAI",
    function_calling_type="json",
    tools=[
        calculate_tax,
        generate_invoice,
        summarize_expenses,
        custom_accounting_tool,
    ],
    list_tool_schemas_json=create_tool_schema(),
)

# Call custom methods
advanced_agent.custom_behavior()
advanced_agent.another_custom_method()
```
## Integrating External Libraries
You can integrate external libraries to extend the functionality of your tools. This is useful for adding new capabilities or leveraging existing libraries for complex tasks.

Example: Integrating Pandas for Data Processing
```python
import pandas as pd
from pydantic import BaseModel


class DataFrameTool(BaseModel):
    data: list[dict]


def process_data_frame(data: DataFrameTool) -> dict:
    df = pd.DataFrame(data.data)
    summary = df.describe().to_dict()
    return {"summary": summary}


# Register the tool with the agent
agent = Agent(
    agent_name="Data Processing Agent",
    system_prompt="This agent processes data frames.",
    sop_list=["Provide data processing services."],
    llm=llama3Hosted(),
    max_loops="auto",
    interactive=True,
    verbose=True,
    tool_schema=BaseModel,
    list_base_models=[DataFrameTool],
    output_type=str,
    metadata_output_type="json",
    function_calling_format_type="OpenAI",
    function_calling_type="json",
    tools=[process_data_frame],
    list_tool_schemas_json=create_tool_schema(),
)

# Example task: Process a data frame
data = [
    {"col1": 1, "col2": 2},
    {"col1": 3, "col2": 4},
    {"col1": 5, "col2": 6},
]
result = agent.run("Process this data frame: " + str(data))
print(f"Data Frame Summary: {result}")
```
## Conclusion
The Swarms Tool System provides a robust and flexible framework for defining and utilizing tools within agents. By leveraging Pydantic BaseModels, functions, and dictionaries, developers can create highly customized tools to perform a wide range of tasks. The extensive customization options allow for the integration of external libraries and the extension of core components, making the Swarms framework suitable for diverse applications.
This guide has covered the fundamental concepts and provided detailed examples to help you get started with the Swarms Tool System. With this foundation, you can explore and implement advanced features to build powerful, customized agents.
If you enjoyed this guide, check out the swarms GitHub repository, star it, and fork it as well!

@ -1,115 +0,0 @@
# **The Ultimate Guide to Mastering the `Worker` Class from Swarms**
---
**Table of Contents**
1. Introduction: Welcome to the World of the Worker
2. The Basics: What Does the Worker Do?
3. Installation: Setting the Stage
4. Dive Deep: Understanding the Architecture
5. Practical Usage: Let's Get Rolling!
6. Advanced Tips and Tricks
7. Handling Errors: Because We All Slip Up Sometimes
8. Beyond the Basics: Advanced Features and Customization
9. Conclusion: Taking Your Knowledge Forward
---
**1. Introduction: Welcome to the World of the Worker**
Greetings, future master of the `Worker`! Step into a universe where you can command an AI worker to perform intricate tasks, be it searching the vast expanse of the internet or crafting multi-modality masterpieces. Ready to embark on this thrilling journey? Let's go!
---
**2. The Basics: What Does the Worker Do?**
The `Worker` is your personal AI assistant. Think of it as a diligent bee in a swarm, ready to handle complex tasks across various modalities, from text and images to audio and beyond.
---
**3. Installation: Setting the Stage**
Before we can call upon our Worker, we need to set the stage:
```bash
pip install swarms
```
Voila! You're now ready to summon your Worker.
---
**4. Dive Deep: Understanding the Architecture**
- **Language Model (LLM)**: The brain of our Worker. It understands and crafts intricate language-based responses.
- **Tools**: Think of these as the Worker's toolkit. They range from file tools, website querying, to even complex tasks like image captioning.
- **Memory**: No, our Worker doesn't forget. It employs a sophisticated memory mechanism to remember past interactions and learn from them.
---
**5. Practical Usage: Let's Get Rolling!**
Here's a simple way to invoke the Worker and give it a task:
```python
from swarms import Worker
from swarms.models import OpenAIChat
llm = OpenAIChat(
    # enter your api key
    openai_api_key="",
    temperature=0.5,
)

node = Worker(
    llm=llm,
    ai_name="Optimus Prime",
    openai_api_key="",
    ai_role="Worker in a swarm",
    external_tools=None,
    human_in_the_loop=False,
    temperature=0.5,
)

task = "What were the winning boston marathon times for the past 5 years (ending in 2022)? Generate a table of the year, name, country of origin, and times."
response = node.run(task)
print(response)
```
The result? An agent with elegantly integrated tools and long-term memory.
---
**6. Advanced Tips and Tricks**
- **Streaming Responses**: Want your Worker to respond in a more dynamic fashion? Use the `_stream_response` method to get results token by token (see the sketch after this list).
- **Human-in-the-Loop**: By setting `human_in_the_loop` to `True`, you can involve a human in the decision-making process, ensuring the best results.
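A hedged sketch of the streaming tip above: the method name comes from this guide, but its exact signature and return type may differ between swarms versions, so treat this as an illustration rather than the definitive API.
```python
# Assumes node and task are defined as in section 5.
response = node.run(task)

# _stream_response is assumed here to yield the response token by token.
for token in node._stream_response(response):
    print(token, end="", flush=True)
```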
---
**7. Handling Errors: Because We All Slip Up Sometimes**
Your Worker is designed to be robust. But if it ever encounters a hiccup, it's equipped to let you know. Error messages are crafted to be informative, guiding you on the next steps.
---
**8. Beyond the Basics: Advanced Features and Customization**
- **Custom Tools**: Want to expand the Worker's toolkit? Use the `external_tools` parameter to integrate your custom tools (see the sketch after this list).
- **Memory Customization**: You can tweak the Worker's memory settings, ensuring it remembers what's crucial for your tasks.
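Here is a hedged sketch of passing a custom tool via `external_tools`, following the `@tool` pattern from the Worker tutorial earlier in these docs; `word_count` is a made-up example tool, and the import paths assume the same swarms version used there.
```python
from swarms import Worker
from swarms.models import OpenAIChat
from swarms.tools.autogpt import tool  # same decorator used in the Worker tutorial


@tool
def word_count(text: str) -> str:
    """Count the words in a piece of text."""
    return str(len(text.split()))


llm = OpenAIChat(openai_api_key="", temperature=0.5)

node = Worker(
    llm=llm,
    ai_name="Optimus Prime",
    openai_api_key="",
    ai_role="Worker in a swarm",
    external_tools=[word_count],  # custom tools are registered here
    human_in_the_loop=False,
    temperature=0.5,
)
```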
---
**9. Conclusion: Taking Your Knowledge Forward**
Congratulations! You're now well-equipped to harness the power of the `Worker` from Swarms. As you venture further, remember: the possibilities are endless, and with the Worker by your side, there's no task too big!
**Happy Coding and Exploring!** 🚀🎉
---
*Note*: This guide provides a stepping stone to the vast capabilities of the `Worker`. Dive into the official documentation for a deeper understanding and stay updated with the latest features.
---

@ -7,33 +7,28 @@ Orchestrate enterprise-grade agents for multi-agent collaboration and orchestrat
<h2>Core Concepts</h2>
<ul>
<li>
<a href="./core-concepts/Agents">
<a href="swarms/structs/agent.md">
Agents
</a>
</li>
<li>
<a href="./core-concepts/Tasks">
Tasks
<a href="./core-concepts/Memory">
Memory
</a>
</li>
<li>
<a href="./core-concepts/Tools">
<a href="swarms/tools/main.md">
Tools
</a>
</li>
<li>
<a href="./core-concepts/Processes">
Processes
</a>
</li>
<li>
<a href="./core-concepts/Crews">
Crews
<a href="swarms/structs/task.md">
Tasks
</a>
</li>
<li>
<a href="./core-concepts/Memory">
Memory
<a href="./core-concepts/Processes">
Multi-Agent Orchestration
</a>
</li>
</ul>

@ -93,7 +93,6 @@ markdown_extensions:
- tables
- def_list
- footnotes
nav:
- Home: "/"
- Installation:

@ -4,6 +4,7 @@ from swarms import Agent, OpenAIChat
# Initialize the agent
agent = Agent(
agent_name="Transcript Generator",
system_prompt="Generate a transcript for a youtube video on what swarms are!",
agent_description=(
"Generate a transcript for a youtube video on what swarms" " are!"
),

@ -47,6 +47,7 @@ Instructions:
"""
def movers_agent_system_prompt():
return """
The Movers Agent is responsible for providing users with fixed-cost estimates for moving services
@ -75,8 +76,8 @@ def movers_agent_system_prompt():
4. Users can click "negotiate" to simulate negotiation via Retell API, adjusting the price within a predefined range.
"""
# Example usage
# Example usage
# Initialize
