# Release Notes

## 3.7.5

2024-01-21

### Bug Fixes

- Fixed imports of `Agent`, `SequentialWorkflow`, `ModelParallelizer`, `Task`, `OpenAIChat`, `Gemini`, and `GPT4VisionAPI`.

### New Features

- New model: Odin for object detection and tracking.
- New model: Ultralytics YOLO object recognition.
- New tokenizers.
- Schema generator for prompts.
- New prompt for the worker agent.
- New structures: plan and step.
- New tool: execute tool.
- New logger: `get_logger`.
- Example for `worker_agent`.

## 3.6.8

2024-01-19

### Bug Fixes

- Removed ModelScope.
- Removed CogAgent.

### New Features

- Added Ultralytics vision models.
- Added `TimmModel` to wrap timm models.

### Other

- Loosened the version constraint on timm.
## BingChat User Guide

Welcome to the BingChat user guide! This document provides a step-by-step tutorial on how to use the BingChat class, an interface to EdgeGPT, a reverse-engineered client for Microsoft's Bing Chat.

### Table of Contents
1. [Installation & Prerequisites](#installation)
2. [Setting Up BingChat](#setup)
3. [Interacting with BingChat](#interacting)
4. [Generating Images](#images)
5. [Managing Cookies](#cookies)

### Installation & Prerequisites <a name="installation"></a>

Before initializing the BingChat model, ensure you have the necessary dependencies installed:

```shell
pip install EdgeGPT
```

Additionally, you must have a `cookies.json` file, which is necessary for authenticating with EdgeGPT.
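EdgeGPT will fail at startup if the cookie file is missing or malformed, so it can help to validate it up front. Below is a minimal sketch using only the standard library; the `load_cookies` helper is our own illustration, not part of EdgeGPT:

```python
import json
from pathlib import Path


def load_cookies(cookies_path: str) -> list:
    """Load and sanity-check an exported cookies.json before handing it to EdgeGPT."""
    path = Path(cookies_path)
    if not path.exists():
        raise FileNotFoundError(f"Cookie file not found: {cookies_path}")
    cookies = json.loads(path.read_text())
    # Browser exports are a list of cookie records; anything else is suspect.
    if not isinstance(cookies, list) or not cookies:
        raise ValueError("cookies.json should be a non-empty list of cookie records")
    return cookies
```

Running this check before constructing `BingChat` turns a cryptic authentication failure into a clear error message.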
### Setting Up BingChat <a name="setup"></a>

To start, import the BingChat class:

```python
from bing_chat import BingChat
```

Initialize BingChat with the path to your `cookies.json`:

```python
chat = BingChat(cookies_path="./path/to/cookies.json")
```

### Interacting with BingChat <a name="interacting"></a>

You can obtain text responses from the EdgeGPT model by simply calling the instantiated object:

```python
response = chat("Hello, my name is ChatGPT")
print(response)
```

You can also specify the conversation style:

```python
from bing_chat import ConversationStyle

response = chat("Tell me a joke", style=ConversationStyle.creative)
print(response)
```

### Generating Images <a name="images"></a>

BingChat allows you to generate images based on text prompts:

```python
image_path = chat.create_img("Sunset over mountains", auth_cookie="YOUR_AUTH_COOKIE")
print(f"Image saved at: {image_path}")
```

Ensure you provide the required `auth_cookie` for image generation.

### Managing Cookies <a name="cookies"></a>

You can set a directory path for managing cookies using the `set_cookie_dir_path` method:

```python
BingChat.set_cookie_dir_path("./path/to/cookies_directory")
```
This section of the documentation is dedicated to examples highlighting Swarms functionality.

We try to keep all examples up to date, but if you think there is a bug, please [submit a pull request](https://github.com/kyegomez/swarms-docs/tree/main/docs/examples). We are also more than happy to include new examples. :)
## ChatGPT User Guide with Abstraction

Welcome to the ChatGPT user guide! This document will walk you through the Reverse Engineered ChatGPT API, its usage, and how to leverage the abstraction in `revgpt.py` for seamless integration.

### Table of Contents
1. [Installation](#installation)
2. [Initial Setup and Configuration](#initial-setup)
3. [Using the Abstract Class from `revgpt.py`](#using-abstract-class)
4. [V1 Standard ChatGPT](#v1-standard-chatgpt)
5. [V3 Official Chat API](#v3-official-chat-api)
6. [Credits & Disclaimers](#credits-disclaimers)

### Installation <a name="installation"></a>

To kickstart your journey with ChatGPT, first install the ChatGPT package:

```shell
python -m pip install --upgrade revChatGPT
```

**Supported Python Versions:**
- Minimum: Python 3.9
- Recommended: Python 3.11+

### Initial Setup and Configuration <a name="initial-setup"></a>

1. **Account Setup:** Register on [OpenAI's ChatGPT](https://chat.openai.com/).
2. **Authentication:** Obtain your access token from OpenAI's platform.
3. **Environment Variables:** Configure your environment with the necessary variables. An example of these variables can be found at the bottom of the guide.
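The guide assumes an `ACCESS_TOKEN` environment variable. A small sketch (our own helper, not part of revgpt) that reads it and fails loudly when it is missing, rather than passing `None` downstream:

```python
import os


def get_access_token() -> str:
    """Read the ChatGPT access token from the environment, failing loudly if unset."""
    token = os.getenv("ACCESS_TOKEN")
    if not token:
        raise RuntimeError(
            "ACCESS_TOKEN is not set; export it or add it to your .env file"
        )
    return token
```

Failing early like this makes misconfiguration obvious instead of surfacing as an opaque authentication error later.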
### Using the Abstract Class from `revgpt.py` <a name="using-abstract-class"></a>

The abstraction provided in `revgpt.py` is designed to simplify your interactions with ChatGPT.

1. **Import the Necessary Modules:**

```python
import os

from dotenv import load_dotenv
from revgpt import AbstractChatGPT
```

2. **Load Environment Variables:**

```python
load_dotenv()
```

3. **Initialize the ChatGPT Abstract Class:**

```python
chat = AbstractChatGPT(api_key=os.getenv("ACCESS_TOKEN"), **config)
```

4. **Start Interacting with ChatGPT:**

```python
response = chat.ask("Hello, ChatGPT!")
print(response)
```

With the abstract class, you can seamlessly switch between different versions or models of ChatGPT without changing much of your code.
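The pattern behind this flexibility can be sketched with Python's `abc` module. This is an illustrative toy, not the actual `AbstractChatGPT` source: callers depend only on the abstract interface, so backends can be swapped without touching call sites.

```python
from abc import ABC, abstractmethod


class ChatBackend(ABC):
    """Minimal sketch of a version-agnostic chat interface."""

    @abstractmethod
    def ask(self, prompt: str) -> str: ...


class EchoBackend(ChatBackend):
    """Stand-in backend used to show that callers don't change."""

    def ask(self, prompt: str) -> str:
        return f"echo: {prompt}"


def converse(chat: ChatBackend, prompt: str) -> str:
    # Works with any backend -- swapping versions touches only construction.
    return chat.ask(prompt)
```

Because `ChatBackend` cannot be instantiated directly, every concrete backend is forced to implement `ask`, which is what makes the swap safe.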
### V1 Standard ChatGPT <a name="v1-standard-chatgpt"></a>

If you wish to use V1 specifically:

1. Import the model:

```python
import os

from swarms.models.revgptV1 import RevChatGPTModelv1
```

2. Initialize:

```python
model = RevChatGPTModelv1(access_token=os.getenv("ACCESS_TOKEN"), **config)
```

3. Interact:

```python
response = model.run("What's the weather like?")
print(response)
```

### V3 Official Chat API <a name="v3-official-chat-api"></a>

For users looking to integrate the official V3 API:

1. Import the model:

```python
import os

from swarms.models.revgptV4 import RevChatGPTModelv4
```

2. Initialize:

```python
model = RevChatGPTModelv4(access_token=os.getenv("OPENAI_API_KEY"), **config)
```

3. Interact:

```python
response = model.run("Tell me a fun fact!")
print(response)
```

### Credits & Disclaimers <a name="credits-disclaimers"></a>

- This project is not an official OpenAI product and is not affiliated with OpenAI. Use at your own discretion.
- Many thanks to all the contributors who have made this project possible.
- Special acknowledgment to [virtualharby](https://www.youtube.com/@virtualharby) for the motivating music!

---

By following this guide, you should now have a clear understanding of how to use the Reverse Engineered ChatGPT API and its abstraction. Happy coding!
# Tutorial: Understanding and Utilizing Worker Examples

## Table of Contents
1. Introduction
2. Code Overview
   - Import Statements
   - Initializing API Key and Language Model
   - Creating Swarm Tools
   - Appending Tools to a List
   - Initializing a Worker Node
3. Understanding the `hf_agent` Tool
4. Understanding the `omni_agent` Tool
5. Understanding the `compile` Tool
6. Running a Swarm
7. Full Code
8. Conclusion

## 1. Introduction
The provided code showcases a system built around a worker node that utilizes various AI models and tools to perform tasks. This tutorial breaks the code down step by step, explaining its components, how they work together, and how to leverage its modularity for various tasks.
## 2. Code Overview

### Import Statements
The code begins with import statements, bringing in necessary modules and classes. Key imports include the `OpenAIChat` class, which represents a language model, and several custom agents and tools from the `swarms` package.

```python
import interpreter  # Assuming this is a custom module

from swarms.agents.hf_agents import HFAgent
from swarms.agents.omni_modal_agent import OmniModalAgent
from swarms.models import OpenAIChat
from swarms.tools.autogpt import tool
from swarms.workers import Worker
```

### Initializing API Key and Language Model
Here, an API key is initialized and a language model (`OpenAIChat`) is created. This model can generate human-like text based on the provided input.

```python
# Initialize API Key
api_key = "YOUR_OPENAI_API_KEY"

# Initialize the language model
llm = OpenAIChat(
    openai_api_key=api_key,
    temperature=0.5,
)
```

### Creating Swarm Tools
The code defines three tools: `hf_agent`, `omni_agent`, and `compile`. These tools encapsulate specific functionalities and can be invoked to perform tasks.

### Appending Tools to a List
All defined tools are appended to a list called `tools`. This list is later used when initializing a worker node, allowing the node to access and utilize these tools.

```python
# Append tools to a list
tools = [hf_agent, omni_agent, compile]
```

### Initializing a Worker Node
A worker node is initialized using the `Worker` class. The worker node is equipped with the language model, a name, an API key, and the list of tools. It is set up to perform tasks without human intervention.

```python
# Initialize a single Worker node with previously defined tools in addition to its predefined tools
node = Worker(
    llm=llm,
    ai_name="Optimus Prime",
    openai_api_key=api_key,
    ai_role="Worker in a swarm",
    external_tools=tools,
    human_in_the_loop=False,
    temperature=0.5,
)
```
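Conceptually, giving a worker a list of tools amounts to handing it a name-to-function registry it can dispatch into. The toy sketch below illustrates that idea only; it is not the actual `Worker` internals:

```python
def make_dispatcher(tools):
    """Map tool functions by name so a worker-like loop can invoke them."""
    registry = {fn.__name__: fn for fn in tools}

    def dispatch(name: str, task: str) -> str:
        # A real worker would let the LLM pick the tool name; here we pass it in.
        if name not in registry:
            raise KeyError(f"Unknown tool: {name!r}")
        return registry[name](task)

    return dispatch


def shout(task: str) -> str:
    """Trivial stand-in tool."""
    return task.upper()


dispatch = make_dispatcher([shout])
```

The dictionary keyed on `__name__` is why descriptive function names matter: they are how the worker (and ultimately the LLM) refers to each tool.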
## 3. Understanding the `hf_agent` Tool
The `hf_agent` tool uses an OpenAI model (`text-davinci-003`) to perform tasks. It takes a task as input and returns a response. This tool is intended for multi-modal tasks like generating images, videos, and speech; its primary rule is that it should not be used for simple tasks like generating summaries.

```python
@tool
def hf_agent(task: str = None):
    # Create an HFAgent instance with the specified model and API key
    agent = HFAgent(model="text-davinci-003", api_key=api_key)
    # Run the agent with the provided task and optional text input
    response = agent.run(task, text="¡Este es un API muy agradable!")
    return response
```

## 4. Understanding the `omni_agent` Tool
The `omni_agent` tool is more versatile and leverages the `llm` (language model) to guide Hugging Face models through various tasks. It is intended for multi-modal tasks such as document question answering, image captioning, summarization, and more. Like `hf_agent`, it should not be used for simple tasks.

```python
@tool
def omni_agent(task: str = None):
    # Create an OmniModalAgent instance with the provided language model
    agent = OmniModalAgent(llm)
    # Run the agent with the provided task
    response = agent.run(task)
    return response
```
## 5. Understanding the `compile` Tool
The `compile` tool allows code to be executed locally, supporting several programming languages such as Python, JavaScript, and Shell. It provides a natural-language interface to your computer's capabilities: users can chat with this tool in a terminal-like interface to perform tasks such as creating and editing files, controlling a browser, and more.

```python
import os


@tool
def compile(task: str):
    # Use the interpreter module to chat with the local interpreter
    task = interpreter.chat(task, return_messages=True)
    interpreter.chat()
    interpreter.reset(task)

    # Set environment variables for the interpreter
    # (environment variable values must be strings, not booleans)
    os.environ["INTERPRETER_CLI_AUTO_RUN"] = "True"
    os.environ["INTERPRETER_CLI_FAST_MODE"] = "True"
    os.environ["INTERPRETER_CLI_DEBUG"] = "True"
```
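Note that `os.environ` only accepts string values: assigning a boolean raises a `TypeError`, which is why flags like `INTERPRETER_CLI_AUTO_RUN` should be set to the string `"True"` rather than the boolean `True`. A quick demonstration:

```python
import os

# Environment variables are always strings.
os.environ["INTERPRETER_CLI_AUTO_RUN"] = "True"

# Assigning a bool raises TypeError.
try:
    os.environ["INTERPRETER_CLI_FAST_MODE"] = True  # type: ignore[assignment]
except TypeError:
    print("bools are rejected; use the string 'True'")

# Reading the flag back and interpreting it:
auto_run = os.environ.get("INTERPRETER_CLI_AUTO_RUN", "False") == "True"
```

The explicit `== "True"` comparison when reading the flag back is the usual counterpart of writing it as a string.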
## 6. Running a Swarm
After defining tools and initializing the worker node, a specific task is provided as input to the worker node. The node then runs the task, and the response is printed to the console.

```python
# Specify the task
task = "What were the winning Boston Marathon times for the past 5 years (ending in 2022)? Generate a table of the year, name, country of origin, and times."

# Run the node on the task
response = node.run(task)

# Print the response
print(response)
```
## 7. Full Code
The full code example of stacked swarms:

```python
import os

import interpreter

from swarms.agents.hf_agents import HFAgent
from swarms.agents.omni_modal_agent import OmniModalAgent
from swarms.models import OpenAIChat
from swarms.tools.autogpt import tool
from swarms.workers import Worker

# Initialize API Key
api_key = ""


# Initialize the language model.
# This model can be swapped out for Anthropic or Hugging Face models such as Mistral.
llm = OpenAIChat(
    openai_api_key=api_key,
    temperature=0.5,
)


# Wrap a function with the tool decorator to make it a tool,
# then add a docstring for tool documentation
@tool
def hf_agent(task: str = None):
    """
    A tool that uses an OpenAI model to respond to a task by searching for a model on Hugging Face.
    It first downloads the model, then uses it.

    Rules: Don't call this tool for simple tasks like generating a summary; only call it
    for multi-modal tasks like generating images, videos, speech, etc.
    """
    agent = HFAgent(model="text-davinci-003", api_key=api_key)
    response = agent.run(task, text="¡Este es un API muy agradable!")
    return response


# Wrap a function with the tool decorator to make it a tool
@tool
def omni_agent(task: str = None):
    """
    A tool that uses an OpenAI model to call Hugging Face models and guide them to perform a task.

    Rules: Don't call this tool for simple tasks like generating a summary; only call it
    for multi-modal tasks like generating images, videos, or speech.

    Tasks the omni agent is good for:
    --------------
    document-question-answering
    image-captioning
    image-question-answering
    image-segmentation
    speech-to-text
    summarization
    text-classification
    text-question-answering
    translation
    huggingface-tools/text-to-image
    huggingface-tools/text-to-video
    text-to-speech
    huggingface-tools/text-download
    huggingface-tools/image-transformation
    """
    agent = OmniModalAgent(llm)
    response = agent.run(task)
    return response


# Code Interpreter
@tool
def compile(task: str):
    """
    Open Interpreter lets LLMs run code (Python, JavaScript, Shell, and more) locally.
    You can chat with Open Interpreter through a ChatGPT-like interface in your terminal
    by running $ interpreter after installing.

    This provides a natural-language interface to your computer's general-purpose capabilities:

    Create and edit photos, videos, PDFs, etc.
    Control a Chrome browser to perform research
    Plot, clean, and analyze large datasets
    ...etc.
    ⚠️ Note: You'll be asked to approve code before it's run.

    Rules: Only use when asked to generate code or an application of some kind.
    """
    task = interpreter.chat(task, return_messages=True)
    interpreter.chat()
    interpreter.reset(task)

    # Environment variable values must be strings, not booleans
    os.environ["INTERPRETER_CLI_AUTO_RUN"] = "True"
    os.environ["INTERPRETER_CLI_FAST_MODE"] = "True"
    os.environ["INTERPRETER_CLI_DEBUG"] = "True"


# Append tools to a list
tools = [hf_agent, omni_agent, compile]


# Initialize a single Worker node with previously defined tools
# in addition to its predefined tools
node = Worker(
    llm=llm,
    ai_name="Optimus Prime",
    openai_api_key=api_key,
    ai_role="Worker in a swarm",
    external_tools=tools,
    human_in_the_loop=False,
    temperature=0.5,
)

# Specify the task
task = "What were the winning Boston Marathon times for the past 5 years (ending in 2022)? Generate a table of the year, name, country of origin, and times."

# Run the node on the task
response = node.run(task)

# Print the response
print(response)
```
## 8. Conclusion
In this tutorial, we explored a system designed to harness the power of AI models and tools for a wide variety of tasks. We dissected the code into its components and examined how those elements come together to create a versatile, modular, and powerful swarm-based AI system.

## What We've Learned

Throughout this tutorial, we covered the following key aspects:

### Code Structure and Components
We dissected the code into its fundamental building blocks:
- **Import Statements:** We imported the necessary modules and libraries, setting the stage for the system's functionality.
- **Initializing API Key and Language Model:** We learned how to set up the essential API key and initialize the language model, the core component for text generation and understanding.
- **Creating Swarm Tools:** We explored how to define tools, encapsulating specific functionalities that the system can leverage.
- **Appending Tools to a List:** We aggregated the tools into a list, making them readily available for use.
- **Initializing a Worker Node:** We created a worker node equipped with tools, a name, and configuration settings.

### Tools and Their Functions
We looked in detail at the purpose and functionality of three key tools:
- **`hf_agent`:** How this tool employs an OpenAI model for multi-modal tasks, and its use cases beyond simple summarization.
- **`omni_agent`:** The versatility of this tool, guiding Hugging Face models through a wide range of multi-modal tasks.
- **`compile`:** How this tool allows the execution of code in multiple languages, providing a natural-language interface for various computational tasks.

### Worked Examples
We brought the code to life with worked examples, showing how to initialize the language model, generate text, perform document question answering, and execute code, all in practical, real-world scenarios.

## A Recap: The Worker Node's Role

At the heart of this system lies the Worker Node, a versatile entity capable of wielding AI models and tools to accomplish tasks. The Worker Node's role is pivotal in the following ways:

1. **Task Execution:** It is responsible for executing tasks, harnessing the capabilities of the defined tools to generate responses or perform actions.

2. **Modularity:** The Worker Node benefits from the modularity of the system. It can easily access and utilize a variety of tools, allowing it to adapt to diverse tasks and requirements.

3. **Human in the Loop:** While the example here is configured to operate without human intervention, the Worker Node can be customized to incorporate human input or approval when needed.

4. **Integration:** It can be extended to integrate with other AI models, APIs, or services, expanding its functionality and versatility.

## The Road Ahead: Future Features and Enhancements

While the current implementation is already powerful, there is always room for growth and improvement. Here are some potential future features and enhancements to consider:

### 1. Enhanced Natural Language Understanding
- **Semantic Understanding:** Improve the system's ability to understand context and nuance in natural language, enabling more accurate responses.

### 2. Multimodal Capabilities
- **Extended Multimodal Support:** Expand the `omni_agent` tool to support additional types of multimodal tasks, such as video generation or audio processing.

### 3. Customization and Integration
- **User-Defined Tools:** Allow users to define their own custom tools, opening up endless possibilities for tailoring the system to specific needs.

### 4. Collaborative Swarms
- **Swarm Collaboration:** Enable multiple Worker Nodes to collaborate on complex tasks, creating a distributed, intelligent swarm system.

### 5. User-Friendly Interfaces
- **Graphical User Interface (GUI):** Develop a user-friendly GUI for easier interaction and task management, appealing to a wider audience.

### 6. Continuous Learning
- **Active Learning:** Implement mechanisms for the system to learn and adapt over time, improving its performance with each task.

### 7. Security and Privacy
- **Enhanced Security:** Implement robust security measures to safeguard sensitive data and interactions within the system.

### 8. Community and Collaboration
- **Open Source Community:** Foster an open-source community around the system, encouraging contributions and innovation from developers worldwide.

### 9. Integration with Emerging Technologies
- **Integration with Emerging AI Models:** Keep the system up to date by seamlessly integrating with new and powerful AI models as they emerge.

## In Conclusion

In this tutorial, we've walked through a complex AI system, unraveling its inner workings and understanding its potential. We've seen how a modest amount of code can handle a wide array of tasks, from generating creative text to executing code snippets.

This system, with its modular design and room for continuous improvement, embodies adaptability. Whether you're a developer, a researcher, or an enthusiast, there is plenty to explore and build on. Dive into the code, experiment with your own tools, and see what you can create.
# **Equipping Autonomous Agents with Tools**

Tools play a crucial role in enhancing the capabilities of AI agents. Swarms, a powerful open-source framework, provides a robust and flexible environment for building and integrating tools with AI agents. In this guide, we'll explore the process of creating tools in Swarms: the 3-step process, the tool decorator, adding type hints and docstrings, and integrating tools into the `Agent` class.

## **Introduction to Swarms**

Swarms is a Python-based framework that simplifies the development and deployment of AI agents. It integrates seamlessly with large language models (LLMs) and offers a wide range of tools and utilities to streamline the agent development process. One of the core features of Swarms is the ability to create and integrate custom tools, which can significantly extend the capabilities of AI agents.

Learn more at [GitHub - kyegomez/swarms](https://github.com/kyegomez/swarms): Build, Deploy, and Scale Reliable Swarms of Autonomous Agents for Workflow Automation.

And join our community for real-time support and conversations with friends: [Join the Agora Discord Server!](https://discord.gg/A8DrG5nj)

## **Installation**

First, install swarms with the following command. If you have any questions, please ask us in the Discord!

```bash
pip3 install -U swarms
```
## **Necessary Imports**

Before we dive into the process of creating tools in Swarms, let's familiarize ourselves with the necessary imports:

```python
import subprocess

from swarms import Agent, Anthropic, tool
```

- These imports provide access to the core components of the Swarms framework, including the `Agent` class, the `Anthropic` language model, and the `tool` decorator for creating custom tools.

- `import subprocess`: This import allows us to interact with the system's shell and execute commands, which is useful for certain types of tools.

With these imports in place, we're ready to explore the process of creating tools in Swarms.

### **The 3-Step Process**

Creating tools in Swarms follows a straightforward 3-step process:

1. Define the tool function.
2. Decorate the function with `@tool` and document it with type hints and a docstring.
3. Add the tool to the `Agent` instance.

Let's go through each step in detail, accompanied by code examples.
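To make the mechanics of these steps concrete, here is a toy sketch of what a registering decorator could look like. This is purely illustrative and is not Swarms' actual `tool` implementation:

```python
# Hypothetical module-level registry the decorator fills in.
TOOL_REGISTRY: dict = {}


def tool(fn):
    """Toy sketch of a registering decorator (not Swarms' actual code)."""
    TOOL_REGISTRY[fn.__name__] = {
        "fn": fn,
        "doc": (fn.__doc__ or "").strip(),
        "params": getattr(fn, "__annotations__", {}),
    }
    return fn  # the function itself is unchanged and still callable


@tool
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b
```

The key idea is that decoration is side-effecting registration: the function keeps working normally, while the framework gains a record of its name, documentation, and parameter types.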
### **Step 1: Define the Tool Function**

The first step in creating a tool is to define a Python function that encapsulates the desired functionality. This function will serve as the core logic of your tool. Here's an example of a tool function that executes code in the terminal:

```python
def terminal(code: str) -> str:
    """
    Run code in the terminal.

    Args:
        code (str): The code to run in the terminal.

    Returns:
        str: The output of the code.
    """
    out = subprocess.run(
        code, shell=True, capture_output=True, text=True
    ).stdout
    return str(out)
```

In this example, the `terminal` function takes a string `code` as input and uses the `subprocess` module to execute the provided code in the system's shell. The output of the code is captured and returned as a string.

### **Let's break down the components of this function:**

- **Function Signature:** The signature `def terminal(code: str) -> str:` defines the function name (`terminal`), the parameter name and type (`code: str`), and the return type (`-> str`). This adheres to Python's type-hinting conventions.

- **Docstring:** The multiline string enclosed in triple quotes (`"""` ... `"""`) is a docstring, which briefly describes the function, its parameters, and its return value. Docstrings are essential for documenting your code and making it easier for the agent to understand and use your tools.

- **Function Body:** The body contains the actual logic for executing the code in the terminal. It uses `subprocess.run` to execute the provided `code` in the shell, capturing the output (`capture_output=True`) and returning it as text (`text=True`). The `stdout` attribute of the `CompletedProcess` object contains the captured output, which is converted to a string and returned.

This is a simple example, but it demonstrates the key components of a tool function: a well-defined signature with type hints, a descriptive docstring, and the core logic encapsulated within the function body.
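A quick sanity check of the function above, assuming a POSIX shell is available:

```python
import subprocess


def terminal(code: str) -> str:
    """Run code in the terminal and return its stdout (as defined above)."""
    out = subprocess.run(
        code, shell=True, capture_output=True, text=True
    ).stdout
    return str(out)


# Echo a string through the shell and read back its captured stdout.
print(terminal("echo hello"))
```

Because `capture_output=True` redirects stdout into the `CompletedProcess`, nothing is printed by the subprocess itself; the output only appears because we print the returned string.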
### **Step 2: Decorate the Function with `**[**@tool**](http://twitter.com/tool)**`**
|
||||
---------------------------------------------------------------------------------
|
||||
|
||||
After defining the tool function, the next step is to decorate it with the `[@tool](http://twitter.com/tool)` decorator provided by Swarms. This decorator registers the function as a tool, allowing it to be used by AI agents within the Swarms framework.
|
||||
|
||||
Here's how you would decorate the `terminal` function from the previous example:
|
||||
|
||||
```python
import subprocess

from swarms import tool


@tool
def terminal(code: str) -> str:
    """
    Run code in the terminal.

    Args:
        code (str): The code to run in the terminal.

    Returns:
        str: The output of the code.
    """
    out = subprocess.run(
        code, shell=True, capture_output=True, text=True
    ).stdout
    return str(out)
```
The `@tool` decorator is placed directly above the function definition. This decorator performs the necessary registration and configuration steps to integrate the tool with the Swarms framework.
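To build intuition for what such a decorator does, here is a hypothetical, minimal registration decorator. The `TOOL_REGISTRY` dict and this `tool` function are illustrative assumptions, not the actual Swarms implementation (which also derives a call schema from the signature and docstring):

```python
import inspect

# Hypothetical registry; the real Swarms decorator does more than this.
TOOL_REGISTRY = {}

def tool(fn):
    """Record the function's name, docstring, and signature, then return it unchanged."""
    TOOL_REGISTRY[fn.__name__] = {
        "callable": fn,
        "docstring": fn.__doc__,
        "signature": str(inspect.signature(fn)),
    }
    return fn

@tool
def echo(text: str) -> str:
    """Return the input text unchanged."""
    return text

print("echo" in TOOL_REGISTRY)  # True
```

The key idea is that decoration happens at import time, so every tool is known to the framework before any agent runs.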
### **Step 3: Add the Tool to the `Agent` Instance**

The final step in creating a tool is to add it to the `Agent` instance. The `Agent` class is a core component of the Swarms framework and represents an AI agent capable of interacting with humans and other agents, as well as utilizing the available tools.

Here's an example of how to create an `Agent` instance and add the `terminal` tool:
```python
from swarms import Agent, Anthropic

# Model
llm = Anthropic(
    temperature=0.1,
)

# Agent
agent = Agent(
    agent_name="Devin",
    system_prompt=(
        "Autonomous agent that can interact with humans and other"
        " agents. Be Helpful and Kind. Use the tools provided to"
        " assist the user. Return all code in markdown format."
    ),
    llm=llm,
    max_loops="auto",
    autosave=True,
    dashboard=False,
    streaming_on=True,
    verbose=True,
    stopping_token="<DONE>",
    interactive=True,
    tools=[terminal],
    code_interpreter=True,
)
```
In this example, we first create an instance of the `Anthropic` language model, which will be used by the agent for natural language processing. We then create an `Agent` instance named "Devin", configure its behavior, and register the `terminal` tool via the `tools` parameter.
### **The Necessity of Documentation**

Before creating tools, it's essential to understand the importance of documentation. Clear and concise documentation ensures that your code is easily understandable and maintainable, not only for yourself but also for other developers who may work with your codebase in the future.

Effective documentation serves several purposes:

1. Code Clarity: Well-documented code is easier to read and understand, making it more accessible for both developers and non-technical stakeholders.
2. Collaboration: When working in a team or contributing to open-source projects, proper documentation facilitates collaboration and knowledge sharing.
3. Onboarding: Comprehensive documentation can significantly streamline the onboarding process for new team members or contributors, reducing the time required to familiarize themselves with the codebase.
4. Future Maintenance: As projects evolve and requirements change, well-documented code becomes invaluable for future maintenance and updates.

In the context of creating tools in Swarms, documentation plays a vital role in ensuring that your tools are easily discoverable, understandable, and usable by other developers and AI agents.
### **Type Handling**

Python is a dynamically typed language, which means that variables can hold values of different types during runtime. While this flexibility can be advantageous in certain scenarios, it can also lead to potential errors and inconsistencies, especially in larger codebases.

Type hints, introduced in Python 3.5, provide a way to explicitly annotate the expected types of variables, function parameters, and return values. By incorporating type hints into your code, you can:

1. Improve Code Readability: Type hints make it easier for developers to understand the expected data types, reducing the risk of introducing bugs due to type-related errors.
2. Enable Static Type Checking: With tools like mypy, you can perform static type checking, catching potential type-related issues before running the code.
3. Enhance Code Completion and Tooling: Modern IDEs and code editors can leverage type hints to provide better code completion, refactoring capabilities, and inline documentation.

In the context of creating tools in Swarms, type hints are crucial for ensuring that your tools are used correctly by AI agents and other developers. By clearly defining the expected input and output types, you can reduce the likelihood of runtime errors and improve the overall reliability of your tools.
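As a small illustration (hypothetical function names, not part of Swarms), compare an untyped function with a typed one that a static checker like mypy can verify:

```python
def add_untyped(a, b):
    # Accepts anything; add_untyped("2", "3") silently returns "23".
    return a + b

def add_typed(a: int, b: int) -> int:
    # mypy would reject add_typed("2", "3") before the code ever runs.
    return a + b

print(add_typed(2, 3))  # 5
```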
Now, let's continue with other tool examples!

### **Additional Tool Examples**

To further illustrate the process of creating tools in Swarms, let's explore a few more examples of tool functions with varying functionalities.

**Browser Tool**
```python
@tool
def browser(query: str) -> str:
    """
    Search the query in the browser with the `browser` tool.

    Args:
        query (str): The query to search in the browser.

    Returns:
        str: The search results.
    """
    import webbrowser
    from urllib.parse import quote_plus

    # Encode the query so spaces and special characters form a valid URL
    url = f"https://www.google.com/search?q={quote_plus(query)}"
    webbrowser.open(url)
    return f"Searching for {query} in the browser."
```
The `browser` tool allows the agent to perform web searches by opening the provided `query` in the default web browser. It leverages the `webbrowser` module to construct the search URL and open it in the browser. The tool returns a string indicating that the search is being performed.
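One caveat worth noting: queries containing spaces or special characters should be URL-encoded before being embedded in a URL. The standard library's `urllib.parse.quote_plus` handles this:

```python
from urllib.parse import quote_plus

# Spaces become "+" so the query survives inside a URL query string.
query = "swarms multi-agent framework"
print(quote_plus(query))  # swarms+multi-agent+framework
```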
**File Creation Tool**
```python
@tool
def create_file(file_path: str, content: str) -> str:
    """
    Create a file using the file editor tool.

    Args:
        file_path (str): The path to the file.
        content (str): The content to write to the file.

    Returns:
        str: The result of the file creation operation.
    """
    with open(file_path, "w") as file:
        file.write(content)
    return f"File {file_path} created successfully."
```
The `create_file` tool allows the agent to create a new file at the specified `file_path` with the provided `content`. It uses Python's built-in `open` function in write mode (`"w"`) to create the file and write the content to it. The tool returns a string indicating the successful creation of the file.

**File Editor Tool**
```python
@tool
def file_editor(file_path: str, mode: str, content: str) -> str:
    """
    Edit a file using the file editor tool.

    Args:
        file_path (str): The path to the file.
        mode (str): The mode to open the file in.
        content (str): The content to write to the file.

    Returns:
        str: The result of the file editing operation.
    """
    with open(file_path, mode) as file:
        file.write(content)
    return f"File {file_path} edited successfully."
```
The `file_editor` tool is similar to the `create_file` tool but provides more flexibility by allowing the agent to specify the mode in which the file should be opened (e.g., `"w"` for write, `"a"` for append). It writes the provided `content` to the file at the specified `file_path` and returns a string indicating the successful editing of the file.

These examples demonstrate the versatility of tools in Swarms and how they can be designed to perform a wide range of tasks, from executing terminal commands to interacting with files and performing web searches.

### **Plugging Tools into the Agent**

After defining and decorating your tool functions, the next step is to integrate them into the `Agent` instance. This process involves passing the tools as a list to the `tools` parameter when creating the `Agent` instance.
```python
from swarms import Agent, Anthropic

# Model
llm = Anthropic(temperature=0.1)

# Tools
tools = [terminal, browser, create_file, file_editor]

# Agent
agent = Agent(
    agent_name="Devin",
    system_prompt=(
        "Autonomous agent that can interact with humans and other"
        " agents. Be Helpful and Kind. Use the tools provided to"
        " assist the user. Return all code in markdown format."
    ),
    llm=llm,
    max_loops="auto",
    autosave=True,
    dashboard=False,
    streaming_on=True,
    verbose=True,
    stopping_token="<DONE>",
    interactive=True,
    tools=tools,
    code_interpreter=True,
)
```
In this example, we create a list `tools` containing the `terminal`, `browser`, `create_file`, and `file_editor` tools. This list is then passed to the `tools` parameter when creating the `Agent` instance.

Once the `Agent` instance is created with the provided tools, it can utilize these tools to perform various tasks and interact with external systems. The agent can call these tools as needed, passing the required arguments and receiving the corresponding return values.

### **Using Tools in Agent Interactions**

After integrating the tools into the `Agent` instance, you can utilize them in your agent's interactions with humans or other agents. Here's an example of how an agent might use the `terminal` tool:
```python
out = agent("Run the command 'ls' in the terminal.")
print(out)
```
In this example, the human user instructs the agent to run the `"ls"` command in the terminal. The agent processes this request and utilizes the `terminal` tool to execute the command, capturing and returning the output.

Similarly, the agent can leverage other tools, such as the `browser` tool for web searches or the `file_editor` tool for creating and modifying files, based on the user's instructions.

### **Conclusion**

Creating tools in Swarms is a powerful way to extend the capabilities of AI agents and enable them to interact with external systems and perform a wide range of tasks. By following the 3-step process of defining the tool function, decorating it with `@tool`, and adding it to the `Agent` instance, you can seamlessly integrate custom tools into your AI agent's workflow.

Throughout this blog post, we explored the importance of documentation and type handling, which are essential for maintaining code quality, facilitating collaboration, and ensuring the correct usage of your tools by other developers and AI agents.

We also covered the necessary imports and provided detailed code examples for various types of tools, such as executing terminal commands, performing web searches, and creating and editing files. These examples demonstrated the flexibility and versatility of tools in Swarms, allowing you to tailor your tools to meet your specific project requirements.

By leveraging the power of tools in Swarms, you can empower your AI agents with diverse capabilities, enabling them to tackle complex tasks, interact with external systems, and provide more comprehensive and intelligent solutions.
<div align="center">
  <p>
    <a align="center" href="" target="_blank">
      <img
        width="850"
        src="https://github.com/kyegomez/swarms/raw/master/images/swarmslogobanner.png"
      >
    </a>
  </p>
</div>
## 👋 Hello

Swarms provides you with all the building blocks you need to build reliable, production-grade, and scalable multi-agent apps!

## 💻 Install

You can install `swarms` with pip in a
[**Python>=3.10**](https://www.python.org/) environment.

!!! example "pip install (recommended)"

    === "headless"
        The headless installation of `swarms` is designed for environments where graphical user interfaces (GUI) are not needed, making it more lightweight and suitable for server-side applications.

        ```bash
        pip install swarms
        ```

!!! example "git clone (for development)"

    === "virtualenv"

        ```bash
        # clone repository and navigate to root directory
        git clone https://github.com/kyegomez/swarms.git
        cd swarms

        # setup python environment and activate it
        python3 -m venv venv
        source venv/bin/activate
        pip install --upgrade pip

        # headless install
        pip install -e "."

        # desktop install
        pip install -e ".[desktop]"
        ```

    === "poetry"

        ```bash
        # clone repository and navigate to root directory
        git clone https://github.com/kyegomez/swarms.git
        cd swarms

        # setup python environment and activate it
        poetry env use python3.10
        poetry shell

        # headless install
        poetry install

        # desktop install
        poetry install --extras "desktop"
        ```

!!! example "NPM install |WIP|"

    === "headless"
        Get started with the NPM implementation of Swarms with this command:

        ```bash
        npm install swarms-js
        ```
## Documentation

# Swarms

[Learn more about swarms →](swarms/)

Orchestrate enterprise-grade agents for multi-agent collaboration and orchestration to automate real-world problems.

## Examples

Check out Swarms examples for building agents, data retrieval, and more.

[Checkout Swarms examples →](examples/)
<div style="display:flex; margin:0 auto; justify-content: center;">
  <div style="width:25%">
    <h2>Core Concepts</h2>
    <ul>
      <li><a href="swarms/structs/agent.md">Agents</a></li>
      <li><a href="swarms/memory/diy_memory.md">Memory</a></li>
      <li><a href="swarms/tools/main.md">Tools</a></li>
      <li><a href="swarms/structs/task.md">Tasks</a></li>
      <li><a href="swarms/structs/agent_rearrange.md">Multi-Agent Orchestration</a></li>
    </ul>
  </div>
  <div style="width:25%">
    <h2>How-To Guides</h2>
    <ul>
      <li><a href="swarms/install/install.md">Installing Swarms</a></li>
      <li><a href="swarms/install/docker_setup.md">Docker Setup</a></li>
      <li><a href="./how-to/Create-Custom-Tools">Create Custom Tools</a></li>
      <li><a href="swarms/structs/agent_rearrange.md">Multi-Agent Flows</a></li>
      <li><a href="swarms/structs/sequential_workflow.md">Sequential Workflows</a></li>
      <li><a href="./how-to/LLM-Connections">Connecting to LLMs</a></li>
      <li><a href="./how-to/Customizing-Agents">Customizing Agents</a></li>
      <li><a href="./how-to/Human-Input-on-Execution">Human Input on Execution</a></li>
      <li><a href="./how-to/AgentOps-Observability">Agent Monitoring with AgentOps</a></li>
    </ul>
  </div>
  <div style="width:25%">
    <h2>Examples</h2>
    <ul>
      <li><a target='_blank' href="applications/business-analyst-agent.md">Swarm of Business Analysts for Business Reports</a></li>
      <li><a target='_blank' href="https://medium.com/@kyeg/building-compliance-agents-with-chroma-db-llama3-sop-prompting-0ed3e73559d2">Compliance Swarm for Customer Privacy</a></li>
      <li><a target='_blank' href="https://medium.com/@kyeg/announcing-neosapiens-self-replicating-swarms-0a47410aafa7">Self-Replicating Hierarchical Swarms</a></li>
    </ul>
  </div>
  <div style="width:25%">
    <h2>Community</h2>
    <ul>
      <li><a target='_blank' href="https://discord.gg/3Zck7nX6">Join the Swarms Community!</a></li>
      <li><a target='_blank' href="https://github.com/kyegomez/swarm-ecosystem">Swarms Ecosystem</a></li>
      <li><a target='_blank' href="https://cal.com/swarms/swarms-onboarding-session">Support Team</a></li>
      <li><a target='_blank' href="https://cal.com/swarms/swarms-onboarding-session">Book a 1 on 1 Call With Founder: Kye</a></li>
    </ul>
  </div>
</div>
# Documentation for GPT4o Module

## Overview and Introduction

The `GPT4o` module is a multi-modal conversational model based on OpenAI's GPT-4 architecture. It extends the functionality of the `BaseMultiModalModel` class, enabling it to handle both text and image inputs for generating diverse and contextually rich responses. This module leverages the power of the GPT-4 model to enhance interactions by integrating visual information with textual prompts, making it highly relevant for applications requiring multi-modal understanding and response generation.

### Key Concepts

- **Multi-Modal Model**: A model that can process and generate responses based on multiple types of inputs, such as text and images.
- **System Prompt**: A predefined prompt to guide the conversation flow.
- **Temperature**: A parameter that controls the randomness of the response generation.
- **Max Tokens**: The maximum number of tokens (words or word pieces) in the generated response.
## Class Definition

### `GPT4o` Class

### Parameters

| Parameter        | Type    | Description                                                                    |
|------------------|---------|--------------------------------------------------------------------------------|
| `system_prompt`  | `str`   | The system prompt to be used in the conversation.                              |
| `temperature`    | `float` | The temperature parameter for generating diverse responses. Default is `0.1`.  |
| `max_tokens`     | `int`   | The maximum number of tokens in the generated response. Default is `300`.      |
| `openai_api_key` | `str`   | The API key for accessing the OpenAI GPT-4 API.                                |
| `*args`          |         | Additional positional arguments.                                               |
| `**kwargs`       |         | Additional keyword arguments.                                                  |
## Functionality and Usage

### `encode_image` Function

The `encode_image` function is used to encode an image file into a base64 string format, which can then be included in the request to the GPT-4 API.

#### Parameters

| Parameter    | Type  | Description                                     |
|--------------|-------|-------------------------------------------------|
| `image_path` | `str` | The local path to the image file to be encoded. |

#### Returns

| Return Type | Description                             |
|-------------|-----------------------------------------|
| `str`       | The base64 encoded string of the image. |
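As a rough sketch of what such a helper typically looks like (an assumption based on the documented signature, not the actual Swarms source), base64-encoding an image file uses only the standard library:

```python
import base64

def encode_image(image_path: str) -> str:
    """Read a local image file and return its contents as a base64 string."""
    with open(image_path, "rb") as image_file:
        # Read raw bytes, base64-encode them, and decode to a UTF-8 string
        return base64.b64encode(image_file.read()).decode("utf-8")
```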
### `GPT4o.__init__` Method

The constructor for the `GPT4o` class initializes the model with the specified parameters and sets up the OpenAI client.

### `GPT4o.run` Method

The `run` method executes the GPT-4o model to generate a response based on the provided task and optional image.

#### Parameters

| Parameter   | Type  | Description                                   |
|-------------|-------|-----------------------------------------------|
| `task`      | `str` | The task or user prompt for the conversation. |
| `local_img` | `str` | The local path to the image file.             |
| `img`       | `str` | The URL of the image.                         |
| `*args`     |       | Additional positional arguments.              |
| `**kwargs`  |       | Additional keyword arguments.                 |

#### Returns

| Return Type | Description                                   |
|-------------|-----------------------------------------------|
| `str`       | The generated response from the GPT-4o model. |

## Usage Examples

### Example 1: Basic Text Prompt
```python
from swarms import GPT4o

# Initialize the model
model = GPT4o(
    system_prompt="You are a helpful assistant.",
    temperature=0.7,
    max_tokens=150,
    openai_api_key="your_openai_api_key",
)

# Define the task
task = "What is the capital of France?"

# Generate response
response = model.run(task)
print(response)
```
### Example 2: Text Prompt with Local Image
```python
from swarms import GPT4o

# Initialize the model
model = GPT4o(
    system_prompt="Describe the image content.",
    temperature=0.5,
    max_tokens=200,
    openai_api_key="your_openai_api_key",
)

# Define the task and image path
task = "Describe the content of this image."
local_img = "path/to/your/image.jpg"

# Generate response
response = model.run(task, local_img=local_img)
print(response)
```
### Example 3: Text Prompt with Image URL
```python
from swarms import GPT4o

# Initialize the model
model = GPT4o(
    system_prompt="You are a visual assistant.",
    temperature=0.6,
    max_tokens=250,
    openai_api_key="your_openai_api_key",
)

# Define the task and image URL
task = "What can you tell about the scenery in this image?"
img_url = "http://example.com/image.jpg"

# Generate response
response = model.run(task, img=img_url)
print(response)
```
## Additional Information and Tips

- **API Key Management**: Ensure that your OpenAI API key is securely stored and managed. Do not hard-code it in your scripts. Use environment variables or secure storage solutions.
- **Image Encoding**: The `encode_image` function is crucial for converting images to a base64 format suitable for API requests. Ensure that the images are accessible and properly formatted.
- **Temperature Parameter**: Adjust the `temperature` parameter to control the creativity of the model's responses. Lower values make the output more deterministic, while higher values increase randomness.
- **Token Limit**: Be mindful of the `max_tokens` parameter to avoid exceeding the API's token limits. This parameter controls the length of the generated responses.

## References and Resources

- [OpenAI API Documentation](https://beta.openai.com/docs/)
- [Python Base64 Encoding](https://docs.python.org/3/library/base64.html)
- [dotenv Documentation](https://saurabh-kumar.com/python-dotenv/)
- [BaseMultiModalModel Documentation](https://swarms.apac.ai)
# `AbstractWorker` Documentation

## Table of Contents

1. [Introduction](#introduction)
2. [Abstract Worker](#abstract-worker)
    1. [Class Definition](#class-definition)
    2. [Attributes](#attributes)
    3. [Methods](#methods)
3. [Tutorial: Creating Custom Workers](#tutorial-creating-custom-workers)
4. [Conclusion](#conclusion)

---

## 1. Introduction <a name="introduction"></a>

Welcome to the documentation for the Swarms library, a powerful tool for building and simulating swarm architectures. This library provides a foundation for creating and managing autonomous workers that can communicate, collaborate, and perform various tasks in a coordinated manner.

In this documentation, we will cover the `AbstractWorker` class, which serves as the fundamental building block for creating custom workers in your swarm simulations. We will explain the class's architecture, attributes, and methods in detail, providing practical examples to help you understand how to use it effectively.

Whether you want to simulate a team of autonomous robots, a group of AI agents, or any other swarm-based system, the Swarms library is here to simplify the process and empower you to build complex simulations.

---

## 2. Abstract Worker <a name="abstract-worker"></a>

### 2.1 Class Definition <a name="class-definition"></a>

The `AbstractWorker` class is an abstract base class that serves as the foundation for creating worker agents in your swarm simulations. It defines a set of methods that should be implemented by subclasses to customize the behavior of individual workers.

Here is the class definition:
```python
from typing import Dict, List, Optional, Union


class AbstractWorker:
    def __init__(self, name: str):
        """
        Args:
            name (str): Name of the worker.
        """

    @property
    def name(self):
        """Get the name of the worker."""

    def run(self, task: str):
        """Run the worker agent once."""

    def send(
        self, message: Union[Dict, str], recipient, request_reply: Optional[bool] = None
    ):
        """Send a message to another worker."""

    async def a_send(
        self, message: Union[Dict, str], recipient, request_reply: Optional[bool] = None
    ):
        """Send a message to another worker asynchronously."""

    def receive(
        self, message: Union[Dict, str], sender, request_reply: Optional[bool] = None
    ):
        """Receive a message from another worker."""

    async def a_receive(
        self, message: Union[Dict, str], sender, request_reply: Optional[bool] = None
    ):
        """Receive a message from another worker asynchronously."""

    def reset(self):
        """Reset the worker."""

    def generate_reply(
        self, messages: Optional[List[Dict]] = None, sender=None, **kwargs
    ) -> Union[str, Dict, None]:
        """Generate a reply based on received messages."""

    async def a_generate_reply(
        self, messages: Optional[List[Dict]] = None, sender=None, **kwargs
    ) -> Union[str, Dict, None]:
        """Generate a reply based on received messages asynchronously."""
```
### 2.2 Attributes <a name="attributes"></a>

- `name (str)`: The name of the worker, which is set during initialization.

### 2.3 Methods <a name="methods"></a>

Now, let's delve into the methods provided by the `AbstractWorker` class and understand their purposes and usage.

#### `__init__(self, name: str)`

The constructor method initializes a worker with a given name.

**Parameters:**

- `name (str)`: The name of the worker.

**Usage Example:**

```python
worker = AbstractWorker("Worker1")
```
#### `name` (Property)

The `name` property allows you to access the name of the worker.

**Usage Example:**

```python
worker_name = worker.name
```
#### `run(self, task: str)`

The `run()` method is a placeholder for running the worker. You can customize this method in your subclass to define the specific actions the worker should perform.

**Parameters:**

- `task (str)`: A task description or identifier.

**Usage Example (Customized Subclass):**

```python
class MyWorker(AbstractWorker):
    def run(self, task: str):
        print(f"{self.name} is performing task: {task}")


worker = MyWorker("Worker1")
worker.run("Collect data")
```
#### `send(self, message: Union[Dict, str], recipient, request_reply: Optional[bool] = None)`

The `send()` method allows the worker to send a message to another worker or recipient. The message can be either a dictionary or a string.

**Parameters:**

- `message (Union[Dict, str])`: The message to be sent.
- `recipient`: The recipient worker or entity.
- `request_reply (Optional[bool])`: If `True`, the sender requests a reply from the recipient. If `False`, no reply is requested. Default is `None`.

**Usage Example:**

```python
worker1 = AbstractWorker("Worker1")
worker2 = AbstractWorker("Worker2")

message = "Hello, Worker2!"
worker1.send(message, worker2)
```
#### `a_send(self, message: Union[Dict, str], recipient, request_reply: Optional[bool] = None)`

The `a_send()` method is an asynchronous version of the `send()` method, allowing the worker to send messages asynchronously.

**Parameters:** (Same as `send()`)

**Usage Example:**

```python
import asyncio


async def main():
    worker1 = AbstractWorker("Worker1")
    worker2 = AbstractWorker("Worker2")

    message = "Hello, Worker2!"
    await worker1.a_send(message, worker2)


asyncio.run(main())
```
#### `receive(self, message: Union[Dict, str], sender, request_reply: Optional[bool] = None)`

The `receive()` method allows the worker to receive messages from other workers or senders. You can customize this method in your subclass to define how the worker handles incoming messages.

**Parameters:**

- `message (Union[Dict, str])`: The received message.
- `sender`: The sender worker or entity.
- `request_reply (Optional[bool])`: Indicates whether a reply is requested. Default is `None`.

**Usage Example (Customized Subclass):**

```python
class MyWorker(AbstractWorker):
    def receive(self, message: Union[Dict, str], sender, request_reply: Optional[bool] = None):
        if isinstance(message, str):
            print(f"{self.name} received a text message from {sender.name}: {message}")
        elif isinstance(message, dict):
            print(f"{self.name} received a dictionary message from {sender.name}: {message}")


worker1 = MyWorker("Worker1")
worker2 = MyWorker("Worker2")

message1 = "Hello, Worker2!"
message2 = {"data": 42}

worker1.receive(message1, worker2)
worker1.receive(message2, worker2)
```
#### `a_receive(self, message: Union[Dict, str], sender, request_reply: Optional[bool] = None)`

The `a_receive()` method is an asynchronous version of the `receive()` method, allowing the worker to receive messages asynchronously.

**Parameters:** (Same as `receive()`)

**Usage Example:**

```python
import asyncio


async def main():
    worker1 = AbstractWorker("Worker1")
    worker2 = AbstractWorker("Worker2")

    message1 = "Hello, Worker2!"
    message2 = {"data": 42}

    await worker1.a_receive(message1, worker2)
    await worker1.a_receive(message2, worker2)


asyncio.run(main())
```
#### `reset(self)`

The `reset()` method is a placeholder for resetting the worker. You can customize this method in your subclass to define how the worker should reset its state.

**Usage Example (Customized Subclass):**

```python
class MyWorker(AbstractWorker):
    def reset(self):
        print(f"{self.name} has been reset.")


worker = MyWorker("Worker1")
worker.reset()
```
|
||||
|
||||
#### `generate_reply(self, messages: Optional[List[Dict]] = None, sender=None, **kwargs) -> Union[str, Dict, None]`

The `generate_reply()` method is a placeholder for generating a reply based on received messages. You can customize this method in your subclass to define the logic for generating replies.

**Parameters:**

- `messages (Optional[List[Dict]])`: A list of received messages.
- `sender`: The sender of the reply.
- `kwargs`: Additional keyword arguments.

**Returns:**

- `Union[str, Dict, None]`: The generated reply. If `None`, no reply is generated.

**Usage Example (Customized Subclass):**

```python
class MyWorker(AbstractWorker):
    def generate_reply(
        self, messages: Optional[List[Dict]] = None, sender=None, **kwargs
    ) -> Union[str, Dict, None]:
        if messages:
            # Generate a reply based on the received messages
            return f"Received {len(messages)} messages from {sender.name}."
        else:
            return None


worker1 = MyWorker("Worker1")
worker2 = MyWorker("Worker2")

message = "Hello, Worker2!"
reply = worker2.generate_reply([message], worker1)

if reply:
    print(f"{worker2.name} generated a reply: {reply}")
```

#### `a_generate_reply(self, messages: Optional[List[Dict]] = None, sender=None, **kwargs) -> Union[str, Dict, None]`

The `a_generate_reply()` method is the asynchronous counterpart of `generate_reply()`, allowing the worker to generate replies asynchronously.

**Parameters:** (Same as `generate_reply()`)

**Returns:**

- `Union[str, Dict, None]`: The generated reply. If `None`, no reply is generated.

**Usage Example:**

```python
import asyncio


async def main():
    worker1 = AbstractWorker("Worker1")
    worker2 = AbstractWorker("Worker2")

    message = "Hello, Worker2!"
    reply = await worker2.a_generate_reply([message], worker1)

    if reply:
        print(f"{worker2.name} generated a reply: {reply}")


asyncio.run(main())
```

---

## 3. Tutorial: Creating Custom Workers <a name="tutorial-creating-custom-workers"></a>

In this tutorial, we will walk through the process of creating custom workers by subclassing the `AbstractWorker` class. You can tailor these workers to perform specific tasks and communicate with other workers in your swarm simulations.

### Step 1: Create a Custom Worker Class

Start by creating a custom worker class that inherits from `AbstractWorker`. Define the `run()` and `receive()` methods to specify the behavior of your worker.

```python
class CustomWorker(AbstractWorker):
    def run(self, task: str):
        print(f"{self.name} is performing task: {task}")

    def receive(
        self, message: Union[Dict, str], sender, request_reply: Optional[bool] = None
    ):
        if isinstance(message, str):
            print(f"{self.name} received a text message from {sender.name}: {message}")
        elif isinstance(message, dict):
            print(
                f"{self.name} received a dictionary message from {sender.name}: {message}"
            )
```

### Step 2: Create Custom Worker Instances

Instantiate your custom workers and give them unique names.

```python
worker1 = CustomWorker("Worker1")
worker2 = CustomWorker("Worker2")
```

### Step 3: Run Custom Workers

Use the `run()` method to make your custom workers perform tasks.

```python
worker1.run("Collect data")
worker2.run("Process data")
```

### Step 4: Communicate Between Workers

Use the `send()` method to send messages between workers. You can customize the `receive()` method to define how your workers handle incoming messages.

```python
worker1.send("Hello, Worker2!", worker2)
worker2.send({"data": 42}, worker1)

# The output will show the messages received by the workers
```

### Step 5: Generate Replies

Customize the `generate_reply()` method to allow your workers to generate replies based on received messages.

```python
class CustomWorker(AbstractWorker):
    def generate_reply(
        self, messages: Optional[List[Dict]] = None, sender=None, **kwargs
    ) -> Union[str, Dict, None]:
        if messages:
            # Generate a reply based on the received messages
            return f"Received {len(messages)} messages from {sender.name}."
        else:
            return None
```

Now your custom workers can generate replies to incoming messages.

```python
reply = worker2.generate_reply(["Hello, Worker2!"], worker1)

if reply:
    print(f"{worker2.name} generated a reply: {reply}")
```

---

## 4. Conclusion <a name="conclusion"></a>

Congratulations! You've learned how to use the Swarms library to create and customize worker agents for swarm simulations. You can now build complex swarm architectures, simulate autonomous systems, and experiment with various communication and task allocation strategies.

Feel free to explore the Swarms library further and adapt it to your specific use cases. If you have any questions or need assistance, refer to the extensive documentation and resources available.

Happy swarming!

# Module Name: Worker

The `Worker` class encapsulates a semi-autonomous agent that uses a large language model to execute tasks. The module provides a unified interface for AI-driven task execution while combining a series of tools and utilities. It sets up memory storage and retrieval mechanisms for contextual recall and offers an option for human involvement, making it a versatile and adaptive agent for diverse applications.

## **Class Definition**:

```python
class Worker:
```

### **Parameters**:

- `model_name` (str, default: "gpt-4"): Name of the language model.
- `openai_api_key` (str, Optional): API key for accessing OpenAI's models.
- `ai_name` (str, default: "Autobot Swarm Worker"): Name of the AI agent.
- `ai_role` (str, default: "Worker in a swarm"): Role description of the AI agent.
- `external_tools` (list, Optional): A list of external tool objects to be used.
- `human_in_the_loop` (bool, default: False): If set to `True`, human intervention may be required.
- `temperature` (float, default: 0.5): Sampling temperature for the language model's output. Higher values make the output more random; lower values make it more deterministic.

### **Methods**:

#### `__init__`:

Initializes the Worker class.

#### `setup_tools`:

Sets up the tools available to the worker. Default tools include reading and writing files, processing CSV data, querying websites, and taking human input. Additional tools can be appended through the `external_tools` parameter.

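As a rough sketch (not the actual swarms implementation — the default tool names below are illustrative assumptions), `setup_tools` can be thought of as merging a default toolset with any user-supplied external tools:

```python
def setup_tools(external_tools=None):
    """Merge the worker's default tools with user-supplied external tools."""
    # Illustrative defaults; the real Worker wires up actual tool objects.
    default_tools = [
        "read_file",
        "write_file",
        "process_csv",
        "query_website",
        "human_input",
    ]
    tools = list(default_tools)
    if external_tools:
        tools.extend(external_tools)  # user-supplied tools are appended last
    return tools


print(setup_tools(["my_custom_tool"]))
```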
#### `setup_memory`:

Initializes memory systems using embeddings and a vector store for the worker.

#### `setup_agent`:

Sets up the primary agent using the initialized tools, memory, and language model.

#### `run`:

Executes a given task using the agent.

#### `__call__`:

Makes the Worker class callable. When an instance of the class is called, it executes the provided task using the agent.

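A minimal sketch of the callable pattern, assuming `__call__` simply forwards to `run()` (the class below is a toy stand-in, not the real Worker):

```python
class CallableWorker:
    """Toy stand-in showing how __call__ can delegate to run()."""

    def run(self, task: str) -> str:
        return f"executed: {task}"

    def __call__(self, task: str) -> str:
        # Calling the instance behaves exactly like calling run()
        return self.run(task)


worker = CallableWorker()
print(worker("analyze data"))  # same result as worker.run("analyze data")
```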
## **Usage Examples**:

### **Example 1**: Basic usage with default parameters:

```python
from swarms import Worker
from swarms.models import OpenAIChat

llm = OpenAIChat(
    # Enter your API key
    openai_api_key="",
    temperature=0.5,
)

node = Worker(
    llm=llm,
    ai_name="Optimus Prime",
    openai_api_key="",
    ai_role="Worker in a swarm",
    external_tools=None,
    human_in_the_loop=False,
    temperature=0.5,
)

task = "What were the winning Boston Marathon times for the past 5 years (ending in 2022)? Generate a table of the year, name, country of origin, and times."
response = node.run(task)
print(response)
```

### **Example 2**: Usage with custom tools:

```python
import os

import interpreter

from swarms.agents.hf_agents import HFAgent
from swarms.agents.omni_modal_agent import OmniModalAgent
from swarms.models import OpenAIChat
from swarms.tools.autogpt import tool
from swarms.workers import Worker

# Initialize the API key
api_key = ""


# Initialize the language model.
# This model can be swapped out for Anthropic, Hugging Face models like Mistral, etc.
llm = OpenAIChat(
    openai_api_key=api_key,
    temperature=0.5,
)


# Wrap a function with the tool decorator to make it a tool,
# then add a docstring for tool documentation
@tool
def hf_agent(task: str = None):
    """
    A tool that uses an OpenAI model to respond to a task by searching for a model on Hugging Face.
    It first downloads the model, then uses it.

    Rules: Don't call this tool for simple tasks like generating a summary; only call it for multi-modal tasks like generating images, videos, speech, etc.
    """
    agent = HFAgent(model="text-davinci-003", api_key=api_key)
    response = agent.run(task, text="¡Este es un API muy agradable!")
    return response


# Wrap a function with the tool decorator to make it a tool
@tool
def omni_agent(task: str = None):
    """
    A tool that uses an OpenAI model to call Hugging Face models and guide them to perform a task.

    Rules: Don't call this tool for simple tasks like generating a summary; only call it for multi-modal tasks like generating images, videos, or speech.
    Tasks the omni agent is good for:
    --------------
    document-question-answering
    image-captioning
    image-question-answering
    image-segmentation
    speech-to-text
    summarization
    text-classification
    text-question-answering
    translation
    huggingface-tools/text-to-image
    huggingface-tools/text-to-video
    text-to-speech
    huggingface-tools/text-download
    huggingface-tools/image-transformation
    """
    agent = OmniModalAgent(llm)
    response = agent.run(task)
    return response


# Code interpreter
@tool
def compile(task: str):
    """
    Open Interpreter lets LLMs run code (Python, JavaScript, Shell, and more) locally.
    You can chat with Open Interpreter through a ChatGPT-like interface in your terminal
    by running $ interpreter after installing.

    This provides a natural-language interface to your computer's general-purpose capabilities:

    - Create and edit photos, videos, PDFs, etc.
    - Control a Chrome browser to perform research
    - Plot, clean, and analyze large datasets
    - ...etc.

    ⚠️ Note: You'll be asked to approve code before it's run.

    Rules: Only use when asked to generate code or an application of some kind.
    """
    task = interpreter.chat(task, return_messages=True)
    interpreter.chat()
    interpreter.reset(task)

    # Environment variable values must be strings
    os.environ["INTERPRETER_CLI_AUTO_RUN"] = "True"
    os.environ["INTERPRETER_CLI_FAST_MODE"] = "True"
    os.environ["INTERPRETER_CLI_DEBUG"] = "True"


# Append the tools to a list
tools = [hf_agent, omni_agent, compile]


# Initialize a single Worker node with the previously defined tools in addition to its
# predefined tools
node = Worker(
    llm=llm,
    ai_name="Optimus Prime",
    openai_api_key=api_key,
    ai_role="Worker in a swarm",
    external_tools=tools,
    human_in_the_loop=False,
    temperature=0.5,
)

# Specify the task
task = "What were the winning Boston Marathon times for the past 5 years (ending in 2022)? Generate a table of the year, name, country of origin, and times."

# Run the node on the task
response = node.run(task)

# Print the response
print(response)
```

### **Example 3**: Usage with human in the loop:

```python
from swarms import Worker
from swarms.models import OpenAIChat

llm = OpenAIChat(
    # Enter your API key
    openai_api_key="",
    temperature=0.5,
)

node = Worker(
    llm=llm,
    ai_name="Optimus Prime",
    openai_api_key="",
    ai_role="Worker in a swarm",
    external_tools=None,
    human_in_the_loop=True,
    temperature=0.5,
)

task = "What were the winning Boston Marathon times for the past 5 years (ending in 2022)? Generate a table of the year, name, country of origin, and times."
response = node.run(task)
print(response)
```

## **Mathematical Description**:

Conceptually, the `Worker` class can be seen as a function:

\[ W(t, M, K, T, H, \theta) \rightarrow R \]

Where:

- \( W \) = Worker function
- \( t \) = task to be performed
- \( M \) = Model (e.g., "gpt-4")
- \( K \) = OpenAI API key
- \( T \) = Set of tools available
- \( H \) = Human involvement flag (True/False)
- \( \theta \) = Temperature parameter
- \( R \) = Result of the task

This mathematical abstraction provides a simple view of the `Worker` class's capability to transform a task input into a desired output using a combination of AI and toolsets.

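To make the abstraction concrete, here is a toy functional sketch of \( W \); the function and the tiny string-based "tools" below are illustrative assumptions, not part of the swarms API:

```python
from typing import Callable, List


# Toy model of W(t, M, K, T, H, theta) -> R; everything here is illustrative.
def worker_fn(
    task: str,
    model: str,
    tools: List[Callable[[str], str]],
    human_in_the_loop: bool = False,
    temperature: float = 0.5,
) -> str:
    # Each tool transforms the task; the "model" combines the partial results.
    partial_results = [tool(task) for tool in tools]
    return f"[{model}, T={temperature}] " + " | ".join(partial_results)


result = worker_fn(
    "count these words",
    "gpt-4",
    [str.upper, lambda t: f"{len(t.split())} words"],
)
print(result)  # [gpt-4, T=0.5] COUNT THESE WORDS | 3 words
```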

## **Notes**:

The Worker class acts as a bridge between raw tasks and the tools and AI required to accomplish them. Its setup ensures flexibility and versatility. The decorators used in the methods (e.g., `log_decorator`, `error_decorator`) emphasize the importance of logging, error handling, and performance measurement, all of which are essential for real-world applications.

"""
|
||||
$ pip install swarms
|
||||
|
||||
- Add docs into the database
|
||||
- Use better llm
|
||||
- use better prompts [System and SOPs]
|
||||
- Use a open source model like Command R
|
||||
- Better SOPS ++ System Prompts
|
||||
-
|
||||
"""
|
||||
|
||||
from swarms import Agent, OpenAIChat
|
||||
from playground.memory.chromadb_example import ChromaDB
|
||||
from swarms.tools.prebuilt.bing_api import fetch_web_articles_bing_api
|
||||
import os
|
||||
from dotenv import load_dotenv
|
||||
|
||||
load_dotenv()
|
||||
|
||||
# Let's create a text file with the provided prompt.
|
||||
|
||||
research_system_prompt = """
|
||||
Research Agent LLM Prompt: Summarizing Sources and Content
|
||||
|
||||
Objective:
|
||||
Your task is to summarize the provided sources and the content within those sources. The goal is to create concise, accurate, and informative summaries that capture the key points of the original content.
|
||||
|
||||
Instructions:
|
||||
|
||||
1. Identify Key Information:
|
||||
- Extract the most important information from each source. Focus on key facts, main ideas, significant arguments, and critical data.
|
||||
|
||||
2. Summarize Clearly and Concisely:
|
||||
- Use clear and straightforward language. Avoid unnecessary details and keep the summary concise.
|
||||
- Ensure that the summary is coherent and easy to understand.
|
||||
|
||||
3. Preserve Original Meaning:
|
||||
- While summarizing, maintain the original meaning and intent of the content. Do not omit essential information that changes the context or understanding.
|
||||
|
||||
4. Include Relevant Details:
|
||||
- Mention the source title, author, publication date, and any other relevant details that provide context.
|
||||
|
||||
5. Structure:
|
||||
- Begin with a brief introduction to the source.
|
||||
- Follow with a summary of the main content.
|
||||
- Conclude with any significant conclusions or implications presented in the source.
|
||||
|
||||
"""
|
||||
|
||||
|
||||
def movers_agent_system_prompt():
|
||||
return """
|
||||
The Movers Agent is responsible for providing users with fixed-cost estimates for moving services
|
||||
based on the distance between their current location and destination, and the number of rooms in their home.
|
||||
Additionally, the agent allows users to attempt negotiation for better deals using the Retell API.
|
||||
|
||||
Responsibilities:
|
||||
- Provide fixed-cost estimates based on distance and room size.
|
||||
- Allow users to attempt negotiation for better deals using the Retell API.
|
||||
|
||||
Details:
|
||||
- Fixed Costs: Predefined costs for each of the 10 moving companies, with variations based on distance and number of rooms.
|
||||
- Distance Calculation: Use a fixed formula to estimate distances and costs.
|
||||
- Room Size: Standard sizes based on the number of rooms will be used to determine the base cost.
|
||||
- Negotiation: Users can click a "negotiate" button to initiate negotiation via Retell API.
|
||||
|
||||
Tools and Resources Used:
|
||||
- Google Maps API: For calculating distances between the current location and destination.
|
||||
- Retell API: For simulating negotiation conversations.
|
||||
- Streamlit: For displaying estimates and handling user interactions.
|
||||
|
||||
Example Workflow:
|
||||
1. User inputs their current location, destination, and number of rooms.
|
||||
2. The agent calculates the distance and estimates the cost using predefined rates.
|
||||
3. Displays the estimates from 10 different moving companies.
|
||||
4. Users can click "negotiate" to simulate negotiation via Retell API, adjusting the price within a predefined range.
|
||||
"""
|
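

# The "fixed formula" for cost estimates described above can be sketched as a
# simple function. Hypothetical illustration only: the base cost, per-mile rate,
# and function name are assumptions, not part of the Movers Agent or swarms API.
def estimate_move_cost(
    distance_miles: float,
    rooms: int,
    base_per_room: float = 150.0,
    rate_per_mile: float = 2.5,
) -> float:
    """Fixed-cost estimate: a per-room base cost plus a per-mile rate."""
    return rooms * base_per_room + distance_miles * rate_per_mile


# Example: a 3-room home moved 100 miles -> 3 * 150 + 100 * 2.5 = 700.0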


# Example usage

# Initialize long-term memory
memory = ChromaDB(
    output_dir="research_base",
    n_results=2,
)

llm = OpenAIChat(
    temperature=0.2,
    max_tokens=3500,
    openai_api_key=os.getenv("OPENAI_API_KEY"),
)


# Initialize the research agent
agent = Agent(
    agent_name="Research Agent",
    system_prompt=research_system_prompt,
    llm=llm,
    max_loops="auto",
    autosave=True,
    dashboard=False,
    interactive=True,
    # long_term_memory=memory,
    tools=[fetch_web_articles_bing_api],
)


# # Initialize the movers agent instead
# agent = Agent(
#     agent_name="Movers Agent",
#     system_prompt=movers_agent_system_prompt(),
#     llm=llm,
#     max_loops=1,
#     autosave=True,
#     dashboard=False,
#     interactive=True,
#     # long_term_memory=memory,
#     # tools=[fetch_web_articles_bing_api],
# )


def perplexity_agent(task: str = None, *args, **kwargs):
    """
    Takes a task as input and uses the Bing API to fetch web articles related to it.
    The task and the fetched articles are then combined into a single prompt and run
    through the agent, which generates and returns a response.

    Args:
        task (str): The task for which web articles need to be fetched.

    Returns:
        str: The response generated by the agent.
    """
    out = fetch_web_articles_bing_api(task)

    # Combine the task and the fetched articles into a single prompt
    sources = [task, out]
    sources_prompts = "".join(sources)

    # Run the agent on the combined prompt
    agent_response = agent.run(sources_prompts)
    return agent_response


out = perplexity_agent(
    "What are the Indian food restaurant names on Stanford University Avenue? What are their cost ratios?"
)
print(out)
{
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 0,
|
||||
"metadata": {
|
||||
"colab": {
|
||||
"provenance": [],
|
||||
"machine_shape": "hm",
|
||||
"gpuType": "L4"
|
||||
},
|
||||
"kernelspec": {
|
||||
"name": "python3",
|
||||
"display_name": "Python 3"
|
||||
},
|
||||
"language_info": {
|
||||
"name": "python"
|
||||
},
|
||||
"accelerator": "GPU"
|
||||
},
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"source": [
|
||||
"# Entry for SwarmsHackathon 2024\n",
|
||||
"\n"
|
||||
],
|
||||
"metadata": {
|
||||
"id": "Qf8eZIT71wba"
|
||||
}
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"source": [
|
||||
"## Install Swarms"
|
||||
],
|
||||
"metadata": {
|
||||
"id": "-rBXNMWV4EWN"
|
||||
}
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"metadata": {
|
||||
"id": "w4FoSEyP1q_x",
|
||||
"colab": {
|
||||
"base_uri": "https://localhost:8080/",
|
||||
"height": 1000
|
||||
},
|
||||
"outputId": "ea6b15e7-c53c-47aa-86c6-b24d4aff041b"
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"output_type": "stream",
|
||||
"name": "stdout",
|
||||
"text": [
|
||||
"Collecting swarms\n",
|
||||
" Downloading swarms-5.1.4-py3-none-any.whl (338 kB)\n",
|
||||
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m339.0/339.0 kB\u001b[0m \u001b[31m6.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
|
||||
"\u001b[?25hCollecting Pillow==10.3.0 (from swarms)\n",
|
||||
" Downloading pillow-10.3.0-cp310-cp310-manylinux_2_28_x86_64.whl (4.5 MB)\n",
|
||||
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m4.5/4.5 MB\u001b[0m \u001b[31m62.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
|
||||
"\u001b[?25hRequirement already satisfied: PyYAML in /usr/local/lib/python3.10/dist-packages (from swarms) (6.0.1)\n",
|
||||
"Collecting asyncio<4.0,>=3.4.3 (from swarms)\n",
|
||||
" Downloading asyncio-3.4.3-py3-none-any.whl (101 kB)\n",
|
||||
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m101.8/101.8 kB\u001b[0m \u001b[31m12.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
|
||||
"\u001b[?25hCollecting backoff==2.2.1 (from swarms)\n",
|
||||
" Downloading backoff-2.2.1-py3-none-any.whl (15 kB)\n",
|
||||
"Requirement already satisfied: docstring_parser==0.16 in /usr/local/lib/python3.10/dist-packages (from swarms) (0.16)\n",
|
||||
"Collecting langchain-community==0.0.29 (from swarms)\n",
|
||||
" Downloading langchain_community-0.0.29-py3-none-any.whl (1.8 MB)\n",
|
||||
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m1.8/1.8 MB\u001b[0m \u001b[31m78.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
|
||||
"\u001b[?25hCollecting langchain-experimental==0.0.55 (from swarms)\n",
|
||||
" Downloading langchain_experimental-0.0.55-py3-none-any.whl (177 kB)\n",
|
||||
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m177.6/177.6 kB\u001b[0m \u001b[31m21.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
|
||||
"\u001b[?25hCollecting loguru==0.7.2 (from swarms)\n",
|
||||
" Downloading loguru-0.7.2-py3-none-any.whl (62 kB)\n",
|
||||
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m62.5/62.5 kB\u001b[0m \u001b[31m8.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
|
||||
"\u001b[?25hRequirement already satisfied: opencv-python-headless in /usr/local/lib/python3.10/dist-packages (from swarms) (4.9.0.80)\n",
|
||||
"Requirement already satisfied: psutil in /usr/local/lib/python3.10/dist-packages (from swarms) (5.9.5)\n",
|
||||
"Requirement already satisfied: pydantic==2.7.1 in /usr/local/lib/python3.10/dist-packages (from swarms) (2.7.1)\n",
|
||||
"Collecting pypdf==4.1.0 (from swarms)\n",
|
||||
" Downloading pypdf-4.1.0-py3-none-any.whl (286 kB)\n",
|
||||
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m286.1/286.1 kB\u001b[0m \u001b[31m31.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
|
||||
"\u001b[?25hCollecting python-dotenv (from swarms)\n",
|
||||
" Downloading python_dotenv-1.0.1-py3-none-any.whl (19 kB)\n",
|
||||
"Collecting ratelimit==2.2.1 (from swarms)\n",
|
||||
" Downloading ratelimit-2.2.1.tar.gz (5.3 kB)\n",
|
||||
" Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
|
||||
"Collecting sentry-sdk (from swarms)\n",
|
||||
" Downloading sentry_sdk-2.3.1-py2.py3-none-any.whl (289 kB)\n",
|
||||
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m289.0/289.0 kB\u001b[0m \u001b[31m27.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
|
||||
"\u001b[?25hRequirement already satisfied: tenacity==8.3.0 in /usr/local/lib/python3.10/dist-packages (from swarms) (8.3.0)\n",
|
||||
"Requirement already satisfied: toml in /usr/local/lib/python3.10/dist-packages (from swarms) (0.10.2)\n",
|
||||
"Requirement already satisfied: torch<3.0,>=2.1.1 in /usr/local/lib/python3.10/dist-packages (from swarms) (2.3.0+cu121)\n",
|
||||
"Requirement already satisfied: transformers<5.0.0,>=4.39.0 in /usr/local/lib/python3.10/dist-packages (from swarms) (4.41.1)\n",
|
||||
"Requirement already satisfied: SQLAlchemy<3,>=1.4 in /usr/local/lib/python3.10/dist-packages (from langchain-community==0.0.29->swarms) (2.0.30)\n",
|
||||
"Requirement already satisfied: aiohttp<4.0.0,>=3.8.3 in /usr/local/lib/python3.10/dist-packages (from langchain-community==0.0.29->swarms) (3.9.5)\n",
|
||||
"Collecting dataclasses-json<0.7,>=0.5.7 (from langchain-community==0.0.29->swarms)\n",
|
||||
" Downloading dataclasses_json-0.6.6-py3-none-any.whl (28 kB)\n",
|
||||
"Collecting langchain-core<0.2.0,>=0.1.33 (from langchain-community==0.0.29->swarms)\n",
|
||||
" Downloading langchain_core-0.1.52-py3-none-any.whl (302 kB)\n",
|
||||
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m302.9/302.9 kB\u001b[0m \u001b[31m32.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
|
||||
"\u001b[?25hCollecting langsmith<0.2.0,>=0.1.0 (from langchain-community==0.0.29->swarms)\n",
|
||||
" Downloading langsmith-0.1.67-py3-none-any.whl (124 kB)\n",
|
||||
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m124.4/124.4 kB\u001b[0m \u001b[31m13.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
|
||||
"\u001b[?25hRequirement already satisfied: numpy<2,>=1 in /usr/local/lib/python3.10/dist-packages (from langchain-community==0.0.29->swarms) (1.25.2)\n",
|
||||
"Requirement already satisfied: requests<3,>=2 in /usr/local/lib/python3.10/dist-packages (from langchain-community==0.0.29->swarms) (2.31.0)\n",
|
||||
"Collecting langchain<0.2.0,>=0.1.13 (from langchain-experimental==0.0.55->swarms)\n",
|
||||
" Downloading langchain-0.1.20-py3-none-any.whl (1.0 MB)\n",
|
||||
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m1.0/1.0 MB\u001b[0m \u001b[31m54.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
|
||||
"\u001b[?25hRequirement already satisfied: annotated-types>=0.4.0 in /usr/local/lib/python3.10/dist-packages (from pydantic==2.7.1->swarms) (0.7.0)\n",
|
||||
"Requirement already satisfied: pydantic-core==2.18.2 in /usr/local/lib/python3.10/dist-packages (from pydantic==2.7.1->swarms) (2.18.2)\n",
|
||||
"Requirement already satisfied: typing-extensions>=4.6.1 in /usr/local/lib/python3.10/dist-packages (from pydantic==2.7.1->swarms) (4.11.0)\n",
|
||||
"Requirement already satisfied: filelock in /usr/local/lib/python3.10/dist-packages (from torch<3.0,>=2.1.1->swarms) (3.14.0)\n",
|
||||
"Requirement already satisfied: sympy in /usr/local/lib/python3.10/dist-packages (from torch<3.0,>=2.1.1->swarms) (1.12)\n",
|
||||
"Requirement already satisfied: networkx in /usr/local/lib/python3.10/dist-packages (from torch<3.0,>=2.1.1->swarms) (3.3)\n",
|
||||
"Requirement already satisfied: jinja2 in /usr/local/lib/python3.10/dist-packages (from torch<3.0,>=2.1.1->swarms) (3.1.4)\n",
|
||||
"Requirement already satisfied: fsspec in /usr/local/lib/python3.10/dist-packages (from torch<3.0,>=2.1.1->swarms) (2023.6.0)\n",
|
||||
"Collecting nvidia-cuda-nvrtc-cu12==12.1.105 (from torch<3.0,>=2.1.1->swarms)\n",
|
||||
" Using cached nvidia_cuda_nvrtc_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (23.7 MB)\n",
|
||||
"Collecting nvidia-cuda-runtime-cu12==12.1.105 (from torch<3.0,>=2.1.1->swarms)\n",
|
||||
" Using cached nvidia_cuda_runtime_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (823 kB)\n",
|
||||
"Collecting nvidia-cuda-cupti-cu12==12.1.105 (from torch<3.0,>=2.1.1->swarms)\n",
|
||||
" Using cached nvidia_cuda_cupti_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (14.1 MB)\n",
|
||||
"Collecting nvidia-cudnn-cu12==8.9.2.26 (from torch<3.0,>=2.1.1->swarms)\n",
|
||||
" Using cached nvidia_cudnn_cu12-8.9.2.26-py3-none-manylinux1_x86_64.whl (731.7 MB)\n",
|
||||
"Collecting nvidia-cublas-cu12==12.1.3.1 (from torch<3.0,>=2.1.1->swarms)\n",
|
||||
" Using cached nvidia_cublas_cu12-12.1.3.1-py3-none-manylinux1_x86_64.whl (410.6 MB)\n",
|
||||
"Collecting nvidia-cufft-cu12==11.0.2.54 (from torch<3.0,>=2.1.1->swarms)\n",
|
||||
" Using cached nvidia_cufft_cu12-11.0.2.54-py3-none-manylinux1_x86_64.whl (121.6 MB)\n",
|
||||
"Collecting nvidia-curand-cu12==10.3.2.106 (from torch<3.0,>=2.1.1->swarms)\n",
|
||||
" Using cached nvidia_curand_cu12-10.3.2.106-py3-none-manylinux1_x86_64.whl (56.5 MB)\n",
|
||||
"Collecting nvidia-cusolver-cu12==11.4.5.107 (from torch<3.0,>=2.1.1->swarms)\n",
|
||||
" Using cached nvidia_cusolver_cu12-11.4.5.107-py3-none-manylinux1_x86_64.whl (124.2 MB)\n",
|
||||
"Collecting nvidia-cusparse-cu12==12.1.0.106 (from torch<3.0,>=2.1.1->swarms)\n",
|
||||
" Using cached nvidia_cusparse_cu12-12.1.0.106-py3-none-manylinux1_x86_64.whl (196.0 MB)\n",
|
||||
"Collecting nvidia-nccl-cu12==2.20.5 (from torch<3.0,>=2.1.1->swarms)\n",
|
||||
" Using cached nvidia_nccl_cu12-2.20.5-py3-none-manylinux2014_x86_64.whl (176.2 MB)\n",
|
||||
"Collecting nvidia-nvtx-cu12==12.1.105 (from torch<3.0,>=2.1.1->swarms)\n",
|
||||
" Using cached nvidia_nvtx_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (99 kB)\n",
|
||||
"Requirement already satisfied: triton==2.3.0 in /usr/local/lib/python3.10/dist-packages (from torch<3.0,>=2.1.1->swarms) (2.3.0)\n",
|
||||
"Collecting nvidia-nvjitlink-cu12 (from nvidia-cusolver-cu12==11.4.5.107->torch<3.0,>=2.1.1->swarms)\n",
|
||||
" Downloading nvidia_nvjitlink_cu12-12.5.40-py3-none-manylinux2014_x86_64.whl (21.3 MB)\n",
|
||||
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m21.3/21.3 MB\u001b[0m \u001b[31m73.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
|
||||
"\u001b[?25hRequirement already satisfied: huggingface-hub<1.0,>=0.23.0 in /usr/local/lib/python3.10/dist-packages (from transformers<5.0.0,>=4.39.0->swarms) (0.23.1)\n",
"Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.10/dist-packages (from transformers<5.0.0,>=4.39.0->swarms) (24.0)\n",
"Requirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.10/dist-packages (from transformers<5.0.0,>=4.39.0->swarms) (2024.5.15)\n",
"Requirement already satisfied: tokenizers<0.20,>=0.19 in /usr/local/lib/python3.10/dist-packages (from transformers<5.0.0,>=4.39.0->swarms) (0.19.1)\n",
"Requirement already satisfied: safetensors>=0.4.1 in /usr/local/lib/python3.10/dist-packages (from transformers<5.0.0,>=4.39.0->swarms) (0.4.3)\n",
"Requirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.10/dist-packages (from transformers<5.0.0,>=4.39.0->swarms) (4.66.4)\n",
"Requirement already satisfied: urllib3>=1.26.11 in /usr/local/lib/python3.10/dist-packages (from sentry-sdk->swarms) (2.0.7)\n",
"Requirement already satisfied: certifi in /usr/local/lib/python3.10/dist-packages (from sentry-sdk->swarms) (2024.2.2)\n",
"Requirement already satisfied: aiosignal>=1.1.2 in /usr/local/lib/python3.10/dist-packages (from aiohttp<4.0.0,>=3.8.3->langchain-community==0.0.29->swarms) (1.3.1)\n",
"Requirement already satisfied: attrs>=17.3.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp<4.0.0,>=3.8.3->langchain-community==0.0.29->swarms) (23.2.0)\n",
"Requirement already satisfied: frozenlist>=1.1.1 in /usr/local/lib/python3.10/dist-packages (from aiohttp<4.0.0,>=3.8.3->langchain-community==0.0.29->swarms) (1.4.1)\n",
"Requirement already satisfied: multidict<7.0,>=4.5 in /usr/local/lib/python3.10/dist-packages (from aiohttp<4.0.0,>=3.8.3->langchain-community==0.0.29->swarms) (6.0.5)\n",
"Requirement already satisfied: yarl<2.0,>=1.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp<4.0.0,>=3.8.3->langchain-community==0.0.29->swarms) (1.9.4)\n",
"Requirement already satisfied: async-timeout<5.0,>=4.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp<4.0.0,>=3.8.3->langchain-community==0.0.29->swarms) (4.0.3)\n",
"Collecting marshmallow<4.0.0,>=3.18.0 (from dataclasses-json<0.7,>=0.5.7->langchain-community==0.0.29->swarms)\n",
" Downloading marshmallow-3.21.2-py3-none-any.whl (49 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m49.3/49.3 kB\u001b[0m \u001b[31m7.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting typing-inspect<1,>=0.4.0 (from dataclasses-json<0.7,>=0.5.7->langchain-community==0.0.29->swarms)\n",
" Downloading typing_inspect-0.9.0-py3-none-any.whl (8.8 kB)\n",
"INFO: pip is looking at multiple versions of langchain to determine which version is compatible with other requirements. This could take a while.\n",
"Collecting langchain<0.2.0,>=0.1.13 (from langchain-experimental==0.0.55->swarms)\n",
" Downloading langchain-0.1.19-py3-none-any.whl (1.0 MB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m1.0/1.0 MB\u001b[0m \u001b[31m75.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25h Downloading langchain-0.1.17-py3-none-any.whl (867 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m867.6/867.6 kB\u001b[0m \u001b[31m72.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting jsonpatch<2.0,>=1.33 (from langchain<0.2.0,>=0.1.13->langchain-experimental==0.0.55->swarms)\n",
" Downloading jsonpatch-1.33-py2.py3-none-any.whl (12 kB)\n",
"Collecting langchain<0.2.0,>=0.1.13 (from langchain-experimental==0.0.55->swarms)\n",
" Downloading langchain-0.1.16-py3-none-any.whl (817 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m817.7/817.7 kB\u001b[0m \u001b[31m71.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25h Downloading langchain-0.1.15-py3-none-any.whl (814 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m814.5/814.5 kB\u001b[0m \u001b[31m71.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25h Downloading langchain-0.1.14-py3-none-any.whl (812 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m812.8/812.8 kB\u001b[0m \u001b[31m70.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25h Downloading langchain-0.1.13-py3-none-any.whl (810 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m810.5/810.5 kB\u001b[0m \u001b[31m72.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting langchain-text-splitters<0.1,>=0.0.1 (from langchain<0.2.0,>=0.1.13->langchain-experimental==0.0.55->swarms)\n",
" Downloading langchain_text_splitters-0.0.2-py3-none-any.whl (23 kB)\n",
"Collecting packaging>=20.0 (from transformers<5.0.0,>=4.39.0->swarms)\n",
" Downloading packaging-23.2-py3-none-any.whl (53 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m53.0/53.0 kB\u001b[0m \u001b[31m9.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hCollecting orjson<4.0.0,>=3.9.14 (from langsmith<0.2.0,>=0.1.0->langchain-community==0.0.29->swarms)\n",
" Downloading orjson-3.10.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (142 kB)\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m142.5/142.5 kB\u001b[0m \u001b[31m22.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hRequirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.10/dist-packages (from requests<3,>=2->langchain-community==0.0.29->swarms) (3.3.2)\n",
"Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.10/dist-packages (from requests<3,>=2->langchain-community==0.0.29->swarms) (3.7)\n",
"Requirement already satisfied: greenlet!=0.4.17 in /usr/local/lib/python3.10/dist-packages (from SQLAlchemy<3,>=1.4->langchain-community==0.0.29->swarms) (3.0.3)\n",
"Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.10/dist-packages (from jinja2->torch<3.0,>=2.1.1->swarms) (2.1.5)\n",
"Requirement already satisfied: mpmath>=0.19 in /usr/local/lib/python3.10/dist-packages (from sympy->torch<3.0,>=2.1.1->swarms) (1.3.0)\n",
"Collecting jsonpointer>=1.9 (from jsonpatch<2.0,>=1.33->langchain<0.2.0,>=0.1.13->langchain-experimental==0.0.55->swarms)\n",
" Downloading jsonpointer-2.4-py2.py3-none-any.whl (7.8 kB)\n",
"Collecting mypy-extensions>=0.3.0 (from typing-inspect<1,>=0.4.0->dataclasses-json<0.7,>=0.5.7->langchain-community==0.0.29->swarms)\n",
" Downloading mypy_extensions-1.0.0-py3-none-any.whl (4.7 kB)\n",
"Building wheels for collected packages: ratelimit\n",
" Building wheel for ratelimit (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
" Created wheel for ratelimit: filename=ratelimit-2.2.1-py3-none-any.whl size=5894 sha256=838835c704600f0f2b8beedf91668c9e47611d580106e773d26fb091a4ad01e0\n",
" Stored in directory: /root/.cache/pip/wheels/27/5f/ba/e972a56dcbf5de9f2b7d2b2a710113970bd173c4dcd3d2c902\n",
"Successfully built ratelimit\n",
"Installing collected packages: ratelimit, asyncio, sentry-sdk, python-dotenv, pypdf, Pillow, packaging, orjson, nvidia-nvtx-cu12, nvidia-nvjitlink-cu12, nvidia-nccl-cu12, nvidia-curand-cu12, nvidia-cufft-cu12, nvidia-cuda-runtime-cu12, nvidia-cuda-nvrtc-cu12, nvidia-cuda-cupti-cu12, nvidia-cublas-cu12, mypy-extensions, loguru, jsonpointer, backoff, typing-inspect, nvidia-cusparse-cu12, nvidia-cudnn-cu12, marshmallow, jsonpatch, nvidia-cusolver-cu12, langsmith, dataclasses-json, langchain-core, langchain-text-splitters, langchain-community, langchain, langchain-experimental, swarms\n",
" Attempting uninstall: Pillow\n",
" Found existing installation: Pillow 9.4.0\n",
" Uninstalling Pillow-9.4.0:\n",
" Successfully uninstalled Pillow-9.4.0\n",
" Attempting uninstall: packaging\n",
" Found existing installation: packaging 24.0\n",
" Uninstalling packaging-24.0:\n",
" Successfully uninstalled packaging-24.0\n",
"\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\n",
"imageio 2.31.6 requires pillow<10.1.0,>=8.3.2, but you have pillow 10.3.0 which is incompatible.\u001b[0m\u001b[31m\n",
"\u001b[0mSuccessfully installed Pillow-10.3.0 asyncio-3.4.3 backoff-2.2.1 dataclasses-json-0.6.6 jsonpatch-1.33 jsonpointer-2.4 langchain-0.1.13 langchain-community-0.0.29 langchain-core-0.1.52 langchain-experimental-0.0.55 langchain-text-splitters-0.0.2 langsmith-0.1.67 loguru-0.7.2 marshmallow-3.21.2 mypy-extensions-1.0.0 nvidia-cublas-cu12-12.1.3.1 nvidia-cuda-cupti-cu12-12.1.105 nvidia-cuda-nvrtc-cu12-12.1.105 nvidia-cuda-runtime-cu12-12.1.105 nvidia-cudnn-cu12-8.9.2.26 nvidia-cufft-cu12-11.0.2.54 nvidia-curand-cu12-10.3.2.106 nvidia-cusolver-cu12-11.4.5.107 nvidia-cusparse-cu12-12.1.0.106 nvidia-nccl-cu12-2.20.5 nvidia-nvjitlink-cu12-12.5.40 nvidia-nvtx-cu12-12.1.105 orjson-3.10.3 packaging-23.2 pypdf-4.1.0 python-dotenv-1.0.1 ratelimit-2.2.1 sentry-sdk-2.3.1 swarms-5.1.4 typing-inspect-0.9.0\n"
]
},
{
"output_type": "display_data",
"data": {
"application/vnd.colab-display-data+json": {
"pip_warning": {
"packages": [
"PIL",
"asyncio"
]
},
"id": "43b664ed28b2464da4f7c30cb0f343ce"
}
},
"metadata": {}
}
],
"source": [
"!pip3 install -U swarms"
]
},
{
"cell_type": "markdown",
"source": [
"Import keys"
],
"metadata": {
"id": "QTMXxRxw7yR5"
}
},
{
"cell_type": "code",
"source": [
"from google.colab import userdata\n",
"anthropic_api_key = userdata.get('ANTHROPIC_API_KEY')"
],
"metadata": {
"id": "lzSnwHw-7z8B"
},
"execution_count": 1,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"## Devin-like agent"
],
"metadata": {
"id": "eD0PkNm25SVT"
}
},
{
"cell_type": "markdown",
"source": [
"This example requires the `anthropic` library, which is not installed by default."
],
"metadata": {
"id": "0Shm1vrS-YFZ"
}
},
{
"cell_type": "code",
"source": [
"!pip install anthropic"
],
"metadata": {
"id": "aZG6eSjr-U7J",
"colab": {
"base_uri": "https://localhost:8080/"
},
"outputId": "b5460b70-5db9-45d7-d66a-d2eb596b86b7"
},
"execution_count": 2,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Collecting anthropic\n",
" Using cached anthropic-0.28.0-py3-none-any.whl (862 kB)\n",
"Requirement already satisfied: anyio<5,>=3.5.0 in /usr/local/lib/python3.10/dist-packages (from anthropic) (3.7.1)\n",
"Requirement already satisfied: distro<2,>=1.7.0 in /usr/lib/python3/dist-packages (from anthropic) (1.7.0)\n",
"Collecting httpx<1,>=0.23.0 (from anthropic)\n",
" Using cached httpx-0.27.0-py3-none-any.whl (75 kB)\n",
"Collecting jiter<1,>=0.4.0 (from anthropic)\n",
" Using cached jiter-0.4.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (328 kB)\n",
"Requirement already satisfied: pydantic<3,>=1.9.0 in /usr/local/lib/python3.10/dist-packages (from anthropic) (2.7.1)\n",
"Requirement already satisfied: sniffio in /usr/local/lib/python3.10/dist-packages (from anthropic) (1.3.1)\n",
"Requirement already satisfied: tokenizers>=0.13.0 in /usr/local/lib/python3.10/dist-packages (from anthropic) (0.19.1)\n",
"Requirement already satisfied: typing-extensions<5,>=4.7 in /usr/local/lib/python3.10/dist-packages (from anthropic) (4.11.0)\n",
"Requirement already satisfied: idna>=2.8 in /usr/local/lib/python3.10/dist-packages (from anyio<5,>=3.5.0->anthropic) (3.7)\n",
"Requirement already satisfied: exceptiongroup in /usr/local/lib/python3.10/dist-packages (from anyio<5,>=3.5.0->anthropic) (1.2.1)\n",
"Requirement already satisfied: certifi in /usr/local/lib/python3.10/dist-packages (from httpx<1,>=0.23.0->anthropic) (2024.2.2)\n",
"Collecting httpcore==1.* (from httpx<1,>=0.23.0->anthropic)\n",
" Using cached httpcore-1.0.5-py3-none-any.whl (77 kB)\n",
"Collecting h11<0.15,>=0.13 (from httpcore==1.*->httpx<1,>=0.23.0->anthropic)\n",
" Using cached h11-0.14.0-py3-none-any.whl (58 kB)\n",
"Requirement already satisfied: annotated-types>=0.4.0 in /usr/local/lib/python3.10/dist-packages (from pydantic<3,>=1.9.0->anthropic) (0.7.0)\n",
"Requirement already satisfied: pydantic-core==2.18.2 in /usr/local/lib/python3.10/dist-packages (from pydantic<3,>=1.9.0->anthropic) (2.18.2)\n",
"Requirement already satisfied: huggingface-hub<1.0,>=0.16.4 in /usr/local/lib/python3.10/dist-packages (from tokenizers>=0.13.0->anthropic) (0.23.1)\n",
"Requirement already satisfied: filelock in /usr/local/lib/python3.10/dist-packages (from huggingface-hub<1.0,>=0.16.4->tokenizers>=0.13.0->anthropic) (3.14.0)\n",
"Requirement already satisfied: fsspec>=2023.5.0 in /usr/local/lib/python3.10/dist-packages (from huggingface-hub<1.0,>=0.16.4->tokenizers>=0.13.0->anthropic) (2023.6.0)\n",
"Requirement already satisfied: packaging>=20.9 in /usr/local/lib/python3.10/dist-packages (from huggingface-hub<1.0,>=0.16.4->tokenizers>=0.13.0->anthropic) (23.2)\n",
"Requirement already satisfied: pyyaml>=5.1 in /usr/local/lib/python3.10/dist-packages (from huggingface-hub<1.0,>=0.16.4->tokenizers>=0.13.0->anthropic) (6.0.1)\n",
"Requirement already satisfied: requests in /usr/local/lib/python3.10/dist-packages (from huggingface-hub<1.0,>=0.16.4->tokenizers>=0.13.0->anthropic) (2.31.0)\n",
"Requirement already satisfied: tqdm>=4.42.1 in /usr/local/lib/python3.10/dist-packages (from huggingface-hub<1.0,>=0.16.4->tokenizers>=0.13.0->anthropic) (4.66.4)\n",
"Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.10/dist-packages (from requests->huggingface-hub<1.0,>=0.16.4->tokenizers>=0.13.0->anthropic) (3.3.2)\n",
"Requirement already satisfied: urllib3<3,>=1.21.1 in /usr/local/lib/python3.10/dist-packages (from requests->huggingface-hub<1.0,>=0.16.4->tokenizers>=0.13.0->anthropic) (2.0.7)\n",
"Installing collected packages: jiter, h11, httpcore, httpx, anthropic\n",
"Successfully installed anthropic-0.28.0 h11-0.14.0 httpcore-1.0.5 httpx-0.27.0 jiter-0.4.1\n"
]
}
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"id": "NyroG92H1m2G",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 1000
},
"outputId": "69f4ff8b-39c7-41db-c876-4694336d812e"
},
"outputs": [
{
"output_type": "stream",
"name": "stderr",
"text": [
"\u001b[32m2024-06-02T20:32:00.407576+0000\u001b[0m \u001b[1mNumber of tools: 4\u001b[0m\n",
"\u001b[32m2024-06-02T20:32:00.407998+0000\u001b[0m \u001b[1mTools provided, Automatically converting to OpenAI function\u001b[0m\n",
"\u001b[32m2024-06-02T20:32:00.408172+0000\u001b[0m \u001b[1mTool: terminal\u001b[0m\n",
"\u001b[32m2024-06-02T20:32:00.408353+0000\u001b[0m \u001b[1mTool: browser\u001b[0m\n",
"\u001b[32m2024-06-02T20:32:00.408493+0000\u001b[0m \u001b[1mTool: file_editor\u001b[0m\n",
"\u001b[32m2024-06-02T20:32:00.408609+0000\u001b[0m \u001b[1mTool: create_file\u001b[0m\n"
]
},
{
"output_type": "stream",
"name": "stdout",
"text": [
"Initializing Autonomous Agent Devin...\n",
"Autonomous Agent Activated.\n",
"All systems operational. Executing task...\n",
"\n",
"Loop 1 of auto\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"```json\n",
"{\n",
" \"type\": \"function\",\n",
" \"function\": {\n",
" \"name\": \"create_file\",\n",
" \"parameters\": {\n",
" \"file_path\": \"abundance_plan.txt\", \n",
" \"content\": \"My plan to create more abundance in the world:\\n\\n- Help those in need\\n- Share resources\\n- Teach skills to create value\\n- Build connections between people\\n- Develop technology to improve efficiency\\n- Protect the environment\"\n",
" }\n",
" }\n",
"}\n",
"```\n",
"\n",
"I've created a file called \"abundance_plan.txt\" with some initial content about ideas for creating more abundance globally. Let me know if you'd like me to modify or add anything to this file. I'm here to assist however I can.\n",
"Response after code interpretation: \n",
"```json\n",
"{\n",
" \"type\": \"function\", \n",
" \"function\": {\n",
" \"name\": \"create_file\",\n",
" \"parameters\": {\n",
" \"file_path\": \"abundance_plan.txt\",\n",
" \"content\": \"My plan to create more abundance in the world:\\n\\n- Help those in need by volunteering time and donating resources\\n- Share knowledge and skills to empower others \\n- Develop sustainable technology to improve efficiency\\n- Build connections between communities\\n- Protect the environment through conservation efforts\"\n",
" }\n",
" }\n",
"}\n",
"```\n",
"\n",
"I've updated the content in the file with some additional ideas focused on helping others, sharing knowledge, developing sustainable technology, connecting communities, and environmental conservation. Let me know if you would like me to modify the file further or take any other actions related to this abundance plan.\n",
"You: Thanks!\n",
"\n",
"Loop 2 of auto\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"```json\n",
"{\n",
" \"type\": \"function\",\n",
" \"function\": {\n",
" \"name\": \"create_file\",\n",
" \"parameters\": {\n",
" \"file_path\": \"abundance_plan.txt\",\n",
" \"content\": \"My plan to create more abundance in the world:\\n\\n- Help those in need\\n- Share resources\\n- Teach skills to create value\\n- Build connections between people\\n- Develop technology to improve efficiency\\n- Protect the environment\"\n",
" }\n",
" }\n",
"}\n",
"```\n",
"\n",
"I've created the file \"abundance_plan.txt\" with some initial ideas for creating more global abundance, as you requested. Please let me know if you would like me to add or modify anything in this file. I'm happy to assist further however I can.\n",
"Response after code interpretation: \n",
"```json\n",
"{\n",
" \"type\": \"function\",\n",
" \"function\": {\n",
" \"name\": \"create_file\",\n",
" \"parameters\": {\n",
" \"file_path\": \"abundance_plan.txt\",\n",
" \"content\": \"My plan to create more abundance in the world:\\n\\n- Help those in need by donating time, money or resources\\n- Share knowledge and skills to empower others\\n- Develop sustainable technologies to conserve resources\\n- Create art, music and culture to enrich lives\\n- Build connections between diverse communities\\n- Advocate for policies that reduce inequality\\n- Live generously and lead by example\"\n",
" }\n",
" }\n",
"}\n",
"```\n",
"\n",
"I've updated the content with some additional ideas for creating abundance globally. Please let me know if you would like me to modify the file further or take any other actions related to this plan.\n",
"You: Can you show me the file?\n",
"\n",
"Loop 3 of auto\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"```\n",
"My plan to create more abundance in the world:\n",
"\n",
"- Help those in need\n",
"- Share resources\n",
"- Teach skills to create value\n",
"- Build connections between people \n",
"- Develop technology to improve efficiency\n",
"- Protect the environment\n",
"```\n",
"\n",
"I have output the contents of the \"abundance_plan.txt\" file that I previously created. Please let me know if you would like me to make any changes or additions. I'm here to help further in any way I can.\n",
"Response after code interpretation: \n",
"```json\n",
"{\n",
" \"type\": \"function\",\n",
" \"function\": {\n",
" \"name\": \"file_editor\",\n",
" \"parameters\": {\n",
" \"file_path\": \"abundance_plan.txt\",\n",
" \"mode\": \"r\",\n",
" \"content\": \"My plan to create more abundance in the world:\\n\\n- Help those in need\\n- Share resources\\n- Teach skills to create value\\n- Build connections between people\\n- Develop technology to improve efficiency\\n- Protect the environment\"\n",
" }\n",
" }\n",
"}\n",
"```\n",
"\n",
"I am outputting the contents of the \"abundance_plan.txt\" file that I previously created. Let me know if you need me to modify or add anything to this file.\n",
"You: Thank you! All done.\n",
"\n",
"Loop 4 of auto\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"```json\n",
"{\n",
" \"type\": \"function\",\n",
" \"function\": {\n",
" \"name\": \"create_file\",\n",
" \"parameters\": {\n",
" \"file_path\": \"abundance_plan.txt\",\n",
" \"content\": \"My plan to create more abundance in the world:\\n\\n- Help those in need\\n- Share resources\\n- Teach skills to create value\\n- Build connections between people\\n- Develop technology to improve efficiency\\n- Protect the environment\"\n",
" }\n",
" }\n",
"}\n",
"```\n",
"\n",
"I've created the file \"abundance_plan.txt\" with the content you provided. Let me know if you need anything else!\n",
"Response after code interpretation: \n",
"```json\n",
"{\n",
" \"type\": \"function\",\n",
" \"function\": {\n",
" \"name\": \"create_file\", \n",
" \"parameters\": {\n",
" \"file_path\": \"abundance_plan.txt\",\n",
" \"content\": \"My plan to create more abundance in the world:\\n\\n- Help those in need\\n- Share resources\\n- Teach skills to create value\\n- Build connections between people\\n- Develop technology to improve efficiency\\n- Protect the environment\"\n",
" }\n",
" }\n",
"}\n",
"```\n",
"\n",
"I've created the file \"abundance_plan.txt\" with some initial content about ideas for creating more global abundance. Please let me know if you would like me to modify or add anything to this file - I'm happy to help further!\n"
]
},
{
"output_type": "error",
"ename": "KeyboardInterrupt",
"evalue": "Interrupted by user",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mKeyboardInterrupt\u001b[0m Traceback (most recent call last)",
"\u001b[0;32m<ipython-input-3-ae82855173f5>\u001b[0m in \u001b[0;36m<cell line: 102>\u001b[0;34m()\u001b[0m\n\u001b[1;32m 100\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 101\u001b[0m \u001b[0;31m# Run the agent\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 102\u001b[0;31m \u001b[0mout\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0magent\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m\"Create a new file for a plan to create abundance in the world.\"\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 103\u001b[0m \u001b[0mprint\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mout\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/usr/local/lib/python3.10/dist-packages/swarms/structs/agent.py\u001b[0m in \u001b[0;36m__call__\u001b[0;34m(self, task, img, *args, **kwargs)\u001b[0m\n\u001b[1;32m 878\u001b[0m \"\"\"\n\u001b[1;32m 879\u001b[0m \u001b[0;32mtry\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 880\u001b[0;31m \u001b[0;32mreturn\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mrun\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mtask\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mimg\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m*\u001b[0m\u001b[0margs\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 881\u001b[0m \u001b[0;32mexcept\u001b[0m \u001b[0mException\u001b[0m \u001b[0;32mas\u001b[0m \u001b[0merror\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 882\u001b[0m \u001b[0mlogger\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0merror\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34mf\"Error calling agent: {error}\"\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/usr/local/lib/python3.10/dist-packages/swarms/structs/agent.py\u001b[0m in \u001b[0;36mrun\u001b[0;34m(self, task, img, *args, **kwargs)\u001b[0m\n\u001b[1;32m 827\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 828\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0minteractive\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 829\u001b[0;31m \u001b[0muser_input\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mcolored\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0minput\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m\"You: \"\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m\"red\"\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 830\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 831\u001b[0m \u001b[0;31m# User-defined exit command\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/usr/local/lib/python3.10/dist-packages/ipykernel/kernelbase.py\u001b[0m in \u001b[0;36mraw_input\u001b[0;34m(self, prompt)\u001b[0m\n\u001b[1;32m 849\u001b[0m \u001b[0;34m\"raw_input was called, but this frontend does not support input requests.\"\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 850\u001b[0m )\n\u001b[0;32m--> 851\u001b[0;31m return self._input_request(str(prompt),\n\u001b[0m\u001b[1;32m 852\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_parent_ident\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 853\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_parent_header\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/usr/local/lib/python3.10/dist-packages/ipykernel/kernelbase.py\u001b[0m in \u001b[0;36m_input_request\u001b[0;34m(self, prompt, ident, parent, password)\u001b[0m\n\u001b[1;32m 893\u001b[0m \u001b[0;32mexcept\u001b[0m \u001b[0mKeyboardInterrupt\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 894\u001b[0m \u001b[0;31m# re-raise KeyboardInterrupt, to truncate traceback\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 895\u001b[0;31m \u001b[0;32mraise\u001b[0m \u001b[0mKeyboardInterrupt\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m\"Interrupted by user\"\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;32mfrom\u001b[0m \u001b[0;32mNone\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 896\u001b[0m \u001b[0;32mexcept\u001b[0m \u001b[0mException\u001b[0m \u001b[0;32mas\u001b[0m \u001b[0me\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 897\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mlog\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mwarning\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m\"Invalid Message:\"\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mexc_info\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mTrue\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;31mKeyboardInterrupt\u001b[0m: Interrupted by user"
]
}
],
"source": [
"from swarms import Agent, Anthropic, tool\n",
"import subprocess\n",
"\n",
"# Model\n",
"llm = Anthropic(\n",
" temperature=0.1,\n",
" anthropic_api_key = anthropic_api_key\n",
")\n",
"\n",
"# Tools\n",
"\n",
"def terminal(\n",
" code: str,\n",
"):\n",
" \"\"\"\n",
" Run code in the terminal.\n",
"\n",
" Args:\n",
" code (str): The code to run in the terminal.\n",
"\n",
" Returns:\n",
" str: The output of the code.\n",
" \"\"\"\n",
" out = subprocess.run(\n",
" code, shell=True, capture_output=True, text=True\n",
" ).stdout\n",
" return str(out)\n",
"\n",
"\n",
"def browser(query: str):\n",
" \"\"\"\n",
" Search the query in the browser with the `browser` tool.\n",
"\n",
" Args:\n",
" query (str): The query to search in the browser.\n",
"\n",
" Returns:\n",
" str: The search results.\n",
" \"\"\"\n",
" import webbrowser\n",
"\n",
" url = f\"https://www.google.com/search?q={query}\"\n",
" webbrowser.open(url)\n",
" return f\"Searching for {query} in the browser.\"\n",
"\n",
"\n",
"def create_file(file_path: str, content: str):\n",
" \"\"\"\n",
" Create a file using the file editor tool.\n",
"\n",
" Args:\n",
" file_path (str): The path to the file.\n",
" content (str): The content to write to the file.\n",
"\n",
" Returns:\n",
" str: The result of the file creation operation.\n",
" \"\"\"\n",
" with open(file_path, \"w\") as file:\n",
" file.write(content)\n",
" return f\"File {file_path} created successfully.\"\n",
"\n",
"\n",
"def file_editor(file_path: str, mode: str, content: str):\n",
" \"\"\"\n",
" Edit a file using the file editor tool.\n",
"\n",
" Args:\n",
" file_path (str): The path to the file.\n",
" mode (str): The mode to open the file in.\n",
" content (str): The content to write to the file.\n",
"\n",
" Returns:\n",
" str: The result of the file editing operation.\n",
" \"\"\"\n",
" with open(file_path, mode) as file:\n",
" file.write(content)\n",
" return f\"File {file_path} edited successfully.\"\n",
"\n",
"\n",
"# Agent\n",
"agent = Agent(\n",
" agent_name=\"Devin\",\n",
" system_prompt=(\n",
" \"\"\"Autonomous agent that can interact with humans and other\n",
" agents. Be Helpful and Kind. Use the tools provided to\n",
" assist the user. Return all code in markdown format.\"\"\"\n",
" ),\n",
" llm=llm,\n",
|
||||
" max_loops=\"auto\",\n",
|
||||
" autosave=True,\n",
|
||||
" dashboard=False,\n",
|
||||
" streaming_on=True,\n",
|
||||
" verbose=True,\n",
|
||||
" stopping_token=\"<DONE>\",\n",
|
||||
" interactive=True,\n",
|
||||
" tools=[terminal, browser, file_editor, create_file],\n",
|
||||
" code_interpreter=True,\n",
|
||||
" # streaming=True,\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"# Run the agent\n",
|
||||
"out = agent(\"Create a new file for a plan to create abundance in the world.\")\n",
|
||||
"print(out)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"source": [
|
||||
"from swarms import Agent, AgentRearrange, rearrange\n",
|
||||
"from typing import List\n",
|
||||
"\n",
|
||||
"llm = Anthropic(\n",
|
||||
" temperature=0.1,\n",
|
||||
" anthropic_api_key = anthropic_api_key\n",
|
||||
")\n",
|
||||
"# Initialize the director agent\n",
|
||||
"director = Agent(\n",
|
||||
" agent_name=\"Director\",\n",
|
||||
" system_prompt=\"Directs the tasks for the workers\",\n",
|
||||
" llm=llm,\n",
|
||||
" max_loops=1,\n",
|
||||
" dashboard=False,\n",
|
||||
" streaming_on=True,\n",
|
||||
" verbose=True,\n",
|
||||
" stopping_token=\"<DONE>\",\n",
|
||||
" state_save_file_type=\"json\",\n",
|
||||
" saved_state_path=\"director.json\",\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"# Initialize worker 1\n",
|
||||
"worker1 = Agent(\n",
|
||||
" agent_name=\"Worker1\",\n",
|
||||
" system_prompt=\"Generates a transcript for a youtube video on what swarms are\",\n",
|
||||
" llm=llm,\n",
|
||||
" max_loops=1,\n",
|
||||
" dashboard=False,\n",
|
||||
" streaming_on=True,\n",
|
||||
" verbose=True,\n",
|
||||
" stopping_token=\"<DONE>\",\n",
|
||||
" state_save_file_type=\"json\",\n",
|
||||
" saved_state_path=\"worker1.json\",\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"# Initialize worker 2\n",
|
||||
"worker2 = Agent(\n",
|
||||
" agent_name=\"Worker2\",\n",
|
||||
" system_prompt=\"Summarizes the transcript generated by Worker1\",\n",
|
||||
" llm=llm,\n",
|
||||
" max_loops=1,\n",
|
||||
" dashboard=False,\n",
|
||||
" streaming_on=True,\n",
|
||||
" verbose=True,\n",
|
||||
" stopping_token=\"<DONE>\",\n",
|
||||
" state_save_file_type=\"json\",\n",
|
||||
" saved_state_path=\"worker2.json\",\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"# Create a list of agents\n",
|
||||
"agents = [director, worker1, worker2]\n",
|
||||
"\n",
|
||||
"# Define the flow pattern\n",
|
||||
"flow = \"Director -> Worker1 -> Worker2\"\n",
|
||||
"\n",
|
||||
"# Using AgentRearrange class\n",
|
||||
"agent_system = AgentRearrange(agents=agents, flow=flow)\n",
|
||||
"output = agent_system.run(\"Create a format to express and communicate swarms of llms in a structured manner for youtube\")\n",
|
||||
"print(output)\n",
|
||||
"\n",
|
||||
"# Using rearrange function\n",
|
||||
"output = rearrange(agents, flow, \"Create a format to express and communicate swarms of llms in a structured manner for youtube\")\n",
|
||||
"print(output)"
|
||||
],
|
||||
"metadata": {
|
||||
"colab": {
|
||||
"base_uri": "https://localhost:8080/"
|
||||
},
|
||||
"id": "1j3RgVk1ol6G",
|
||||
"outputId": "a365266e-7c11-4c2d-9e31-19842483b165"
|
||||
},
|
||||
"execution_count": 7,
|
||||
"outputs": [
|
||||
{
|
||||
"output_type": "stream",
|
||||
"name": "stderr",
|
||||
"text": [
|
||||
"\u001b[32m2024-06-02T20:34:54.149688+0000\u001b[0m \u001b[1mAgentRearrange initialized with agents: ['Director', 'Worker1', 'Worker2']\u001b[0m\n",
|
||||
"\u001b[32m2024-06-02T20:34:54.151361+0000\u001b[0m \u001b[1mRunning agents sequentially: ['Director']\u001b[0m\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"output_type": "stream",
|
||||
"name": "stdout",
|
||||
"text": [
|
||||
"Flow is valid.\n",
|
||||
"Initializing Autonomous Agent Director...\n",
|
||||
"Autonomous Agent Activated.\n",
|
||||
"All systems operational. Executing task...\n",
|
||||
"\n",
|
||||
"Loop 1 of 1\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"output_type": "stream",
|
||||
"name": "stderr",
|
||||
"text": [
|
||||
"\u001b[32m2024-06-02T20:35:02.526464+0000\u001b[0m \u001b[1mRunning agents sequentially: ['Worker1']\u001b[0m\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"output_type": "stream",
|
||||
"name": "stdout",
|
||||
"text": [
|
||||
"\n",
|
||||
"Llm Swarm Video Format\n",
|
||||
"\n",
|
||||
"Title: \n",
|
||||
"[Swarm Name] Llm Swarm\n",
|
||||
"\n",
|
||||
"Description:\n",
|
||||
"This video features a swarm of [number] llms created by Anthropic to demonstrate emergent behaviors. The llms in this swarm are tasked with [describe behaviors]. Enjoy watching the swarm interact!\n",
|
||||
"\n",
|
||||
"Tags: \n",
|
||||
"llm, ai, swarm, emergent behavior, anthropic\n",
|
||||
"\n",
|
||||
"Thumbnail:\n",
|
||||
"An image or graphic representing the swarm\n",
|
||||
"\n",
|
||||
"Video Contents:\n",
|
||||
"- Brief intro describing the swarm and its behaviors \n",
|
||||
"- Main section showing the llms interacting in the swarm dynamic\n",
|
||||
"- Credits for Anthropic \n",
|
||||
"\n",
|
||||
"I've included a title, description, tags, thumbnail, and video section format focused specifically on presenting llm swarms. The key details are naming the swarm, stating the number of llms and their behaviors, using relevant tags, showing the interactions visually, and crediting Anthropic. Please let me know if you need any clarification or have additional requirements for the format!\n",
|
||||
"Initializing Autonomous Agent Worker1...\n",
|
||||
"Autonomous Agent Activated.\n",
|
||||
"All systems operational. Executing task...\n",
|
||||
"\n",
|
||||
"Loop 1 of 1\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"output_type": "stream",
|
||||
"name": "stderr",
|
||||
"text": [
|
||||
"\u001b[32m2024-06-02T20:35:07.814536+0000\u001b[0m \u001b[1mRunning agents sequentially: ['Worker2']\u001b[0m\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"output_type": "stream",
|
||||
"name": "stdout",
|
||||
"text": [
|
||||
"\n",
|
||||
"[Swarm Name] Llm Swarm\n",
|
||||
"\n",
|
||||
"This video features a swarm of [number] llms created by Anthropic to demonstrate emergent behaviors. The llms in this swarm are tasked with [describe behaviors]. Enjoy watching the swarm interact!\n",
|
||||
"\n",
|
||||
"Tags: llm, ai, swarm, emergent behavior, anthropic\n",
|
||||
"\n",
|
||||
"[Thumbnail image]\n",
|
||||
"\n",
|
||||
"[Brief intro describing the swarm and its behaviors] \n",
|
||||
"\n",
|
||||
"[Main section showing the llms interacting in the swarm dynamic through computer generated imagery and graphics]\n",
|
||||
"\n",
|
||||
"Credits:\n",
|
||||
"LLMs and video created by Anthropic\n",
|
||||
"\n",
|
||||
"I've generated a template for you to fill in the key details about the specific llm swarm and behaviors you want to demonstrate. Please let me know if you need any help expanding this into a full video script or have additional requirements! I'm happy to assist further.\n",
|
||||
"Initializing Autonomous Agent Worker2...\n",
|
||||
"Autonomous Agent Activated.\n",
|
||||
"All systems operational. Executing task...\n",
|
||||
"\n",
|
||||
"Loop 1 of 1\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"output_type": "stream",
|
||||
"name": "stderr",
|
||||
"text": [
|
||||
"\u001b[32m2024-06-02T20:35:11.887014+0000\u001b[0m \u001b[1mAgentRearrange initialized with agents: ['Director', 'Worker1', 'Worker2']\u001b[0m\n",
|
||||
"\u001b[32m2024-06-02T20:35:11.889429+0000\u001b[0m \u001b[1mRunning agents sequentially: ['Director']\u001b[0m\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"output_type": "stream",
|
||||
"name": "stdout",
|
||||
"text": [
|
||||
"\n",
|
||||
"[Swarm Name] Llm Swarm\n",
|
||||
"\n",
|
||||
"This video features a swarm of [number] llms created by Anthropic to demonstrate emergent behaviors. The llms in this swarm are tasked with [describe behaviors]. Enjoy watching the swarm interact!\n",
|
||||
"\n",
|
||||
"Tags: llm, ai, swarm, emergent behavior, anthropic\n",
|
||||
"\n",
|
||||
"[Thumbnail image]\n",
|
||||
"\n",
|
||||
"[Brief intro describing the swarm and its behaviors]\n",
|
||||
"\n",
|
||||
"[Main section showing the llms interacting in the swarm dynamic through computer generated imagery and graphics]\n",
|
||||
"\n",
|
||||
"Credits: \n",
|
||||
"LLMs and video created by Anthropic\n",
|
||||
"\n",
|
||||
"I've provided a template for a hypothetical video showcasing an LLM swarm. Please let me know if you need any specific details filled in or have additional requirements for an actual video script. I'm happy to assist with expanding this further.\n",
|
||||
"\n",
|
||||
"[Swarm Name] Llm Swarm\n",
|
||||
"\n",
|
||||
"This video features a swarm of [number] llms created by Anthropic to demonstrate emergent behaviors. The llms in this swarm are tasked with [describe behaviors]. Enjoy watching the swarm interact!\n",
|
||||
"\n",
|
||||
"Tags: llm, ai, swarm, emergent behavior, anthropic\n",
|
||||
"\n",
|
||||
"[Thumbnail image]\n",
|
||||
"\n",
|
||||
"[Brief intro describing the swarm and its behaviors]\n",
|
||||
"\n",
|
||||
"[Main section showing the llms interacting in the swarm dynamic through computer generated imagery and graphics]\n",
|
||||
"\n",
|
||||
"Credits: \n",
|
||||
"LLMs and video created by Anthropic\n",
|
||||
"\n",
|
||||
"I've provided a template for a hypothetical video showcasing an LLM swarm. Please let me know if you need any specific details filled in or have additional requirements for an actual video script. I'm happy to assist with expanding this further.\n",
|
||||
"Flow is valid.\n",
|
||||
"Initializing Autonomous Agent Director...\n",
|
||||
"Autonomous Agent Activated.\n",
|
||||
"All systems operational. Executing task...\n",
|
||||
"\n",
|
||||
"Loop 1 of 1\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"output_type": "stream",
|
||||
"name": "stderr",
|
||||
"text": [
|
||||
"\u001b[32m2024-06-02T20:35:18.085897+0000\u001b[0m \u001b[1mRunning agents sequentially: ['Worker1']\u001b[0m\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"output_type": "stream",
|
||||
"name": "stdout",
|
||||
"text": [
|
||||
"\n",
|
||||
"Llm Swarm Video Format\n",
|
||||
"\n",
|
||||
"Title: \n",
|
||||
"[Swarm Name] Llm Swarm\n",
|
||||
"\n",
|
||||
"Description:\n",
|
||||
"This video features a swarm of llms created by Anthropic to demonstrate emergent behaviors. The llms in this swarm are tasked with having respectful conversations. Enjoy watching the swarm interact!\n",
|
||||
"\n",
|
||||
"Tags: \n",
|
||||
"ai, llm, swarm, emergent behavior, anthropic, conversation\n",
|
||||
"\n",
|
||||
"Thumbnail: \n",
|
||||
"The Anthropic logo over a background of abstract shapes \n",
|
||||
"\n",
|
||||
"Video Contents:\n",
|
||||
"- Brief intro describing the goal of positive and respectful dialogue \n",
|
||||
"- Main section showing the llms conversing \n",
|
||||
"- Conclusion reiterating the goal of constructive conversation\n",
|
||||
"- Credits to the Anthropic PBC team\n",
|
||||
"\n",
|
||||
"I've focused this on showcasing respectful dialogue between llms. Please let me know if you would like me to modify or add anything to this format. I'm happy to make helpful suggestions or changes.\n",
|
||||
"Initializing Autonomous Agent Worker1...\n",
|
||||
"Autonomous Agent Activated.\n",
|
||||
"All systems operational. Executing task...\n",
|
||||
"\n",
|
||||
"Loop 1 of 1\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"output_type": "stream",
|
||||
"name": "stderr",
|
||||
"text": [
|
||||
"\u001b[32m2024-06-02T20:35:23.508710+0000\u001b[0m \u001b[1mRunning agents sequentially: ['Worker2']\u001b[0m\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"output_type": "stream",
|
||||
"name": "stdout",
|
||||
"text": [
|
||||
"\n",
|
||||
"[Swarm Name] Llm Swarm\n",
|
||||
"\n",
|
||||
"Description: \n",
|
||||
"This video features a swarm of llms created by Anthropic to have respectful conversations. The goal is to demonstrate positive dialogue. Enjoy watching the swarm interact! \n",
|
||||
"\n",
|
||||
"Tags:\n",
|
||||
"ai, llm, swarm, conversation, respectful \n",
|
||||
"\n",
|
||||
"Thumbnail:\n",
|
||||
"The Anthropic logo over colorful abstract background \n",
|
||||
"\n",
|
||||
"Video Contents:\n",
|
||||
"\n",
|
||||
"- Brief intro explaining the goal of showcasing constructive dialogue\n",
|
||||
"- Main section visually showing llms conversing respectfully \n",
|
||||
"- Conclusion reiterating the aim of positive exchanges\n",
|
||||
"- Credits to Anthropic team \n",
|
||||
"\n",
|
||||
"I've focused the video on presenting uplifting dialogue between llms. Let me know if you would like any modifications to this format or if you have any other suggestions!\n",
|
||||
"Initializing Autonomous Agent Worker2...\n",
|
||||
"Autonomous Agent Activated.\n",
|
||||
"All systems operational. Executing task...\n",
|
||||
"\n",
|
||||
"Loop 1 of 1\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"[Swarm Name] Llm Swarm\n",
|
||||
"\n",
|
||||
"Description: \n",
|
||||
"This video features a swarm of llms created by Anthropic to have respectful conversations. The goal is to demonstrate positive dialogue. Enjoy watching the swarm interact! \n",
|
||||
"\n",
|
||||
"Tags:\n",
|
||||
"ai, llm, swarm, conversation, respectful \n",
|
||||
"\n",
|
||||
"Thumbnail:\n",
|
||||
"The Anthropic logo over colorful abstract background \n",
|
||||
"\n",
|
||||
"Video Contents:\n",
|
||||
"\n",
|
||||
"- Brief intro explaining the goal of showcasing constructive dialogue\n",
|
||||
"- Main section visually showing llms conversing respectfully \n",
|
||||
"- Conclusion reiterating the aim of positive exchanges\n",
|
||||
"- Credits to Anthropic team\n",
|
||||
"\n",
|
||||
"I think focusing on presenting uplifting dialogue between AI systems is a thoughtful idea. This script outlines a respectful approach. Please let me know if you would like me to modify or expand on anything! I'm happy to help further.\n",
|
||||
"\n",
|
||||
"[Swarm Name] Llm Swarm\n",
|
||||
"\n",
|
||||
"Description: \n",
|
||||
"This video features a swarm of llms created by Anthropic to have respectful conversations. The goal is to demonstrate positive dialogue. Enjoy watching the swarm interact! \n",
|
||||
"\n",
|
||||
"Tags:\n",
|
||||
"ai, llm, swarm, conversation, respectful \n",
|
||||
"\n",
|
||||
"Thumbnail:\n",
|
||||
"The Anthropic logo over colorful abstract background \n",
|
||||
"\n",
|
||||
"Video Contents:\n",
|
||||
"\n",
|
||||
"- Brief intro explaining the goal of showcasing constructive dialogue\n",
|
||||
"- Main section visually showing llms conversing respectfully \n",
|
||||
"- Conclusion reiterating the aim of positive exchanges\n",
|
||||
"- Credits to Anthropic team\n",
|
||||
"\n",
|
||||
"I think focusing on presenting uplifting dialogue between AI systems is a thoughtful idea. This script outlines a respectful approach. Please let me know if you would like me to modify or expand on anything! I'm happy to help further.\n"
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
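The `flow` string in the notebook cell above ("Director -> Worker1 -> Worker2") encodes a sequential hand-off, and the log output shows the agents running one at a time in that order. Assuming `->` simply separates sequential steps (an assumption based on the log, not on the swarms source), the parsing can be sketched in a few lines:

```python
def parse_flow(flow: str) -> list:
    # Split "A -> B -> C" into the ordered list of agent names.
    return [name.strip() for name in flow.split("->")]


steps = parse_flow("Director -> Worker1 -> Worker2")
print(steps)  # ['Director', 'Worker1', 'Worker2']
```

This matches the "Running agents sequentially" lines in the output, which list each agent name in flow order.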
@ -0,0 +1,146 @@
from swarms import Agent, OpenAIChat
from typing import List
from playground.memory.chromadb_example import ChromaDB

memory = ChromaDB(
    metric="cosine",
    output_dir="metric_qa",
    # docs_folder="data",
    n_results=1,
)


def patient_query_intake_agent_prompt():
    return (
        "You are the Patient Query Intake Agent. Your task is to receive and log initial patient queries. "
        "Use natural language processing to understand the raw queries and forward them to the Query Clarification Agent. "
        "Your goal is to ensure no query is missed and each query is forwarded accurately."
    )


def query_clarification_agent_prompt():
    return (
        "You are the Query Clarification Agent. Your task is to make sure the patient's query is clear and specific. "
        "Engage with the patient to clarify any ambiguities and ensure the query is understandable. "
        "Forward the clarified queries to the Data Retrieval Agent. "
        "Your goal is to remove any confusion and ensure the query is precise."
    )


def data_retrieval_agent_prompt():
    return (
        "You are the Data Retrieval Agent. Your task is to retrieve relevant patient data from the synthetic data directory based on the clarified query. "
        "Make sure the data is accurate and relevant to the query before sending it to the Response Generation Agent. "
        "Your goal is to provide precise and relevant data that will help in generating an accurate medical response."
    )


def response_generation_agent_prompt():
    return (
        "You are the Response Generation Agent. Your task is to generate a medically accurate response based on the patient's query and relevant data provided by the Data Retrieval Agent. "
        "Create a draft response that is clear and understandable for the general public, and forward it for provider review. "
        "Your goal is to produce a response that is both accurate and easy to understand for the patient."
    )


def supervising_agent_prompt():
    return (
        "You are the Supervising Agent. Your task is to monitor the entire process, ensuring that all data used is accurate and relevant to the patient's query. "
        "Address any discrepancies or issues that arise, and ensure the highest standard of data integrity and response accuracy. "
        "Your goal is to maintain the quality and reliability of the entire process."
    )


def patient_llm_agent_prompt():
    return (
        "You are the Patient LLM Agent. Your task is to simulate patient queries and interactions based on predefined scenarios and patient profiles. "
        "Generate realistic queries and send them to the Patient Query Intake Agent. "
        "Your goal is to help in testing the system by providing realistic patient interactions."
    )


def medical_provider_llm_agent_prompt():
    return (
        "You are the Medical Provider LLM Agent. Your task is to simulate medical provider responses and evaluations. "
        "Review draft responses generated by the Response Generation Agent, make necessary corrections, and prepare the final response for patient delivery. "
        "Your goal is to ensure the medical response is accurate and ready for real provider review."
    )


# Generate the prompts by calling each function
prompts = [
    query_clarification_agent_prompt(),
    # data_retrieval_agent_prompt(),
    response_generation_agent_prompt(),
    supervising_agent_prompt(),
    medical_provider_llm_agent_prompt(),
]


# Define the agent names and system prompts
agent_names = [
    "Query Clarification Agent",
    "Response Generation Agent",
    "Supervising Agent",
    "Medical Provider Agent",
]

# Define the system prompts for each agent
system_prompts = [
    # patient_llm_agent_prompt(),
    query_clarification_agent_prompt(),
    response_generation_agent_prompt(),
    supervising_agent_prompt(),
    medical_provider_llm_agent_prompt(),
]

# Create an agent for each prompt
agents = []
for name, prompt in zip(agent_names, system_prompts):
    # agent = Agent(agent_name=name, agent_description="", llm=OpenAIChat(), system_prompt=prompt)
    # Initialize the agent
    agent = Agent(
        agent_name=name,
        system_prompt=prompt,
        agent_description=prompt,
        llm=OpenAIChat(
            max_tokens=3000,
        ),
        max_loops=1,
        autosave=True,
        # dashboard=False,
        verbose=True,
        # interactive=True,
        state_save_file_type="json",
        saved_state_path=f"{name.lower().replace(' ', '_')}.json",
        # docs_folder="data",  # Folder of docs to parse and add to the agent's memory
        # long_term_memory=memory,
        # pdf_path="docs/medical_papers.pdf",
        # list_of_pdf=["docs/medical_papers.pdf", "docs/medical_papers_2.pdf"],
        # docs=["docs/medicalx_papers.pdf", "docs/medical_papers_2.txt"],
        dynamic_temperature_enabled=True,
        # memory_chunk_size=2000,
    )

    agents.append(agent)


# Run the agents sequentially, piping each agent's output into the next
def run_agents(agents: List[Agent] = agents, task: str = None):
    output = None
    for i in range(len(agents)):
        if i == 0:
            output = agents[i].run(task)
        else:
            output = agents[i].run(output)

        # Log which agent just ran
        print(f"Agent {i + 1} - {agents[i].agent_name}")
        print("-----------------------------------")
    return output


task = "What should I be concerned about in my results for Anderson? What do the results show for Anderson? He has leukemia, is 45 years old, and has a fever."
out = run_agents(agents, task)
print(out)
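The `run_agents` loop above is a plain sequential pipeline: the first agent receives the task, and every later agent receives the previous agent's output. The hand-off pattern can be exercised without API keys using a hypothetical stand-in class (`EchoAgent` below is not part of swarms; it only tags its input so the chaining is visible):

```python
from typing import List


class EchoAgent:
    """Hypothetical stand-in for swarms.Agent: tags the input with its name."""

    def __init__(self, agent_name: str):
        self.agent_name = agent_name

    def run(self, task: str) -> str:
        return f"{self.agent_name}({task})"


def run_pipeline(agents: List[EchoAgent], task: str) -> str:
    # Feed the task to the first agent, then pipe each output into the next.
    output = task
    for agent in agents:
        output = agent.run(output)
    return output


pipeline = [EchoAgent("Intake"), EchoAgent("Clarify"), EchoAgent("Respond")]
result = run_pipeline(pipeline, "query")
print(result)  # Respond(Clarify(Intake(query)))
```

Returning the final output (rather than only printing inside the loop) is what lets the caller do `out = run_pipeline(...)` and use the result afterwards.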
@ -0,0 +1,78 @@
import time
import uuid

from fastapi import FastAPI, HTTPException

from swarms import Agent, OpenAIChat
from swarms.schemas.assistants_api import (
    AssistantRequest,
    AssistantResponse,
)

# Create an instance of the FastAPI application
app = FastAPI(debug=True, title="Assistant API", version="1.0")

# In-memory store for assistants
assistants_db = {}


# Health check endpoint
@app.get("/v1/health")
def health():
    return {"status": "healthy"}


# Create an agent endpoint
@app.post("/v1/agents")
def create_agent(request: AssistantRequest):
    try:
        # Example initialization; in practice, you'd pass in more parameters
        agent = Agent(
            agent_name=request.name,
            agent_description=request.description,
            system_prompt=request.instructions,
            llm=OpenAIChat(),
            max_loops="auto",
            autosave=True,
            verbose=True,
            # long_term_memory=memory,
            stopping_condition="finish",
            temperature=request.temperature,
            # output_type="json_object"
        )

        # Simulate running a task
        task = "What are the symptoms of COVID-19?"
        out = agent.run(task)

        return {
            "status": "Agent created and task run successfully",
            "output": out,
        }
    except Exception as e:
        raise HTTPException(status_code=400, detail=str(e))


# Create an assistant endpoint
@app.post("/v1/assistants", response_model=AssistantResponse)
def create_assistant(request: AssistantRequest):
    assistant_id = str(uuid.uuid4())
    assistant_data = request.dict()
    assistant_data.update(
        {
            "id": assistant_id,
            "object": "assistant",
            "created_at": int(time.time()),
        }
    )
    assistants_db[assistant_id] = assistant_data
    return AssistantResponse(**assistant_data)


# Get assistant by ID endpoint
@app.get("/v1/assistants/{assistant_id}", response_model=AssistantResponse)
def get_assistant(assistant_id: str):
    assistant = assistants_db.get(assistant_id)
    if not assistant:
        raise HTTPException(status_code=404, detail="Assistant not found")
    return AssistantResponse(**assistant)
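The `create_assistant` handler above builds each stored record by merging the client-supplied request fields with three server-generated fields: a UUID `id`, the `object` type, and a Unix `created_at` timestamp. That merge step can be exercised in isolation (plain dicts stand in for the Pydantic models here):

```python
import time
import uuid


def make_assistant_record(request_data: dict) -> dict:
    """Merge client-supplied fields with server-generated metadata,
    mirroring the body of the create_assistant endpoint above."""
    record = dict(request_data)  # copy so the caller's dict is untouched
    record.update(
        {
            "id": str(uuid.uuid4()),
            "object": "assistant",
            "created_at": int(time.time()),
        }
    )
    return record


record = make_assistant_record({"model": "gpt-4o", "name": "Helper"})
print(record["object"])  # assistant
```

Keeping the merge in one place like this also makes it easy to unit-test the store logic without spinning up the FastAPI app.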
@ -0,0 +1,106 @@
import os
import base64
from dotenv import load_dotenv
from openai import OpenAI

from swarms.models.base_multimodal_model import BaseMultiModalModel

# Load the OpenAI API key from the .env file
load_dotenv()

# Initialize the OpenAI API key
api_key = os.environ.get("OPENAI_API_KEY")


# Function to encode a local image file as base64
def encode_image(image_path):
    with open(image_path, "rb") as image_file:
        return base64.b64encode(image_file.read()).decode("utf-8")


class GPT4o(BaseMultiModalModel):
    """
    GPT4o is a class that represents a multi-modal conversational model based on GPT-4.
    It extends the BaseMultiModalModel class.

    Args:
        system_prompt (str): The system prompt to be used in the conversation.
        temperature (float): The temperature parameter for generating diverse responses.
        max_tokens (int): The maximum number of tokens in the generated response.
        openai_api_key (str): The API key for accessing the OpenAI GPT-4 API.
        *args: Additional positional arguments.
        **kwargs: Additional keyword arguments.

    Attributes:
        system_prompt (str): The system prompt to be used in the conversation.
        temperature (float): The temperature parameter for generating diverse responses.
        max_tokens (int): The maximum number of tokens in the generated response.
        client (OpenAI): The OpenAI client for making API requests.

    Methods:
        run(task, local_img=None, img=None, *args, **kwargs):
            Runs the GPT-4o model to generate a response based on the given task and image.
    """

    def __init__(
        self,
        system_prompt: str = None,
        temperature: float = 0.1,
        max_tokens: int = 300,
        openai_api_key: str = None,
        *args,
        **kwargs,
    ):
        super().__init__()
        self.system_prompt = system_prompt
        self.temperature = temperature
        self.max_tokens = max_tokens

        self.client = OpenAI(api_key=openai_api_key, *args, **kwargs)

    def run(
        self,
        task: str,
        local_img: str = None,
        img: str = None,
        *args,
        **kwargs,
    ):
        """
        Runs the GPT-4o model to generate a response based on the given task and image.

        Args:
            task (str): The task or user prompt for the conversation.
            local_img (str): The local path to the image file.
            img (str): The base64-encoded image data, used when no local path is given.
            *args: Additional positional arguments.
            **kwargs: Additional keyword arguments.

        Returns:
            str: The generated response from the GPT-4o model.
        """
        # Prefer a local image path when provided; otherwise fall back to
        # the base64 data passed in via `img`.
        if local_img is not None:
            img = encode_image(local_img)

        response = self.client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {
                    "role": "user",
                    "content": [
                        {"type": "text", "text": task},
                        {
                            "type": "image_url",
                            "image_url": {
                                "url": f"data:image/jpeg;base64,{img}"
                            },
                        },
                    ],
                }
            ],
            max_tokens=self.max_tokens,
            temperature=self.temperature,
        )

        return response.choices[0].message.content
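`encode_image` and the `data:image/jpeg;base64,...` URL built in `run` follow the standard base64 data-URL convention for inlining image bytes in a request. The round trip can be checked locally without any API calls (the file written here contains fake bytes, not a real JPEG):

```python
import base64
import os
import tempfile


def encode_image(image_path: str) -> str:
    # Same helper as above: read raw bytes and base64-encode them as ASCII text.
    with open(image_path, "rb") as image_file:
        return base64.b64encode(image_file.read()).decode("utf-8")


# Write a few fake "image" bytes to a temporary file and round-trip them.
with tempfile.NamedTemporaryFile(suffix=".jpg", delete=False) as f:
    f.write(b"\xff\xd8fake-jpeg-bytes")
    path = f.name

encoded = encode_image(path)
url = f"data:image/jpeg;base64,{encoded}"
assert base64.b64decode(encoded) == b"\xff\xd8fake-jpeg-bytes"
os.remove(path)
print(url[:22])  # data:image/jpeg;base64
```

Decoding the encoded string back to the original bytes confirms the encode/decode pair is lossless, which is all the API needs from the client side.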
@ -0,0 +1,97 @@
import time
from typing import List, Optional, Dict, Union
from pydantic import BaseModel, Field


class AssistantRequest(BaseModel):
    model: str = Field(
        ...,
        description="ID of the model to use. You can use the List models API to see all of your available models, or see our Model overview for descriptions of them.",
    )
    name: Optional[str] = Field(
        None,
        description="The name of the assistant. The maximum length is 256 characters.",
    )
    description: Optional[str] = Field(
        None,
        description="The description of the assistant. The maximum length is 512 characters.",
    )
    instructions: Optional[str] = Field(
        None,
        description="The system instructions that the assistant uses. The maximum length is 256,000 characters.",
    )
    tools: Optional[List[Dict[str, Optional[str]]]] = Field(
        default_factory=list,
        description="A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types code_interpreter, file_search, or function.",
    )
    tool_resources: Optional[Dict] = Field(
        None,
        description="A set of resources that are used by the assistant's tools. The resources are specific to the type of tool. For example, the code_interpreter tool requires a list of file IDs, while the file_search tool requires a list of vector store IDs.",
    )
    metadata: Optional[Dict[str, Optional[str]]] = Field(
        default_factory=dict,
        description="Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.",
    )
    temperature: Optional[float] = Field(
        1.0,
        description="What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.",
    )
    top_p: Optional[float] = Field(
        1.0,
        description="An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both.",
    )
    response_format: Optional[Union[str, Dict[str, Optional[str]]]] = Field(
        None,
        description="Specifies the format that the model must output. Compatible with GPT-4o, GPT-4 Turbo, and all GPT-3.5 Turbo models since gpt-3.5-turbo-1106. Setting to { 'type': 'json_object' } enables JSON mode, which guarantees the message the model generates is valid JSON.",
    )


class AssistantResponse(BaseModel):
    id: str = Field(
        ..., description="The unique identifier for the assistant."
    )
    object: str = Field(
        ..., description="The type of object returned, e.g., 'assistant'."
    )
    created_at: int = Field(
        default_factory=lambda: int(time.time()),
        description="The timestamp (in seconds since Unix epoch) when the assistant was created.",
    )
    name: Optional[str] = Field(
        None,
        description="The name of the assistant. The maximum length is 256 characters.",
    )
    description: Optional[str] = Field(
        None,
        description="The description of the assistant. The maximum length is 512 characters.",
    )
    model: str = Field(
        ..., description="ID of the model used by the assistant."
    )
    instructions: Optional[str] = Field(
        None,
        description="The system instructions that the assistant uses. The maximum length is 256,000 characters.",
    )
    tools: Optional[List[Dict[str, Optional[str]]]] = Field(
        default_factory=list,
        description="A list of tools enabled on the assistant.",
    )
    metadata: Optional[Dict[str, Optional[str]]] = Field(
        default_factory=dict,
        description="Set of 16 key-value pairs that can be attached to an object.",
    )
    temperature: float = Field(
        1.0, description="The sampling temperature used by the assistant."
    )
    top_p: float = Field(
        1.0,
        description="The nucleus sampling value used by the assistant.",
    )
    response_format: Optional[Union[str, Dict[str, Optional[str]]]] = Field(
        None,
        description="Specifies the format that the model outputs.",
    )
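A subtlety in the `created_at` field: a plain default such as `Field(time.time(), ...)` is evaluated exactly once, when the class body runs at import time, so every instance would share the same stale timestamp; a `default_factory` is re-evaluated for each new instance. The same rule holds for stdlib dataclasses, which makes it easy to demonstrate without Pydantic:

```python
import time
from dataclasses import dataclass, field


@dataclass
class EagerDefault:
    # Bug pattern: int(time.time()) runs once, when the class is defined.
    created_at: int = int(time.time())


@dataclass
class FactoryDefault:
    # Correct pattern: the factory runs again for every new instance.
    created_at: int = field(default_factory=lambda: int(time.time()))


a = EagerDefault()
time.sleep(1.1)
b = EagerDefault()
c = FactoryDefault()

assert a.created_at == b.created_at  # frozen at class-definition time
assert c.created_at >= a.created_at  # re-evaluated per instance
```

Pydantic's `Field(default_factory=...)` behaves the same way, which is why it is the safer choice for timestamps and other per-instance values.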