stacked swarms

Former-commit-id: 6a799acd87
discord-bot-framework
Kye 1 year ago
parent 76de7acfdb
commit 7a504623c4

@@ -140,6 +140,134 @@ print(response)
```
## Full Code
- The full code example for stacked swarms:
```python
import os

import interpreter

from swarms.agents.hf_agents import HFAgent
from swarms.agents.omni_modal_agent import OmniModalAgent
from swarms.models import OpenAIChat
from swarms.tools.autogpt import tool
from swarms.workers import Worker

# Initialize the API key
api_key = ""

# Initialize the language model.
# This model can be swapped out for Anthropic, Hugging Face models such as Mistral, etc.
llm = OpenAIChat(
    openai_api_key=api_key,
    temperature=0.5,
)


# Wrap a function with the tool decorator to make it a tool,
# then add a docstring to serve as the tool's documentation.
@tool
def hf_agent(task: str = None):
    """
    A tool that uses an OpenAI model to search for a model on Hugging Face,
    download it, and use it to respond to a task.

    Rules: Don't call this tool for simple tasks like generating a summary;
    only call it for multi-modal tasks like generating images, videos, speech, etc.
    """
    agent = HFAgent(model="text-davinci-003", api_key=api_key)
    response = agent.run(task, text="This is a very nice API!")
    return response


# Wrap a function with the tool decorator to make it a tool.
@tool
def omni_agent(task: str = None):
    """
    A tool that uses an OpenAI model to call Hugging Face models and guide them to perform a task.

    Rules: Don't call this tool for simple tasks like generating a summary;
    only call it for multi-modal tasks like generating images, videos, or speech.

    Tasks the omni agent is good for:
    --------------
    document-question-answering
    image-captioning
    image-question-answering
    image-segmentation
    speech-to-text
    summarization
    text-classification
    text-question-answering
    translation
    huggingface-tools/text-to-image
    huggingface-tools/text-to-video
    text-to-speech
    huggingface-tools/text-download
    huggingface-tools/image-transformation
    """
    agent = OmniModalAgent(llm)
    response = agent.run(task)
    return response


# Code Interpreter tool
@tool
def compile(task: str):
    """
    Open Interpreter lets LLMs run code (Python, JavaScript, Shell, and more) locally.
    You can chat with Open Interpreter through a ChatGPT-like interface in your terminal
    by running $ interpreter after installing.

    This provides a natural-language interface to your computer's general-purpose capabilities:
    - Create and edit photos, videos, PDFs, etc.
    - Control a Chrome browser to perform research
    - Plot, clean, and analyze large datasets
    ...etc.

    ⚠️ Note: You'll be asked to approve code before it's run.

    Rules: Only use when asked to generate code or an application of some kind.
    """
    # Configure Open Interpreter before running the task;
    # environment variables must be strings.
    os.environ["INTERPRETER_CLI_AUTO_RUN"] = "True"
    os.environ["INTERPRETER_CLI_FAST_MODE"] = "True"
    os.environ["INTERPRETER_CLI_DEBUG"] = "True"

    messages = interpreter.chat(task, return_messages=True)
    interpreter.reset()
    return messages


# Collect the tools in a list
tools = [hf_agent, omni_agent, compile]

# Initialize a single Worker node with the tools defined above,
# in addition to its predefined tools.
node = Worker(
    llm=llm,
    ai_name="Optimus Prime",
    openai_api_key=api_key,
    ai_role="Worker in a swarm",
    external_tools=tools,
    human_in_the_loop=False,
    temperature=0.5,
)

# Specify the task
task = "What were the winning Boston Marathon times for the past 5 years (ending in 2022)? Generate a table of the year, name, country of origin, and times."

# Run the node on the task
response = node.run(task)

# Print the response
print(response)
```
## 8. Conclusion
In this tutorial, we explored a system designed to harness AI models and tools for a wide range of tasks. We walked through the code, dissected its components, and saw how these elements come together to form a versatile, modular, and powerful swarm-based AI system.

@@ -0,0 +1,285 @@
# Swarms Documentation
## Table of Contents
1. [Introduction](#introduction)
2. [Overview](#overview)
3. [Class Definition](#class-definition)
- [Mistral Class](#mistral-class)
- [Initialization Parameters](#initialization-parameters)
4. [Functionality and Usage](#functionality-and-usage)
- [Loading the Model](#loading-the-model)
- [Running the Model](#running-the-model)
- [Chatting with the Agent](#chatting-with-the-agent)
5. [Additional Information](#additional-information)
6. [Examples](#examples)
- [Example 1: Initializing Mistral](#example-1-initializing-mistral)
- [Example 2: Running a Task](#example-2-running-a-task)
- [Example 3: Chatting with the Agent](#example-3-chatting-with-the-agent)
7. [References and Resources](#references-and-resources)
---
## 1. Introduction <a name="introduction"></a>
Welcome to the documentation for Mistral, a powerful language model-based AI agent. Mistral leverages the capabilities of large language models to generate text-based responses to queries and tasks. This documentation provides a comprehensive guide to understanding and using the Mistral AI agent.
### 1.1 Purpose
Mistral is designed to assist users by generating coherent and contextually relevant text based on user inputs or tasks. It can be used for various natural language understanding and generation tasks, such as chatbots, text completion, question answering, and content generation.
### 1.2 Key Features
- Utilizes large pre-trained language models.
- Supports GPU acceleration for faster processing.
- Provides an easy-to-use interface for running tasks and engaging in chat-based conversations.
- Offers fine-grained control over response generation through temperature and maximum length settings.
---
## 2. Overview <a name="overview"></a>
Before diving into the details of the Mistral AI agent, let's provide an overview of its purpose and functionality.
Mistral is built on top of powerful pre-trained language models, such as Mistral-7B. It allows you to:
- Generate text-based responses to tasks and queries.
- Control the temperature of response generation for creativity.
- Set a maximum length for generated responses.
- Engage in chat-based conversations with the AI agent.
- Utilize GPU acceleration for faster inference.
In the following sections, we will explore the class definition, its initialization parameters, and how to use Mistral effectively.
---
## 3. Class Definition <a name="class-definition"></a>
Mistral consists of a single class, the `Mistral` class. This class provides methods for initializing the agent, loading the pre-trained model, and running tasks.
### 3.1 Mistral Class <a name="mistral-class"></a>
```python
class Mistral:
    """
    Mistral

    model = Mistral(device="cuda", use_flash_attention=True, temperature=0.7, max_length=200)
    task = "My favorite condiment is"
    result = model.run(task)
    print(result)
    """

    def __init__(
        self,
        ai_name: str = "Node Model Agent",
        system_prompt: str = None,
        model_name: str = "mistralai/Mistral-7B-v0.1",
        device: str = "cuda",
        use_flash_attention: bool = False,
        temperature: float = 1.0,
        max_length: int = 100,
        do_sample: bool = True,
    ):
        """
        Initializes the Mistral AI agent.

        Parameters:
        - ai_name (str): The name or identifier of the AI agent. Default: "Node Model Agent".
        - system_prompt (str): A system-level prompt for context (e.g., conversation history). Default: None.
        - model_name (str): The name of the pre-trained language model to use. Default: "mistralai/Mistral-7B-v0.1".
        - device (str): The device for model inference, such as "cuda" or "cpu". Default: "cuda".
        - use_flash_attention (bool): If True, enables flash attention for faster inference. Default: False.
        - temperature (float): A value controlling the creativity of generated text. Default: 1.0.
        - max_length (int): The maximum length of generated text. Default: 100.
        - do_sample (bool): If True, uses sampling for text generation. Default: True.
        """
```
### 3.2 Initialization Parameters <a name="initialization-parameters"></a>
- `ai_name` (str): The name or identifier of the AI agent. This name can be used to distinguish between different agents if multiple instances are used. The default value is "Node Model Agent".
- `system_prompt` (str): A system-level prompt that provides context for the AI agent. This can be useful for maintaining a conversation history or providing context for the current task. By default, it is set to `None`.
- `model_name` (str): The name of the pre-trained language model to use. The default value is "mistralai/Mistral-7B-v0.1", which points to a specific version of the Mistral model.
- `device` (str): The device on which the model should perform inference. You can specify "cuda" for GPU acceleration or "cpu" for CPU-only inference. The default is "cuda", assuming GPU availability.
- `use_flash_attention` (bool): If set to `True`, Mistral uses flash attention for faster inference. This is beneficial when low-latency responses are required. The default is `False`.
- `temperature` (float): The temperature parameter controls the creativity of the generated text. Higher values (e.g., 1.0) produce more random output, while lower values (e.g., 0.7) make the output more focused and deterministic. The default value is 1.0.
- `max_length` (int): This parameter sets the maximum length of the generated text. It helps control the length of responses. The default value is 100.
- `do_sample` (bool): If set to `True`, Mistral uses sampling during text generation. Sampling introduces randomness into the generated text. The default is `True`.
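Putting these together, here is a minimal sketch of an initialization that touches each parameter (the values are illustrative only):
```python
from swarms.models import Mistral

model = Mistral(
    ai_name="Research Assistant",  # label for this agent instance
    system_prompt=None,  # optional system-level context
    model_name="mistralai/Mistral-7B-v0.1",
    device="cuda",  # or "cpu" for CPU-only inference
    use_flash_attention=False,
    temperature=0.7,  # lower values give more focused output
    max_length=150,  # cap on the length of generated text
    do_sample=True,  # sample rather than decode greedily
)
```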
---
## 4. Functionality and Usage <a name="functionality-and-usage"></a>
Now that we've introduced the Mistral class and its parameters, let's explore how to use Mistral for various tasks.
### 4.1 Loading the Model <a name="loading-the-model"></a>
The `Mistral` class handles the loading of the pre-trained language model during initialization. You do not need to explicitly load the model. Simply create an instance of `Mistral`, and it will take care of loading the model into memory.
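For instance, constructing the agent is enough to load the weights; a minimal sketch:
```python
from swarms.models import Mistral

# The pre-trained model is downloaded (if necessary) and loaded into
# memory here, at construction time; no separate load call is needed.
model = Mistral(device="cuda")
```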
### 4.2 Running the Model <a name="running-the-model"></a>
Mistral provides two methods for running the model:
#### 4.2.1 `run` Method
The `run` method is used to generate text-based responses to a given task or input. It takes a single string parameter, `task`, and returns the generated text as a string.
```python
def run(self, task: str) -> str:
    """
    Run the model on a given task.

    Parameters:
    - task (str): The task or query for which to generate a response.

    Returns:
    - str: The generated text response.
    """
```
Example:
```python
from swarms.models import Mistral
model = Mistral()
task = "Translate the following English text to French: 'Hello, how are you?'"
result = model.run(task)
print(result)
```
#### 4.2.2 `__call__` Method
The `__call__` method provides a more concise way to run the model on a given task. You can use it by simply calling the `Mistral` instance with a task string.
Example:
```python
model = Mistral()
task = "Generate a summary of the latest research paper on AI ethics."
result = model(task)
print(result)
```
### 4.3 Chatting with the Agent <a name="chatting-with-the-agent"></a>
Mistral supports chat-based interactions with the AI agent. You can send a series of messages to the agent, and it will respond accordingly. The `chat` method handles these interactions.
#### `chat` Method
The `chat` method allows you to engage in chat-based conversations with the AI agent. You can send messages to the agent, and it will respond with text-based messages.
```python
def chat(self, msg: str = None, streaming: bool = False) -> str:
    """
    Run a chat conversation with the agent.

    Parameters:
    - msg (str, optional): The message to send to the agent. Defaults to None.
    - streaming (bool, optional): Whether to stream the response token by token. Defaults to False.

    Returns:
    - str: The response from the agent.
    """
```
Example:
```python
model = Mistral()
conversation = [
    "Tell me a joke.",
    "What's the weather like today?",
    "Translate 'apple' to Spanish.",
]

for user_message in conversation:
    response = model.chat(user_message)
    print(f"User: {user_message}")
    print(f"Agent: {response}")
```
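The `streaming` flag requests token-by-token output. A minimal sketch, assuming the method still returns the complete response text once generation finishes (the exact streaming contract is not specified above):
```python
model = Mistral()

# Ask for a streamed response; we assume here that the full text is
# still returned after generation completes.
response = model.chat("Write a haiku about swarms.", streaming=True)
print(response)
```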
---
## 5. Additional Information <a name="additional-information"></a>
Here are some additional tips and information for using Mistral effectively:
- Mistral uses a specific pre-trained model ("mistralai/Mistral-7B-v0.1" by default). You can explore other available models and choose one that best suits your task.
- The `temperature` parameter controls the randomness of generated text. Experiment with different values to achieve the desired level of creativity in responses.
- Be cautious with `max_length`, especially if you set it to a very high value, as it may lead to excessively long responses.
- Ensure that you have the required libraries, such as `torch` and `transformers`, installed to use Mistral successfully.
- Consider providing a system-level prompt when engaging in chat-based conversations to give the agent context, as shown in the sketch below.
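A minimal sketch of that last tip, assuming `system_prompt` accepts a plain instruction string:
```python
from swarms.models import Mistral

# Ground the chat session with a system-level prompt.
model = Mistral(
    ai_name="Support Agent",
    system_prompt="You are a concise assistant for the Swarms library.",
)

response = model.chat("How do I run a task?")
print(f"Agent: {response}")
```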
---
## 6. Examples <a name="examples"></a>
In this section, we provide practical examples to illustrate how to use Mistral for various tasks.
### 6.1 Example 1: Initializing Mistral <a name="example-1-initializing-mistral"></a>
In this example, we initialize the Mistral AI agent with custom settings:
```python
model = Mistral(
    ai_name="My AI Assistant",
    device="cpu",
    temperature=0.8,
    max_length=150,
)
```
### 6.2 Example 2: Running a Task <a name="example-2-running-a-task"></a>
Here, we run a text generation task using Mistral:
```python
model = Mistral()
task = "Summarize the main findings of the recent climate change report."
result = model.run(task)
print(result)
```
### 6.3 Example 3: Chatting with the Agent <a name="example-3-chatting-with-the-agent"></a>
Engage in a chat-based conversation with Mistral:
```python
model = Mistral()
conversation = [
    "Tell me a joke.",
    "What's the latest news?",
    "Translate 'cat' to French.",
]

for user_message in conversation:
    response = model.chat(user_message)
    print(f"User: {user_message}")
    print(f"Agent: {response}")
```
---
## 7. References and Resources <a name="references-and-resources"></a>
Here are some references and resources for further information on Mistral and related topics:
- [Mistral GitHub Repository](https://github.com/mistralai/mistral): Official Mistral repository for updates and contributions.
- [Hugging Face Transformers](https://huggingface.co/transformers/): Documentation and models for various transformers, including Mistral's parent models.
- [PyTorch Official Website](https://pytorch.org/): Official website for PyTorch, the deep learning framework used in Mistral.
This concludes the documentation for the Mistral AI agent. You now have a comprehensive understanding of how to use Mistral for text generation and chat-based interactions. If you have any further questions or need assistance, please refer to the provided references and resources. Happy AI modeling!

@@ -4,7 +4,7 @@ build-backend = "poetry.core.masonry.api"
[tool.poetry]
name = "swarms"
-version = "1.8.0"
+version = "1.8.1"
description = "Swarms - Pytorch"
license = "MIT"
authors = ["Kye Gomez <kye@apac.ai>"]

@@ -6,9 +6,23 @@ from swarms.agents.message import Message
class Mistral:
    """
    Mistral

    Mistral is an all-new LLM.

    Args:
        ai_name (str, optional): Name of the AI. Defaults to "Mistral".
        system_prompt (str, optional): System prompt. Defaults to None.
        model_name (str, optional): Model name. Defaults to "mistralai/Mistral-7B-v0.1".
        device (str, optional): Device to use. Defaults to "cuda".
        use_flash_attention (bool, optional): Whether to use flash attention. Defaults to False.
        temperature (float, optional): Temperature. Defaults to 1.0.
        max_length (int, optional): Max length. Defaults to 100.
        do_sample (bool, optional): Whether to sample. Defaults to True.

    Usage:
        from swarms.models import Mistral

        model = Mistral(device="cuda", use_flash_attention=True, temperature=0.7, max_length=200)
        task = "My favorite condiment is"
        result = model.run(task)
        print(result)
    """
