parent
9865024eaf
commit
883505253f
@@ -0,0 +1,185 @@
# Tutorial: Understanding and Utilizing Worker Examples

## Table of Contents

1. Introduction
2. Code Overview
   - Import Statements
   - Initializing API Key and Language Model
   - Creating Swarm Tools
   - Appending Tools to a List
   - Initializing a Worker Node
3. Understanding the `hf_agent` Tool
4. Understanding the `omni_agent` Tool
5. Understanding the `compile` Tool
6. Running a Swarm
7. Interactive Examples
   - Example 1: Initializing API Key and Language Model
   - Example 2: Using the `hf_agent` Tool
   - Example 3: Using the `omni_agent` Tool
   - Example 4: Using the `compile` Tool
8. Conclusion

## 1. Introduction

The provided code builds a system around a worker node that draws on several AI models and tools to perform tasks. This tutorial breaks the code down step by step, explaining its components, how they work together, and how to use the system's modularity for a variety of tasks.

## 2. Code Overview

### Import Statements

The code begins with import statements that bring in the required modules and classes. Key imports include the `OpenAIChat` class, which wraps an OpenAI language model, and several custom agents and tools from the `swarms` package.

```python
import os

import interpreter  # the open-interpreter package

from swarms.agents.hf_agents import HFAgent
from swarms.agents.omni_modal_agent import OmniModalAgent
from swarms.models import OpenAIChat
from swarms.tools.autogpt import tool
from swarms.workers import Worker
```

### Initializing API Key and Language Model

Here an API key is set and a language model (`OpenAIChat`) is created. The model generates text from the input it is given; `temperature=0.5` trades off determinism against creativity.

```python
# Initialize the API key
api_key = "YOUR_OPENAI_API_KEY"

# Initialize the language model
llm = OpenAIChat(
    openai_api_key=api_key,
    temperature=0.5,
)
```

### Creating Swarm Tools

The code defines three tools: `hf_agent`, `omni_agent`, and `compile` (their definitions are walked through in Sections 3-5). Each tool encapsulates a specific capability that the worker can invoke to perform tasks.

### Appending Tools to a List

All of the defined tools are appended to a list called `tools`. The list is passed to the worker node at initialization, giving the node access to every tool in it.

```python
# Append tools to a list
tools = [
    hf_agent,
    omni_agent,
    compile,
]
```

### Initializing a Worker Node

A worker node is initialized using the `Worker` class. The node is equipped with the language model, a name, the API key, and the list of tools, and it is configured to run without human intervention (`human_in_the_loop=False`).

```python
# Initialize a single Worker node with the tools defined above
# in addition to its predefined tools
node = Worker(
    llm=llm,
    ai_name="Optimus Prime",
    openai_api_key=api_key,
    ai_role="Worker in a swarm",
    external_tools=tools,
    human_in_the_loop=False,
    temperature=0.5,
)
```

## 3. Understanding the `hf_agent` Tool

The `hf_agent` tool wraps a Hugging Face agent driven by an OpenAI model (`text-davinci-003`). It takes a task description as input and returns a response, and it is intended for multi-modal tasks such as generating images, video, or speech. Its one rule: do not use it for simple text tasks such as generating summaries.

```python
@tool
def hf_agent(task: str = None):
    # Create an HFAgent instance with the specified model and API key
    agent = HFAgent(model="text-davinci-003", api_key=api_key)
    # Run the agent with the provided task and optional text input
    # (the Spanish sample text means "This is a very nice API!")
    response = agent.run(task, text="¡Este es un API muy agradable!")
    return response
```

## 4. Understanding the `omni_agent` Tool

The `omni_agent` tool is more versatile: it uses `llm` to orchestrate Hugging Face models for multi-modal tasks such as document question answering, image captioning, and summarization. Like `hf_agent`, it should not be used for simple tasks.

```python
@tool
def omni_agent(task: str = None):
    # Create an OmniModalAgent instance backed by the language model
    agent = OmniModalAgent(llm)
    # Run the agent with the provided task
    response = agent.run(task)
    return response
```

## 5. Understanding the `compile` Tool

The `compile` tool executes code locally and supports several languages, including Python, JavaScript, and Shell. It provides a natural-language interface to your computer's capabilities: you can chat with it in a terminal-like interface to create and edit files, control a browser, and more.

```python
@tool
def compile(task: str):
    # Configure the interpreter before use; environment variables must be strings
    os.environ["INTERPRETER_CLI_AUTO_RUN"] = "True"
    os.environ["INTERPRETER_CLI_FAST_MODE"] = "True"
    os.environ["INTERPRETER_CLI_DEBUG"] = "True"

    # Chat with the local interpreter and return its messages
    response = interpreter.chat(task, return_messages=True)
    interpreter.reset()  # clear the conversation so the next task starts fresh
    return response
```

## 6. Running a Swarm

With the tools defined and the worker node initialized, a task is passed to the node. The node runs the task, and the response is printed to the console.

```python
# Specify the task
task = "What were the winning Boston Marathon times for the past 5 years (ending in 2022)? Generate a table of the year, name, country of origin, and times."

# Run the node on the task
response = node.run(task)

# Print the response
print(response)
```
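
## 7. Interactive Examples

The short examples below exercise each block from the sections above. They are sketches rather than verified transcripts: they assume the imports, `api_key`, `llm`, the three tools, and `node` from earlier sections are in scope, that the LangChain-style `@tool` wrapper exposes a `.run(...)` method, and that the file names are hypothetical. Outputs will vary with the underlying models.

### Example 1: Initializing API Key and Language Model

```python
llm = OpenAIChat(openai_api_key=api_key, temperature=0.5)
print(llm("Summarize what a swarm of worker nodes is in one sentence."))
```

### Example 2: Using the `hf_agent` Tool

```python
# Multi-modal task: ask the Hugging Face agent for speech output
print(hf_agent.run("Generate speech that reads the provided text aloud."))
```

### Example 3: Using the `omni_agent` Tool

```python
# Multi-modal task: caption a (hypothetical) local image
print(omni_agent.run("Caption the image at ./photo.png"))
```

### Example 4: Using the `compile` Tool

```python
# Execute code locally through the natural-language interface
print(compile.run("Create a file named hello.txt containing 'Hello, swarm!'"))
```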

## 8. Conclusion

In this tutorial we worked through the worker examples step by step: we dissected the code, examined each component, and explored how to leverage the system's modularity for a variety of AI tasks. To wrap up, let's recap what we've learned, revisit the worker node, and consider where this system could go next.

### Recap of What We've Learned

Throughout this tutorial we covered the following key concepts and components:

- **Code structure:** We dissected the provided code, from the import statements through the initialization of the API key and language model to the definition of the swarm tools.
- **Swarm tools:** We introduced three tools (`hf_agent`, `omni_agent`, and `compile`), each serving a specific purpose, from generating text to orchestrating Hugging Face models and running code locally.
- **Worker node:** We explored the worker node, the pivotal component of the system: it combines a language model, an API key, and a set of tools to perform tasks autonomously, without human intervention.
- **Interactive examples:** We walked through short examples showing how to initialize the system, generate text, answer questions, and execute code.

### Recap of the Worker Node

The worker node is the heart of this system. It acts as a digital assistant, drawing on its tools and models to accomplish tasks. Its core features and benefits:

- **Modularity:** The worker node can seamlessly integrate new tools, models, and APIs, making it adaptable to a wide range of tasks and applications.
- **Automation:** Once set up, the worker node performs tasks autonomously, reducing the need for manual intervention and boosting productivity and efficiency.
- **Versatility:** By switching between tools and models, the worker node can tackle diverse tasks, from generating creative content to executing code and answering questions.
- **Scalability:** The system can be scaled by adding worker nodes, enabling concurrent task processing and larger workloads.
- **Consistency:** The worker node applies each tool's rules and procedures uniformly, ensuring reliable and reproducible results.

### The Future of the Worker Node

Several developments could extend the capabilities of the worker node and the broader system:

- **Enhanced natural language understanding**, so the node can comprehend and respond to more complex, context-aware queries.
- **Improved multi-modal integration**, letting the node combine text, images, audio, and video into richer responses for content generation, summarization, and transformation.
- **An expanded tool ecosystem**, adding specialized tools for data analysis, machine-learning model training, and data visualization.
- **Learning and adaptation**, so the node can learn from user interactions, improve over time, and adapt to individual user preferences.
@@ -1,9 +1,6 @@
 from swarms.models.openai_models import OpenAIChat

-openai = OpenAIChat(
-    openai_api_key="",
-    verbose=False
-)
+openai = OpenAIChat(openai_api_key="", verbose=False)

 chat = openai("Are quantum fields everywhere?")
 print(chat)
@@ -0,0 +1,5 @@
from swarms.chunkers.base import BaseChunker
from swarms.chunkers.chunk_seperator import ChunkSeparator
from swarms.chunkers.markdown import MarkdownChunker
from swarms.chunkers.text import TextChunker
from swarms.chunkers.pdf import PdfChunker
@@ -0,0 +1,113 @@
from __future__ import annotations

from abc import ABC
from typing import Optional

from attr import define, field, Factory
from griptape.artifacts import TextArtifact
from griptape.tokenizers import OpenAiTokenizer

from swarms.chunkers.chunk_seperator import ChunkSeparator


@define
class BaseChunker(ABC):
    """
    Base Chunker

    A chunker splits a text into smaller chunks that can be processed by a
    language model.

    Usage:
    --------------
    Subclass and override DEFAULT_SEPARATORS (see MarkdownChunker,
    PdfChunker, and TextChunker), then call `chunk(text)` to get a list
    of TextArtifacts.
    """

    DEFAULT_SEPARATORS = [ChunkSeparator(" ")]

    separators: list[ChunkSeparator] = field(
        default=Factory(lambda self: self.DEFAULT_SEPARATORS, takes_self=True),
        kw_only=True,
    )
    tokenizer: OpenAiTokenizer = field(
        default=Factory(
            lambda: OpenAiTokenizer(
                model=OpenAiTokenizer.DEFAULT_OPENAI_GPT_3_CHAT_MODEL
            )
        ),
        kw_only=True,
    )
    max_tokens: int = field(
        default=Factory(lambda self: self.tokenizer.max_tokens, takes_self=True),
        kw_only=True,
    )

    def chunk(self, text: TextArtifact | str) -> list[TextArtifact]:
        text = text.value if isinstance(text, TextArtifact) else text

        return [TextArtifact(c) for c in self._chunk_recursively(text)]

    def _chunk_recursively(
        self, chunk: str, current_separator: Optional[ChunkSeparator] = None
    ) -> list[str]:
        token_count = self.tokenizer.token_count(chunk)

        # Base case: the chunk already fits within the token limit.
        if token_count <= self.max_tokens:
            return [chunk]
        else:
            balance_index = -1
            balance_diff = float("inf")
            tokens_count = 0
            half_token_count = token_count // 2

            # Once a separator has been used, retry only it and the
            # finer-grained separators that follow it in the list.
            if current_separator:
                separators = self.separators[self.separators.index(current_separator) :]
            else:
                separators = self.separators

            for separator in separators:
                subchunks = list(filter(None, chunk.split(separator.value)))

                if len(subchunks) > 1:
                    for index, subchunk in enumerate(subchunks):
                        # Re-attach the separator so token counts reflect
                        # the text that will actually be emitted.
                        if separator.is_prefix:
                            subchunk = separator.value + subchunk
                        else:
                            subchunk = subchunk + separator.value

                        tokens_count += self.tokenizer.token_count(subchunk)

                        # Track the split point closest to the token midpoint.
                        if abs(tokens_count - half_token_count) < balance_diff:
                            balance_index = index
                            balance_diff = abs(tokens_count - half_token_count)

                    # Rebuild the two halves around the balance point.
                    if separator.is_prefix:
                        first_subchunk = separator.value + separator.value.join(
                            subchunks[: balance_index + 1]
                        )
                        second_subchunk = separator.value + separator.value.join(
                            subchunks[balance_index + 1 :]
                        )
                    else:
                        first_subchunk = (
                            separator.value.join(subchunks[: balance_index + 1])
                            + separator.value
                        )
                        second_subchunk = separator.value.join(
                            subchunks[balance_index + 1 :]
                        )

                    # Recurse into each half with the same separator.
                    first_subchunk_rec = self._chunk_recursively(
                        first_subchunk.strip(), separator
                    )
                    second_subchunk_rec = self._chunk_recursively(
                        second_subchunk.strip(), separator
                    )

                    if first_subchunk_rec and second_subchunk_rec:
                        return first_subchunk_rec + second_subchunk_rec
                    elif first_subchunk_rec:
                        return first_subchunk_rec
                    elif second_subchunk_rec:
                        return second_subchunk_rec
                    else:
                        return []
            return []
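
A minimal usage sketch (assuming the `griptape` dependency is installed; the sample text and `max_tokens` value are arbitrary):

```python
from swarms.chunkers.base import BaseChunker

# The default separator is a single space, so long text is split on word
# boundaries near the token midpoint until every chunk fits.
chunker = BaseChunker(max_tokens=100)
chunks = chunker.chunk("lorem ipsum " * 1000)
print(len(chunks), chunks[0].value[:40])
```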
@@ -0,0 +1,7 @@
from dataclasses import dataclass


@dataclass
class ChunkSeparator:
    value: str
    is_prefix: bool = False
@@ -0,0 +1,17 @@
from swarms.chunkers.base import BaseChunker
from swarms.chunkers.chunk_seperator import ChunkSeparator


class MarkdownChunker(BaseChunker):
    DEFAULT_SEPARATORS = [
        ChunkSeparator("##", is_prefix=True),
        ChunkSeparator("###", is_prefix=True),
        ChunkSeparator("####", is_prefix=True),
        ChunkSeparator("#####", is_prefix=True),
        ChunkSeparator("######", is_prefix=True),
        ChunkSeparator("\n\n"),
        ChunkSeparator(". "),
        ChunkSeparator("! "),
        ChunkSeparator("? "),
        ChunkSeparator(" "),
    ]
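
The heading markers are declared with `is_prefix=True` because, when a chunk is split on `##`, the marker is re-attached to the front of each piece, so every chunk keeps its heading. A small sketch of this behavior (the sample document and `max_tokens` value are made up):

```python
from swarms.chunkers.markdown import MarkdownChunker

doc = "## Intro\nShort intro.\n\n## Details\n" + "Many details here. " * 400
for chunk in MarkdownChunker(max_tokens=200).chunk(doc):
    print(chunk.value[:40])  # each chunk starts with its "## ..." heading
```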
@@ -0,0 +1,12 @@
from swarms.chunkers.base import BaseChunker
from swarms.chunkers.chunk_seperator import ChunkSeparator


class PdfChunker(BaseChunker):
    DEFAULT_SEPARATORS = [
        ChunkSeparator("\n\n"),
        ChunkSeparator(". "),
        ChunkSeparator("! "),
        ChunkSeparator("? "),
        ChunkSeparator(" "),
    ]
@@ -0,0 +1,13 @@
from swarms.chunkers.base import BaseChunker
from swarms.chunkers.chunk_seperator import ChunkSeparator


class TextChunker(BaseChunker):
    DEFAULT_SEPARATORS = [
        ChunkSeparator("\n\n"),
        ChunkSeparator("\n"),
        ChunkSeparator(". "),
        ChunkSeparator("! "),
        ChunkSeparator("? "),
        ChunkSeparator(" "),
    ]
@@ -1,5 +1,6 @@
 from swarms.models.anthropic import Anthropic
 from swarms.models.petals import Petals
 from swarms.models.mistral import Mistral

+# from swarms.models.openai_llm import OpenAIModel
 from swarms.models.openai_models import OpenAI, AzureOpenAI, OpenAIChat