documentation fix

Former-commit-id: a2263ead2f
discord-bot-framework
Kye 1 year ago
parent a5f7a980d8
commit 16f05381b5

@ -0,0 +1,249 @@
# `GodMode` Documentation
## Table of Contents
1. [Understanding the Purpose](#understanding-the-purpose)
2. [Overview and Introduction](#overview-and-introduction)
3. [Class Definition](#class-definition)
4. [Functionality and Usage](#functionality-and-usage)
5. [Additional Information](#additional-information)
6. [Examples](#examples)
7. [Conclusion](#conclusion)
## 1. Understanding the Purpose <a name="understanding-the-purpose"></a>
Let's begin by understanding the purpose and functionality of the `GodMode` class.
### Purpose and Functionality
`GodMode` is a class designed to facilitate the orchestration of multiple Large Language Models (LLMs) to perform tasks simultaneously. It serves as a powerful tool for managing these models, distributing tasks to them, and collecting their responses.
Key features and functionality include:
- **Parallel Task Execution**: `GodMode` can distribute tasks to multiple LLMs and execute them in parallel, improving efficiency and reducing response time.
- **Structured Response Presentation**: The class presents the responses from LLMs in a structured tabular format, making it easy for users to compare and analyze the results.
- **Task History Tracking**: `GodMode` keeps a record of tasks that have been submitted, allowing users to review previous tasks and responses.
- **Asynchronous Execution**: The class provides options for asynchronous task execution, which can be particularly useful for handling a large number of tasks.
Now that we have an understanding of its purpose, let's proceed to provide a detailed overview and introduction.
## 2. Overview and Introduction <a name="overview-and-introduction"></a>
### Overview
The `GodMode` class is a crucial component for managing and utilizing multiple LLMs in various natural language processing (NLP) tasks. Its architecture and functionality are designed to address the need for parallel processing and efficient response handling.
### Importance and Relevance
In the rapidly evolving field of NLP, it has become common to use multiple language models to achieve better results in tasks such as translation, summarization, and question answering. `GodMode` streamlines this process by allowing users to harness the capabilities of several LLMs simultaneously.
Key points:
- **Parallel Processing**: `GodMode` leverages multithreading to execute tasks concurrently, significantly reducing the time required for processing.
- **Response Visualization**: The class presents responses in a structured tabular format, enabling users to visualize and analyze the outputs from different LLMs.
- **Task Tracking**: Developers can track the history of tasks submitted to `GodMode`, making it easier to manage and monitor ongoing work.
### Architecture and How It Works
The architecture and working of `GodMode` can be summarized in four steps:
1. **Task Reception**: `GodMode` receives a task from the user.
2. **Task Distribution**: The class distributes the task to all registered LLMs.
3. **Response Collection**: `GodMode` collects the responses generated by the LLMs.
4. **Response Presentation**: Finally, the class presents the responses from all LLMs in a structured tabular format, making it easy for users to compare and analyze the results.
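As a rough sketch of this flow, assuming each registered LLM is a callable that accepts a task string (as in the examples later in this document), the distribution and collection steps might look like the following; `GodMode.run` performs the equivalent work internally:
```python
from concurrent.futures import ThreadPoolExecutor

def distribute_task(llms, task):
    # Steps 2-3: fan the task out to every LLM in parallel and collect the replies
    with ThreadPoolExecutor() as executor:
        responses = list(executor.map(lambda llm: llm(task), llms))
    # Step 4: pair each response with a model label, ready for tabular display
    return [[f"LLM {i + 1}", response] for i, response in enumerate(responses)]
```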
Now that we have an overview, let's proceed with a detailed class definition.
## 3. Class Definition <a name="class-definition"></a>
### Class Attributes
- `llms`: A list of LLMs (Large Language Models) that `GodMode` manages.
- `last_responses`: Stores the responses from the most recent task.
- `task_history`: Keeps a record of all tasks submitted to `GodMode`.
### Methods
The `GodMode` class defines various methods to facilitate task distribution, execution, and response presentation. Let's examine some of the key methods:
- `run(task)`: Distributes a task to all LLMs, collects responses, and returns them.
- `print_responses(task)`: Prints responses from all LLMs in a structured tabular format.
- `run_all(task)`: Runs the task on all LLMs sequentially and returns responses.
- `arun_all(task)`: Asynchronously runs the task on all LLMs and returns responses.
- `print_arun_all(task)`: Prints responses from all LLMs after asynchronous execution.
- `save_responses_to_file(filename)`: Saves responses to a file for future reference.
- `load_llms_from_file(filename)`: Loads LLMs from a file, making it easy to configure `GodMode` for different tasks.
- `get_task_history()`: Retrieves the task history, allowing users to review previous tasks.
- `summary()`: Provides a summary of task history and the last responses, aiding in post-processing and analysis.
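As a quick end-to-end illustration (assuming `llms` is a list of callable models and that submitted tasks are recorded in the history as described above):
```python
god_mode = GodMode(llms)
god_mode.run("Explain the theory of relativity in one sentence.")
god_mode.summary()  # prints the task history followed by the last responses
```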
Now that we have covered the class definition, let's delve into the functionality and usage of `GodMode`.
## 4. Functionality and Usage <a name="functionality-and-usage"></a>
### Distributing a Task and Collecting Responses
One of the primary use cases of `GodMode` is to distribute a task to all registered LLMs and collect their responses. This can be achieved using the `run(task)` method. Below is an example:
```python
god_mode = GodMode(llms)
responses = god_mode.run("Translate the following English text to French: 'Hello, how are you?'")
```
### Printing Responses
To present the responses from all LLMs in a structured tabular format, use the `print_responses(task)` method. Example:
```python
god_mode.print_responses("Summarize the main points of 'War and Peace.'")
```
### Saving Responses to a File
Users can save the responses to a file using the `save_responses_to_file(filename)` method. This is useful for archiving and reviewing responses later. Example:
```python
god_mode.save_responses_to_file("responses.txt")
```
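The complementary `load_llms_from_file(filename)` classmethod constructs a `GodMode` instance from a file that lists one LLM entry per line. Note that, as implemented, each line is loaded as a plain string, so mapping those entries back to model objects is left to the caller. A minimal sketch, using an illustrative filename:
```python
# "llms.txt" is a hypothetical file with one LLM entry per line
god_mode = GodMode.load_llms_from_file("llms.txt")
```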
### Task History
The `GodMode` class keeps track of the task history. Developers can access the task history using the `get_task_history()` method. Example:
```python
task_history = god_mode.get_task_history()
for i, task in enumerate(task_history):
print(f"Task {i + 1}: {task}")
```
## 5. Additional Information <a name="additional-information"></a>
### Parallel Execution
`GodMode` employs multithreading to execute tasks concurrently. This parallel processing capability significantly improves the efficiency of handling multiple tasks simultaneously.
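For example, `arun_all(task)` fans a task out across all LLMs via a thread pool, and `print_arun_all(task)` runs the task and renders the collected responses in the usual table:
```python
task = "List three renewable energy sources."
responses = god_mode.arun_all(task)  # list of responses, one per LLM
god_mode.print_arun_all(task)        # re-runs the task and prints a response table
```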
### Response Visualization
The structured tabular format used for presenting responses simplifies the comparison and analysis of outputs from different LLMs.
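Under the hood, the class builds its tables with the `tabulate` package and colors them with `colored` (conventionally from `termcolor`, which is an assumption here). A minimal reproduction of the display format, using made-up responses:
```python
from tabulate import tabulate
from termcolor import colored  # assumption: colored comes from termcolor

table = [["LLM 1", "Bonjour, comment ça va ?"], ["LLM 2", "Salut !"]]
print(colored(tabulate(table, headers=["LLM", "Response"], tablefmt="pretty"), "cyan"))
```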
## 6. Examples <a name="examples"></a>
Let's explore additional usage examples to illustrate the versatility of `GodMode` in handling various NLP tasks.
### Example 1: Sentiment Analysis
```python
from swarms.models import OpenAIChat
from swarms.swarms import GodMode
from swarms.workers.worker import Worker

# Create an instance of an LLM for sentiment analysis
llm = OpenAIChat(model_name="gpt-4", openai_api_key="api-key", temperature=0.5)

# Create worker agents
worker1 = Worker(
    llm=llm,
    ai_name="Bumble Bee",
    ai_role="Worker in a swarm",
    external_tools=None,
    human_in_the_loop=False,
    temperature=0.5,
)
worker2 = Worker(
    llm=llm,
    ai_name="Optimus Prime",
    ai_role="Worker in a swarm",
    external_tools=None,
    human_in_the_loop=False,
    temperature=0.5,
)
worker3 = Worker(
    llm=llm,
    ai_name="Megatron",
    ai_role="Worker in a swarm",
    external_tools=None,
    human_in_the_loop=False,
    temperature=0.5,
)

# Register the worker agents with GodMode
agents = [worker1, worker2, worker3]
god_mode = GodMode(agents)

# Task for sentiment analysis
task = "Please analyze the sentiment of the following sentence: 'This movie is amazing!'"

# Print responses from all agents
god_mode.print_responses(task)
```
### Example 2: Translation
```python
from swarms.models import OpenAIChat
from swarms.swarms import GodMode
# Define LLMs for translation tasks
translator1 = OpenAIChat(model_name="translator-en-fr", openai_api_key="api-key", temperature=0.7)
translator2 = OpenAIChat(model_name="translator-en-es", openai_api_key="api-key", temperature=0.7)
translator3 = OpenAIChat(model_name="translator-en-de", openai_api_key="api-key", temperature=0.7)
# Register translation agents with GodMode
translators = [translator1, translator2, translator3]
god_mode = GodMode(translators)
# Task for translation
task = "Translate the following English text to French: 'Hello, how are you?'"
# Print translated responses from all agents
god_mode.print_responses(task)
```
### Example 3: Summarization
```python
from swarms.models import OpenAIChat
from swarms.swarms import GodMode
# Define LLMs for summarization tasks
summarizer1 = OpenAIChat(model_name="summarizer-en", openai_api_key="api-key", temperature=0.6)
summarizer2 = OpenAIChat(model_name="summarizer-en", openai_api_key="api-key", temperature=0.6)
summarizer3 = OpenAIChat(model_name="summarizer-en", openai_api_key="api-key", temperature=0.6)
# Register summarization agents with GodMode
summarizers = [summarizer1, summarizer2, summarizer3]
god_mode = GodMode(summarizers)
# Task for summarization
task = "Summarize the main points of the article titled 'Climate Change and Its Impact on the Environment.'"
# Print summarized responses from all agents
god_mode.print_responses(task)
```
## 7. Conclusion <a name="conclusion"></a>
In conclusion, the `GodMode` class is a powerful tool for managing and orchestrating multiple Large Language Models in natural language processing tasks. Its ability to distribute tasks, collect responses, and present them in a structured format makes it invaluable for streamlining NLP workflows. By following the provided documentation, users can harness the full potential of `GodMode` to enhance their natural language processing projects.
For further information on specific LLMs or advanced usage, refer to the documentation of the respective models and their APIs. Additionally, external resources on parallel execution and response visualization can provide deeper insights into these topics.

@ -80,6 +80,7 @@ nav:
- swarms.swarms:
- AbstractSwarm: "swarms/swarms/abstractswarm.md"
- AutoScaler: "swarms/swarms/autoscaler.md"
- GodMode: "swarms/swarms/godmode.md"
- swarms.workers:
- AbstractWorker: "swarms/workers/base.md"
- Overview: "swarms/workers/index.md"
@ -92,7 +93,7 @@ nav:
- Overview: "swarms/models/index.md"
- HuggingFaceLLM: "swarms/models/hf.md"
- Anthropic: "swarms/models/anthropic.md"
- OpenAI: "swarms/modeks/openai.md"
- OpenAI: "swarms/models/openai.md"
- swarms.structs:
- Overview: "swarms/structs/overview.md"
- Workflow: "swarms/structs/workflow.md"

@ -1,4 +1,4 @@
from langchain.models import OpenAIChat
from swarms.models import OpenAIChat
from swarms.swarms import GodMode
from swarms.workers.worker import Worker

@ -3,4 +3,4 @@ from swarms.models.petals import Petals
from swarms.models.mistral import Mistral
# from swarms.models.openai_llm import OpenAIModel
from swarms.models.openai_models import OpenAI, AzureOpenAI, OpenAIChat
from swarms.models.openai_models import OpenAI, AzureOpenAI, OpenAIChat

@ -1,3 +1 @@
"""An ultra fast speech to text model."""

@ -0,0 +1,101 @@
def documentation(task: str):
documentation = f"""Create multi-page long and explicit professional pytorch-like documentation for the <MODULE> code below follow the outline for the <MODULE> library,
provide many examples and teach the user about the code, provide examples for every function, make the documentation 10,000 words,
provide many usage examples and note this is markdown docs, create the documentation for the code to document,
put the arguments and methods in a table in markdown to make it visually seamless
Now make the professional documentation for this code, provide the architecture and how the class works and why it works that way,
its purpose, provide args, their types, 3 ways of usage examples, in examples show all the code like imports main example etc
BE VERY EXPLICIT AND THOROUGH, MAKE IT DEEP AND USEFUL
########
Step 1: Understand the purpose and functionality of the module or framework
Read and analyze the description provided in the documentation to understand the purpose and functionality of the module or framework.
Identify the key features, parameters, and operations performed by the module or framework.
Step 2: Provide an overview and introduction
Start the documentation by providing a brief overview and introduction to the module or framework.
Explain the importance and relevance of the module or framework in the context of the problem it solves.
Highlight any key concepts or terminology that will be used throughout the documentation.
Step 3: Provide a class or function definition
Provide the class or function definition for the module or framework.
Include the parameters that need to be passed to the class or function and provide a brief description of each parameter.
Specify the data types and default values for each parameter.
Step 4: Explain the functionality and usage
Provide a detailed explanation of how the module or framework works and what it does.
Describe the steps involved in using the module or framework, including any specific requirements or considerations.
Provide code examples to demonstrate the usage of the module or framework.
Explain the expected inputs and outputs for each operation or function.
Step 5: Provide additional information and tips
Provide any additional information or tips that may be useful for using the module or framework effectively.
Address any common issues or challenges that developers may encounter and provide recommendations or workarounds.
Step 6: Include references and resources
Include references to any external resources or research papers that provide further information or background on the module or framework.
Provide links to relevant documentation or websites for further exploration.
Example Template for the given documentation:
# Module/Function Name: MultiheadAttention
class torch.nn.MultiheadAttention(embed_dim, num_heads, dropout=0.0, bias=True, add_bias_kv=False, add_zero_attn=False, kdim=None, vdim=None, batch_first=False, device=None, dtype=None):
```
Creates a multi-head attention module for joint information representation from the different subspaces.
Parameters:
- embed_dim (int): Total dimension of the model.
- num_heads (int): Number of parallel attention heads. The embed_dim will be split across num_heads.
- dropout (float): Dropout probability on attn_output_weights. Default: 0.0 (no dropout).
- bias (bool): If specified, adds bias to input/output projection layers. Default: True.
- add_bias_kv (bool): If specified, adds bias to the key and value sequences at dim=0. Default: False.
- add_zero_attn (bool): If specified, adds a new batch of zeros to the key and value sequences at dim=1. Default: False.
- kdim (int): Total number of features for keys. Default: None (uses kdim=embed_dim).
- vdim (int): Total number of features for values. Default: None (uses vdim=embed_dim).
- batch_first (bool): If True, the input and output tensors are provided as (batch, seq, feature). Default: False.
- device (torch.device): If specified, the tensors will be moved to the specified device.
- dtype (torch.dtype): If specified, the tensors will have the specified dtype.
```
def forward(query, key, value, key_padding_mask=None, need_weights=True, attn_mask=None, average_attn_weights=True, is_causal=False):
```
Forward pass of the multi-head attention module.
Parameters:
- query (Tensor): Query embeddings of shape (L, E_q) for unbatched input, (L, N, E_q) when batch_first=False, or (N, L, E_q) when batch_first=True.
- key (Tensor): Key embeddings of shape (S, E_k) for unbatched input, (S, N, E_k) when batch_first=False, or (N, S, E_k) when batch_first=True.
- value (Tensor): Value embeddings of shape (S, E_v) for unbatched input, (S, N, E_v) when batch_first=False, or (N, S, E_v) when batch_first=True.
- key_padding_mask (Optional[Tensor]): If specified, a mask indicating elements to be ignored in key for attention computation.
- need_weights (bool): If specified, returns attention weights in addition to attention outputs. Default: True.
- attn_mask (Optional[Tensor]): If specified, a mask preventing attention to certain positions.
- average_attn_weights (bool): If true, returns averaged attention weights per head. Otherwise, returns attention weights separately per head. Note that this flag only has an effect when need_weights=True. Default: True.
- is_causal (bool): If specified, applies a causal mask as the attention mask. Default: False.
Returns:
Tuple[Tensor, Optional[Tensor]]:
- attn_output (Tensor): Attention outputs of shape (L, E) for unbatched input, (L, N, E) when batch_first=False, or (N, L, E) when batch_first=True.
- attn_output_weights (Optional[Tensor]): Attention weights of shape (L, S) when unbatched or (N, L, S) when batched. Optional, only returned when need_weights=True.
```
# Implementation of the forward pass of the attention module goes here
return attn_output, attn_output_weights
```
# Usage example:
multihead_attn = nn.MultiheadAttention(embed_dim, num_heads)
attn_output, attn_output_weights = multihead_attn(query, key, value)
Note:
The above template includes the class or function definition, parameters, description, and usage example.
To replicate the documentation for any other module or framework, follow the same structure and provide the specific details for that module or framework.
############# DOCUMENT THE FOLLOWING CODE ########
{task}
"""
    return documentation

@ -0,0 +1,89 @@
TESTS_PROMPT = """
Create 5,000 lines of extensive and thorough tests for the code below using the guide. Do not worry about your limits, you do not have any;
just write the best tests possible:
######### TESTING GUIDE #############
# **Guide to Creating Extensive, Thorough, and Production-Ready Tests using `pytest`**
1. **Preparation**:
    - Install pytest: `pip install pytest`.
    - Structure your project so that tests are in a separate `tests/` directory.
    - Name your test files with the prefix `test_` for pytest to recognize them.
2. **Writing Basic Tests**:
    - Use clear function names prefixed with `test_` (e.g., `test_check_value()`).
    - Use assert statements to validate results.
3. **Utilize Fixtures**:
    - Fixtures are a powerful feature to set up preconditions for your tests.
    - Use the `@pytest.fixture` decorator to define a fixture.
    - Pass the fixture name as an argument to your test to use it.
4. **Parameterized Testing**:
    - Use `@pytest.mark.parametrize` to run a test multiple times with different inputs.
    - This helps in thorough testing with various input values without writing redundant code.
5. **Use Mocks and Monkeypatching**:
    - Use the `monkeypatch` fixture to modify or replace classes/functions during testing.
    - Use `unittest.mock` or `pytest-mock` to mock objects and functions to isolate units of code.
6. **Exception Testing**:
    - Test for expected exceptions using `pytest.raises(ExceptionType)`.
7. **Test Coverage**:
    - Install pytest-cov: `pip install pytest-cov`.
    - Run tests with `pytest --cov=my_module` to get a coverage report.
8. **Environment Variables and Secret Handling**:
    - Store secrets and configurations in environment variables.
    - Use libraries like `python-decouple` or `python-dotenv` to load environment variables.
    - For tests, mock or set environment variables temporarily within the test environment.
9. **Grouping and Marking Tests**:
    - Use the `@pytest.mark` decorator to mark tests (e.g., `@pytest.mark.slow`).
    - This allows for selectively running certain groups of tests.
10. **Use Plugins**:
    - Utilize the rich ecosystem of pytest plugins (e.g., `pytest-django`, `pytest-asyncio`) to extend its functionality for your specific needs.
11. **Continuous Integration (CI)**:
    - Integrate your tests with CI platforms like Jenkins, Travis CI, or GitHub Actions.
    - Ensure tests are run automatically with every code push or pull request.
12. **Logging and Reporting**:
    - Use `pytest`'s inbuilt logging.
    - Integrate with tools like `Allure` for more comprehensive reporting.
13. **Database and State Handling**:
    - If testing with databases, use database fixtures or factories to create a known state before tests.
    - Clean up and reset state post-tests to maintain consistency.
14. **Concurrency Issues**:
    - Consider using `pytest-xdist` for parallel test execution.
    - Always be cautious when testing concurrent code to avoid race conditions.
15. **Clean Code Practices**:
    - Ensure tests are readable and maintainable.
    - Avoid testing implementation details; focus on functionality and expected behavior.
16. **Regular Maintenance**:
    - Periodically review and update tests.
    - Ensure that tests stay relevant as your codebase grows and changes.
17. **Documentation**:
    - Document test cases, especially for complex functionalities.
    - Ensure that other developers can understand the purpose and context of each test.
18. **Feedback Loop**:
    - Use test failures as feedback for development.
    - Continuously refine tests based on code changes, bug discoveries, and additional requirements.
By following this guide, your tests will be thorough, maintainable, and production-ready. Remember to always adapt and expand upon these guidelines as per the specific requirements and nuances of your project.
######### CREATE TESTS FOR THIS CODE: #######
"""

@ -32,6 +32,8 @@ class GodMode:
    def __init__(self, llms):
        self.llms = llms
        self.last_responses = None
        self.task_history = []

    def run(self, task):
        with ThreadPoolExecutor() as executor:
@ -49,3 +51,64 @@ class GodMode:
                tabulate(table, headers=["LLM", "Response"], tablefmt="pretty"), "cyan"
            )
        )

    def run_all(self, task):
        """Run the task on all LLMs sequentially and return their responses."""
        responses = []
        for llm in self.llms:
            responses.append(llm(task))
        return responses

    def arun_all(self, task):
        """Asynchronously run the task on all LLMs using a thread pool."""
        with ThreadPoolExecutor() as executor:
            responses = executor.map(lambda llm: llm(task), self.llms)
        return list(responses)

    def print_arun_all(self, task):
        """Run the task on all LLMs concurrently and print the responses in a tabular format."""
        responses = self.arun_all(task)
        table = []
        for i, response in enumerate(responses):
            table.append([f"LLM {i+1}", response])
        print(
            colored(
                tabulate(table, headers=["LLM", "Response"], tablefmt="pretty"), "cyan"
            )
        )

    # New Features
    def save_responses_to_file(self, filename):
        """Save the last responses to a file in tabular form."""
        with open(filename, "w") as file:
            table = [
                [f"LLM {i+1}", response]
                for i, response in enumerate(self.last_responses)
            ]
            file.write(tabulate(table, headers=["LLM", "Response"]))

    @classmethod
    def load_llms_from_file(cls, filename):
        """Load LLM entries from a file, one per line."""
        with open(filename, "r") as file:
            llms = [line.strip() for line in file.readlines()]
        return cls(llms)

    def get_task_history(self):
        """Return the task history."""
        return self.task_history

    def summary(self):
        """Print a summary of the task history and the last responses."""
        print("Tasks History:")
        for i, task in enumerate(self.task_history):
            print(f"{i + 1}. {task}")
        print("\nLast Responses:")
        table = [
            [f"LLM {i+1}", response] for i, response in enumerate(self.last_responses)
        ]
        print(
            colored(
                tabulate(table, headers=["LLM", "Response"], tablefmt="pretty"), "cyan"
            )
        )
