# BingChat Documentation

## Introduction

Welcome to the documentation for BingChat, a chatbot and image-generation tool based on OpenAI's GPT model. This documentation covers BingChat's architecture and usage, and shows how it can be integrated into your projects.
## Overview

BingChat provides text responses and generates images from user prompts. It uses the GPT model to produce creative, contextually relevant responses, and it can also create images from text prompts, making it a versatile tool for a range of applications.

## Class Definition

```python
class BingChat:
    def __init__(self, cookies_path: str):
        ...
```
## Usage

To use BingChat, follow these steps:

1. Initialize the BingChat instance:

```python
from swarms.models.bing_chat import BingChat

edgegpt = BingChat(cookies_path="./path/to/cookies.json")
```
2. Get a text response:

```python
response = edgegpt("Hello, my name is ChatGPT")
print(response)
```
### Example 1 - Text Response

```python
from swarms.models.bing_chat import BingChat

edgegpt = BingChat(cookies_path="./path/to/cookies.json")
response = edgegpt("Hello, my name is ChatGPT")
print(response)
```
3. Generate an image based on a text prompt:

```python
image_path = edgegpt.create_img("Sunset over mountains", output_dir="./output", auth_cookie="your_auth_cookie")
print(f"Generated image saved at {image_path}")
```
### Example 2 - Image Generation

```python
from swarms.models.bing_chat import BingChat

edgegpt = BingChat(cookies_path="./path/to/cookies.json")
image_path = edgegpt.create_img("Sunset over mountains", output_dir="./output", auth_cookie="your_auth_cookie")
print(f"Generated image saved at {image_path}")
```
4. Set the directory path for managing cookies:

```python
BingChat.set_cookie_dir_path("./cookie_directory")
```
### Example 3 - Set Cookie Directory Path

```python
BingChat.set_cookie_dir_path("./cookie_directory")
```
## How BingChat Works

BingChat works by using cookies for authentication and interacting with OpenAI's GPT model. Here's how it works:

1. **Initialization**: When you create a BingChat instance, it loads the cookies needed to authenticate with the Bing service.

2. **Text Response**: The `__call__` method returns a text response from the GPT model for the provided prompt. You can specify the conversation style to get different response types.

3. **Image Generation**: The `create_img` method generates images from text prompts. It requires an authentication cookie and saves the generated images to the specified output directory.

4. **Cookie Directory**: The `set_cookie_dir_path` method sets the directory path used for managing cookies.
## Parameters

- `cookies_path`: The path to the cookies.json file needed to authenticate with BingChat.
## Additional Information

- BingChat provides both text-based and image-based responses, making it versatile for various use cases.
- Cookies are used for authentication, so make sure to provide the correct path to the cookies.json file.
- Image generation requires an authentication cookie, and the generated images can be saved to a specified directory.

That concludes the documentation for BingChat. We hope you find this tool valuable for your text and image generation tasks. If you have any questions or encounter any issues, refer back to this documentation for assistance. Enjoy working with BingChat!
# `Idefics` Documentation

## Introduction

Welcome to the documentation for Idefics, a multimodal inference tool built on pre-trained models from the Hugging Face Hub. Idefics generates text from prompts that can include both text and images. This documentation covers Idefics' architecture and usage, and shows how it can be integrated into your projects.
## Overview

Idefics leverages pre-trained models to generate textual responses to a wide range of prompts. Because it handles both text and images, it is suited to multimodal tasks such as generating text from images.

## Class Definition

```python
import torch


class Idefics:
    def __init__(
        self,
        checkpoint="HuggingFaceM4/idefics-9b-instruct",
        device=None,
        torch_dtype=torch.bfloat16,
        max_length=100,
    ):
        ...
```
## Usage

To use Idefics, follow these steps:

1. Initialize the Idefics instance:

```python
from swarms.models import Idefics

model = Idefics()
```
2. Generate text based on prompts:

```python
prompts = ["User: What is in this image? https://upload.wikimedia.org/wikipedia/commons/8/86/Id%C3%A9fix.JPG"]
response = model(prompts)
print(response)
```
### Example 1 - Image Questioning

```python
from swarms.models import Idefics

model = Idefics()
prompts = ["User: What is in this image? https://upload.wikimedia.org/wikipedia/commons/8/86/Id%C3%A9fix.JPG"]
response = model(prompts)
print(response)
```
### Example 2 - Bidirectional Conversation

```python
from swarms.models import Idefics

model = Idefics()
user_input = "User: What is in this image? https://upload.wikimedia.org/wikipedia/commons/8/86/Id%C3%A9fix.JPG"
response = model.chat(user_input)
print(response)

user_input = "User: Who is that? https://static.wikia.nocookie.net/asterix/images/2/25/R22b.gif/revision/latest?cb=20110815073052"
response = model.chat(user_input)
print(response)
```
### Example 3 - Configuration Changes

```python
model.set_checkpoint("new_checkpoint")
model.set_device("cpu")
model.set_max_length(200)
model.clear_chat_history()
```
## How Idefics Works

Idefics operates by leveraging pre-trained models from the Hugging Face Hub. Here's how it works:

1. **Initialization**: When you create an Idefics instance, it initializes the model from the specified checkpoint, sets the device for inference, and configures other parameters such as the data type and maximum text length.

2. **Prompt-Based Inference**: The `infer` method generates text from prompts, in batched or non-batched mode depending on your preference, and uses a pre-trained processor to handle both text and images.

3. **Bidirectional Conversation**: The `chat` method enables bidirectional conversations: you provide user input and the model responds, with the chat history maintained for context.

4. **Configuration Changes**: You can change the model checkpoint, device, or maximum text length, or clear the chat history, as needed at runtime.
## Parameters

- `checkpoint`: The name of the pre-trained model checkpoint (default: `"HuggingFaceM4/idefics-9b-instruct"`).
- `device`: The device to use for inference. By default, CUDA is used if available; otherwise the CPU is used.
- `torch_dtype`: The data type to use for inference (default: `torch.bfloat16`).
- `max_length`: The maximum length of the generated text (default: 100).
## Additional Information

- Idefics provides a convenient way to hold bidirectional conversations with pre-trained models.
- You can easily change the model checkpoint, device, and other settings to adapt to your specific use case.

That concludes the documentation for Idefics. We hope you find this tool valuable for your multimodal text generation tasks. If you have any questions or encounter any issues, please refer to the Hugging Face Transformers documentation for further assistance. Enjoy working with Idefics!