conversation

dependabot/pip/transformers-gte-4.39.0-and-lt-4.53.0
Kye Gomez 3 weeks ago
parent ab030d46b9
commit bdf6885cbc

@@ -357,6 +357,9 @@ nav:
- Swarms API as MCP: "swarms_cloud/mcp.md"
- Swarms API Tools: "swarms_cloud/swarms_api_tools.md"
- Individual Agent Completions: "swarms_cloud/agent_api.md"
- Clients:
    - Swarms API Python Client: "swarms_cloud/python_client.md"
- Pricing:

@@ -7,24 +7,36 @@ The `Conversation` class is a powerful tool for managing and structuring convers

## Table of Contents

1. **Class Definition**
   - Overview
   - Attributes
   - Initialization Parameters

2. **Core Methods**
   - Message Management
   - History Operations
   - Export/Import
   - Search and Query
   - Cache Management
   - Memory Management

3. **Advanced Features**
   - Token Counting
   - Memory Providers
   - Caching System
   - Batch Operations

---
@@ -36,217 +48,303 @@ The `Conversation` class is designed to manage conversations by keeping track of

#### Attributes

- `id (str)`: Unique identifier for the conversation
- `name (str)`: Name of the conversation
- `system_prompt (Optional[str])`: System prompt for the conversation
- `time_enabled (bool)`: Flag to enable time tracking for messages
- `autosave (bool)`: Flag to enable automatic saving
- `save_filepath (str)`: File path for saving conversation history
- `conversation_history (list)`: List storing conversation messages
- `tokenizer (Any)`: Tokenizer for counting tokens
- `context_length (int)`: Maximum number of tokens allowed
- `rules (str)`: Rules for the conversation
- `custom_rules_prompt (str)`: Custom prompt for rules
- `user (str)`: User identifier for messages
- `auto_save (bool)`: Flag for auto-saving
- `save_as_yaml (bool)`: Flag to save as YAML
- `save_as_json_bool (bool)`: Flag to save as JSON
- `token_count (bool)`: Flag to enable token counting
- `cache_enabled (bool)`: Flag to enable prompt caching
- `cache_stats (dict)`: Statistics about cache usage
- `provider (Literal["mem0", "in-memory"])`: Memory provider type

#### Initialization Parameters

```python
conversation = Conversation(
    id="unique_id",                     # Optional: Unique identifier
    name="conversation_name",           # Optional: Name of conversation
    system_prompt="System message",     # Optional: Initial system prompt
    time_enabled=True,                  # Optional: Enable timestamps
    autosave=True,                      # Optional: Enable auto-saving
    save_filepath="path/to/save.json",  # Optional: Save location
    tokenizer=your_tokenizer,           # Optional: Token counter
    context_length=8192,                # Optional: Max tokens
    rules="conversation rules",         # Optional: Rules
    custom_rules_prompt="custom",       # Optional: Custom rules
    user="User:",                       # Optional: User identifier
    auto_save=True,                     # Optional: Auto-save
    save_as_yaml=True,                  # Optional: Save as YAML
    save_as_json_bool=False,            # Optional: Save as JSON
    token_count=True,                   # Optional: Count tokens
    cache_enabled=True,                 # Optional: Enable caching
    conversations_dir="path/to/dir",    # Optional: Cache directory
    provider="in-memory"                # Optional: Memory provider
)
```
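
A minimal usage sketch follows; it assumes the default values for every omitted parameter and uses the import path shown in earlier versions of these docs:

```python
from swarms.structs import Conversation

# Create a conversation with timestamps enabled, then add two messages
conversation = Conversation(time_enabled=True)
conversation.add("user", "Hello, world!")
conversation.add("assistant", "Hello, user!")

# Inspect the most recent entry
print(conversation.get_last_message_as_string())
```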
### 2. Core Methods

#### Message Management

##### `add(role: str, content: Union[str, dict, list], metadata: Optional[dict] = None)`

Adds a message to the conversation history.

```python
# Add a simple text message
conversation.add("user", "Hello, how are you?")

# Add a structured message
conversation.add("assistant", {
    "type": "response",
    "content": "I'm doing well!"
})

# Add with metadata
conversation.add("user", "Hello", metadata={"timestamp": "2024-03-20"})
```

##### `add_multiple_messages(roles: List[str], contents: List[Union[str, dict, list]])`

Adds multiple messages at once.

```python
conversation.add_multiple_messages(
    roles=["user", "assistant"],
    contents=["Hello!", "Hi there!"]
)
```

##### `add_tool_output_to_agent(role: str, tool_output: dict)`

Adds a tool output to the conversation.

```python
conversation.add_tool_output_to_agent(
    "tool",
    {"name": "calculator", "result": "42"}
)
```
#### History Operations

##### `get_last_message_as_string() -> str`

Returns the last message as a string.

```python
last_message = conversation.get_last_message_as_string()
# Returns: "assistant: Hello there!"
```

##### `get_final_message() -> str`

Returns the final message from the conversation.

```python
final_message = conversation.get_final_message()
# Returns: "assistant: Goodbye!"
```

##### `get_final_message_content() -> str`

Returns just the content of the final message.

```python
final_content = conversation.get_final_message_content()
# Returns: "Goodbye!"
```

##### `return_all_except_first() -> list`

Returns all messages except the first one.

```python
messages = conversation.return_all_except_first()
```

##### `return_all_except_first_string() -> str`

Returns all messages except the first one as a string.

```python
messages_str = conversation.return_all_except_first_string()
```
#### Export/Import

##### `to_json() -> str`

Converts the conversation to a JSON string.

```python
json_str = conversation.to_json()
```

##### `to_dict() -> list`

Converts the conversation to a dictionary.

```python
dict_data = conversation.to_dict()
```

##### `to_yaml() -> str`

Converts the conversation to a YAML string.

```python
yaml_str = conversation.to_yaml()
```

##### `return_json() -> str`

Returns the conversation as a formatted JSON string.

```python
json_str = conversation.return_json()
```
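
The converters above return Python objects or strings rather than writing to disk; persisting the result is left to the caller. A small sketch (the output filename here is hypothetical):

```python
# Write the JSON representation to a file chosen by the caller
with open("conversation_export.json", "w") as f:
    f.write(conversation.to_json())
```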
#### Search and Query

##### `get_visible_messages(agent: "Agent", turn: int) -> List[Dict]`

Gets visible messages for a specific agent and turn.

```python
visible_msgs = conversation.get_visible_messages(agent, turn=1)
```
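
In this example, `agent` is assumed to be an existing `Agent` instance (the class imported from `swarms.structs.agent` in the source).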
#### Cache Management

##### `get_cache_stats() -> Dict[str, int]`

Gets statistics about cache usage.

```python
stats = conversation.get_cache_stats()
# Returns: {
#     "hits": 10,
#     "misses": 5,
#     "cached_tokens": 1000,
#     "total_tokens": 2000,
#     "hit_rate": 0.67
# }
```
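
In the sample output above, `hit_rate` is simply hits divided by total lookups: 10 / (10 + 5) ≈ 0.67.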
#### Memory Management

##### `clear_memory()`

Clears the conversation memory.

```python
conversation.clear_memory()
```

##### `clear()`

Clears the conversation history.

```python
conversation.clear()
```
### 3. Advanced Features

#### Token Counting

The class supports automatic token counting when enabled:

```python
conversation = Conversation(token_count=True)
conversation.add("user", "Hello world")
# Token count will be automatically calculated and stored
```
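
The source imports `count_tokens` from `swarms.utils.litellm_tokenizer` (visible in the diff further down), which appears to back this feature.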
#### Memory Providers

The class supports different memory providers:

```python
# In-memory provider (default)
conversation = Conversation(provider="in-memory")

# Mem0 provider
conversation = Conversation(provider="mem0")
```
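
The `"mem0"` provider depends on the optional `mem0` package; per the implementation in this commit, a warning is logged and the provider falls back to `None` if the package is not installed.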
#### Caching System

The caching system can be enabled to improve performance:

```python
conversation = Conversation(cache_enabled=True)
# Messages will be cached for faster retrieval
```

#### Batch Operations

The class supports batch operations for efficiency:

```python
# Batch add messages
conversation.batch_add([
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi"}
])
```
### Class Methods

#### `load_conversation(name: str, conversations_dir: Optional[str] = None) -> "Conversation"`

Loads a conversation from cache.

```python
conversation = Conversation.load_conversation("my_conversation")
```

#### `list_cached_conversations(conversations_dir: Optional[str] = None) -> List[str]`

Lists all cached conversations.

```python
conversations = Conversation.list_cached_conversations()
```
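
A short sketch combining the two class methods above; the conversation name is hypothetical and assumes a previously cached conversation:

```python
# List what has been cached, then load one conversation by name
names = Conversation.list_cached_conversations()
if "my_conversation" in names:
    conversation = Conversation.load_conversation("my_conversation")
```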
## Conclusion

The `Conversation` class provides a comprehensive set of tools for managing conversations in Python applications. With support for multiple memory providers, caching, token counting, and various export formats, it is suitable for a wide range of use cases, from simple chat applications to complex AI systems.

For more information or specific use cases, please refer to the examples above or consult the source code.

@@ -4,7 +4,15 @@ import json

import os
import threading
import uuid
from typing import (
    TYPE_CHECKING,
    Any,
    Dict,
    List,
    Optional,
    Union,
    Literal,
)

import yaml
@@ -16,12 +24,18 @@ from swarms.utils.litellm_tokenizer import count_tokens

if TYPE_CHECKING:
    from swarms.structs.agent import Agent

from loguru import logger


def generate_conversation_id():
    """Generate a unique conversation ID."""
    return str(uuid.uuid4())


# Define available providers
providers = Literal["mem0", "in-memory"]


class Conversation(BaseStructure):
    """
    A class to manage a conversation history, allowing for the addition, deletion,
@@ -68,6 +82,7 @@ class Conversation(BaseStructure):
        token_count: bool = True,
        cache_enabled: bool = True,
        conversations_dir: Optional[str] = None,
        provider: providers = "in-memory",
        *args,
        **kwargs,
    ):

@@ -91,6 +106,7 @@ class Conversation(BaseStructure):
        self.save_as_json_bool = save_as_json_bool
        self.token_count = token_count
        self.cache_enabled = cache_enabled
        self.provider = provider
        self.cache_stats = {
            "hits": 0,
            "misses": 0,
@@ -98,9 +114,13 @@
            "total_tokens": 0,
        }
        self.cache_lock = threading.Lock()
        self.conversations_dir = conversations_dir

        self.setup()

    def setup(self):
        # Set up conversations directory
        self.conversations_dir = self.conversations_dir or os.path.join(
            os.path.expanduser("~"), ".swarms", "conversations"
        )
        os.makedirs(self.conversations_dir, exist_ok=True)
@@ -127,15 +147,33 @@ class Conversation(BaseStructure):
            self.add("System", self.system_prompt)

        if self.rules is not None:
            self.add(self.user or "User", self.rules)

        if self.custom_rules_prompt is not None:
            self.add(self.user or "User", self.custom_rules_prompt)

        # If tokenizer then truncate
        if self.tokenizer is not None:
            self.truncate_memory_with_tokenizer()

    def mem0_provider(self):
        try:
            from mem0 import AsyncMemory
        except ImportError:
            logger.warning(
                "mem0ai is not installed. Please install it to use the Conversation class."
            )
            return None

        try:
            memory = AsyncMemory()
            return memory
        except Exception as e:
            logger.error(
                f"Failed to initialize AsyncMemory: {str(e)}"
            )
            return None

    def _generate_cache_key(
        self, content: Union[str, dict, list]
    ) -> str:
@@ -230,7 +268,7 @@ class Conversation(BaseStructure):
        with open(conversation_file, "w") as f:
            json.dump(save_data, f, indent=4)

    def add_in_memory(
        self,
        role: str,
        content: Union[str, dict, list],

@@ -277,6 +315,38 @@ class Conversation(BaseStructure):
        # Save to cache after adding message
        self._save_to_cache()

    def add_mem0(
        self,
        role: str,
        content: Union[str, dict, list],
        metadata: Optional[dict] = None,
    ):
        """Add a message to the conversation history using the Mem0 provider."""
        if self.provider == "mem0":
            memory = self.mem0_provider()
            memory.add(
                messages=content,
                agent_id=role,
                run_id=self.id,
                metadata=metadata,
            )

    def add(
        self,
        role: str,
        content: Union[str, dict, list],
        metadata: Optional[dict] = None,
    ):
        """Add a message to the conversation history."""
        if self.provider == "in-memory":
            self.add_in_memory(role, content)
        elif self.provider == "mem0":
            self.add_mem0(
                role=role, content=content, metadata=metadata
            )
        else:
            raise ValueError(f"Invalid provider: {self.provider}")

    def add_multiple_messages(
        self, roles: List[str], contents: List[Union[str, dict, list]]
    ):
@@ -570,7 +640,13 @@ class Conversation(BaseStructure):
        Returns:
            str: The last message formatted as 'role: content'.
        """
        if self.provider == "mem0":
            memory = self.mem0_provider()
            return memory.get_all(run_id=self.id)
        elif self.provider == "in-memory":
            return f"{self.conversation_history[-1]['role']}: {self.conversation_history[-1]['content']}"
        else:
            raise ValueError(f"Invalid provider: {self.provider}")

    def return_messages_as_list(self):
        """Return the conversation messages as a list of formatted strings.
@@ -734,6 +810,10 @@ class Conversation(BaseStructure):
        )  # Remove .json extension
        return conversations

    def clear_memory(self):
        """Clear the memory of the conversation."""
        self.conversation_history = []


# # Example usage
# # conversation = Conversation()

@@ -20,6 +20,7 @@ HistoryOutputType = Literal[
    "str-all-except-first",
]


def history_output_formatter(
    conversation: Conversation, type: HistoryOutputType = "list"
) -> Union[List[Dict[str, Any]], Dict[str, Any], str]:

@@ -1,6 +1,7 @@
import xml.etree.ElementTree as ET
from typing import Any


def dict_to_xml(tag: str, d: dict) -> ET.Element:
    """Convert a dictionary to an XML Element."""
    elem = ET.Element(tag)

@@ -21,6 +22,7 @@ def dict_to_xml(tag: str, d: dict) -> ET.Element:
        elem.append(child)
    return elem


def to_xml_string(data: Any, root_tag: str = "root") -> str:
    """Convert a dict or list to an XML string."""
    if isinstance(data, dict):
