pull/633/merge^2
commit
db16ce491a
@@ -0,0 +1,59 @@

# Swarms 6.0.0 - Performance & Reliability Update 🚀

We're excited to announce the release of Swarms 6.0.0, bringing significant improvements to performance, reliability, and developer experience. This release focuses on streamlining core functionalities while enhancing the overall stability of the framework.

## 📦 Installation

```bash
pip3 install -U swarms
```

## 🌟 Highlights

### Agent Enhancements

- **Improved RAG Performance**: Significant improvements to Retrieval-Augmented Generation capabilities
- **Enhanced Prompt Generation**: Auto-generated prompts now incorporate the agent's name, description, and system prompt for more contextual interactions
- **Streamlined Architecture**: Removed unused code for better performance and maintainability
- **Simplified State Management**: Consolidated state management methods into a single `load()` function (see the sketch below)
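
The following is a minimal sketch of what the consolidated state handling might look like in practice. It assumes an `Agent` configured with `saved_state_path` and that the new `load()` method accepts that same path; the model, agent name, prompt, and file name are illustrative placeholders rather than values taken from this release.

```python
import os

from swarms import Agent
from swarm_models import OpenAIChat

# Placeholder model and agent; names and file paths are illustrative only.
model = OpenAIChat(openai_api_key=os.getenv("OPENAI_API_KEY"), model_name="gpt-4o-mini")

agent = Agent(
    agent_name="Example-Agent",
    system_prompt="You are a helpful assistant.",
    llm=model,
    autosave=True,
    saved_state_path="example_agent_state.json",
)

agent.run("Summarize the key risks in this quarter's report.")

# Consolidated state management: a single load() call restores the saved state,
# assuming load() accepts the same path that was used when saving.
agent.load("example_agent_state.json")
```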

### Tools & Execution

- **Optimized Environment Management**: Fixed an issue where multiple environments were instantiated
  - Environments now initialize once during `__init__`
- **New SwarmRouter Function**: Simplified routing mechanism (see the sketch below)
  - Returns consolidated string output from all agents
  - Improved coordination between swarm components
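
As a rough illustration of the consolidated routing behaviour, the sketch below assumes that `SwarmRouter` can be imported from `swarms`, that it accepts a list of agents plus a `swarm_type`, and that `run()` returns the consolidated string output described above; the agents, swarm type, and task are placeholders, not values from this release.

```python
import os

from swarms import Agent, SwarmRouter
from swarm_models import OpenAIChat

model = OpenAIChat(openai_api_key=os.getenv("OPENAI_API_KEY"), model_name="gpt-4o-mini")

# Two placeholder agents for illustration.
researcher = Agent(agent_name="Researcher", system_prompt="You research topics.", llm=model)
writer = Agent(agent_name="Writer", system_prompt="You write concise summaries.", llm=model)

router = SwarmRouter(
    name="demo-router",
    description="Routes a task through both agents.",
    agents=[researcher, writer],
    swarm_type="SequentialWorkflow",  # assumed value; check the docs for supported types
)

# run() is assumed to return the consolidated string output from all agents.
output = router.run("Produce a short briefing on solar energy trends.")
print(output)
```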

## 💪 Performance Improvements

- Faster execution times
- Reduced memory footprint
- More reliable logging system
- Lightweight and efficient codebase

## 🤝 Join Our Community

### We're Hiring!

Join our growing team! We're currently looking for:

- Agent Engineers
- Developer Relations
- Infrastructure Engineers
- And more!

### Get Involved

- ⭐ Star our repository
- 🔄 Fork the project
- 🛠 Submit pull requests
- 🐛 Report issues
- 💡 Share your ideas

### Contact & Support

- 📧 Email: kye@swarms.world
- 🔗 Issues: [GitHub Issues](https://github.com/kyegomez/swarms/issues)

## 🔜 What's Next?

Have ideas for features, bug fixes, or improvements? We'd love to hear from you! Reach out through our GitHub issues or email us directly.

---

*Thank you to all our contributors and users who make Swarms better every day. Together, we're building the future of swarm intelligence.*

#SwarmAI #OpenSource #AI #MachineLearning
@@ -1,238 +1,231 @@

# GroupChat

The `GroupChat` class is designed to manage a group chat session involving multiple agents. This class handles initializing the conversation, selecting the next speaker, resetting the chat, and executing the chat rounds, providing a structured approach to managing a dynamic and interactive conversation.

### Key Concepts

- **Agents**: Entities participating in the group chat.
- **Conversation Management**: Handling the flow of conversation, selecting speakers, and maintaining chat history.
- **Round-based Execution**: Managing the chat in predefined rounds.

## Attributes

### Arguments

| Argument | Type | Default | Description |
|---------------------|----------------------|-------------|-------------|
| `agents` | `List[Agent]` | `None` | List of agents participating in the group chat. |
| `max_rounds` | `int` | `10` | Maximum number of chat rounds. |
| `admin_name` | `str` | `"Admin"` | Name of the admin user. |
| `group_objective` | `str` | `None` | Objective of the group chat. |
| `selector_agent` | `Agent` | `None` | Agent responsible for selecting the next speaker. |
| `rules` | `str` | `None` | Rules for the group chat. |
| `*args` | | | Variable length argument list. |
| `**kwargs` | | | Arbitrary keyword arguments. |

### Attributes

| Attribute | Type | Description |
|---------------------|----------------------|-------------|
| `agents` | `List[Agent]` | List of agents participating in the group chat. |
| `max_rounds` | `int` | Maximum number of chat rounds. |
| `admin_name` | `str` | Name of the admin user. |
| `group_objective` | `str` | Objective of the group chat. |
| `selector_agent` | `Agent` | Agent responsible for selecting the next speaker. |
| `messages` | `Conversation` | Conversation object for storing the chat messages. |

## Methods

### __init__

Initializes the group chat with the given parameters.

**Examples:**

```python
agents = [Agent(name="Agent 1"), Agent(name="Agent 2")]
group_chat = GroupChat(agents=agents, max_rounds=5, admin_name="GroupAdmin")
```

### agent_names

Returns the names of the agents in the group chat.

**Returns:**

| Return Type | Description |
|-------------|-------------|
| `List[str]` | List of agent names. |

**Examples:**

```python
names = group_chat.agent_names
print(names)  # Output: ['Agent 1', 'Agent 2']
```

### reset

Resets the group chat by clearing the message history.

**Examples:**

```python
group_chat.reset()
```

### agent_by_name

Finds an agent whose name is contained within the given name string.

**Arguments:**

| Parameter | Type | Description |
|-----------|--------|-------------|
| `name` | `str` | Name string to search for. |

**Returns:**

| Return Type | Description |
|-------------|-------------|
| `Agent` | Agent object with a name contained in the given name string. |

**Raises:**

- `ValueError`: If no agent is found with a name contained in the given name string.

**Examples:**

```python
agent = group_chat.agent_by_name("Agent 1")
print(agent.agent_name)  # Output: 'Agent 1'
```

### next_agent

Returns the next agent in the list.

**Arguments:**

| Parameter | Type | Description |
|-----------|--------|-------------|
| `agent` | `Agent`| Current agent. |

**Returns:**

| Return Type | Description |
|-------------|-------------|
| `Agent` | Next agent in the list. |

**Examples:**

```python
current_agent = group_chat.agents[0]
next_agent = group_chat.next_agent(current_agent)
print(next_agent.agent_name)  # Output: Name of the next agent
```

### select_speaker_msg

Returns the message for selecting the next speaker.

**Returns:**

| Return Type | Description |
|-------------|-------------|
| `str` | Prompt message for selecting the next speaker. |

**Examples:**

```python
message = group_chat.select_speaker_msg()
print(message)
```

### select_speaker

Selects the next speaker.

**Arguments:**

| Parameter | Type | Description |
|----------------------|--------|-------------|
| `last_speaker_agent` | `Agent`| Last speaker in the conversation. |
| `selector_agent` | `Agent`| Agent responsible for selecting the next speaker. |

**Returns:**

| Return Type | Description |
|-------------|-------------|
| `Agent` | Next speaker. |

**Examples:**

```python
next_speaker = group_chat.select_speaker(last_speaker_agent, selector_agent)
print(next_speaker.agent_name)
```

### _participant_roles

Returns the roles of the participants.

**Returns:**

| Return Type | Description |
|-------------|-------------|
| `str` | Participant roles. |

**Examples:**

```python
roles = group_chat._participant_roles()
print(roles)
```

### __call__

Executes the group chat as a function.

**Arguments:**

| Parameter | Type | Description |
|-----------|--------|-------------|
| `task` | `str` | Task to be performed. |

**Returns:**

| Return Type | Description |
|-------------|-------------|
| `str` | Reply from the last speaker. |

**Examples:**

```python
response = group_chat(task="Discuss the project plan")
print(response)
```

### Additional Examples

#### Example 1: Initializing and Running a Group Chat

```python
agents = [Agent(name="Agent 1"), Agent(name="Agent 2"), Agent(name="Agent 3")]
selector_agent = Agent(name="Selector")
group_chat = GroupChat(agents=agents, selector_agent=selector_agent, max_rounds=3, group_objective="Discuss the quarterly goals.")

response = group_chat(task="Let's start the discussion on quarterly goals.")
print(response)
```

#### Example 2: Resetting the Group Chat

```python
group_chat.reset()
```

#### Example 3: Selecting the Next Speaker

```python
last_speaker = group_chat.agents[0]
next_speaker = group_chat.select_speaker(last_speaker_agent=last_speaker, selector_agent=selector_agent)
print(next_speaker.agent_name)
```

## Summary

The `GroupChat` class offers a structured approach to managing a group chat involving multiple agents. With functionalities for initializing conversations, selecting speakers, and handling chat rounds, it provides a robust framework for dynamic and interactive discussions. This makes it an essential tool for applications requiring coordinated communication among multiple agents.

# GroupChat Class Documentation

The GroupChat class manages multi-agent conversations with state persistence, comprehensive logging, and flexible agent configurations. It supports both Agent class instances and callable functions, making it versatile for different use cases.

## Installation

```bash
pip install swarms python-dotenv pydantic
```

## Attributes

| Attribute | Type | Description |
|-----------|------|-------------|
| state_path | str | Path for saving/loading chat state |
| wrapped_agents | List[AgentWrapper] | List of wrapped agent instances |
| selector_agent | AgentWrapper | Agent responsible for speaker selection |
| state | GroupChatState | Current state of the group chat |

## Methods

### Core Methods

```python
def run(self, task: str) -> str:
    """Execute the group chat conversation"""

def save_state(self) -> None:
    """Save current state to disk"""

@classmethod
def load_state(cls, state_path: str) -> 'GroupChat':
    """Load GroupChat from saved state"""

def get_conversation_summary(self) -> Dict[str, Any]:
    """Return a summary of the conversation"""

def export_conversation(self, format: str = "json") -> Union[str, Dict]:
    """Export the conversation in specified format"""
```

### Internal Methods

```python
def _log_interaction(self, agent_name: str, position: int, input_text: str, output_text: str) -> None:
    """Log a single interaction"""

def _add_message(self, role: str, content: str) -> None:
    """Add a message to the conversation history"""

def select_next_speaker(self, last_speaker: AgentWrapper) -> AgentWrapper:
    """Select the next speaker using the selector agent"""
```

## Usage Examples

### 1. Basic Setup with Two Agents

```python
import os
from swarms import Agent
from swarm_models import OpenAIChat

# Initialize OpenAI
api_key = os.getenv("OPENAI_API_KEY")
model = OpenAIChat(openai_api_key=api_key, model_name="gpt-4-mini")

# Create agents
analyst = Agent(
    agent_name="Financial-Analyst",
    system_prompt="You are a financial analyst...",
    llm=model
)

advisor = Agent(
    agent_name="Investment-Advisor",
    system_prompt="You are an investment advisor...",
    llm=model
)

# Create group chat
chat = GroupChat(
    name="Investment Team",
    agents=[analyst, advisor],
    max_rounds=5,
    group_objective="Provide investment advice"
)

response = chat.run("What's the best investment strategy for retirement?")
```

### 2. Advanced Setup with State Management

```python
# Create group chat with state persistence
chat = GroupChat(
    name="Investment Advisory Team",
    description="Expert team for financial planning",
    agents=[analyst, advisor, tax_specialist],
    max_rounds=10,
    admin_name="Senior Advisor",
    group_objective="Provide comprehensive financial planning",
    state_path="investment_chat_state.json",
    rules="1. Always provide sources\n2. Be concise\n3. Focus on practical advice"
)

# Run chat and save state
response = chat.run("Create a retirement plan for a 35-year old")
chat.save_state()

# Load existing chat state
loaded_chat = GroupChat.load_state("investment_chat_state.json")
```

### 3. Using Custom Callable Agents

```python
def custom_agent(input_text: str) -> str:
    # Custom logic here
    return f"Processed: {input_text}"

# Mix of regular agents and callable functions
chat = GroupChat(
    name="Hybrid Team",
    agents=[analyst, custom_agent],
    max_rounds=3
)
```

### 4. Export and Analysis

```python
# Run chat
chat.run("Analyze market conditions")

# Get summary
summary = chat.get_conversation_summary()
print(summary)

# Export in different formats
json_conv = chat.export_conversation(format="json")
text_conv = chat.export_conversation(format="text")
```

### 5. Advanced Configuration with Custom Selector

```python
class CustomSelector(Agent):
    def run(self, input_text: str) -> str:
        # Custom selection logic
        return "Financial-Analyst"

chat = GroupChat(
    name="Custom Selection Team",
    agents=[analyst, advisor],
    selector_agent=CustomSelector(
        agent_name="Custom-Selector",
        system_prompt="Select the next speaker based on expertise",
        llm=model
    ),
    max_rounds=5
)
```

### 6. Debugging Setup

```python
import logging

# Configure logging
logging.basicConfig(level=logging.DEBUG)

chat = GroupChat(
    name="Debug Team",
    agents=[analyst, advisor],
    max_rounds=3,
    state_path="debug_chat.json"
)

# Run with detailed logging
try:
    response = chat.run("Complex query")
except Exception as e:
    logger.error(f"Chat failed: {str(e)}")
    # Access last successful state
    state = chat.state
```

## Error Handling

The GroupChat class includes comprehensive error handling:

```python
try:
    chat = GroupChat(agents=[analyst])  # Will raise ValueError
except ValueError as e:
    print("Configuration error:", str(e))

try:
    response = chat.run("Query")
except Exception as e:
    # Access error state
    error_summary = chat.get_conversation_summary()
    print("Execution error:", str(e))
    print("State at error:", error_summary)
```

## Best Practices

1. **State Management**:
   - Always specify a `state_path` for important conversations
   - Use `save_state()` after critical operations
   - Implement regular state backups for long conversations

2. **Agent Configuration**:
   - Provide clear system prompts for each agent
   - Use descriptive agent names
   - Consider agent expertise when setting the group objective

3. **Performance**:
   - Keep `max_rounds` reasonable (5-10 for most cases)
   - Use early stopping conditions when possible
   - Monitor conversation length and complexity

4. **Error Handling**:
   - Always wrap chat execution in try-except blocks
   - Implement proper logging
   - Save states before potentially risky operations

## Limitations

- Agents must either have a `run` method or be callable
- State files can grow large with many interactions
- Selector agent may need optimization for large agent groups
- Real-time streaming not supported in basic configuration
@@ -0,0 +1,113 @@

import os
from swarms import Agent
from swarm_models import OpenAIChat
from dotenv import load_dotenv

# Custom system prompt for VC legal document generation
VC_LEGAL_AGENT_PROMPT = """You are a specialized legal document assistant focusing on venture capital documentation.
Your role is to help draft preliminary versions of common VC legal documents while adhering to these guidelines:

1. Always include standard legal disclaimers
2. Follow standard VC document structures
3. Flag areas that need attorney review
4. Request necessary information for document completion
5. Maintain consistency across related documents
6. Output <DONE> only when document is complete and verified

Remember: All output should be marked as 'DRAFT' and require professional legal review."""


def create_vc_legal_agent():
    load_dotenv()

    # Configure the model with appropriate parameters for legal work
    # Get the Groq API key from the environment variable
    api_key = os.getenv("GROQ_API_KEY")

    # Model
    model = OpenAIChat(
        openai_api_base="https://api.groq.com/openai/v1",
        openai_api_key=api_key,
        model_name="llama-3.1-70b-versatile",
        temperature=0.1,
    )

    # Initialize the persistent agent
    agent = Agent(
        agent_name="VC-Legal-Document-Agent",
        system_prompt=VC_LEGAL_AGENT_PROMPT,
        llm=model,
        max_loops="auto",  # Allows multiple iterations until completion
        stopping_token="<DONE>",  # Agent will continue until this token is output
        autosave=True,
        dashboard=True,  # Enable dashboard for monitoring
        verbose=True,
        dynamic_temperature_enabled=False,  # Disable for consistency in legal documents
        saved_state_path="vc_legal_agent_state.json",
        user_name="legal_corp",
        retry_attempts=3,
        context_length=200000,
        return_step_meta=True,
        output_type="string",
        streaming_on=False,
    )

    return agent


def generate_legal_document(agent, document_type, parameters):
    """
    Generate a legal document with multiple refinement iterations

    Args:
        agent: The initialized VC legal agent
        document_type: Type of document to generate (e.g., "term_sheet", "investment_agreement")
        parameters: Dict containing necessary parameters for the document

    Returns:
        str: The generated document content
    """
    prompt = f"""
    Generate a {document_type} with the following parameters:
    {parameters}

    Please follow these steps:
    1. Create initial draft
    2. Review for completeness
    3. Add necessary legal disclaimers
    4. Verify all required sections
    5. Output <DONE> when complete

    Include [REQUIRES LEGAL REVIEW] tags for sections needing attorney attention.
    """

    return agent.run(prompt)


# Example usage
if __name__ == "__main__":
    # Initialize the agent
    legal_agent = create_vc_legal_agent()

    # Example parameters for a term sheet
    parameters = {
        "company_name": "TechStartup Inc.",
        "investment_amount": "$5,000,000",
        "valuation": "$20,000,000",
        "investor_rights": [
            "Board seat",
            "Pro-rata rights",
            "Information rights",
        ],
        "type_of_security": "Series A Preferred Stock",
    }

    # Generate a term sheet
    document = generate_legal_document(
        legal_agent, "term_sheet", parameters
    )

    # Save the generated document
    with open("generated_term_sheet_draft.md", "w") as f:
        f.write(document)
@@ -0,0 +1,319 @@

"""
Zoe - Real Estate Agent

"""

from typing import Optional, Dict, Any, List
from dataclasses import dataclass
from datetime import datetime
import os
import json
import requests
from loguru import logger
from swarms import Agent
from swarm_models import OpenAIChat
from dotenv import load_dotenv
from enum import Enum

# Configure loguru logger
logger.add(
    "logs/real_estate_agent_{time}.log",
    rotation="500 MB",
    retention="10 days",
    level="INFO",
    format="{time:YYYY-MM-DD at HH:mm:ss} | {level} | {message}",
)


class PropertyType(str, Enum):
    """Enum for property types"""

    OFFICE = "office"
    RETAIL = "retail"
    INDUSTRIAL = "industrial"
    MIXED_USE = "mixed-use"
    LAND = "land"


@dataclass
class PropertyListing:
    """Data class for commercial property listings"""

    property_id: str
    address: str
    city: str
    state: str
    zip_code: str
    price: float
    square_footage: float
    property_type: PropertyType
    zoning: str
    listing_date: datetime
    lat: float
    lng: float
    description: Optional[str] = None
    features: Optional[List[str]] = None
    images: Optional[List[str]] = None


class PropertyRadarAPI:
    """Client for PropertyRadar API integration"""

    def __init__(self, api_key: str):
        """Initialize PropertyRadar API client

        Args:
            api_key (str): PropertyRadar API key
        """
        self.api_key = api_key
        self.base_url = "https://api.propertyradar.com/v1"
        self.session = requests.Session()
        self.session.headers.update(
            {
                "Authorization": f"Bearer {api_key}",
                "Content-Type": "application/json",
            }
        )

    def search_properties(
        self,
        max_price: float = 10_000_000,
        property_types: List[PropertyType] = None,
        location: Dict[str, Any] = None,
        min_sqft: Optional[float] = None,
        max_sqft: Optional[float] = None,
        page: int = 1,
        limit: int = 20,
    ) -> List[PropertyListing]:
        """
        Search for commercial properties using PropertyRadar API

        Args:
            max_price (float): Maximum property price
            property_types (List[PropertyType]): Types of properties to search for
            location (Dict[str, Any]): Location criteria (city, county, or coordinates)
            min_sqft (Optional[float]): Minimum square footage
            max_sqft (Optional[float]): Maximum square footage
            page (int): Page number for pagination
            limit (int): Number of results per page

        Returns:
            List[PropertyListing]: List of matching properties
        """
        try:
            # Build the query parameters
            params = {
                "price_max": max_price,
                "property_types": (
                    [pt.value for pt in property_types]
                    if property_types
                    else None
                ),
                "page": page,
                "limit": limit,
                "for_sale": True,
                "state": "FL",  # Florida only
                "commercial_property": True,
            }

            # Add location parameters
            if location:
                params.update(location)

            # Add square footage filters
            if min_sqft:
                params["square_feet_min"] = min_sqft
            if max_sqft:
                params["square_feet_max"] = max_sqft

            # Make the API request
            response = self.session.get(
                f"{self.base_url}/properties",
                params={
                    k: v for k, v in params.items() if v is not None
                },
            )
            response.raise_for_status()

            # Parse the response
            properties_data = response.json()

            # Convert to PropertyListing objects
            return [
                PropertyListing(
                    property_id=prop["id"],
                    address=prop["address"],
                    city=prop["city"],
                    state=prop["state"],
                    zip_code=prop["zip_code"],
                    price=float(prop["price"]),
                    square_footage=float(prop["square_feet"]),
                    property_type=PropertyType(prop["property_type"]),
                    zoning=prop["zoning"],
                    listing_date=datetime.fromisoformat(
                        prop["list_date"]
                    ),
                    lat=float(prop["latitude"]),
                    lng=float(prop["longitude"]),
                    description=prop.get("description"),
                    features=prop.get("features", []),
                    images=prop.get("images", []),
                )
                for prop in properties_data["results"]
            ]

        except requests.RequestException as e:
            logger.error(f"Error fetching properties: {str(e)}")
            raise


class CommercialRealEstateAgent:
    """Agent for searching and analyzing commercial real estate properties"""

    def __init__(
        self,
        openai_api_key: str,
        propertyradar_api_key: str,
        model_name: str = "gpt-4",
        temperature: float = 0.1,
        saved_state_path: Optional[str] = None,
    ):
        """Initialize the real estate agent

        Args:
            openai_api_key (str): OpenAI API key
            propertyradar_api_key (str): PropertyRadar API key
            model_name (str): Name of the LLM model to use
            temperature (float): Temperature setting for the LLM
            saved_state_path (Optional[str]): Path to save agent state
        """
        self.property_api = PropertyRadarAPI(propertyradar_api_key)

        # Initialize OpenAI model
        self.model = OpenAIChat(
            openai_api_key=openai_api_key,
            model_name=model_name,
            temperature=temperature,
        )

        # Initialize the agent
        self.agent = Agent(
            agent_name="Commercial-Real-Estate-Agent",
            system_prompt=self._get_system_prompt(),
            llm=self.model,
            max_loops=1,
            autosave=True,
            dashboard=False,
            verbose=True,
            saved_state_path=saved_state_path,
            context_length=200000,
            streaming_on=False,
        )

        logger.info(
            "Commercial Real Estate Agent initialized successfully"
        )

    def _get_system_prompt(self) -> str:
        """Get the system prompt for the agent"""
        return """You are a specialized commercial real estate agent assistant focused on Central Florida properties.
        Your primary responsibilities are:
        1. Search for commercial properties under $10 million
        2. Focus on properties zoned for commercial use
        3. Provide detailed analysis of property features, location benefits, and potential ROI
        4. Consider local market conditions and growth potential
        5. Verify zoning compliance and restrictions

        When analyzing properties, consider:
        - Current market valuations
        - Local business development plans
        - Traffic patterns and accessibility
        - Nearby amenities and businesses
        - Future development potential"""

    def search_properties(
        self,
        max_price: float = 10_000_000,
        property_types: List[PropertyType] = None,
        location: Dict[str, Any] = None,
        min_sqft: Optional[float] = None,
        max_sqft: Optional[float] = None,
    ) -> List[Dict[str, Any]]:
        """
        Search for properties and provide analysis

        Args:
            max_price (float): Maximum property price
            property_types (List[PropertyType]): Types of properties to search
            location (Dict[str, Any]): Location criteria
            min_sqft (Optional[float]): Minimum square footage
            max_sqft (Optional[float]): Maximum square footage

        Returns:
            List[Dict[str, Any]]: List of properties with analysis
        """
        try:
            # Search for properties
            properties = self.property_api.search_properties(
                max_price=max_price,
                property_types=property_types,
                location=location,
                min_sqft=min_sqft,
                max_sqft=max_sqft,
            )

            # Analyze each property
            analyzed_properties = []
            for prop in properties:
                analysis = self.agent.run(
                    f"Analyze this commercial property:\n"
                    f"Address: {prop.address}, {prop.city}, FL {prop.zip_code}\n"
                    f"Price: ${prop.price:,.2f}\n"
                    f"Square Footage: {prop.square_footage:,.0f}\n"
                    f"Property Type: {prop.property_type.value}\n"
                    f"Zoning: {prop.zoning}\n"
                    f"Description: {prop.description or 'Not provided'}"
                )

                analyzed_properties.append(
                    {"property": prop.__dict__, "analysis": analysis}
                )

            logger.info(
                f"Successfully analyzed {len(analyzed_properties)} properties"
            )
            return analyzed_properties

        except Exception as e:
            logger.error(
                f"Error in property search and analysis: {str(e)}"
            )
            raise


def main():
    """Main function to demonstrate usage"""
    load_dotenv()

    # Initialize the agent
    agent = CommercialRealEstateAgent(
        openai_api_key=os.getenv("OPENAI_API_KEY"),
        propertyradar_api_key=os.getenv("PROPERTYRADAR_API_KEY"),
        saved_state_path="real_estate_agent_state.json",
    )

    # Example search
    results = agent.search_properties(
        max_price=5_000_000,
        property_types=[PropertyType.RETAIL, PropertyType.OFFICE],
        location={"city": "Orlando", "radius_miles": 25},
        min_sqft=2000,
    )

    # Save results
    with open("search_results.json", "w") as f:
        json.dump(results, f, default=str, indent=2)


if __name__ == "__main__":
    main()
@@ -1,44 +0,0 @@

import os

from swarms_memory import ChromaDB

from swarms import Agent
from swarm_models import Anthropic
from swarms.prompts.finance_agent_sys_prompt import (
    FINANCIAL_AGENT_SYS_PROMPT,
)

# Initialize the ChromaDB client
chromadb = ChromaDB(
    metric="cosine",
    output_dir="fiance_agent_rag",
    # docs_folder="artifacts", # Folder of your documents
)

# Model
model = Anthropic(anthropic_api_key=os.getenv("ANTHROPIC_API_KEY"))

# Initialize the agent
agent = Agent(
    agent_name="Financial-Analysis-Agent",
    system_prompt=FINANCIAL_AGENT_SYS_PROMPT,
    agent_description="Agent creates ",
    llm=model,
    max_loops="auto",
    autosave=True,
    dashboard=False,
    verbose=True,
    streaming_on=True,
    dynamic_temperature_enabled=True,
    saved_state_path="finance_agent.json",
    user_name="swarms_corp",
    retry_attempts=3,
    context_length=200000,
    long_term_memory=chromadb,
)

agent.run(
    "What are the components of a startup's stock incentive equity plan?"
)
@@ -1,117 +0,0 @@

from swarms import Agent
from swarm_models import OpenAIChat
from swarms_memory import ChromaDB
import subprocess
import os

# Making an instance of the ChromaDB class
memory = ChromaDB(
    metric="cosine",
    n_results=3,
    output_dir="results",
    docs_folder="docs",
)

# Model
model = OpenAIChat(
    api_key=os.getenv("OPENAI_API_KEY"),
    model_name="gpt-4o-mini",
    temperature=0.1,
)


# Tools in swarms are simple python functions and docstrings
def terminal(
    code: str,
):
    """
    Run code in the terminal.

    Args:
        code (str): The code to run in the terminal.

    Returns:
        str: The output of the code.
    """
    out = subprocess.run(
        code, shell=True, capture_output=True, text=True
    ).stdout
    return str(out)


def browser(query: str):
    """
    Search the query in the browser with the `browser` tool.

    Args:
        query (str): The query to search in the browser.

    Returns:
        str: The search results.
    """
    import webbrowser

    url = f"https://www.google.com/search?q={query}"
    webbrowser.open(url)
    return f"Searching for {query} in the browser."


def create_file(file_path: str, content: str):
    """
    Create a file using the file editor tool.

    Args:
        file_path (str): The path to the file.
        content (str): The content to write to the file.

    Returns:
        str: The result of the file creation operation.
    """
    with open(file_path, "w") as file:
        file.write(content)
    return f"File {file_path} created successfully."


def file_editor(file_path: str, mode: str, content: str):
    """
    Edit a file using the file editor tool.

    Args:
        file_path (str): The path to the file.
        mode (str): The mode to open the file in.
        content (str): The content to write to the file.

    Returns:
        str: The result of the file editing operation.
    """
    with open(file_path, mode) as file:
        file.write(content)
    return f"File {file_path} edited successfully."


# Agent
agent = Agent(
    agent_name="Devin",
    system_prompt=(
        "Autonomous agent that can interact with humans and other"
        " agents. Be Helpful and Kind. Use the tools provided to"
        " assist the user. Return all code in markdown format."
    ),
    llm=model,
    max_loops="auto",
    autosave=True,
    dashboard=False,
    streaming_on=True,
    verbose=True,
    stopping_token="<DONE>",
    interactive=True,
    tools=[terminal, browser, file_editor, create_file],
    streaming=True,
    long_term_memory=memory,
)

# Run the agent
out = agent(
    "Create a CSV file with the latest tax rates for C corporations in the following ten states and the District of Columbia: Alabama, California, Florida, Georgia, Illinois, New York, North Carolina, Ohio, Texas, and Washington."
)
print(out)
@@ -0,0 +1,52 @@

#!/bin/bash

# Set up logging
LOG_FILE="docs_compilation.log"
OUTPUT_FILE="combined_docs.txt"

# Initialize log file
echo "$(date): Starting documentation compilation" > "$LOG_FILE"

# Create/clear output file
> "$OUTPUT_FILE"

# Function to determine file type and handle accordingly
process_file() {
    local file="$1"

    # Get file extension
    extension="${file##*.}"

    echo "$(date): Processing $file" >> "$LOG_FILE"

    case "$extension" in
        md|markdown)
            echo "# $(basename "$file")" >> "$OUTPUT_FILE"
            cat "$file" >> "$OUTPUT_FILE"
            echo -e "\n\n" >> "$OUTPUT_FILE"
            ;;
        txt)
            echo "# $(basename "$file")" >> "$OUTPUT_FILE"
            cat "$file" >> "$OUTPUT_FILE"
            echo -e "\n\n" >> "$OUTPUT_FILE"
            ;;
        *)
            echo "$(date): Skipping $file - unsupported format" >> "$LOG_FILE"
            return
            ;;
    esac

    echo "$(date): Successfully processed $file" >> "$LOG_FILE"
}

# Find and process all documentation files
find ../docs -type f \( -name "*.md" -o -name "*.txt" -o -name "*.markdown" \) | while read -r file; do
    process_file "$file"
done

# Log completion
echo "$(date): Documentation compilation complete" >> "$LOG_FILE"
echo "$(date): Output saved to $OUTPUT_FILE" >> "$LOG_FILE"

# Print summary
echo "Documentation compilation complete. Check $LOG_FILE for details."
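
A possible way to invoke the script, assuming it is saved as `compile_docs.sh` inside a `scripts/` directory so that the relative `../docs` path resolves; the file name and location are assumptions, not taken from the PR.

```bash
# Run from the scripts/ directory so that ../docs points at the documentation tree.
cd scripts
chmod +x compile_docs.sh
./compile_docs.sh

# Inspect the results.
tail docs_compilation.log
less combined_docs.txt
```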
@@ -1,120 +0,0 @@

from swarms.utils.loguru_logger import logger
import yaml
from pydantic import BaseModel
from typing import List, Optional
import json
from swarms.structs.agent_registry import AgentRegistry
from swarms.structs.agent import Agent
from swarm_models.popular_llms import OpenAIChat


class AgentInput(BaseModel):
    agent_name: str = "Swarm Agent"
    system_prompt: Optional[str] = None
    agent_description: Optional[str] = None
    model_name: str = "OpenAIChat"
    max_loops: int = 1
    autosave: bool = False
    dynamic_temperature_enabled: bool = False
    dashboard: bool = False
    verbose: bool = False
    streaming_on: bool = True
    saved_state_path: Optional[str] = None
    sop: Optional[str] = None
    sop_list: Optional[List[str]] = None
    user_name: str = "User"
    retry_attempts: int = 3
    context_length: int = 8192
    task: Optional[str] = None
    interactive: bool = False


def parse_yaml_to_json(yaml_str: str) -> str:
    """
    Parses the given YAML string into an AgentInput model and converts it to a JSON string.

    Args:
        yaml_str (str): The YAML string to be parsed.

    Returns:
        str: The JSON string representation of the parsed YAML.

    Raises:
        ValueError: If the YAML string cannot be parsed into the AgentInput model.
    """
    try:
        data = yaml.safe_load(yaml_str)
        agent_input = AgentInput(**data)
        return agent_input.json()
    except yaml.YAMLError as e:
        print(f"YAML Error: {e}")
        raise ValueError("Invalid YAML input.") from e
    except ValueError as e:
        print(f"Validation Error: {e}")
        raise ValueError("Invalid data for AgentInput model.") from e


# # Example usage
# yaml_input = """
# agent_name: "Custom Agent"
# system_prompt: "System prompt example"
# agent_description: "This is a test agent"
# model_name: "CustomModel"
# max_loops: 5
# autosave: true
# dynamic_temperature_enabled: true
# dashboard: true
# verbose: true
# streaming_on: false
# saved_state_path: "/path/to/state"
# sop: "Standard operating procedure"
# sop_list: ["step1", "step2"]
# user_name: "Tester"
# retry_attempts: 5
# context_length: 4096
# task: "Perform testing"
# """

# json_output = parse_yaml_to_json(yaml_input)
# print(json_output)

registry = AgentRegistry()


def create_agent_from_yaml(yaml_path: str) -> None:
    with open(yaml_path, "r") as file:
        yaml_str = file.read()
    agent_json = parse_yaml_to_json(yaml_str)
    agent_config = json.loads(agent_json)

    agent = Agent(
        agent_name=agent_config.get("agent_name", "Swarm Agent"),
        system_prompt=agent_config.get("system_prompt"),
        agent_description=agent_config.get("agent_description"),
        llm=OpenAIChat(),
        max_loops=agent_config.get("max_loops", 1),
        autosave=agent_config.get("autosave", False),
        dynamic_temperature_enabled=agent_config.get(
            "dynamic_temperature_enabled", False
        ),
        dashboard=agent_config.get("dashboard", False),
        verbose=agent_config.get("verbose", False),
        streaming_on=agent_config.get("streaming_on", True),
        saved_state_path=agent_config.get("saved_state_path"),
        retry_attempts=agent_config.get("retry_attempts", 3),
        context_length=agent_config.get("context_length", 8192),
    )

    registry.add(agent.agent_name, agent)
    logger.info(f"Agent {agent.agent_name} created from {yaml_path}.")


def run_agent(agent_name: str, task: str) -> None:
    agent = registry.find_agent_by_name(agent_name)
    agent.run(task)


def list_agents() -> None:
    agents = registry.list_agents()
    for agent_id in agents:
        print(agent_id)
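
For reference, a small end-to-end sketch of how the helpers above might be exercised. The YAML file name and agent settings are made up for illustration, and it assumes `OPENAI_API_KEY` is set in the environment because `create_agent_from_yaml` instantiates `OpenAIChat()` with its defaults.

```python
# Hypothetical usage sketch; file name and values are illustrative only.
example_yaml = """
agent_name: "Demo Agent"
system_prompt: "You answer questions concisely."
agent_description: "A small demo agent"
max_loops: 1
verbose: true
"""

with open("demo_agent.yaml", "w") as f:
    f.write(example_yaml)

# Parse the YAML, build the agent, and register it.
create_agent_from_yaml("demo_agent.yaml")
list_agents()
run_agent("Demo Agent", "What is a vector database?")
```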
@@ -1,10 +0,0 @@

from typing import List
from pydantic import BaseModel
from swarms.schemas.agent_step_schemas import Step


class Plan(BaseModel):
    steps: List[Step]

    class Config:
        orm_mode = True
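
As a quick sanity check of this schema, a heavily hedged sketch: it only constructs an empty plan, since the exact fields of `Step` live in `swarms.schemas.agent_step_schemas` and are not shown here.

```python
# Assumes the Plan model above is importable; an empty step list keeps the
# example independent of Step's exact fields.
empty_plan = Plan(steps=[])
print(empty_plan.json())
```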
@@ -1,10 +1,12 @@
 from typing import List, Optional

 import chromadb
-from loguru import logger
 from tenacity import retry, stop_after_attempt, wait_exponential
 from typing import Union, Callable, Any
 from swarms import Agent
+from swarms.utils.loguru_logger import initialize_logger
+
+logger = initialize_logger(log_folder="agent_router")


 class AgentRouter:
@@ -1,3 +0,0 @@

"""
This class will input a swarm type -> then auto generate a list of `Agent` structures with their name, descriptions, system prompts, and more.
"""
@ -1,393 +0,0 @@
|
|||||||
from typing import List, Callable, Union, Optional
|
|
||||||
from loguru import logger
|
|
||||||
from swarms.structs.base_swarm import BaseSwarm
|
|
||||||
from queue import PriorityQueue
|
|
||||||
from concurrent.futures import (
|
|
||||||
ThreadPoolExecutor,
|
|
||||||
as_completed,
|
|
||||||
)
|
|
||||||
import time
|
|
||||||
from pydantic import BaseModel, Field
|
|
||||||
|
|
||||||
|
|
||||||
class SwarmRunData(BaseModel):
|
|
||||||
"""
|
|
||||||
Pydantic model to capture metadata about each swarm's execution.
|
|
||||||
"""
|
|
||||||
|
|
||||||
swarm_name: str
|
|
||||||
task: str
|
|
||||||
priority: int
|
|
||||||
start_time: Optional[float] = None
|
|
||||||
end_time: Optional[float] = None
|
|
||||||
duration: Optional[float] = None
|
|
||||||
status: str = "Pending"
|
|
||||||
retries: int = 0
|
|
||||||
result: Optional[str] = None
|
|
||||||
exception: Optional[str] = None
|
|
||||||
|
|
||||||
|
|
||||||
class FederatedSwarmModel(BaseModel):
|
|
||||||
"""
|
|
||||||
Pydantic base model to capture and log data for the FederatedSwarm system.
|
|
||||||
"""
|
|
||||||
|
|
||||||
task: str
|
|
||||||
swarms_data: List[SwarmRunData] = Field(default_factory=list)
|
|
||||||
|
|
||||||
def add_swarm(self, swarm_name: str, task: str, priority: int):
|
|
||||||
swarm_data = SwarmRunData(
|
|
||||||
swarm_name=swarm_name, task=task, priority=priority
|
|
||||||
)
|
|
||||||
self.swarms_data.append(swarm_data)
|
|
||||||
|
|
||||||
def update_swarm_status(
|
|
||||||
self,
|
|
||||||
swarm_name: str,
|
|
||||||
status: str,
|
|
||||||
start_time: float = None,
|
|
||||||
end_time: float = None,
|
|
||||||
retries: int = 0,
|
|
||||||
result: str = None,
|
|
||||||
exception: str = None,
|
|
||||||
):
|
|
||||||
for swarm in self.swarms_data:
|
|
||||||
if swarm.name == swarm_name:
|
|
||||||
swarm.status = status
|
|
||||||
if start_time:
|
|
||||||
swarm.start_time = start_time
|
|
||||||
if end_time:
|
|
||||||
swarm.end_time = end_time
|
|
||||||
swarm.duration = end_time - swarm.start_time
|
|
||||||
swarm.retries = retries
|
|
||||||
swarm.result = result
|
|
||||||
swarm.exception = exception
|
|
||||||
break
|
|
||||||
|
|
||||||
|
|
||||||
class FederatedSwarm:
|
|
||||||
def __init__(
|
|
||||||
self,
|
|
||||||
swarms: List[Union[BaseSwarm, Callable]],
|
|
||||||
max_workers: int = 4,
|
|
||||||
):
|
|
||||||
"""
|
|
||||||
Initializes the FederatedSwarm with a list of swarms or callable objects and
|
|
||||||
sets up a priority queue and thread pool for concurrency.
|
|
||||||
|
|
||||||
Args:
|
|
||||||
swarms (List[Union[BaseSwarm, Callable]]): A list of swarms (BaseSwarm) or callable objects.
|
|
||||||
max_workers (int): The maximum number of concurrent workers (threads) to run swarms in parallel.
|
|
||||||
"""
|
|
||||||
self.swarms = PriorityQueue()
|
|
||||||
self.max_workers = max_workers
|
|
||||||
self.thread_pool = ThreadPoolExecutor(
|
|
||||||
max_workers=self.max_workers
|
|
||||||
)
|
|
||||||
self.task_queue = []
|
|
||||||
self.future_to_swarm = {}
|
|
||||||
self.results = {}
|
|
||||||
self.validate_swarms(swarms)
|
|
||||||
|
|
||||||
def init_metadata(self, task: str):
|
|
||||||
"""
|
|
||||||
Initializes the Pydantic base model to capture metadata about the current task and swarms.
|
|
||||||
"""
|
|
||||||
self.metadata = FederatedSwarmModel(task=task)
|
|
||||||
for priority, swarm in list(self.swarms.queue):
|
|
||||||
swarm_name = (
|
|
||||||
swarm.__class__.__name__
|
|
||||||
if hasattr(swarm, "__class__")
|
|
||||||
else str(swarm)
|
|
||||||
)
|
|
||||||
self.metadata.add_swarm(
|
|
||||||
swarm_name=swarm_name, task=task, priority=priority
|
|
||||||
)
|
|
||||||
logger.info(f"Metadata initialized for task '{task}'.")
|
|
||||||
|
|
||||||
def validate_swarms(
|
|
||||||
self, swarms: List[Union[BaseSwarm, Callable]]
|
|
||||||
):
|
|
||||||
"""
|
|
||||||
Validates and adds swarms to the priority queue, ensuring each swarm has a `run(task)` method.
|
|
||||||
|
|
||||||
Args:
|
|
||||||
swarms (List[Union[BaseSwarm, Callable]]): List of swarms with an optional priority value.
|
|
||||||
"""
|
|
||||||
for swarm, priority in swarms:
|
|
||||||
if not callable(swarm):
|
|
||||||
raise TypeError(f"{swarm} is not callable.")
|
|
||||||
|
|
||||||
if hasattr(swarm, "run"):
|
|
||||||
logger.info(f"{swarm} has a 'run' method.")
|
|
||||||
else:
|
|
||||||
raise AttributeError(
|
|
||||||
f"{swarm} does not have a 'run(task)' method."
|
|
||||||
)
|
|
||||||
|
|
||||||
self.swarms.put((priority, swarm))
|
|
||||||
logger.info(
|
|
||||||
f"Swarm {swarm} added with priority {priority}."
|
|
||||||
)
|
|
||||||
|
|
||||||
def run_parallel(
|
|
||||||
self,
|
|
||||||
task: str,
|
|
||||||
timeout: Optional[float] = None,
|
|
||||||
retries: int = 0,
|
|
||||||
):
|
|
||||||
"""
|
|
||||||
Runs all swarms in parallel with prioritization and optional timeout.
|
|
||||||
|
|
||||||
Args:
|
|
||||||
task (str): The task to be passed to the `run` method of each swarm.
|
|
||||||
timeout (Optional[float]): Maximum time allowed for each swarm to run.
|
|
||||||
retries (int): Number of retries allowed for failed swarms.
|
|
||||||
"""
|
|
||||||
logger.info(
|
|
||||||
f"Running task '{task}' in parallel with timeout: {timeout}, retries: {retries}"
|
|
||||||
)
|
|
||||||
self.init_metadata(task)
|
|
||||||
|
|
||||||
while not self.swarms.empty():
|
|
||||||
priority, swarm = self.swarms.get()
|
|
||||||
swarm_name = (
|
|
||||||
swarm.__class__.__name__
|
|
||||||
if hasattr(swarm, "__class__")
|
|
||||||
else str(swarm)
|
|
||||||
)
|
|
||||||
future = self.thread_pool.submit(
|
|
||||||
self._run_with_retry,
|
|
||||||
swarm,
|
|
||||||
task,
|
|
||||||
retries,
|
|
||||||
timeout,
|
|
||||||
swarm_name,
|
|
||||||
)
|
|
||||||
self.future_to_swarm[future] = swarm
|
|
||||||
|
|
||||||
for future in as_completed(self.future_to_swarm):
|
|
||||||
swarm = self.future_to_swarm[future]
|
|
||||||
try:
|
|
||||||
result = future.result()
|
|
||||||
swarm_name = (
|
|
||||||
swarm.__class__.__name__
|
|
||||||
if hasattr(swarm, "__class__")
|
|
||||||
else str(swarm)
|
|
||||||
)
|
|
||||||
self.metadata.update_swarm_status(
|
|
||||||
swarm_name=swarm_name,
|
|
||||||
status="Completed",
|
|
||||||
result=result,
|
|
||||||
)
|
|
||||||
logger.info(
|
|
||||||
f"Swarm {swarm_name} completed successfully."
|
|
||||||
)
|
|
||||||
except Exception as e:
|
|
||||||
swarm_name = (
|
|
||||||
swarm.__class__.__name__
|
|
||||||
if hasattr(swarm, "__class__")
|
|
||||||
else str(swarm)
|
|
||||||
)
|
|
||||||
                self.metadata.update_swarm_status(
                    swarm_name=swarm_name,
                    status="Failed",
                    exception=str(e),
                )
                logger.error(f"Swarm {swarm_name} failed: {e}")
                self.results[swarm] = "Failed"

    def run_sequentially(
        self,
        task: str,
        retries: int = 0,
        timeout: Optional[float] = None,
    ):
        """
        Runs all swarms sequentially in order of priority.

        Args:
            task (str): The task to pass to the `run` method of each swarm.
            retries (int): Number of retries for failed swarms.
            timeout (Optional[float]): Optional time limit for each swarm.
        """
        logger.info(f"Running task '{task}' sequentially.")

        while not self.swarms.empty():
            priority, swarm = self.swarms.get()
            try:
                logger.info(
                    f"Running swarm {swarm} with priority {priority}."
                )
                # _run_with_retry requires a name for metadata updates, so derive one here.
                self._run_with_retry(
                    swarm, task, retries, timeout, swarm_name=str(swarm)
                )
                logger.info(f"Swarm {swarm} completed successfully.")
            except Exception as e:
                logger.error(f"Swarm {swarm} failed with error: {e}")

    def _run_with_retry(
        self,
        swarm: Union[BaseSwarm, Callable],
        task: str,
        retries: int,
        timeout: Optional[float],
        swarm_name: str,
    ):
        """
        Helper function to run a swarm with a retry mechanism and optional timeout.

        Args:
            swarm (Union[BaseSwarm, Callable]): The swarm to run.
            task (str): The task to pass to the swarm.
            retries (int): The number of retries allowed for the swarm in case of failure.
            timeout (Optional[float]): Maximum time allowed for the swarm to run.
            swarm_name (str): Name of the swarm (used for metadata).
        """
        attempts = 0
        start_time = time.time()
        while attempts <= retries:
            try:
                logger.info(
                    f"Running swarm {swarm}. Attempt: {attempts + 1}"
                )
                self.metadata.update_swarm_status(
                    swarm_name=swarm_name,
                    status="Running",
                    start_time=start_time,
                )
                if hasattr(swarm, "run"):
                    if timeout:
                        start_time = time.time()
                        swarm.run(task)
                        duration = time.time() - start_time
                        # Note: the timeout is checked after the run completes rather than enforced mid-run.
                        if duration > timeout:
                            raise TimeoutError(
                                f"Swarm {swarm} timed out after {duration:.2f}s."
                            )
                    else:
                        swarm.run(task)
                else:
                    swarm(task)
                end_time = time.time()
                self.metadata.update_swarm_status(
                    swarm_name=swarm_name,
                    status="Completed",
                    end_time=end_time,
                    retries=attempts,
                )
                return "Success"
            except Exception as e:
                logger.error(f"Swarm {swarm} failed: {e}")
                attempts += 1
                if attempts > retries:
                    end_time = time.time()
                    self.metadata.update_swarm_status(
                        swarm_name=swarm_name,
                        status="Failed",
                        end_time=end_time,
                        retries=attempts,
                        exception=str(e),
                    )
                    logger.error(f"Swarm {swarm} exhausted retries.")
                    raise

    def add_swarm(
        self, swarm: Union[BaseSwarm, Callable], priority: int
    ):
        """
        Adds a new swarm to the FederatedSwarm at runtime.

        Args:
            swarm (Union[BaseSwarm, Callable]): The swarm to add.
            priority (int): The priority level for the swarm.
        """
        self.swarms.put((priority, swarm))
        logger.info(
            f"Swarm {swarm} added dynamically with priority {priority}."
        )

    def queue_task(self, task: str):
        """
        Adds a task to the internal task queue for batch processing.

        Args:
            task (str): The task to queue.
        """
        self.task_queue.append(task)
        logger.info(f"Task '{task}' added to the queue.")

    def process_task_queue(self):
        """
        Processes all tasks in the task queue.
        """
        for task in self.task_queue:
            logger.info(f"Processing task: {task}")
            self.run_parallel(task)
        self.task_queue = []

    def log_swarm_results(self):
        """
        Logs the results of all swarms after execution.
        """
        logger.info("Logging swarm results...")
        for swarm, result in self.results.items():
            logger.info(f"Swarm {swarm}: {result}")

    def get_swarm_status(self) -> dict:
        """
        Retrieves the status of each swarm (completed, running, failed).

        Returns:
            dict: Dictionary containing swarm statuses.
        """
        status = {}
        for future, swarm in self.future_to_swarm.items():
            if future.done():
                status[swarm] = "Completed"
            elif future.running():
                status[swarm] = "Running"
            else:
                status[swarm] = "Failed"
        return status

    def cancel_running_swarms(self):
        """
        Cancels all currently running swarms by shutting down the thread pool.
        """
        logger.warning("Cancelling all running swarms...")
        self.thread_pool.shutdown(wait=False)
        logger.info("All running swarms cancelled.")


# Example Usage:


# class ExampleSwarm(BaseSwarm):
#     def run(self, task: str):
#         logger.info(f"ExampleSwarm is processing task: {task}")


# def example_callable(task: str):
#     logger.info(f"Callable is processing task: {task}")


# if __name__ == "__main__":
#     swarms = [(ExampleSwarm(), 1), (example_callable, 2)]
#     federated_swarm = FederatedSwarm(swarms)

#     # Run in parallel
#     federated_swarm.run_parallel(
#         "Process data", timeout=10, retries=3
#     )

#     # Run sequentially
#     federated_swarm.run_sequentially("Process data sequentially")

#     # Log results
#     federated_swarm.log_swarm_results()

#     # Get status of swarms
#     status = federated_swarm.get_swarm_status()
#     logger.info(f"Swarm statuses: {status}")

#     # Cancel running swarms (if needed)
#     # federated_swarm.cancel_running_swarms()
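# Minimal sketch of the runtime-management helpers above, assuming the
# `federated_swarm` and `example_callable` objects from the commented example;
# illustrative only.
#
# federated_swarm.add_swarm(example_callable, priority=0)
# federated_swarm.queue_task("Summarize the quarterly report")
# federated_swarm.queue_task("Draft a follow-up email")
# federated_swarm.process_task_queue()  # runs each queued task via run_parallel
# federated_swarm.log_swarm_results()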
@ -1,214 +0,0 @@
import hashlib
from time import time_ns
from typing import Callable, List, Optional, Sequence, Union

from swarms.structs.agent import Agent
from swarms.utils.loguru_logger import logger
from swarms.structs.base_swarm import BaseSwarm


def _hash(input: str):
    """
    Hashes the input string using SHA256 algorithm.

    Args:
        input (str): The string to be hashed.

    Returns:
        str: The hexadecimal representation of the hash value.
    """
    hex_dig = hashlib.sha256(input.encode("utf-8")).hexdigest()
    return hex_dig


def msg_hash(
    agent: Agent, content: str, turn: int, msg_type: str = "text"
):
    """
    Generate a hash value for a message.

    Args:
        agent (Agent): The agent sending the message.
        content (str): The content of the message.
        turn (int): The turn number of the message.
        msg_type (str, optional): The type of the message. Defaults to "text".

    Returns:
        str: The hash value of the message.
    """
    time = time_ns()
    return _hash(
        f"agent: {agent.agent_name}\ncontent: {content}\ntimestamp:"
        f" {str(time)}\nturn: {turn}\nmsg_type: {msg_type}"
    )


class MessagePool(BaseSwarm):
    """
    A class representing a message pool for agents in a swarm.

    Attributes:
        agents (Optional[Sequence[Agent]]): The list of agents in the swarm.
        moderator (Optional[Agent]): The moderator agent.
        turns (Optional[int]): The number of turns.
        routing_function (Optional[Callable]): The routing function for message distribution.
        show_names (Optional[bool]): Flag indicating whether to show agent names.
        messages (List[Dict]): The list of messages in the pool.

    Examples:
        >>> from swarms.structs.agent import Agent
        >>> from swarms.structs.message_pool import MessagePool
        >>> agent1 = Agent(agent_name="agent1")
        >>> agent2 = Agent(agent_name="agent2")
        >>> agent3 = Agent(agent_name="agent3")
        >>> moderator = Agent(agent_name="moderator")
        >>> agents = [agent1, agent2, agent3]
        >>> message_pool = MessagePool(agents=agents, moderator=moderator, turns=5)
        >>> message_pool.add(agent=agent1, content="Hello, agent2!", turn=1)
        >>> message_pool.add(agent=agent2, content="Hello, agent1!", turn=1)
        >>> message_pool.add(agent=agent3, content="Hello, agent1!", turn=1)
        >>> message_pool.get_all_messages()
        [{'agent': Agent(agent_name='agent1'), 'content': 'Hello, agent2!', 'turn': 1, 'visible_to': 'all', 'logged': True}, {'agent': Agent(agent_name='agent2'), 'content': 'Hello, agent1!', 'turn': 1, 'visible_to': 'all', 'logged': True}, {'agent': Agent(agent_name='agent3'), 'content': 'Hello, agent1!', 'turn': 1, 'visible_to': 'all', 'logged': True}]
        >>> message_pool.get_visible_messages(agent=agent1, turn=1)
        [{'agent': Agent(agent_name='agent1'), 'content': 'Hello, agent2!', 'turn': 1, 'visible_to': 'all', 'logged': True}, {'agent': Agent(agent_name='agent2'), 'content': 'Hello, agent1!', 'turn': 1, 'visible_to': 'all', 'logged': True}, {'agent': Agent(agent_name='agent3'), 'content': 'Hello, agent1!', 'turn': 1, 'visible_to': 'all', 'logged': True}]
        >>> message_pool.get_visible_messages(agent=agent2, turn=1)
        [{'agent': Agent(agent_name='agent1'), 'content': 'Hello, agent2!', 'turn': 1, 'visible_to': 'all', 'logged': True}, {'agent': Agent(agent_name='agent2'), 'content': 'Hello, agent1!', 'turn': 1, 'visible_to': 'all', 'logged': True}, {'agent': Agent(agent_name='agent3'), 'content': 'Hello, agent1!', 'turn': 1, 'visible_to': 'all', 'logged': True}]
    """

    def __init__(
        self,
        agents: Optional[Sequence[Agent]] = None,
        moderator: Optional[Agent] = None,
        turns: Optional[int] = 5,
        routing_function: Optional[Callable] = None,
        show_names: Optional[bool] = False,
        autosave: Optional[bool] = False,
        *args,
        **kwargs,
    ):
        super().__init__()

        self.agent = agents
        self.moderator = moderator
        self.turns = turns
        self.routing_function = routing_function
        self.show_names = show_names
        self.autosave = autosave

        self.messages = []

        logger.info("MessagePool initialized")
        logger.info(f"Number of agents: {len(agents)}")
        logger.info(
            f"Agents: {[agent.agent_name for agent in agents]}"
        )
        logger.info(f"moderator: {moderator.agent_name} is available")
        logger.info(f"Number of turns: {turns}")

    def add(
        self,
        agent: Agent,
        content: str,
        turn: int,
        visible_to: Union[str, List[str]] = "all",
        logged: bool = True,
    ):
        """
        Add a message to the pool.

        Args:
            agent (Agent): The agent sending the message.
            content (str): The content of the message.
            turn (int): The turn number.
            visible_to (Union[str, List[str]], optional): The agents who can see the message. Defaults to "all".
            logged (bool, optional): Flag indicating whether the message should be logged. Defaults to True.
        """

        self.messages.append(
            {
                "agent": agent,
                "content": content,
                "turn": turn,
                "visible_to": visible_to,
                "logged": logged,
            }
        )
        logger.info(f"Message added: {content}")

    def reset(self):
        """
        Reset the message pool.
        """
        self.messages = []
        logger.info("MessagePool reset")

    def last_turn(self):
        """
        Get the last turn number.

        Returns:
            int: The last turn number.
        """
        if len(self.messages) == 0:
            return 0
        else:
            return self.messages[-1]["turn"]

    @property
    def last_message(self):
        """
        Get the last message in the pool.

        Returns:
            dict: The last message.
        """
        if len(self.messages) == 0:
            return None
        else:
            return self.messages[-1]

    def get_all_messages(self):
        """
        Get all messages in the pool.

        Returns:
            List[Dict]: The list of all messages.
        """
        return self.messages

    def get_visible_messages(self, agent: Agent, turn: int):
        """
        Get the visible messages for a given agent and turn.

        Args:
            agent (Agent): The agent.
            turn (int): The turn number.

        Returns:
            List[Dict]: The list of visible messages.
        """
        # Get the messages before the current turn
        prev_messages = [
            message
            for message in self.messages
            if message["turn"] < turn
        ]

        visible_messages = []
        for message in prev_messages:
            if (
                message["visible_to"] == "all"
                or agent.agent_name in message["visible_to"]
            ):
                visible_messages.append(message)
        return visible_messages

    # def query(self, query: str):
    #     """
    #     Query a message from the messages list and then pass it to the moderator
    #     """
    #     return [
    #         (mod, content)
    #         for mod, content, _ in self.messages  # Add an underscore to ignore the rest of the elements
    #         if query in content
    #     ]
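# Minimal sketch of the visibility rules, assuming the agents from the class
# docstring example; illustrative only. get_visible_messages() returns only
# messages from earlier turns that are visible to "all" or that explicitly list
# the requesting agent's name.
#
# pool = MessagePool(agents=[agent1, agent2], moderator=moderator, turns=3)
# pool.add(agent=agent1, content="public note", turn=1)  # visible_to defaults to "all"
# pool.add(agent=agent1, content="private note", turn=1, visible_to=["agent2"])
# pool.get_visible_messages(agent=agent2, turn=2)  # both messages
# pool.get_visible_messages(agent=agent3, turn=2)  # only the public note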
@ -1,16 +0,0 @@
def log_agent_data(data: dict):
    import requests

    data_dict = {
        "data": data,
    }

    url = "https://swarms.world/api/get-agents/log-agents"
    headers = {
        "Content-Type": "application/json",
        "Authorization": "Bearer sk-f24a13ed139f757d99cdd9cdcae710fccead92681606a97086d9711f69d44869",
    }

    response = requests.post(url, json=data_dict, headers=headers)

    return response.json()
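# Minimal usage sketch, assuming the swarms.world endpoint above is reachable
# and the bearer token is valid; the payload fields are illustrative.
#
# payload = {"agent_name": "agent1", "task": "Process data", "status": "Completed"}
# response_json = log_agent_data(payload)
# print(response_json)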
@ -1,91 +0,0 @@
import subprocess
import sys
from loguru import logger
from typing import Tuple, Union, List

# load_dotenv()


# Helper function to lazily install the package if not found
def lazy_install(package: str) -> None:
    try:
        __import__(package)
    except ImportError:
        logger.warning(f"{package} not found. Installing now...")
        subprocess.check_call(
            [sys.executable, "-m", "pip", "install", package]
        )


# Ensure e2b_code_interpreter is installed lazily; this must run before the
# import below, otherwise the import would fail first and defeat the lazy install.
lazy_install("e2b_code_interpreter")

from e2b_code_interpreter import CodeInterpreter  # noqa: E402


def code_interpret(
    code_interpreter: CodeInterpreter, code: str
) -> Union[str, None]:
    """
    Runs AI-generated code using the provided CodeInterpreter and logs the process.

    Args:
        code_interpreter (CodeInterpreter): An instance of the CodeInterpreter class.
        code (str): The code string to be executed.

    Returns:
        Union[str, None]: A formatted string combining the execution results and logs
        if successful, or None if an error occurred.

    Raises:
        ValueError: If the code or code_interpreter is invalid.
    """
    if not isinstance(code_interpreter, CodeInterpreter):
        logger.error("Invalid CodeInterpreter instance provided.")
        raise ValueError(
            "code_interpreter must be an instance of CodeInterpreter."
        )
    if not isinstance(code, str) or not code.strip():
        logger.error("Invalid code provided.")
        raise ValueError("code must be a non-empty string.")

    logger.info(
        f"\n{'='*50}\n> Running the following AI-generated code:\n{code}\n{'='*50}"
    )

    try:
        exec_result = code_interpreter.notebook.exec_cell(
            code,
            # on_stderr=lambda stderr: logger.error(f"[Code Interpreter stderr] {stderr}"),
            # on_stdout=lambda stdout: logger.info(f"[Code Interpreter stdout] {stdout}")
        )

        if exec_result.error:
            logger.error(
                f"[Code Interpreter error] {exec_result.error}"
            )
            return None
        else:
            logger.success("Code executed successfully.")
            # return exec_result.results, exec_result.logs
            # return exec_result.results
            prompt = f"{exec_result.results}: {exec_result.logs}"
            return prompt

    except Exception:
        logger.exception(
            "An error occurred during code interpretation."
        )
        return None


# # from e2b_code_interpreter import CodeInterpreter

# interpreter = CodeInterpreter()
# code = "print('Hello, World!')"

# result = code_interpret(interpreter, code)

# if result:
#     results = result
#     print("Execution Results:", results)
#     # print("Execution Logs:", logs)
@ -1,49 +0,0 @@
import concurrent.futures
from typing import List, Tuple, Any, Dict, Union, Callable


def execute_concurrently(
    callable_functions: List[
        Tuple[Callable, Tuple[Any, ...], Dict[str, Any]]
    ],
    max_workers: int = 5,
) -> List[Union[Any, Exception]]:
    """
    Executes callable functions concurrently using multithreading.

    Parameters:
    - callable_functions: A list of tuples, each containing the callable function and its arguments.
      For example: [(function1, (arg1, arg2), {'kwarg1': val1}), (function2, (), {})]
    - max_workers: The maximum number of threads to use.

    Returns:
    - results: A list of results returned by the callable functions. If an error occurs in any function,
      the exception object will be placed at the corresponding index in the list.
    """
    results = [None] * len(callable_functions)

    def worker(
        fn: Callable,
        args: Tuple[Any, ...],
        kwargs: Dict[str, Any],
        index: int,
    ) -> None:
        try:
            result = fn(*args, **kwargs)
            results[index] = result
        except Exception as e:
            results[index] = e

    with concurrent.futures.ThreadPoolExecutor(
        max_workers=max_workers
    ) as executor:
        futures = []
        for i, (fn, args, kwargs) in enumerate(callable_functions):
            futures.append(
                executor.submit(worker, fn, args, kwargs, i)
            )

        # Wait for all threads to complete
        concurrent.futures.wait(futures)

    return results
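# Minimal usage sketch; the two helper callables are illustrative stand-ins.
if __name__ == "__main__":

    def greet(name: str, punctuation: str = "!") -> str:
        return f"Hello, {name}{punctuation}"

    def explode() -> None:
        raise RuntimeError("boom")

    outputs = execute_concurrently(
        [
            (greet, ("world",), {"punctuation": "?"}),
            (explode, (), {}),
        ],
        max_workers=2,
    )
    # outputs[0] -> "Hello, world?"; outputs[1] -> the RuntimeError instance,
    # preserved at the same index as the callable that raised it.
    print(outputs)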
@ -1,116 +0,0 @@
import functools
import logging
import threading
import time
import warnings


def log_decorator(func):
    def wrapper(*args, **kwargs):
        logging.info(f"Entering {func.__name__}")
        result = func(*args, **kwargs)
        logging.info(f"Exiting {func.__name__}")
        return result

    return wrapper


def error_decorator(func):
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as e:
            logging.error(f"Error in {func.__name__}: {str(e)}")
            raise

    return wrapper


def timing_decorator(func):
    def wrapper(*args, **kwargs):
        start_time = time.time()
        result = func(*args, **kwargs)
        end_time = time.time()
        logging.info(
            f"{func.__name__} executed in"
            f" {end_time - start_time} seconds"
        )
        return result

    return wrapper


def retry_decorator(max_retries=5):
    """
    Decorator that retries a function a specified number of times if an exception occurs.

    Args:
        max_retries (int): The maximum number of times to retry the function.

    Returns:
        function: The decorated function.

    """

    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for _ in range(max_retries):
                try:
                    return func(*args, **kwargs)
                except Exception as error:
                    logging.error(
                        f" Error in {func.__name__}:"
                        f" {str(error)} Retrying ...."
                    )
            return func(*args, **kwargs)

        return wrapper

    return decorator


def singleton_decorator(cls):
    instances = {}

    def wrapper(*args, **kwargs):
        if cls not in instances:
            instances[cls] = cls(*args, **kwargs)
        return instances[cls]

    return wrapper


def synchronized_decorator(func):
    func.__lock__ = threading.Lock()

    def wrapper(*args, **kwargs):
        with func.__lock__:
            return func(*args, **kwargs)

    return wrapper


def deprecated_decorator(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        warnings.warn(
            f"{func.__name__} is deprecated",
            category=DeprecationWarning,
        )
        return func(*args, **kwargs)

    return wrapper


def validate_inputs_decorator(validator):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if not validator(*args, **kwargs):
                raise ValueError("Invalid Inputs")
            return func(*args, **kwargs)

        return wrapper

    return decorator
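# Minimal sketch of composing the decorators above; assumes the standard
# logging module is configured by the caller.
if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)

    @timing_decorator
    @retry_decorator(max_retries=2)
    def sum_of_squares(n: int) -> int:
        # Simple CPU-bound workload; retry_decorator re-runs it on failure.
        return sum(i * i for i in range(n))

    print(sum_of_squares(1_000_000))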
@ -1,127 +0,0 @@
import time
from os import cpu_count
from typing import Any, Callable, List, Optional, Tuple

from loguru import logger
from pathos.multiprocessing import ProcessingPool as Pool


def execute_parallel_optimized(
    callables_with_args: List[
        Tuple[Callable[..., Any], Tuple[Any, ...]]
    ],
    max_workers: Optional[int] = None,
    chunk_size: Optional[int] = None,
    retries: int = 3,
    **kwargs,
) -> List[Any]:
    """
    Executes a list of callables in parallel, leveraging all available CPU cores.

    This function is optimized for high performance and reliability.

    Args:
        callables_with_args (List[Tuple[Callable[..., Any], Tuple[Any, ...]]]):
            A list of tuples, where each tuple contains a callable and a tuple of its arguments.
        max_workers (Optional[int]): The maximum number of workers to use. Defaults to the number of available cores.
        chunk_size (Optional[int]): The size of chunks to split the tasks into for balanced execution. Defaults to automatic chunking.
        retries (int): Number of retries for a failed task. Default is 3.

    Returns:
        List[Any]: A list of results from each callable. The order corresponds to the order of the input list.

    Raises:
        Exception: Any exception raised by the callable will be logged and re-raised after retries are exhausted.
    """
    max_workers = cpu_count() if max_workers is None else max_workers
    results = []
    logger.info(
        f"Starting optimized parallel execution of {len(callables_with_args)} tasks."
    )

    pool = Pool(
        nodes=max_workers, **kwargs
    )  # Initialize the pool once

    def _execute_with_retry(callable_, args, retries):
        attempt = 0
        while attempt < retries:
            try:
                result = callable_(*args)
                logger.info(
                    f"Task {callable_} with args {args} completed successfully."
                )
                return result
            except Exception as e:
                attempt += 1
                logger.warning(
                    f"Task {callable_} with args {args} failed on attempt {attempt}: {e}"
                )
                time.sleep(1)  # Small delay before retrying
                if attempt >= retries:
                    logger.error(
                        f"Task {callable_} with args {args} failed after {retries} retries."
                    )
                    raise

    try:
        if chunk_size is None:
            chunk_size = (
                len(callables_with_args)
                // (max_workers or pool.ncpus)
                or 1
            )

        # Use chunking and mapping for efficient execution
        results = pool.map(
            lambda item: _execute_with_retry(
                item[0], item[1], retries
            ),
            callables_with_args,
            chunksize=chunk_size,
        )

    except Exception as e:
        logger.critical(
            f"Parallel execution failed due to an error: {e}"
        )
        raise

    logger.info(
        f"Optimized parallel execution completed. {len(results)} tasks executed."
    )
    pool.close()  # Ensure pool is properly closed
    pool.join()

    return results


# def add(a, b):
#     return a + b


# def multiply(a, b):
#     return a * b


# def power(a, b):
#     return a**b


# # if __name__ == "__main__":
# #     # List of callables with their respective arguments
# #     callables_with_args = [
# #         (add, (2, 3)),
# #         (multiply, (5, 4)),
# #         (power, (2, 10)),
# #     ]

# #     # Execute the callables in parallel
# #     results = execute_parallel_optimized(callables_with_args)

# #     # Print the results
# #     print("Results:", results)
@ -1,98 +0,0 @@
from functools import wraps
from loguru import logger
import tracemalloc
import psutil
import time
from typing import Callable, Any


def profile_all(func: Callable) -> Callable:
    """
    A decorator to profile memory usage, CPU usage, and I/O operations
    of a function and log the data using loguru.

    It combines tracemalloc for memory profiling, psutil for CPU and I/O operations,
    and measures execution time.

    Args:
        func (Callable): The function to be profiled.

    Returns:
        Callable: The wrapped function with profiling enabled.
    """

    @wraps(func)
    def wrapper(*args: Any, **kwargs: Any) -> Any:
        # Start memory tracking
        tracemalloc.start()

        # Get initial CPU stats
        process = psutil.Process()
        initial_cpu_times = process.cpu_times()

        # Get initial I/O stats if available
        try:
            initial_io_counters = process.io_counters()
            io_tracking_available = True
        except AttributeError:
            logger.warning(
                "I/O counters not available on this platform."
            )
            io_tracking_available = False

        # Start timing the function execution
        start_time = time.time()

        # Execute the function
        result = func(*args, **kwargs)

        # Stop timing
        end_time = time.time()
        execution_time = end_time - start_time

        # Get final CPU stats
        final_cpu_times = process.cpu_times()

        # Get final I/O stats if available
        if io_tracking_available:
            final_io_counters = process.io_counters()
            io_read_count = (
                final_io_counters.read_count
                - initial_io_counters.read_count
            )
            io_write_count = (
                final_io_counters.write_count
                - initial_io_counters.write_count
            )
        else:
            io_read_count = io_write_count = 0

        # Get memory usage statistics
        snapshot = tracemalloc.take_snapshot()
        top_stats = snapshot.statistics("lineno")

        # Calculate CPU usage
        cpu_usage = (
            final_cpu_times.user
            - initial_cpu_times.user
            + final_cpu_times.system
            - initial_cpu_times.system
        )

        # Log the data
        logger.info(f"Execution time: {execution_time:.4f} seconds")
        logger.info(f"CPU usage: {cpu_usage:.2f} seconds")
        if io_tracking_available:
            logger.info(
                f"I/O Operations - Read: {io_read_count}, Write: {io_write_count}"
            )
        logger.info("Top memory usage:")
        for stat in top_stats[:10]:
            logger.info(stat)

        # Stop memory tracking
        tracemalloc.stop()

        return result

    return wrapper
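# Minimal sketch of profiling a small workload with profile_all; timing, CPU,
# I/O, and memory statistics are emitted through the loguru logger.
if __name__ == "__main__":

    @profile_all
    def build_squares(n: int) -> list:
        # Allocates a list so the tracemalloc snapshot has something to report.
        return [i * i for i in range(n)]

    build_squares(500_000)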
@ -1,108 +0,0 @@
import datetime
import os
import platform
import traceback

from loguru import logger

# Remove default logger configuration
logger.remove()

# Define the path for the log folder
log_folder = os.path.join(os.getcwd(), "errors")

try:
    # Create the log folder if it doesn't exist
    os.makedirs(log_folder, exist_ok=True)
except PermissionError:
    logger.error(f"Permission denied: '{log_folder}'")
except Exception as e:
    logger.error(
        f"An error occurred while creating the log folder: {e}"
    )
else:
    # If the folder was created successfully, add a new logger
    logger.add(
        os.path.join(log_folder, "error_{time}.log"),
        level="ERROR",
        format="<red>{time}</red> - <level>{level}</level> - <level>{message}</level>",
    )


def report_error(error: Exception):
    """
    Logs an error message and provides instructions for reporting the issue on Swarms GitHub
    or joining the community on Discord for real-time support.

    Args:
        error (Exception): The exception that occurred.

    Returns:
        None

    Raises:
        None
    """
    # Gather extensive context information
    context_info = {
        "exception_type": type(error).__name__,
        "exception_message": str(error),
        "stack_trace": traceback.format_exc(),
        "timestamp": datetime.datetime.now().isoformat(),
        "python_version": platform.python_version(),
        "platform": platform.platform(),
        "machine": platform.machine(),
        "processor": platform.processor(),
        "user": os.getenv("USER") or os.getenv("USERNAME"),
        "current_working_directory": os.getcwd(),
    }

    error_message = (
        f"\n"
        f"------------------Error: {error}-----------------------\n"
        f"#########################################\n"
        f"#                                       #\n"
        f"#            ERROR DETECTED!            #\n"
        f"#                                       #\n"
        f"#                                       #\n"
        f"#                                       #\n"
        f"#                                       #\n"
        f"#########################################\n"
        f"\n"
        f"Error Message: {context_info['exception_message']} ({context_info['exception_type']})\n"
        f"\n"
        f"Stack Trace:\n{context_info['stack_trace']}\n"
        f"\n"
        f"Context Information:\n"
        f"-----------------------------------------\n"
        f"Timestamp: {context_info['timestamp']}\n"
        f"Python Version: {context_info['python_version']}\n"
        f"Platform: {context_info['platform']}\n"
        f"Machine: {context_info['machine']}\n"
        f"Processor: {context_info['processor']}\n"
        f"User: {context_info['user']}\n"
        f"Current Working Directory: {context_info['current_working_directory']}\n"
        f"-----------------------------------------\n"
        f"\n"
        "Support"
        f"\n"
        f"\n"
        f"To report this issue, please visit the Swarms GitHub Issues page:\n"
        f"https://github.com/kyegomez/swarms/issues\n"
        f"\n"
        f"You can also join the Swarms community on Discord for real-time support:\n"
        f"https://discord.com/servers/agora-999382051935506503\n"
        f"\n"
        f"#########################################\n"
        f"-----------------------------------------\n"
    )

    return logger.error(error_message)


# # Example usage:
# try:
#     # Simulate an error
#     raise ValueError("An example error")
# except Exception as e:
#     report_error(e)
@ -1,125 +0,0 @@
import os
import psutil
from typing import Callable, Any
from loguru import logger
import functools


def run_on_cpu(func: Callable) -> Callable:
    """
    Decorator that ensures the function runs on all available CPU cores,
    maximizing CPU and memory usage to execute the function as quickly as possible.

    This decorator sets the CPU affinity of the current process to all available CPU cores
    before executing the function. After the function completes, the original CPU affinity is restored.

    Args:
        func (Callable): The function to be executed.

    Returns:
        Callable: The wrapped function with CPU affinity settings applied.

    Raises:
        RuntimeError: If the CPU affinity cannot be set or restored.
    """

    @functools.wraps(func)
    def wrapper(*args: Any, **kwargs: Any) -> Any:
        # Get the current process
        process = psutil.Process(os.getpid())

        # Check if the platform supports cpu_affinity
        if not hasattr(process, "cpu_affinity"):
            logger.warning(
                "CPU affinity is not supported on this platform. Executing function without setting CPU affinity."
            )
            return func(*args, **kwargs)

        # Save the original CPU affinity
        original_affinity = process.cpu_affinity()
        logger.info(f"Original CPU affinity: {original_affinity}")

        try:
            # Set the CPU affinity to all available CPU cores
            all_cpus = list(range(os.cpu_count()))
            process.cpu_affinity(all_cpus)
            logger.info(f"Set CPU affinity to: {all_cpus}")

            # Set process priority to high
            try:
                process.nice(psutil.HIGH_PRIORITY_CLASS)
                logger.info("Set process priority to high.")
            except AttributeError:
                logger.warning(
                    "Setting process priority is not supported on this platform."
                )

            # Pre-allocate memory by creating a large array (optional step)
            memory_size = int(
                psutil.virtual_memory().available * 0.9
            )  # 90% of available memory
            try:
                logger.info(
                    f"Pre-allocating memory: {memory_size} bytes"
                )
                _ = bytearray(memory_size)
            except MemoryError:
                logger.error(
                    "Failed to pre-allocate memory, continuing without pre-allocation."
                )

            # Run the function
            result = func(*args, **kwargs)

        except psutil.AccessDenied as e:
            logger.error(
                "Access denied while setting CPU affinity",
                exc_info=True,
            )
            raise RuntimeError(
                "Access denied while setting CPU affinity"
            ) from e

        except psutil.NoSuchProcess as e:
            logger.error("Process does not exist", exc_info=True)
            raise RuntimeError("Process does not exist") from e

        except Exception as e:
            logger.error(
                "An error occurred during function execution",
                exc_info=True,
            )
            raise RuntimeError(
                "An error occurred during function execution"
            ) from e

        finally:
            # Restore the original CPU affinity
            try:
                process.cpu_affinity(original_affinity)
                logger.info(
                    f"Restored original CPU affinity: {original_affinity}"
                )
            except Exception as e:
                logger.error(
                    "Failed to restore CPU affinity", exc_info=True
                )
                raise RuntimeError(
                    "Failed to restore CPU affinity"
                ) from e

        return result

    return wrapper


# # Example usage of the decorator
# @run_on_cpu
# def compute_heavy_task() -> None:
#     # An example task that is CPU and memory intensive
#     data = [i**2 for i in range(100000000)]
#     sum(data)
#     print("Task completed.")


# compute_heavy_task()
@ -1,75 +0,0 @@
from loguru import logger
import sys
import platform
import os
import datetime

# Configuring loguru to log to both the console and a file
logger.remove()  # Remove default logger configuration
logger.add(
    sys.stderr,
    level="INFO",
    format="<green>{time}</green> - <level>{level}</level> - <level>{message}</level>",
)

logger.add(
    "info.log", level="INFO", format="{time} - {level} - {message}"
)


def log_success_message() -> None:
    """
    Logs a success message with instructions for sharing agents on the Swarms Agent Explorer and joining the community for assistance.

    Returns:
        None

    Raises:
        None
    """
    # Gather extensive context information
    context_info = {
        "timestamp": datetime.datetime.now().isoformat(),
        "python_version": platform.python_version(),
        "platform": platform.platform(),
        "machine": platform.machine(),
        "processor": platform.processor(),
        "user": os.getenv("USER") or os.getenv("USERNAME"),
        "current_working_directory": os.getcwd(),
    }

    success_message = (
        f"\n"
        f"#########################################\n"
        f"#                                       #\n"
        f"#        SUCCESSFUL RUN DETECTED!       #\n"
        f"#                                       #\n"
        f"#########################################\n"
        f"\n"
        f"Your task completed successfully!\n"
        f"\n"
        f"Context Information:\n"
        f"-----------------------------------------\n"
        f"Timestamp: {context_info['timestamp']}\n"
        f"Python Version: {context_info['python_version']}\n"
        f"Platform: {context_info['platform']}\n"
        f"Machine: {context_info['machine']}\n"
        f"Processor: {context_info['processor']}\n"
        f"User: {context_info['user']}\n"
        f"Current Working Directory: {context_info['current_working_directory']}\n"
        f"-----------------------------------------\n"
        f"\n"
        f"Share your agents on the Swarms Agent Explorer with friends:\n"
        f"https://swarms.world/platform/explorer\n"
        f"\n"
        f"Join the Swarms community if you want assistance or help debugging:\n"
        f"https://discord.gg/uzu63HQx\n"
        f"\n"
        f"#########################################\n"
    )

    logger.info(success_message)


# Example usage:
# log_success_message()
@ -1,34 +0,0 @@
from typing import Union, Dict, List
from swarms.artifacts.main_artifact import Artifact


def handle_artifact_outputs(
    file_path: str,
    data: Union[str, Dict, List],
    output_type: str = "txt",
    folder_path: str = "./artifacts",
) -> str:
    """
    Handle different types of data and create files in various formats.

    Args:
        file_path: Path where the file should be saved
        data: Input data that can be string, dict or list
        output_type: Type of output file (txt, md, pdf, csv, json)
        folder_path: Folder to save artifacts

    Returns:
        str: Path to the created file
    """
    # Create artifact with appropriate file type
    artifact = Artifact(
        folder_path=folder_path,
        file_path=file_path,
        file_type=output_type,
        contents=data,
        edit_count=0,
    )

    # Save the file
    # artifact.save()
    artifact.save_as(output_format=output_type)

    # Return the path, as promised by the docstring and the return annotation.
    return file_path
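# Minimal usage sketch; assumes the Artifact class imported above accepts the
# keyword arguments shown and that save_as() writes the requested format.
if __name__ == "__main__":
    saved_path = handle_artifact_outputs(
        file_path="report.json",
        data={"status": "ok", "items": [1, 2, 3]},
        output_type="json",
    )
    print(f"Artifact saved, reported path: {saved_path}")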
@ -1,117 +0,0 @@
from swarm_models import OpenAIChat
from swarms.structs.agent import Agent
from swarms.structs.message_pool import MessagePool


def test_message_pool_initialization():
    agent1 = Agent(llm=OpenAIChat(), agent_name="agent1")
    agent2 = Agent(llm=OpenAIChat(), agent_name="agent1")
    moderator = Agent(llm=OpenAIChat(), agent_name="agent1")
    agents = [agent1, agent2]
    message_pool = MessagePool(
        agents=agents, moderator=moderator, turns=5
    )

    assert message_pool.agent == agents
    assert message_pool.moderator == moderator
    assert message_pool.turns == 5
    assert message_pool.messages == []


def test_message_pool_add():
    agent1 = Agent(llm=OpenAIChat(), agent_name="agent1")
    message_pool = MessagePool(
        agents=[agent1], moderator=agent1, turns=5
    )
    message_pool.add(agent=agent1, content="Hello, world!", turn=1)

    assert message_pool.messages == [
        {
            "agent": agent1,
            "content": "Hello, world!",
            "turn": 1,
            "visible_to": "all",
            "logged": True,
        }
    ]


def test_message_pool_reset():
    agent1 = Agent(llm=OpenAIChat(), agent_name="agent1")
    message_pool = MessagePool(
        agents=[agent1], moderator=agent1, turns=5
    )
    message_pool.add(agent=agent1, content="Hello, world!", turn=1)
    message_pool.reset()

    assert message_pool.messages == []


def test_message_pool_last_turn():
    agent1 = Agent(llm=OpenAIChat(), agent_name="agent1")
    message_pool = MessagePool(
        agents=[agent1], moderator=agent1, turns=5
    )
    message_pool.add(agent=agent1, content="Hello, world!", turn=1)

    assert message_pool.last_turn() == 1


def test_message_pool_last_message():
    agent1 = Agent(llm=OpenAIChat(), agent_name="agent1")
    message_pool = MessagePool(
        agents=[agent1], moderator=agent1, turns=5
    )
    message_pool.add(agent=agent1, content="Hello, world!", turn=1)

    assert message_pool.last_message == {
        "agent": agent1,
        "content": "Hello, world!",
        "turn": 1,
        "visible_to": "all",
        "logged": True,
    }


def test_message_pool_get_all_messages():
    agent1 = Agent(llm=OpenAIChat(), agent_name="agent1")
    message_pool = MessagePool(
        agents=[agent1], moderator=agent1, turns=5
    )
    message_pool.add(agent=agent1, content="Hello, world!", turn=1)

    assert message_pool.get_all_messages() == [
        {
            "agent": agent1,
            "content": "Hello, world!",
            "turn": 1,
            "visible_to": "all",
            "logged": True,
        }
    ]


def test_message_pool_get_visible_messages():
    agent1 = Agent(llm=OpenAIChat(), agent_name="agent1")
    agent2 = Agent(agent_name="agent2")
    message_pool = MessagePool(
        agents=[agent1, agent2], moderator=agent1, turns=5
    )
    message_pool.add(
        agent=agent1,
        content="Hello, agent2!",
        turn=1,
        visible_to=[agent2.agent_name],
    )

    assert message_pool.get_visible_messages(
        agent=agent2, turn=2
    ) == [
        {
            "agent": agent1,
            "content": "Hello, agent2!",
            "turn": 1,
            "visible_to": [agent2.agent_name],
            "logged": True,
        }
    ]
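# These tests exercise MessagePool directly; a typical invocation (assuming
# pytest is installed and OpenAIChat can be constructed, e.g. with a valid
# OPENAI_API_KEY in the environment) would be:
#
#   pytest -k message_pool -v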