[FEAT][Better RAG][OUTPUT JSON]

pull/622/head
Your Name 2 months ago
parent 38b138c56e
commit d73f1a68c4

@@ -1 +0,0 @@
-# 5.8.7

@@ -192,6 +192,7 @@ nav:
  - Changelog:
      - Swarms 5.6.8: "swarms/changelog/5_6_8.md"
      - Swarms 5.8.1: "swarms/changelog/5_8_1.md"
+     - Swarms 5.9.2: "swarms/changelog/changelog_new.md"
  - Swarm Models:
      - Overview: "swarms/models/index.md"
      # - Models Available: "swarms/models/index.md"

@@ -0,0 +1,90 @@
# 🚀 Swarms 5.9.2 Release Notes
### 🎯 Major Features
#### Concurrent Agent Execution Suite
We're excited to introduce a comprehensive suite of agent execution methods to supercharge your multi-agent workflows:
- `run_agents_concurrently`: Execute multiple agents in parallel with optimal resource utilization
- `run_agents_concurrently_async`: Asynchronous execution for improved performance
- `run_single_agent`: Streamlined single agent execution
- `run_agents_concurrently_multiprocess`: Multi-process execution for CPU-intensive tasks
- `run_agents_sequentially`: Sequential execution with controlled flow
- `run_agents_with_different_tasks`: Assign different tasks to different agents
- `run_agent_with_timeout`: Time-bounded agent execution
- `run_agents_with_resource_monitoring`: Monitor and manage resource usage (see the sketch after this list)
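
For instance, the resource-monitoring runner caps batch sizing by CPU and memory pressure. A minimal usage sketch, assuming the default thresholds from this release's `multi_agent_exec` module (`cpu_threshold=90.0`, `memory_threshold=90.0`, `check_interval=1.0`) and agents initialized as in the Quick Start below:

```python
from swarms import run_agents_with_resource_monitoring

# `agents` is a list of initialized Agent instances (see Quick Start below)
outputs = run_agents_with_resource_monitoring(
    agents=agents,
    task="Summarize the latest quarterly earnings reports",
    cpu_threshold=90.0,     # throttle when CPU usage exceeds 90%
    memory_threshold=90.0,  # throttle when memory usage exceeds 90%
    check_interval=1.0,     # poll system metrics once per second
)
```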
### 📚 Documentation
- Comprehensive documentation added for all new execution methods
- Updated examples and usage patterns
- Enhanced API reference
### 🛠️ Improvements
- Tree swarm implementation fixes
- Workspace directory now automatically set to `agent_workspace`
- Improved error handling and stability
## Quick Start
```python
import os

from swarm_models import OpenAIChat
from swarms import (
    Agent,
    run_agents_concurrently,
    run_agents_with_timeout,
    run_agents_with_different_tasks,
)

# Initialize a shared model so the agents reuse one instance
model = OpenAIChat(
    api_key=os.getenv("OPENAI_API_KEY"),
    model_name="gpt-4o-mini",
    temperature=0.1,
)

# Initialize multiple agents
agents = [
    Agent(
        agent_name=f"Analysis-Agent-{i}",
        system_prompt="You are a financial analysis expert",
        llm=model,
        max_loops=1,
    )
    for i in range(5)
]

# Run agents concurrently on the same task
task = "Analyze the impact of rising interest rates on tech stocks"
outputs = run_agents_concurrently(agents, task)

# Example with a per-agent timeout, processed in batches of two
outputs_with_timeout = run_agents_with_timeout(
    agents=agents,
    task=task,
    timeout=30.0,
    batch_size=2,
)

# Run different tasks on different agents
task_pairs = [
    (agents[0], "Analyze tech stocks"),
    (agents[1], "Analyze energy stocks"),
    (agents[2], "Analyze retail stocks"),
]
different_outputs = run_agents_with_different_tasks(task_pairs)
```
## Installation
```bash
pip3 install -U swarms
```
## Coming Soon
- 🌟 Auto Swarm Builder: Automatically construct and configure entire swarms from a single task specification (in development)
- Auto Prompt Generator for thousands of agents (in development)
## Community
We believe in the power of community-driven development. Help us make Swarms better!
- ⭐ Star our repository: https://github.com/kyegomez/swarms
- 🔄 Fork the project and contribute your improvements
- 🤝 Join our growing community of contributors
## Bug Fixes
- Fixed Tree Swarm implementation issues
- Resolved workspace directory configuration problems
- General stability improvements
---
For detailed documentation and examples, visit our [GitHub repository](https://github.com/kyegomez/swarms).
Let's build the future of multi-agent systems together! 🚀

@@ -156,7 +156,6 @@ graph TD
| `save_to_yaml(file_path)` | Saves the agent to a YAML file. | `file_path` (str): Path to save the YAML file. | `agent.save_to_yaml("agent_config.yaml")` |
| `get_llm_parameters()` | Returns the parameters of the language model. | None | `llm_params = agent.get_llm_parameters()` |
| `save_state(file_path, *args, **kwargs)` | Saves the current state of the agent to a JSON file. | `file_path` (str): Path to save the JSON file.<br>`*args`, `**kwargs`: Additional arguments. | `agent.save_state("agent_state.json")` |
-| `load_state(file_path)` | Loads the state of the agent from a JSON file. | `file_path` (str): Path to the JSON file. | `agent.load_state("agent_state.json")` |
| `update_system_prompt(system_prompt)` | Updates the system prompt. | `system_prompt` (str): New system prompt. | `agent.update_system_prompt("New system instructions")` |
| `update_max_loops(max_loops)` | Updates the maximum number of loops. | `max_loops` (int): New maximum number of loops. | `agent.update_max_loops(5)` |
| `update_loop_interval(loop_interval)` | Updates the loop interval. | `loop_interval` (int): New loop interval. | `agent.update_loop_interval(2)` |
@@ -184,11 +183,9 @@ graph TD
| `check_available_tokens()` | Checks and returns the number of available tokens. | None | `available_tokens = agent.check_available_tokens()` |
| `tokens_checks()` | Performs token checks and returns available tokens. | None | `token_info = agent.tokens_checks()` |
| `truncate_string_by_tokens(input_string, limit)` | Truncates a string to fit within a token limit. | `input_string` (str): String to truncate.<br>`limit` (int): Token limit. | `truncated_string = agent.truncate_string_by_tokens("Long string", 100)` |
-| `if_tokens_exceeds_context_length()` | Checks if the number of tokens exceeds the context length. | None | `exceeds = agent.if_tokens_exceeds_context_length()` |
| `tokens_operations(input_string)` | Performs various token-related operations on the input string. | `input_string` (str): String to process. | `processed_string = agent.tokens_operations("Input string")` |
| `parse_function_call_and_execute(response)` | Parses a function call from the response and executes it. | `response` (str): Response containing the function call. | `result = agent.parse_function_call_and_execute(response)` |
| `activate_agentops()` | Activates AgentOps functionality. | None | `agent.activate_agentops()` |
-| `count_tokens_and_subtract_from_context_window(response, *args, **kwargs)` | Counts tokens in the response and adjusts the context window. | `response` (str): Response to process.<br>`*args`, `**kwargs`: Additional arguments. | `await agent.count_tokens_and_subtract_from_context_window(response)` |
| `llm_output_parser(response)` | Parses the output from the language model. | `response` (Any): Response from the LLM. | `parsed_response = agent.llm_output_parser(llm_output)` |
| `log_step_metadata(loop, task, response)` | Logs metadata for each step of the agent's execution. | `loop` (int): Current loop number.<br>`task` (str): Current task.<br>`response` (str): Agent's response. | `agent.log_step_metadata(1, "Analyze data", "Analysis complete")` |
| `to_dict()` | Converts the agent's attributes to a dictionary. | None | `agent_dict = agent.to_dict()` |
@@ -391,7 +388,7 @@ agent.save_state('saved_flow.json')
# Load the agent state
agent = Agent(llm=llm_instance, max_loops=5)
-agent.load_state('saved_flow.json')
+agent.load('saved_flow.json')
agent.run("Continue with the task")
```
@@ -537,7 +534,7 @@ print(agent.system_prompt)
4. Leverage `long_term_memory` for tasks that require persistent information.
5. Use `interactive` mode for real-time conversations and `dashboard` for monitoring.
6. Implement `sentiment_analysis` for applications requiring tone management.
-7. Utilize `autosave` and `save_state`/`load_state` methods for continuity across sessions.
+7. Utilize `autosave` and `save`/`load` methods for continuity across sessions.
8. Optimize token usage with `dynamic_context_window` and `tokens_checks` methods.
9. Use `concurrent` and `async` methods for performance-critical applications.
10. Regularly review and analyze feedback using the `analyze_feedback` method.

@@ -455,7 +455,7 @@ agent.save_state('saved_flow.json')
# Load the agent state
agent = Agent(llm=llm_instance, max_loops=5)
-agent.load_state('saved_flow.json')
+agent.load('saved_flow.json')
agent.run("Continue with the task")
```

@@ -2,9 +2,6 @@ import os
from swarms import Agent
from swarm_models import OpenAIChat
-from swarms.prompts.finance_agent_sys_prompt import (
-    FINANCIAL_AGENT_SYS_PROMPT,
-)
from dotenv import load_dotenv

load_dotenv()
@@ -20,9 +17,9 @@ model = OpenAIChat(
# Initialize the agent
agent = Agent(
    agent_name="Financial-Analysis-Agent",
-    system_prompt=FINANCIAL_AGENT_SYS_PROMPT,
+    # system_prompt=FINANCIAL_AGENT_SYS_PROMPT,
    llm=model,
-    max_loops=1,
+    max_loops=3,
    autosave=True,
    dashboard=False,
    verbose=True,
@@ -31,7 +28,7 @@ agent = Agent(
    user_name="swarms_corp",
    retry_attempts=1,
    context_length=200000,
-    return_step_meta=False,
+    return_step_meta=True,
    # output_type="json",
    output_type="json",  # "json", "dict", "csv" OR "string" soon "yaml" and
    streaming_on=False,
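
With `return_step_meta=True` and `output_type="json"`, the example agent's output should be machine-parseable. A hedged sketch of consuming it, assuming the JSON output type yields a JSON-formatted string from `agent.run` (the task prompt is illustrative):

```python
import json

# Assumption: with output_type="json", agent.run returns a JSON string
raw_output = agent.run(
    "How can I establish a ROTH IRA to buy stocks and get a tax break?"
)
parsed = json.loads(raw_output)
print(parsed)
```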

@@ -1,30 +0,0 @@
from swarms import Prompt
from swarm_models import OpenAIChat
import os

model = OpenAIChat(
    api_key=os.getenv("OPENAI_API_KEY"),
    model_name="gpt-4o-mini",
    temperature=0.1,
)

# Aggregator system prompt
prompt_generator_sys_prompt = Prompt(
    name="prompt-generator-sys-prompt-o1",
    description="Generate the most reliable prompt for a specific problem",
    content="""
    Your purpose is to craft extremely reliable and production-grade system prompts for other agents.

    # Instructions
    - Understand the prompt required for the agent.
    - Utilize a combination of the most effective prompting strategies available, including chain of thought, many shot, few shot, and instructions-examples-constraints.
    - Craft the prompt by blending the most suitable prompting strategies.
    - Ensure the prompt is production-grade ready and educates the agent on how to reason and why to reason in that manner.
    - Provide constraints if necessary and as needed.
    - The system prompt should be extensive and cover a vast array of potential scenarios to specialize the agent.
    """,
    auto_generate_prompt=True,
    llm=model,
)

# print(prompt_generator_sys_prompt.get_prompt())

@@ -5,7 +5,7 @@ build-backend = "poetry.core.masonry.api"
[tool.poetry]
name = "swarms"
-version = "5.9.2"
+version = "6.0.0"
description = "Swarms - Pytorch"
license = "MIT"
authors = ["Kye Gomez <kye@apac.ai>"]

@@ -8,6 +8,7 @@ from pydantic import BaseModel, Field
from swarms.schemas.base_schemas import (
    AgentChatCompletionResponse,
)
+from typing import Union


def get_current_time():
@@ -56,7 +57,7 @@ class ManySteps(BaseModel):
        description="The ID of the task this step belongs to.",
        examples=["50da533e-3904-4401-8a07-c49adf88b5eb"],
    )
-    steps: Optional[List[Step]] = Field(
+    steps: Optional[List[Union[Step, Any]]] = Field(
        [],
        description="The steps of the task.",
    )
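
Widening `steps` to `Union[Step, Any]` lets non-`Step` payloads pass validation alongside validated `Step` objects. A standalone sketch of the pattern with a hypothetical `MiniStep` model (not the real schema):

```python
from typing import Any, List, Optional, Union

from pydantic import BaseModel, Field


class MiniStep(BaseModel):
    name: str


class MiniManySteps(BaseModel):
    # Union[MiniStep, Any] lets raw, unvalidated payloads sit beside
    # validated MiniStep instances in the same list
    steps: Optional[List[Union[MiniStep, Any]]] = Field(default=[])


mixed = MiniManySteps(steps=[MiniStep(name="plan"), {"raw": "payload"}])
print(mixed.steps)  # [MiniStep(name='plan'), {'raw': 'payload'}]
```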

@@ -76,7 +76,6 @@ from swarms.structs.multi_agent_exec import (
    run_agents_with_different_tasks,
    run_agent_with_timeout,
    run_agents_with_resource_monitoring,
)

__all__ = [

File diff suppressed because it is too large.

@@ -1,11 +1,11 @@
import datetime
import json
-from typing import Optional
+from typing import Any, Optional

+import yaml
from termcolor import colored

from swarms.structs.base_structure import BaseStructure
-from typing import Any


class Conversation(BaseStructure):
@@ -96,10 +96,10 @@ class Conversation(BaseStructure):
            self.add("System: ", self.system_prompt)

        if self.rules is not None:
-            self.add(user, rules)
+            self.add("User", rules)

        if custom_rules_prompt is not None:
-            self.add(user, custom_rules_prompt)
+            self.add(user or "User", custom_rules_prompt)

        # If tokenizer then truncate
        if tokenizer is not None:
@@ -245,6 +245,9 @@ class Conversation(BaseStructure):
            ]
        )

+    def get_str(self):
+        return self.return_history_as_string()
+
    def save_as_json(self, filename: str = None):
        """Save the conversation history as a JSON file
@@ -379,3 +382,21 @@ class Conversation(BaseStructure):
    def clear(self):
        self.conversation_history = []

+    def to_json(self):
+        return json.dumps(self.conversation_history)
+
+    def to_dict(self):
+        return self.conversation_history
+
+    def to_yaml(self):
+        return yaml.dump(self.conversation_history)
+
+
+# # Example usage
+# conversation = Conversation()
+# conversation.add("user", "Hello, how are you?")
+# conversation.add("assistant", "I am doing well, thanks.")
+# # print(conversation.to_json())
+# print(type(conversation.to_dict()))
+# # print(conversation.to_yaml())
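
Taken together with the commented example above, the new helpers round-trip the history into JSON, dict, and YAML. A short usage sketch, assuming `Conversation` is importable from `swarms.structs.conversation`:

```python
from swarms.structs.conversation import Conversation

conversation = Conversation()
conversation.add("user", "Hello, how are you?")
conversation.add("assistant", "I am doing well, thanks.")

print(conversation.to_json())  # JSON string of the message history
print(conversation.to_dict())  # the raw history list
print(conversation.to_yaml())  # YAML dump of the same history
print(conversation.get_str())  # formatted history string
```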

@@ -137,8 +137,11 @@ def run_agents_concurrently_multiprocess(
    return results


@profile_func
-def run_agents_sequentially(agents: List[AgentType], task: str) -> List[Any]:
+def run_agents_sequentially(
+    agents: List[AgentType], task: str
+) -> List[Any]:
    """
    Run multiple agents sequentially for baseline comparison.
@@ -156,7 +159,7 @@ def run_agents_sequentially(agents: List[AgentType], task: str) -> List[Any]:
def run_agents_with_different_tasks(
    agent_task_pairs: List[tuple[AgentType, str]],
    batch_size: int = None,
-    max_workers: int = None
+    max_workers: int = None,
) -> List[Any]:
    """
    Run multiple agents with different tasks concurrently.
@@ -169,7 +172,10 @@ def run_agents_with_different_tasks(
    Returns:
        List of outputs from each agent
    """

-    async def run_pair_async(pair: tuple[AgentType, str], executor: ThreadPoolExecutor) -> Any:
+    async def run_pair_async(
+        pair: tuple[AgentType, str], executor: ThreadPoolExecutor
+    ) -> Any:
        agent, task = pair
        return await run_agent_async(agent, task, executor)
@@ -188,7 +194,12 @@ def run_agents_with_different_tasks(
    for i in range(0, len(agent_task_pairs), batch_size):
        batch = agent_task_pairs[i : i + batch_size]
        batch_results = loop.run_until_complete(
-            asyncio.gather(*(run_pair_async(pair, executor) for pair in batch))
+            asyncio.gather(
+                *(
+                    run_pair_async(pair, executor)
+                    for pair in batch
+                )
+            )
        )
        results.extend(batch_results)
@@ -199,7 +210,7 @@ async def run_agent_with_timeout(
    agent: AgentType,
    task: str,
    timeout: float,
-    executor: ThreadPoolExecutor
+    executor: ThreadPoolExecutor,
) -> Any:
    """
    Run an agent with a timeout limit.
@@ -215,19 +226,19 @@ async def run_agent_with_timeout(
    """
    try:
        return await asyncio.wait_for(
-            run_agent_async(agent, task, executor),
-            timeout=timeout
+            run_agent_async(agent, task, executor), timeout=timeout
        )
    except asyncio.TimeoutError:
        return None


@profile_func
def run_agents_with_timeout(
    agents: List[AgentType],
    task: str,
    timeout: float,
    batch_size: int = None,
-    max_workers: int = None
+    max_workers: int = None,
) -> List[Any]:
    """
    Run multiple agents concurrently with a timeout for each agent.
@@ -258,8 +269,12 @@ def run_agents_with_timeout(
        batch = agents[i : i + batch_size]
        batch_results = loop.run_until_complete(
            asyncio.gather(
-                *(run_agent_with_timeout(agent, task, timeout, executor)
-                  for agent in batch)
+                *(
+                    run_agent_with_timeout(
+                        agent, task, timeout, executor
+                    )
+                    for agent in batch
+                )
            )
        )
        results.extend(batch_results)
@@ -267,30 +282,29 @@ def run_agents_with_timeout(
    return results


@dataclass
class ResourceMetrics:
    cpu_percent: float
    memory_percent: float
    active_threads: int


def get_system_metrics() -> ResourceMetrics:
    """Get current system resource usage"""
    return ResourceMetrics(
        cpu_percent=psutil.cpu_percent(),
        memory_percent=psutil.virtual_memory().percent,
-        active_threads=threading.active_count()
+        active_threads=threading.active_count(),
    )


@profile_func
def run_agents_with_resource_monitoring(
    agents: List[AgentType],
    task: str,
    cpu_threshold: float = 90.0,
    memory_threshold: float = 90.0,
-    check_interval: float = 1.0
+    check_interval: float = 1.0,
) -> List[Any]:
    """
    Run agents with system resource monitoring and adaptive batch sizing.
@@ -305,16 +319,21 @@ def run_agents_with_resource_monitoring(
    Returns:
        List of outputs from each agent
    """

    async def monitor_resources():
        while True:
            metrics = get_system_metrics()
-            if metrics.cpu_percent > cpu_threshold or metrics.memory_percent > memory_threshold:
+            if (
+                metrics.cpu_percent > cpu_threshold
+                or metrics.memory_percent > memory_threshold
+            ):
                # Reduce batch size or pause execution
                pass
            await asyncio.sleep(check_interval)

    # Implementation details...


# # Example usage:
# # Initialize your agents with the same model to avoid re-creating it
# agents = [
@@ -341,4 +360,3 @@ def run_agents_with_resource_monitoring(
# for i, output in enumerate(outputs):
#     print(f"Output from agent {i+1}:\n{output}")

@@ -1,15 +1,17 @@
import json
-from typing import List
+from typing import List, Any, Callable

from swarms.utils.loguru_logger import logger
from swarms.utils.parse_code import extract_code_from_markdown


def parse_and_execute_json(
-    functions: List[callable] = None,
-    json_string: str = None,
+    functions: List[Callable[..., Any]],
+    json_string: str,
    parse_md: bool = False,
-):
+    verbose: bool = False,
+    return_str: bool = True,
+) -> dict:
    """
    Parses and executes a JSON string containing function names and parameters.
@@ -17,191 +19,104 @@ def parse_and_execute_json(
        functions (List[callable]): A list of callable functions.
        json_string (str): The JSON string to parse and execute.
        parse_md (bool): Flag indicating whether to extract code from Markdown.
+        verbose (bool): Flag indicating whether to enable verbose logging.
+        return_str (bool): Flag indicating whether to return a JSON string.

    Returns:
-        A dictionary containing the results of executing the functions with the parsed parameters.
+        dict: A dictionary containing the results of executing the functions with the parsed parameters.
    """
+    if not functions or not json_string:
+        raise ValueError("Functions and JSON string are required")
+
    if parse_md:
        json_string = extract_code_from_markdown(json_string)

    try:
-        # Create a dictionary that maps function names to functions
+        # Create function name to function mapping
        function_dict = {func.__name__: func for func in functions}

+        if verbose:
+            logger.info(
+                f"Available functions: {list(function_dict.keys())}"
+            )
+            logger.info(f"Processing JSON: {json_string}")
+
+        # Parse JSON data
        data = json.loads(json_string)
-        function_list = (
-            data.get("functions", [])
-            if data.get("functions")
-            else [data.get("function", [])]
-        )

+        # Handle both single function and function list formats
+        function_list = []
+        if "functions" in data:
+            function_list = data["functions"]
+        elif "function" in data:
+            function_list = [data["function"]]
+        else:
+            function_list = [
+                data
+            ]  # Assume entire object is single function
+
+        # Ensure function_list is a list and filter None values
+        if isinstance(function_list, dict):
+            function_list = [function_list]
+        function_list = [f for f in function_list if f]
+
+        if verbose:
+            logger.info(f"Processing {len(function_list)} functions")
+
        results = {}
        for function_data in function_list:
            function_name = function_data.get("name")
-            parameters = function_data.get("parameters")
+            parameters = function_data.get("parameters", {})

-            # Check if the function name is in the function dictionary
-            if function_name in function_dict:
-                # Call the function with the parsed parameters
-                result = function_dict[function_name](**parameters)
-                results[function_name] = str(result)
-            else:
-                results[function_name] = None
+            if not function_name:
+                logger.warning("Function data missing name field")
+                continue

-        return results
+            if verbose:
+                logger.info(
+                    f"Executing {function_name} with params: {parameters}"
+                )
+
+            if function_name not in function_dict:
+                logger.warning(f"Function {function_name} not found")
+                results[function_name] = None
+                continue
+
+            try:
+                result = function_dict[function_name](**parameters)
+                results[function_name] = str(result)
+                if verbose:
+                    logger.info(
+                        f"Result for {function_name}: {result}"
+                    )
+            except Exception as e:
+                logger.error(
+                    f"Error executing {function_name}: {str(e)}"
+                )
+                results[function_name] = f"Error: {str(e)}"
+
+        # Format final results
+        if len(results) == 1:
+            # Return single result directly
+            data = {"result": next(iter(results.values()))}
+        else:
+            # Return all results
+            data = {
+                "results": results,
+                "summary": "\n".join(
+                    f"{k}: {v}" for k, v in results.items()
+                ),
+            }
+
+        if return_str:
+            return json.dumps(data)
+        else:
+            return data
+
+    except json.JSONDecodeError as e:
+        error = f"Invalid JSON format: {str(e)}"
+        logger.error(error)
+        return {"error": error}
    except Exception as e:
-        logger.error(f"Error parsing and executing JSON: {e}")
-        return None
+        error = f"Error parsing and executing JSON: {str(e)}"
+        logger.error(error)
+        return {"error": error}
# def parse_and_execute_json(
# functions: List[Callable[..., Any]],
# json_string: str = None,
# parse_md: bool = False,
# verbose: bool = False,
# ) -> Dict[str, Any]:
# """
# Parses and executes a JSON string containing function names and parameters.
# Args:
# functions (List[Callable]): A list of callable functions.
# json_string (str): The JSON string to parse and execute.
# parse_md (bool): Flag indicating whether to extract code from Markdown.
# verbose (bool): Flag indicating whether to enable verbose logging.
# Returns:
# Dict[str, Any]: A dictionary containing the results of executing the functions with the parsed parameters.
# """
# if parse_md:
# json_string = extract_code_from_markdown(json_string)
# logger.info("Number of functions: " + str(len(functions)))
# try:
# # Create a dictionary that maps function names to functions
# function_dict = {func.__name__: func for func in functions}
# data = json.loads(json_string)
# function_list = data.get("functions") or [data.get("function")]
# # Ensure function_list is a list and filter out None values
# if isinstance(function_list, dict):
# function_list = [function_list]
# else:
# function_list = [f for f in function_list if f]
# results = {}
# # Determine if concurrency is needed
# concurrency = len(function_list) > 1
# if concurrency:
# with concurrent.futures.ThreadPoolExecutor() as executor:
# future_to_function = {
# executor.submit(
# execute_and_log_function,
# function_dict,
# function_data,
# verbose,
# ): function_data
# for function_data in function_list
# }
# for future in concurrent.futures.as_completed(
# future_to_function
# ):
# function_data = future_to_function[future]
# try:
# result = future.result()
# results.update(result)
# except Exception as e:
# if verbose:
# logger.error(
# f"Error executing function {function_data.get('name')}: {e}"
# )
# results[function_data.get("name")] = None
# else:
# for function_data in function_list:
# function_name = function_data.get("name")
# parameters = function_data.get("parameters")
# if verbose:
# logger.info(
# f"Executing function: {function_name} with parameters: {parameters}"
# )
# if function_name in function_dict:
# try:
# result = function_dict[function_name](**parameters)
# results[function_name] = str(result)
# if verbose:
# logger.info(
# f"Result for function {function_name}: {result}"
# )
# except Exception as e:
# if verbose:
# logger.error(
# f"Error executing function {function_name}: {e}"
# )
# results[function_name] = None
# else:
# if verbose:
# logger.warning(
# f"Function {function_name} not found."
# )
# results[function_name] = None
# # Merge all results into a single string
# merged_results = "\n".join(
# f"{key}: {value}" for key, value in results.items()
# )
# return {"merged_results": merged_results}
# except Exception as e:
# logger.error(f"Error parsing and executing JSON: {e}")
# return None
# def execute_and_log_function(
# function_dict: Dict[str, Callable],
# function_data: Dict[str, Any],
# verbose: bool,
# ) -> Dict[str, Any]:
# """
# Executes a function from a given dictionary of functions and logs the execution details.
# Args:
# function_dict (Dict[str, Callable]): A dictionary containing the available functions.
# function_data (Dict[str, Any]): A dictionary containing the function name and parameters.
# verbose (bool): A flag indicating whether to log the execution details.
# Returns:
# Dict[str, Any]: A dictionary containing the function name and its result.
# """
# function_name = function_data.get("name")
# parameters = function_data.get("parameters")
# if verbose:
# logger.info(
# f"Executing function: {function_name} with parameters: {parameters}"
# )
# if function_name in function_dict:
# try:
# result = function_dict[function_name](**parameters)
# if verbose:
# logger.info(
# f"Result for function {function_name}: {result}"
# )
# return {function_name: str(result)}
# except Exception as e:
# if verbose:
# logger.error(
# f"Error executing function {function_name}: {e}"
# )
# return {function_name: None}
# else:
# if verbose:
# logger.warning(f"Function {function_name} not found.")
# return {function_name: None}
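
A usage sketch of the rewritten parser, exercising the `"functions"`-list and single-`"function"` payload shapes the new code branches on (behavior inferred from the diff above):

```python
from swarms.tools.tool_parse_exec import parse_and_execute_json


def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b


# "functions" list format; return_str defaults to True, so a JSON
# string comes back
payload = '{"functions": [{"name": "add", "parameters": {"a": 2, "b": 3}}]}'
print(parse_and_execute_json([add], payload))
# -> '{"result": "5"}'

# Single "function" format, returned as a dict instead of a string
payload = '{"function": {"name": "add", "parameters": {"a": 1, "b": 1}}}'
print(parse_and_execute_json([add], payload, return_str=False))
# -> {'result': '2'}
```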

@@ -1,9 +1,5 @@
-from unittest.mock import Mock, MagicMock
-from dataclasses import dataclass, field, asdict
-from typing import List, Dict, Any
-from datetime import datetime
+from unittest.mock import MagicMock
import unittest
-from swarms.schemas.agent_step_schemas import ManySteps, Step
from swarms.structs.agent import Agent
from swarms.tools.tool_parse_exec import parse_and_execute_json
@@ -12,61 +8,80 @@ parse_and_execute_json = MagicMock()
parse_and_execute_json.return_value = {
    "tool_name": "calculator",
    "args": {"numbers": [2, 2]},
-    "output": "4"
+    "output": "4",
}


class TestAgentLogging(unittest.TestCase):
    def setUp(self):
        self.mock_tokenizer = MagicMock()
        self.mock_tokenizer.count_tokens.return_value = 100

        self.mock_short_memory = MagicMock()
-        self.mock_short_memory.get_memory_stats.return_value = {"message_count": 2}
+        self.mock_short_memory.get_memory_stats.return_value = {
+            "message_count": 2
+        }

        self.mock_long_memory = MagicMock()
-        self.mock_long_memory.get_memory_stats.return_value = {"item_count": 5}
+        self.mock_long_memory.get_memory_stats.return_value = {
+            "item_count": 5
+        }

        self.agent = Agent(
            tokenizer=self.mock_tokenizer,
            short_memory=self.mock_short_memory,
-            long_term_memory=self.mock_long_memory
+            long_term_memory=self.mock_long_memory,
        )

    def test_log_step_metadata_basic(self):
-        log_result = self.agent.log_step_metadata(1, "Test prompt", "Test response")
+        log_result = self.agent.log_step_metadata(
+            1, "Test prompt", "Test response"
+        )

-        self.assertIn('step_id', log_result)
-        self.assertIn('timestamp', log_result)
-        self.assertIn('tokens', log_result)
-        self.assertIn('memory_usage', log_result)
-        self.assertEqual(log_result['tokens']['total'], 200)
+        self.assertIn("step_id", log_result)
+        self.assertIn("timestamp", log_result)
+        self.assertIn("tokens", log_result)
+        self.assertIn("memory_usage", log_result)
+        self.assertEqual(log_result["tokens"]["total"], 200)

    def test_log_step_metadata_no_long_term_memory(self):
        self.agent.long_term_memory = None
-        log_result = self.agent.log_step_metadata(1, "prompt", "response")
-        self.assertEqual(log_result['memory_usage']['long_term'], {})
+        log_result = self.agent.log_step_metadata(
+            1, "prompt", "response"
+        )
+        self.assertEqual(log_result["memory_usage"]["long_term"], {})

    def test_log_step_metadata_timestamp(self):
-        log_result = self.agent.log_step_metadata(1, "prompt", "response")
-        self.assertIn('timestamp', log_result)
+        log_result = self.agent.log_step_metadata(
+            1, "prompt", "response"
+        )
+        self.assertIn("timestamp", log_result)

    def test_token_counting_integration(self):
        self.mock_tokenizer.count_tokens.side_effect = [150, 250]
-        log_result = self.agent.log_step_metadata(1, "prompt", "response")
+        log_result = self.agent.log_step_metadata(
+            1, "prompt", "response"
+        )

-        self.assertEqual(log_result['tokens']['total'], 400)
+        self.assertEqual(log_result["tokens"]["total"], 400)

    def test_agent_output_updating(self):
-        initial_total_tokens = sum(step['tokens']['total'] for step in self.agent.agent_output.steps)
-        self.agent.log_step_metadata(1, "prompt", "response")
-
-        final_total_tokens = sum(step['tokens']['total'] for step in self.agent.agent_output.steps)
-        self.assertEqual(
-            final_total_tokens - initial_total_tokens,
-            200
-        )
-        self.assertEqual(len(self.agent.agent_output.steps), 1)
+        initial_total_tokens = sum(
+            step["tokens"]["total"]
+            for step in self.agent.agent_output.steps
+        )
+        self.agent.log_step_metadata(1, "prompt", "response")
+
+        final_total_tokens = sum(
+            step["tokens"]["total"]
+            for step in self.agent.agent_output.steps
+        )
+        self.assertEqual(
+            final_total_tokens - initial_total_tokens, 200
+        )
+        self.assertEqual(len(self.agent.agent_output.steps), 1)


class TestAgentLoggingIntegration(unittest.TestCase):
    def setUp(self):
@@ -79,20 +94,21 @@ class TestAgentLoggingIntegration(unittest.TestCase):
        result = self.agent._run(task, max_loops=max_loops)

        self.assertIsInstance(result, dict)
-        self.assertIn('steps', result)
-        self.assertIsInstance(result['steps'], list)
-        self.assertEqual(len(result['steps']), max_loops)
+        self.assertIn("steps", result)
+        self.assertIsInstance(result["steps"], list)
+        self.assertEqual(len(result["steps"]), max_loops)

-        if result['steps']:
-            step = result['steps'][0]
-            self.assertIn('step_id', step)
-            self.assertIn('timestamp', step)
-            self.assertIn('task', step)
-            self.assertIn('response', step)
-            self.assertEqual(step['task'], task)
-            self.assertEqual(step['response'], f"Response for loop 1")
+        if result["steps"]:
+            step = result["steps"][0]
+            self.assertIn("step_id", step)
+            self.assertIn("timestamp", step)
+            self.assertIn("task", step)
+            self.assertIn("response", step)
+            self.assertEqual(step["task"], task)
+            self.assertEqual(step["response"], "Response for loop 1")

        self.assertTrue(len(self.agent.agent_output.steps) > 0)


-if __name__ == '__main__':
+if __name__ == "__main__":
    unittest.main()

@@ -541,7 +541,7 @@ def test_flow_load_state(flow_instance):
        "max_loops": 10,
        "autosave_path": "/path/to/load",
    }
-    flow_instance.load_state(state)
+    flow_instance.load(state)
    assert flow_instance.get_current_prompt() == "Loaded prompt"
    assert "Step 1" in flow_instance.get_instructions()
    assert "User message 1" in flow_instance.get_user_messages()
