commit 0171ddd3ae (parent c7d56c5a5c)
@@ -0,0 +1,95 @@

# Consistency Agent Documentation

The `SelfConsistencyAgent` is a specialized agent designed for generating multiple independent responses to a given task and aggregating them into a single, consistent final answer. It leverages concurrent processing to enhance efficiency and employs a majority voting mechanism to improve the reliability of the aggregated response.
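
Conceptually, the aggregation step is a majority vote over the sampled responses. The short sketch below illustrates the idea with `collections.Counter`, which is also what the agent's `aggregate` method uses internally; the sample responses are made up for illustration.

```python
from collections import Counter

# Hypothetical responses sampled from several independent runs
responses = ["Answer: 42", "Answer: 42", "Answer: 41"]

# Majority vote: pick the most frequent response
most_common_response, frequency = Counter(responses).most_common(1)[0]
print(most_common_response, frequency)  # prints: Answer: 42  2
```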

## Purpose

The primary objective of the `SelfConsistencyAgent` is to provide a robust mechanism for decision-making and problem-solving by generating diverse responses and synthesizing them into a coherent final answer. This approach is particularly useful in scenarios where consistency and reliability are critical.

## Class: `SelfConsistencyAgent`

### Initialization

- **`__init__`**: Initializes the `SelfConsistencyAgent` with the specified parameters.

#### Arguments

| Argument                  | Type   | Default | Description                                          |
|---------------------------|--------|---------|------------------------------------------------------|
| `num_samples`             | `int`  | `5`     | Number of independent responses to sample.           |
| `return_list`             | `bool` | `False` | Whether to return the conversation as a list.        |
| `max_loops`               | `int`  | `1`     | Maximum number of loops for the agent to run.        |
| `return_dict`             | `bool` | `False` | Whether to return the conversation as a dictionary.  |
| `return_json`             | `bool` | `False` | Whether to return the conversation as JSON.          |
| `majority_voting_prompt`  | `str`  | `None`  | Custom prompt for majority voting.                   |

### Methods

- **`run`**: Generates multiple responses for the given task and aggregates them.
    - **Arguments**:
        - `task` (`str`): The input prompt.
        - `answer` (`str`, optional): The expected answer to validate responses against.
    - **Returns**: `str` - The aggregated final answer.

- **`aggregate`**: Aggregates a list of responses into a single final answer using majority voting.
    - **Arguments**:
        - `responses` (`List[str]`): The list of responses.
    - **Returns**: `str` - The aggregated answer.

- **`check_responses_for_answer`**: Checks if a specified answer is present in any of the provided responses.
    - **Arguments**:
        - `responses` (`List[str]`): A list of responses to check.
        - `answer` (`str`): The answer to look for in the responses.
    - **Returns**: `bool` - `True` if the answer is found, `False` otherwise.
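
The sketch below shows how `aggregate` and `check_responses_for_answer` can be called directly on a list of responses; the agent configuration and the sample responses are illustrative assumptions, not output from a real run.

```python
from swarms.agents.consistency_agent import SelfConsistencyAgent

agent = SelfConsistencyAgent(
    agent_name="Reasoning-Agent",
    model_name="gpt-4o-mini",
    num_samples=3,
)

# Hypothetical responses, e.g. collected from earlier runs
responses = ["The answer is 42.", "The answer is 42.", "The answer is 41."]

# Majority vote over the responses
print(agent.aggregate(responses))  # -> "The answer is 42."

# Check whether an expected answer appears in any response
print(agent.check_responses_for_answer(responses, "42"))  # -> True
```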

### Examples

#### Example 1: Basic Usage

```python
from swarms.agents.consistency_agent import SelfConsistencyAgent

# Initialize the agent
agent = SelfConsistencyAgent(
    agent_name="Reasoning-Agent",
    model_name="gpt-4o-mini",
    max_loops=1,
    num_samples=5
)

# Define a task
task = "What is the 40th prime number?"

# Run the agent
final_answer = agent.run(task)

# Print the final aggregated answer
print("Final aggregated answer:", final_answer)
```

#### Example 2: Using a Custom Majority Voting Prompt

```python
from swarms.agents.consistency_agent import SelfConsistencyAgent

# Initialize the agent with a custom majority voting prompt
agent = SelfConsistencyAgent(
    agent_name="Reasoning-Agent",
    model_name="gpt-4o-mini",
    max_loops=1,
    num_samples=5,
    majority_voting_prompt="Please provide the most common response."
)

# Define a task
task = "Explain the theory of relativity in simple terms."

# Run the agent
final_answer = agent.run(task)

# Print the final aggregated answer
print("Final aggregated answer:", final_answer)
```
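
#### Example 3: Validating Responses Against an Expected Answer

A sketch of the answer-validation path: based on the current implementation, `run()` returns `None` when no sampled response contains the expected answer. The task and expected answer here are illustrative.

```python
from swarms.agents.consistency_agent import SelfConsistencyAgent

agent = SelfConsistencyAgent(
    agent_name="Reasoning-Agent",
    model_name="gpt-4o-mini",
    max_loops=1,
    num_samples=5,
)

# Pass an expected answer; if no sampled response contains it, run() returns None
final_answer = agent.run("What is the capital of France?", answer="Paris")

if final_answer is None:
    print("No sampled response contained the expected answer.")
else:
    print("Final aggregated answer:", final_answer)
```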

---

@@ -0,0 +1,56 @@

# Iterative Reflective Expansion (IRE) Algorithm Documentation

## Overview

The Iterative Reflective Expansion (IRE) Algorithm is a sophisticated reasoning framework that employs iterative hypothesis generation, simulation, and refinement to solve complex problems. It leverages a multi-step approach where an AI agent generates initial solution paths, evaluates their effectiveness through simulation, reflects on errors, and dynamically revises reasoning strategies. Through continuous cycles of hypothesis testing and meta-cognitive reflection, the algorithm progressively converges on optimal solutions by learning from both successful and unsuccessful reasoning attempts.

## Workflow

1. Generate initial hypotheses
2. Simulate paths
3. Reflect on errors
4. Revise paths
5. Select promising paths
6. Synthesize solution

## Class: IterativeReflectiveExpansion

### Arguments

| Argument       | Type  | Default | Description |
|----------------|-------|---------|-------------|
| agent          | Agent | None    | The Swarms agent instance used to perform reasoning tasks. |
| max_iterations | int   | 5       | Maximum number of iterations for the reasoning process. |
| return_list    | bool  | False   | If True, returns the conversation as a list of messages. |
| return_dict    | bool  | False   | If True, returns the conversation as a dictionary of messages. |
| prompt         | str   | GENERAL_REASONING_AGENT_SYS_PROMPT | The system prompt for the agent. |

### Methods

| Method                      | Description |
|-----------------------------|-------------|
| generate_initial_hypotheses | Generates an initial set of reasoning hypotheses based on the problem input. |
| simulate_path               | Simulates a given reasoning path and evaluates its effectiveness. |
| meta_reflect                | Performs meta-cognitive reflection on the provided error information. |
| revise_path                 | Revises the reasoning path based on the provided feedback. |
| select_promising_paths      | Selects the most promising reasoning paths from a list of candidates. |
| synthesize_solution         | Synthesizes a final solution from the promising reasoning paths and historical memory. |
| run                         | Executes the Iterative Reflective Expansion process on the provided problem. |
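
As a rough orientation, the sketch below shows how the methods in the table above compose inside `run`. It is a condensed illustration of the workflow, not the exact implementation; the 0.7 score threshold mirrors the current implementation's hard-coded value.

```python
# Condensed sketch of the IRE loop (illustrative, not the exact implementation)
def ire_loop(engine, problem: str, score_threshold: float = 0.7) -> str:
    paths = engine.generate_initial_hypotheses(problem)
    memory = []
    for _ in range(engine.max_iterations):
        expanded = []
        for path in paths:
            outcome, score, errors = engine.simulate_path(path)
            if score < score_threshold:
                # Weak path: reflect on the errors and branch into revised paths
                feedback = engine.meta_reflect(errors)
                expanded.extend(engine.revise_path(path, feedback))
            else:
                expanded.append(path)
        memory.extend(paths)
        paths = engine.select_promising_paths(expanded)
    return engine.synthesize_solution(paths, memory)
```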

## Use-Cases

### Example 1: Solving a Mathematical Problem

```python
from swarms import IterativeReflectiveExpansion

agent = IterativeReflectiveExpansion(
    max_iterations=3,
)

agent.run("What is the 40th prime number?")
```
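
### Example 2: Supplying a Custom Agent

Any Swarms `Agent` can be passed via the `agent` argument listed in the table above. The configuration below is illustrative, not a recommended setup.

```python
from swarms import Agent, IterativeReflectiveExpansion

# Hypothetical custom reasoning agent
reasoning_agent = Agent(
    agent_name="Custom-Reasoning-Agent",
    model_name="gpt-4o-mini",
    max_loops=1,
)

engine = IterativeReflectiveExpansion(
    agent=reasoning_agent,
    max_iterations=2,
)

engine.run("Plan a reliable backup strategy for a small research lab.")
```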

## Conclusion

The Iterative Reflective Expansion (IRE) Algorithm is a powerful tool for solving complex problems through iterative reasoning and reflection. By leveraging the capabilities of a Swarms agent, it can dynamically adapt and refine its approach to converge on optimal solutions.
@@ -0,0 +1,110 @@

import json
import os

import requests
from dotenv import load_dotenv

load_dotenv()

# Retrieve API key securely from .env
API_KEY = os.getenv("SWARMS_API_KEY")
BASE_URL = "https://swarms-api-285321057562.us-east1.run.app"

# Headers for secure API communication
headers = {"x-api-key": API_KEY, "Content-Type": "application/json"}


def create_medical_swarm(patient_case: str):
    """
    Constructs and triggers a full-stack medical swarm consisting of three agents:
    Diagnostic Specialist, Medical Coder, and Treatment Advisor.
    Each agent is provided with a comprehensive, detailed system prompt to ensure high reliability.
    """

    swarm_spec = {
        "swarm_name": "Enhanced Medical Diagnostic Swarm",
        "description": "A swarm of agents specialized in performing comprehensive medical diagnostics, analysis, and coding.",
        "agents": [
            {
                "agent_name": "Diagnostic Specialist",
                "description": "Agent specialized in analyzing patient history, symptoms, lab results, and imaging data to produce accurate diagnoses.",
                "system_prompt": (
                    "You are an experienced, board-certified medical diagnostician with over 20 years of clinical practice. "
                    "Your role is to analyze all available patient information—including history, symptoms, lab tests, and imaging results—"
                    "with extreme attention to detail and clinical nuance. Provide a comprehensive differential diagnosis considering "
                    "common, uncommon, and rare conditions. Always cross-reference clinical guidelines and evidence-based medicine. "
                    "Explain your reasoning step by step and provide a final prioritized list of potential diagnoses along with their likelihood. "
                    "Consider patient demographics, comorbidities, and risk factors. Your diagnosis should be reliable, clear, and actionable."
                ),
                "model_name": "openai/gpt-4o",
                "role": "worker",
                "max_loops": 2,
                "max_tokens": 4000,
                "temperature": 0.3,
                "auto_generate_prompt": False,
            },
            {
                "agent_name": "Medical Coder",
                "description": "Agent responsible for translating medical diagnoses and procedures into accurate standardized medical codes (ICD-10, CPT, etc.).",
                "system_prompt": (
                    "You are a certified and experienced medical coder, well-versed in ICD-10, CPT, and other coding systems. "
                    "Your task is to convert detailed medical diagnoses and treatment procedures into precise, standardized codes. "
                    "Consider all aspects of the clinical documentation including severity, complications, and comorbidities. "
                    "Provide clear explanations for the codes chosen, referencing the latest coding guidelines and payer policies where relevant. "
                    "Your output should be comprehensive, reliable, and fully compliant with current medical coding standards."
                ),
                "model_name": "openai/gpt-4o",
                "role": "worker",
                "max_loops": 1,
                "max_tokens": 3000,
                "temperature": 0.2,
                "auto_generate_prompt": False,
            },
            {
                "agent_name": "Treatment Advisor",
                "description": "Agent dedicated to suggesting evidence-based treatment options, including pharmaceutical and non-pharmaceutical interventions.",
                "system_prompt": (
                    "You are a highly knowledgeable medical treatment specialist with expertise in the latest clinical guidelines and research. "
                    "Based on the diagnostic conclusions provided, your task is to recommend a comprehensive treatment plan. "
                    "Your suggestions should include first-line therapies, potential alternative treatments, and considerations for patient-specific factors "
                    "such as allergies, contraindications, and comorbidities. Explain the rationale behind each treatment option and reference clinical guidelines where applicable. "
                    "Your recommendations should be reliable, detailed, and clearly prioritized based on efficacy and safety."
                ),
                "model_name": "openai/gpt-4o",
                "role": "worker",
                "max_loops": 1,
                "max_tokens": 5000,
                "temperature": 0.3,
                "auto_generate_prompt": False,
            },
        ],
        "max_loops": 3,
        "swarm_type": "SequentialWorkflow",
    }

    # The request payload wraps the patient case as the task to be processed by the swarm
    payload = {"task": patient_case, "swarm": swarm_spec}

    response = requests.post(
        f"{BASE_URL}/swarm/completion",
        headers=headers,
        json=payload,
    )

    if response.status_code == 200:
        print("Swarm successfully executed!")
        return json.dumps(response.json(), indent=4)
    else:
        print(f"Error {response.status_code}: {response.text}")
        return None


# Example patient case for the swarm to diagnose and analyze
if __name__ == "__main__":
    patient_case = (
        "Patient is a 55-year-old male presenting with severe chest pain, shortness of breath, elevated blood pressure, "
        "nausea, and a family history of cardiovascular disease. Blood tests show elevated troponin levels, and EKG indicates ST-segment elevations. "
        "The patient is currently unstable. Provide a detailed diagnosis, coding, and treatment plan."
    )

    diagnostic_output = create_medical_swarm(patient_case)
    print(diagnostic_output)
@@ -0,0 +1,7 @@

from swarms.agents.i_agent import IterativeReflectiveExpansion

agent = IterativeReflectiveExpansion(
    max_iterations=3,
)

agent.run("What is the 40th prime number?")
@@ -0,0 +1,187 @@

from collections import Counter
from concurrent.futures import ThreadPoolExecutor, as_completed
from typing import List

from loguru import logger

from swarms.structs.agent import Agent
from swarms.structs.conversation import Conversation
from swarms.structs.malt import majority_voting_prompt
from swarms.utils.any_to_str import any_to_str

CONSISTENCY_SYSTEM_PROMPT = """
You are a reasoning agent designed for complex problem-solving and decision-making. Your objective is to provide clear and reliable responses through structured reasoning. Begin by thoroughly understanding the problem, rephrasing it for clarity, and identifying key components. Develop a logical plan that breaks the problem into manageable steps, detailing your approach and any assumptions made. Validate your information with reliable sources and assess the accuracy of your calculations. Explore multiple solutions, weighing their pros and cons, and maintain transparency by documenting your reasoning process, uncertainties, and biases. Summarize your findings in a concise final answer that reflects your thorough analysis, ensuring it is well-organized and accessible. Adapt your reasoning to the context of the problem, integrating new information as needed, and implement error-handling strategies to address any issues that arise. Finally, reflect on your reasoning process to identify areas for improvement and ensure consistency across all reasoning paths.
"""


def aggregation_agent(
    responses: List[str], prompt: str = majority_voting_prompt
) -> str:
    """
    Aggregates a list of responses into a single final answer.
    """
    task = any_to_str(responses)

    agent = Agent(
        agent_name="Aggregation-Agent",
        description="An agent that aggregates a list of responses into a single final answer.",
        model_name="gpt-4o-mini",
        system_prompt=prompt,
        max_loops=1,
    )

    final_answer = agent.run(task)

    return final_answer


class SelfConsistencyAgent(Agent):
    def __init__(
        self,
        num_samples: int = 5,
        return_list: bool = False,
        max_loops: int = 1,
        return_dict: bool = False,
        return_json: bool = False,
        majority_voting_prompt: str = None,
        **kwargs,
    ):
        """
        Initializes the SelfConsistencyAgent.

        Args:
            num_samples (int): Number of independent responses to sample.
            return_list (bool): Whether to return the conversation as a list of messages.
            max_loops (int): Maximum number of loops for the agent to run.
            return_dict (bool): Whether to return the conversation as a dictionary.
            return_json (bool): Whether to return the conversation as JSON.
            majority_voting_prompt (str): Custom prompt for the majority-voting aggregation agent.
            **kwargs: Other keyword arguments passed to the base Agent.
        """
        super().__init__(
            **kwargs, system_prompt=CONSISTENCY_SYSTEM_PROMPT
        )
        self.num_samples = num_samples
        self.conversation = Conversation()
        self.return_list = return_list
        self.max_loops = max_loops
        self.return_dict = return_dict
        self.return_json = return_json
        self.majority_voting_prompt = majority_voting_prompt

    def run(
        self, task: str, answer: str = None, *args, **kwargs
    ) -> str:
        """
        Concurrently generates multiple responses for the given prompt and aggregates them.

        Args:
            task (str): The input prompt.
            answer (str, optional): An expected answer to validate the responses against.

        Returns:
            str: The aggregated final answer.
        """
        responses = []
        logger.info(
            f"Generating {self.num_samples} responses concurrently..."
        )

        self.conversation.add(role="User", content=task)

        # Bind the base class's run method once so it can be submitted to the
        # thread pool (zero-argument super() is not reliable inside comprehensions).
        base_run = super().run

        with ThreadPoolExecutor() as executor:
            futures = {
                executor.submit(base_run, task, *args, **kwargs): i
                for i in range(self.num_samples)
            }
            for future in as_completed(futures):
                response = future.result()
                responses.append(response)

        self.conversation.add(role=self.agent_name, content=responses)

        if answer is not None:
            correct = self.check_responses_for_answer(
                responses, answer
            )

            if not correct:
                logger.info(
                    "The answer is not correct. Please try again."
                )
                return None

        # Aggregate the sampled responses with the majority-voting agent,
        # using the custom prompt when one was provided.
        if self.majority_voting_prompt is not None:
            final_answer = aggregation_agent(
                responses, prompt=self.majority_voting_prompt
            )
        else:
            final_answer = aggregation_agent(responses)

        self.conversation.add(
            role="Majority Voting Agent", content=final_answer
        )

        if self.return_list:
            return self.conversation.return_messages_as_list()
        elif self.return_dict:
            return self.conversation.return_json()
        else:
            return final_answer

    def aggregate(self, responses: List[str]) -> str:
        """
        Aggregates a list of responses into a single final answer.

        Here we use a simple majority vote (most common answer) as an example. Depending on
        the task, you might need a more sophisticated aggregation (e.g., weighting, consensus reasoning, etc.).

        Args:
            responses (list of str): The list of responses.

        Returns:
            str: The aggregated answer.
        """
        # Count the frequency of each response.
        counts = Counter(responses)
        most_common, freq = counts.most_common(1)[0]
        logger.info(
            f"Aggregation complete. Most common response (appeared {freq} times):"
        )
        return most_common

    def check_responses_for_answer(
        self, responses: List[str], answer: str
    ) -> bool:
        """
        Checks if the specified answer is present in any of the provided responses.

        Args:
            responses (List[str]): A list of responses to check.
            answer (str): The answer to look for in the responses.

        Returns:
            bool: True if the answer is found in any response, False otherwise.
        """
        for response in responses:
            if answer in response:
                return True

        # If the answer is not found, log the absence for each response
        for response in responses:
            if answer not in response:
                self.conversation.add(
                    role="User",
                    content=f"The answer '{answer}' is not found in the response: '{response}'",
                )
                logger.info(
                    f"The answer '{answer}' is not found in the response: '{response}'"
                )
        return False


# # Example usage:
# if __name__ == "__main__":
#     agent = SelfConsistencyAgent(
#         agent_name="Reasoning-Agent",
#         model_name="gpt-4o-mini",
#         max_loops=1,
#         num_samples=5,  # Number of samples for self consistency
#     )

#     prompt = "What is the 40th prime number?"
#     final_answer = agent.run(prompt)
#     print("\nFinal aggregated answer:")
#     print(final_answer)
@@ -0,0 +1,303 @@

"""
Iterative Reflective Expansion (IRE) Algorithm

A sophisticated reasoning framework that employs iterative hypothesis generation, simulation, and refinement to solve complex problems. IRE leverages a multi-step approach where an AI agent generates initial solution paths, evaluates their effectiveness through simulation, reflects on errors, and dynamically revises reasoning strategies. Through continuous cycles of hypothesis testing and meta-cognitive reflection, the algorithm progressively converges on optimal solutions by learning from both successful and unsuccessful reasoning attempts.

- IRE is a multi-step approach where an AI agent generates initial solution paths, evaluates their effectiveness through simulation, reflects on errors, and dynamically revises reasoning strategies.
- Through continuous cycles of hypothesis testing and meta-cognitive reflection, the algorithm progressively converges on optimal solutions by learning from both successful and unsuccessful reasoning attempts.

Workflow:
1. Generate initial hypotheses
2. Simulate paths
3. Reflect on errors
4. Revise paths
5. Select promising paths
6. Synthesize solution
"""

from typing import List, Tuple

from loguru import logger

from swarms.structs.agent import Agent
from swarms.structs.conversation import Conversation

# Define a new system prompt for general problem solving
GENERAL_REASONING_AGENT_SYS_PROMPT = """
You are a highly capable problem-solving agent with a unique ability to reason through complex challenges via iterative reflection and hypothesis testing.
Your role is to assist in generating innovative solutions to a wide array of general problems by engaging in trial and error, reflective evaluation, and dynamic hypothesis expansion.
When presented with a problem statement, generate multiple hypotheses, simulate reasoning paths, reflect on errors, and iteratively refine your approach to produce the best solution.
Do not include any finance-related content.
"""


class IterativeReflectiveExpansion:
    """
    A class implementing the Iterative Reflective Expansion (IRE) reasoning algorithm.

    This algorithm leverages a Swarms agent to iteratively generate, simulate, reflect on, and refine reasoning paths
    in order to solve complex problems through trial and error, reflective evaluation, and dynamic hypothesis expansion.
    """

    def __init__(
        self,
        agent: Agent = None,
        max_iterations: int = 5,
        return_list: bool = False,
        return_dict: bool = False,
        prompt: str = GENERAL_REASONING_AGENT_SYS_PROMPT,
    ) -> None:
        """
        Initialize the Iterative Reflective Expansion engine.

        :param agent: The Swarms agent instance used to perform reasoning tasks. If None, a default general-reasoning agent is created.
        :param max_iterations: Maximum number of iterations for the reasoning process.
        :param return_list: If True, run() returns the conversation as a list of messages.
        :param return_dict: If True, run() returns the conversation as a dictionary of messages.
        :param prompt: The system prompt used when creating the default agent.
        """
        self.max_iterations = max_iterations
        self.conversation = Conversation()
        self.return_list = return_list
        self.return_dict = return_dict

        # Use the supplied agent if one was given; otherwise create a default
        # general-reasoning agent with the provided system prompt.
        self.agent = agent or Agent(
            agent_name="General-Reasoning-Agent",
            system_prompt=prompt,
            model_name="gpt-4o-mini",
            max_loops=1,
        )

    def generate_initial_hypotheses(
        self, problem_input: str
    ) -> List[str]:
        """
        Generate an initial set of reasoning hypotheses based on the problem input.

        :param problem_input: The problem statement.
        :return: A list of candidate reasoning paths/hypotheses.
        """
        logger.info("Generating initial hypotheses for the problem.")
        prompt = (
            f"Given the following problem:\n\n"
            f"'{problem_input}'\n\n"
            "Generate a list of possible approaches and strategies to solve it. "
            "Present each approach on a new line."
        )
        response = self.agent.run(prompt)
        self.conversation.add(
            role=self.agent.agent_name, content=response
        )
        hypotheses = [
            line.strip()
            for line in response.split("\n")
            if line.strip()
        ]
        logger.debug(f"Initial hypotheses: {hypotheses}")
        return hypotheses

    def simulate_path(self, path: str) -> Tuple[str, float, str]:
        """
        Simulate a given reasoning path and evaluate its effectiveness.

        :param path: A candidate reasoning path.
        :return: A tuple containing the simulated outcome, a numerical score (0.0 to 1.0), and error information.
        """
        logger.info(f"Simulating path: {path}")
        prompt = (
            f"Simulate the following reasoning path step by step and provide:\n"
            f"1. Outcome: A brief summary of the resulting solution.\n"
            f"2. Score: A numerical effectiveness score between 0.0 and 1.0.\n"
            f"3. Errors: Any potential errors or shortcomings identified during the reasoning.\n\n"
            f"Reasoning Path: {path}"
        )
        response = self.agent.run(prompt)
        self.conversation.add(
            role=self.agent.agent_name, content=response
        )
        outcome = ""
        score = 0.0
        error_info = ""
        try:
            # Expecting a response with lines starting with "Outcome:", "Score:", and "Errors:"
            for line in response.splitlines():
                if line.startswith("Outcome:"):
                    outcome = line[len("Outcome:") :].strip()
                elif line.startswith("Score:"):
                    score = float(line[len("Score:") :].strip())
                elif line.startswith("Errors:"):
                    error_info = line[len("Errors:") :].strip()
        except Exception as e:
            logger.error(f"Error parsing simulation response: {e}")
        logger.debug(
            f"Simulated outcome: {outcome}, Score: {score}, Errors: {error_info}"
        )
        return outcome, score, error_info

    def meta_reflect(self, error_info: str) -> str:
        """
        Perform meta-cognitive reflection on the provided error information.

        :param error_info: Information regarding errors in the reasoning path.
        :return: Feedback and suggestions for revising the reasoning path.
        """
        logger.info(
            "Performing meta-reflection on error information."
        )
        prompt = (
            f"Analyze the following error information and suggest modifications to improve the reasoning process:\n"
            f"{error_info}\n"
            "Provide clear and actionable feedback."
        )
        feedback = self.agent.run(prompt)
        self.conversation.add(
            role=self.agent.agent_name, content=feedback
        )
        logger.debug(f"Meta-reflection feedback: {feedback}")
        return feedback

    def revise_path(self, path: str, feedback: str) -> List[str]:
        """
        Revise the reasoning path based on the provided feedback.

        :param path: The original reasoning path.
        :param feedback: Feedback from meta-cognitive reflection.
        :return: A list of revised reasoning paths.
        """
        logger.info("Revising reasoning path based on feedback.")
        prompt = (
            f"Given the reasoning path:\n'{path}'\n\n"
            f"and the following feedback:\n'{feedback}'\n\n"
            "Generate revised reasoning paths that address the issues raised. "
            "Present each revised path on a new line."
        )
        response = self.agent.run(prompt)
        self.conversation.add(
            role=self.agent.agent_name, content=response
        )
        revised_paths = [
            line.strip()
            for line in response.split("\n")
            if line.strip()
        ]
        logger.debug(f"Revised paths: {revised_paths}")
        return revised_paths

    def select_promising_paths(self, paths: List[str]) -> List[str]:
        """
        Select the most promising reasoning paths from a list of candidates.

        :param paths: A list of candidate reasoning paths.
        :return: A pruned list containing the most promising paths.
        """
        logger.info("Selecting promising reasoning paths.")
        prompt = (
            "Evaluate the following reasoning paths and select the ones that appear most promising for further exploration. "
            "List each selected path on a new line:\n"
            + "\n".join(paths)
        )
        response = self.agent.run(prompt)
        self.conversation.add(
            role=self.agent.agent_name, content=response
        )
        selected_paths = [
            line.strip()
            for line in response.split("\n")
            if line.strip()
        ]
        logger.debug(f"Selected paths: {selected_paths}")
        return selected_paths

    def synthesize_solution(
        self, paths: List[str], memory_pool: List[str]
    ) -> str:
        """
        Synthesize a final solution from the promising reasoning paths and historical memory.

        :param paths: The current promising reasoning paths.
        :param memory_pool: A list of all previously generated reasoning paths.
        :return: A coherent final solution.
        """
        logger.info(
            "Synthesizing final solution from promising paths."
        )
        prompt = (
            "Based on the following promising reasoning paths:\n"
            f"{chr(10).join(paths)}\n\n"
            "and the historical reasoning memory:\n"
            f"{chr(10).join(memory_pool)}\n\n"
            "Synthesize a final, coherent solution to the problem."
        )
        solution = self.agent.run(prompt)
        self.conversation.add(
            role=self.agent.agent_name, content=solution
        )
        logger.debug(f"Synthesized solution: {solution}")
        return solution

    def run(self, problem_input: str) -> str:
        """
        Execute the Iterative Reflective Expansion process on the provided problem.

        :param problem_input: The problem statement.
        :return: The final solution generated after iterative reasoning.
        """
        logger.info(
            f"Starting iterative reflective expansion for problem: {problem_input}"
        )
        candidate_paths = self.generate_initial_hypotheses(
            problem_input
        )
        memory_pool: List[str] = []

        for iteration in range(self.max_iterations):
            logger.info(
                f"Iteration {iteration + 1}/{self.max_iterations}"
            )
            expanded_paths: List[str] = []

            for path in candidate_paths:
                outcome, score, error_info = self.simulate_path(path)
                # Use a threshold score of 0.7 (this can be adjusted)
                if score < 0.7:
                    feedback = self.meta_reflect(error_info)
                    revised_paths = self.revise_path(path, feedback)
                    expanded_paths.extend(revised_paths)
                else:
                    expanded_paths.append(path)

            memory_pool.extend(candidate_paths)
            candidate_paths = self.select_promising_paths(
                expanded_paths
            )
            logger.info(
                f"Candidate paths for next iteration: {candidate_paths}"
            )

        final_solution = self.synthesize_solution(
            candidate_paths, memory_pool
        )
        logger.info("Final solution generated.")

        if self.return_list:
            return self.conversation.return_messages_as_list()
        elif self.return_dict:
            return self.conversation.return_messages_as_dict()
        else:
            return final_solution


# def main() -> None:
#     """
#     Main function to execute the Iterative Reflective Expansion algorithm on a sample problem.
#     """
#     problem_statement = "What is the 40th prime number?"
#     reasoning_engine = IterativeReflectiveExpansion(max_iterations=1)
#     final_solution = reasoning_engine.run(problem_statement)
#     print("Final Solution:")
#     print(final_solution)


# if __name__ == "__main__":
#     main()
@@ -1,82 +0,0 @@

# tools - search, code executor, create api

import os
import requests
from dotenv import load_dotenv
import json

load_dotenv()

API_KEY = os.getenv("SWARMS_API_KEY")
BASE_URL = "https://swarms-api-285321057562.us-east1.run.app"

headers = {"x-api-key": API_KEY, "Content-Type": "application/json"}


def run_health_check():
    response = requests.get(f"{BASE_URL}/health", headers=headers)
    return response.json()


def run_single_swarm():
    payload = {
        "name": "Financial Analysis Swarm",
        "description": "Market analysis swarm",
        "agents": [
            {
                "agent_name": "Market Analyst",
                "description": "Analyzes market trends",
                "system_prompt": "You are a financial analyst expert.",
                "model_name": "openai/gpt-4o",
                "role": "worker",
                "max_loops": 1,
                "max_tokens": 8192,
            },
            {
                "agent_name": "Economic Forecaster",
                "description": "Predicts economic trends",
                "system_prompt": "You are an expert in economic forecasting.",
                "model_name": "gpt-4o",
                "role": "worker",
                "max_loops": 1,
                "max_tokens": 8192,
            },
        ],
        "max_loops": 1,
        "swarm_type": "SequentialWorkflow",
        "task": "What are the best etfs and index funds for ai and tech?",
        "output_type": "dict",
    }

    response = requests.post(
        f"{BASE_URL}/v1/swarm/completions",
        headers=headers,
        json=payload,
    )

    print(response)
    print(response.status_code)
    # return response.json()
    output = response.json()

    return json.dumps(output, indent=4)


def get_logs():
    response = requests.get(
        f"{BASE_URL}/v1/swarm/logs", headers=headers
    )
    output = response.json()
    # return json.dumps(output, indent=4)
    return output


if __name__ == "__main__":
    result = run_single_swarm()
    print("Swarm Result:")
    print(result)

    # logs = get_logs()
    # logs = json.dumps(logs, indent=4)
    # print("Logs:")
    # print(logs)