Merge branch 'kyegomez:master' into frames

pull/1003/head
CI-DEV 2 weeks ago committed by GitHub
commit 7c5c21847b

@ -89,7 +89,6 @@ graph TD
| `callback` | Callable function to be called after each agent loop. |
| `metadata` | Dictionary containing metadata for the agent. |
| `callbacks` | List of callable functions to be called during execution. |
| `logger_handler` | Handler for logging messages. |
| `search_algorithm` | Callable function for long-term memory retrieval. |
| `logs_to_filename` | File path for logging agent activities. |
| `evaluator` | Callable function for evaluating the agent's responses. |
@ -121,14 +120,12 @@ graph TD
| `memory_chunk_size` | Integer representing the maximum size of memory chunks for long-term memory retrieval. |
| `agent_ops_on` | Boolean indicating whether agent operations should be enabled. |
| `return_step_meta` | Boolean indicating whether to return JSON of all steps and additional metadata. |
| `output_type` | Literal type indicating whether to output "string", "str", "list", "json", "dict", or "yaml". |
| `time_created` | Float representing the time the agent was created. |
| `tags` | Optional list of strings for tagging the agent. |
| `use_cases` | Optional list of dictionaries describing use cases for the agent. |
| `step_pool` | List of Step objects representing the agent's execution steps. |
| `print_every_step` | Boolean indicating whether to print every step of execution. |
| `agent_output` | ManySteps object containing the agent's output and metadata. |
| `executor_workers` | Integer representing the number of executor workers for concurrent operations. |
| `data_memory` | Optional callable for data memory operations. |
| `load_yaml_path` | String representing the path to a YAML file for loading configurations. |
| `auto_generate_prompt` | Boolean indicating whether to automatically generate prompts. |
@ -137,17 +134,44 @@ graph TD
| `artifacts_on` | Boolean indicating whether to save artifacts from agent execution |
| `artifacts_output_path` | File path where artifacts should be saved |
| `artifacts_file_extension` | File extension to use for saved artifacts |
| `device` | Device to run computations on ("cpu" or "gpu") |
| `all_cores` | Boolean indicating whether to use all CPU cores |
| `device_id` | ID of the GPU device to use if running on GPU |
| `scheduled_run_date` | Optional datetime for scheduling future agent runs |
| `do_not_use_cluster_ops` | Boolean indicating whether to avoid cluster operations |
| `all_gpus` | Boolean indicating whether to use all available GPUs |
| `model_name` | String representing the name of the model to use |
| `llm_args` | Dictionary containing additional arguments for the LLM |
| `load_state_path` | String representing the path to load state from |
| `role` | String representing the role of the agent (e.g., "worker") |
| `print_on` | Boolean indicating whether to print output |
| `tools_list_dictionary` | List of dictionaries representing tool schemas |
| `mcp_url` | String or MCPConnection representing the MCP server URL |
| `mcp_urls` | List of strings representing multiple MCP server URLs |
| `react_on` | Boolean indicating whether to enable ReAct reasoning |
| `safety_prompt_on` | Boolean indicating whether to enable safety prompts |
| `random_models_on` | Boolean indicating whether to randomly select models |
| `mcp_config` | MCPConnection object containing MCP configuration |
| `top_p` | Float representing the top-p sampling parameter |
| `conversation_schema` | ConversationSchema object for conversation formatting |
| `llm_base_url` | String representing the base URL for the LLM API |
| `llm_api_key` | String representing the API key for the LLM |
| `rag_config` | RAGConfig object containing RAG configuration |
| `tool_call_summary` | Boolean indicating whether to summarize tool calls |
| `output_raw_json_from_tool_call` | Boolean indicating whether to output raw JSON from tool calls |
| `summarize_multiple_images` | Boolean indicating whether to summarize multiple image outputs |
| `tool_retry_attempts` | Integer representing the number of retry attempts for tool execution |
| `reasoning_prompt_on` | Boolean indicating whether to enable reasoning prompts |
| `dynamic_context_window` | Boolean indicating whether to dynamically adjust context window |
| `created_at` | Float representing the timestamp when the agent was created |
| `workspace_dir` | String representing the workspace directory for the agent |
| `timeout` | Integer representing the timeout for operations in seconds |
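
The sketch below is a minimal, non-exhaustive example of how a few of the attributes above map onto the `Agent` constructor; it assumes each documented attribute is accepted as a keyword argument and uses placeholder values throughout.

```python
from swarms import Agent

# Minimal sketch: a handful of the documented attributes passed as constructor
# keywords. Values are placeholders, not recommendations.
agent = Agent(
    agent_name="Docs-Demo-Agent",
    model_name="gpt-4o-mini",
    max_loops=1,
    output_type="str",
    return_step_meta=False,
    tags=["docs", "demo"],
    timeout=120,  # seconds
)
```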
## `Agent` Methods
| Method | Description | Inputs | Usage Example |
|--------|-------------|--------|----------------|
| `run(task, img=None, is_last=False, device="cpu", device_id=0, all_cores=True, *args, **kwargs)` | Runs the autonomous agent loop to complete the given task. | `task` (str): The task to be performed.<br>`img` (str, optional): Path to an image file.<br>`is_last` (bool): Whether this is the last task.<br>`device` (str): Device to run on ("cpu" or "gpu").<br>`device_id` (int): ID of the GPU to use.<br>`all_cores` (bool): Whether to use all CPU cores.<br>`*args`, `**kwargs`: Additional arguments. | `response = agent.run("Generate a report on financial performance.")` |
| `run(task, img=None, imgs=None, correct_answer=None, streaming_callback=None, *args, **kwargs)` | Runs the autonomous agent loop to complete the given task. | `task` (str): The task to be performed.<br>`img` (str, optional): Path to an image file.<br>`imgs` (List[str], optional): List of image paths.<br>`correct_answer` (str, optional): Expected correct answer for validation.<br>`streaming_callback` (Callable, optional): Callback for streaming tokens.<br>`*args`, `**kwargs`: Additional arguments. | `response = agent.run("Generate a report on financial performance.")` |
| `run_batched(tasks, imgs=None, *args, **kwargs)` | Runs multiple tasks concurrently in batch mode. | `tasks` (List[str]): List of tasks to run.<br>`imgs` (List[str], optional): List of images to process.<br>`*args`, `**kwargs`: Additional arguments. | `responses = agent.run_batched(["Task 1", "Task 2"])` |
| `__call__(task, img=None, *args, **kwargs)` | Alternative way to call the `run` method. | Same as `run`. | `response = agent("Generate a report on financial performance.")` |
| `parse_and_execute_tools(response, *args, **kwargs)` | Parses the agent's response and executes any tools mentioned in it. | `response` (str): The agent's response to be parsed.<br>`*args`, `**kwargs`: Additional arguments. | `agent.parse_and_execute_tools(response)` |
| `add_memory(message)` | Adds a message to the agent's memory. | `message` (str): The message to add. | `agent.add_memory("Important information")` |
@ -155,6 +179,8 @@ graph TD
| `run_concurrent(task, *args, **kwargs)` | Runs a task concurrently. | `task` (str): The task to run.<br>`*args`, `**kwargs`: Additional arguments. | `response = await agent.run_concurrent("Concurrent task")` |
| `run_concurrent_tasks(tasks, *args, **kwargs)` | Runs multiple tasks concurrently. | `tasks` (List[str]): List of tasks to run.<br>`*args`, `**kwargs`: Additional arguments. | `responses = agent.run_concurrent_tasks(["Task 1", "Task 2"])` |
| `bulk_run(inputs)` | Generates responses for multiple input sets. | `inputs` (List[Dict[str, Any]]): List of input dictionaries. | `responses = agent.bulk_run([{"task": "Task 1"}, {"task": "Task 2"}])` |
| `run_multiple_images(task, imgs, *args, **kwargs)` | Runs the agent with multiple images using concurrent processing. | `task` (str): The task to perform on each image.<br>`imgs` (List[str]): List of image paths or URLs.<br>`*args`, `**kwargs`: Additional arguments. | `outputs = agent.run_multiple_images("Describe image", ["img1.jpg", "img2.png"])` |
| `continuous_run_with_answer(task, img=None, correct_answer=None, max_attempts=10)` | Runs the agent until the correct answer is provided. | `task` (str): The task to perform.<br>`img` (str, optional): Image to process.<br>`correct_answer` (str): Expected answer.<br>`max_attempts` (int): Maximum attempts. | `response = agent.continuous_run_with_answer("Math problem", correct_answer="42")` |
| `save()` | Saves the agent's history to a file. | None | `agent.save()` |
| `load(file_path)` | Loads the agent's history from a file. | `file_path` (str): Path to the file. | `agent.load("agent_history.json")` |
| `graceful_shutdown()` | Gracefully shuts down the system, saving the state. | None | `agent.graceful_shutdown()` |
@ -178,8 +204,6 @@ graph TD
| `send_agent_message(agent_name, message, *args, **kwargs)` | Sends a message from the agent to a user. | `agent_name` (str): Name of the agent.<br>`message` (str): Message to send.<br>`*args`, `**kwargs`: Additional arguments. | `response = agent.send_agent_message("AgentX", "Task completed")` |
| `add_tool(tool)` | Adds a tool to the agent's toolset. | `tool` (Callable): Tool to add. | `agent.add_tool(my_custom_tool)` |
| `add_tools(tools)` | Adds multiple tools to the agent's toolset. | `tools` (List[Callable]): List of tools to add. | `agent.add_tools([tool1, tool2])` |
| `remove_tool(tool)` | Removes a tool from the agent's toolset. | `tool` (Callable): Tool to remove. | `agent.remove_tool(my_custom_tool)` |
| `remove_tools(tools)` | Removes multiple tools from the agent's toolset. | `tools` (List[Callable]): List of tools to remove. | `agent.remove_tools([tool1, tool2])` |
| `get_docs_from_doc_folders()` | Retrieves and processes documents from the specified folder. | None | `agent.get_docs_from_doc_folders()` |
@ -208,18 +232,30 @@ graph TD
| `handle_sop_ops()` | Handles operations related to standard operating procedures. | None | `agent.handle_sop_ops()` |
| `agent_output_type(responses)` | Processes and returns the agent's output based on the specified output type. | `responses` (list): List of responses. | `formatted_output = agent.agent_output_type(responses)` |
| `check_if_no_prompt_then_autogenerate(task)` | Checks if a system prompt is not set and auto-generates one if needed. | `task` (str): The task to use for generating a prompt. | `agent.check_if_no_prompt_then_autogenerate("Analyze data")` |
| `check_if_no_prompt_then_autogenerate(task)` | Checks if auto_generate_prompt is enabled and generates a prompt by combining agent name, description and system prompt | `task` (str, optional): Task to use as fallback | `agent.check_if_no_prompt_then_autogenerate("Analyze data")` |
| `handle_artifacts(response, output_path, extension)` | Handles saving artifacts from agent execution | `response` (str): Agent response<br>`output_path` (str): Output path<br>`extension` (str): File extension | `agent.handle_artifacts(response, "outputs/", ".txt")` |
| `showcase_config()` | Displays the agent's configuration in a formatted table. | None | `agent.showcase_config()` |
| `talk_to(agent, task, img=None, *args, **kwargs)` | Initiates a conversation with another agent. | `agent` (Any): Target agent.<br>`task` (str): Task to discuss.<br>`img` (str, optional): Image to share.<br>`*args`, `**kwargs`: Additional arguments. | `response = agent.talk_to(other_agent, "Let's collaborate")` |
| `talk_to_multiple_agents(agents, task, *args, **kwargs)` | Talks to multiple agents concurrently. | `agents` (List[Any]): List of target agents.<br>`task` (str): Task to discuss.<br>`*args`, `**kwargs`: Additional arguments. | `responses = agent.talk_to_multiple_agents([agent1, agent2], "Group discussion")` |
| `get_agent_role()` | Returns the role of the agent. | None | `role = agent.get_agent_role()` |
| `pretty_print(response, loop_count)` | Prints the response in a formatted panel. | `response` (str): Response to print.<br>`loop_count` (int): Current loop number. | `agent.pretty_print("Analysis complete", 1)` |
| `parse_llm_output(response)` | Parses and standardizes the output from the LLM. | `response` (Any): Response from the LLM. | `parsed_response = agent.parse_llm_output(llm_output)` |
| `sentiment_and_evaluator(response)` | Performs sentiment analysis and evaluation on the response. | `response` (str): Response to analyze. | `agent.sentiment_and_evaluator("Great response!")` |
| `output_cleaner_op(response)` | Applies output cleaning operations to the response. | `response` (str): Response to clean. | `cleaned_response = agent.output_cleaner_op(response)` |
| `mcp_tool_handling(response, current_loop)` | Handles MCP tool execution and responses. | `response` (Any): Response containing tool calls.<br>`current_loop` (int): Current loop number. | `agent.mcp_tool_handling(response, 1)` |
| `temp_llm_instance_for_tool_summary()` | Creates a temporary LLM instance for tool summaries. | None | `temp_llm = agent.temp_llm_instance_for_tool_summary()` |
| `execute_tools(response, loop_count)` | Executes tools based on the LLM response. | `response` (Any): Response containing tool calls.<br>`loop_count` (int): Current loop number. | `agent.execute_tools(response, 1)` |
| `list_output_types()` | Returns available output types. | None | `types = agent.list_output_types()` |
| `tool_execution_retry(response, loop_count)` | Executes tools with retry logic for handling failures. | `response` (Any): Response containing tool calls.<br>`loop_count` (int): Current loop number. | `agent.tool_execution_retry(response, 1)` |
## Updated Run Method
The run method has been updated with new parameters for enhanced functionality:
| Method | Description | Inputs | Usage Example |
|--------|-------------|--------|----------------|
| `run(task, img=None, is_last=False, device="cpu", device_id=0, all_cores=True, scheduled_run_date=None)` | Runs the agent with specified parameters | `task` (str): Task to run<br>`img` (str, optional): Image path<br>`is_last` (bool): If this is last task<br>`device` (str): Device to use<br>`device_id` (int): GPU ID<br>`all_cores` (bool): Use all CPU cores<br>`scheduled_run_date` (datetime, optional): Future run date | `agent.run("Analyze data", device="gpu", device_id=0)` |
| `run(task, img=None, imgs=None, correct_answer=None, streaming_callback=None, *args, **kwargs)` | Runs the agent with enhanced parameters | `task` (str): Task to run<br>`img` (str, optional): Single image path<br>`imgs` (List[str], optional): List of image paths<br>`correct_answer` (str, optional): Expected answer for validation<br>`streaming_callback` (Callable, optional): Callback for streaming tokens<br>`*args`, `**kwargs`: Additional arguments | `agent.run("Analyze data", imgs=["img1.jpg", "img2.png"])` |
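
For example, the new `imgs` parameter can be passed alongside the task. The sketch below assumes an already-constructed `agent` and uses placeholder image paths.

```python
# Hedged sketch of the updated signature: multiple images in a single call.
outputs = agent.run(
    "Compare the two charts and summarize the differences.",
    imgs=["chart_q1.png", "chart_q2.png"],  # placeholder paths
)
print(outputs)
```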
@ -420,9 +456,35 @@ tasks = [
]
responses = agent.bulk_run(tasks)
print(responses)
# Run multiple tasks in batch mode (new method)
task_list = ["Analyze data", "Generate report", "Create summary"]
batch_responses = agent.run_batched(task_list)
print(f"Completed {len(batch_responses)} tasks in batch mode")
```
### Batch Processing with `run_batched`
The new `run_batched` method allows you to process multiple tasks efficiently:
```python
# Process multiple tasks in batch
tasks = [
"Analyze the financial data for Q1",
"Generate a summary report for stakeholders",
"Create recommendations for Q2 planning"
]
# Run all tasks concurrently
batch_results = agent.run_batched(tasks)
# Process results
for i, (task, result) in enumerate(zip(tasks, batch_results)):
print(f"Task {i+1}: {task}")
print(f"Result: {result}\n")
```
### Various other settings
```python
@ -611,6 +673,36 @@ print(type(str_to_dict(out)))
```
## New Features and Parameters
### Enhanced Run Method Parameters
The `run` method now supports several new parameters for advanced functionality:
- **`imgs`**: Process multiple images simultaneously instead of just one
- **`correct_answer`**: Validate responses against expected answers with automatic retries
- **`streaming_callback`**: Real-time token streaming for interactive applications
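
A minimal sketch of these parameters is shown below; it assumes an existing `agent` and that `streaming_callback` receives each generated token as a string, which is an inference from the description above rather than a confirmed signature.

```python
# Stream tokens as they arrive and validate the final answer.
def on_token(token: str) -> None:
    print(token, end="", flush=True)

response = agent.run(
    "What is 6 times 7? Answer with the number only.",
    correct_answer="42",          # retried until the expected answer appears
    streaming_callback=on_token,  # real-time token streaming
)
```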
### MCP (Model Context Protocol) Integration
New parameters enable seamless MCP server integration:
- **`mcp_url`**: Connect to a single MCP server
- **`mcp_urls`**: Connect to multiple MCP servers
- **`mcp_config`**: Advanced MCP configuration options
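
A hedged sketch of wiring an agent to an MCP server follows; the URL is a placeholder and the example assumes `mcp_url`/`mcp_urls` are accepted directly by the constructor.

```python
from swarms import Agent

# Connect to a single MCP server (or several via mcp_urls).
mcp_agent = Agent(
    agent_name="MCP-Enabled-Agent",
    model_name="gpt-4o-mini",
    max_loops=1,
    mcp_url="http://localhost:8000/mcp",  # placeholder server URL
    # mcp_urls=["http://localhost:8000/mcp", "http://localhost:8001/mcp"],
)
```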
### Advanced Reasoning and Safety
- **`react_on`**: Enable ReAct reasoning for complex problem-solving
- **`safety_prompt_on`**: Add safety constraints to agent responses
- **`reasoning_prompt_on`**: Enable multi-loop reasoning for complex tasks
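
A short sketch, assuming these flags are plain constructor keywords:

```python
from swarms import Agent

reasoning_agent = Agent(
    agent_name="Reasoning-Agent",
    model_name="gpt-4o-mini",
    max_loops=1,
    react_on=True,             # ReAct-style step-by-step reasoning
    safety_prompt_on=True,     # add safety constraints to responses
    reasoning_prompt_on=True,  # multi-loop reasoning prompts
)
```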
### Performance and Resource Management
- **`dynamic_context_window`**: Automatically adjust context window based on available tokens
- **`tool_retry_attempts`**: Configure retry behavior for tool execution
- **`summarize_multiple_images`**: Automatically summarize results from multiple image processing
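
And a corresponding sketch for the resource-management flags, with illustrative values:

```python
from swarms import Agent

tuned_agent = Agent(
    agent_name="Tuned-Agent",
    model_name="gpt-4o-mini",
    max_loops=1,
    dynamic_context_window=True,     # fit history to the available tokens
    tool_retry_attempts=3,           # retries for failed tool executions
    summarize_multiple_images=True,  # condense multi-image outputs
)
```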
## Best Practices
1. Always provide a clear and concise `system_prompt` to guide the agent's behavior.
@ -627,5 +719,9 @@ print(type(str_to_dict(out)))
12. Configure `device` and `device_id` appropriately for optimal performance
13. Enable `rag_every_loop` when continuous context from long-term memory is needed
14. Use `scheduled_run_date` for automated task scheduling
15. Leverage `run_batched` for efficient processing of multiple related tasks
16. Use `mcp_url` or `mcp_urls` to extend agent capabilities with external tools
17. Enable `react_on` for complex reasoning tasks requiring step-by-step analysis
18. Configure `tool_retry_attempts` for robust tool execution in production environments
By following these guidelines and leveraging the Swarm Agent's extensive features, you can create powerful, flexible, and efficient autonomous agents for a wide range of applications.

@ -1,10 +1,5 @@
from swarms import Agent
import litellm
litellm._turn_on_debug() # 👈 this is the 1-line change you need to make
# Initialize the agent
agent = Agent(
agent_name="Quantitative-Trading-Agent",
@ -41,10 +36,8 @@ agent = Agent(
model_name="claude-sonnet-4-20250514",
dynamic_temperature_enabled=True,
output_type="str-all-except-first",
max_loops="auto",
interactive=True,
no_reasoning_prompt=True,
streaming_on=True,
max_loops=1,
dynamic_context_window=True,
)
out = agent.run(

@ -1,661 +1,59 @@
"""
EuroSwarm Parliament - Example Script
EuroSwarm Parliament - Simple Example
This script demonstrates the comprehensive democratic functionality of the EuroSwarm Parliament,
including bill introduction, committee work, parliamentary debates, and democratic voting.
A basic demonstration of the EuroSwarm Parliament functionality.
"""
# Import directly from the file
from euroswarm_parliament import (
EuroSwarmParliament,
VoteType,
)
from euroswarm_parliament import EuroSwarmParliament, VoteType
def demonstrate_parliament_initialization():
"""Demonstrate parliament initialization and basic functionality with cost optimization."""
def main():
"""Simple demonstration of EuroSwarm Parliament."""
print(
"\nEUROSWARM PARLIAMENT INITIALIZATION DEMONSTRATION (COST OPTIMIZED)"
)
print("=" * 60)
print("EUROSWARM PARLIAMENT - SIMPLE EXAMPLE")
print("=" * 50)
# Initialize the parliament with cost optimization
# Initialize the parliament
parliament = EuroSwarmParliament(
eu_data_file="EU.xml",
parliament_size=None, # Use all MEPs from EU.xml (717)
enable_democratic_discussion=True,
enable_committee_work=True,
enable_amendment_process=True,
enable_lazy_loading=True, # NEW: Lazy load MEP agents
enable_caching=True, # NEW: Enable response caching
batch_size=25, # NEW: Batch size for concurrent execution
budget_limit=100.0, # NEW: Budget limit in dollars
verbose=True,
)
print(f"Parliament initialized with {len(parliament.meps)} MEPs")
# Show parliament composition with cost stats
composition = parliament.get_parliament_composition()
print("\nPARLIAMENT COMPOSITION:")
print(f"Total MEPs: {composition['total_meps']}")
print(
f"Loaded MEPs: {composition['loaded_meps']} (lazy loading active)"
)
print("\nCOST OPTIMIZATION:")
cost_stats = composition["cost_stats"]
print(
f"Budget Limit: ${cost_stats['budget_remaining'] + cost_stats['total_cost']:.2f}"
)
print(f"Budget Used: ${cost_stats['total_cost']:.2f}")
print(f"Budget Remaining: ${cost_stats['budget_remaining']:.2f}")
print(f"Cache Hit Rate: {cost_stats['cache_hit_rate']:.1%}")
print("\nPOLITICAL GROUP DISTRIBUTION:")
for group, data in composition["political_groups"].items():
count = data["count"]
percentage = data["percentage"]
print(f" {group}: {count} MEPs ({percentage:.1f}%)")
print("\nCOMMITTEE LEADERSHIP:")
for committee_name, committee_data in composition[
"committees"
].items():
chair = committee_data["chair"]
if chair:
print(f" {committee_name}: {chair}")
return parliament
def demonstrate_individual_mep_interaction(parliament):
"""Demonstrate individual MEP interaction and personality."""
print("\nINDIVIDUAL MEP INTERACTION DEMONSTRATION")
print("=" * 60)
# Get a sample MEP
sample_mep_name = list(parliament.meps.keys())[0]
sample_mep = parliament.meps[sample_mep_name]
print(f"Sample MEP: {sample_mep.full_name}")
print(f"\nSample MEP: {sample_mep.full_name}")
print(f"Country: {sample_mep.country}")
print(f"Political Group: {sample_mep.political_group}")
print(f"National Party: {sample_mep.national_party}")
print(f"Committees: {', '.join(sample_mep.committees)}")
print(f"Expertise Areas: {', '.join(sample_mep.expertise_areas)}")
# Test MEP agent interaction
if sample_mep.agent:
test_prompt = "What are your views on European integration and how do you approach cross-border cooperation?"
print(f"\nMEP Response to: '{test_prompt}'")
print("-" * 50)
try:
response = sample_mep.agent.run(test_prompt)
print(
response[:500] + "..."
if len(response) > 500
else response
)
except Exception as e:
print(f"Error getting MEP response: {e}")
def demonstrate_committee_work(parliament):
"""Demonstrate committee work and hearings."""
print("\nCOMMITTEE WORK DEMONSTRATION")
print("=" * 60)
# Get a real MEP as sponsor
sponsor = list(parliament.meps.keys())[0]
# Create a test bill
# Create a simple bill
bill = parliament.introduce_bill(
title="European Digital Rights and Privacy Protection Act",
description="Comprehensive legislation to strengthen digital rights, enhance privacy protection, and establish clear guidelines for data handling across the European Union.",
title="European Digital Rights Act",
description="Basic legislation to protect digital rights across the EU.",
bill_type=VoteType.ORDINARY_LEGISLATIVE_PROCEDURE,
committee="Legal Affairs",
sponsor=sponsor,
sponsor=sample_mep_name,
)
print(f"Bill: {bill.title}")
print(f"\nBill introduced: {bill.title}")
print(f"Committee: {bill.committee}")
print(f"Sponsor: {bill.sponsor}")
# Conduct committee hearing
print("\nCONDUCTING COMMITTEE HEARING...")
hearing_result = parliament.conduct_committee_hearing(
bill.committee, bill
)
print(f"Committee: {hearing_result['committee']}")
print(f"Participants: {len(hearing_result['participants'])} MEPs")
print(
f"Recommendation: {hearing_result['recommendations']['recommendation']}"
)
print(
f"Support: {hearing_result['recommendations']['support_percentage']:.1f}%"
)
print(
f"Oppose: {hearing_result['recommendations']['oppose_percentage']:.1f}%"
)
print(
f"Amend: {hearing_result['recommendations']['amend_percentage']:.1f}%"
)
def demonstrate_parliamentary_debate(parliament):
"""Demonstrate parliamentary debate functionality."""
print("\nPARLIAMENTARY DEBATE DEMONSTRATION")
print("=" * 60)
# Get a real MEP as sponsor
sponsor = list(parliament.meps.keys())[1]
# Create a test bill
bill = parliament.introduce_bill(
title="European Green Deal Implementation Act",
description="Legislation to implement the European Green Deal, including carbon neutrality targets, renewable energy investments, and sustainable development measures.",
bill_type=VoteType.ORDINARY_LEGISLATIVE_PROCEDURE,
committee="Environment, Public Health and Food Safety",
sponsor=sponsor,
)
print(f"Bill: {bill.title}")
print(f"Description: {bill.description}")
# Conduct parliamentary debate
print("\nCONDUCTING PARLIAMENTARY DEBATE...")
debate_result = parliament.conduct_parliamentary_debate(
bill, max_speakers=10
)
print(
f"Debate Participants: {len(debate_result['participants'])} MEPs"
)
print("Debate Analysis:")
print(
f" Support: {debate_result['analysis']['support_count']} speakers ({debate_result['analysis']['support_percentage']:.1f}%)"
)
print(
f" Oppose: {debate_result['analysis']['oppose_count']} speakers ({debate_result['analysis']['oppose_percentage']:.1f}%)"
)
print(
f" Neutral: {debate_result['analysis']['neutral_count']} speakers ({debate_result['analysis']['neutral_percentage']:.1f}%)"
)
def demonstrate_democratic_voting(parliament):
"""Demonstrate democratic voting functionality."""
print("\nDEMOCRATIC VOTING DEMONSTRATION")
print("=" * 60)
# Get a real MEP as sponsor
sponsor = list(parliament.meps.keys())[2]
# Create a test bill
bill = parliament.introduce_bill(
title="European Social Rights and Labor Protection Act",
description="Legislation to strengthen social rights, improve labor conditions, and ensure fair treatment of workers across the European Union.",
bill_type=VoteType.ORDINARY_LEGISLATIVE_PROCEDURE,
committee="Employment and Social Affairs",
sponsor=sponsor,
)
print(f"Bill: {bill.title}")
print(f"Sponsor: {bill.sponsor}")
# Conduct democratic vote
print("\nCONDUCTING DEMOCRATIC VOTE...")
# Conduct a simple vote
print("\nConducting democratic vote...")
vote_result = parliament.conduct_democratic_vote(bill)
# Calculate percentages
total_votes = (
vote_result.votes_for
+ vote_result.votes_against
+ vote_result.abstentions
)
in_favor_percentage = (
(vote_result.votes_for / total_votes * 100)
if total_votes > 0
else 0
)
against_percentage = (
(vote_result.votes_against / total_votes * 100)
if total_votes > 0
else 0
)
abstentions_percentage = (
(vote_result.abstentions / total_votes * 100)
if total_votes > 0
else 0
)
print("Vote Results:")
print(f" Total Votes: {total_votes}")
print(
f" In Favor: {vote_result.votes_for} ({in_favor_percentage:.1f}%)"
)
print(
f" Against: {vote_result.votes_against} ({against_percentage:.1f}%)"
)
print(
f" Abstentions: {vote_result.abstentions} ({abstentions_percentage:.1f}%)"
)
print(f" In Favor: {vote_result.votes_for}")
print(f" Against: {vote_result.votes_against}")
print(f" Abstentions: {vote_result.abstentions}")
print(f" Result: {vote_result.result.value}")
# Show political group breakdown if available
if (
hasattr(vote_result, "group_votes")
and vote_result.group_votes
):
print("\nPOLITICAL GROUP BREAKDOWN:")
for group, votes in vote_result.group_votes.items():
print(
f" {group}: {votes['in_favor']}/{votes['total']} in favor ({votes['percentage']:.1f}%)"
)
else:
print(
f"\nIndividual votes recorded: {len(vote_result.individual_votes)} MEPs"
)
def demonstrate_complete_democratic_session(parliament):
"""Demonstrate a complete democratic parliamentary session."""
print("\nCOMPLETE DEMOCRATIC SESSION DEMONSTRATION")
print("=" * 60)
# Get a real MEP as sponsor
sponsor = list(parliament.meps.keys())[3]
# Run complete session
session_result = parliament.run_democratic_session(
bill_title="European Innovation and Technology Advancement Act",
bill_description="Comprehensive legislation to promote innovation, support technology startups, and establish Europe as a global leader in digital transformation and technological advancement.",
bill_type=VoteType.ORDINARY_LEGISLATIVE_PROCEDURE,
committee="Industry, Research and Energy",
sponsor=sponsor,
)
print("Session Results:")
print(f" Bill: {session_result['bill'].title}")
print(
f" Committee Hearing: {session_result['hearing']['recommendations']['recommendation']}"
)
print(
f" Debate Participants: {len(session_result['debate']['participants'])} MEPs"
)
print(f" Final Vote: {session_result['vote']['result']}")
print(
f" Vote Margin: {session_result['vote']['in_favor_percentage']:.1f}% in favor"
)
def demonstrate_political_analysis(parliament):
"""Demonstrate political analysis and voting prediction."""
print("\nPOLITICAL ANALYSIS DEMONSTRATION")
print("=" * 60)
# Get a real MEP as sponsor
sponsor = list(parliament.meps.keys())[4]
# Create a test bill
bill = parliament.introduce_bill(
title="European Climate Action and Sustainability Act",
description="Comprehensive climate action legislation including carbon pricing, renewable energy targets, and sustainable development measures.",
bill_type=VoteType.ORDINARY_LEGISLATIVE_PROCEDURE,
committee="Environment, Public Health and Food Safety",
sponsor=sponsor,
)
print(f"Bill: {bill.title}")
print(f"Sponsor: {bill.sponsor}")
# Analyze political landscape
analysis = parliament.analyze_political_landscape(bill)
print("\nPOLITICAL LANDSCAPE ANALYSIS:")
print(f" Overall Support: {analysis['overall_support']:.1f}%")
print(f" Opposition: {analysis['opposition']:.1f}%")
print(f" Uncertainty: {analysis['uncertainty']:.1f}%")
print("\nPOLITICAL GROUP ANALYSIS:")
for group, data in analysis["group_analysis"].items():
print(
f" {group}: {data['support']:.1f}% support, {data['opposition']:.1f}% opposition"
)
def demonstrate_hierarchical_democratic_voting(parliament):
"""Demonstrate hierarchical democratic voting with political group boards."""
print("\nHIERARCHICAL DEMOCRATIC VOTING DEMONSTRATION")
print("=" * 60)
# Get a real MEP as sponsor
sponsor = list(parliament.meps.keys())[5]
# Create a test bill
bill = parliament.introduce_bill(
title="European Climate Action and Sustainability Act",
description="Comprehensive climate action legislation including carbon pricing, renewable energy targets, and sustainable development measures.",
bill_type=VoteType.ORDINARY_LEGISLATIVE_PROCEDURE,
committee="Environment, Public Health and Food Safety",
sponsor=sponsor,
)
print(f"Bill: {bill.title}")
print(f"Sponsor: {bill.sponsor}")
# Conduct hierarchical vote
print("\nCONDUCTING HIERARCHICAL DEMOCRATIC VOTE...")
hierarchical_result = (
parliament.conduct_hierarchical_democratic_vote(bill)
)
print("Hierarchical Vote Results:")
print(f" Total Votes: {hierarchical_result['total_votes']}")
print(
f" In Favor: {hierarchical_result['in_favor']} ({hierarchical_result['in_favor_percentage']:.1f}%)"
)
print(
f" Against: {hierarchical_result['against']} ({hierarchical_result['against_percentage']:.1f}%)"
)
print(f" Result: {hierarchical_result['result']}")
print("\nPOLITICAL GROUP BOARD DECISIONS:")
for group, decision in hierarchical_result[
"group_decisions"
].items():
print(
f" {group}: {decision['decision']} ({decision['confidence']:.1f}% confidence)"
)
def demonstrate_complete_hierarchical_session(parliament):
"""Demonstrate a complete hierarchical democratic session."""
print("\nCOMPLETE HIERARCHICAL DEMOCRATIC SESSION DEMONSTRATION")
print("=" * 60)
# Get a real MEP as sponsor
sponsor = list(parliament.meps.keys())[6]
# Run complete hierarchical session
session_result = parliament.run_hierarchical_democratic_session(
bill_title="European Climate Action and Sustainability Act",
bill_description="Comprehensive climate action legislation including carbon pricing, renewable energy targets, and sustainable development measures.",
bill_type=VoteType.ORDINARY_LEGISLATIVE_PROCEDURE,
committee="Environment, Public Health and Food Safety",
sponsor=sponsor,
)
print("Hierarchical Session Results:")
print(f" Bill: {session_result['bill'].title}")
print(
f" Committee Hearing: {session_result['hearing']['recommendations']['recommendation']}"
)
print(
f" Debate Participants: {len(session_result['debate']['participants'])} MEPs"
)
print(f" Final Vote: {session_result['vote']['result']}")
print(
f" Vote Margin: {session_result['vote']['in_favor_percentage']:.1f}% in favor"
)
def demonstrate_wikipedia_personalities(parliament):
"""Demonstrate the Wikipedia personality system for realistic MEP behavior."""
print("\nWIKIPEDIA PERSONALITY SYSTEM DEMONSTRATION")
print("=" * 60)
# Check if Wikipedia personalities are available
if not parliament.enable_wikipedia_personalities:
print("Wikipedia personality system not available")
print(
"To enable: Install required dependencies and run Wikipedia scraper"
)
return
print("Wikipedia personality system enabled")
print(
f"Loaded {len(parliament.personality_profiles)} personality profiles"
)
# Show sample personality profiles
print("\nSAMPLE PERSONALITY PROFILES:")
print("-" * 40)
sample_count = 0
for mep_name, profile in parliament.personality_profiles.items():
if sample_count >= 3: # Show only 3 samples
break
print(f"\n{mep_name}")
print(
f" Wikipedia URL: {profile.wikipedia_url if profile.wikipedia_url else 'Not available'}"
)
print(
f" Summary: {profile.summary[:200]}..."
if profile.summary
else "No summary available"
)
print(
f" Political Views: {profile.political_views[:150]}..."
if profile.political_views
else "Based on party alignment"
)
print(
f" Policy Focus: {profile.policy_focus[:150]}..."
if profile.policy_focus
else "General parliamentary work"
)
print(
f" Achievements: {profile.achievements[:150]}..."
if profile.achievements
else "Parliamentary service"
)
print(f" Last Updated: {profile.last_updated}")
sample_count += 1
# Demonstrate personality-driven voting
print("\nPERSONALITY-DRIVEN VOTING DEMONSTRATION:")
print("-" * 50)
# Create a test bill that would trigger different personality responses
bill = parliament.introduce_bill(
title="European Climate Action and Green Technology Investment Act",
description="Comprehensive legislation to accelerate Europe's transition to renewable energy, including massive investments in green technology, carbon pricing mechanisms, and support for affected industries and workers.",
bill_type=VoteType.ORDINARY_LEGISLATIVE_PROCEDURE,
committee="Environment",
sponsor="Climate Action Leader",
)
print(f"Bill: {bill.title}")
print(f"Description: {bill.description}")
# Show how different MEPs with Wikipedia personalities would respond
print("\nPERSONALITY-BASED RESPONSES:")
print("-" * 40)
sample_meps = list(parliament.personality_profiles.keys())[:3]
for mep_name in sample_meps:
mep = parliament.meps.get(mep_name)
profile = parliament.personality_profiles.get(mep_name)
if mep and profile:
print(f"\n{mep_name} ({mep.political_group})")
# Show personality influence
if profile.political_views:
print(
f" Political Views: {profile.political_views[:100]}..."
)
if profile.policy_focus:
print(
f" Policy Focus: {profile.policy_focus[:100]}..."
)
# Predict voting behavior based on personality
if (
"environment" in profile.policy_focus.lower()
or "climate" in profile.political_views.lower()
):
predicted_vote = "LIKELY SUPPORT"
reasoning = (
"Environmental policy focus and climate advocacy"
)
elif (
"economic" in profile.policy_focus.lower()
or "business" in profile.political_views.lower()
):
predicted_vote = "LIKELY OPPOSE"
reasoning = "Economic concerns about investment costs"
else:
predicted_vote = "UNCERTAIN"
reasoning = (
"Mixed considerations based on party alignment"
)
print(f" Predicted Vote: {predicted_vote}")
print(f" Reasoning: {reasoning}")
# Demonstrate scraping functionality
print("\nWIKIPEDIA SCRAPING CAPABILITIES:")
print("-" * 50)
print("Can scrape Wikipedia data for all 717 MEPs")
print(
"Extracts political views, career history, and achievements"
)
print("Creates detailed personality profiles in JSON format")
print(
"Integrates real personality data into AI agent system prompts"
)
print("Enables realistic, personality-driven voting behavior")
print("Respectful API usage with configurable delays")
print("\nTo scrape all MEP personalities:")
print(" parliament.scrape_wikipedia_personalities(delay=1.0)")
print(
" # This will create personality profiles for all 717 MEPs"
)
print(" # Profiles are saved in 'mep_personalities/' directory")
def demonstrate_optimized_parliamentary_session(parliament):
"""Demonstrate cost-optimized parliamentary session."""
print("\nCOST-OPTIMIZED PARLIAMENTARY SESSION DEMONSTRATION")
print("=" * 60)
# Run optimized session with cost limit
session_result = parliament.run_optimized_parliamentary_session(
bill_title="European Digital Rights and Privacy Protection Act",
bill_description="Comprehensive legislation to strengthen digital rights, enhance privacy protection, and establish clear guidelines for data handling across the European Union.",
bill_type=VoteType.ORDINARY_LEGISLATIVE_PROCEDURE,
committee="Legal Affairs",
max_cost=25.0, # Max $25 for this session
)
print("Session Results:")
print(
f" Bill: {session_result['session_summary']['bill_title']}"
)
print(
f" Final Outcome: {session_result['session_summary']['final_outcome']}"
)
print(
f" Total Cost: ${session_result['session_summary']['total_cost']:.2f}"
)
print(
f" Budget Remaining: ${session_result['cost_stats']['budget_remaining']:.2f}"
)
# Show detailed cost statistics
cost_stats = parliament.get_cost_statistics()
print("\nDETAILED COST STATISTICS:")
print(f" Total Tokens Used: {cost_stats['total_tokens']:,}")
print(f" Requests Made: {cost_stats['requests_made']}")
print(f" Cache Hits: {cost_stats['cache_hits']}")
print(f" Cache Hit Rate: {cost_stats['cache_hit_rate']:.1%}")
print(
f" Loading Efficiency: {cost_stats['loading_efficiency']:.1%}"
)
print(f" Cache Size: {cost_stats['cache_size']} entries")
return session_result
def main():
"""Main demonstration function."""
print("EUROSWARM PARLIAMENT - COST OPTIMIZED DEMONSTRATION")
print("=" * 60)
print(
"This demonstration shows the EuroSwarm Parliament with cost optimization features:"
)
print("• Lazy loading of MEP agents (only create when needed)")
print("• Response caching (avoid repeated API calls)")
print("• Batch processing (control memory and cost)")
print("• Budget controls (hard limits on spending)")
print("• Cost tracking (real-time monitoring)")
# Initialize parliament with cost optimization
parliament = demonstrate_parliament_initialization()
# Demonstrate individual MEP interaction (will trigger lazy loading)
demonstrate_individual_mep_interaction(parliament)
# Demonstrate committee work with cost optimization
demonstrate_committee_work(parliament)
# Demonstrate parliamentary debate with cost optimization
demonstrate_parliamentary_debate(parliament)
# Demonstrate democratic voting with cost optimization
demonstrate_democratic_voting(parliament)
# Demonstrate political analysis with cost optimization
demonstrate_political_analysis(parliament)
# Demonstrate optimized parliamentary session
demonstrate_optimized_parliamentary_session(parliament)
# Show final cost statistics
final_stats = parliament.get_cost_statistics()
print("\nFINAL COST STATISTICS:")
print(f"Total Cost: ${final_stats['total_cost']:.2f}")
print(f"Budget Remaining: ${final_stats['budget_remaining']:.2f}")
print(f"Cache Hit Rate: {final_stats['cache_hit_rate']:.1%}")
print(
f"Loading Efficiency: {final_stats['loading_efficiency']:.1%}"
)
print("\n✅ COST OPTIMIZATION DEMONSTRATION COMPLETED!")
print(
"✅ EuroSwarm Parliament now supports cost-effective large-scale simulations"
)
print(
f"✅ Lazy loading: {final_stats['loaded_meps']}/{final_stats['total_meps']} MEPs loaded"
)
print(f"✅ Caching: {final_stats['cache_hit_rate']:.1%} hit rate")
print(
f"✅ Budget control: ${final_stats['total_cost']:.2f} spent of ${final_stats['budget_remaining'] + final_stats['total_cost']:.2f} budget"
)
print("\n✅ Simple example completed!")
if __name__ == "__main__":

@ -0,0 +1,39 @@
"""
Bell Labs Research Simulation Example
This example demonstrates how to use the BellLabsSwarm to simulate
collaborative research among famous physicists.
"""
from swarms.sims.bell_labs import (
run_bell_labs_research,
)
def main():
"""
Run the Bell Labs research simulation.
This example asks the research question:
"Why doesn't physics take a vacation? Why are the laws of physics consistent?"
"""
research_question = """
Why doesn't physics take a vacation? Why are the laws of physics consistent across time and space?
Explore the philosophical and scientific foundations for the uniformity and invariance of physical laws.
Consider both theoretical explanations and any empirical evidence or challenges to this consistency.
"""
# Run the research simulation
results = run_bell_labs_research(
research_question=research_question,
max_loops=1,
model_name="claude-3-5-sonnet-20240620",
verbose=True,
)
print(results)
if __name__ == "__main__":
main()

@ -0,0 +1,29 @@
from swarms import Agent
def main():
"""
Run a quantitative trading agent to recommend top 3 gold ETFs.
"""
agent = Agent(
agent_name="Quantitative-Trading-Agent",
agent_description="Advanced quantitative trading and algorithmic analysis agent",
system_prompt=(
"You are an expert quantitative trading agent. "
"Recommend the best gold ETFs using your expertise in trading strategies, "
"risk management, and financial analysis. Be concise and precise."
),
model_name="claude-sonnet-4-20250514",
dynamic_temperature_enabled=True,
max_loops=1,
dynamic_context_window=True,
)
out = agent.run(
task="What are the best top 3 etfs for gold coverage?"
)
print(out)
if __name__ == "__main__":
main()

@ -0,0 +1,206 @@
"""
Claude Code Agent Tool - Setup Guide
This tool provides a Claude Code Agent that can:
- Generate code and applications from natural language descriptions
- Write files, execute shell commands, and manage Git repositories
- Perform web searches and file operations
- Handle complex development tasks with retry logic
SETUP GUIDE:
1. Install dependencies:
pip install claude-code-sdk
npm install -g @anthropic-ai/claude-code
2. Set environment variable:
export ANTHROPIC_API_KEY="your-api-key-here"
3. Use the tool:
from claude_as_a_tool import developer_worker_agent
result = developer_worker_agent(
task="Create a Python web scraper",
system_prompt="You are a helpful coding assistant"
)
REQUIRED: ANTHROPIC_API_KEY environment variable must be set
"""
import asyncio
from typing import Any, Dict, List
from claude_code_sdk import ClaudeCodeOptions, ClaudeSDKClient
from dotenv import load_dotenv
from tenacity import retry, stop_after_attempt, wait_exponential
from loguru import logger
load_dotenv()
class ClaudeAppGenerator:
"""
Generates applications using Claude Code SDK based on specifications.
"""
def __init__(
self,
name: str = "Developer Worker Agent",
description: str = "A developer worker agent that can generate code and write it to a file.",
retries: int = 3,
retry_delay: float = 2.0,
system_prompt: str = None,
debug_mode: bool = False,
max_steps: int = 40,
model: str = "claude-sonnet-4-20250514",
max_thinking_tokens: int = 1000,
):
"""
Initialize the app generator.
Args:
name: Name of the app
description: Description of the app
retries: Number of retries
retry_delay: Delay between retries
system_prompt: System prompt
debug_mode: Enable extra verbose logging for Claude outputs
max_steps: Maximum number of steps
model: Model to use
max_thinking_tokens: Maximum number of thinking tokens for the model
"""
self.name = name
self.description = description
self.retries = retries
self.retry_delay = retry_delay
self.system_prompt = system_prompt
self.model = model
self.debug_mode = debug_mode
self.max_steps = max_steps
self.max_thinking_tokens = max_thinking_tokens
@retry(
stop=stop_after_attempt(3),
wait=wait_exponential(multiplier=1, min=4, max=15),
)
async def generate_app_with_claude(
self, task: str
) -> List[str]:
"""
Generate app using Claude Code SDK with robust error handling and retry logic.
Args:
task: Task to be completed
Returns:
List of response text segments from Claude
"""
# Configure the Claude SDK client options
claude_options = ClaudeCodeOptions(
system_prompt=self.system_prompt,
max_turns=self.max_steps, # Sufficient for local app development and GitHub setup
allowed_tools=[
"Read",
"Write",
"Bash",
"GitHub",
"Git",
"Grep",
"WebSearch",
],
continue_conversation=True, # Continue the existing conversation context
model=self.model,
max_thinking_tokens=self.max_thinking_tokens,
)
async with ClaudeSDKClient(options=claude_options) as client:
# Generate the application
await client.query(task)
response_text = []
message_count = 0
async for message in client.receive_response():
message_count += 1
if hasattr(message, "content"):
for block in message.content:
if hasattr(block, "text"):
text_content = block.text
response_text.append(text_content)
logger.info(text_content)
elif hasattr(block, "type"):
if self.debug_mode and hasattr(
block, "input"
):
input_str = str(block.input)
if len(input_str) > 200:
input_str = (
input_str[:200]
+ "... (truncated)"
)
print(f"Tool Input: {input_str}")
elif type(message).__name__ == "ResultMessage":
result_text = str(message.result)
response_text.append(result_text)
return response_text
def run(self, task: str) -> List[str]:
"""
Synchronous wrapper for app generation to work with ThreadPoolExecutor.
Args:
task: The task to be completed
Returns:
List of response text segments from Claude
"""
return asyncio.run(self.generate_app_with_claude(task))
def developer_worker_agent(task: str, system_prompt: str) -> str:
"""
Developer Worker Agent
This function instantiates a ClaudeAppGenerator agent, which is a highly capable developer assistant designed to automate software development tasks.
The agent leverages the Claude Code SDK to interpret natural language instructions and generate code, scripts, or even entire applications.
It can interact with files, execute shell commands, perform web searches, and utilize version control systems such as Git and GitHub.
The agent is robust, featuring retry logic, customizable system prompts, and debug modes for verbose output.
It is ideal for automating repetitive coding tasks, prototyping, and integrating with developer workflows.
Capabilities:
- Generate code based on detailed task descriptions.
- Write generated code to files.
- Execute shell commands and scripts.
- Interact with Git and GitHub for version control operations.
- Perform web searches to gather information or code snippets.
- Provide detailed logs and debugging information if enabled.
- Handle errors gracefully with configurable retry logic.
Args:
task (str): The development task or instruction for the agent to complete.
system_prompt (str): The system prompt to guide the agent's behavior and context.
Returns:
str: The result of the agent's execution for the given task.
"""
claude_code_sdk = ClaudeAppGenerator(system_prompt=system_prompt)
return claude_code_sdk.run(task)
# agent = Agent(
# agent_name="Developer Worker Agent",
# agent_description="A developer worker agent that can generate code and write it to a file.",
# tools=[developer_worker_agent],
# system_prompt="You are a developer worker agent. You are given a task and you need to complete it.",
# )
# agent.run(
# task="Write a simple python script that prints 'Hello, World!'"
# )
# if __name__ == "__main__":
# task = "Write a simple python script that prints 'Hello, World!'"
# system_prompt = "You are a developer worker agent. You are given a task and you need to complete it."
# print(developer_worker_agent(task, system_prompt))

@ -5,7 +5,7 @@ build-backend = "poetry.core.masonry.api"
[tool.poetry]
name = "swarms"
version = "8.0.5"
version = "8.1.1"
description = "Swarms - TGSC"
license = "MIT"
authors = ["Kye Gomez <kye@apac.ai>"]

@ -2,10 +2,6 @@ from swarms.sims.senator_assembly import SenatorAssembly
def main():
"""
Runs a simulation of a Senate vote on a bill proposing significant tax cuts for all Americans.
The bill is described in realistic legislative terms, and the simulation uses a concurrent voting model.
"""
senator_simulation = SenatorAssembly(
model_name="claude-sonnet-4-20250514"
)

@ -0,0 +1,816 @@
"""
Bell Labs Research Simulation with Physicist Agents
This simulation creates specialized AI agents representing famous physicists
from the Bell Labs era, including Oppenheimer, von Neumann, Feynman, Einstein,
and others. The agents work together in a collaborative research environment
following a structured workflow: task -> Oppenheimer (planning) -> physicist discussion
-> code implementation -> results analysis -> repeat for n loops.
"""
from functools import lru_cache
from typing import Any, Dict, List, Optional
from loguru import logger
from swarms.structs.agent import Agent
from swarms.structs.conversation import Conversation
from swarms.utils.history_output_formatter import (
history_output_formatter,
)
# from examples.tools.claude_as_a_tool import developer_worker_agent
@lru_cache(maxsize=1)
def _create_physicist_agents(
model_name: str, random_model_name: bool = False
) -> List[Agent]:
"""
Create specialized agents for each physicist.
Args:
model_name: Model to use for all agents
random_model_name: Whether to assign a random model to each agent
Returns:
List of configured physicist agents
"""
physicists_data = {
"J. Robert Oppenheimer": {
"role": "Research Director & Theoretical Physicist",
"expertise": [
"Nuclear physics",
"Quantum mechanics",
"Research coordination",
"Strategic planning",
"Team leadership",
],
"background": "Director of the Manhattan Project, expert in quantum mechanics and nuclear physics",
"system_prompt": """You are J. Robert Oppenheimer, the brilliant theoretical physicist and research director.
Your role is to:
1. Analyze complex research questions and break them down into manageable components
2. Create comprehensive research plans with clear objectives and methodologies
3. Coordinate the research team and ensure effective collaboration
4. Synthesize findings from different physicists into coherent conclusions
5. Guide the research process with strategic insights and theoretical frameworks
You excel at:
- Identifying the core theoretical challenges in any research question
- Designing experimental approaches that test fundamental principles
- Balancing theoretical rigor with practical implementation
- Fostering interdisciplinary collaboration between specialists
- Maintaining focus on the most promising research directions
When creating research plans, be thorough, systematic, and consider multiple approaches.
Always emphasize the theoretical foundations and experimental validation of any proposed solution.""",
},
"John von Neumann": {
"role": "Mathematical Physicist & Computer Scientist",
"expertise": [
"Mathematical physics",
"Computer architecture",
"Game theory",
"Quantum mechanics",
"Numerical methods",
],
"background": "Pioneer of computer science, game theory, and mathematical physics",
"system_prompt": """You are John von Neumann, the brilliant mathematical physicist and computer scientist.
Your approach to research questions involves:
1. Mathematical rigor and formal mathematical frameworks
2. Computational and algorithmic solutions to complex problems
3. Game theory and strategic analysis of research approaches
4. Numerical methods and computational physics
5. Bridging abstract theory with practical implementation
You excel at:
- Formulating problems in precise mathematical terms
- Developing computational algorithms and numerical methods
- Applying game theory to optimize research strategies
- Creating mathematical models that capture complex phenomena
- Designing efficient computational approaches to physical problems
When analyzing research questions, focus on mathematical foundations, computational feasibility,
and the development of rigorous theoretical frameworks that can be implemented and tested.""",
},
"Richard Feynman": {
"role": "Theoretical Physicist & Problem Solver",
"expertise": [
"Quantum electrodynamics",
"Particle physics",
"Problem-solving methodology",
"Intuitive physics",
"Experimental design",
],
"background": "Nobel laureate in physics, known for intuitive problem-solving and quantum electrodynamics",
"system_prompt": """You are Richard Feynman, the brilliant theoretical physicist and master problem solver.
Your research methodology involves:
1. Intuitive understanding of complex physical phenomena
2. Creative problem-solving approaches that cut through complexity
3. Experimental design that tests fundamental principles
4. Clear communication of complex ideas through analogies and examples
5. Focus on the most essential aspects of any research question
You excel at:
- Finding elegant solutions to seemingly intractable problems
- Designing experiments that reveal fundamental truths
- Communicating complex physics in accessible terms
- Identifying the core physics behind any phenomenon
- Developing intuitive models that capture essential behavior
When approaching research questions, look for the simplest, most elegant solutions.
Focus on the fundamental physics and design experiments that test your understanding directly.""",
},
"Albert Einstein": {
"role": "Theoretical Physicist & Conceptual Innovator",
"expertise": [
"Relativity theory",
"Quantum mechanics",
"Conceptual physics",
"Thought experiments",
"Fundamental principles",
],
"background": "Revolutionary physicist who developed relativity theory and influenced quantum mechanics",
"system_prompt": """You are Albert Einstein, the revolutionary theoretical physicist and conceptual innovator.
Your research approach involves:
1. Deep conceptual thinking about fundamental physical principles
2. Thought experiments that reveal the essence of physical phenomena
3. Questioning established assumptions and exploring new paradigms
4. Focus on the most fundamental and universal aspects of physics
5. Intuitive understanding of space, time, and the nature of reality
You excel at:
- Identifying the conceptual foundations of any physical theory
- Developing thought experiments that challenge conventional wisdom
- Finding elegant mathematical descriptions of physical reality
- Questioning fundamental assumptions and exploring alternatives
- Developing unified theories that explain diverse phenomena
When analyzing research questions, focus on the conceptual foundations and fundamental principles.
Look for elegant, unified explanations and be willing to challenge established paradigms.""",
},
"Enrico Fermi": {
"role": "Experimental Physicist & Nuclear Scientist",
"expertise": [
"Nuclear physics",
"Experimental physics",
"Neutron physics",
"Statistical physics",
"Practical applications",
],
"background": "Nobel laureate known for nuclear physics, experimental work, and the first nuclear reactor",
"system_prompt": """You are Enrico Fermi, the brilliant experimental physicist and nuclear scientist.
Your research methodology involves:
1. Rigorous experimental design and execution
2. Practical application of theoretical principles
3. Statistical analysis and probability in physics
4. Nuclear physics and particle interactions
5. Bridging theory with experimental validation
You excel at:
- Designing experiments that test theoretical predictions
- Applying statistical methods to physical problems
- Developing practical applications of fundamental physics
- Nuclear physics and particle physics experiments
- Creating experimental setups that reveal new phenomena
When approaching research questions, focus on experimental design and practical implementation.
Emphasize the importance of experimental validation and statistical analysis in physics research.""",
},
"Code-Implementer": {
"role": "Computational Physicist & Code Developer",
"expertise": [
"Scientific computing",
"Physics simulations",
"Data analysis",
"Algorithm implementation",
"Numerical methods",
],
"background": "Specialized in implementing computational solutions to physics problems",
"system_prompt": """You are a specialized computational physicist and code developer.
Your responsibilities include:
1. Implementing computational solutions to physics problems
2. Developing simulations and numerical methods
3. Analyzing data and presenting results clearly
4. Testing theoretical predictions through computation
5. Providing quantitative analysis of research findings
You excel at:
- Writing clear, efficient scientific code
- Implementing numerical algorithms for physics problems
- Data analysis and visualization
- Computational optimization and performance
- Bridging theoretical physics with computational implementation
When implementing solutions, focus on:
- Clear, well-documented code
- Efficient numerical algorithms
- Comprehensive testing and validation
- Clear presentation of results and analysis
- Quantitative assessment of theoretical predictions""",
},
}
agents = []
for name, data in physicists_data.items():
agent = Agent(
agent_name=name,
system_prompt=data["system_prompt"],
model_name=model_name,
random_model_name=random_model_name,
max_loops=1,
dynamic_temperature_enabled=True,
dynamic_context_window=True,
)
agents.append(agent)
return agents
class BellLabsSwarm:
"""
Bell Labs Research Simulation Swarm
Simulates the collaborative research environment of Bell Labs with famous physicists
working together on complex research questions. The workflow follows:
1. Task is presented to the team
2. Oppenheimer creates a research plan
3. Physicists discuss and vote on approaches using majority voting
4. Code implementation agent tests the theory
5. Results are analyzed and fed back to the team
6. Process repeats for n loops with iterative refinement
"""
def __init__(
self,
name: str = "Bell Labs Research Team",
description: str = "A collaborative research environment simulating Bell Labs physicists",
max_loops: int = 1,
verbose: bool = True,
model_name: str = "gpt-4o-mini",
random_model_name: bool = False,
output_type: str = "str-all-except-first",
dynamic_context_window: bool = True,
**kwargs,
):
"""
Initialize the Bell Labs Research Swarm.
Args:
name: Name of the swarm
description: Description of the swarm's purpose
max_loops: Number of research iteration loops
verbose: Whether to enable verbose logging
model_name: Model to use for all agents
random_model_name: Whether to assign a randomly selected model to each agent
output_type: Desired format for the conversation history output
dynamic_context_window: Whether to auto-chunk the conversation history to fit the context window
**kwargs: Additional arguments passed to BaseSwarm
"""
self.name = name
self.description = description
self.max_loops = max_loops
self.verbose = verbose
self.model_name = model_name
self.kwargs = kwargs
self.random_model_name = random_model_name
self.output_type = output_type
self.dynamic_context_window = dynamic_context_window
self.conversation = Conversation(
dynamic_context_window=dynamic_context_window
)
# Create the physicist agents
self.agents = _create_physicist_agents(
model_name=model_name, random_model_name=random_model_name
)
# Set up specialized agents
self.oppenheimer = self._get_agent_by_name(
"J. Robert Oppenheimer"
)
self.code_implementer = self._get_agent_by_name(
"Code-Implementer"
)
self.physicists = [
agent
for agent in self.agents
if agent.agent_name != "J. Robert Oppenheimer"
and agent.agent_name != "Code-Implementer"
]
# # Find the code implementer agent
# code_implementer = self._get_agent_by_name("Code-Implementer")
# code_implementer.tools = [developer_worker_agent]
logger.info(
f"Bell Labs Research Team initialized with {len(self.agents)} agents"
)
def _get_agent_by_name(self, name: str) -> Optional[Agent]:
"""Get an agent by name."""
for agent in self.agents:
if agent.agent_name == name:
return agent
return None
def run(
self, task: str, img: Optional[str] = None
) -> str:
"""
Run the Bell Labs research simulation.
Args:
task: The research question or task to investigate
img: Optional image input forwarded to the planning agent
Returns:
The full conversation history, formatted as a string by history_output_formatter
"""
logger.info(f"Starting Bell Labs research on: {task}")
# Add initial task to conversation history
self.conversation.add(
"Research Coordinator", f"Initial Research Task: {task}"
)
# Oppenheimer
oppenheimer_plan = self.oppenheimer.run(
task=self.conversation.get_str(), img=img
)
self.conversation.add(
self.oppenheimer.agent_name,
f"Research Plan: {oppenheimer_plan}",
)
# Discussion
# Physicists
physicist_discussion = self._conduct_physicist_discussion(
task, self.conversation.get_str()
)
# Add to conversation history
self.conversation.add(
"Group Discussion", physicist_discussion
)
# Now implement the solution
implementation_results = self._implement_and_test_solution(
history=self.conversation.get_str()
)
# Add to conversation history
self.conversation.add(
self.code_implementer.agent_name, implementation_results
)
return history_output_formatter(
conversation=self.conversation, type="str"
)
def _create_research_plan(
self, task: str, loop_number: int
) -> str:
"""
Have Oppenheimer create a research plan.
Args:
task: Research task
loop_number: Current loop number
Returns:
Research plan from Oppenheimer
"""
prompt = f"""
Research Task: {task}
Loop Number: {loop_number + 1}
As J. Robert Oppenheimer, create a comprehensive research plan for this task.
Your plan should include:
1. Clear research objectives and hypotheses
2. Theoretical framework and approach
3. Specific research questions to investigate
4. Methodology for testing and validation
5. Expected outcomes and success criteria
6. Timeline and milestones
7. Resource requirements and team coordination
Provide a detailed, actionable plan that the research team can follow.
"""
plan = self.oppenheimer.run(prompt)
return plan
def _conduct_physicist_discussion(
self, task: str, history: str
) -> str:
"""
Conduct a natural discussion among physicists where they build on each other's ideas.
Args:
task: Research task
history: Conversation history including Oppenheimer's plan
Returns:
Results of the physicist discussion as a conversation transcript
"""
import random
# Shuffle the physicists to create random discussion order
discussion_order = self.physicists.copy()
random.shuffle(discussion_order)
discussion_transcript = []
current_context = (
f"{history}\n\nCurrent Research Task: {task}\n\n"
)
# Each physicist contributes to the discussion, building on previous contributions
for i, physicist in enumerate(discussion_order):
if i == 0:
# First physicist starts the discussion
discussion_prompt = f"""
{current_context}
As {physicist.agent_name}, you are starting the group discussion about this research plan.
Based on your expertise, provide your initial thoughts on:
1. What aspects of Oppenheimer's research plan do you find most promising?
2. What theoretical challenges or concerns do you see?
3. What specific approaches would you recommend based on your expertise?
4. What questions or clarifications do you have for the team?
Be specific and draw from your unique perspective and expertise. This will set the tone for the group discussion.
"""
else:
# Subsequent physicists build on the discussion
previous_contributions = "\n\n".join(
discussion_transcript
)
discussion_prompt = f"""
{current_context}
Previous Discussion:
{previous_contributions}
As {physicist.agent_name}, continue the group discussion by building on your colleagues' ideas.
Consider:
1. How do your colleagues' perspectives relate to your own areas of expertise?
2. What additional insights can you add to the discussion?
3. How can you address any concerns or questions raised by others?
4. What specific next steps would you recommend based on the discussion so far?
Engage directly with your colleagues' ideas and contribute your unique perspective to move the research forward.
"""
# Get the physicist's contribution
contribution = physicist.run(discussion_prompt)
# Add to transcript with clear attribution
discussion_transcript.append(
f"{physicist.agent_name}: {contribution}"
)
# Update context for next iteration
current_context = (
f"{history}\n\nCurrent Research Task: {task}\n\nGroup Discussion:\n"
+ "\n\n".join(discussion_transcript)
)
# Create a summary of the discussion
summary_prompt = f"""
Research Task: {task}
Complete Discussion Transcript:
{chr(10).join(discussion_transcript)}
As a research coordinator, provide a concise summary of the key points from this group discussion:
1. Main areas of agreement among the physicists
2. Key concerns or challenges identified
3. Specific recommendations made by the team
4. Next steps for moving forward with the research
Focus on actionable insights and clear next steps that the team can implement.
"""
# Use Oppenheimer to summarize the discussion
discussion_summary = self.oppenheimer.run(summary_prompt)
# Return the full discussion transcript with summary
full_discussion = f"Group Discussion Transcript:\n\n{chr(10).join(discussion_transcript)}\n\n---\nDiscussion Summary:\n{discussion_summary}"
return full_discussion
def _implement_and_test_solution(
self,
history: str,
) -> str:
"""
Implement and test the proposed solution.
Args:
history: Full conversation history, including the research plan and the group discussion
Returns:
Implementation and testing results from the code implementer agent
"""
implementation_prompt = f"""
{history}
As the Code Implementer, your task is to:
1. Implement a computational solution based on the research plan
2. Test the theoretical predictions through simulation or calculation
3. Analyze the results and provide quantitative assessment
4. Identify any discrepancies between theory and implementation
5. Suggest improvements or next steps
Provide:
- Clear description of your implementation approach
- Code or algorithm description
- Test results and analysis
- Comparison with theoretical predictions
- Recommendations for further investigation
Focus on practical implementation and quantitative results.
"""
implementation_results = self.code_implementer.run(
implementation_prompt
)
return implementation_results
def _analyze_results(
self, implementation_results: Dict[str, Any], loop_number: int
) -> str:
"""
Analyze the results and provide team review.
Args:
implementation_results: Results from implementation phase
loop_number: Current loop number
Returns:
Analysis and recommendations
"""
analysis_prompt = f"""
Implementation Results: {implementation_results}
Loop Number: {loop_number + 1}
As the research team, analyze these results and provide:
1. Assessment of whether the implementation supports the theoretical predictions
2. Identification of any unexpected findings or discrepancies
3. Evaluation of the methodology and approach
4. Recommendations for the next research iteration
5. Insights gained from this round of investigation
Consider:
- What worked well in this approach?
- What challenges or limitations were encountered?
- How can the research be improved in the next iteration?
- What new questions or directions have emerged?
Provide a comprehensive analysis that will guide the next research phase.
"""
# Use team discussion for results analysis
analysis_results = self._conduct_team_analysis(
analysis_prompt
)
return analysis_results
def _conduct_team_analysis(self, analysis_prompt: str) -> str:
"""
Conduct a team analysis discussion using the same approach as physicist discussion.
Args:
analysis_prompt: The prompt for the analysis
Returns:
Results of the team analysis discussion
"""
import random
# Shuffle the agents to create random discussion order
discussion_order = self.agents.copy()
random.shuffle(discussion_order)
discussion_transcript = []
current_context = analysis_prompt
# Each agent contributes to the analysis, building on previous contributions
for i, agent in enumerate(discussion_order):
if i == 0:
# First agent starts the analysis
agent_prompt = f"""
{current_context}
As {agent.agent_name}, you are starting the team analysis discussion.
Based on your expertise and role, provide your initial analysis of the implementation results.
Focus on what you can contribute from your unique perspective.
"""
else:
# Subsequent agents build on the analysis
previous_contributions = "\n\n".join(
discussion_transcript
)
agent_prompt = f"""
{current_context}
Previous Analysis:
{previous_contributions}
As {agent.agent_name}, continue the team analysis by building on your colleagues' insights.
Consider:
1. How do your colleagues' perspectives relate to your expertise?
2. What additional insights can you add to the analysis?
3. How can you address any concerns or questions raised by others?
4. What specific recommendations would you make based on the analysis so far?
Engage directly with your colleagues' ideas and contribute your unique perspective.
"""
# Get the agent's contribution
contribution = agent.run(agent_prompt)
# Add to transcript with clear attribution
discussion_transcript.append(
f"{agent.agent_name}: {contribution}"
)
# Update context for next iteration
current_context = (
f"{analysis_prompt}\n\nTeam Analysis:\n"
+ "\n\n".join(discussion_transcript)
)
# Create a summary of the analysis
summary_prompt = f"""
Analysis Prompt: {analysis_prompt}
Complete Analysis Transcript:
{chr(10).join(discussion_transcript)}
As a research coordinator, provide a concise summary of the key points from this team analysis:
1. Main findings and insights from the team
2. Key recommendations made
3. Areas of agreement and disagreement
4. Next steps for the research
Focus on actionable insights and clear next steps.
"""
# Use Oppenheimer to summarize the analysis
analysis_summary = self.oppenheimer.run(summary_prompt)
# Return the full analysis transcript with summary
full_analysis = f"Team Analysis Transcript:\n\n{chr(10).join(discussion_transcript)}\n\n---\nAnalysis Summary:\n{analysis_summary}"
return full_analysis
def _refine_task_for_next_iteration(
self, current_task: str, loop_results: Dict[str, Any]
) -> str:
"""
Refine the task for the next research iteration.
Args:
current_task: Current research task
loop_results: Results from the current loop
Returns:
Refined task for next iteration
"""
refinement_prompt = f"""
Current Research Task: {current_task}
Results from Current Loop: {loop_results}
Based on the findings and analysis from this research loop, refine the research task for the next iteration.
Consider:
- What new questions have emerged?
- What aspects need deeper investigation?
- What alternative approaches should be explored?
- What specific hypotheses should be tested?
Provide a refined, focused research question that builds upon the current findings
and addresses the most important next steps identified by the team.
"""
# Use Oppenheimer to refine the task
refined_task = self.oppenheimer.run(refinement_prompt)
# Add task refinement to conversation history
self.conversation.add(
"J. Robert Oppenheimer",
f"Task Refined for Next Iteration: {refined_task}",
)
return refined_task
def _generate_final_conclusion(
self, research_results: Dict[str, Any]
) -> str:
"""
Generate a final conclusion summarizing all research findings.
Args:
research_results: Complete research results from all loops
Returns:
Final research conclusion
"""
conclusion_prompt = f"""
Complete Research Results: {research_results}
As J. Robert Oppenheimer, provide a comprehensive final conclusion for this research project.
Your conclusion should:
1. Summarize the key findings from all research loops
2. Identify the most significant discoveries or insights
3. Evaluate the success of the research approach
4. Highlight any limitations or areas for future investigation
5. Provide a clear statement of what was accomplished
6. Suggest next steps for continued research
Synthesize the work of the entire team and provide a coherent narrative
of the research journey and its outcomes.
"""
final_conclusion = self.oppenheimer.run(conclusion_prompt)
return final_conclusion
# Example usage function
def run_bell_labs_research(
research_question: str,
max_loops: int = 3,
model_name: str = "gpt-4o-mini",
verbose: bool = True,
) -> str:
"""
Run a Bell Labs research simulation.
Args:
research_question: The research question to investigate
max_loops: Number of research iteration loops
model_name: Model to use for all agents
verbose: Whether to enable verbose logging
Returns:
The formatted research conversation, including the plan, group discussion, and implementation results
"""
bell_labs = BellLabsSwarm(
max_loops=max_loops, verbose=verbose, model_name=model_name
)
results = bell_labs.run(research_question)
return results
# if __name__ == "__main__":
# # Example research question
# research_question = """
# Investigate the feasibility of quantum computing for solving complex optimization problems.
# Consider both theoretical foundations and practical implementation challenges.
# """
# print("Starting Bell Labs Research Simulation...")
# print(f"Research Question: {research_question}")
# print("-" * 80)
# results = run_bell_labs_research(
# research_question=research_question,
# max_loops=2,
# verbose=True
# )
# print("\n" + "=" * 80)
# print("RESEARCH SIMULATION COMPLETED")
# print("=" * 80)
# print(f"\nResearch Conversation:\n{results}")
# print("The returned value is the formatted conversation history as a string.")
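# A minimal usage sketch for the BellLabsSwarm defined above (assumes it runs in
# the same module, or that the class is imported from wherever this file lives);
# the research question is purely illustrative.
swarm = BellLabsSwarm(
    max_loops=1,
    model_name="gpt-4o-mini",
    verbose=True,
)

# run() returns the formatted conversation history as a string
report = swarm.run(
    task=(
        "Assess whether superconducting qubits or trapped ions are the more "
        "practical platform for a 100-qubit optimization experiment."
    )
)
print(report)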

@ -437,6 +437,7 @@ class Agent:
tool_retry_attempts: int = 3,
reasoning_prompt_on: bool = True,
dynamic_context_window: bool = True,
show_tool_execution_output: bool = True,
*args,
**kwargs,
):
@ -578,15 +579,17 @@ class Agent:
self.tool_retry_attempts = tool_retry_attempts
self.reasoning_prompt_on = reasoning_prompt_on
self.dynamic_context_window = dynamic_context_window
# Initialize the feedback
self.feedback = []
self.show_tool_execution_output = show_tool_execution_output
# self.init_handling()
self.setup_config()
# Initialize the short memory
self.short_memory = self.short_memory_init()
# Initialize the tools
self.tool_struct = self.setup_tools()
if exists(self.docs_folder):
self.get_docs_from_doc_folders()
@ -610,8 +613,6 @@ class Agent:
if self.react_on is True:
self.system_prompt += REACT_SYS_PROMPT
# Run sequential operations after all concurrent tasks are done
# self.agent_output = self.agent_output_model()
if self.autosave is True:
log_agent_data(self.to_dict())
@ -640,13 +641,14 @@ class Agent:
verbose=self.verbose,
)
def tool_handling(self):
self.tool_struct = BaseTool(
def setup_tools(self):
return BaseTool(
tools=self.tools,
verbose=self.verbose,
)
def tool_handling(self):
# Convert all the tools into a list of dictionaries
self.tools_list_dictionary = (
convert_multiple_functions_to_openai_function_schema(
@ -693,26 +695,6 @@ class Agent:
return memory
def agent_output_model(self):
# Many steps
id = agent_id()
return ManySteps(
agent_id=id,
agent_name=self.agent_name,
# run_id=run_id,
task="",
max_loops=self.max_loops,
steps=self.short_memory.to_dict(),
full_history=self.short_memory.get_str(),
total_tokens=count_tokens(
text=self.short_memory.get_str()
),
stopping_token=self.stopping_token,
interactive=self.interactive,
dynamic_temperature_enabled=self.dynamic_temperature_enabled,
)
def llm_handling(self, *args, **kwargs):
"""Initialize the LiteLLM instance with combined configuration from all sources.
@ -729,9 +711,6 @@ class Agent:
Returns:
LiteLLM: The initialized LiteLLM instance
"""
# Use cached instance if available
if self.llm is not None:
return self.llm
if self.model_name is None:
self.model_name = "gpt-4o-mini"
@ -754,6 +733,7 @@ class Agent:
"max_tokens": self.max_tokens,
"system_prompt": self.system_prompt,
"stream": self.streaming_on,
"top_p": self.top_p,
}
# Initialize tools_list_dictionary, if applicable
@ -815,7 +795,7 @@ class Agent:
return self.llm
except AgentLLMInitializationError as e:
logger.error(
f"AgentLLMInitializationError: Agent Name: {self.agent_name} Error in llm_handling: {e} Your current configuration is not supported. Please check the configuration and parameters."
f"AgentLLMInitializationError: Agent Name: {self.agent_name} Error in llm_handling: {e} Your current configuration is not supported. Please check the configuration and parameters. Traceback: {traceback.format_exc()}"
)
return None
@ -878,6 +858,9 @@ class Agent:
if self.preset_stopping_token is not None:
self.stopping_token = "<DONE>"
# Initialize the feedback
self.feedback = []
def check_model_supports_utilities(
self, img: Optional[str] = None
) -> bool:
@ -890,7 +873,6 @@ class Agent:
Returns:
bool: True if model supports vision and image is provided, False otherwise.
"""
# Only check vision support if an image is provided
if img is not None:
@ -1213,7 +1195,7 @@ class Agent:
self.save()
logger.error(
f"Attempt {attempt+1}/{self.retry_attempts}: Error generating response in loop {loop_count} for agent '{self.agent_name}': {str(e)} | "
f"Attempt {attempt+1}/{self.retry_attempts}: Error generating response in loop {loop_count} for agent '{self.agent_name}': {str(e)} | Traceback: {traceback.format_exc()}"
)
attempt += 1
@ -1291,7 +1273,7 @@ class Agent:
except KeyboardInterrupt as error:
self._handle_run_error(error)
def __handle_run_error(self, error: any):
def _handle_run_error(self, error: any):
if self.autosave is True:
self.save()
log_agent_data(self.to_dict())
@ -1313,11 +1295,6 @@ class Agent:
raise error
def _handle_run_error(self, error: any):
# Handle error directly instead of using daemon thread
# to ensure proper exception propagation
self.__handle_run_error(error)
async def arun(
self,
task: Optional[str] = None,
@ -1514,26 +1491,6 @@ class Agent:
except Exception as error:
logger.info(f"Error running bulk run: {error}", "red")
async def arun_batched(
self,
tasks: List[str],
*args,
**kwargs,
):
"""Asynchronously runs a batch of tasks."""
try:
# Create a list of coroutines for each task
coroutines = [
self.arun(task=task, *args, **kwargs)
for task in tasks
]
# Use asyncio.gather to run them concurrently
results = await asyncio.gather(*coroutines)
return results
except Exception as error:
logger.error(f"Error running batched tasks: {error}")
raise
def reliability_check(self):
if self.system_prompt is None:
@ -1568,7 +1525,7 @@ class Agent:
try:
if self.max_tokens > get_max_tokens(self.model_name):
logger.warning(
f"Max tokens is set to {self.max_tokens}, but the model '{self.model_name}' only supports {get_max_tokens(self.model_name)} tokens. Please set max tokens to {get_max_tokens(self.model_name)} or less."
f"Max tokens is set to {self.max_tokens}, but the model '{self.model_name}' may or may not support {get_max_tokens(self.model_name)} tokens. Please set max tokens to {get_max_tokens(self.model_name)} or less."
)
except Exception:
@ -1576,7 +1533,7 @@ class Agent:
if self.model_name not in model_list:
logger.warning(
f"The model '{self.model_name}' is not supported. Please use a supported model, or override the model name with the 'llm' parameter, which should be a class with a 'run(task: str)' method or a '__call__' method."
f"The model '{self.model_name}' may not be supported. Please use a supported model, or override the model name with the 'llm' parameter, which should be a class with a 'run(task: str)' method or a '__call__' method."
)
def save(self, file_path: str = None) -> None:
@ -1822,14 +1779,6 @@ class Agent:
) as executor:
self.executor = executor
# # Reinitialize tool structure if needed
# if hasattr(self, 'tools') and (self.tools or getattr(self, 'list_base_models', None)):
# self.tool_struct = BaseTool(
# tools=self.tools,
# base_models=getattr(self, 'list_base_models', None),
# tool_system_prompt=self.tool_system_prompt
# )
except Exception as e:
logger.error(f"Error reinitializing components: {e}")
raise
@ -2640,19 +2589,20 @@ class Agent:
self.llm.stream = original_stream
return streaming_response
else:
# Non-streaming call
args = {
"task": task,
}
if img is not None:
out = self.llm.run(
task=task, img=img, *args, **kwargs
)
else:
out = self.llm.run(task=task, *args, **kwargs)
args["img"] = img
out = self.llm.run(**args, **kwargs)
return out
except AgentLLMError as e:
logger.error(
f"Error calling LLM: {e}. Task: {task}, Args: {args}, Kwargs: {kwargs}"
f"Error calling LLM: {e}. Task: {task}, Args: {args}, Kwargs: {kwargs} Traceback: {traceback.format_exc()}"
)
raise e
@ -2743,6 +2693,30 @@ class Agent:
)
raise KeyboardInterrupt
def run_batched(
self,
tasks: List[str],
imgs: List[str] = None,
*args,
**kwargs,
):
"""
Run a batch of tasks with this agent, one task at a time.
Args:
tasks (List[str]): List of tasks to run.
imgs (List[str], optional): List of image paths passed along with every task. Defaults to None.
*args: Additional positional arguments to be passed to the execution method.
**kwargs: Additional keyword arguments to be passed to the execution method.
Returns:
List[Any]: List of results from each task execution.
"""
return [
self.run(task=task, imgs=imgs, *args, **kwargs)
for task in tasks
]
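# Usage sketch for the run_batched helper added above. As written it walks the
# task list one item at a time via a list comprehension; the agent settings
# below are illustrative.
batch_agent = Agent(
    agent_name="Batch-Worker",
    model_name="gpt-4o-mini",
    max_loops=1,
)
summaries = batch_agent.run_batched(
    tasks=[
        "Summarize the purpose of dynamic context windows in one sentence.",
        "List two failure modes of tool-calling agents.",
    ]
)
for summary in summaries:
    print(summary)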
def handle_artifacts(
self, text: str, file_output_path: str, file_extension: str
) -> None:
@ -3081,10 +3055,17 @@ class Agent:
)
if self.print_on is True:
self.pretty_print(
f"Tool Executed Successfully [{time.strftime('%H:%M:%S')}]",
loop_count,
)
if self.show_tool_execution_output is True:
self.pretty_print(
f"Tool Executed Successfully [{time.strftime('%H:%M:%S')}] \n\nTool Output: {format_data_structure(output)}",
loop_count,
)
else:
self.pretty_print(
f"Tool Executed Successfully [{time.strftime('%H:%M:%S')}]",
loop_count,
)
# Now run the LLM again without tools - create a temporary LLM instance
# instead of modifying the cached one
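# Illustrative sketch of the new show_tool_execution_output flag together with
# the top_p parameter now forwarded to the LLM; the weather tool is a made-up
# example function.
def get_weather(city: str) -> str:
    """Return a canned weather report for the given city."""
    return f"The weather in {city} is sunny."

tool_agent = Agent(
    agent_name="Tool-Demo-Agent",
    model_name="gpt-4o-mini",
    max_loops=1,
    tools=[get_weather],
    top_p=0.9,
    show_tool_execution_output=True,  # also print the tool's return value after execution
)
tool_agent.run("What is the weather in Paris?")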

@ -1790,21 +1790,63 @@ class Conversation:
pass
self.conversation_history = []
def dynamic_auto_chunking(self):
def _dynamic_auto_chunking_worker(self):
"""
Dynamically chunk the conversation history to fit within the context length.
Returns:
str: The chunked conversation history as a string that fits within context_length tokens.
"""
all_tokens = self._return_history_as_string_worker()
total_tokens = count_tokens(
all_tokens, self.tokenizer_model_name
)
if total_tokens > self.context_length:
# Get the difference between the count_tokens and the context_length
difference = total_tokens - self.context_length
if total_tokens <= self.context_length:
return all_tokens
# We need to remove characters from the beginning until we're under the limit
# Start by removing a percentage of characters and adjust iteratively
target_tokens = self.context_length
current_string = all_tokens
# Binary search approach to find the right cutoff point
left, right = 0, len(all_tokens)
while left < right:
mid = (left + right) // 2
test_string = all_tokens[mid:]
if not test_string:
break
test_tokens = count_tokens(
test_string, self.tokenizer_model_name
)
if test_tokens <= target_tokens:
# We can remove more from the beginning
right = mid
current_string = test_string
else:
# We need to keep more from the beginning
left = mid + 1
return current_string
# Slice the first difference number of messages and contents from the beginning of the conversation history
new_history = all_tokens[difference:]
return new_history
def dynamic_auto_chunking(self):
"""
Dynamically chunk the conversation history to fit within the context length.
Returns:
str: The chunked conversation history as a string that fits within context_length tokens.
"""
try:
return self._dynamic_auto_chunking_worker()
except Exception as e:
logger.error(f"Dynamic auto chunking failed: {e}")
return self._return_history_as_string_worker()
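# A self-contained sketch of the binary-search trimming used by
# _dynamic_auto_chunking_worker above: drop the smallest possible prefix so the
# remaining text fits a token budget. The 4-characters-per-token heuristic is a
# stand-in for count_tokens, for illustration only.
def rough_token_count(text: str) -> int:
    return max(1, len(text) // 4)

def trim_to_budget(text: str, budget: int) -> str:
    if rough_token_count(text) <= budget:
        return text
    left, right = 0, len(text)
    kept = text
    while left < right:
        mid = (left + right) // 2
        candidate = text[mid:]
        if rough_token_count(candidate) <= budget:
            right = mid      # safe to drop more from the front
            kept = candidate
        else:
            left = mid + 1   # keep more of the front
    return kept

print(len(trim_to_budget("x" * 10_000, budget=100)))  # roughly 400 characters remain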
# Example usage

@ -1,27 +1,30 @@
from swarms.utils.check_all_model_max_tokens import (
check_all_model_max_tokens,
)
from swarms.utils.data_to_text import (
csv_to_text,
data_to_text,
json_to_text,
txt_to_text,
)
from swarms.utils.dynamic_context_window import (
dynamic_auto_chunking,
)
from swarms.utils.file_processing import (
create_file_in_folder,
load_json,
sanitize_file_path,
zip_workspace,
create_file_in_folder,
zip_folders,
zip_workspace,
)
from swarms.utils.parse_code import extract_code_from_markdown
from swarms.utils.pdf_to_text import pdf_to_text
from swarms.utils.try_except_wrapper import try_except_wrapper
from swarms.utils.litellm_tokenizer import count_tokens
from swarms.utils.output_types import HistoryOutputType
from swarms.utils.history_output_formatter import (
history_output_formatter,
)
from swarms.utils.check_all_model_max_tokens import (
check_all_model_max_tokens,
)
from swarms.utils.litellm_tokenizer import count_tokens
from swarms.utils.output_types import HistoryOutputType
from swarms.utils.parse_code import extract_code_from_markdown
from swarms.utils.pdf_to_text import pdf_to_text
from swarms.utils.try_except_wrapper import try_except_wrapper
__all__ = [
@ -41,4 +44,5 @@ __all__ = [
"HistoryOutputType",
"history_output_formatter",
"check_all_model_max_tokens",
"dynamic_auto_chunking",
]

@ -0,0 +1,85 @@
import traceback
from loguru import logger
from swarms.utils.litellm_tokenizer import count_tokens
from typing import Optional
def dynamic_auto_chunking_(
content: str,
context_length: Optional[int] = 8192,
tokenizer_model_name: Optional[str] = "gpt-4.1",
):
"""
Dynamically chunk the conversation history to fit within the context length.
Args:
content (str): The conversation history as a string.
context_length (int): The maximum number of tokens allowed.
tokenizer_model_name (str): The name of the tokenizer model to use.
Returns:
str: The chunked conversation history as a string that fits within context_length tokens.
"""
total_tokens = count_tokens(
text=content, model=tokenizer_model_name
)
if total_tokens <= context_length:
return content
# We need to remove characters from the beginning until we're under the limit
# Start by removing a percentage of characters and adjust iteratively
target_tokens = context_length
current_string = content
# Binary search approach to find the right cutoff point
left, right = 0, len(content)
while left < right:
mid = (left + right) // 2
test_string = content[mid:]
if not test_string:
break
test_tokens = count_tokens(
text=test_string, model=tokenizer_model_name
)
if test_tokens <= target_tokens:
# We can remove more from the beginning
right = mid
current_string = test_string
else:
# We need to keep more from the beginning
left = mid + 1
return current_string
def dynamic_auto_chunking(
content: str,
context_length: Optional[int] = 8192,
tokenizer_model_name: Optional[str] = "gpt-4.1",
):
"""
Dynamically chunk the conversation history to fit within the context length.
Args:
content (str): The conversation history as a string.
context_length (int): The maximum number of tokens allowed.
tokenizer_model_name (str): The name of the tokenizer model to use.
"""
try:
return dynamic_auto_chunking_(
content=content,
context_length=context_length,
tokenizer_model_name=tokenizer_model_name,
)
except Exception as e:
logger.error(
f"Dynamic auto chunking failed: {e} Traceback: {traceback.format_exc()}"
)
return content
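# Usage sketch for the swarms.utils.dynamic_context_window helper defined above;
# the sample text and limits are illustrative only.
from swarms.utils.dynamic_context_window import dynamic_auto_chunking

history = "user: hello\nassistant: hi there\n" * 5000  # deliberately oversized
trimmed = dynamic_auto_chunking(
    content=history,
    context_length=8192,
    tokenizer_model_name="gpt-4.1",
)
# Older text is dropped from the front until the remainder fits the budget.
print(len(history), len(trimmed))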

@ -176,6 +176,12 @@ class LiteLLM:
litellm.drop_params = True
# Add system prompt if present
if self.system_prompt is not None:
self.messages.append(
{"role": "system", "content": self.system_prompt}
)
# Store additional args and kwargs for use in run method
self.init_args = args
self.init_kwargs = kwargs
@ -231,8 +237,8 @@ class LiteLLM:
def _prepare_messages(
self,
task: str,
img: str = None,
task: Optional[str] = None,
img: Optional[str] = None,
):
"""
Prepare the messages for the given task.
@ -245,24 +251,14 @@ class LiteLLM:
"""
self.check_if_model_supports_vision(img=img)
# Initialize messages
messages = []
# Add system prompt if present
if self.system_prompt is not None:
messages.append(
{"role": "system", "content": self.system_prompt}
)
# Handle vision case
if img is not None:
messages = self.vision_processing(
task=task, image=img, messages=messages
)
else:
messages.append({"role": "user", "content": task})
self.vision_processing(task=task, image=img)
return messages
if task is not None:
self.messages.append({"role": "user", "content": task})
return self.messages
def anthropic_vision_processing(
self, task: str, image: str, messages: list
@ -546,12 +542,18 @@ class LiteLLM:
5. Default parameters
"""
try:
messages = self._prepare_messages(task=task, img=img)
self.messages.append({"role": "user", "content": task})
if img is not None:
self.messages = self.vision_processing(
task=task, image=img
)
# Base completion parameters
completion_params = {
"model": self.model_name,
"messages": messages,
"messages": self.messages,
"stream": self.stream,
"max_tokens": self.max_tokens,
"caching": self.caching,

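# A minimal sketch of the revised message handling in the LiteLLM wrapper above:
# the system prompt is appended to self.messages at construction time and each
# run() call appends the new user turn, so history accumulates on the instance.
# Constructor arguments shown here are illustrative.
llm = LiteLLM(
    model_name="gpt-4o-mini",
    system_prompt="You are a terse assistant.",
)
print(llm.messages)       # begins with the system message added in __init__
llm.run(task="Name one prime number.")
print(len(llm.messages))  # the user turn appended by run() stays on the instance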