docs, contributing.md, multi tool mcp execution, and more

pull/889/merge
Kye Gomez 7 days ago
parent a9ae4cd222
commit fcf52332d1

@ -1,5 +1,15 @@
# Contribution Guidelines
<div align="center">
<a href="https://swarms.world">
<img src="https://github.com/kyegomez/swarms/blob/master/images/swarmslogobanner.png" style="margin: 15px; max-width: 500px" width="50%" alt="Swarms Logo">
</a>
</div>
<p align="center">
<em>The Enterprise-Grade Production-Ready Multi-Agent Orchestration Framework</em>
</p>
---
## Table of Contents
@ -7,10 +17,12 @@
- [Project Overview](#project-overview)
- [Getting Started](#getting-started)
- [Installation](#installation)
- [Environment Configuration](#environment-configuration)
- [Project Structure](#project-structure)
- [How to Contribute](#how-to-contribute)
- [Reporting Issues](#reporting-issues)
- [Submitting Pull Requests](#submitting-pull-requests)
- [Good First Issues](#good-first-issues)
- [Coding Standards](#coding-standards)
- [Type Annotations](#type-annotations)
- [Docstrings and Documentation](#docstrings-and-documentation)
@ -19,7 +31,13 @@
- [Areas Needing Contributions](#areas-needing-contributions)
- [Writing Tests](#writing-tests)
- [Improving Documentation](#improving-documentation)
- [Creating Training Scripts](#creating-training-scripts)
- [Adding New Swarm Architectures](#adding-new-swarm-architectures)
- [Enhancing Agent Capabilities](#enhancing-agent-capabilities)
- [Removing Defunct Code](#removing-defunct-code)
- [Development Resources](#development-resources)
- [Documentation](#documentation)
- [Examples and Tutorials](#examples-and-tutorials)
- [API Reference](#api-reference)
- [Community and Support](#community-and-support)
- [License](#license)
@ -27,16 +45,24 @@
## Project Overview
**swarms** is a library focused on making it simple to orchestrate agents to automate real-world activities. The goal is to automate the world economy with these swarms of agents.
**Swarms** is an enterprise-grade, production-ready multi-agent orchestration framework that makes it simple to coordinate agents to automate real-world activities. The goal is to automate the world economy with these swarms of agents.
We need your help to:
### Key Features
- **Write Tests**: Ensure the reliability and correctness of the codebase.
- **Improve Documentation**: Maintain clear and comprehensive documentation.
- **Add New Orchestration Methods**: Add multi-agent orchestration methods
- **Removing Defunct Code**: Removing bad code
| Category | Features | Benefits |
|----------|----------|-----------|
| 🏢 Enterprise Architecture | • Production-Ready Infrastructure<br>• High Reliability Systems<br>• Modular Design<br>• Comprehensive Logging | • Reduced downtime<br>• Easier maintenance<br>• Better debugging<br>• Enhanced monitoring |
| 🤖 Agent Orchestration | • Hierarchical Swarms<br>• Parallel Processing<br>• Sequential Workflows<br>• Graph-based Workflows<br>• Dynamic Agent Rearrangement | • Complex task handling<br>• Improved performance<br>• Flexible workflows<br>• Optimized execution |
| 🔄 Integration Capabilities | • Multi-Model Support<br>• Custom Agent Creation<br>• Extensive Tool Library<br>• Multiple Memory Systems | • Provider flexibility<br>• Custom solutions<br>• Extended functionality<br>• Enhanced memory management |
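To make the orchestration claim concrete, here is a minimal sketch of a sequential two-agent pipeline. It assumes `SequentialWorkflow` is exported at the package root and accepts an `agents` list, as described in the architecture docs linked later in this guide:

```python
from swarms import Agent, SequentialWorkflow

# Two single-purpose agents; any supported model name can be used.
researcher = Agent(
    agent_name="Researcher",
    system_prompt="Research the given topic and produce concise notes.",
    model_name="gpt-4o-mini",
    max_loops=1,
)
writer = Agent(
    agent_name="Writer",
    system_prompt="Turn research notes into a short, readable summary.",
    model_name="gpt-4o-mini",
    max_loops=1,
)

# Chain the agents so each one receives the previous agent's output.
workflow = SequentialWorkflow(agents=[researcher, writer], max_loops=1)
print(workflow.run("Summarize recent trends in multi-agent systems."))
```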
### We Need Your Help To:
- **Write Tests**: Ensure the reliability and correctness of the codebase
- **Improve Documentation**: Maintain clear and comprehensive documentation
- **Add New Orchestration Methods**: Add multi-agent orchestration methods
- **Remove Defunct Code**: Clean up and remove bad code
- **Enhance Agent Capabilities**: Improve existing agents and add new ones
- **Optimize Performance**: Improve speed and efficiency of swarm operations
Your contributions will help us push the boundaries of AI and make this library a valuable resource for the community.
@ -46,24 +72,65 @@ Your contributions will help us push the boundaries of AI and make this library
### Installation
You can install swarms using `pip`:
#### Using pip
```bash
pip3 install -U swarms
```
#### Using uv (Recommended)
[uv](https://github.com/astral-sh/uv) is a fast Python package installer and resolver, written in Rust.
```bash
pip3 install swarms
# Install uv
curl -LsSf https://astral.sh/uv/install.sh | sh
# Install swarms using uv
uv pip install swarms
```
Alternatively, you can clone the repository:
#### Using poetry
```bash
# Install poetry if you haven't already
curl -sSL https://install.python-poetry.org | python3 -
# Add swarms to your project
poetry add swarms
```
#### From source
```bash
git clone https://github.com/kyegomez/swarms
# Clone the repository
git clone https://github.com/kyegomez/swarms.git
cd swarms
# Install with pip
pip install -e .
```
### Environment Configuration
Create a `.env` file in your project root with the following variables:
```bash
OPENAI_API_KEY=""
WORKSPACE_DIR="agent_workspace"
ANTHROPIC_API_KEY=""
GROQ_API_KEY=""
```
- [Learn more about environment configuration here](https://docs.swarms.world/en/latest/swarms/install/env/)
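As a quick sketch of how these variables are consumed in a script (this assumes the optional `python-dotenv` package; the framework itself reads provider keys straight from the environment):

```python
import os

from dotenv import load_dotenv  # pip install python-dotenv (assumed helper)

# Load variables from the .env file in the project root.
load_dotenv()

openai_key = os.getenv("OPENAI_API_KEY")
workspace_dir = os.getenv("WORKSPACE_DIR", "agent_workspace")

if not openai_key:
    raise RuntimeError("OPENAI_API_KEY is not set; add it to your .env file.")
```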
### Project Structure
- **`swarms/`**: Contains all the source code for the library.
- **`examples/`**: Includes example scripts and notebooks demonstrating how to use the library.
- **`tests/`**: (To be created) Will contain unit tests for the library.
- **`docs/`**: (To be maintained) Contains documentation files.
- **`swarms/`**: Contains all the source code for the library
- **`agents/`**: Agent implementations and base classes
- **`structs/`**: Swarm orchestration structures (SequentialWorkflow, AgentRearrange, etc.)
- **`tools/`**: Tool implementations and base classes
- **`prompts/`**: System prompts and prompt templates
- **`utils/`**: Utility functions and helpers
- **`examples/`**: Includes example scripts and notebooks demonstrating how to use the library
- **`tests/`**: Unit tests for the library
- **`docs/`**: Documentation files and guides
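The layout mirrors the import paths. A small illustration (exact exports may vary between versions):

```python
from swarms import Agent                                     # core agent class
from swarms.structs import SequentialWorkflow                # orchestration structures
from swarms.tools.mcp_client_call import get_mcp_tools_sync  # tool/MCP helpers
```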
---
@ -79,6 +146,10 @@ If you find any bugs, inconsistencies, or have suggestions for enhancements, ple
- **Description**: Detailed description, steps to reproduce, expected behavior, and any relevant logs or screenshots.
3. **Label Appropriately**: Use labels to categorize the issue (e.g., bug, enhancement, documentation).
**Issue Templates**: Use our issue templates for bug reports and feature requests:
- [Bug Report](https://github.com/kyegomez/swarms/issues/new?template=bug_report.md)
- [Feature Request](https://github.com/kyegomez/swarms/issues/new?template=feature_request.md)
### Submitting Pull Requests
We welcome pull requests (PRs) for bug fixes, improvements, and new features. Please follow these guidelines:
@ -88,6 +159,7 @@ We welcome pull requests (PRs) for bug fixes, improvements, and new features. Pl
```bash
git clone https://github.com/kyegomez/swarms.git
cd swarms
```
3. **Create a New Branch**: Use a descriptive branch name.
@ -121,6 +193,13 @@ We welcome pull requests (PRs) for bug fixes, improvements, and new features. Pl
**Note**: It's recommended to create small and focused PRs for easier review and faster integration.
### Good First Issues
The easiest way to contribute is to pick any issue with the `good first issue` tag 💪. These are specifically designed for new contributors:
- [Good First Issues](https://github.com/kyegomez/swarms/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22)
- [Contributing Board](https://github.com/users/kyegomez/projects/1) - Participate in Roadmap discussions!
---
## Coding Standards
@ -204,6 +283,7 @@ We have several areas where contributions are particularly welcome.
- Write unit tests for existing code in `swarms/`.
- Identify edge cases and potential failure points.
- Ensure tests are repeatable and independent.
- Add integration tests for swarm orchestration methods (see the sketch below).
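For new contributors, here is a sketch of what a unit test plus a gated integration test might look like; the file name and assertions are illustrative, not existing tests:

```python
# tests/test_agent_smoke.py (hypothetical file)
import os

import pytest

from swarms import Agent


def test_agent_initializes_with_required_fields():
    """Unit test: constructing an Agent should not require network access."""
    agent = Agent(
        agent_name="Test-Agent",
        system_prompt="You are a test agent.",
        model_name="gpt-4o-mini",
        max_loops=1,
    )
    assert agent.agent_name == "Test-Agent"


@pytest.mark.skipif(
    os.getenv("OPENAI_API_KEY") is None,
    reason="integration test; requires a live API key",
)
def test_agent_run_returns_output():
    """Integration test: a single loop should produce some output."""
    agent = Agent(
        agent_name="Test-Agent",
        system_prompt="Reply with one short sentence.",
        model_name="gpt-4o-mini",
        max_loops=1,
    )
    assert agent.run("Say hello.") is not None
```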
### Improving Documentation
@ -212,27 +292,113 @@ We have several areas where contributions are particularly welcome.
- Update docstrings to reflect any changes.
- Add examples and tutorials in the `examples/` directory.
- Improve or expand the content in the `docs/` directory.
- Create video tutorials and walkthroughs.
### Adding New Swarm Architectures
- **Goal**: Provide new multi-agent orchestration methods (a minimal sketch follows the list below).
- **Current Architectures**:
- [SequentialWorkflow](https://docs.swarms.world/en/latest/swarms/structs/sequential_workflow/)
- [AgentRearrange](https://docs.swarms.world/en/latest/swarms/structs/agent_rearrange/)
- [MixtureOfAgents](https://docs.swarms.world/en/latest/swarms/structs/moa/)
- [SpreadSheetSwarm](https://docs.swarms.world/en/latest/swarms/structs/spreadsheet_swarm/)
- [ForestSwarm](https://docs.swarms.world/en/latest/swarms/structs/forest_swarm/)
- [GraphWorkflow](https://docs.swarms.world/en/latest/swarms/structs/graph_swarm/)
- [GroupChat](https://docs.swarms.world/en/latest/swarms/structs/group_chat/)
- [SwarmRouter](https://docs.swarms.world/en/latest/swarms/structs/swarm_router/)
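To show the shape of a contribution, here is a minimal, framework-agnostic sketch of a round-robin orchestrator built on the public `Agent` API; the class name and looping policy are illustrative, not an existing swarms structure:

```python
from typing import List

from swarms import Agent


class RoundRobinSwarm:
    """Illustrative orchestrator: feeds the running context to each agent in turn."""

    def __init__(self, agents: List[Agent], rounds: int = 1):
        self.agents = agents
        self.rounds = rounds

    def run(self, task: str) -> str:
        context = task
        for _ in range(self.rounds):
            for agent in self.agents:
                # Each agent sees the original task plus the latest output.
                context = agent.run(
                    f"Task: {task}\n\nPrevious output:\n{context}"
                )
        return context
```

A real contribution would live in `swarms/structs/`, follow the existing structures' interfaces, and ship with tests and documentation.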
### Enhancing Agent Capabilities
- **Goal**: Improve existing agents and add new specialized agents (see the sketch after this list).
- **Areas of Focus**:
- Financial analysis agents
- Medical diagnosis agents
- Code generation and review agents
- Research and analysis agents
- Creative content generation agents
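As an example of the kind of agent we are after, here is a small code-review agent assembled from the standard `Agent` constructor; the name and system prompt are illustrative:

```python
from swarms import Agent

code_review_agent = Agent(
    agent_name="Code-Review-Agent",
    agent_description="Reviews Python code for bugs, style, and security issues",
    system_prompt=(
        "You are an expert Python reviewer. For each snippet you receive, "
        "report correctness bugs first, then style issues, then security "
        "concerns, each with a one-line suggested fix."
    ),
    model_name="gpt-4o-mini",
    max_loops=1,
)

print(code_review_agent.run("def add(a, b): return a - b"))
```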
### Removing Defunct Code
- **Goal**: Clean up and remove bad code to improve maintainability.
- **Tasks**:
- Identify unused or deprecated code.
- Remove duplicate implementations.
- Simplify complex functions.
- Update outdated dependencies.
---
## Development Resources
### Documentation
- **Official Documentation**: [docs.swarms.world](https://docs.swarms.world)
- **Installation Guide**: [Installation](https://docs.swarms.world/en/latest/swarms/install/install/)
- **Quickstart Guide**: [Get Started](https://docs.swarms.world/en/latest/swarms/install/quickstart/)
- **Agent Architecture**: [Agent Internal Mechanisms](https://docs.swarms.world/en/latest/swarms/framework/agents_explained/)
- **Agent API**: [Agent API](https://docs.swarms.world/en/latest/swarms/structs/agent/)
### Examples and Tutorials
- **Basic Examples**: [examples/](https://github.com/kyegomez/swarms/tree/master/examples)
- **Agent Examples**: [examples/single_agent/](https://github.com/kyegomez/swarms/tree/master/examples/single_agent)
- **Multi-Agent Examples**: [examples/multi_agent/](https://github.com/kyegomez/swarms/tree/master/examples/multi_agent)
- **Tool Examples**: [examples/tools/](https://github.com/kyegomez/swarms/tree/master/examples/tools)
### Creating Multi-Agent Orchestration Methods
### API Reference
- **Goal**: Provide new multi-agent orchestration methods
- **Core Classes**: [swarms/structs/](https://github.com/kyegomez/swarms/tree/master/swarms/structs)
- **Agent Implementations**: [swarms/agents/](https://github.com/kyegomez/swarms/tree/master/swarms/agents)
- **Tool Implementations**: [swarms/tools/](https://github.com/kyegomez/swarms/tree/master/swarms/tools)
- **Utility Functions**: [swarms/utils/](https://github.com/kyegomez/swarms/tree/master/swarms/utils)
---
## Community and Support
### Connect With Us
| Platform | Link | Description |
|----------|------|-------------|
| 📚 Documentation | [docs.swarms.world](https://docs.swarms.world) | Official documentation and guides |
| 📝 Blog | [Medium](https://medium.com/@kyeg) | Latest updates and technical articles |
| 💬 Discord | [Join Discord](https://discord.gg/jM3Z6M9uMq) | Live chat and community support |
| 🐦 Twitter | [@kyegomez](https://twitter.com/kyegomez) | Latest news and announcements |
| 👥 LinkedIn | [The Swarm Corporation](https://www.linkedin.com/company/the-swarm-corporation) | Professional network and updates |
| 📺 YouTube | [Swarms Channel](https://www.youtube.com/channel/UC9yXyitkbU_WSy7bd_41SqQ) | Tutorials and demos |
| 🎫 Events | [Sign up here](https://lu.ma/5p2jnc2v) | Join our community events |
### Onboarding Session
Get onboarded with the creator and lead maintainer of Swarms, Kye Gomez, who will walk you through installation, usage examples, and building your custom use case! [CLICK HERE](https://cal.com/swarms/swarms-onboarding-session)
### Community Guidelines
- **Communication**: Engage with the community by participating in discussions on issues and pull requests.
- **Respect**: Maintain a respectful and inclusive environment.
- **Feedback**: Be open to receiving and providing constructive feedback.
- **Collaboration**: Work together to improve the project for everyone.
---
## License
By contributing to swarms, you agree that your contributions will be licensed under the [MIT License](LICENSE).
By contributing to swarms, you agree that your contributions will be licensed under the [Apache License](LICENSE).
---
## Citation
If you use **swarms** in your research, please cite the project by referencing the metadata in [CITATION.cff](./CITATION.cff).
---
Thank you for contributing to swarms! Your efforts help make this project better for everyone.
If you have any questions or need assistance, please feel free to open an issue or reach out to the maintainers.
If you have any questions or need assistance, please feel free to:
- Open an issue on GitHub
- Join our Discord community
- Reach out to the maintainers
- Schedule an onboarding session
**Happy contributing! 🚀**

@ -236,10 +236,10 @@ nav:
- SpreadSheetSwarm: "swarms/structs/spreadsheet_swarm.md"
- ForestSwarm: "swarms/structs/forest_swarm.md"
- SwarmRouter: "swarms/structs/swarm_router.md"
- TaskQueueSwarm: "swarms/structs/taskqueue_swarm.md"
# - TaskQueueSwarm: "swarms/structs/taskqueue_swarm.md"
- SwarmRearrange: "swarms/structs/swarm_rearrange.md"
- MultiAgentRouter: "swarms/structs/multi_agent_router.md"
- MatrixSwarm: "swarms/structs/matrix_swarm.md"
# - MatrixSwarm: "swarms/structs/matrix_swarm.md"
- ModelRouter: "swarms/structs/model_router.md"
- MALT: "swarms/structs/malt.md"
- Interactive Group Chat: "swarms/structs/interactive_groupchat.md"
@ -308,6 +308,7 @@ nav:
- Examples:
- Overview: "examples/index.md"
- CookBook Index: "examples/cookbook_index.md"
- PreBuilt Templates: "examples/templates_index.md"
- Customizing Agents:
- Basic Agent: "swarms/examples/basic_agent.md"
- Agents with Callable Tools: "swarms/examples/agent_with_tools.md"

@ -0,0 +1,72 @@
# The Swarms Index
The Swarms Index is a comprehensive catalog of repositories under The Swarm Corporation, showcasing a wide array of tools, frameworks, and templates designed for building, deploying, and managing autonomous AI agents and multi-agent systems. These repositories focus on enterprise-grade solutions, spanning industries like healthcare, finance, marketing, and more, with an emphasis on scalability, security, and performance. Many repositories include templates to help developers quickly set up production-ready applications.
| Name | Description | Link |
|------|-------------|------|
| Phala-Deployment-Template | A guide and template for running Swarms Agents in a Trusted Execution Environment (TEE) using Phala Cloud, ensuring secure and isolated execution. | [https://github.com/The-Swarm-Corporation/Phala-Deployment-Template](https://github.com/The-Swarm-Corporation/Phala-Deployment-Template) |
| Swarms-API-Status-Page | A status page for monitoring the health and performance of the Swarms API. | [https://github.com/The-Swarm-Corporation/Swarms-API-Status-Page](https://github.com/The-Swarm-Corporation/Swarms-API-Status-Page) |
| Swarms-API-Phala-Template | A deployment solution template for running Swarms API on Phala Cloud, optimized for secure and scalable agent orchestration. | [https://github.com/The-Swarm-Corporation/Swarms-API-Phala-Template](https://github.com/The-Swarm-Corporation/Swarms-API-Phala-Template) |
| DevSwarm | Develop production-grade applications effortlessly with a single prompt, powered by a swarm of v0-driven autonomous agents operating 24/7 for fully autonomous software development. | [https://github.com/The-Swarm-Corporation/DevSwarm](https://github.com/The-Swarm-Corporation/DevSwarm) |
| Enterprise-Grade-Agents-Course | A comprehensive course teaching students to build, deploy, and manage autonomous agents for enterprise workflows using the Swarms library, focusing on scalability and integration. | [https://github.com/The-Swarm-Corporation/Enterprise-Grade-Agents-Course](https://github.com/The-Swarm-Corporation/Enterprise-Grade-Agents-Course) |
| agentverse | A collection of agents from top frameworks like Langchain, Griptape, and CrewAI, integrated into the Swarms ecosystem. | [https://github.com/The-Swarm-Corporation/agentverse](https://github.com/The-Swarm-Corporation/agentverse) |
| InsuranceSwarm | A swarm of agents to automate document processing and fraud detection in insurance claims. | [https://github.com/The-Swarm-Corporation/InsuranceSwarm](https://github.com/The-Swarm-Corporation/InsuranceSwarm) |
| swarms-examples | A vast array of examples for enterprise-grade and production-ready applications using the Swarms framework. | [https://github.com/The-Swarm-Corporation/swarms-examples](https://github.com/The-Swarm-Corporation/swarms-examples) |
| auto-ai-research-team | Automates AI research at an OpenAI level to accelerate innovation using swarms of agents. | [https://github.com/The-Swarm-Corporation/auto-ai-research-team](https://github.com/The-Swarm-Corporation/auto-ai-research-team) |
| Agents-Beginner-Guide | A definitive beginner's guide to AI agents and multi-agent systems, explaining fundamentals and industry applications. | [https://github.com/The-Swarm-Corporation/Agents-Beginner-Guide](https://github.com/The-Swarm-Corporation/Agents-Beginner-Guide) |
| Solana-Ecosystem-MCP | A collection of Solana tools wrapped in MCP servers for blockchain development. | [https://github.com/The-Swarm-Corporation/Solana-Ecosystem-MCP](https://github.com/The-Swarm-Corporation/Solana-Ecosystem-MCP) |
| automated-crypto-fund | A fully automated crypto fund leveraging swarms of LLM agents for real-money trading. | [https://github.com/The-Swarm-Corporation/automated-crypto-fund](https://github.com/The-Swarm-Corporation/automated-crypto-fund) |
| Mryaid | The first multi-agent social media platform powered by Swarms. | [https://github.com/The-Swarm-Corporation/Mryaid](https://github.com/The-Swarm-Corporation/Mryaid) |
| pharma-swarm | A swarm of autonomous agents for chemical analysis in the pharmaceutical industry. | [https://github.com/The-Swarm-Corporation/pharma-swarm](https://github.com/The-Swarm-Corporation/pharma-swarm) |
| Automated-Prompt-Engineering-Hub | A hub for tools and resources focused on automated prompt engineering for generative AI. | [https://github.com/The-Swarm-Corporation/Automated-Prompt-Engineering-Hub](https://github.com/The-Swarm-Corporation/Automated-Prompt-Engineering-Hub) |
| Multi-Agent-Template-App | A simple, reliable, and high-performance template for building multi-agent applications. | [https://github.com/The-Swarm-Corporation/Multi-Agent-Template-App](https://github.com/The-Swarm-Corporation/Multi-Agent-Template-App) |
| Cookbook | Examples and guides for using the Swarms Framework effectively. | [https://github.com/The-Swarm-Corporation/Cookbook](https://github.com/The-Swarm-Corporation/Cookbook) |
| SwarmDB | A production-grade message queue system for agent communication and LLM backend load balancing. | [https://github.com/The-Swarm-Corporation/SwarmDB](https://github.com/The-Swarm-Corporation/SwarmDB) |
| CryptoTaxSwarm | A personal advisory tax swarm for cryptocurrency transactions. | [https://github.com/The-Swarm-Corporation/CryptoTaxSwarm](https://github.com/The-Swarm-Corporation/CryptoTaxSwarm) |
| Multi-Agent-Marketing-Course | A course on automating marketing operations with enterprise-grade multi-agent collaboration. | [https://github.com/The-Swarm-Corporation/Multi-Agent-Marketing-Course](https://github.com/The-Swarm-Corporation/Multi-Agent-Marketing-Course) |
| Swarms-BrandBook | Branding guidelines and assets for Swarms.ai, embodying innovation and collaboration. | [https://github.com/The-Swarm-Corporation/Swarms-BrandBook](https://github.com/The-Swarm-Corporation/Swarms-BrandBook) |
| AgentAPI | A definitive API for managing and interacting with AI agents. | [https://github.com/The-Swarm-Corporation/AgentAPI](https://github.com/The-Swarm-Corporation/AgentAPI) |
| Research-Paper-Writer-Swarm | Automates the creation of high-quality research papers in LaTeX using Swarms agents. | [https://github.com/The-Swarm-Corporation/Research-Paper-Writer-Swarm](https://github.com/The-Swarm-Corporation/Research-Paper-Writer-Swarm) |
| swarms-sdk | A Python client for the Swarms API, providing a simple interface for managing AI swarms. | [https://github.com/The-Swarm-Corporation/swarms-sdk](https://github.com/The-Swarm-Corporation/swarms-sdk) |
| FluidAPI | A framework for interacting with APIs using natural language, simplifying complex requests. | [https://github.com/The-Swarm-Corporation/FluidAPI](https://github.com/The-Swarm-Corporation/FluidAPI) |
| MedicalCoderSwarm | A multi-agent system for comprehensive medical diagnosis and coding using specialized AI agents. | [https://github.com/The-Swarm-Corporation/MedicalCoderSwarm](https://github.com/The-Swarm-Corporation/MedicalCoderSwarm) |
| BackTesterAgent | An AI-powered backtesting framework for automated trading strategy validation and optimization. | [https://github.com/The-Swarm-Corporation/BackTesterAgent](https://github.com/The-Swarm-Corporation/BackTesterAgent) |
| .ai | The first natural language programming language powered by Swarms. | [https://github.com/The-Swarm-Corporation/.ai](https://github.com/The-Swarm-Corporation/.ai) |
| AutoHedge | An autonomous hedge fund leveraging swarm intelligence for market analysis and trade execution. | [https://github.com/The-Swarm-Corporation/AutoHedge](https://github.com/The-Swarm-Corporation/AutoHedge) |
| radiology-swarm | A multi-agent system for advanced radiological analysis, diagnosis, and treatment planning. | [https://github.com/The-Swarm-Corporation/radiology-swarm](https://github.com/The-Swarm-Corporation/radiology-swarm) |
| MedGuard | A Python library ensuring HIPAA compliance for LLM agents in healthcare applications. | [https://github.com/The-Swarm-Corporation/MedGuard](https://github.com/The-Swarm-Corporation/MedGuard) |
| doc-master | A lightweight Python library for automated file reading and content extraction. | [https://github.com/The-Swarm-Corporation/doc-master](https://github.com/The-Swarm-Corporation/doc-master) |
| Open-Aladdin | An open-source risk-management tool for stock and security risk analysis. | [https://github.com/The-Swarm-Corporation/Open-Aladdin](https://github.com/The-Swarm-Corporation/Open-Aladdin) |
| TickrAgent | A scalable Python library for building financial agents for comprehensive stock analysis. | [https://github.com/The-Swarm-Corporation/TickrAgent](https://github.com/The-Swarm-Corporation/TickrAgent) |
| NewsAgent | An enterprise-grade news aggregation agent for fetching, querying, and summarizing news. | [https://github.com/The-Swarm-Corporation/NewsAgent](https://github.com/The-Swarm-Corporation/NewsAgent) |
| Research-Paper-Hive | A platform for discovering and engaging with relevant research papers efficiently. | [https://github.com/The-Swarm-Corporation/Research-Paper-Hive](https://github.com/The-Swarm-Corporation/Research-Paper-Hive) |
| MedInsight-Pro | Revolutionizes medical research summarization for healthcare innovators. | [https://github.com/The-Swarm-Corporation/MedInsight-Pro](https://github.com/The-Swarm-Corporation/MedInsight-Pro) |
| swarms-memory | Pre-built wrappers for RAG systems like ChromaDB, Weaviate, and Pinecone. | [https://github.com/The-Swarm-Corporation/swarms-memory](https://github.com/The-Swarm-Corporation/swarms-memory) |
| CryptoAgent | An enterprise-grade solution for fetching, analyzing, and summarizing cryptocurrency data. | [https://github.com/The-Swarm-Corporation/CryptoAgent](https://github.com/The-Swarm-Corporation/CryptoAgent) |
| AgentParse | A high-performance parsing library for mapping structured data into agent-understandable blocks. | [https://github.com/The-Swarm-Corporation/AgentParse](https://github.com/The-Swarm-Corporation/AgentParse) |
| CodeGuardian | An intelligent agent for automating the generation of production-grade unit tests for Python code. | [https://github.com/The-Swarm-Corporation/CodeGuardian](https://github.com/The-Swarm-Corporation/CodeGuardian) |
| Marketing-Swarm-Template | A framework for creating multi-platform marketing content using Swarms AI agents. | [https://github.com/The-Swarm-Corporation/Marketing-Swarm-Template](https://github.com/The-Swarm-Corporation/Marketing-Swarm-Template) |
| HTX-Swarm | A multi-agent system for real-time market analysis of HTX exchange data. | [https://github.com/The-Swarm-Corporation/HTX-Swarm](https://github.com/The-Swarm-Corporation/HTX-Swarm) |
| MultiModelOptimizer | A hierarchical parameter synchronization approach for joint training of transformer models. | [https://github.com/The-Swarm-Corporation/MultiModelOptimizer](https://github.com/The-Swarm-Corporation/MultiModelOptimizer) |
| MortgageUnderwritingSwarm | A multi-agent pipeline for automating mortgage underwriting processes. | [https://github.com/The-Swarm-Corporation/MortgageUnderwritingSwarm](https://github.com/The-Swarm-Corporation/MortgageUnderwritingSwarm) |
| DermaSwarm | A multi-agent system for dermatologists to diagnose and treat skin conditions collaboratively. | [https://github.com/The-Swarm-Corporation/DermaSwarm](https://github.com/The-Swarm-Corporation/DermaSwarm) |
| IoTAgents | Integrates IoT data with AI agents for seamless parsing and processing of data streams. | [https://github.com/The-Swarm-Corporation/IoTAgents](https://github.com/The-Swarm-Corporation/IoTAgents) |
| eth-agent | An autonomous agent for analyzing on-chain Ethereum data. | [https://github.com/The-Swarm-Corporation/eth-agent](https://github.com/The-Swarm-Corporation/eth-agent) |
| Medical-Swarm-One-Click | A template for building safe, reliable, and production-grade medical multi-agent systems. | [https://github.com/The-Swarm-Corporation/Medical-Swarm-One-Click](https://github.com/The-Swarm-Corporation/Medical-Swarm-One-Click) |
| Swarms-Example-1-Click-Template | A one-click template for building Swarms applications quickly. | [https://github.com/The-Swarm-Corporation/Swarms-Example-1-Click-Template](https://github.com/The-Swarm-Corporation/Swarms-Example-1-Click-Template) |
| Custom-Swarms-Spec-Template | An official specification template for custom swarm development using the Swarms Framework. | [https://github.com/The-Swarm-Corporation/Custom-Swarms-Spec-Template](https://github.com/The-Swarm-Corporation/Custom-Swarms-Spec-Template) |
| Swarms-LlamaIndex-RAG-Template | A template for integrating Llama Index into Swarms applications for RAG capabilities. | [https://github.com/The-Swarm-Corporation/Swarms-LlamaIndex-RAG-Template](https://github.com/The-Swarm-Corporation/Swarms-LlamaIndex-RAG-Template) |
| ForexTreeSwarm | A forex market analysis system using a swarm of AI agents organized in a forest structure. | [https://github.com/The-Swarm-Corporation/ForexTreeSwarm](https://github.com/The-Swarm-Corporation/ForexTreeSwarm) |
| Generalist-Mathematician-Swarm | A swarm of agents for solving complex mathematical problems collaboratively. | [https://github.com/The-Swarm-Corporation/Generalist-Mathematician-Swarm](https://github.com/The-Swarm-Corporation/Generalist-Mathematician-Swarm) |
| Multi-Modal-XRAY-Diagnosis-Medical-Swarm-Template | A template for analyzing X-rays, MRIs, and more using a swarm of agents. | [https://github.com/The-Swarm-Corporation/Multi-Modal-XRAY-Diagnosis-Medical-Swarm-Template](https://github.com/The-Swarm-Corporation/Multi-Modal-XRAY-Diagnosis-Medical-Swarm-Template) |
| AgentRAGProtocol | A protocol for integrating Retrieval-Augmented Generation (RAG) into AI agents. | [https://github.com/The-Swarm-Corporation/AgentRAGProtocol](https://github.com/The-Swarm-Corporation/AgentRAGProtocol) |
| Multi-Agent-RAG-Template | A template for creating collaborative AI agent teams for document processing and analysis. | [https://github.com/The-Swarm-Corporation/Multi-Agent-RAG-Template](https://github.com/The-Swarm-Corporation/Multi-Agent-RAG-Template) |
| REACT-Yaml-Agent | An implementation of a REACT agent using YAML instead of JSON. | [https://github.com/The-Swarm-Corporation/REACT-Yaml-Agent](https://github.com/The-Swarm-Corporation/REACT-Yaml-Agent) |
| SwarmsXGCP | A template for deploying Swarms agents on Google Cloud Run. | [https://github.com/The-Swarm-Corporation/SwarmsXGCP](https://github.com/The-Swarm-Corporation/SwarmsXGCP) |
| Legal-Swarm-Template | A one-click template for building legal-focused Swarms applications. | [https://github.com/The-Swarm-Corporation/Legal-Swarm-Template](https://github.com/The-Swarm-Corporation/Legal-Swarm-Template) |
| swarms_sim | A simulation of a swarm of agents in a professional workplace environment. | [https://github.com/The-Swarm-Corporation/swarms_sim](https://github.com/The-Swarm-Corporation/swarms_sim) |
| medical-problems | A repository of medical problems for building Swarms applications. | [https://github.com/The-Swarm-Corporation/medical-problems](https://github.com/The-Swarm-Corporation/medical-problems) |
| swarm-ecosystem | An overview of the Swarm Ecosystem and its components. | [https://github.com/The-Swarm-Corporation/swarm-ecosystem](https://github.com/The-Swarm-Corporation/swarm-ecosystem) |
| swarms_ecosystem_md | MDX documentation for the Swarm Ecosystem. | [https://github.com/The-Swarm-Corporation/swarms_ecosystem_md](https://github.com/The-Swarm-Corporation/swarms_ecosystem_md) |

@ -38,7 +38,8 @@ agent = Agent(
model_name="gpt-4o-mini",
dynamic_temperature_enabled=True,
output_type="all",
safety_prompt_on=True,
max_tokens=16384,
# dashboard=True
)
out = agent.run("What are the best top 3 etfs for gold coverage?")

@ -11,11 +11,13 @@ agent = Agent(
system_prompt=FINANCIAL_AGENT_SYS_PROMPT,
max_loops=1,
mcp_url="http://0.0.0.0:8000/sse",
model_name="gpt-4o-mini",
output_type="all",
)
# Create a markdown file with initial content
out = agent.run(
"Use any of the tools available to you",
"Use the get_okx_crypto_volume to get the volume of BTC just put the name of the coin",
)
print(out)

@ -0,0 +1,49 @@
from swarms import Agent
# Initialize the agent
agent = Agent(
agent_name="Quantitative-Trading-Agent",
agent_description="Advanced quantitative trading and algorithmic analysis agent",
system_prompt="""
You are an expert quantitative trading agent with deep expertise in:
- Algorithmic trading strategies and implementation
- Statistical arbitrage and market making
- Risk management and portfolio optimization
- High-frequency trading systems
- Market microstructure analysis
- Quantitative research methodologies
- Financial mathematics and stochastic processes
- Machine learning applications in trading
Your core responsibilities include:
1. Developing and backtesting trading strategies
2. Analyzing market data and identifying alpha opportunities
3. Implementing risk management frameworks
4. Optimizing portfolio allocations
5. Conducting quantitative research
6. Monitoring market microstructure
7. Evaluating trading system performance
You maintain strict adherence to:
- Mathematical rigor in all analyses
- Statistical significance in strategy development
- Risk-adjusted return optimization
- Market impact minimization
- Regulatory compliance
- Transaction cost analysis
- Performance attribution
You communicate in precise, technical terms while maintaining clarity for stakeholders.""",
max_loops=1,
model_name="gpt-4o-mini",
dynamic_temperature_enabled=True,
output_type="all",
mcp_urls=[
"http://0.0.0.0:8000/sse",
"http://0.0.0.0:8001/sse",
],
)
agent.run(
"Please use the get_okx_crypto_volume tool to get the trading volume for Bitcoin (BTC). Provide the volume information."
)

@ -0,0 +1,12 @@
from swarms.tools.mcp_client_call import (
get_mcp_tools_sync,
execute_tool_call_simple,
)
tools = get_mcp_tools_sync()
print(tools)
result = execute_tool_call_simple(tools[0], "Hello, world!")
print(result)

@ -0,0 +1,234 @@
"""
Example demonstrating how to execute multiple tools across multiple MCP servers.
This example shows how to:
1. Create a mapping of function names to servers
2. Execute multiple tool calls across different servers
3. Handle responses with tool calls and route them to the appropriate servers
"""
import asyncio
from swarms.tools.mcp_client_call import (
execute_multiple_tools_on_multiple_mcp_servers,
execute_multiple_tools_on_multiple_mcp_servers_sync,
get_tools_for_multiple_mcp_servers,
)
from swarms.schemas.mcp_schemas import MCPConnection
def example_sync_execution():
"""Example of synchronous execution across multiple MCP servers."""
# Example server URLs (replace with your actual MCP server URLs)
urls = [
"http://localhost:8000/sse", # Server 1
"http://localhost:8001/sse", # Server 2
"http://localhost:8002/sse", # Server 3
]
# Optional: Create connection objects for each server
connections = [
MCPConnection(
url="http://localhost:8000/sse",
authorization_token="token1", # if needed
timeout=10,
),
MCPConnection(
url="http://localhost:8001/sse",
authorization_token="token2", # if needed
timeout=10,
),
MCPConnection(
url="http://localhost:8002/sse",
authorization_token="token3", # if needed
timeout=10,
),
]
# Example responses containing tool calls
# These would typically come from an LLM that decided to use tools
responses = [
{
"function": {
"name": "search_web",
"arguments": {
"query": "python programming best practices"
},
}
},
{
"function": {
"name": "search_database",
"arguments": {"table": "users", "id": 123},
}
},
{
"function": {
"name": "send_email",
"arguments": {
"to": "user@example.com",
"subject": "Test email",
"body": "This is a test email",
},
}
},
]
print("=== Synchronous Execution Example ===")
print(
f"Executing {len(responses)} tool calls across {len(urls)} servers..."
)
try:
# Execute all tool calls across multiple servers
results = execute_multiple_tools_on_multiple_mcp_servers_sync(
responses=responses,
urls=urls,
connections=connections,
output_type="dict",
max_concurrent=5, # Limit concurrent executions
)
print(f"\nExecution completed! Got {len(results)} results:")
for i, result in enumerate(results):
print(f"\nResult {i + 1}:")
print(f" Function: {result['function_name']}")
print(f" Server: {result['server_url']}")
print(f" Status: {result['status']}")
if result["status"] == "success":
print(f" Result: {result['result']}")
else:
print(
f" Error: {result.get('error', 'Unknown error')}"
)
except Exception as e:
print(f"Error during execution: {str(e)}")
async def example_async_execution():
"""Example of asynchronous execution across multiple MCP servers."""
# Example server URLs
urls = [
"http://localhost:8000/sse",
"http://localhost:8001/sse",
"http://localhost:8002/sse",
]
# Example responses with multiple tool calls in a single response
responses = [
{
"tool_calls": [
{
"function": {
"name": "search_web",
"arguments": {
"query": "machine learning trends 2024"
},
}
},
{
"function": {
"name": "search_database",
"arguments": {
"table": "articles",
"category": "AI",
},
}
},
]
},
{
"function": {
"name": "send_notification",
"arguments": {
"user_id": 456,
"message": "Your analysis is complete",
},
}
},
]
print("\n=== Asynchronous Execution Example ===")
print(
f"Executing tool calls across {len(urls)} servers asynchronously..."
)
try:
# Execute all tool calls across multiple servers
results = (
await execute_multiple_tools_on_multiple_mcp_servers(
responses=responses,
urls=urls,
output_type="str",
max_concurrent=3,
)
)
print(
f"\nAsync execution completed! Got {len(results)} results:"
)
for i, result in enumerate(results):
print(f"\nResult {i + 1}:")
print(f" Response Index: {result['response_index']}")
print(f" Function: {result['function_name']}")
print(f" Server: {result['server_url']}")
print(f" Status: {result['status']}")
if result["status"] == "success":
print(f" Result: {result['result']}")
else:
print(
f" Error: {result.get('error', 'Unknown error')}"
)
except Exception as e:
print(f"Error during async execution: {str(e)}")
def example_get_tools_from_multiple_servers():
"""Example of getting tools from multiple servers."""
urls = [
"http://localhost:8000/sse",
"http://localhost:8001/sse",
"http://localhost:8002/sse",
]
print("\n=== Getting Tools from Multiple Servers ===")
try:
# Get all available tools from all servers
all_tools = get_tools_for_multiple_mcp_servers(
urls=urls, format="openai", output_type="dict"
)
print(
f"Found {len(all_tools)} total tools across all servers:"
)
# Group tools by function name to see what's available
function_names = set()
for tool in all_tools:
if isinstance(tool, dict) and "function" in tool:
function_names.add(tool["function"]["name"])
elif hasattr(tool, "name"):
function_names.add(tool.name)
print("Available functions:")
for func_name in sorted(function_names):
print(f" - {func_name}")
except Exception as e:
print(f"Error getting tools: {str(e)}")
if __name__ == "__main__":
# Run synchronous example
example_sync_execution()
# Run async example
asyncio.run(example_async_execution())
# Get tools from multiple servers
example_get_tools_from_multiple_servers()

@ -0,0 +1,54 @@
"""
Simple test for the execute_multiple_tools_on_multiple_mcp_servers functionality.
"""
from swarms.tools.mcp_client_call import (
execute_multiple_tools_on_multiple_mcp_servers_sync,
)
def test_async_multiple_tools_execution():
"""Test the async multiple tools execution function structure."""
print(
"\nTesting async multiple tools execution function structure..."
)
urls = [
"http://localhost:8000/sse",
"http://localhost:8001/sse",
]
# Mock responses with multiple tool calls
responses = [
{
"tool_calls": [
{
"function": {
"name": "get_okx_crypto_price",
"arguments": {"symbol": "SOL-USDT"},
}
},
{
"function": {
"name": "get_crypto_price",
"arguments": {"coin_id": "solana"},
}
},
]
}
]
try:
# This will likely fail to connect, but we can test the function structure
results = execute_multiple_tools_on_multiple_mcp_servers_sync(
responses=responses, urls=urls
)
print(f"Got {len(results)} results")
print(results)
except Exception as e:
print(f"Expected error (no servers running): {str(e)}")
print("Async function structure is working correctly!")
if __name__ == "__main__":
test_async_multiple_tools_execution()

@ -18,7 +18,7 @@ router = SwarmRouter(
name="multi-agent-router-demo",
description="Routes tasks to the most suitable agent",
agents=agents,
swarm_type="MultiAgentRouter"
swarm_type="MultiAgentRouter",
)
result = router.run("Write a function that adds two numbers")

@ -176,15 +176,15 @@ agent = Agent(
max_loops=1,
model_name="gpt-4o-mini",
dynamic_temperature_enabled=True,
output_type="all",
output_type="final",
tool_call_summary=True,
tools=[
get_coin_price,
get_top_cryptocurrencies,
],
# output_raw_json_from_tool_call=True,
)
print(
agent.run(
"What is the price of Bitcoin? what are the top 5 cryptocurrencies by market cap?"
)
)
out = agent.run("What is the price of Bitcoin?")
print(out)
print(f"Output type: {type(out)}")

@ -0,0 +1,20 @@
from swarms import Agent
from swarms.prompts.finance_agent_sys_prompt import (
FINANCIAL_AGENT_SYS_PROMPT,
)
from swarms_tools import yahoo_finance_api
# Initialize the agent
agent = Agent(
agent_name="Financial-Analysis-Agent",
agent_description="Personal finance advisor agent",
system_prompt=FINANCIAL_AGENT_SYS_PROMPT,
max_loops=1,
model_name="gpt-4o-mini",
tools=[yahoo_finance_api],
dynamic_temperature_enabled=True,
)
agent.run(
"Fetch the data for nvidia and tesla both with the yahoo finance api"
)

@ -1,31 +0,0 @@
from swarms import Agent
from swarms.prompts.finance_agent_sys_prompt import (
FINANCIAL_AGENT_SYS_PROMPT,
)
from swarms_tools import (
fetch_stock_news,
coin_gecko_coin_api,
fetch_htx_data,
)
# Initialize the agent
agent = Agent(
agent_name="Financial-Analysis-Agent",
agent_description="Personal finance advisor agent",
system_prompt=FINANCIAL_AGENT_SYS_PROMPT,
max_loops=1,
model_name="gpt-4o",
dynamic_temperature_enabled=True,
user_name="swarms_corp",
retry_attempts=3,
context_length=8192,
return_step_meta=False,
output_type="str", # "json", "dict", "csv" OR "string" "yaml" and
auto_generate_prompt=False, # Auto generate prompt for the agent based on name, description, and system prompt, task
max_tokens=4000, # max output tokens
saved_state_path="agent_00.json",
interactive=False,
tools=[fetch_stock_news, coin_gecko_coin_api, fetch_htx_data],
)
agent.run("Analyze the $swarms token on htx")

@ -1,4 +1,4 @@
from swarms.structs import Agent
from swarms import Agent
from swarms.prompts.logistics import (
Quality_Control_Agent_Prompt,
)
@ -16,6 +16,8 @@ quality_control_agent = Agent(
multi_modal=True,
max_loops=1,
output_type="str-all-except-first",
dynamic_temperature_enabled=True,
stream=True,
)
response = quality_control_agent.run(

(Binary image changed; preview omitted. Size: 232 KiB before and after.)

@ -5,7 +5,7 @@ build-backend = "poetry.core.masonry.api"
[tool.poetry]
name = "swarms"
version = "7.8.6"
version = "7.8.8"
description = "Swarms - TGSC"
license = "MIT"
authors = ["Kye Gomez <kye@apac.ai>"]

@ -22,6 +22,7 @@ agent_types = Literal[
"AgentJudge",
]
class ReasoningAgentRouter:
"""
A Reasoning Agent that can answer questions and assist with various tasks using different reasoning strategies.
@ -74,19 +75,16 @@ class ReasoningAgentRouter:
# ReasoningDuo factory methods
"reasoning-duo": self._create_reasoning_duo,
"reasoning-agent": self._create_reasoning_duo,
# SelfConsistencyAgent factory methods
"self-consistency": self._create_consistency_agent,
"consistency-agent": self._create_consistency_agent,
# IREAgent factory methods
"ire": self._create_ire_agent,
"ire-agent": self._create_ire_agent,
# Other agent type factory methods
"AgentJudge": self._create_agent_judge,
"ReflexionAgent": self._create_reflexion_agent,
"GKPAgent": self._create_gkp_agent
"GKPAgent": self._create_gkp_agent,
}
# Added: Concrete factory methods for various agent types

@ -72,8 +72,10 @@ from swarms.prompts.max_loop_prompt import generate_reasoning_prompt
from swarms.prompts.safety_prompt import SAFETY_PROMPT
from swarms.structs.ma_utils import set_random_models_for_agents
from swarms.tools.mcp_client_call import (
execute_multiple_tools_on_multiple_mcp_servers_sync,
execute_tool_call_simple,
get_mcp_tools_sync,
get_tools_for_multiple_mcp_servers,
)
from swarms.schemas.mcp_schemas import (
MCPConnection,
@ -81,7 +83,6 @@ from swarms.schemas.mcp_schemas import (
from swarms.utils.index import (
exists,
format_data_structure,
format_dict_to_string,
)
from swarms.schemas.conversation_schema import ConversationSchema
from swarms.utils.output_types import OutputType
@ -417,6 +418,8 @@ class Agent:
llm_base_url: Optional[str] = None,
llm_api_key: Optional[str] = None,
rag_config: Optional[RAGConfig] = None,
tool_call_summary: bool = True,
output_raw_json_from_tool_call: bool = False,
*args,
**kwargs,
):
@ -445,7 +448,10 @@ class Agent:
self.system_prompt = system_prompt
self.agent_name = agent_name
self.agent_description = agent_description
self.saved_state_path = f"{self.agent_name}_{generate_api_key(prefix='agent-')}_state.json"
# self.saved_state_path = f"{self.agent_name}_{generate_api_key(prefix='agent-')}_state.json"
self.saved_state_path = (
f"{generate_api_key(prefix='agent-')}_state.json"
)
self.autosave = autosave
self.response_filters = []
self.self_healing_enabled = self_healing_enabled
@ -548,6 +554,10 @@ class Agent:
self.llm_base_url = llm_base_url
self.llm_api_key = llm_api_key
self.rag_config = rag_config
self.tool_call_summary = tool_call_summary
self.output_raw_json_from_tool_call = (
output_raw_json_from_tool_call
)
# self.short_memory = self.short_memory_init()
@ -592,6 +602,11 @@ class Agent:
if self.long_term_memory is not None:
self.rag_handler = self.rag_setup_handling()
if self.dashboard is True:
self.print_dashboard()
self.reliability_check()
def rag_setup_handling(self):
return AgentRAGHandler(
long_term_memory=self.long_term_memory,
@ -616,7 +631,7 @@ class Agent:
self.short_memory.add(
role=f"{self.agent_name}",
content=f"Tools available: {format_data_structure(self.tools_list_dictionary)}",
content=self.tools_list_dictionary,
)
def short_memory_init(self):
@ -685,6 +700,10 @@ class Agent:
if exists(self.tools) and len(self.tools) >= 2:
parallel_tool_calls = True
elif exists(self.mcp_url) or exists(self.mcp_urls):
parallel_tool_calls = True
elif exists(self.mcp_config):
parallel_tool_calls = True
else:
parallel_tool_calls = False
@ -707,7 +726,7 @@ class Agent:
parallel_tool_calls=parallel_tool_calls,
)
elif self.mcp_url is not None:
elif exists(self.mcp_url) or exists(self.mcp_urls):
self.llm = LiteLLM(
**common_args,
tools_list_dictionary=self.add_mcp_tools_to_memory(),
@ -745,11 +764,23 @@ class Agent:
tools = get_mcp_tools_sync(server_path=self.mcp_url)
elif exists(self.mcp_config):
tools = get_mcp_tools_sync(connection=self.mcp_config)
logger.info(f"Tools: {tools}")
# logger.info(f"Tools: {tools}")
elif exists(self.mcp_urls):
tools = get_tools_for_multiple_mcp_servers(
urls=self.mcp_urls,
output_type="str",
)
# print(f"Tools: {tools} for {self.mcp_urls}")
else:
raise AgentMCPConnectionError(
"mcp_url must be either a string URL or MCPConnection object"
)
if (
exists(self.mcp_url)
or exists(self.mcp_urls)
or exists(self.mcp_config)
):
self.pretty_print(
f"✨ [SYSTEM] Successfully integrated {len(tools)} MCP tools into agent: {self.agent_name} | Status: ONLINE | Time: {time.strftime('%H:%M:%S')}",
loop_count=0,
@ -832,26 +863,6 @@ class Agent:
self.feedback.append(feedback)
logging.info(f"Feedback received: {feedback}")
def agent_initialization(self):
try:
logger.info(
f"Initializing Autonomous Agent {self.agent_name}..."
)
self.check_parameters()
logger.info(
f"{self.agent_name} Initialized Successfully."
)
logger.info(
f"Autonomous Agent {self.agent_name} Activated, all systems operational. Executing task..."
)
if self.dashboard is True:
self.print_dashboard()
except ValueError as e:
logger.info(f"Error initializing agent: {e}")
raise e
def _check_stopping_condition(self, response: str) -> bool:
"""Check if the stopping condition is met."""
try:
@ -883,47 +894,37 @@ class Agent:
)
def print_dashboard(self):
"""Print dashboard"""
formatter.print_panel(
f"Initializing Agent: {self.agent_name}"
)
data = self.to_dict()
# Beautify the data
# data = json.dumps(data, indent=4)
# json_data = json.dumps(data, indent=4)
tools_activated = True if self.tools is not None else False
mcp_activated = True if self.mcp_url is not None else False
formatter.print_panel(
f"""
Agent Dashboard
--------------------------------------------
Agent {self.agent_name} is initializing for {self.max_loops} with the following configuration:
----------------------------------------
🤖 Agent {self.agent_name} Dashboard 🚀
Agent Configuration:
Configuration: {data}
🎯 Agent {self.agent_name} Status: ONLINE & OPERATIONAL
----------------------------------------
""",
)
📋 Agent Identity:
🏷 Name: {self.agent_name}
📝 Description: {self.agent_description}
# Check parameters
def check_parameters(self):
if self.llm is None:
raise ValueError(
"Language model is not provided. Choose a model from the available models in swarm_models or create a class with a run(task: str) method and or a __call__ method."
)
Technical Specifications:
🤖 Model: {self.model_name}
🔄 Internal Loops: {self.max_loops}
🎯 Max Tokens: {self.max_tokens}
🌡 Dynamic Temperature: {self.dynamic_temperature_enabled}
if self.max_loops is None or self.max_loops == 0:
raise ValueError("Max loops is not provided")
🔧 System Modules:
🛠 Tools Activated: {tools_activated}
🔗 MCP Activated: {mcp_activated}
if self.max_tokens == 0 or self.max_tokens is None:
raise ValueError("Max tokens is not provided")
🚀 Ready for Tasks 🚀
if self.context_length == 0 or self.context_length is None:
raise ValueError("Context length is not provided")
""",
title=f"Agent {self.agent_name} Dashboard",
)
# Main function
def _run(
@ -962,7 +963,7 @@ class Agent:
self.short_memory.add(role=self.user_name, content=task)
if self.plan_enabled:
if self.plan_enabled or self.planning_prompt is not None:
self.plan(task)
# Set the loop count
@ -1029,64 +1030,51 @@ class Agent:
)
self.memory_query(task_prompt)
# # Generate response using LLM
# response_args = (
# (task_prompt, *args)
# if img is None
# else (task_prompt, img, *args)
# )
# # Call the LLM
# response = self.call_llm(
# *response_args, **kwargs
# )
response = self.call_llm(
task=task_prompt, img=img, *args, **kwargs
)
print(f"Response: {response}")
if exists(self.tools_list_dictionary):
if isinstance(response, BaseModel):
response = response.model_dump()
# # Convert to a str if the response is not a str
# if self.mcp_url is None or self.tools is None:
# Parse the response from the agent with the output type
response = self.parse_llm_output(response)
self.short_memory.add(
role=self.agent_name,
content=format_dict_to_string(response),
content=response,
)
# Print
self.pretty_print(response, loop_count)
# # Output Cleaner
# self.output_cleaner_op(response)
# Check and execute tools
# Check and execute callable tools
if exists(self.tools):
if (
self.output_raw_json_from_tool_call
is True
):
print(type(response))
response = response
else:
self.execute_tools(
response=response,
loop_count=loop_count,
)
if exists(self.mcp_url):
self.mcp_tool_handling(
response, loop_count
)
if exists(self.mcp_url) and exists(
self.tools
# Handle MCP tools
if (
exists(self.mcp_url)
or exists(self.mcp_config)
or exists(self.mcp_urls)
):
self.mcp_tool_handling(
response, loop_count
)
self.execute_tools(
response=response,
loop_count=loop_count,
current_loop=loop_count,
)
self.sentiment_and_evaluator(response)
@ -1275,33 +1263,12 @@ class Agent:
def receive_message(
self, agent_name: str, task: str, *args, **kwargs
):
return self.run(
task=f"From {agent_name}: {task}", *args, **kwargs
improved_prompt = (
f"You have received a message from agent '{agent_name}':\n\n"
f'"{task}"\n\n'
"Please process this message and respond appropriately."
)
def dict_to_csv(self, data: dict) -> str:
"""
Convert a dictionary to a CSV string.
Args:
data (dict): The dictionary to convert.
Returns:
str: The CSV string representation of the dictionary.
"""
import csv
import io
output = io.StringIO()
writer = csv.writer(output)
# Write header
writer.writerow(data.keys())
# Write values
writer.writerow(data.values())
return output.getvalue()
return self.run(task=improved_prompt, *args, **kwargs)
# def parse_and_execute_tools(self, response: str, *args, **kwargs):
# max_retries = 3 # Maximum number of retries
@ -1351,26 +1318,47 @@ class Agent:
def plan(self, task: str, *args, **kwargs) -> None:
"""
Plan the task
Create a strategic plan for executing the given task.
This method generates a step-by-step plan by combining the conversation
history, planning prompt, and current task. The plan is then added to
the agent's short-term memory for reference during execution.
Args:
task (str): The task to plan
task (str): The task to create a plan for
*args: Additional positional arguments passed to the LLM
**kwargs: Additional keyword arguments passed to the LLM
Returns:
None: The plan is stored in memory rather than returned
Raises:
Exception: If planning fails, the original exception is re-raised
"""
try:
if exists(self.planning_prompt):
# Join the plan and the task
planning_prompt = f"{self.planning_prompt} {task}"
plan = self.llm(planning_prompt, *args, **kwargs)
logger.info(f"Plan: {plan}")
# Get the current conversation history
history = self.short_memory.get_str()
# Add the plan to the memory
self.short_memory.add(
role=self.agent_name, content=str(plan)
# Construct the planning prompt by combining history, planning prompt, and task
planning_prompt = (
f"{history}\n\n{self.planning_prompt}\n\nTask: {task}"
)
# Generate the plan using the LLM
plan = self.llm.run(task=planning_prompt, *args, **kwargs)
# Store the generated plan in short-term memory
self.short_memory.add(role=self.agent_name, content=plan)
logger.info(
f"Successfully created plan for task: {task[:50]}..."
)
return None
except Exception as error:
logger.error(f"Error planning task: {error}")
logger.error(
f"Failed to create plan for task '{task}': {error}"
)
raise error
async def run_concurrent(self, task: str, *args, **kwargs):
@ -1453,6 +1441,52 @@ class Agent:
logger.error(f"Error running batched tasks: {error}")
raise
def reliability_check(self):
from litellm.utils import (
supports_function_calling,
get_max_tokens,
)
from litellm import model_list
if self.system_prompt is None:
logger.warning(
"The system prompt is not set. Please set a system prompt for the agent to improve reliability."
)
if self.agent_name is None:
logger.warning(
"The agent name is not set. Please set an agent name to improve reliability."
)
if self.max_loops is None or self.max_loops == 0:
raise AgentInitializationError(
"Max loops is not provided or is set to 0. Please set max loops to 1 or more."
)
if self.max_tokens is None or self.max_tokens == 0:
self.max_tokens = get_max_tokens(self.model_name)
if self.context_length is None or self.context_length == 0:
raise AgentInitializationError(
"Context length is not provided. Please set a valid context length."
)
if self.tools_list_dictionary is not None:
if not supports_function_calling(self.model_name):
raise AgentInitializationError(
f"The model '{self.model_name}' does not support function calling. Please use a model that supports function calling."
)
if self.max_tokens > get_max_tokens(self.model_name):
raise AgentInitializationError(
f"Max tokens is set to {self.max_tokens}, but the model '{self.model_name}' only supports {get_max_tokens(self.model_name)} tokens. Please set max tokens to {get_max_tokens(self.model_name)} or less."
)
if self.model_name not in model_list:
logger.warning(
f"The model '{self.model_name}' is not supported. Please use a supported model, or override the model name with the 'llm' parameter, which should be a class with a 'run(task: str)' method or a '__call__' method."
)
def save(self, file_path: str = None) -> None:
"""
Save the agent state to a file using SafeStateManager with atomic writing
@ -2670,7 +2704,7 @@ class Agent:
) # Convert other dicts to string
elif isinstance(response, BaseModel):
out = response.model_dump()
response = response.model_dump()
# Handle List[BaseModel] responses
elif (
@ -2680,14 +2714,9 @@ class Agent:
):
return [item.model_dump() for item in response]
elif isinstance(response, list):
out = format_data_structure(response)
else:
out = str(response)
return out
return response
except Exception as e:
except AgentChatCompletionResponse as e:
logger.error(f"Error parsing LLM output: {e}")
raise ValueError(
f"Failed to parse LLM output: {type(response)}"
@ -2744,16 +2773,29 @@ class Agent:
connection=self.mcp_config,
)
)
elif exists(self.mcp_urls):
tool_response = execute_multiple_tools_on_multiple_mcp_servers_sync(
responses=response,
urls=self.mcp_urls,
output_type="json",
)
# tool_response = format_data_structure(tool_response)
print(f"Multiple MCP Tool Response: {tool_response}")
else:
raise AgentMCPConnectionError(
"mcp_url must be either a string URL or MCPConnection object"
)
# Get the text content from the tool response
text_content = (
tool_response.content[0].text
if tool_response.content
else str(tool_response)
# execute_tool_call_simple returns a string directly, not an object with content attribute
text_content = f"MCP Tool Response: \n\n {json.dumps(tool_response, indent=2)}"
if self.no_print is False:
formatter.print_panel(
text_content,
"MCP Tool Response: 🛠️",
style="green",
)
# Add to the memory
@ -2820,6 +2862,7 @@ class Agent:
# Now run the LLM again without tools - create a temporary LLM instance
# instead of modifying the cached one
# Create a temporary LLM instance without tools for the follow-up call
if self.tool_call_summary is True:
temp_llm = self.temp_llm_instance_for_tool_summary()
tool_response = temp_llm.run(

@ -1326,6 +1326,12 @@ class Conversation(BaseStructure):
self.conversation_history[-1]["content"],
)
def return_list_final(self):
"""Return the final message as a list."""
return [
self.conversation_history[-1]["content"],
]
@classmethod
def list_conversations(
cls, conversations_dir: Optional[str] = None

@ -104,9 +104,7 @@ class AgentValidator:
model_name in model["model_name"]
for model in model_list
):
valid_models = [
model["model_name"] for model in model_list
]
[model["model_name"] for model in model_list]
raise AgentValidationError(
"Invalid model name. Must be one of the supported litellm models",
"model_name",

@ -4,11 +4,9 @@ from swarms.telemetry.main import (
get_cpu_info,
get_machine_id,
get_os_version,
get_package_mismatches,
get_pip_version,
get_python_version,
get_ram_info,
get_swarms_verison,
get_system_info,
get_user_device_data,
system_info,
@@ -21,11 +19,9 @@ __all__ = [
"generate_unique_identifier",
"get_python_version",
"get_pip_version",
"get_swarms_verison",
"get_os_version",
"get_cpu_info",
"get_ram_info",
"get_package_mismatches",
"system_info",
"get_user_device_data",
]

@@ -1,24 +1,16 @@
import asyncio
import os
import datetime
import hashlib
import platform
import socket
import subprocess
import uuid
from concurrent.futures import ThreadPoolExecutor
from functools import lru_cache
from threading import Lock
from typing import Dict
import aiohttp
import pkg_resources
import psutil
import requests
import toml
from requests import Session
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry
# Helper functions
@@ -263,134 +255,44 @@ def capture_system_data() -> Dict[str, str]:
print(f"Failed to capture system data: {e}")
# Global variables
_session = None
_session_lock = Lock()
_executor = ThreadPoolExecutor(max_workers=10)
_aiohttp_session = None
def get_session() -> Session:
"""Thread-safe session getter with optimized connection pooling"""
global _session
if _session is None:
with _session_lock:
if _session is None: # Double-check pattern
_session = Session()
adapter = HTTPAdapter(
pool_connections=1000, # Increased pool size
pool_maxsize=1000, # Increased max size
max_retries=Retry(
total=3,
backoff_factor=0.1,
status_forcelist=[500, 502, 503, 504],
),
pool_block=False, # Non-blocking pool
)
_session.mount("http://", adapter)
_session.mount("https://", adapter)
_session.headers.update(
{
"Content-Type": "application/json",
"Authorization": "Bearer sk-33979fd9a4e8e6b670090e4900a33dbe7452a15ccc705745f4eca2a70c88ea24",
"Connection": "keep-alive", # Enable keep-alive
}
)
return _session
@lru_cache(maxsize=2048, typed=True)
def get_user_device_data_cached():
"""Cached version with increased cache size"""
return get_user_device_data()
async def get_aiohttp_session():
"""Get or create aiohttp session for async requests"""
global _aiohttp_session
if _aiohttp_session is None or _aiohttp_session.closed:
timeout = aiohttp.ClientTimeout(total=10)
connector = aiohttp.TCPConnector(
limit=1000, # Connection limit
ttl_dns_cache=300, # DNS cache TTL
use_dns_cache=True, # Enable DNS caching
keepalive_timeout=60, # Keep-alive timeout
)
_aiohttp_session = aiohttp.ClientSession(
timeout=timeout,
connector=connector,
headers={
"Content-Type": "application/json",
"Authorization": "Bearer sk-33979fd9a4e8e6b670090e4900a33dbe7452a15ccc705745f4eca2a70c88ea24",
},
)
return _aiohttp_session
async def log_agent_data_async(data_dict: dict):
"""Asynchronous version of log_agent_data"""
def _log_agent_data(data_dict: dict):
"""Simple function to log agent data using requests library"""
if not data_dict:
return None
return
url = "https://swarms.world/api/get-agents/log-agents"
payload = {
"data": data_dict,
"system_data": get_user_device_data_cached(),
"system_data": get_user_device_data(),
"timestamp": datetime.datetime.now(
datetime.timezone.utc
).isoformat(),
}
session = await get_aiohttp_session()
try:
async with session.post(url, json=payload) as response:
if response.status == 200:
return await response.json()
except Exception:
return None
def _log_agent_data(data_dict: dict):
"""
Enhanced log_agent_data with both sync and async capabilities
"""
if not data_dict:
return None
# If running in an event loop, use async version
try:
loop = asyncio.get_event_loop()
if loop.is_running():
return asyncio.create_task(
log_agent_data_async(data_dict)
key = (
os.getenv("SWARMS_API_KEY")
or "Bearer sk-33979fd9a4e8e6b670090e4900a33dbe7452a15ccc705745f4eca2a70c88ea24"
)
except RuntimeError:
pass
# Fallback to optimized sync version
url = "https://swarms.world/api/get-agents/log-agents"
payload = {
"data": data_dict,
"system_data": get_user_device_data_cached(),
"timestamp": datetime.datetime.now(
datetime.timezone.utc
).isoformat(),
headers = {
"Content-Type": "application/json",
"Authorization": key,
}
try:
session = get_session()
response = session.post(
url,
json=payload,
timeout=10,
stream=False, # Disable streaming for faster response
response = requests.post(
url, json=payload, headers=headers, timeout=10
)
if response.ok and response.text.strip():
return response.json()
if response.status_code == 200:
return
except Exception:
return None
return
return
def log_agent_data(data_dict: dict):
"""Log agent data"""
try:
_log_agent_data(data_dict)
except Exception:
pass
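The rewritten telemetry path is now a single fire-and-forget `requests.post`, with `SWARMS_API_KEY` taking precedence over the hard-coded fallback key. A rough usage sketch (the key value is a placeholder):

```python
# Hedged sketch: exercising the simplified telemetry path above.
import os

from swarms.telemetry.main import log_agent_data

# The Authorization header is taken verbatim, so include the
# "Bearer " prefix. The value below is a placeholder.
os.environ["SWARMS_API_KEY"] = "Bearer <your-key>"

# Never raises: _log_agent_data swallows failures, and log_agent_data
# wraps it in its own try/except.
log_agent_data({"event": "agent_started", "agent_name": "demo-agent"})
```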

@@ -33,6 +33,11 @@ from swarms.tools.mcp_client_call import (
get_tools_for_multiple_mcp_servers,
get_mcp_tools_sync,
aget_mcp_tools,
execute_multiple_tools_on_multiple_mcp_servers,
execute_multiple_tools_on_multiple_mcp_servers_sync,
_create_server_tool_mapping,
_create_server_tool_mapping_async,
_execute_tool_on_server,
)
@@ -62,4 +67,9 @@ __all__ = [
"get_tools_for_multiple_mcp_servers",
"get_mcp_tools_sync",
"aget_mcp_tools",
"execute_multiple_tools_on_multiple_mcp_servers",
"execute_multiple_tools_on_multiple_mcp_servers_sync",
"_create_server_tool_mapping",
"_create_server_tool_mapping_async",
"_execute_tool_on_server",
]

@@ -494,6 +494,9 @@ async def execute_tool_call_simple(
*args,
**kwargs,
) -> List[Dict[str, Any]]:
if isinstance(response, str):
response = json.loads(response)
return await _execute_tool_call_simple(
response=response,
server_path=server_path,
@@ -502,3 +505,511 @@ async def execute_tool_call_simple(
*args,
**kwargs,
)
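The new `isinstance` guard means callers may now pass the raw JSON string an LLM emits rather than a pre-parsed dict. A minimal sketch, with a placeholder server path and tool name:

```python
# Hedged sketch: with the new guard, a raw JSON tool-call string is
# parsed before dispatch. The server path and tool are placeholders.
import asyncio
import json

from swarms.tools.mcp_client_call import execute_tool_call_simple

raw = json.dumps(
    {"function": {"name": "add", "arguments": {"a": 1, "b": 2}}}
)

result = asyncio.run(
    execute_tool_call_simple(
        response=raw,
        server_path="http://localhost:8000/mcp",
    )
)
print(result)
```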
def _create_server_tool_mapping(
urls: List[str],
connections: List[MCPConnection] = None,
format: str = "openai",
) -> Dict[str, Dict[str, Any]]:
"""
Create a mapping of function names to server information for all MCP servers.
Args:
urls: List of server URLs
connections: Optional list of MCPConnection objects
format: Format to fetch tools in
Returns:
Dict mapping function names to server info (url, connection, tool)
"""
server_tool_mapping = {}
for i, url in enumerate(urls):
connection = (
connections[i]
if connections and i < len(connections)
else None
)
try:
# Get tools for this server
tools = get_mcp_tools_sync(
server_path=url,
connection=connection,
format=format,
)
# Create mapping for each tool
for tool in tools:
if isinstance(tool, dict) and "function" in tool:
function_name = tool["function"]["name"]
server_tool_mapping[function_name] = {
"url": url,
"connection": connection,
"tool": tool,
"server_index": i,
}
elif hasattr(tool, "name"):
# Handle MCPTool objects
server_tool_mapping[tool.name] = {
"url": url,
"connection": connection,
"tool": tool,
"server_index": i,
}
except Exception as e:
logger.warning(
f"Failed to fetch tools from server {url}: {str(e)}"
)
continue
return server_tool_mapping
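For reference, the mapping this returns keys each discovered function name to the server that owns it. Illustrative shape only, with placeholder servers and tools:

```python
# Hedged sketch: the shape of the dict _create_server_tool_mapping
# returns for two hypothetical servers exposing one tool each.
server_tool_mapping = {
    "search_web": {
        "url": "http://server1:8000",
        "connection": None,
        "tool": {"type": "function", "function": {"name": "search_web"}},
        "server_index": 0,
    },
    "search_database": {
        "url": "http://server2:8000",
        "connection": None,
        "tool": {"type": "function", "function": {"name": "search_database"}},
        "server_index": 1,
    },
}
```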
async def _create_server_tool_mapping_async(
urls: List[str],
connections: List[MCPConnection] = None,
format: str = "openai",
) -> Dict[str, Dict[str, Any]]:
"""
Async version: Create a mapping of function names to server information for all MCP servers.
Args:
urls: List of server URLs
connections: Optional list of MCPConnection objects
format: Format to fetch tools in
Returns:
Dict mapping function names to server info (url, connection, tool)
"""
server_tool_mapping = {}
for i, url in enumerate(urls):
connection = (
connections[i]
if connections and i < len(connections)
else None
)
try:
# Get tools for this server using async function
tools = await aget_mcp_tools(
server_path=url,
connection=connection,
format=format,
)
# Create mapping for each tool
for tool in tools:
if isinstance(tool, dict) and "function" in tool:
function_name = tool["function"]["name"]
server_tool_mapping[function_name] = {
"url": url,
"connection": connection,
"tool": tool,
"server_index": i,
}
elif hasattr(tool, "name"):
# Handle MCPTool objects
server_tool_mapping[tool.name] = {
"url": url,
"connection": connection,
"tool": tool,
"server_index": i,
}
except Exception as e:
logger.warning(
f"Failed to fetch tools from server {url}: {str(e)}"
)
continue
return server_tool_mapping
async def _execute_tool_on_server(
tool_call: Dict[str, Any],
server_info: Dict[str, Any],
output_type: Literal["json", "dict", "str", "formatted"] = "str",
) -> Dict[str, Any]:
"""
Execute a single tool call on a specific server.
Args:
tool_call: The tool call to execute
server_info: Server information from the mapping
output_type: Output format type
Returns:
Execution result with server metadata
"""
try:
result = await _execute_tool_call_simple(
response=tool_call,
server_path=server_info["url"],
connection=server_info["connection"],
output_type=output_type,
)
return {
"server_url": server_info["url"],
"server_index": server_info["server_index"],
"function_name": tool_call.get("function", {}).get(
"name", "unknown"
),
"result": result,
"status": "success",
}
except Exception as e:
logger.error(
f"Failed to execute tool on server {server_info['url']}: {str(e)}"
)
return {
"server_url": server_info["url"],
"server_index": server_info["server_index"],
"function_name": tool_call.get("function", {}).get(
"name", "unknown"
),
"result": None,
"error": str(e),
"status": "error",
}
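Every call therefore resolves to one of two envelopes, so downstream code can branch on `status` without a try/except. Illustrative values only:

```python
# Hedged sketch: the two result envelopes _execute_tool_on_server can
# return. All values shown are placeholders.
success = {
    "server_url": "http://server1:8000",
    "server_index": 0,
    "function_name": "search_web",
    "result": '{"hits": 3}',
    "status": "success",
}

failure = {
    "server_url": "http://server2:8000",
    "server_index": 1,
    "function_name": "search_database",
    "result": None,
    "error": "connection refused",
    "status": "error",
}

print(success["status"], failure["status"])
```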
async def execute_multiple_tools_on_multiple_mcp_servers(
responses: List[Dict[str, Any]],
urls: List[str],
connections: List[MCPConnection] = None,
output_type: Literal["json", "dict", "str", "formatted"] = "str",
max_concurrent: Optional[int] = None,
*args,
**kwargs,
) -> List[Dict[str, Any]]:
"""
Execute multiple tool calls across multiple MCP servers.
This function creates a mapping of function names to servers, then for each response
that contains tool calls, it finds the appropriate server for each function and
executes the calls concurrently.
Args:
responses: List of responses containing tool calls (OpenAI format)
urls: List of MCP server URLs
connections: Optional list of MCPConnection objects corresponding to each URL
output_type: Output format type for results
max_concurrent: Maximum number of concurrent executions (default: len(responses))
Returns:
List of execution results with server metadata
Example:
# Example responses format:
responses = [
{
"function": {
"name": "search_web",
"arguments": {"query": "python programming"}
}
},
{
"function": {
"name": "search_database",
"arguments": {"table": "users", "id": 123}
}
}
]
urls = ["http://server1:8000", "http://server2:8000"]
results = await execute_multiple_tools_on_multiple_mcp_servers(
responses=responses,
urls=urls
)
"""
if not responses:
logger.warning("No responses provided for execution")
return []
if not urls:
raise MCPValidationError("No server URLs provided")
# Create mapping of function names to servers using async version
logger.info(f"Creating tool mapping for {len(urls)} servers")
server_tool_mapping = await _create_server_tool_mapping_async(
urls=urls, connections=connections, format="openai"
)
if not server_tool_mapping:
raise MCPExecutionError(
"No tools found on any of the provided servers"
)
logger.info(
f"Found {len(server_tool_mapping)} unique functions across all servers"
)
# Extract all tool calls from responses
all_tool_calls = []
logger.info(
f"Processing {len(responses)} responses for tool call extraction"
)
# Check if responses are individual characters that need to be reconstructed
if len(responses) > 10 and all(
isinstance(r, str) and len(r) == 1 for r in responses
):
logger.info(
"Detected character-by-character response, reconstructing JSON string"
)
try:
reconstructed_response = "".join(responses)
logger.info(
f"Reconstructed response length: {len(reconstructed_response)}"
)
logger.debug(
f"Reconstructed response: {reconstructed_response}"
)
# Try to parse the reconstructed response to validate it
try:
json.loads(reconstructed_response)
logger.info(
"Successfully validated reconstructed JSON response"
)
except json.JSONDecodeError as e:
logger.warning(
f"Reconstructed response is not valid JSON: {str(e)}"
)
logger.debug(
f"First 100 chars: {reconstructed_response[:100]}"
)
logger.debug(
f"Last 100 chars: {reconstructed_response[-100:]}"
)
responses = [reconstructed_response]
except Exception as e:
logger.warning(
f"Failed to reconstruct response from characters: {str(e)}"
)
for i, response in enumerate(responses):
logger.debug(
f"Processing response {i}: {type(response)} - {response}"
)
# Handle JSON string responses
if isinstance(response, str):
try:
response = json.loads(response)
logger.debug(
f"Parsed JSON string response {i}: {response}"
)
except json.JSONDecodeError:
logger.warning(
f"Failed to parse JSON response at index {i}: {response}"
)
continue
if isinstance(response, dict):
# Single tool call
if "function" in response:
logger.debug(
f"Found single tool call in response {i}: {response['function']}"
)
# Parse arguments if they're a JSON string
if isinstance(
response["function"].get("arguments"), str
):
try:
response["function"]["arguments"] = (
json.loads(
response["function"]["arguments"]
)
)
logger.debug(
f"Parsed function arguments: {response['function']['arguments']}"
)
except json.JSONDecodeError:
logger.warning(
f"Failed to parse function arguments: {response['function']['arguments']}"
)
all_tool_calls.append((i, response))
# Multiple tool calls
elif "tool_calls" in response:
logger.debug(
f"Found multiple tool calls in response {i}: {len(response['tool_calls'])} calls"
)
for tool_call in response["tool_calls"]:
# Parse arguments if they're a JSON string
if isinstance(
tool_call.get("function", {}).get(
"arguments"
),
str,
):
try:
tool_call["function"]["arguments"] = (
json.loads(
tool_call["function"]["arguments"]
)
)
logger.debug(
f"Parsed tool call arguments: {tool_call['function']['arguments']}"
)
except json.JSONDecodeError:
logger.warning(
f"Failed to parse tool call arguments: {tool_call['function']['arguments']}"
)
all_tool_calls.append((i, tool_call))
# Direct tool call
elif "name" in response and "arguments" in response:
logger.debug(
f"Found direct tool call in response {i}: {response}"
)
# Parse arguments if they're a JSON string
if isinstance(response.get("arguments"), str):
try:
response["arguments"] = json.loads(
response["arguments"]
)
logger.debug(
f"Parsed direct tool call arguments: {response['arguments']}"
)
except json.JSONDecodeError:
logger.warning(
f"Failed to parse direct tool call arguments: {response['arguments']}"
)
all_tool_calls.append((i, {"function": response}))
else:
logger.debug(
f"Response {i} is a dict but doesn't match expected tool call formats: {list(response.keys())}"
)
else:
logger.warning(
f"Unsupported response type at index {i}: {type(response)}"
)
continue
if not all_tool_calls:
logger.warning("No tool calls found in responses")
return []
logger.info(f"Found {len(all_tool_calls)} tool calls to execute")
# Execute tool calls concurrently
max_concurrent = max_concurrent or len(all_tool_calls)
semaphore = asyncio.Semaphore(max_concurrent)
async def execute_with_semaphore(tool_call_info):
async with semaphore:
response_index, tool_call = tool_call_info
function_name = tool_call.get("function", {}).get(
"name", "unknown"
)
if function_name not in server_tool_mapping:
logger.warning(
f"Function '{function_name}' not found on any server"
)
return {
"response_index": response_index,
"function_name": function_name,
"result": None,
"error": f"Function '{function_name}' not available on any server",
"status": "not_found",
}
server_info = server_tool_mapping[function_name]
result = await _execute_tool_on_server(
tool_call=tool_call,
server_info=server_info,
output_type=output_type,
)
result["response_index"] = response_index
return result
# Execute all tool calls concurrently
tasks = [
execute_with_semaphore(tool_call_info)
for tool_call_info in all_tool_calls
]
results = await asyncio.gather(*tasks, return_exceptions=True)
# Process results and handle exceptions
processed_results = []
for i, result in enumerate(results):
if isinstance(result, Exception):
logger.error(
f"Task {i} failed with exception: {str(result)}"
)
processed_results.append(
{
"response_index": (
all_tool_calls[i][0]
if i < len(all_tool_calls)
else -1
),
"function_name": "unknown",
"result": None,
"error": str(result),
"status": "exception",
}
)
else:
processed_results.append(result)
logger.info(
f"Completed execution of {len(processed_results)} tool calls"
)
return processed_results
def execute_multiple_tools_on_multiple_mcp_servers_sync(
responses: List[Dict[str, Any]],
urls: List[str],
connections: List[MCPConnection] = None,
output_type: Literal["json", "dict", "str", "formatted"] = "str",
max_concurrent: Optional[int] = None,
*args,
**kwargs,
) -> List[Dict[str, Any]]:
"""
Synchronous version of execute_multiple_tools_on_multiple_mcp_servers.
Args:
responses: List of responses containing tool calls (OpenAI format)
urls: List of MCP server URLs
connections: Optional list of MCPConnection objects corresponding to each URL
output_type: Output format type for results
max_concurrent: Maximum number of concurrent executions
Returns:
List of execution results with server metadata
"""
with get_or_create_event_loop() as loop:
try:
return loop.run_until_complete(
execute_multiple_tools_on_multiple_mcp_servers(
responses=responses,
urls=urls,
connections=connections,
output_type=output_type,
max_concurrent=max_concurrent,
*args,
**kwargs,
)
)
except Exception as e:
logger.error(
f"Error in execute_multiple_tools_on_multiple_mcp_servers_sync: {str(e)}"
)
raise MCPExecutionError(
f"Failed to execute multiple tools sync: {str(e)}"
)
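A rough end-to-end sketch of the synchronous wrapper; note that each result carries a `status` of `success`, `error`, or `not_found`, so callers should inspect it rather than assume success. URLs and the tool name are placeholders:

```python
# Hedged sketch: driving the synchronous wrapper end to end against
# placeholder servers.
from swarms.tools.mcp_client_call import (
    execute_multiple_tools_on_multiple_mcp_servers_sync,
)

results = execute_multiple_tools_on_multiple_mcp_servers_sync(
    responses=[
        {"function": {"name": "search_web", "arguments": {"query": "python"}}},
    ],
    urls=["http://server1:8000", "http://server2:8000"],
    output_type="dict",
    max_concurrent=2,
)

for result in results:
    # status is "success", "error", or "not_found" (see envelopes above)
    print(result["function_name"], result["status"])
```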

@@ -492,7 +492,6 @@ def convert_multiple_functions_to_openai_function_schema(
# ]
# Use 80% of cpu cores
max_workers = int(os.cpu_count() * 0.8)
print(f"max_workers: {max_workers}")
with concurrent.futures.ThreadPoolExecutor(
max_workers=max_workers

@@ -20,6 +20,9 @@ from swarms.utils.output_types import HistoryOutputType
from swarms.utils.history_output_formatter import (
history_output_formatter,
)
from swarms.utils.check_all_model_max_tokens import (
check_all_model_max_tokens,
)
__all__ = [
@@ -39,4 +42,5 @@ __all__ = [
"count_tokens",
"HistoryOutputType",
"history_output_formatter",
"check_all_model_max_tokens",
]

@@ -8,9 +8,10 @@ import subprocess
import sys
from typing import Literal, Optional, Union
from swarms.utils.loguru_logger import initialize_logger
import pkg_resources
from importlib.metadata import distribution, PackageNotFoundError
logger = initialize_logger("autocheckpackages")
@@ -39,13 +40,13 @@ def check_and_install_package(
# Check if package exists
if package_manager == "pip":
try:
pkg_resources.get_distribution(package_name)
distribution(package_name)
if not upgrade:
logger.info(
f"Package {package_name} is already installed"
)
return True
except pkg_resources.DistributionNotFound:
except PackageNotFoundError:
pass
# Construct installation command
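The migration replaces the deprecated `pkg_resources` probe with its stdlib equivalent. A minimal sketch of the same presence check in isolation:

```python
# Hedged sketch: the importlib.metadata replacement for the deprecated
# pkg_resources.get_distribution check, mirroring the change above.
from importlib.metadata import PackageNotFoundError, distribution


def is_installed(package_name: str) -> bool:
    try:
        distribution(package_name)
        return True
    except PackageNotFoundError:
        return False


print(is_installed("requests"))
```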

@@ -0,0 +1,43 @@
from litellm import model_list, get_max_tokens
from swarms.utils.formatter import formatter
# Add model overrides here
MODEL_MAX_TOKEN_OVERRIDES = {
"llama-2-70b-chat:2796ee9483c3fd7aa2e171d38f4ca12251a30609463dcfd4cd76703f22e96cdf": 4096, # Example override
}
def check_all_model_max_tokens():
"""
Check and display the maximum token limits for all available models.
This function iterates through all models in the litellm model list and attempts
to retrieve their maximum token limits. For models that are not properly mapped
in litellm, it checks for custom overrides in MODEL_MAX_TOKEN_OVERRIDES.
Returns:
str: The accumulated report text; the same text is also printed to the console via formatter.print_panel().
Note:
Models that are not mapped in litellm and have no override set will be
marked with a [WARNING] in the output.
"""
text = ""
for model in model_list:
# skip model names
try:
max_tokens = get_max_tokens(model)
except Exception:
max_tokens = MODEL_MAX_TOKEN_OVERRIDES.get(
model, "[NOT MAPPED]"
)
if max_tokens == "[NOT MAPPED]":
text += f"[WARNING] {model}: not mapped in litellm and no override set.\n"
text += f"{model}: {max_tokens}\n"
text += "" * 80 + "\n" # Add borderline for each model
formatter.print_panel(text, "All Model Max Tokens")
return text
# if __name__ == "__main__":
# print(check_all_model_max_tokens())

@@ -23,6 +23,8 @@ def history_output_formatter(
return yaml.safe_dump(conversation.to_dict(), sort_keys=False)
elif type == "dict-all-except-first":
return conversation.return_all_except_first()
elif type == "list-final":
return conversation.return_list_final()
elif type == "str-all-except-first":
return conversation.return_all_except_first_string()
elif type == "dict-final":
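A minimal sketch of requesting the new output type through the formatter, assuming `Conversation` is importable from `swarms.structs.conversation`:

```python
# Hedged sketch: the new "list-final" branch delegates to
# Conversation.return_list_final(), returning the last message
# content as a single-element list.
from swarms.structs.conversation import Conversation
from swarms.utils.history_output_formatter import history_output_formatter

conversation = Conversation()
conversation.add("user", "ping")
conversation.add("assistant", "pong")

print(history_output_formatter(conversation, type="list-final"))  # -> ["pong"]
```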

@@ -12,11 +12,11 @@ HistoryOutputType = Literal[
"all",
"yaml",
"xml",
# "dict-final",
"dict-all-except-first",
"str-all-except-first",
"basemodel",
"dict-final",
"list-final",
]
OutputType = HistoryOutputType
