Merge branch 'kyegomez:master' into master

pull/410/head
Vyomakesh Dundigalla 11 months ago committed by GitHub
commit eae359d09b

@ -25,7 +25,7 @@ jobs:
- name: Build package
run: python -m build
- name: Publish package
uses: pypa/gh-action-pypi-publish@2f6f737ca5f74c637829c0f5c3acd0e29ea5e8bf
uses: pypa/gh-action-pypi-publish@e53eb8b103ffcb59469888563dc324e3c8ba6f06
with:
user: __token__
password: ${{ secrets.PYPI_API_TOKEN }}

.gitignore

@ -15,7 +15,7 @@ Unit Testing Agent_state.json
swarms/__pycache__
venv
.DS_Store
Cargo.lock
.DS_STORE
Cargo.lock
swarms/agents/.DS_Store

@ -1,14 +1,15 @@
[package]
name = "engine"
version = "0.1.0"
edition = "2018"
[lib]
name = "engine"
path = "runtime/concurrent_exec.rs"
crate-type = ["cdylib"]
name = "swarms-runtime" # The name of your project
version = "0.1.0" # The current version, adhering to semantic versioning
edition = "2021" # Specifies which edition of Rust you're using, e.g., 2018 or 2021
authors = ["Your Name <your.email@example.com>"] # Optional: specify the package authors
license = "MIT" # Optional: the license for your project
description = "A brief description of my project" # Optional: a short description of your project
[dependencies]
pyo3 = { version = "0.15", features = ["extension-module"] }
rayon = "1.5.1"
log = "0.4.14"
cpython = "0.5"
rayon = "1.5"
[dependencies.pyo3]
version = "0.20.3"
features = ["extension-module", "auto-initialize"]

Binary file not shown (image added, 40 KiB).

@ -68,11 +68,11 @@ The team has thousands of hours building and optimizing autonomous agents. Leade
Key milestones: get 80K framework users in January 2024, start contracts in target verticals, introduce commercial products in 2025 with various pricing models.
## Resources
### **Pre-Seed Pitch Deck**
### **Resources**
#### **Pre-Seed Pitch Deck**
- [Here is our pitch deck for our preseed round](https://drive.google.com/file/d/1c76gK5UIdrfN4JOSpSlvVBEOpzR9emWc/view?usp=sharing)
### **The Swarm Corporation Memo**
#### **The Swarm Corporation Memo**
To learn more about our mission, vision, plans for GTM, and much more please refer to the [Swarm Memo here](https://docs.google.com/document/d/1hS_nv_lFjCqLfnJBoF6ULY9roTbSgSuCkvXvSUSc7Lo/edit?usp=sharing)
@ -91,11 +91,14 @@ This section is dedicated entirely for corporate documents.
## **Product**
Swarms is an open-source Python framework that enables developers to build seamless, reliable, and scalable multi-agent orchestration through modularity, customization, and precision.
[Here is the official Swarms Github Page:](https://github.com/kyegomez/swarms)
- [Swarms Github Page:](https://github.com/kyegomez/swarms)
- [Swarms Memo](https://docs.google.com/document/d/1hS_nv_lFjCqLfnJBoF6ULY9roTbSgSuCkvXvSUSc7Lo/edit)
- [Swarms Project Board](https://github.com/users/kyegomez/projects/1)
- [Swarms Website](https://www.swarms.world/g)
### Product Growth Metrics
| Name | Description | Link |
|--------------------------b--------|---------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------|
|----------------------------------|---------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------|
| Total Downloads of all time | Total number of downloads for the product over its entire lifespan. | [![Downloads](https://static.pepy.tech/badge/swarms)](https://pepy.tech/project/swarms) |
| Downloads this month | Number of downloads for the product in the current month. | [![Downloads](https://static.pepy.tech/badge/swarms/month)](https://pepy.tech/project/swarms) |
| Total Downloads this week | Total number of downloads for the product in the current week. | [![Downloads](https://static.pepy.tech/badge/swarms/week)](https://pepy.tech/project/swarms) |
@ -107,4 +110,4 @@ Swarms is an open source framework for developers in python to enable seamless,
| Github Traffic Metrics | Metrics related to traffic, such as views and clones on Github. | [Github Traffic Metrics](https://github.com/kyegomez/swarms/graphs/traffic) |
| Issues with the framework | Current open issues for the product on Github. | [![GitHub issues](https://img.shields.io/github/issues/kyegomez/swarms)](https://github.com/kyegomez/swarms/issues) |
-------

@ -1,7 +1,110 @@
This page summarizes questions we were asked on [Discord](https://discord.gg/gnWRz88eym), Hacker News, and Reddit. Feel free to post a question to [Discord](https://discord.gg/gnWRz88eym) or open a discussion on our [Github Page](https://github.com/kyegomez) or hit us up directly: [kye@apac.ai](mailto:hello@swarms.ai).
### FAQ on Swarm Intelligence and Multi-Agent Systems
## 1. How is Swarms different from LangChain?
#### What is an agent in the context of AI and swarm intelligence?
Swarms is an open source alternative to LangChain and differs in its approach to creating LLM pipelines and DAGs. In addition to agents, it uses more general-purpose DAGs and pipelines. A close proxy might be *Airflow for LLMs*. Swarms still implements chain of thought logic for prompt tasks that use "tools" but it also supports any type of input / output (images, audio, etc.).
In artificial intelligence (AI), an agent refers to an LLM with some objective to accomplish.
In swarm intelligence, each agent interacts with other agents and possibly the environment to achieve complex collective behaviors or solve problems more efficiently than individual agents could on their own.
#### Why do you need Swarms at all?
Individual agents are limited by a vast array of issues such as context window loss, single task execution, hallucination, and no collaboration.
#### How does a swarm work?
A swarm works through the principles of decentralized control, local interactions, and simple rules followed by each agent. Unlike centralized systems, where a single entity dictates the behavior of all components, in a swarm, each agent makes its own decisions based on local information and interactions with nearby agents. These local interactions lead to the emergence of complex, organized behaviors or solutions at the collective level, enabling the swarm to tackle tasks efficiently.
#### Why do you need more agents in a swarm?
More agents in a swarm can enhance its problem-solving capabilities, resilience, and efficiency. With more agents:
- **Diversity and Specialization**: The swarm can leverage a wider range of skills, knowledge, and perspectives, allowing for more creative and effective solutions to complex problems.
- **Scalability**: Adding more agents can increase the swarm's capacity to handle larger tasks or multiple tasks simultaneously.
- **Robustness**: A larger number of agents enhances the system's redundancy and fault tolerance, as the failure of a few agents has a minimal impact on the overall performance of the swarm.
#### Isn't it more expensive to use more agents?
While deploying more agents can initially increase costs, especially in terms of computational resources, hosting, and potentially API usage, there are several factors and strategies that can mitigate these expenses:
- **Efficiency at Scale**: Larger swarms can often solve problems more quickly or effectively, reducing the overall computational time and resources required.
- **Optimization and Caching**: Implementing optimizations and caching strategies can reduce redundant computations, lowering the workload on individual agents and the overall system.
- **Dynamic Scaling**: Utilizing cloud services that offer dynamic scaling can ensure you only pay for the resources you need when you need them, optimizing cost-efficiency.
#### Can swarms make decisions better than individual agents?
Yes, swarms can make better decisions than individual agents for several reasons:
- **Collective Intelligence**: Swarms combine the knowledge and insights of multiple agents, leading to more informed and well-rounded decision-making processes.
- **Error Correction**: The collaborative nature of swarms allows for error checking and correction among agents, reducing the likelihood of mistakes.
- **Adaptability**: Swarms are highly adaptable to changing environments or requirements, as the collective can quickly reorganize or shift strategies based on new information.
#### How do agents in a swarm communicate?
Communication in a swarm can vary based on the design and purpose of the system but generally involves either direct or indirect interactions:
- **Direct Communication**: Agents exchange information directly through messaging, signals, or other communication protocols designed for the system.
- **Indirect Communication**: Agents influence each other through the environment, a method known as stigmergy. Actions by one agent alter the environment, which in turn influences the behavior of other agents.
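The two communication styles above can be sketched in a few lines of plain Python. The `ToyAgent` class, its `inbox`, and the `environment` dict below are illustrative stand-ins, not part of the swarms API:

```python
from collections import deque

class ToyAgent:
    def __init__(self, name):
        self.name = name
        self.inbox = deque()

    def send(self, other, message):
        # Direct communication: deliver a message straight to a peer's inbox
        other.inbox.append((self.name, message))

    def act(self, environment):
        # Indirect communication (stigmergy): read and modify shared state
        # that other agents will later observe
        seen = environment.get("trail", 0)
        environment["trail"] = seen + 1
        return seen

a, b = ToyAgent("a"), ToyAgent("b")
a.send(b, "hello")        # direct: b.inbox now holds ("a", "hello")

env = {}
a.act(env)                # a leaves a mark in the environment
b.act(env)                # b observes it and adds its own
```

In a real system the inbox would be a message bus or API call and the environment a shared store, but the distinction between the two channels is the same.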
#### Are swarms only useful in computational tasks?
While swarms are often associated with computational tasks, their applications extend far beyond. Swarms can be utilized in:
- **Robotics**: Coordinating multiple robots for tasks like search and rescue, exploration, or surveillance.
- **Environmental Monitoring**: Using sensor networks to monitor pollution, wildlife, or climate conditions.
- **Social Sciences**: Modeling social behaviors or economic systems to understand complex societal dynamics.
- **Healthcare**: Coordinating care strategies in hospital settings or managing pandemic responses through distributed data analysis.
#### How do you ensure the security of a swarm system?
Security in swarm systems involves:
- **Encryption**: Ensuring all communications between agents are encrypted to prevent unauthorized access or manipulation.
- **Authentication**: Implementing strict authentication mechanisms to verify the identity of each agent in the swarm.
- **Resilience to Attacks**: Designing the swarm to continue functioning effectively even if some agents are compromised or attacked, utilizing redundancy and fault tolerance strategies.
#### How do individual agents within a swarm share insights without direct learning mechanisms like reinforcement learning?
In the context of pre-trained Large Language Models (LLMs) that operate within a swarm, sharing insights typically involves explicit communication and data exchange protocols rather than direct learning mechanisms like reinforcement learning. Here's how it can work:
- **Shared Databases and Knowledge Bases**: Agents can write to and read from a shared database or knowledge base where insights, generated content, and relevant data are stored. This allows agents to benefit from the collective experience of the swarm by accessing information that other agents have contributed.
- **APIs for Information Exchange**: Custom APIs can facilitate the exchange of information between agents. Through these APIs, agents can request specific information or insights from others within the swarm, effectively sharing knowledge without direct learning.
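A shared knowledge base of the kind described above can be sketched as follows; in practice the store would be a vector database such as ChromaDB rather than the in-memory list and keyword search used here, and all names are illustrative:

```python
class SharedKnowledgeBase:
    """Toy shared store that agents write insights to and read from."""

    def __init__(self):
        self._entries = []

    def contribute(self, agent_name, insight):
        # Any agent can record what it has learned
        self._entries.append({"agent": agent_name, "insight": insight})

    def retrieve(self, keyword):
        # Naive substring search standing in for semantic retrieval
        return [e for e in self._entries if keyword in e["insight"]]

kb = SharedKnowledgeBase()
kb.contribute("researcher", "rate limits apply to the v2 endpoint")
kb.contribute("writer", "draft saved to shared drive")

hits = kb.retrieve("rate limits")  # other agents benefit from the insight
```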
#### How do you balance the autonomy of individual LLMs with the need for coherent collective behavior in a swarm?
Balancing autonomy with collective coherence in a swarm of LLMs involves:
- **Central Coordination Mechanism**: Implementing a lightweight central coordination mechanism that can assign tasks, distribute information, and collect outputs from individual LLMs. This ensures that while each LLM operates autonomously, their actions are aligned with the swarm's overall objectives.
- **Standardized Communication Protocols**: Developing standardized protocols for how LLMs communicate and share information ensures that even though each agent works autonomously, the information exchange remains coherent and aligned with the collective goals.
#### How do LLM swarms adapt to changing environments or tasks without machine learning techniques?
Adaptation in LLM swarms, without relying on machine learning techniques for dynamic learning, can be achieved through:
- **Dynamic Task Allocation**: A central system or distributed algorithm can dynamically allocate tasks to different LLMs based on the changing environment or requirements. This ensures that the most suitable LLMs are addressing tasks for which they are best suited as conditions change.
- **Pre-trained Versatility**: Utilizing a diverse set of pre-trained LLMs with different specialties or training data allows the swarm to select the most appropriate agent for a task as the requirements evolve.
- **In-Context Learning**: In-context learning is another mechanism LLM swarms can use to adapt to changing environments or tasks: relevant examples, corrections, and context gathered by the swarm are placed directly into an agent's prompt, allowing it to adjust its behavior at inference time without any weight updates.
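The dynamic task allocation idea above reduces to a routing function that matches tasks to agent specialties. This is a minimal sketch; the agent records and keyword-matching heuristic are assumptions for illustration, not the swarms implementation:

```python
def route_task(task, agents):
    """Pick the agent whose specialty keywords best match the task text."""
    def score(agent):
        return sum(kw in task.lower() for kw in agent["specialties"])
    best = max(agents, key=score)
    # Fall back to None when no agent matches at all
    return best["name"] if score(best) > 0 else None

agents = [
    {"name": "coder", "specialties": ["python", "bug", "refactor"]},
    {"name": "analyst", "specialties": ["data", "chart", "trend"]},
]

chosen = route_task("Fix the bug in the Python parser", agents)
```

A production router would score semantically (e.g. via embeddings) rather than by keyword, but the allocation loop is the same: score every agent against the incoming task, dispatch to the best fit, re-score as conditions change.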
#### Can LLM swarms operate in physical environments, or are they limited to digital spaces?
LLM swarms primarily operate in digital spaces, given their nature as software entities. However, they can interact with physical environments indirectly through interfaces with sensors, actuators, or other devices connected to the Internet of Things (IoT). For example, LLMs can process data from physical sensors and control devices based on their outputs, enabling applications like smart home management or autonomous vehicle navigation.
#### Without direct learning from each other, how do agents in a swarm improve over time?
Improvement over time in a swarm of pre-trained LLMs, without direct learning from each other, can be achieved through:
- **Human Feedback**: Incorporating feedback from human operators or users can guide adjustments to the usage patterns or selection criteria of LLMs within the swarm, optimizing performance based on observed outcomes.
- **Periodic Re-training and Updating**: The individual LLMs can be periodically re-trained or updated by their developers based on collective insights and feedback from their deployment within swarms. While this does not involve direct learning from each encounter, it allows the LLMs to improve over time based on aggregated experiences.
#### Conclusion
Swarms represent a powerful paradigm in AI, offering innovative solutions to complex, dynamic problems through collective intelligence and decentralized control. While challenges exist, particularly regarding cost and security, strategic design and management can leverage the strengths of swarm intelligence to achieve remarkable efficiency, adaptability, and robustness in a wide range of applications.

@ -0,0 +1,55 @@
# The Limits of Individual Agents
![Reliable Agents](docs/assets/img/reliabilitythrough.png)
Individual agents have pushed the boundaries of what machines can learn and accomplish. However, despite their impressive capabilities, these agents face inherent limitations that can hinder their effectiveness in complex, real-world applications. This blog explores the critical constraints of individual agents, such as context window limits, hallucination, single-task threading, and lack of collaboration, and illustrates how multi-agent collaboration can address these limitations. In short:
- Context Window Limits
- Single Task Execution
- Hallucination
- No collaboration
#### Context Window Limits
One of the most significant constraints of individual agents, particularly in the domain of language models, is the context window limit. This limitation refers to the maximum amount of information an agent can consider at any given time. For instance, many language models can only process a fixed number of tokens (words or characters) in a single inference, restricting their ability to understand and generate responses based on longer texts. This limitation can lead to a lack of coherence in longer compositions and an inability to maintain context in extended conversations or documents.
#### Hallucination
Hallucination in AI refers to the phenomenon where an agent generates information that is not grounded in the input data or real-world facts. This can manifest as making up facts, entities, or events that do not exist or are incorrect. Hallucinations pose a significant challenge in ensuring the reliability and trustworthiness of AI-generated content, particularly in critical applications such as news generation, academic research, and legal advice.
#### Single Task Threading
Individual agents are often designed to excel at specific tasks, leveraging their architecture and training data to optimize performance in a narrowly defined domain. However, this specialization can also be a drawback, as it limits the agent's ability to multitask or adapt to tasks that fall outside its primary domain. Single-task threading means an agent may excel in language translation but struggle with image recognition or vice versa, necessitating the deployment of multiple specialized agents for comprehensive AI solutions.
#### Lack of Collaboration
Traditional AI agents operate in isolation, processing inputs and generating outputs independently. This isolation limits their ability to leverage diverse perspectives, share knowledge, or build upon the insights of other agents. In complex problem-solving scenarios, where multiple facets of a problem need to be addressed simultaneously, this lack of collaboration can lead to suboptimal solutions or an inability to tackle multifaceted challenges effectively.
# The Elegant yet Simple Solution
## Multi-Agent Collaboration
Recognizing the limitations of individual agents, researchers and practitioners have explored the potential of multi-agent collaboration as a means to transcend these constraints. Multi-agent systems comprise several agents that can interact, communicate, and collaborate to achieve common goals or solve complex problems. This collaborative approach offers several advantages:
#### Overcoming Context Window Limits
By dividing a large task among multiple agents, each focusing on different segments of the problem, multi-agent systems can effectively overcome the context window limits of individual agents. For instance, in processing a long document, different agents could be responsible for understanding and analyzing different sections, pooling their insights to generate a coherent understanding of the entire text.
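The divide-and-pool pattern above is essentially map-reduce over text chunks. Here is a minimal sketch, where `summarize_chunk` stands in for a single agent call (a hypothetical function, not the swarms API):

```python
def split_into_chunks(text, max_tokens=100):
    """Split text into word-count-bounded chunks that fit one agent's context."""
    words = text.split()
    return [" ".join(words[i:i + max_tokens])
            for i in range(0, len(words), max_tokens)]

def summarize_with_swarm(text, summarize_chunk, chunk_size=100):
    # Map: each agent summarizes one chunk small enough for its context window
    partials = [summarize_chunk(c) for c in split_into_chunks(text, chunk_size)]
    # Reduce: a final agent (here, a simple join) merges the partial summaries
    return " ".join(partials)

# Demo with a trivial "agent" that keeps only a chunk's first word
result = summarize_with_swarm(
    "alpha beta gamma delta", lambda c: c.split()[0], chunk_size=2
)
```

Real chunking would count model tokens rather than words and the reduce step would itself be an agent, but the shape of the workaround is the same.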
#### Mitigating Hallucination
Through collaboration, agents can cross-verify facts and information, reducing the likelihood of hallucinations. If one agent generates a piece of information, other agents can provide checks and balances, verifying the accuracy against known data or through consensus mechanisms.
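A simple consensus mechanism of the kind described can be sketched as a majority vote over independent agent answers; each `agent` below is just a callable stand-in for an LLM query:

```python
from collections import Counter

def consensus_answer(question, agents):
    """Ask every agent and keep the most common answer plus its agreement rate."""
    answers = [agent(question) for agent in agents]
    winner, count = Counter(answers).most_common(1)[0]
    return winner, count / len(answers)

# Three toy agents; one "hallucinates" a different answer
agents = [lambda q: "Paris", lambda q: "Paris", lambda q: "Lyon"]
answer, agreement = consensus_answer("Capital of France?", agents)
```

Voting only helps when agent errors are uncorrelated; stronger schemes cross-check claims against a knowledge base rather than against each other.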
#### Enhancing Multitasking Capabilities
Multi-agent systems can tackle tasks that require a diverse set of skills by leveraging the specialization of individual agents. For example, in a complex project that involves both natural language processing and image analysis, one agent specialized in text can collaborate with another specialized in visual data, enabling a comprehensive approach to the task.
#### Facilitating Collaboration and Knowledge Sharing
Multi-agent collaboration inherently encourages the sharing of knowledge and insights, allowing agents to learn from each other and improve their collective performance. This can be particularly powerful in scenarios where iterative learning and adaptation are crucial, such as dynamic environments or tasks that evolve over time.
### Conclusion
While individual AI agents have made remarkable strides in various domains, their inherent limitations necessitate innovative approaches to unlock the full potential of artificial intelligence. Multi-agent collaboration emerges as a compelling solution, offering a pathway to transcend individual constraints through collective intelligence. By harnessing the power of collaborative AI, we can address more complex, multifaceted problems, paving the way for more versatile, efficient, and effective AI systems in the future.

@ -22,7 +22,8 @@ class Qdrant:
collection_name: str = "qdrant",
model_name: str = "BAAI/bge-small-en-v1.5",
https: bool = True,
): ...
):
...
```
### Constructor Parameters

@ -27,7 +27,6 @@ from swarms.tokenizers import BaseTokenizer
class SimpleTokenizer(BaseTokenizer):
def count_tokens(self, text: Union[str, List[dict]]) -> int:
if isinstance(text, str):
# Split text by spaces as a simple tokenization approach
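The hunk above cuts off before the method body. A self-contained sketch of the whitespace-splitting approach might look like the following (the real class subclasses `BaseTokenizer` from `swarms.tokenizers`; the `dict`-list branch here is an assumption about how chat messages would be counted):

```python
from typing import List, Union

class SimpleTokenizer:
    def count_tokens(self, text: Union[str, List[dict]]) -> int:
        if isinstance(text, str):
            # Split text by spaces as a simple tokenization approach
            return len(text.split())
        # For chat-style message lists, count words in each "content" field
        return sum(
            len(str(m.get("content", "")).split()) for m in text
        )

tok = SimpleTokenizer()
n = tok.count_tokens("hello world")
```

Word splitting undercounts real model tokens; it is a placeholder for a proper tokenizer such as `tiktoken`.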

@ -0,0 +1,53 @@
# Why Swarms?
The need for multiple agents to work together in artificial intelligence (AI) and particularly in the context of Large Language Models (LLMs) stems from several inherent limitations and challenges in handling complex, dynamic, and multifaceted tasks with single-agent systems. Collaborating with multiple agents offers a pathway to enhance computational efficiency, cognitive diversity, and problem-solving capabilities. This section delves into the rationale behind employing multi-agent systems and strategizes on overcoming the associated expenses, such as API bills and hosting costs.
### Why Multiple Agents Are Necessary
#### 1. **Cognitive Diversity**
Different agents can bring varied perspectives, knowledge bases, and problem-solving approaches to a task. This diversity is crucial in complex problem-solving scenarios where a single approach might not be sufficient. Cognitive diversity enhances creativity, leading to innovative solutions and the ability to tackle a broader range of problems.
#### 2. **Specialization and Expertise**
In many cases, tasks are too complex for a single agent to handle efficiently. By dividing the task among multiple specialized agents, each can focus on a segment where it excels, thereby increasing the overall efficiency and effectiveness of the solution. This approach leverages the expertise of individual agents to achieve superior performance in tasks that require multifaceted knowledge and skills.
#### 3. **Scalability and Flexibility**
Multi-agent systems can more easily scale to handle large-scale or evolving tasks. Adding more agents to the system can increase its capacity or capabilities, allowing it to adapt to larger workloads or new types of tasks. This scalability is essential in dynamic environments where the demand and nature of tasks can change rapidly.
#### 4. **Robustness and Redundancy**
Collaboration among multiple agents enhances the system's robustness by introducing redundancy. If one agent fails or encounters an error, others can compensate, ensuring the system remains operational. This redundancy is critical in mission-critical applications where failure is not an option.
### Overcoming Expenses with API Bills and Hosting
Deploying multiple agents, especially when relying on cloud-based services or APIs, can incur significant costs. Here are strategies to manage and reduce these expenses:
#### 1. **Optimize Agent Efficiency**
Before scaling up the number of agents, ensure each agent operates as efficiently as possible. This can involve refining algorithms, reducing unnecessary API calls, and optimizing data processing to minimize computational requirements and, consequently, the associated costs.
#### 2. **Use Open Source and Self-Hosted Solutions**
Where possible, leverage open-source models and technologies that can be self-hosted. While there is an initial investment in setting up the infrastructure, over time, self-hosting can significantly reduce costs related to API calls and reliance on third-party services.
#### 3. **Implement Intelligent Caching**
Caching results for frequently asked questions or common tasks can drastically reduce the need for repeated computations or API calls. Intelligent caching systems can determine what information to store and for how long, optimizing the balance between fresh data and computational savings.
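One way to implement the caching strategy above is a TTL cache wrapped around the expensive model call. This is a hedged sketch (`ask_model` is a hypothetical stand-in for an API-backed agent, and a real deployment would use a shared store like Redis rather than a per-process dict):

```python
import functools
import time

def cached_with_ttl(ttl_seconds):
    """Cache expensive agent/API calls, expiring entries after a TTL."""
    def decorator(fn):
        cache = {}

        @functools.wraps(fn)
        def wrapper(prompt):
            entry = cache.get(prompt)
            if entry and time.time() - entry[1] < ttl_seconds:
                return entry[0]          # cache hit: no API call, no cost
            result = fn(prompt)          # cache miss: pay for the call once
            cache[prompt] = (result, time.time())
            return result
        return wrapper
    return decorator

calls = 0

@cached_with_ttl(ttl_seconds=3600)
def ask_model(prompt):
    global calls
    calls += 1                           # stands in for a billed API request
    return f"answer to: {prompt}"

ask_model("What is a swarm?")
ask_model("What is a swarm?")            # served from cache; no second call
```

The TTL controls the freshness/cost trade-off the paragraph describes: longer TTLs save more API spend, shorter ones keep answers current.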
#### 4. **Dynamic Scaling and Load Balancing**
Use cloud services that offer dynamic scaling and load balancing to adjust the resources allocated based on the current demand. This ensures you're not paying for idle resources during low-usage periods while still being able to handle high demand when necessary.
#### 5. **Collaborative Cost-Sharing Models**
In scenarios where multiple stakeholders benefit from the multi-agent system, consider implementing a cost-sharing model. This approach distributes the financial burden among the users or beneficiaries, making it more sustainable.
#### 6. **Monitor and Analyze Costs**
Regularly monitor and analyze your usage and associated costs to identify potential savings. Many cloud providers offer tools to track and forecast expenses, helping you to adjust your usage patterns and configurations to minimize costs without sacrificing performance.
### Conclusion
The collaboration of multiple agents in AI systems presents a robust solution to the complexity, specialization, scalability, and robustness challenges inherent in single-agent approaches. While the associated costs can be significant, strategic optimization, leveraging open-source technologies, intelligent caching, dynamic resource management, collaborative cost-sharing, and diligent monitoring can mitigate these expenses. By adopting these strategies, organizations can harness the power of multi-agent systems to tackle complex problems more effectively and efficiently, ensuring the sustainable deployment of these advanced technologies.

@ -58,8 +58,9 @@ nav:
- Home:
- Overview: "index.md"
- Contributing: "contributing.md"
- Limitations of Individual Agents: "limits_of_individual_agents.md"
- Swarms:
- Overview: "swarms/index.md"
- Overview: "swarms/index.md"
- swarms.agents:
- Agents:
- WorkerAgent: "swarms/agents/workeragent.md"

@ -3,8 +3,7 @@ import os
from dotenv import load_dotenv
from swarms.models.gpt4_vision_api import GPT4VisionAPI
from swarms.structs import Agent
from swarms import Agent, GPT4VisionAPI
# Load the environment variables
load_dotenv()

@ -1,10 +1,11 @@
from swarms.agents.multion_agent import MultiOnAgent
import timeit
from swarms import Agent, ConcurrentWorkflow, Task
from swarms.utils.loguru_logger import logger
from swarms.agents.multion_agent import MultiOnAgent
# model
model = MultiOnAgent(multion_api_key="")
model = MultiOnAgent(multion_api_key="api-key")
# out = model.run("search for a recipe")
agent = Agent(
@ -15,27 +16,24 @@ agent = Agent(
system_prompt=None,
)
logger.info("[Agent][ID][MultiOnAgent][Initialized][Successfully")
# logger.info("[Agent][ID][MultiOnAgent][Initialized][Successfully")
# Task
task = Task(
agent=agent,
description=(
"send an email to vyom on superhuman for a partnership with"
" multion"
),
description="Download https://www.coachcamel.com/",
)
# Swarm
logger.info(
f"Running concurrent workflow with task: {task.description}"
)
# logger.info(
# f"Running concurrent workflow with task: {task.description}"
# )
# Measure execution time
start_time = timeit.default_timer()
workflow = ConcurrentWorkflow(
max_workers=1,
max_workers=20,
autosave=True,
print_results=True,
return_results=True,
@ -47,4 +45,5 @@ workflow.run()
# Calculate execution time
execution_time = timeit.default_timer() - start_time
logger.info(f"Execution time: {execution_time} seconds")
# logger.info(f"Execution time: {execution_time} seconds")
print(f"Execution time: {execution_time} seconds")

@ -0,0 +1,23 @@
import os
from dotenv import load_dotenv
from swarms.models import OpenAIChat
from swarms.structs import Agent
import swarms.prompts.autoswarm as sdsp
# Load environment variables and initialize the OpenAI Chat model
load_dotenv()
api_key = os.getenv("OPENAI_API_KEY")
llm = OpenAIChat(model_name = "gpt-4", openai_api_key=api_key)
user_idea = "screenplay writing team"
role_identification_agent = Agent(llm=llm, sop=sdsp.AGENT_ROLE_IDENTIFICATION_AGENT_PROMPT, max_loops=1)
agent_configuration_agent = Agent(llm=llm, sop=sdsp.AGENT_CONFIGURATION_AGENT_PROMPT, max_loops=1)
swarm_assembly_agent = Agent(llm=llm, sop=sdsp.SWARM_ASSEMBLY_AGENT_PROMPT, max_loops=1)
testing_optimization_agent = Agent(llm=llm, sop=sdsp.TESTING_OPTIMIZATION_AGENT_PROMPT, max_loops=1)
# Process the user idea through each agent
role_identification_output = role_identification_agent.run(user_idea)
agent_configuration_output = agent_configuration_agent.run(role_identification_output)
swarm_assembly_output = swarm_assembly_agent.run(agent_configuration_output)
testing_optimization_output = testing_optimization_agent.run(swarm_assembly_output)

@ -1,28 +0,0 @@
from swarms import HierarchicalSwarm
swarm = HierarchicalSwarm(
openai_api_key="key",
model_type="openai",
model_id="gpt-4",
use_vectorstore=False,
use_async=False,
human_in_the_loop=False,
logging_enabled=False,
)
# run the swarm with an objective
result = swarm.run("Design a new car")
# or huggingface
swarm = HierarchicalSwarm(
model_type="huggingface",
model_id="tiaueu/falcon",
use_vectorstore=True,
embedding_size=768,
use_async=False,
human_in_the_loop=True,
logging_enabled=False,
)
# Run the swarm with a particular objective
result = swarm.run("Write a sci-fi short story")

@ -1,11 +1,15 @@
from swarms.memory import chroma
from swarms.memory import ChromaDB
chromadbcl = chroma.ChromaClient()
chromadbcl.add_vectors(
["This is a document", "BONSAIIIIIII", "the walking dead"]
# Initialize the memory
chroma = ChromaDB(
metric="cosine",
limit_tokens=1000,
verbose=True,
)
results = chromadbcl.search_vectors("zombie", limit=1)
# Add text
text = "This is a test"
chroma.add(text)
print(results)
# Search for similar text
similar_text = chroma.query(text)

@ -14,7 +14,6 @@ docs = loader.load()
qdrant_client = qdrant.Qdrant(
host="https://697ea26c-2881-4e17-8af4-817fcb5862e8.europe-west3-0.gcp.cloud.qdrant.io",
collection_name="qdrant",
api_key="BhG2_yINqNU-aKovSEBadn69Zszhbo5uaqdJ6G_qDkdySjAljvuPqQ",
)
qdrant_client.add_vectors(docs)

@ -0,0 +1,10 @@
from swarms.models.azure_openai_llm import AzureOpenAI
# Initialize Azure OpenAI
model = AzureOpenAI()
# Run the model
model(
"Create a youtube script for a video on how to use the swarms"
" framework"
)

@ -0,0 +1,14 @@
from swarms import Agent, AzureOpenAI
## Initialize the workflow
agent = Agent(
llm=AzureOpenAI(),
max_loops="auto",
autosave=True,
dashboard=False,
streaming_on=True,
verbose=True,
)
# Run the workflow on a task
agent("Understand the risk profile of this account")

@ -1,11 +0,0 @@
from swarms import Orchestrator, Worker
# Instantiate the Orchestrator with 10 agents
orchestrator = Orchestrator(
Worker, agent_list=[Worker] * 10, task_queue=[]
)
# Agent 1 sends a message to Agent 2
orchestrator.chat(
sender_id=1, receiver_id=2, message="Hello, Agent 2!"
)

@ -1,5 +1,3 @@
# Example
import os
from dotenv import load_dotenv

@ -1,6 +1,5 @@
from swarms import DialogueSimulator, Worker
from swarms.models import OpenAIChat
from swarms.swarms import DialogueSimulator
from swarms.workers.worker import Worker
llm = OpenAIChat(
model_name="gpt-4", openai_api_key="api-key", temperature=0.5

@ -1,7 +1,14 @@
from swarms import swarm
from swarms import Agent, OpenAIChat
# Use the function
api_key = "APIKEY"
objective = "What is the capital of the UK?"
result = swarm(api_key, objective)
print(result) # Prints: "The capital of the UK is London."
## Initialize the workflow
agent = Agent(
llm=OpenAIChat(),
max_loops=1,
autosave=True,
dashboard=False,
streaming_on=True,
verbose=True,
)
# Run the workflow on a task
agent("Find a chick fil a equivalent in hayes valley")

@ -2,8 +2,8 @@ import os
from dotenv import load_dotenv
from swarms import ModelParallelizer
from swarms.models import Anthropic, Gemini, Mixtral, OpenAIChat
from swarms.swarms import ModelParallelizer
load_dotenv()

@ -1,6 +1,7 @@
import os
from dotenv import load_dotenv
from swarms import Agent, OpenAIChat
from swarms.agents.multion_agent import MultiOnAgent
from swarms.memory.chroma_db import ChromaDB

@ -1,6 +1,6 @@
from swarms import OpenAIChat
from swarms.structs.agent import Agent
from swarms.structs.message_pool import MessagePool
from swarms import OpenAIChat
agent1 = Agent(llm=OpenAIChat(), agent_name="agent1")
agent2 = Agent(llm=OpenAIChat(), agent_name="agent2")

@ -1,19 +0,0 @@
from swarms import Orchestrator, Worker
node = Worker(
openai_api_key="",
ai_name="Optimus Prime",
)
# Instantiate the Orchestrator with 10 agents
orchestrator = Orchestrator(
node, agent_list=[node] * 10, task_queue=[]
)
# Agent 7 sends a message to Agent 9
orchestrator.chat(
sender_id=7,
receiver_id=9,
message="Can you help me with this task?",
)

@ -44,5 +44,5 @@ workflow.add(tasks=[task1, task2])
workflow.run()
# # Output the results
# for task in workflow.tasks:
# print(f"Task: {task.description}, Result: {task.result}")
for task in workflow.tasks:
print(f"Task: {task.description}, Result: {task.result}")

@ -1,19 +0,0 @@
from ..swarms import HierarchicalSwarm
# Retrieve your API key from the environment or replace with your actual key
api_key = "sksdsds"
# Initialize HierarchicalSwarm with your API key
swarm = HierarchicalSwarm(openai_api_key=api_key)
# Define an objective
objective = """
Please develop and serve a simple community web service.
People can signup, login, post, comment.
Post and comment should be visible at once.
I want it to have neumorphism-style.
The ports you can use are 4500 and 6500.
"""
# Run HierarchicalSwarm
swarm.run(objective)

@ -1,16 +0,0 @@
from swarms import HierarchicalSwarm
# Retrieve your API key from the environment or replace with your actual key
api_key = ""
# Initialize HierarchicalSwarm with your API key
swarm = HierarchicalSwarm(api_key)
# Define an objective
objective = (
"Find 20 potential customers for a HierarchicalSwarm based AI"
" Agent automation infrastructure"
)
# Run HierarchicalSwarm
swarm.run(objective)

@ -1,19 +0,0 @@
from swarms import HierarchicalSwarm
# Retrieve your API key from the environment or replace with your actual key
api_key = "sksdsds"
# Initialize HierarchicalSwarm with your API key
swarm = HierarchicalSwarm(openai_api_key=api_key)
# Define an objective
objective = """
Please develop and serve a simple web TODO app.
The user can list all TODO items and add or delete each TODO item.
I want it to have neumorphism-style.
The ports you can use are 4500 and 6500.
"""
# Run HierarchicalSwarm
swarm.run(objective)

@ -1,19 +0,0 @@
from swarms.tools.tool import tool
from swarms.tools.tool_func_doc_scraper import scrape_tool_func_docs
@tool
def search_api(query: str) -> str:
"""Search API
Args:
query (str): _description_
Returns:
str: _description_
"""
print(f"Searching API for {query}")
tool_docs = scrape_tool_func_docs(search_api)
print(tool_docs)
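The snippet above relies on `scrape_tool_func_docs` to turn a decorated function into prompt-ready documentation. As a rough illustration of what such a scraper does (an assumed stand-in, not the actual swarms implementation), the name, signature, and docstring can be pulled with the standard `inspect` module:

```python
import inspect

def scrape_docs(fn):
    """Illustrative stand-in for scrape_tool_func_docs: collect the
    name, signature, and docstring that a tool decorator would inject
    into an agent prompt. The real swarms helper may format this
    differently."""
    return (
        f"Tool: {fn.__name__}{inspect.signature(fn)}\n"
        f"Docs: {inspect.getdoc(fn)}"
    )

def search_api(query: str) -> str:
    """Search the web for the query."""
    return f"Search results for {query}"

print(scrape_docs(search_api))
```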

@ -1,7 +0,0 @@
from swarms.models import OpenAIChat
from swarms.structs.workflow import Workflow
llm = OpenAIChat()
workflow = Workflow(llm)

@ -2,8 +2,8 @@ import os
from dotenv import load_dotenv
from swarms.models import OpenAIChat
from swarms.structs import Agent
from swarms import Agent, OpenAIChat
from swarms.tools.tool import tool
load_dotenv()
@ -12,24 +12,25 @@ api_key = os.environ.get("OPENAI_API_KEY")
llm = OpenAIChat(api_key=api_key)
# @tool
# def search_api(query: str) -> str:
# """Search API
# Args:
# query (str): _description_
@tool
def search_api(query: str) -> str:
"""Search API
# Returns:
# str: _description_
# """
# print(f"Searching API for {query}")
Args:
query (str): _description_
Returns:
str: _description_
"""
print(f"Searching API for {query}")
## Initialize the workflow
agent = Agent(
llm=llm,
max_loops=5,
# tools=[search_api],
tools=[search_api],
dashboard=True,
)

@ -1,22 +0,0 @@
from swarms.tools.tool import tool
from swarms.tools.tool_func_doc_scraper import scrape_tool_func_docs
# Define a tool by decorating a function with the tool decorator and providing a docstring
@tool(return_direct=True)
def search_api(query: str):
"""Search the web for the query
Args:
query (str): _description_
Returns:
_type_: _description_
"""
return f"Search results for {query}"
# Scrape the tool func docs to prepare for injection into the agent prompt
out = scrape_tool_func_docs(search_api)
print(out)

@ -0,0 +1,11 @@
import pandas as pd
from swarms import dataframe_to_text
# Example usage:
df = pd.DataFrame({
'A': [1, 2, 3],
'B': [4, 5, 6],
'C': [7, 8, 9],
})
print(dataframe_to_text(df))

@ -1,10 +0,0 @@
from swarms import Workflow
from swarms.models import ChatOpenAI
workflow = Workflow(ChatOpenAI)
workflow.add("What's the weather in miami")
workflow.add("Provide details for {{ parent_output }}")
workflow.add("Summarize the above information: {{ parent_output}}")
workflow.run()

@ -1,12 +1,11 @@
[build-system]
requires = ["poetry-core>=1.0.0", "maturin"]
requires = ["poetry-core>=1.0.0"]
build-backend = "poetry.core.masonry.api"
[tool.poetry]
name = "swarms"
version = "4.1.9"
version = "4.2.1"
description = "Swarms - Pytorch"
license = "MIT"
authors = ["Kye Gomez <kye@apac.ai>"]
@ -51,7 +50,7 @@ httpx = "0.24.1"
tiktoken = "0.4.0"
attrs = "22.2.0"
ratelimit = "2.2.1"
loguru = "*"
loguru = "0.7.2"
cohere = "4.24"
huggingface-hub = "*"
pydantic = "1.10.12"
@ -76,8 +75,6 @@ supervision = "*"
scikit-image = "*"
pinecone-client = "*"
roboflow = "*"
multion = "*"
[tool.poetry.group.lint.dependencies]

@ -16,8 +16,7 @@ sentencepiece==0.1.98
requests_mock
pypdf==4.0.1
accelerate==0.22.0
loguru
multion
loguru==0.7.2
chromadb
tensorflow
optimum
@ -33,7 +32,7 @@ einops==0.7.0
opencv-python-headless==4.8.1.78
numpy
openai==0.28.0
opencv-python==4.7.0.72
opencv-python==4.9.0.80
timm
yapf
autopep8
@ -43,7 +42,7 @@ rich==13.5.2
mkdocs
mkdocs-material
mkdocs-glightbox
pre-commit==3.2.2
pre-commit==3.6.2
peft
psutil
ultralytics

@ -0,0 +1,36 @@
from swarms import Agent, OpenAIChat, SequentialWorkflow
# Example usage
llm = OpenAIChat(
temperature=0.5,
max_tokens=3000,
)
# Initialize the Agent with the language agent
agent1 = Agent(
agent_name="John the writer",
llm=llm,
max_loops=1,
dashboard=False,
)
# Create another Agent for a different task
agent2 = Agent("Summarizer", llm=llm, max_loops=1, dashboard=False)
# Create the workflow
workflow = SequentialWorkflow(
name="Blog Generation Workflow",
description=(
"Generate a youtube transcript on how to deploy agents into"
" production"
),
max_loops=1,
autosave=True,
dashboard=False,
agents=[agent1, agent2],
)
# Run the workflow
workflow.run()

@ -0,0 +1,59 @@
use pyo3::prelude::*;
use pyo3::types::PyList;
use rayon::prelude::*;
use std::fs;
use std::time::Instant;
// Define the new execute function
fn exec_concurrently(script_path: &str, threads: usize) -> PyResult<()> {
(0..threads).into_par_iter().for_each(|_| {
Python::with_gil(|py| {
let sys = py.import("sys").unwrap();
let path: &PyList = match sys.getattr("path") {
Ok(path) => match path.downcast() {
Ok(path) => path,
Err(e) => {
eprintln!("Failed to downcast path: {:?}", e);
return;
}
},
Err(e) => {
eprintln!("Failed to get path attribute: {:?}", e);
return;
}
};
if let Err(e) = path.append("lib/python3.11/site-packages") {
eprintln!("Failed to append path: {:?}", e);
}
let script = fs::read_to_string(script_path).unwrap();
if let Err(e) = py.run(&script, None, None) {
    eprintln!("Script execution failed: {:?}", e);
}
});
});
Ok(())
}
fn main() -> PyResult<()> {
let args: Vec<String> = std::env::args().collect();
let threads = 20;
if args.len() < 2 {
eprintln!("Usage: {} <path_to_python_script>", args[0]);
std::process::exit(1);
}
let script_path = &args[1];
let start = Instant::now();
// Call the execute function
exec_concurrently(script_path, threads)?;
let duration = start.elapsed();
match fs::write("/tmp/elapsed.time", format!("booting time: {:?}", duration)) {
Ok(_) => println!("Successfully wrote elapsed time to /tmp/elapsed.time"),
Err(e) => eprintln!("Failed to write elapsed time: {:?}", e),
}
Ok(())
}
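For comparison, here is a rough Python analogue of the Rust runner above (`exec_concurrently` here is a sketch for illustration, not part of the swarms runtime). As with the pyo3 version, the GIL serializes pure-Python work across threads, so this mostly measures scheduling overhead rather than true parallelism:

```python
import concurrent.futures
import runpy
import tempfile
import time

def exec_concurrently(script_path: str, threads: int = 20) -> float:
    """Execute the same script on `threads` workers and return the
    elapsed time, mirroring the Rust runner's timing logic."""
    start = time.perf_counter()
    with concurrent.futures.ThreadPoolExecutor(max_workers=threads) as pool:
        futures = [pool.submit(runpy.run_path, script_path) for _ in range(threads)]
        for future in futures:
            future.result()  # re-raise any exception from the script
    return time.perf_counter() - start

# Demo with a trivial throwaway script
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("x = 1 + 1\n")
    demo_path = f.name
print(f"booting time: {exec_concurrently(demo_path, threads=4):.4f}s")
```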

@ -16,7 +16,6 @@ from swarms.agents.stopping_conditions import (
)
from swarms.agents.tool_agent import ToolAgent
from swarms.agents.worker_agent import Worker
from swarms.agents.multion_agent import MultiOnAgent
__all__ = [
"AbstractAgent",
@ -35,5 +34,4 @@ __all__ = [
"check_end",
"Worker",
"agent_wrapper",
"MultiOnAgent",
]

@ -1,8 +1,9 @@
import os
import multion
from dotenv import load_dotenv
from swarms.models.base_llm import AbstractLLM
from dotenv import load_dotenv
# Load environment variables
load_dotenv()
@ -37,13 +38,6 @@ class MultiOnAgent(AbstractLLM):
self.max_steps = max_steps
self.starting_url = starting_url
self.multion = multion.login(
use_api=True,
multion_api_key=str(multion_api_key),
*args,
**kwargs,
)
def run(self, task: str, *args, **kwargs):
"""
Runs a browsing task.
@ -56,7 +50,14 @@ class MultiOnAgent(AbstractLLM):
Returns:
dict: The response from the browsing task.
"""
response = self.multion.browse(
multion.login(
use_api=True,
multion_api_key=str(self.multion_api_key),
*args,
**kwargs,
)
response = multion.browse(
{
"cmd": task,
"url": self.starting_url,

@ -6,6 +6,7 @@ from langchain.docstore import InMemoryDocstore
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain_experimental.autonomous_agents import AutoGPT
from swarms.tools.tool import BaseTool
from swarms.utils.decorators import error_decorator, timing_decorator

@ -1,77 +0,0 @@
from __future__ import annotations
from dataclasses import dataclass
from pathlib import Path
from typing import IO
from pypdf import PdfReader
from swarms.utils.hash import str_to_hash
@dataclass
class TextArtifact:
text: str
@dataclass
class PDFLoader:
"""
A class for loading PDF files and extracting text artifacts.
Args:
tokenizer (str): The tokenizer to use for chunking the text.
max_tokens (int): The maximum number of tokens per chunk.
Methods:
load(source, password=None, *args, **kwargs):
Load a single PDF file and extract text artifacts.
load_collection(sources, password=None, *args, **kwargs):
Load a collection of PDF files and extract text artifacts.
Private Methods:
_load_pdf(stream, password=None):
Load a PDF file and extract text artifacts.
Attributes:
tokenizer (str): The tokenizer used for chunking the text.
max_tokens (int): The maximum number of tokens per chunk.
"""
tokenizer: str
max_tokens: int
def __post_init__(self):
self.chunker = PdfChunker(
tokenizer=self.tokenizer, max_tokens=self.max_tokens
)
def load(
self,
source: str | IO | Path,
password: str | None = None,
*args,
**kwargs,
) -> list[TextArtifact]:
return self._load_pdf(source, password)
def load_collection(
self,
sources: list[str | IO | Path],
password: str | None = None,
*args,
**kwargs,
) -> dict[str, list[TextArtifact]]:
return {
str_to_hash(str(s)): self._load_pdf(s, password)
for s in sources
}
def _load_pdf(
self, stream: str | IO | Path, password: str | None
) -> list[TextArtifact]:
reader = PdfReader(stream, strict=True, password=password)
return [
TextArtifact(text=p.extract_text()) for p in reader.pages
]

@ -1,9 +1,7 @@
from swarms.models.anthropic import Anthropic # noqa: E402
from swarms.models.base_embedding_model import BaseEmbeddingModel
from swarms.models.base_llm import AbstractLLM # noqa: E402
from swarms.models.base_multimodal_model import (
BaseMultiModalModel,
)
from swarms.models.base_multimodal_model import BaseMultiModalModel
# noqa: E402
from swarms.models.biogpt import BioGPT # noqa: E402
@ -15,9 +13,7 @@ from swarms.models.clipq import CLIPQ # noqa: E402
# from swarms.models.kosmos_two import Kosmos # noqa: E402
# from swarms.models.cog_agent import CogAgent # noqa: E402
## Function calling models
from swarms.models.fire_function import (
FireFunctionCaller,
)
from swarms.models.fire_function import FireFunctionCaller
from swarms.models.fuyu import Fuyu # noqa: E402
from swarms.models.gemini import Gemini # noqa: E402
from swarms.models.gigabind import Gigabind # noqa: E402
@ -25,9 +21,7 @@ from swarms.models.gpt4_vision_api import GPT4VisionAPI # noqa: E402
from swarms.models.huggingface import HuggingfaceLLM # noqa: E402
from swarms.models.idefics import Idefics # noqa: E402
from swarms.models.kosmos_two import Kosmos # noqa: E402
from swarms.models.layoutlm_document_qa import (
LayoutLMDocumentQA,
)
from swarms.models.layoutlm_document_qa import LayoutLMDocumentQA
# noqa: E402
from swarms.models.llava import LavaMultiModal # noqa: E402
@ -47,10 +41,7 @@ from swarms.models.petals import Petals # noqa: E402
from swarms.models.qwen import QwenVLMultiModal # noqa: E402
from swarms.models.roboflow_model import RoboflowMultiModal
from swarms.models.sam_supervision import SegmentAnythingMarkGenerator
from swarms.models.sampling_params import (
SamplingParams,
SamplingType,
)
from swarms.models.sampling_params import SamplingParams, SamplingType
from swarms.models.timm import TimmModel # noqa: E402
# from swarms.models.modelscope_pipeline import ModelScopePipeline
@ -67,15 +58,11 @@ from swarms.models.types import ( # noqa: E402
TextModality,
VideoModality,
)
from swarms.models.ultralytics_model import (
UltralyticsModel,
)
from swarms.models.ultralytics_model import UltralyticsModel
# noqa: E402
from swarms.models.vilt import Vilt # noqa: E402
from swarms.models.wizard_storytelling import (
WizardLLMStoryTeller,
)
from swarms.models.wizard_storytelling import WizardLLMStoryTeller
# noqa: E402
# from swarms.models.vllm import vLLM # noqa: E402

@ -0,0 +1,223 @@
from __future__ import annotations
import logging
import os
from typing import Any, Callable, Mapping
import openai
from langchain_core.pydantic_v1 import (
Field,
SecretStr,
root_validator,
)
from langchain_core.utils import (
convert_to_secret_str,
get_from_dict_or_env,
)
from langchain_openai.llms.base import BaseOpenAI
logger = logging.getLogger(__name__)
class AzureOpenAI(BaseOpenAI):
"""Azure-specific OpenAI large language models.
To use, you should have the ``openai`` python package installed, and the
environment variable ``OPENAI_API_KEY`` set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example:
.. code-block:: python
from swarms import AzureOpenAI
openai = AzureOpenAI(model_name="gpt-3.5-turbo-instruct")
"""
azure_endpoint: str | None = None
"""Your Azure endpoint, including the resource.
Automatically inferred from env var `AZURE_OPENAI_ENDPOINT` if not provided.
Example: `https://example-resource.azure.openai.com/`
"""
deployment_name: str | None = Field(
default=None, alias="azure_deployment"
)
"""A model deployment.
If given sets the base client URL to include `/deployments/{azure_deployment}`.
Note: this means you won't be able to use non-deployment endpoints.
"""
openai_api_version: str = Field(default="", alias="api_version")
"""Automatically inferred from env var `OPENAI_API_VERSION` if not provided."""
openai_api_key: SecretStr | None = Field(
default=None, alias="api_key"
)
"""Automatically inferred from env var `AZURE_OPENAI_API_KEY` if not provided."""
azure_ad_token: SecretStr | None = None
"""Your Azure Active Directory token.
Automatically inferred from env var `AZURE_OPENAI_AD_TOKEN` if not provided.
For more:
https://www.microsoft.com/en-us/security/business/identity-access/microsoft-entra-id.
""" # noqa: E501
azure_ad_token_provider: Callable[[], str] | None = None
"""A function that returns an Azure Active Directory token.
Will be invoked on every request.
"""
openai_api_type: str = ""
"""Legacy, for openai<1.0.0 support."""
validate_base_url: bool = True
"""For backwards compatibility. If legacy val openai_api_base is passed in, try to
infer if it is a base_url or azure_endpoint and update accordingly.
"""
@classmethod
def get_lc_namespace(cls) -> list[str]:
"""Get the namespace of the langchain object."""
return ["langchain", "llms", "openai"]
@root_validator()
def validate_environment(cls, values: dict) -> dict:
"""Validate that api key and python package exists in environment."""
if values["n"] < 1:
raise ValueError("n must be at least 1.")
if values["streaming"] and values["n"] > 1:
raise ValueError("Cannot stream results when n > 1.")
if values["streaming"] and values["best_of"] > 1:
raise ValueError(
"Cannot stream results when best_of > 1."
)
# Check OPENAI_KEY for backwards compatibility.
# TODO: Remove OPENAI_API_KEY support to avoid possible conflict when using
# other forms of azure credentials.
openai_api_key = (
values["openai_api_key"]
or os.getenv("AZURE_OPENAI_API_KEY")
or os.getenv("OPENAI_API_KEY")
)
values["openai_api_key"] = (
convert_to_secret_str(openai_api_key)
if openai_api_key
else None
)
values["azure_endpoint"] = values[
"azure_endpoint"
] or os.getenv("AZURE_OPENAI_ENDPOINT")
azure_ad_token = values["azure_ad_token"] or os.getenv(
"AZURE_OPENAI_AD_TOKEN"
)
values["azure_ad_token"] = (
convert_to_secret_str(azure_ad_token)
if azure_ad_token
else None
)
values["openai_api_base"] = values[
"openai_api_base"
] or os.getenv("OPENAI_API_BASE")
values["openai_proxy"] = get_from_dict_or_env(
values,
"openai_proxy",
"OPENAI_PROXY",
default="",
)
values["openai_organization"] = (
values["openai_organization"]
or os.getenv("OPENAI_ORG_ID")
or os.getenv("OPENAI_ORGANIZATION")
)
values["openai_api_version"] = values[
"openai_api_version"
] or os.getenv("OPENAI_API_VERSION")
values["openai_api_type"] = get_from_dict_or_env(
values,
"openai_api_type",
"OPENAI_API_TYPE",
default="azure",
)
# For backwards compatibility. Before openai v1, no distinction was made
# between azure_endpoint and base_url (openai_api_base).
openai_api_base = values["openai_api_base"]
if openai_api_base and values["validate_base_url"]:
if "/openai" not in openai_api_base:
values["openai_api_base"] = (
values["openai_api_base"].rstrip("/") + "/openai"
)
raise ValueError(
"As of openai>=1.0.0, Azure endpoints should be"
" specified via the `azure_endpoint` param not"
" `openai_api_base` (or alias `base_url`)."
)
if values["deployment_name"]:
raise ValueError(
"As of openai>=1.0.0, if `deployment_name` (or"
" alias `azure_deployment`) is specified then"
" `openai_api_base` (or alias `base_url`) should"
" not be. Instead use `deployment_name` (or alias"
" `azure_deployment`) and `azure_endpoint`."
)
values["deployment_name"] = None
client_params = {
"api_version": values["openai_api_version"],
"azure_endpoint": values["azure_endpoint"],
"azure_deployment": values["deployment_name"],
"api_key": (
values["openai_api_key"].get_secret_value()
if values["openai_api_key"]
else None
),
"azure_ad_token": (
values["azure_ad_token"].get_secret_value()
if values["azure_ad_token"]
else None
),
"azure_ad_token_provider": values[
"azure_ad_token_provider"
],
"organization": values["openai_organization"],
"base_url": values["openai_api_base"],
"timeout": values["request_timeout"],
"max_retries": values["max_retries"],
"default_headers": values["default_headers"],
"default_query": values["default_query"],
"http_client": values["http_client"],
}
values["client"] = openai.AzureOpenAI(
**client_params
).completions
values["async_client"] = openai.AsyncAzureOpenAI(
**client_params
).completions
return values
@property
def _identifying_params(self) -> Mapping[str, Any]:
return {
**{"deployment_name": self.deployment_name},
**super()._identifying_params,
}
@property
def _invocation_params(self) -> dict[str, Any]:
openai_params = {"model": self.deployment_name}
return {**openai_params, **super()._invocation_params}
@property
def _llm_type(self) -> str:
"""Return type of llm."""
return "azure"
@property
def lc_attributes(self) -> dict[str, Any]:
return {
"openai_api_type": self.openai_api_type,
"openai_api_version": self.openai_api_version,
}
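The `validate_environment` validator above resolves the API key by precedence: an explicitly passed value, then `AZURE_OPENAI_API_KEY`, then the legacy `OPENAI_API_KEY` fallback. That precedence can be sketched standalone (a simplified illustration; the real validator also wraps the result in a `SecretStr`):

```python
import os

def resolve_azure_api_key(explicit_key=None):
    """Standalone sketch of the credential precedence implemented in
    validate_environment: an explicitly passed key wins, then
    AZURE_OPENAI_API_KEY, then the legacy OPENAI_API_KEY fallback."""
    return (
        explicit_key
        or os.getenv("AZURE_OPENAI_API_KEY")
        or os.getenv("OPENAI_API_KEY")
    )
```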

@ -209,8 +209,6 @@ class CogVLMMultiModal(BaseMultiModalModel):
total_gb = total_bytes / (1 << 30)
if total_gb < 40:
pass
else:
pass
torch.cuda.empty_cache()
@ -462,7 +460,7 @@ class CogVLMMultiModal(BaseMultiModalModel):
elif role == "assistant":
if formatted_history:
if formatted_history[-1][1] != "":
assert False, (
raise AssertionError(
"the last query is answered. answer"
f" again. {formatted_history[-1][0]},"
f" {formatted_history[-1][1]},"
@ -473,9 +471,11 @@ class CogVLMMultiModal(BaseMultiModalModel):
text_content,
)
else:
assert False, "assistant reply before user"
raise AssertionError(
"assistant reply before user"
)
else:
assert False, f"unrecognized role: {role}"
raise AssertionError(f"unrecognized role: {role}")
return last_user_query, formatted_history, image_list
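The role checks above enforce strict user/assistant alternation in the conversation history. A condensed, standalone sketch of that validation logic (images and text-content extraction omitted for brevity; this is an illustration, not the actual method):

```python
def format_history(messages):
    """Build (query, answer) pairs from (role, text) messages, raising
    AssertionError on the same alternation violations the model guards
    against: an unanswered query answered twice, an assistant reply
    before any user turn, or an unrecognized role."""
    formatted_history = []  # list of (query, answer) pairs
    last_user_query = ""
    for role, text in messages:
        if role == "user":
            formatted_history.append((text, ""))
            last_user_query = text
        elif role == "assistant":
            if not formatted_history:
                raise AssertionError("assistant reply before user")
            if formatted_history[-1][1] != "":
                raise AssertionError("the last query is answered. answer again.")
            formatted_history[-1] = (formatted_history[-1][0], text)
        else:
            raise AssertionError(f"unrecognized role: {role}")
    return last_user_query, formatted_history
```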

@ -1,8 +1,10 @@
from transformers import AutoModelForCausalLM, AutoTokenizer
import json
from swarms.models.base_llm import AbstractLLM
from typing import Any
from transformers import AutoModelForCausalLM, AutoTokenizer
from swarms.models.base_llm import AbstractLLM
class FireFunctionCaller(AbstractLLM):
"""

@ -1,4 +1,5 @@
from unittest.mock import MagicMock
from swarms.models.fire_function import FireFunctionCaller

@ -0,0 +1,85 @@
# Prompt for Agent Role Identification Agent
AGENT_ROLE_IDENTIFICATION_AGENT_PROMPT = """
Based on the following idea: '{user_idea}', identify and list the specific types of agents needed for the team. Detail their roles, responsibilities, and capabilities.
Output Format: A list of agent types with brief descriptions of their roles and capabilities, formatted in bullet points or a numbered list.
"""
# Prompt for Agent Configuration Agent
AGENT_CONFIGURATION_AGENT_PROMPT = """
Given these identified agent roles: '{agent_roles}', write SOPs/System Prompts for each agent type. Ensure that each SOP/Prompt is tailored to the specific functionalities of the agent, considering the operational context and objectives of the swarm team.
Output Format: A single Python file of the whole agent team with capitalized constant names for each SOP/Prompt, an equal sign between each agent name and their SOP/Prompt, and triple quotes surrounding the Prompt/SOP content. Follow best-practice prompting standards.
"""
# Prompt for Swarm Assembly Agent
SWARM_ASSEMBLY_AGENT_PROMPT = """
With the following agent SOPs/Prompts: '{agent_sops}', your task is to create a production-ready Python script based on the SOPs generated for each agent type.
The script should be well-structured and production-ready. DO NOT use placeholders for any logic whatsoever; ensure the Python code is complete so that the user can
copy/paste it into VS Code and run it without issue. Here are some tips to consider:
1. **Import Statements**:
- Begin with necessary Python imports. Import the 'Agent' class from the 'swarms.structs' module.
- Import the language or vision model from 'swarms.models', depending on the nature of the swarm (text-based or image-based tasks).
- Import the SOPs for each agent type from swarms.prompts.(insert swarm team name here). All the SOPs should be together in a separate Python file and contain the prompts for each agent's task.
- Use os.getenv for the OpenAI API key.
2. **Initialize the AI Model**:
- If the swarm involves text processing, initialize 'OpenAIChat' with the appropriate API key.
- For image processing tasks, initialize 'GPT4VisionAPI' similarly.
- Ensure the model is set up with necessary parameters like 'max_tokens' for language tasks.
3. **Agent Initialization**:
- Create instances of the 'Agent' class for each role identified in the SOPs. Pass the corresponding SOP and the initialized AI model to each agent.
- Ensure each agent is given a descriptive name for clarity.
4. **Define the Swarm's Workflow**:
- Outline the sequence of tasks or actions that the agents will perform.
- Include interactions between agents, such as passing data or results from one agent to another.
- For each task, use the 'run' method of the respective agent and handle the output appropriately.
5. **Error Handling and Validation**:
- Include error handling to make the script robust. Use try-except blocks where appropriate.
- Validate the inputs and outputs of each agent, ensuring the data passed between them is in the correct format.
6. **User Instructions and Documentation**:
- Comment the script thoroughly to explain what each part does. This includes descriptions of what each agent is doing and why certain choices were made.
- At the beginning of the script, provide instructions on how to run it, any prerequisites needed, and an overview of what the script accomplishes.
Output Format: A complete Python script that is ready for copy/paste to GitHub and demo execution. It should be formatted with complete logic, proper indentation, clear variable names, and comments.
Here is an example of a working swarm script that you can use as a rough template for the logic:
import os
from dotenv import load_dotenv
from swarms.models import OpenAIChat
from swarms.structs import Agent
import swarms.prompts.swarm_daddy as sdsp
# Load environment variables and initialize the OpenAI Chat model
load_dotenv()
api_key = os.getenv("OPENAI_API_KEY")
llm = OpenAIChat(model_name = "gpt-4", openai_api_key=api_key)
user_idea = "screenplay writing"
#idea_analysis_agent = Agent(llm=llm, sop=sdsp.IDEA_ANALYSIS_AGENT_PROMPT, max_loops=1)
role_identification_agent = Agent(llm=llm, sop=sdsp.AGENT_ROLE_IDENTIFICATION_AGENT_PROMPT, max_loops=1)
agent_configuration_agent = Agent(llm=llm, sop=sdsp.AGENT_CONFIGURATION_AGENT_PROMPT, max_loops=1)
swarm_assembly_agent = Agent(llm=llm, sop=sdsp.SWARM_ASSEMBLY_AGENT_PROMPT, max_loops=1)
testing_optimization_agent = Agent(llm=llm, sop=sdsp.TESTING_OPTIMIZATION_AGENT_PROMPT, max_loops=1)
# Process the user idea through each agent
# idea_analysis_output = idea_analysis_agent.run(user_idea)
role_identification_output = role_identification_agent.run(user_idea)
agent_configuration_output = agent_configuration_agent.run(role_identification_output)
swarm_assembly_output = swarm_assembly_agent.run(agent_configuration_output)
testing_optimization_output = testing_optimization_agent.run(swarm_assembly_output)
"""
# Prompt for Testing and Optimization Agent
TESTING_OPTIMIZATION_AGENT_PROMPT = """
Review this Python script for swarm demonstration: '{swarm_script}'. Create a testing and optimization plan that includes methods for validating each agent's functionality and the overall performance of the swarm. Suggest improvements for efficiency and effectiveness.
Output Format: A structured plan in a textual format, outlining testing methodologies, key performance metrics, and optimization strategies.
"""
# This file can be imported in the main script to access the prompts.

@ -0,0 +1,64 @@
# prompts.py
# Analyze the user's idea to extract key concepts, requirements, and desired outcomes
IDEA_INTAKE_PROMPT = """
Analyze and expand upon the user's idea, extracting key concepts, requirements, and desired outcomes. Represent the user's idea in a highly detailed structured format, including key features, constraints, and desired outcomes. Idea: {idea}
"""
# Develop a high-level plan for the codebase, including directory structure and file organization
CODEBASE_PLANNING_PROMPT = """
Develop a high-level plan for the codebase, including directory structure and file organization. Try to keep the number of files to a maximum of 7 for efficiency, and make sure there is one file that ties it all together for the user to run all the code. Design the software architecture to determine the overall structure
of the codebase based on the following requirements: {requirements}
"""
# Translate the high-level codebase plan into specific, actionable development tasks
TASK_PLANNING_PROMPT = """
Translate the high-level codebase plan into specific, actionable development tasks. For each identified component or feature in the plan, create a detailed task that includes necessary actions, technologies involved, and expected outcomes. Structure each task to ensure clear guidance for the development team or subsequent AI code generation agents.
High-Level Codebase Plan: {codebase_plan}
Guidelines for Task Planning:
- Identify distinct components or features from the codebase plan.
- For each component or feature, specify the development tasks required.
- Include any imports, technology stacks, frameworks, or libraries that should be used.
- Detail the expected outcomes or objectives for each task.
- Format the tasks as structured data for easy parsing and automation.
"""
# Generate individual code files based on the detailed task descriptions
FILE_WRITING_PROMPT = """
Generate individual code files based on the codebase plan. Write code in the specified programming language using code-generation
best practices. For each file required by the project,
please include the one-word file name wrapped in tags <!--START_FILE_PATH--> and <!--END_FILE_PATH-->, followed by the file content wrapped in
<!--START_CONTENT--> and <!--END_CONTENT--> tags. Ensure each file's details are clearly separated. Here are the details: {details}
"""
# Analyze the generated code for correctness, efficiency, and adherence to best practices
CODE_REVIEW_PROMPT = """
Analyze the generated code for correctness, efficiency, and adherence to best practices. Meticulously review the codebase to find any errors, bugs, missing imports, improper integration, or broken logic. Output a detailed list of improvements for our engineering team, including all issues (ESPECIALLY import issues) and how to fix them. Here is the code: {code}.
"""
# Refactor the generated code to improve its structure, maintainability, and extensibility
CODE_REFACTORING_PROMPT = """
Given the code provided, refactor it to improve its structure, maintainability, and extensibility. Ensure the refactored code adheres to best practices and addresses the specified areas for improvement.
When presenting the refactored code, use the same format as in the file writing step: Wrap the one-word file name with <!--START_FILE_PATH--> and <!--END_FILE_PATH--> tags, and enclose the file content with <!--START_CONTENT--> and <!--END_CONTENT--> tags. ENSURE that the end of your output contains an "<!--END_CONTENT-->" tag. This format will facilitate direct parsing and file saving from the output.
Areas to improve: {improvements}
The code to refactor:
{code}
Note: The expectation is that the refactored code will be structured and tagged appropriately for automated parsing and saving as individual code files.
"""
# Push the final codebase to a GitHub repository, managing code changes and revisions
GITHUB_PUSH_PROMPT = """
Push the final codebase to a GitHub repository. Manage code changes and maintain a history of revisions using version control integration. Here are the final changes: {changes}
"""
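`FILE_WRITING_PROMPT` and `CODE_REFACTORING_PROMPT` both rely on the `<!--START_FILE_PATH-->`/`<!--END_FILE_PATH-->` and `<!--START_CONTENT-->`/`<!--END_CONTENT-->` tag convention so that output can be parsed and saved automatically. A minimal parser for that convention might look like this (a sketch of the downstream consumer, not code from the repository):

```python
import re

def parse_generated_files(output: str) -> dict:
    """Extract {file_path: file_content} pairs from model output that
    follows the FILE_WRITING_PROMPT tag convention."""
    pattern = re.compile(
        r"<!--START_FILE_PATH-->\s*(?P<path>.+?)\s*<!--END_FILE_PATH-->"
        r"\s*<!--START_CONTENT-->\s*(?P<content>.*?)\s*<!--END_CONTENT-->",
        re.DOTALL,
    )
    return {m.group("path"): m.group("content") for m in pattern.finditer(output)}
```

Each parsed entry could then be written to disk with `open(path, "w")` before review or version control.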

@ -8,6 +8,7 @@ import time
import uuid
from typing import Any, Callable, Dict, List, Optional, Tuple
import yaml
from loguru import logger
from termcolor import colored
@ -31,7 +32,6 @@ from swarms.utils.video_to_frames import (
save_frames_as_images,
video_to_frames,
)
import yaml
# Utils
@ -671,9 +671,9 @@ class Agent:
):
break
if self.parse_done_token:
if parse_done_token(response):
break
# if self.parse_done_token:
# if parse_done_token(response):
# break
if self.stopping_func is not None:
if self.stopping_func(response) is True:
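With the token parsing commented out, the loop now relies on `stopping_func`, a plain predicate over the latest response. A minimal example (the `<DONE>` marker is an arbitrary token chosen for illustration, not a swarms convention):

```python
def stop_on_done_token(response: str) -> bool:
    """Example stopping_func for the agent loop: halt once the model
    emits a terminal marker in its response."""
    return "<DONE>" in response
```

It would then be supplied to the agent, e.g. `Agent(llm=llm, stopping_func=stop_on_done_token)`, assuming the constructor accepts the attribute checked in the loop above.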

@ -2,6 +2,7 @@ import asyncio
from dataclasses import dataclass, field
from typing import Any, Callable, List, Optional
from swarms.structs.agent import Agent
from swarms.structs.task import Task
from swarms.utils.logger import logger
@ -42,6 +43,7 @@ class AsyncWorkflow:
results: List[Any] = field(default_factory=list)
loop: Optional[asyncio.AbstractEventLoop] = None
stopping_condition: Optional[Callable] = None
agents: List[Agent] = None
async def add(self, task: Any = None, tasks: List[Any] = None):
"""Add tasks to the workflow"""

@ -3,8 +3,10 @@ from typing import Any, Dict, List, Optional
from termcolor import colored
from swarms.structs.agent import Agent
from swarms.structs.base import BaseStructure
from swarms.structs.task import Task
from swarms.utils.loguru_logger import logger
class BaseWorkflow(BaseStructure):
@ -14,18 +16,27 @@ class BaseWorkflow(BaseStructure):
Attributes:
task_pool (list): A list to store tasks.
Methods:
add(task: Task = None, tasks: List[Task] = None, *args, **kwargs):
Adds a task or a list of tasks to the task pool.
run():
Abstract method to run the workflow.
"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.task_pool = []
self.agent_pool = []
# Logging
logger.info("Number of agents activated:")
if self.agents:
logger.info(f"Agents: {len(self.agents)}")
else:
logger.info("No agents activated.")
def add(
if self.task_pool:
logger.info(f"Task Pool Size: {len(self.task_pool)}")
else:
logger.info("Task Pool is empty.")
def add_task(
self,
task: Task = None,
tasks: List[Task] = None,
@ -51,6 +62,9 @@ class BaseWorkflow(BaseStructure):
"You must provide a task or a list of tasks"
)
def add_agent(self, agent: Agent, *args, **kwargs):
self.agent_pool.append(agent)
def run(self):
"""
Abstract method to run the workflow.
@ -318,3 +332,55 @@ class BaseWorkflow(BaseStructure):
"red",
)
)
def workflow_dashboard(self, **kwargs) -> None:
"""
Displays a dashboard for the workflow.
Args:
**kwargs: Additional keyword arguments to pass to the dashboard.
Examples:
>>> from swarms.models import OpenAIChat
>>> from swarms.structs import SequentialWorkflow
>>> llm = OpenAIChat(openai_api_key="")
>>> workflow = SequentialWorkflow(max_loops=1)
>>> workflow.add("What's the weather in miami", llm)
>>> workflow.add("Create a report on these metrics", llm)
>>> workflow.workflow_dashboard()
"""
print(
colored(
f"""
Sequential Workflow Dashboard
--------------------------------
Name: {self.name}
Description: {self.description}
task_pool: {len(self.task_pool)}
Max Loops: {self.max_loops}
Autosave: {self.autosave}
Autosave Filepath: {self.saved_state_filepath}
Restore Filepath: {self.restore_state_filepath}
--------------------------------
Metadata:
kwargs: {kwargs}
""",
"cyan",
attrs=["bold", "underline"],
)
)
def workflow_bootup(self, **kwargs) -> None:
"""
Workflow bootup.
"""
print(
colored(
"""
Sequential Workflow Initializing...""",
"green",
attrs=["bold", "underline"],
)
)
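The task/agent pool pattern introduced in this hunk can be sketched with stand-in classes (the real swarms `Agent` and `Task` have richer constructors that are not reproduced here; these stubs only illustrate how `add_task` and `add_agent` populate the pools):

```python
# Stand-in classes illustrating the BaseWorkflow pool API above.
class StubTask:
    def __init__(self, description: str):
        self.description = description

class StubWorkflow:
    def __init__(self):
        self.task_pool = []
        self.agent_pool = []

    def add_task(self, task=None, tasks=None):
        # Mirrors the add_task contract: a single task, a list, or an error.
        if task:
            self.task_pool.append(task)
        elif tasks:
            self.task_pool.extend(tasks)
        else:
            raise ValueError("You must provide a task or a list of tasks")

    def add_agent(self, agent):
        # agent_pool is a plain list, so agents are appended to it.
        self.agent_pool.append(agent)

wf = StubWorkflow()
wf.add_task(task=StubTask("summarize report"))
wf.add_task(tasks=[StubTask("draft email"), StubTask("review code")])
print(len(wf.task_pool))  # 3
```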

@ -1,15 +1,15 @@
import asyncio
import concurrent.futures
import re
import sys
from collections import Counter
from multiprocessing import Pool
from typing import Any, List
from swarms.structs.agent import Agent
from swarms.structs.conversation import Conversation
from loguru import logger
import sys
from swarms.structs.agent import Agent
from swarms.structs.conversation import Conversation
# Configure loguru logger with advanced settings
logger.remove()

@ -1,11 +1,13 @@
import json
from dataclasses import dataclass, field
from dataclasses import dataclass
from typing import Any, Dict, List, Optional
from termcolor import colored
# from swarms.utils.logger import logger
from swarms.structs.agent import Agent
from swarms.structs.conversation import Conversation
from swarms.structs.task import Task
from swarms.utils.logger import logger
from swarms.utils.loguru_logger import logger
# SequentialWorkflow class definition using dataclasses
@ -39,7 +41,7 @@ class SequentialWorkflow:
name: str = None
description: str = None
task_pool: List[Task] = field(default_factory=list)
task_pool: List[Task] = None
max_loops: int = 1
autosave: bool = False
saved_state_filepath: Optional[str] = (
@ -47,6 +49,26 @@ class SequentialWorkflow:
)
restore_state_filepath: Optional[str] = None
dashboard: bool = False
agents: List[Agent] = None
def __post_init__(self):
self.conversation = Conversation(
system_prompt=f"Objective: {self.description}",
time_enabled=True,
autosave=True,
)
# Logging
logger.info("Number of agents activated:")
if self.agents:
logger.info(f"Agents: {len(self.agents)}")
else:
logger.info("No agents activated.")
if self.task_pool:
logger.info(f"Task Pool Size: {len(self.task_pool)}")
else:
logger.info("Task Pool is empty.")
def add(
self,
@ -65,35 +87,43 @@ class SequentialWorkflow:
*args: Additional arguments to pass to the task execution.
**kwargs: Additional keyword arguments to pass to the task execution.
"""
try:
# If the agent is a Task instance, we include the task in kwargs for Agent.run()
# Append the task to the task_pool list
if task:
self.task_pool.append(task)
logger.info(
f"[INFO][SequentialWorkflow] Added task {task} to"
" workflow"
)
elif tasks:
for task in tasks:
self.task_pool.append(task)
logger.info(
"[INFO][SequentialWorkflow] Added task"
f" {task} to workflow"
)
else:
if task and tasks is not None:
# Add the task and list of tasks to the task_pool at the same time
self.task_pool.append(task)
for task in tasks:
self.task_pool.append(task)
except Exception as error:
logger.error(
colored(
f"Error adding task to workflow: {error}", "red"
),
)
for agent in self.agents:
out = agent(str(self.description))
self.conversation.add(agent.agent_name, out)
prompt = self.conversation.return_history_as_string()
out = agent(prompt)
return out
# try:
# # If the agent is a Task instance, we include the task in kwargs for Agent.run()
# # Append the task to the task_pool list
# if task:
# self.task_pool.append(task)
# logger.info(
# f"[INFO][SequentialWorkflow] Added task {task} to"
# " workflow"
# )
# elif tasks:
# for task in tasks:
# self.task_pool.append(task)
# logger.info(
# "[INFO][SequentialWorkflow] Added task"
# f" {task} to workflow"
# )
# else:
# if task and tasks is not None:
# # Add the task and list of tasks to the task_pool at the same time
# self.task_pool.append(task)
# for task in tasks:
# self.task_pool.append(task)
# except Exception as error:
# logger.error(
# colored(
# f"Error adding task to workflow: {error}", "red"
# ),
# )
def reset_workflow(self) -> None:
"""Resets the workflow by clearing the results of each task."""
@ -144,217 +174,65 @@ class SequentialWorkflow:
),
)
def save_workflow_state(
self,
filepath: Optional[str] = "sequential_workflow_state.json",
**kwargs,
) -> None:
"""
Saves the workflow state to a json file.
Args:
filepath (str): The path to save the workflow state to.
Examples:
>>> from swarms.models import OpenAIChat
>>> from swarms.structs import SequentialWorkflow
>>> llm = OpenAIChat(openai_api_key="")
>>> workflow = SequentialWorkflow(max_loops=1)
>>> workflow.add("What's the weather in miami", llm)
>>> workflow.add("Create a report on these metrics", llm)
>>> workflow.save_workflow_state("sequential_workflow_state.json")
"""
try:
filepath = filepath or self.saved_state_filepath
with open(filepath, "w") as f:
# Saving the state as JSON for simplicity
state = {
"task_pool": [
{
"description": task.description,
"args": task.args,
"kwargs": task.kwargs,
"result": task.result,
"history": task.history,
}
for task in self.task_pool
],
"max_loops": self.max_loops,
}
json.dump(state, f, indent=4)
logger.info(
"[INFO][SequentialWorkflow] Saved workflow state to"
f" {filepath}"
)
except Exception as error:
logger.error(
colored(
f"Error saving workflow state: {error}",
"red",
)
)
def workflow_bootup(self, **kwargs) -> None:
"""
Workflow bootup.
"""
print(
colored(
"""
Sequential Workflow Initializing...""",
"green",
attrs=["bold", "underline"],
)
)
def workflow_dashboard(self, **kwargs) -> None:
"""
Displays a dashboard for the workflow.
Args:
**kwargs: Additional keyword arguments to pass to the dashboard.
Examples:
>>> from swarms.models import OpenAIChat
>>> from swarms.structs import SequentialWorkflow
>>> llm = OpenAIChat(openai_api_key="")
>>> workflow = SequentialWorkflow(max_loops=1)
>>> workflow.add("What's the weather in miami", llm)
>>> workflow.add("Create a report on these metrics", llm)
>>> workflow.workflow_dashboard()
"""
print(
colored(
f"""
Sequential Workflow Dashboard
--------------------------------
Name: {self.name}
Description: {self.description}
task_pool: {len(self.task_pool)}
Max Loops: {self.max_loops}
Autosave: {self.autosave}
Autosave Filepath: {self.saved_state_filepath}
Restore Filepath: {self.restore_state_filepath}
--------------------------------
Metadata:
kwargs: {kwargs}
""",
"cyan",
attrs=["bold", "underline"],
)
)
def workflow_shutdown(self, **kwargs) -> None:
"""Shuts down the workflow."""
print(
colored(
"""
Sequential Workflow Shutdown...""",
"red",
attrs=["bold", "underline"],
)
)
def load_workflow_state(
self, filepath: str = None, **kwargs
) -> None:
"""
Loads the workflow state from a json file and restores the workflow state.
Args:
filepath (str): The path to load the workflow state from.
Examples:
>>> from swarms.models import OpenAIChat
>>> from swarms.structs import SequentialWorkflow
>>> llm = OpenAIChat(openai_api_key="")
>>> workflow = SequentialWorkflow(max_loops=1)
>>> workflow.add("What's the weather in miami", llm)
>>> workflow.add("Create a report on these metrics", llm)
>>> workflow.save_workflow_state("sequential_workflow_state.json")
>>> workflow.load_workflow_state("sequential_workflow_state.json")
"""
try:
filepath = filepath or self.restore_state_filepath
with open(filepath) as f:
state = json.load(f)
self.max_loops = state["max_loops"]
self.task_pool = []
for task_state in state["task_pool"]:
task = Task(
description=task_state["description"],
agent=task_state["agent"],
args=task_state["args"],
kwargs=task_state["kwargs"],
result=task_state["result"],
history=task_state["history"],
)
self.task_pool.append(task)
print(
"[INFO][SequentialWorkflow] Loaded workflow state"
f" from {filepath}"
)
except Exception as error:
logger.error(
colored(
f"Error loading workflow state: {error}",
"red",
)
)
def run(self) -> None:
"""
Run the workflow.
Raises:
ValueError: If a Agent instance is used as a task and the 'task' argument is not provided.
ValueError: If an Agent instance is used as a task and the 'task' argument is not provided.
"""
try:
self.workflow_bootup()
loops = 0
while loops < self.max_loops:
for i in range(len(self.task_pool)):
task = self.task_pool[i]
# Check if the current task can be executed
if task.result is None:
# Get the inputs for the current task
task.context(task)
result = task.execute()
# Pass the inputs to the next task
if i < len(self.task_pool) - 1:
next_task = self.task_pool[i + 1]
next_task.description = result
# Execute the current task
task.execute()
# Autosave the workflow state
if self.autosave:
self.save_workflow_state(
"sequential_workflow_state.json"
)
self.workflow_shutdown()
loops += 1
except Exception as e:
logger.error(
colored(
(
"Error initializing the Sequential workflow:"
f" {e} try optimizing your inputs like the"
" agent class and task description"
),
"red",
attrs=["bold", "underline"],
)
)
self.workflow_bootup()
loops = 0
while loops < self.max_loops:
for i, agent in enumerate(self.agents):
logger.info(f"Agent {i+1} is executing the task.")
out = agent(self.description)
self.conversation.add(agent.agent_name, str(out))
prompt = self.conversation.return_history_as_string()
print(prompt)
print("Next agent...........")
out = agent(prompt)
return out
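The conversation-chaining loop added above can be sketched end to end with stub agents. The `Conversation` API is simplified to a plain string history here; the real class is not reproduced, and the stub agents are hypothetical:

```python
# Minimal sketch of the new run loop: each agent sees the accumulated
# conversation history, and its output is appended for the next agent.
class EchoAgent:
    def __init__(self, name: str):
        self.agent_name = name

    def __call__(self, prompt: str) -> str:
        return f"{self.agent_name} handled: {prompt[:30]}"

class MiniConversation:
    def __init__(self, system_prompt: str):
        self.history = [system_prompt]

    def add(self, role: str, content: str):
        self.history.append(f"{role}: {content}")

    def return_history_as_string(self) -> str:
        return "\n".join(self.history)

def run(description: str, agents: list, max_loops: int = 1):
    conversation = MiniConversation(f"Objective: {description}")
    out = None
    for _ in range(max_loops):
        for agent in agents:
            # First pass: agent answers the raw description.
            out = agent(description)
            conversation.add(agent.agent_name, str(out))
            # Second pass: agent answers with full history as context.
            prompt = conversation.return_history_as_string()
            out = agent(prompt)
    return out

result = run("write a haiku", [EchoAgent("a1"), EchoAgent("a2")])
print(result)
```

Note that in the diff above the `return out` sits inside the loop body, so only the first agent's second-pass output is ever returned; the sketch instead returns after all agents have run, which appears to be the intent.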
# try:
# self.workflow_bootup()
# loops = 0
# while loops < self.max_loops:
# for i in range(len(self.task_pool)):
# task = self.task_pool[i]
# # Check if the current task can be executed
# if task.result is None:
# # Get the inputs for the current task
# task.context(task)
# result = task.execute()
# # Pass the inputs to the next task
# if i < len(self.task_pool) - 1:
# next_task = self.task_pool[i + 1]
# next_task.description = result
# # Execute the current task
# task.execute()
# # Autosave the workflow state
# if self.autosave:
# self.save_workflow_state(
# "sequential_workflow_state.json"
# )
# self.workflow_shutdown()
# loops += 1
# except Exception as e:
# logger.error(
# colored(
# (
# "Error initializing the Sequential workflow:"
# f" {e} try optimizing your inputs like the"
# " agent class and task description"
# ),
# "red",
# attrs=["bold", "underline"],
# )
# )

@ -46,6 +46,8 @@ from swarms.utils.video_to_frames import (
########
from swarms.utils.yaml_output_parser import YamlOutputParser
from swarms.utils.pandas_to_str import dataframe_to_text
__all__ = [
"SubprocessCodeInterpreter",
@ -82,4 +84,5 @@ __all__ = [
"MarkVisualizer",
"video_to_frames",
"save_frames_as_images",
"dataframe_to_text",
]

@ -1,6 +1,6 @@
from loguru import logger
logger = logger.add(
logger.add(
"MessagePool.log",
level="INFO",
colorize=True,

@ -0,0 +1,49 @@
import pandas as pd
def dataframe_to_text(
df: pd.DataFrame,
parsing_func: callable = None,
) -> str:
"""
Convert a pandas DataFrame to a string representation.
Args:
df (pd.DataFrame): The pandas DataFrame to convert.
parsing_func (callable, optional): A function to parse the resulting text. Defaults to None.
Returns:
str: The string representation of the DataFrame.
Example:
>>> df = pd.DataFrame({
... 'A': [1, 2, 3],
... 'B': [4, 5, 6],
... 'C': [7, 8, 9],
... })
>>> print(dataframe_to_text(df))
"""
# Get a string representation of the dataframe
df_str = df.to_string()
# df.info() prints to stdout and returns None, so capture its output via a buffer
import io
buffer = io.StringIO()
df.info(buf=buffer)
info_str = buffer.getvalue()
# Combine the dataframe string and the info string
text = f"DataFrame:\n{df_str}\n\nInfo:\n{info_str}"
if parsing_func:
text = parsing_func(text)
return text
# # # Example usage:
# df = pd.DataFrame({
# 'A': [1, 2, 3],
# 'B': [4, 5, 6],
# 'C': [7, 8, 9],
# })
# print(dataframe_to_text(df))
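One caveat with the function above: `df.info()` writes to stdout and returns `None`, so its output must be captured explicitly (pandas' `df.info` accepts a `buf=` argument for exactly this). The general standard-library pattern for capturing printed output, shown here on plain `print` so the sketch needs no pandas install:

```python
import io
from contextlib import redirect_stdout

def capture_stdout(fn, *args, **kwargs) -> str:
    """Run fn and return whatever it printed to stdout as a string."""
    buffer = io.StringIO()
    with redirect_stdout(buffer):
        fn(*args, **kwargs)
    return buffer.getvalue()

captured = capture_stdout(print, "hello", "world")
print(repr(captured))  # prints 'hello world\n'
```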

@ -1,6 +1,7 @@
import cv2
from typing import List
import cv2
def video_to_frames(video_file: str) -> List:
"""

@ -1,5 +1,7 @@
from unittest.mock import MagicMock, patch
import pytest
from unittest.mock import patch, MagicMock
from swarms.agents.multion_agent import MultiOnAgent

@ -1,6 +1,5 @@
from unittest.mock import MagicMock
from swarms.models.fire_function import FireFunctionCaller

@ -1,6 +1,6 @@
from swarms import OpenAIChat
from swarms.structs.agent import Agent
from swarms.structs.message_pool import MessagePool
from swarms import OpenAIChat
def test_message_pool_initialization():

@ -1,5 +1,6 @@
import pypdf
import pytest
from swarms.utils import pdf_to_text
