",
- interactive=True,
- tools=tools,
- code_interpreter=True,
-)
-```
-
-In this example, we create a list `tools` containing the `terminal`, `browser`, `create_file`, and `file_editor` tools. This list is then passed to the `tools` parameter when creating the `Agent` instance.
-
-Once the `Agent` instance is created with the provided tools, it can utilize these tools to perform various tasks and interact with external systems. The agent can call these tools as needed, passing the required arguments and receiving the corresponding return values.
-
-### **Using Tools in Agent Interactions**
-=====================================
-
-After integrating the tools into the `Agent` instance, you can utilize them in your agent's interactions with humans or other agents. Here's an example of how an agent might use the `terminal` tool:
-
-```
-out = agent("Run the command 'ls' in the terminal.")
-print(out)
-```
-
-In this example, the human user instructs the agent to run the `"ls"` command in the terminal. The agent processes this request and utilizes the `terminal` tool to execute the command, capturing and returning the output.
-
-Similarly, the agent can leverage other tools, such as the `browser` tool for web searches or the `file_editor` tool for creating and modifying files, based on the user's instructions.
-
-### **Conclusion**
-==============
-
-Creating tools in Swarms is a powerful way to extend the capabilities of AI agents and enable them to interact with external systems and perform a wide range of tasks. By following the 3-step process of defining the tool function, decorating it with `@tool`, and adding it to the `Agent` instance, you can seamlessly integrate custom tools into your AI agent's workflow.
-
-Throughout this blog post, we explored the importance of documentation and type handling, which are essential for maintaining code quality, facilitating collaboration, and ensuring the correct usage of your tools by other developers and AI agents.
-
-We also covered the necessary imports and provided detailed code examples for various types of tools, such as executing terminal commands, performing web searches, and creating and editing files. These examples demonstrated the flexibility and versatility of tools in Swarms, allowing you to tailor your tools to meet your specific project requirements.
-
-By leveraging the power of tools in Swarms, you can empower your AI agents with diverse capabilities, enabling them to tackle complex tasks, interact with external systems, and provide more comprehensive and intelligent solutions.
diff --git a/docs/examples/worker.md b/docs/examples/worker.md
deleted file mode 100644
index cd082aa4..00000000
--- a/docs/examples/worker.md
+++ /dev/null
@@ -1,115 +0,0 @@
-# **The Ultimate Guide to Mastering the `Worker` Class from Swarms**
-
----
-
-**Table of Contents**
-
-1. Introduction: Welcome to the World of the Worker
-2. The Basics: What Does the Worker Do?
-3. Installation: Setting the Stage
-4. Dive Deep: Understanding the Architecture
-5. Practical Usage: Let's Get Rolling!
-6. Advanced Tips and Tricks
-7. Handling Errors: Because We All Slip Up Sometimes
-8. Beyond the Basics: Advanced Features and Customization
-9. Conclusion: Taking Your Knowledge Forward
-
----
-
-**1. Introduction: Welcome to the World of the Worker**
-
-Greetings, future master of the `Worker`! Step into a universe where you can command an AI worker to perform intricate tasks, be it searching the vast expanse of the internet or crafting multi-modality masterpieces. Ready to embark on this thrilling journey? Let’s go!
-
----
-
-**2. The Basics: What Does the Worker Do?**
-
-The `Worker` is your personal AI assistant. Think of it as a diligent bee in a swarm, ready to handle complex tasks across various modalities, from text and images to audio and beyond.
-
----
-
-**3. Installation: Setting the Stage**
-
-Before we can call upon our Worker, we need to set the stage:
-
-```bash
-pip install swarms
-```
-
-Voila! You’re now ready to summon your Worker.
-
----
-
-**4. Dive Deep: Understanding the Architecture**
-
-- **Language Model (LLM)**: The brain of our Worker. It understands and crafts intricate language-based responses.
-- **Tools**: Think of these as the Worker's toolkit. They range from file tools, website querying, to even complex tasks like image captioning.
-- **Memory**: No, our Worker doesn’t forget. It employs a sophisticated memory mechanism to remember past interactions and learn from them.
-
----
-
-**5. Practical Usage: Let's Get Rolling!**
-
-Here’s a simple way to invoke the Worker and give it a task:
-
-```python
-from swarms import Worker
-from swarms.models import OpenAIChat
-
-llm = OpenAIChat(
- # enter your api key
- openai_api_key="",
- temperature=0.5,
-)
-
-node = Worker(
- llm=llm,
- ai_name="Optimus Prime",
- openai_api_key="",
- ai_role="Worker in a swarm",
- external_tools=None,
- human_in_the_loop=False,
- temperature=0.5,
-)
-
-task = "What were the winning boston marathon times for the past 5 years (ending in 2022)? Generate a table of the year, name, country of origin, and times."
-response = node.run(task)
-print(response)
-```
-
-
-The result? An agent with elegantly integrated tools and long term memories
-
----
-
-**6. Advanced Tips and Tricks**
-
-- **Streaming Responses**: Want your Worker to respond in a more dynamic fashion? Use the `_stream_response` method to get results token by token.
-- **Human-in-the-Loop**: By setting `human_in_the_loop` to `True`, you can involve a human in the decision-making process, ensuring the best results.
-
----
-
-**7. Handling Errors: Because We All Slip Up Sometimes**
-
-Your Worker is designed to be robust. But if it ever encounters a hiccup, it's equipped to let you know. Error messages are crafted to be informative, guiding you on the next steps.
-
----
-
-**8. Beyond the Basics: Advanced Features and Customization**
-
-- **Custom Tools**: Want to expand the Worker's toolkit? Use the `external_tools` parameter to integrate your custom tools.
-- **Memory Customization**: You can tweak the Worker's memory settings, ensuring it remembers what's crucial for your tasks.
-
----
-
-**9. Conclusion: Taking Your Knowledge Forward**
-
-Congratulations! You’re now well-equipped to harness the power of the `Worker` from Swarms. As you venture further, remember: the possibilities are endless, and with the Worker by your side, there’s no task too big!
-
-**Happy Coding and Exploring!** 🚀🎉
-
----
-
-*Note*: This guide provides a stepping stone to the vast capabilities of the `Worker`. Dive into the official documentation for a deeper understanding and stay updated with the latest features.
-
----
\ No newline at end of file
diff --git a/docs/index.md b/docs/index.md
index c28ab5b5..e5f963a7 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -1,89 +1,131 @@
-
-
-## 👋 Hello
-
-Swarms provides you with all the building blocks you need to build reliable, production-grade, and scalable multi-agent apps!
-
-## 💻 Install
-
-You can install `swarms` with pip in a
-[**Python>=3.10**](https://www.python.org/) environment.
-
-!!! example "pip install (recommended)"
-
- === "headless"
- The headless installation of `swarms` is designed for environments where graphical user interfaces (GUI) are not needed, making it more lightweight and suitable for server-side applications.
-
- ```bash
- pip install swarms
- ```
-
-
-!!! example "git clone (for development)"
-
- === "virtualenv"
-
- ```bash
- # clone repository and navigate to root directory
- git clone https://github.com/kyegomez/swarms.git
- cd swarms
-
- # setup python environment and activate it
- python3 -m venv venv
- source venv/bin/activate
- pip install --upgrade pip
-
- # headless install
- pip install -e "."
-
- # desktop install
- pip install -e ".[desktop]"
- ```
-
- === "poetry"
-
- ```bash
- # clone repository and navigate to root directory
- git clone https://github.com/kyegomez/swarms.git
- cd swarms
-
- # setup python environment and activate it
- poetry env use python3.10
- poetry shell
-
- # headless install
- poetry install
-
- # desktop install
- poetry install --extras "desktop"
- ```
-
-!!! example "NPM install |WIP|"
-
- === "headless"
- Get started with the NPM implementation of Swarms with this command:
-
- ```bash
- npm install swarms-js
- ```
-
-
-## Documentation
-
-[Learn more about swarms →](swarms/)
-
-
-## Examples
-
-Check out Swarms examples for building agents, data retrieval, and more.
-
-[Checkout Swarms examples →](examples/)
+# Swarms
+
+Orchestrate enterprise-grade agents for seamless multi-agent collaboration to automate real-world problems.
+
+
\ No newline at end of file
diff --git a/docs/mkdocs.yml b/docs/mkdocs.yml
index b887b9b7..df0abaa6 100644
--- a/docs/mkdocs.yml
+++ b/docs/mkdocs.yml
@@ -41,20 +41,14 @@ plugins:
# token: !ENV ["GITHUB_TOKEN"]
- git-revision-date-localized:
enable_creation_date: true
-
-copyright: "© TGSC, Corporation."
extra_css:
- assets/css/extra.css
extra:
social:
- - icon: fontawesome/solid/house
- link: assets/img/SwarmsLogoIcon.png
- - icon: fontawesome/brands/discord
- link: https://discord.gg/qUtxnK2NMf
+ - icon: fontawesome/brands/twitter
+ link: https://x.com/KyeGomezB
- icon: fontawesome/brands/github
- link: https://github.com/kyegomez/Swarms/
- - icon: fontawesome/brands/python
- link: https://pypi.org/project/Swarms/
+ link: https://github.com/kyegomez/swarms
theme:
name: material
custom_dir: overrides
@@ -95,44 +89,45 @@ markdown_extensions:
- def_list
- footnotes
nav:
-- Home:
- - Installation:
- - Overview: "index.md"
- - Install: "install.md"
- - Docker Setup: docker_setup.md
- - Usage Examples:
- - Build an Agent: "diy_your_own_agent.md"
- - Build an Agent with tools: "examples/tools_agents.md"
- # Add an examples with building agents
- # Add multiple blogs on orchestrating agents with different types of frameworks
- - Why does Swarms Exist?:
- - Why Swarms? Orchestrating Agents for Enterprise Automation: "why.md"
- - Limitations of Individual Agents: "limits_of_individual_agents.md"
- - References:
- - Agent Glossary: "swarms/glossary.md"
- - List of The Best Multi-Agent Papers: "swarms/papers.md"
-- Swarms Cloud API:
- - Overview: "swarms_cloud/main.md"
- - Available Models: "swarms_cloud/available_models.md"
- - Migrate from OpenAI to Swarms in 3 lines of code: "swarms_cloud/migrate_openai.md"
- - Getting Started with SOTA Vision Language Models VLM: "swarms_cloud/getting_started.md"
- - Enterprise Guide to High-Performance Multi-Agent LLM Deployments: "swarms_cloud/production_deployment.md"
- - Under The Hood The Swarm Cloud Serving Infrastructure: "swarms_cloud/architecture.md"
-- Swarms Framework [PY]:
+ - Home:
+ - Installation:
+ - Overview: "index.md"
+ - Install: "swarms/install/install.md"
+ - Docker Setup: "swarms/install/docker_setup.md"
+ - Usage Examples:
+ - Models:
+ - How to Create A Custom Language Model: "swarms/models/custom_model.md"
+ - Models Available: "swarms/models/index.md"
+ - MultiModal Models Available: "swarms/models/multimodal_models.md"
+ - Agents:
+ - Getting started with Agents: "swarms/structs/diy_your_own_agent.md"
+ - Tools:
+ - Functions, Pydantic BaseModels, and More: "swarms/tools/main.md"
+ - Memory:
+ - Building Custom Vector Memory Databases with the BaseVectorDatabase Class: "swarms/memory/diy_memory.md"
+ - ShortTermMemory: "swarms/memory/short_term_memory.md"
+ - Multi-Agent Collaboration:
+ - SwarmNetwork: "swarms/structs/swarmnetwork.md"
+ - AgentRearrange: "swarms/structs/agent_rearrange.md"
+ - Why does Swarms Exist?:
+ - References:
+ - Agent Glossary: "swarms/glossary.md"
+ - List of The Best Multi-Agent Papers: "swarms/papers.md"
+ - Contributors:
+ - Contributing: "contributing.md"
+- Reference:
- Overview: "swarms/index.md"
- - DIY Build Your Own Agent: "diy_your_own_agent.md"
- - Agents with Tools: "examples/tools_agent.md"
- # - Multi-Agent Orchestration: "swarms/structs/multi_agent_orchestration.md"
+ - Framework Structure: "swarms/structs/tree.md"
- swarms.models:
- How to Create A Custom Language Model: "swarms/models/custom_model.md"
- Deploying Azure OpenAI in Production A Comprehensive Guide: "swarms/models/azure_openai.md"
- - Language Models Available:
+ - Language Models:
- BaseLLM: "swarms/models/base_llm.md"
- Overview: "swarms/models/index.md"
- HuggingFaceLLM: "swarms/models/huggingface.md"
- Anthropic: "swarms/models/anthropic.md"
- OpenAIChat: "swarms/models/openai.md"
- - MultiModal Models Available:
+      - MultiModal Models:
- BaseMultiModalModel: "swarms/models/base_multimodal_model.md"
- Fuyu: "swarms/models/fuyu.md"
- Vilt: "swarms/models/vilt.md"
@@ -141,6 +136,7 @@ nav:
- Nougat: "swarms/models/nougat.md"
- Dalle3: "swarms/models/dalle3.md"
- GPT4VisionAPI: "swarms/models/gpt4v.md"
+ - GPT4o: "swarms/models/gpt4o.md"
- swarms.structs:
- Foundational Structures:
- Agent: "swarms/structs/agent.md"
@@ -158,47 +154,21 @@ nav:
- MajorityVoting: "swarms/structs/majorityvoting.md"
- AgentRearrange: "swarms/structs/agent_rearrange.md"
- RoundRobin: "swarms/structs/round_robin_swarm.md"
- - swarms.memory:
- - Building Custom Vector Memory Databases with the BaseVectorDatabase Class: "swarms/memory/diy_memory.md"
- - ShortTermMemory: "swarms/memory/short_term_memory.md"
- - swarms.tools:
- - The Swarms Tool System Functions, Pydantic BaseModels as Tools, and Radical Customization: "swarms/tools/main.md"
+- Swarms Cloud API:
+ - Overview: "swarms_cloud/main.md"
+ - Available Models: "swarms_cloud/available_models.md"
+ - Migrate from OpenAI to Swarms in 3 lines of code: "swarms_cloud/migrate_openai.md"
+ - Getting Started with SOTA Vision Language Models VLM: "swarms_cloud/getting_started.md"
+ - Enterprise Guide to High-Performance Multi-Agent LLM Deployments: "swarms_cloud/production_deployment.md"
+ - Under The Hood The Swarm Cloud Serving Infrastructure: "swarms_cloud/architecture.md"
- Guides:
- - Agents:
- - Building Custom Vector Memory Databases with the BaseVectorDatabase Class: "swarms/memory/diy_memory.md"
+ # - Building Custom Vector Memory Databases with the BaseVectorDatabase Class: "swarms/memory/diy_memory.md"
+ - Models:
- How to Create A Custom Language Model: "swarms/models/custom_model.md"
- Deploying Azure OpenAI in Production, A Comprehensive Guide: "swarms/models/azure_openai.md"
+ - Agents:
+ - Agent: "examples/flow.md"
- DIY Build Your Own Agent: "diy_your_own_agent.md"
- Equipping Autonomous Agents with Tools: "examples/tools_agent.md"
- - Overview: "examples/index.md"
- - Agents:
- - Agent: "examples/flow.md"
- - OmniAgent: "examples/omni_agent.md"
- - Swarms:
- - SequentialWorkflow: "examples/reliable_autonomous_agents.md"
- - 2O+ Autonomous Agent Blogs: "examples/ideas.md"
-- Applications:
- - CustomerSupport:
- - Overview: "applications/customer_support.md"
- - Marketing:
- - Overview: "applications/marketing_agencies.md"
- - Operations:
- - Intoducing The Swarm of Automated Business Analyts: "applications/business-analyst-agent.md"
-- Corporate:
- - Corporate Documents:
- - Data Room: "corporate/data_room.md"
- - The Swarm Memo: "corporate/swarm_memo.md"
- - Corporate Architecture: "corporate/architecture.md"
- - Flywheel: "corporate/flywheel.md"
- - Sales:
- - FAQ: "corporate/faq.md"
- - Distribution: "corporate/distribution"
- - Product:
- - SwarmCloud: "corporate/swarm_cloud.md"
- - Weaknesses of Langchain: "corporate/failures.md"
- - Design: "corporate/design.md"
- - Metric: "corporate/metric.md"
- - Organization:
- - FrontEnd Member Onboarding: "corporate/front_end_contributors.md"
-- Contributors:
- - Contributing: "contributing.md"
+ - Swarms:
+ - SequentialWorkflow: "examples/reliable_autonomous_agents.md"
diff --git a/docs/limits_of_individual_agents.md b/docs/purpose/limits_of_individual_agents.md
similarity index 100%
rename from docs/limits_of_individual_agents.md
rename to docs/purpose/limits_of_individual_agents.md
diff --git a/docs/why.md b/docs/purpose/why.md
similarity index 97%
rename from docs/why.md
rename to docs/purpose/why.md
index 2e4f0bce..5293de23 100644
--- a/docs/why.md
+++ b/docs/purpose/why.md
@@ -2,6 +2,10 @@
In the rapidly evolving landscape of artificial intelligence (AI) and automation, a new paradigm is emerging: the orchestration of multiple agents working in collaboration to tackle complex tasks. This approach, embodied by the Swarms Framework, aims to address the fundamental limitations of individual agents and unlocks the true potential of AI-driven automation in enterprise operations.
+Individual agents are plagued by the same recurring issues: short-term memory constraints, hallucinations, single-task limitations, lack of collaboration, and cost inefficiencies.
+
+[Learn more from this compiled list of agent papers](https://github.com/kyegomez/awesome-multi-agent-papers)
+
## The Purpose of Swarms: Overcoming Agent Limitations
Individual agents, while remarkable in their own right, face several inherent challenges that hinder their ability to effectively automate enterprise operations at scale. These limitations include:
diff --git a/docs/why_swarms.md b/docs/purpose/why_swarms.md
similarity index 100%
rename from docs/why_swarms.md
rename to docs/purpose/why_swarms.md
diff --git a/docs/swarms/index.md b/docs/swarms/index.md
index c3d45d86..ac91ecdf 100644
--- a/docs/swarms/index.md
+++ b/docs/swarms/index.md
@@ -1,14 +1,22 @@
-# Swarms
+
Orchestrate swarms of agents for production-grade applications.
-Individual agents face five significant challenges that hinder their deployment in production: short memory, single-task threading, hallucinations, high cost, and lack of collaboration. Multi-agent collaboration offers a solution to all these issues. Swarms provides simple, reliable, and agile tools to create your own Swarm tailored to your specific needs. Currently, Swarms is being used in production by RBC, John Deere, and many AI startups. For more information on the unparalleled benefits of multi-agent collaboration, check out this GitHub repository for research papers or schedule a call with me!
+[Issues](https://github.com/kyegomez/swarms/issues) [Forks](https://github.com/kyegomez/swarms/network) [Stars](https://github.com/kyegomez/swarms/stargazers) [License](https://github.com/kyegomez/swarms/blob/main/LICENSE) [Star History](https://star-history.com/#kyegomez/swarms) [Dependents](https://libraries.io/github/kyegomez/swarms) [Downloads](https://pepy.tech/project/swarms)
+[Share on Twitter](https://twitter.com/intent/tweet?text=Check%20out%20this%20amazing%20AI%20project:%20&url=https%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms) [Share on Facebook](https://www.facebook.com/sharer/sharer.php?u=https%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms) [Share on LinkedIn](https://www.linkedin.com/shareArticle?mini=true&url=https%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms&title=&summary=&source=)
+
+[Share on Reddit](https://www.reddit.com/submit?url=https%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms&title=Swarms%20-%20the%20future%20of%20AI) [Share on Hacker News](https://news.ycombinator.com/submitlink?u=https%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms&t=Swarms%20-%20the%20future%20of%20AI) [Share on Pinterest](https://pinterest.com/pin/create/button/?url=https%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms&media=https%3A%2F%2Fexample.com%2Fimage.jpg&description=Swarms%20-%20the%20future%20of%20AI) [Share on WhatsApp](https://api.whatsapp.com/send?text=Check%20out%20Swarms%20-%20the%20future%20of%20AI%20%23swarms%20%23AI%0A%0Ahttps%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms)
+
+
+
+
+Individual agents face 5 significant challenges that hinder their deployment in production: short memory, single-task threading, hallucinations, high cost, and lack of collaboration. Multi-agent collaboration offers a solution to all these issues. Swarms provides simple, reliable, and agile tools to create your own Swarm tailored to your specific needs. Currently, Swarms is being used in production by RBC, John Deere, and many AI startups.
----
## Install
-`pip3 install -U swarms`
+`$ pip3 install -U swarms`
---
@@ -58,119 +66,12 @@ agent.run("Generate a 10,000 word blog on health and wellness.")
```
-### `ToolAgent`
-ToolAgent is an agent that can use tools through JSON function calling. It intakes any open source model from huggingface and is extremely modular and plug in and play. We need help adding general support to all models soon.
-
-
-```python
-from pydantic import BaseModel, Field
-from transformers import AutoModelForCausalLM, AutoTokenizer
-
-from swarms import ToolAgent
-from swarms.utils.json_utils import base_model_to_json
-
-# Load the pre-trained model and tokenizer
-model = AutoModelForCausalLM.from_pretrained(
- "databricks/dolly-v2-12b",
- load_in_4bit=True,
- device_map="auto",
-)
-tokenizer = AutoTokenizer.from_pretrained("databricks/dolly-v2-12b")
-
-
-# Initialize the schema for the person's information
-class Schema(BaseModel):
- name: str = Field(..., title="Name of the person")
- agent: int = Field(..., title="Age of the person")
- is_student: bool = Field(
- ..., title="Whether the person is a student"
- )
- courses: list[str] = Field(
- ..., title="List of courses the person is taking"
- )
-
-
-# Convert the schema to a JSON string
-tool_schema = base_model_to_json(Schema)
-
-# Define the task to generate a person's information
-task = (
- "Generate a person's information based on the following schema:"
-)
-
-# Create an instance of the ToolAgent class
-agent = ToolAgent(
- name="dolly-function-agent",
- description="Ana gent to create a child data",
- model=model,
- tokenizer=tokenizer,
- json_schema=tool_schema,
-)
-
-# Run the agent to generate the person's information
-generated_data = agent.run(task)
-
-# Print the generated data
-print(f"Generated data: {generated_data}")
-
-```
-
-
-### `Worker`
-The `Worker` is a simple all-in-one agent equipped with an LLM, tools, and RAG for low level tasks.
-
-✅ Plug in and Play LLM. Utilize any LLM from anywhere and any framework
-
-✅ Reliable RAG: Utilizes FAISS for efficient RAG but it's modular so you can use any DB.
-
-✅ Multi-Step Parallel Function Calling: Use any tool
-
-```python
-# Importing necessary modules
-import os
-
-from dotenv import load_dotenv
-
-from swarms import OpenAIChat, Worker, tool
-
-# Loading environment variables from .env file
-load_dotenv()
-
-# Retrieving the OpenAI API key from environment variables
-api_key = os.getenv("OPENAI_API_KEY")
-
-
-# Create a tool
-@tool
-def search_api(query: str):
- pass
-
-
-# Creating a Worker instance
-worker = Worker(
- name="My Worker",
- role="Worker",
- human_in_the_loop=False,
- tools=[search_api],
- temperature=0.5,
- llm=OpenAIChat(openai_api_key=api_key),
-)
-
-# Running the worker with a prompt
-out = worker.run("Hello, how are you? Create an image of how your are doing!")
-
-# Printing the output
-print(out)
-```
-
-------
-
-
# `Agent` with Long Term Memory
`Agent` equipped with quasi-infinite long term memory. Great for long document understanding, analysis, and retrieval.
```python
-from swarms import Agent, ChromaDB, OpenAIChat
+from swarms import Agent, OpenAIChat
+from playground.memory.chromadb_example import ChromaDB  # Copy this module from the playground into your own local directory.
# Making an instance of the ChromaDB class
memory = ChromaDB(
@@ -208,7 +109,7 @@ print(out)
An LLM equipped with long-term memory and tools: a full-stack agent capable of automating any digital task given a good prompt.
```python
-from swarms import Agent, ChromaDB, OpenAIChat, tool
+from swarms import Agent, ChromaDB, OpenAIChat
# Making an instance of the ChromaDB class
memory = ChromaDB(
@@ -219,7 +120,6 @@ memory = ChromaDB(
)
# Initialize a tool
-@tool
def search_api(query: str):
# Add your logic here
return query
@@ -248,189 +148,6 @@ print(out)
```
-
-
-
-
-
-
-
-----
-
-### `SequentialWorkflow`
-Sequential Workflow enables you to sequentially execute tasks with `Agent` and then pass the output into the next agent and onwards until you have specified your max loops. `SequentialWorkflow` is wonderful for real-world business tasks like sending emails, summarizing documents, and analyzing data.
-
-
-✅ Save and Restore Workflow states!
-
-✅ Multi-Modal Support for Visual Chaining
-
-✅ Utilizes Agent class
-
-```python
-import os
-
-from dotenv import load_dotenv
-
-from swarms import Agent, OpenAIChat, SequentialWorkflow
-
-load_dotenv()
-
-# Load the environment variables
-api_key = os.getenv("OPENAI_API_KEY")
-
-
-# Initialize the language agent
-llm = OpenAIChat(
- temperature=0.5, model_name="gpt-4", openai_api_key=api_key, max_tokens=4000
-)
-
-
-# Initialize the agent with the language agent
-agent1 = Agent(llm=llm, max_loops=1)
-
-# Create another agent for a different task
-agent2 = Agent(llm=llm, max_loops=1)
-
-# Create another agent for a different task
-agent3 = Agent(llm=llm, max_loops=1)
-
-# Create the workflow
-workflow = SequentialWorkflow(max_loops=1)
-
-# Add tasks to the workflow
-workflow.add(
- agent1,
- "Generate a 10,000 word blog on health and wellness.",
-)
-
-# Suppose the next task takes the output of the first task as input
-workflow.add(
- agent2,
- "Summarize the generated blog",
-)
-
-# Run the workflow
-workflow.run()
-
-# Output the results
-for task in workflow.tasks:
- print(f"Task: {task.description}, Result: {task.result}")
-```
-
-
-
-### `ConcurrentWorkflow`
-`ConcurrentWorkflow` runs all the tasks all at the same time with the inputs you give it!
-
-
-```python
-import os
-
-from dotenv import load_dotenv
-
-from swarms import Agent, ConcurrentWorkflow, OpenAIChat, Task
-
-# Load environment variables from .env file
-load_dotenv()
-
-# Load environment variables
-llm = OpenAIChat(openai_api_key=os.getenv("OPENAI_API_KEY"))
-agent = Agent(llm=llm, max_loops=1)
-
-# Create a workflow
-workflow = ConcurrentWorkflow(max_workers=5)
-
-# Create tasks
-task1 = Task(agent, "What's the weather in miami")
-task2 = Task(agent, "What's the weather in new york")
-task3 = Task(agent, "What's the weather in london")
-
-# Add tasks to the workflow
-workflow.add(tasks=[task1, task2, task3])
-
-# Run the workflow
-workflow.run()
-```
-
-### `RecursiveWorkflow`
-`RecursiveWorkflow` will keep executing the tasks until a specific token like is located inside the text!
-
-```python
-import os
-
-from dotenv import load_dotenv
-
-from swarms import Agent, OpenAIChat, RecursiveWorkflow, Task
-
-# Load environment variables from .env file
-load_dotenv()
-
-# Load environment variables
-llm = OpenAIChat(openai_api_key=os.getenv("OPENAI_API_KEY"))
-agent = Agent(llm=llm, max_loops=1)
-
-# Create a workflow
-workflow = RecursiveWorkflow(stop_token="")
-
-# Create tasks
-task1 = Task(agent, "What's the weather in miami")
-task2 = Task(agent, "What's the weather in new york")
-task3 = Task(agent, "What's the weather in london")
-
-# Add tasks to the workflow
-workflow.add(task1)
-workflow.add(task2)
-workflow.add(task3)
-
-# Run the workflow
-workflow.run()
-```
-
-
-
-### `ModelParallelizer`
-The ModelParallelizer allows you to run multiple models concurrently, comparing their outputs. This feature enables you to easily compare the performance and results of different models, helping you make informed decisions about which model to use for your specific task.
-
-Plug-and-Play Integration: The structure provides a seamless integration with various models, including OpenAIChat, Anthropic, Mixtral, and Gemini. You can easily plug in any of these models and start using them without the need for extensive modifications or setup.
-
-
-```python
-import os
-
-from dotenv import load_dotenv
-
-from swarms import Anthropic, Gemini, Mixtral, ModelParallelizer, OpenAIChat
-
-load_dotenv()
-
-# API Keys
-anthropic_api_key = os.getenv("ANTHROPIC_API_KEY")
-openai_api_key = os.getenv("OPENAI_API_KEY")
-gemini_api_key = os.getenv("GEMINI_API_KEY")
-
-# Initialize the models
-llm = OpenAIChat(openai_api_key=openai_api_key)
-anthropic = Anthropic(anthropic_api_key=anthropic_api_key)
-mixtral = Mixtral()
-gemini = Gemini(gemini_api_key=gemini_api_key)
-
-# Initialize the parallelizer
-llms = [llm, anthropic, mixtral, gemini]
-parallelizer = ModelParallelizer(llms)
-
-# Set the task
-task = "Generate a 10,000 word blog on health and wellness."
-
-# Run the task
-out = parallelizer.run(task)
-
-# Print the responses 1 by 1
-for i in range(len(out)):
- print(f"Response from LLM {i}: {out[i]}")
-```
-
-
### Simple Conversational Agent
A plug-and-play conversational agent with `GPT4`, `Mixtral`, or any of our models
@@ -463,11 +180,11 @@ agent("Generate a transcript for a youtube video on what swarms are!")
```
## Devin
-Implementation of Devil in less than 90 lines of code with several tools:
+Implementation of Devin in less than 90 lines of code with several tools:
terminal, browser, and file editing!
```python
-from swarms import Agent, Anthropic, tool
+from swarms import Agent, Anthropic
import subprocess
# Model
@@ -476,7 +193,6 @@ llm = Anthropic(
)
# Tools
-@tool
def terminal(
code: str,
):
@@ -494,8 +210,6 @@ def terminal(
).stdout
return str(out)
-
-@tool
def browser(query: str):
"""
Search the query in the browser with the `browser` tool.
@@ -512,7 +226,6 @@ def browser(query: str):
webbrowser.open(url)
return f"Searching for {query} in the browser."
-@tool
def create_file(file_path: str, content: str):
"""
Create a file using the file editor tool.
@@ -528,7 +241,6 @@ def create_file(file_path: str, content: str):
file.write(content)
return f"File {file_path} created successfully."
-@tool
def file_editor(file_path: str, mode: str, content: str):
"""
Edit a file using the file editor tool.
@@ -558,85 +270,267 @@ agent = Agent(
max_loops="auto",
autosave=True,
dashboard=False,
- streaming_on=True,
- verbose=True,
- stopping_token="",
- interactive=True,
- tools=[terminal, browser, file_editor, create_file],
- code_interpreter=True,
- # streaming=True,
+ streaming_on=True,
+ verbose=True,
+    stopping_token="<DONE>",
+ interactive=True,
+ tools=[terminal, browser, file_editor, create_file],
+ code_interpreter=True,
+ # streaming=True,
+)
+
+# Run the agent
+out = agent("Create a new file for a plan to take over the world.")
+print(out)
+```
+
+
+## `Agent` with Pydantic BaseModel as Output Type
+The following is an example of an agent that takes a Pydantic BaseModel as input and returns it as output:
+
+```python
+from pydantic import BaseModel, Field
+from swarms import Anthropic, Agent
+
+
+# Initialize the schema for the person's information
+class Schema(BaseModel):
+ name: str = Field(..., title="Name of the person")
+    age: int = Field(..., title="Age of the person")
+ is_student: bool = Field(..., title="Whether the person is a student")
+ courses: list[str] = Field(
+ ..., title="List of courses the person is taking"
+ )
+
+
+# Instantiate the schema with example values
+tool_schema = Schema(
+    name="Tool Name",
+    age=1,
+ is_student=True,
+ courses=["Course1", "Course2"],
+)
+
+# Define the task to generate a person's information
+task = "Generate a person's information based on the following schema:"
+
+# Initialize the agent
+agent = Agent(
+ agent_name="Person Information Generator",
+ system_prompt=(
+ "Generate a person's information based on the following schema:"
+ ),
+ # Set the tool schema to the JSON string -- this is the key difference
+ tool_schema=tool_schema,
+ llm=Anthropic(),
+ max_loops=3,
+ autosave=True,
+ dashboard=False,
+ streaming_on=True,
+ verbose=True,
+ interactive=True,
+ # Set the output type to the tool schema which is a BaseModel
+ output_type=tool_schema, # or dict, or str
+ metadata_output_type="json",
+ # List of schemas that the agent can handle
+ list_tool_schemas=[tool_schema],
+ function_calling_format_type="OpenAI",
+ function_calling_type="json", # or soon yaml
+)
+
+# Run the agent to generate the person's information
+generated_data = agent.run(task)
+
+# Print the generated data
+print(f"Generated data: {generated_data}")
+
+
+```
+
+
+### `ToolAgent`
+ToolAgent is an agent that can use tools through JSON function calling. It accepts any open-source model from Hugging Face and is extremely modular and plug-and-play. We need help adding general support for all models soon.
+
+
+```python
+from pydantic import BaseModel, Field
+from transformers import AutoModelForCausalLM, AutoTokenizer
+
+from swarms import ToolAgent
+from swarms.utils.json_utils import base_model_to_json
+
+# Load the pre-trained model and tokenizer
+model = AutoModelForCausalLM.from_pretrained(
+ "databricks/dolly-v2-12b",
+ load_in_4bit=True,
+ device_map="auto",
+)
+tokenizer = AutoTokenizer.from_pretrained("databricks/dolly-v2-12b")
+
+
+# Initialize the schema for the person's information
+class Schema(BaseModel):
+ name: str = Field(..., title="Name of the person")
+    age: int = Field(..., title="Age of the person")
+ is_student: bool = Field(
+ ..., title="Whether the person is a student"
+ )
+ courses: list[str] = Field(
+ ..., title="List of courses the person is taking"
+ )
+
+
+# Convert the schema to a JSON string
+tool_schema = base_model_to_json(Schema)
+
+# Define the task to generate a person's information
+task = (
+ "Generate a person's information based on the following schema:"
+)
+
+# Create an instance of the ToolAgent class
+agent = ToolAgent(
+ name="dolly-function-agent",
+ description="Ana gent to create a child data",
+ model=model,
+ tokenizer=tokenizer,
+ json_schema=tool_schema,
+)
+
+# Run the agent to generate the person's information
+generated_data = agent.run(task)
+
+# Print the generated data
+print(f"Generated data: {generated_data}")
+
+```
+
+
+
+
+
+
+
+----
+
+### `SequentialWorkflow`
+`SequentialWorkflow` enables you to execute tasks sequentially with `Agent`, passing the output of each agent into the next until your specified max loops are reached. It is wonderful for real-world business tasks like sending emails, summarizing documents, and analyzing data.
+
+
+✅ Save and Restore Workflow states!
+
+✅ Multi-Modal Support for Visual Chaining
+
+✅ Utilizes Agent class
+
+```python
+from swarms import Agent, SequentialWorkflow, Anthropic
+
+
+# Initialize the language model (Anthropic)
+llm = Anthropic()
+
+# Initialize agents for individual tasks
+agent1 = Agent(
+ agent_name="Blog generator",
+ system_prompt="Generate a blog post like stephen king",
+ llm=llm,
+ max_loops=1,
+ dashboard=False,
+ tools=[],
+)
+agent2 = Agent(
+ agent_name="summarizer",
+ system_prompt="Sumamrize the blog post",
+ llm=llm,
+ max_loops=1,
+ dashboard=False,
+ tools=[],
+)
+
+# Create the Sequential workflow
+workflow = SequentialWorkflow(
+ agents=[agent1, agent2], max_loops=1, verbose=False
+)
+
+# Run the workflow
+workflow.run(
+ "Generate a blog post on how swarms of agents can help businesses grow."
)
-# Run the agent
-out = agent("Create a new file for a plan to take over the world.")
-print(out)
```
-## `Agent`with Pydantic BaseModel as Output Type
-The following is an example of an agent that intakes a pydantic basemodel and outputs it at the same time:
+
+### `ConcurrentWorkflow`
+`ConcurrentWorkflow` runs all of its tasks at the same time with the inputs you give it!
+
```python
-from pydantic import BaseModel, Field
-from swarms import Anthropic
-from swarms import Agent
+import os
+from dotenv import load_dotenv
-# Initialize the schema for the person's information
-class Schema(BaseModel):
- name: str = Field(..., title="Name of the person")
- agent: int = Field(..., title="Age of the person")
- is_student: bool = Field(..., title="Whether the person is a student")
- courses: list[str] = Field(
- ..., title="List of courses the person is taking"
- )
+from swarms import Agent, ConcurrentWorkflow, OpenAIChat, Task
+# Load environment variables from .env file
+load_dotenv()
-# Convert the schema to a JSON string
-tool_schema = Schema(
- name="Tool Name",
- agent=1,
- is_student=True,
- courses=["Course1", "Course2"],
-)
+# Load environment variables
+llm = OpenAIChat(openai_api_key=os.getenv("OPENAI_API_KEY"))
+agent = Agent(llm=llm, max_loops=1)
-# Define the task to generate a person's information
-task = "Generate a person's information based on the following schema:"
+# Create a workflow
+workflow = ConcurrentWorkflow(max_workers=5)
-# Initialize the agent
-agent = Agent(
- agent_name="Person Information Generator",
- system_prompt=(
- "Generate a person's information based on the following schema:"
- ),
- # Set the tool schema to the JSON string -- this is the key difference
- tool_schema=tool_schema,
- llm=Anthropic(),
- max_loops=3,
- autosave=True,
- dashboard=False,
- streaming_on=True,
- verbose=True,
- interactive=True,
- # Set the output type to the tool schema which is a BaseModel
- output_type=tool_schema, # or dict, or str
- metadata_output_type="json",
- # List of schemas that the agent can handle
- list_tool_schemas=[tool_schema],
- function_calling_format_type="OpenAI",
- function_calling_type="json", # or soon yaml
-)
+# Create tasks
+task1 = Task(agent, "What's the weather in miami")
+task2 = Task(agent, "What's the weather in new york")
+task3 = Task(agent, "What's the weather in london")
-# Run the agent to generate the person's information
-generated_data = agent.run(task)
+# Add tasks to the workflow
+workflow.add(tasks=[task1, task2, task3])
-# Print the generated data
-print(f"Generated data: {generated_data}")
+# Run the workflow
+workflow.run()
+```
+
+### `RecursiveWorkflow`
+`RecursiveWorkflow` will keep executing the tasks until a specific stop token, such as `<DONE>`, is located inside the text!
+
+```python
+import os
+
+from dotenv import load_dotenv
+
+from swarms import Agent, OpenAIChat, RecursiveWorkflow, Task
+
+# Load environment variables from .env file
+load_dotenv()
+
+# Load environment variables
+llm = OpenAIChat(openai_api_key=os.getenv("OPENAI_API_KEY"))
+agent = Agent(llm=llm, max_loops=1)
+
+# Create a workflow
+workflow = RecursiveWorkflow(stop_token="<DONE>")
+
+# Create tasks
+task1 = Task(agent, "What's the weather in miami")
+task2 = Task(agent, "What's the weather in new york")
+task3 = Task(agent, "What's the weather in london")
+# Add tasks to the workflow
+workflow.add(task1)
+workflow.add(task2)
+workflow.add(task3)
+# Run the workflow
+workflow.run()
```
+
### `SwarmNetwork`
`SwarmNetwork` provides the infrastructure for building extremely dense and complex multi-agent applications that span various types of agents.
@@ -763,106 +657,6 @@ print(f"Task result: {task.result}")
-### `BlockList`
-- Modularity and Flexibility: BlocksList allows users to create custom swarms by adding or removing different classes or functions as blocks. This means users can easily tailor the functionality of their swarm to suit their specific needs.
-
-- Ease of Management: With methods to add, remove, update, and retrieve blocks, BlocksList provides a straightforward way to manage the components of a swarm. This makes it easier to maintain and update the swarm over time.
-
-- Enhanced Searchability: BlocksList offers methods to get blocks by various attributes such as name, type, ID, and parent-related properties. This makes it easier for users to find and work with specific blocks in a large and complex swarm.
-
-```python
-import os
-
-from dotenv import load_dotenv
-from transformers import AutoModelForCausalLM, AutoTokenizer
-from pydantic import BaseModel
-from swarms import BlocksList, Gemini, GPT4VisionAPI, Mixtral, OpenAI, ToolAgent
-
-# Load the environment variables
-load_dotenv()
-
-# Get the environment variables
-openai_api_key = os.getenv("OPENAI_API_KEY")
-gemini_api_key = os.getenv("GEMINI_API_KEY")
-
-# Tool Agent
-model = AutoModelForCausalLM.from_pretrained("databricks/dolly-v2-12b")
-tokenizer = AutoTokenizer.from_pretrained("databricks/dolly-v2-12b")
-
-# Initialize the schema for the person's information
-class Schema(BaseModel):
- name: str = Field(..., title="Name of the person")
- agent: int = Field(..., title="Age of the person")
- is_student: bool = Field(
- ..., title="Whether the person is a student"
- )
- courses: list[str] = Field(
- ..., title="List of courses the person is taking"
- )
-
-# Convert the schema to a JSON string
-json_schema = base_model_to_json(Schema)
-
-
-toolagent = ToolAgent(model=model, tokenizer=tokenizer, json_schema=json_schema)
-
-# Blocks List which enables you to build custom swarms by adding classes or functions
-swarm = BlocksList(
- "SocialMediaSwarm",
- "A swarm of social media agents",
- [
- OpenAI(openai_api_key=openai_api_key),
- Mixtral(),
- GPT4VisionAPI(openai_api_key=openai_api_key),
- Gemini(gemini_api_key=gemini_api_key),
- ],
-)
-
-
-# Add the new block to the swarm
-swarm.add(toolagent)
-
-# Remove a block from the swarm
-swarm.remove(toolagent)
-
-# Update a block in the swarm
-swarm.update(toolagent)
-
-# Get a block at a specific index
-block_at_index = swarm.get(0)
-
-# Get all blocks in the swarm
-all_blocks = swarm.get_all()
-
-# Get blocks by name
-openai_blocks = swarm.get_by_name("OpenAI")
-
-# Get blocks by type
-gpt4_blocks = swarm.get_by_type("GPT4VisionAPI")
-
-# Get blocks by ID
-block_by_id = swarm.get_by_id(toolagent.id)
-
-# Get blocks by parent
-blocks_by_parent = swarm.get_by_parent(swarm)
-
-# Get blocks by parent ID
-blocks_by_parent_id = swarm.get_by_parent_id(swarm.id)
-
-# Get blocks by parent name
-blocks_by_parent_name = swarm.get_by_parent_name(swarm.name)
-
-# Get blocks by parent type
-blocks_by_parent_type = swarm.get_by_parent_type(type(swarm).__name__)
-
-# Get blocks by parent description
-blocks_by_parent_description = swarm.get_by_parent_description(swarm.description)
-
-# Run the block in the swarm
-inference = swarm.run_block(toolagent, "Hello World")
-print(inference)
-```
-
## Majority Voting
Multiple agents will evaluate an idea based on a parsing or evaluation function. From papers like "[More agents is all you need](https://arxiv.org/pdf/2402.05120.pdf)"
@@ -1177,88 +971,95 @@ autoswarm.run("Analyze these financial data and give me a summary")
```
## `AgentRearrange`
-Inspired by Einops and einsum, this orchestration techniques enables you to map out the relationships between various agents. For example you specify linear and sequential relationships like `a -> a1 -> a2 -> a3` or concurrent relationships where the first agent will send a message to 3 agents all at once: `a -> a1, a2, a3`. You can customize your workflow to mix sequential and concurrent relationships
+Inspired by Einops and einsum, this orchestration technique enables you to map out the relationships between various agents. For example, you can specify linear and sequential relationships like `a -> a1 -> a2 -> a3`, or concurrent relationships where the first agent sends a message to 3 agents all at once: `a -> a1, a2, a3`. You can customize your workflow to mix sequential and concurrent relationships. [Docs available here](https://swarms.apac.ai/en/latest/swarms/structs/agent_rearrange/)
```python
-from swarms import Agent, Anthropic, AgentRearrange,
+from swarms import Agent, AgentRearrange, rearrange, Anthropic
-## Initialize the workflow
-agent = Agent(
- agent_name="t",
- agent_description=(
- "Generate a transcript for a youtube video on what swarms"
- " are!"
- ),
- system_prompt=(
- "Generate a transcript for a youtube video on what swarms"
- " are!"
- ),
+
+# Initialize the director agent
+
+director = Agent(
+ agent_name="Director",
+ system_prompt="Directs the tasks for the workers",
llm=Anthropic(),
max_loops=1,
- autosave=True,
dashboard=False,
streaming_on=True,
verbose=True,
    stopping_token="<DONE>",
+ state_save_file_type="json",
+ saved_state_path="director.json",
)
-agent2 = Agent(
- agent_name="t1",
- agent_description=(
- "Generate a transcript for a youtube video on what swarms"
- " are!"
- ),
+
+# Initialize worker 1
+
+worker1 = Agent(
+ agent_name="Worker1",
+ system_prompt="Generates a transcript for a youtube video on what swarms are",
llm=Anthropic(),
max_loops=1,
- system_prompt="Summarize the transcript",
- autosave=True,
dashboard=False,
streaming_on=True,
verbose=True,
    stopping_token="<DONE>",
+ state_save_file_type="json",
+ saved_state_path="worker1.json",
)
-agent3 = Agent(
- agent_name="t2",
- agent_description=(
- "Generate a transcript for a youtube video on what swarms"
- " are!"
- ),
+
+# Initialize worker 2
+worker2 = Agent(
+ agent_name="Worker2",
+ system_prompt="Summarizes the transcript generated by Worker1",
llm=Anthropic(),
max_loops=1,
- system_prompt="Finalize the transcript",
- autosave=True,
dashboard=False,
streaming_on=True,
verbose=True,
    stopping_token="<DONE>",
+ state_save_file_type="json",
+ saved_state_path="worker2.json",
)
-# Rearrange the agents
-rearrange = AgentRearrange(
- agents=[agent, agent2, agent3],
- verbose=True,
- # custom_prompt="Summarize the transcript",
+# Create a list of agents
+agents = [director, worker1, worker2]
+
+# Define the flow pattern
+flow = "Director -> Worker1 -> Worker2"
+
+# Using AgentRearrange class
+agent_system = AgentRearrange(agents=agents, flow=flow)
+output = agent_system.run(
+ "Create a format to express and communicate swarms of llms in a structured manner for youtube"
)
+print(output)
-# Run the workflow on a task
-results = rearrange(
- # pattern="t -> t1, t2 -> t2",
- pattern="t -> t1 -> t2",
- default_task=(
- "Generate a transcript for a YouTube video on what swarms"
- " are!"
- ),
- t="Generate a transcript for a YouTube video on what swarms are!",
- # t2="Summarize the transcript",
- # t3="Finalize the transcript",
+
+# Using rearrange function
+output = rearrange(
+ agents,
+ flow,
+ "Create a format to express and communicate swarms of llms in a structured manner for youtube",
)
-# print(results)
+print(output)
```
+## `HierarchicalSwarm`
+Coming soon...
+
+
+## `AgentLoadBalancer`
+Coming soon...
+
+
+## `GraphSwarm`
+Coming soon...
+
---
@@ -1267,6 +1068,26 @@ Documentation is located here at: [swarms.apac.ai](https://swarms.apac.ai)
----
+## Folder Structure
+The swarms package has been meticulously crafted for extreme usability and understanding. It is split into modules such as `swarms.agents`, which holds pre-built agents, and `swarms.structs`, which holds a vast array of structures like `Agent` and multi-agent structures. The three most important modules are `structs`, `models`, and `agents`.
+
+```sh
+├── __init__.py
+├── agents
+├── artifacts
+├── memory
+├── schemas
+├── models
+├── prompts
+├── structs
+├── telemetry
+├── tools
+├── utils
+└── workers
+```
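+
+These module names map directly to import paths. As a quick sketch (exact public exports can vary by version):
+
+```python
+# Pull core building blocks from the most important modules
+from swarms.structs import Agent       # agent and multi-agent structures
+from swarms.models import OpenAIChat   # model wrappers
+```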
+
+----
+
## 🫶 Contributions:
The easiest way to contribute is to pick any issue with the `good first issue` tag 💪. Read the Contributing guidelines [here](/CONTRIBUTING.md). Bug Report? [File here](https://github.com/swarms/gateway/issues) | Feature Request? [File here](https://github.com/swarms/gateway/issues)
@@ -1297,68 +1118,33 @@ Join our growing community around the world, for real-time support, ideas, and d
Book a discovery call to learn how Swarms can lower your operating costs by 40% with swarms of autonomous agents at lightspeed. [Click here to book a time that works for you!](https://calendly.com/swarm-corp/30min?month=2023-11)
-
## Accelerate Backlog
-Help us accelerate our backlog by supporting us financially! Note, we're an open source corporation and so all the revenue we generate is through donations at the moment ;)
+Accelerate bug fixes, features, and demos by supporting us here:
-## File Structure
-The swarms package has been meticlously crafted for extreme use-ability and understanding, the swarms package is split up into various modules such as `swarms.agents` that holds pre-built agents, `swarms.structs` that holds a vast array of structures like `Agent` and multi agent structures. The 3 most important are `structs`, `models`, and `agents`.
-
-```sh
-├── __init__.py
-├── agents
-├── artifacts
-├── chunkers
-├── cli
-├── loaders
-├── memory
-├── models
-├── prompts
-├── structs
-├── telemetry
-├── tokenizers
-├── tools
-├── utils
-└── workers
-```
-
## Docker Instructions
-
-This application uses Docker with CUDA support. To build and run the Docker container, follow these steps:
-
-### Prerequisites
-
-- Make sure you have [Docker installed](https://docs.docker.com/get-docker/) on your machine.
-- Ensure your machine has an NVIDIA GPU and [NVIDIA Docker support](https://github.com/NVIDIA/nvidia-docker) installed.
-
-### Building the Docker Image
-
-To build the Docker image, navigate to the root directory containing the `Dockerfile` and run the following command:
-
-```bash
-docker build --gpus all -t swarms
-```
-### Running the Docker Container
-To run the Docker container, use the following command:
-
-`docker run --gpus all -p 4000:80 swarms`
-
-Replace swarms with the name of your Docker image, and replace 4000:80 with your actual port mapping. The format is hostPort:containerPort.
-
-Now, your application should be running with CUDA support!
+- [Learn More Here About Deployments In Docker](https://swarms.apac.ai/en/latest/docker_setup/)
## Swarm Newsletter 🤖 🤖 🤖 📧
Sign up to the Swarm newsletter to receive updates on the latest autonomous agent research papers, step-by-step guides on creating multi-agent apps, and much more Swarmie goodiness 😊
-
[CLICK HERE TO SIGNUP](https://docs.google.com/forms/d/e/1FAIpQLSfqxI2ktPR9jkcIwzvHL0VY6tEIuVPd-P2fOWKnd6skT9j1EQ/viewform?usp=sf_link)
# License
Apache License
-
-
+# Citation
+Please cite Swarms in your paper or your project if you found it beneficial in any way! Appreciate you.
+
+```bibtex
+@misc{swarms,
+ author = {Gomez, Kye},
+ title = {{Swarms: The Multi-Agent Collaboration Framework}},
+ howpublished = {\url{https://github.com/kyegomez/swarms}},
+ year = {2023},
+ note = {Accessed: Date}
+}
+```
diff --git a/docs/docker_setup.md b/docs/swarms/install/docker_setup.md
similarity index 100%
rename from docs/docker_setup.md
rename to docs/swarms/install/docker_setup.md
diff --git a/docs/install.md b/docs/swarms/install/install.md
similarity index 100%
rename from docs/install.md
rename to docs/swarms/install/install.md
diff --git a/docs/swarms/memory/diy_memory.md b/docs/swarms/memory/diy_memory.md
index 2bcb056e..ffb98cae 100644
--- a/docs/swarms/memory/diy_memory.md
+++ b/docs/swarms/memory/diy_memory.md
@@ -261,49 +261,181 @@ In this example, we define a `PineconeVectorDatabase` class that inherits from `
Chroma is an open-source vector database library that provides efficient storage, retrieval, and manipulation of vector data using various backends, including DuckDB, Chromadb, and more.
```python
+import logging
+import os
+import uuid
+from typing import Optional
-from chromadb.client import Client
-from swarms import BaseVectorDatabase
-
-class ChromaVectorDatabase(MyCustomVectorDatabase):
-
- def __init__(self, *args, **kwargs):
-
- super().__init__(*args, **kwargs)
-
- # Chroma initialization
-
- self.client = Client()
-
- self.collection = self.client.get_or_create_collection("vector_collection")
-
- def connect(self):
-
- # Chroma connection logic
-
- pass
-
- def close(self):
-
- # Close Chroma connection
-
- pass
-
- def query(self, query: str):
-
- # Execute Chroma query
-
- results = self.collection.query(query)
+import chromadb
+from dotenv import load_dotenv
- return results
+from swarms.utils.data_to_text import data_to_text
+from swarms.utils.markdown_message import display_markdown_message
+from swarms.memory.base_vectordb import BaseVectorDatabase
- def add(self, doc: str):
-
- # Add document to Chroma collection
+# Load environment variables
+load_dotenv()
- self.collection.add(doc)
- # Implement other abstract methods
+# Results storage using local ChromaDB
+class ChromaDB(BaseVectorDatabase):
+ """
+
+ ChromaDB database
+
+ Args:
+ metric (str): The similarity metric to use.
+        output_dir (str): The name of the collection to store the results in.
+        limit_tokens (int, optional): The maximum number of tokens to use for the query. Defaults to 1000.
+        n_results (int, optional): The number of results to retrieve. Defaults to 3.
+        docs_folder (str, optional): A folder of documents to ingest at startup. Defaults to None.
+        verbose (bool, optional): Whether to enable verbose ChromaDB logging. Defaults to False.
+
+    Methods:
+        add: Add a document to the collection.
+        query: Query documents from the collection.
+
+ Examples:
+ >>> chromadb = ChromaDB(
+ >>> metric="cosine",
+        >>> output_dir="results",
+        >>> )
+        >>> chromadb.add(document)
+        >>> docs = chromadb.query(query_text)
+ """
+
+ def __init__(
+ self,
+ metric: str = "cosine",
+ output_dir: str = "swarms",
+ limit_tokens: Optional[int] = 1000,
+ n_results: int = 3,
+ docs_folder: str = None,
+ verbose: bool = False,
+ *args,
+ **kwargs,
+ ):
+ self.metric = metric
+ self.output_dir = output_dir
+ self.limit_tokens = limit_tokens
+ self.n_results = n_results
+ self.docs_folder = docs_folder
+ self.verbose = verbose
+
+        # Raise ChromaDB logging to INFO when verbose is enabled
+ if verbose:
+ logging.getLogger("chromadb").setLevel(logging.INFO)
+
+ # Create Chroma collection
+ chroma_persist_dir = "chroma"
+ chroma_client = chromadb.PersistentClient(
+ settings=chromadb.config.Settings(
+ persist_directory=chroma_persist_dir,
+ ),
+ *args,
+ **kwargs,
+ )
+
+ # Create ChromaDB client
+ self.client = chromadb.Client()
+
+ # Create Chroma collection
+ self.collection = chroma_client.get_or_create_collection(
+ name=output_dir,
+ metadata={"hnsw:space": metric},
+ *args,
+ **kwargs,
+ )
+ display_markdown_message(
+ "ChromaDB collection created:"
+ f" {self.collection.name} with metric: {self.metric} and"
+ f" output directory: {self.output_dir}"
+ )
+
+        # If a docs folder was provided, ingest its files
+ if docs_folder:
+ display_markdown_message(
+ f"Traversing directory: {docs_folder}"
+ )
+ self.traverse_directory()
+
+ def add(
+ self,
+ document: str,
+ *args,
+ **kwargs,
+ ):
+ """
+ Add a document to the ChromaDB collection.
+
+ Args:
+ document (str): The document to be added.
+
+ Returns:
+ str: The ID of the added document.
+ """
+ try:
+ doc_id = str(uuid.uuid4())
+ self.collection.add(
+ ids=[doc_id],
+ documents=[document],
+ *args,
+ **kwargs,
+ )
+ print("-----------------")
+ print("Document added successfully")
+ print("-----------------")
+ return doc_id
+ except Exception as e:
+ raise Exception(f"Failed to add document: {str(e)}")
+
+ def query(
+ self,
+ query_text: str,
+ *args,
+ **kwargs,
+ ):
+ """
+ Query documents from the ChromaDB collection.
+
+ Args:
+            query_text (str): The query string. The number of documents
+                retrieved is controlled by ``n_results`` set at construction.
+
+ Returns:
+ dict: The retrieved documents.
+ """
+ try:
+ docs = self.collection.query(
+ query_texts=[query_text],
+ n_results=self.n_results,
+ *args,
+ **kwargs,
+ )["documents"]
+ return docs[0]
+ except Exception as e:
+ raise Exception(f"Failed to query documents: {str(e)}")
+
+ def traverse_directory(self):
+ """
+        Traverse every file in ``self.docs_folder`` and its subdirectories,
+        converting each file to text and adding it to the database.
+        Returns:
+        - The ID of the last document added, or False if no files were found.
+ """
+ added_to_db = False
+
+        for root, _, files in os.walk(self.docs_folder):
+            for file in files:
+                # Join against the walked root so files in subdirectories resolve correctly
+                file = os.path.join(root, file)
+                data = data_to_text(file)
+ added_to_db = self.add(str(data))
+ print(f"{file} added to Database")
+
+ return added_to_db
```
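+
+A minimal usage sketch for the class above (hypothetical folder name; assumes a local `docs` directory of text files):
+
+```python
+# Instantiate the store, optionally ingesting a folder of documents at startup
+memory = ChromaDB(metric="cosine", output_dir="swarms", docs_folder="docs")
+
+# Add a single document, then query it back
+memory.add("Swarms enables production-grade multi-agent collaboration.")
+print(memory.query("multi-agent collaboration"))
+```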
diff --git a/docs/swarms/models/gpt4o.md b/docs/swarms/models/gpt4o.md
new file mode 100644
index 00000000..7b53a742
--- /dev/null
+++ b/docs/swarms/models/gpt4o.md
@@ -0,0 +1,150 @@
+# Documentation for GPT4o Module
+
+## Overview and Introduction
+
+The `GPT4o` module is a multi-modal conversational model based on OpenAI's GPT-4 architecture. It extends the functionality of the `BaseMultiModalModel` class, enabling it to handle both text and image inputs for generating diverse and contextually rich responses. This module leverages the power of the GPT-4 model to enhance interactions by integrating visual information with textual prompts, making it highly relevant for applications requiring multi-modal understanding and response generation.
+
+### Key Concepts
+- **Multi-Modal Model**: A model that can process and generate responses based on multiple types of inputs, such as text and images.
+- **System Prompt**: A predefined prompt to guide the conversation flow.
+- **Temperature**: A parameter that controls the randomness of the response generation.
+- **Max Tokens**: The maximum number of tokens (words or word pieces) in the generated response.
+
+## Class Definition
+
+### `GPT4o` Class
+
+
+### Parameters
+
+| Parameter | Type | Description |
+|-----------------|--------|--------------------------------------------------------------------------------------|
+| `system_prompt` | `str` | The system prompt to be used in the conversation. |
+| `temperature` | `float`| The temperature parameter for generating diverse responses. Default is `0.1`. |
+| `max_tokens` | `int` | The maximum number of tokens in the generated response. Default is `300`. |
+| `openai_api_key`| `str` | The API key for accessing the OpenAI GPT-4 API. |
+| `*args` | | Additional positional arguments. |
+| `**kwargs` | | Additional keyword arguments. |
+
+## Functionality and Usage
+
+### `encode_image` Function
+
+The `encode_image` function is used to encode an image file into a base64 string format, which can then be included in the request to the GPT-4 API.
+
+#### Parameters
+
+| Parameter | Type | Description |
+|---------------|--------|----------------------------------------------|
+| `image_path` | `str` | The local path to the image file to be encoded. |
+
+#### Returns
+
+| Return Type | Description |
+|-------------|---------------------------------|
+| `str` | The base64 encoded string of the image. |
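+
+A minimal sketch of such an encoder, using only the Python standard library (the actual implementation may differ):
+
+```python
+import base64
+
+
+def encode_image(image_path: str) -> str:
+    # Read the raw image bytes and return them as a base64-encoded string
+    with open(image_path, "rb") as image_file:
+        return base64.b64encode(image_file.read()).decode("utf-8")
+```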
+
+### `GPT4o.__init__` Method
+
+The constructor for the `GPT4o` class initializes the model with the specified parameters and sets up the OpenAI client.
+
+### `GPT4o.run` Method
+
+The `run` method executes the GPT-4o model to generate a response based on the provided task and optional image.
+
+#### Parameters
+
+| Parameter | Type | Description |
+|---------------|--------|----------------------------------------------------|
+| `task` | `str` | The task or user prompt for the conversation. |
+| `local_img` | `str` | The local path to the image file. |
+| `img` | `str` | The URL of the image. |
+| `*args` | | Additional positional arguments. |
+| `**kwargs` | | Additional keyword arguments. |
+
+#### Returns
+
+| Return Type | Description |
+|-------------|--------------------------------------------------|
+| `str` | The generated response from the GPT-4o model. |
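+
+For intuition, here is a minimal sketch of how such a multi-modal request could be assembled, assuming the official `openai` Python client and its Chat Completions message format; this is not the module's actual implementation.
+
+```python
+from openai import OpenAI
+
+client = OpenAI(api_key="your_openai_api_key")
+
+# A user message whose content mixes a text prompt with an image URL
+messages = [
+    {"role": "system", "content": "You are a visual assistant."},
+    {
+        "role": "user",
+        "content": [
+            {"type": "text", "text": "Describe this image."},
+            {
+                "type": "image_url",
+                "image_url": {"url": "http://example.com/image.jpg"},
+            },
+        ],
+    },
+]
+
+response = client.chat.completions.create(
+    model="gpt-4o",  # model name assumed for illustration
+    messages=messages,
+    temperature=0.1,
+    max_tokens=300,
+)
+print(response.choices[0].message.content)
+```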
+
+## Usage Examples
+
+### Example 1: Basic Text Prompt
+
+```python
+from swarms import GPT4o
+
+# Initialize the model
+model = GPT4o(
+ system_prompt="You are a helpful assistant.",
+ temperature=0.7,
+ max_tokens=150,
+ openai_api_key="your_openai_api_key"
+)
+
+# Define the task
+task = "What is the capital of France?"
+
+# Generate response
+response = model.run(task)
+print(response)
+```
+
+### Example 2: Text Prompt with Local Image
+
+```python
+from swarms import GPT4o
+
+# Initialize the model
+model = GPT4o(
+ system_prompt="Describe the image content.",
+ temperature=0.5,
+ max_tokens=200,
+ openai_api_key="your_openai_api_key"
+)
+
+# Define the task and image path
+task = "Describe the content of this image."
+local_img = "path/to/your/image.jpg"
+
+# Generate response
+response = model.run(task, local_img=local_img)
+print(response)
+```
+
+### Example 3: Text Prompt with Image URL
+
+```python
+from swarms import GPT4o
+
+# Initialize the model
+model = GPT4o(
+ system_prompt="You are a visual assistant.",
+ temperature=0.6,
+ max_tokens=250,
+ openai_api_key="your_openai_api_key"
+)
+
+# Define the task and image URL
+task = "What can you tell about the scenery in this image?"
+img_url = "http://example.com/image.jpg"
+
+# Generate response
+response = model.run(task, img=img_url)
+print(response)
+```
+
+## Additional Information and Tips
+
+- **API Key Management**: Ensure that your OpenAI API key is securely stored and managed. Do not hard-code it in your scripts. Use environment variables or a secure storage solution (see the sketch after this list).
+- **Image Encoding**: The `encode_image` function is crucial for converting images to a base64 format suitable for API requests. Ensure that the images are accessible and properly formatted.
+- **Temperature Parameter**: Adjust the `temperature` parameter to control the creativity of the model's responses. Lower values make the output more deterministic, while higher values increase randomness.
+- **Token Limit**: Be mindful of the `max_tokens` parameter to avoid exceeding the API's token limits. This parameter controls the length of the generated responses.
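+
+A minimal sketch of loading the key from the environment, assuming a local `.env` file containing `OPENAI_API_KEY` and the `python-dotenv` package:
+
+```python
+import os
+
+from dotenv import load_dotenv
+
+from swarms import GPT4o
+
+load_dotenv()  # read variables from a local .env file into the environment
+
+model = GPT4o(openai_api_key=os.getenv("OPENAI_API_KEY"))
+```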
+
+## References and Resources
+
+- [OpenAI API Documentation](https://beta.openai.com/docs/)
+- [Python Base64 Encoding](https://docs.python.org/3/library/base64.html)
+- [dotenv Documentation](https://saurabh-kumar.com/python-dotenv/)
+- [BaseMultiModalModel Documentation](https://swarms.apac.ai)
\ No newline at end of file
diff --git a/docs/diy_your_own_agent.md b/docs/swarms/structs/diy_your_own_agent.md
similarity index 100%
rename from docs/diy_your_own_agent.md
rename to docs/swarms/structs/diy_your_own_agent.md
diff --git a/docs/swarms/workers/abstract_worker.md b/docs/swarms/workers/abstract_worker.md
deleted file mode 100644
index e7d3c8a8..00000000
--- a/docs/swarms/workers/abstract_worker.md
+++ /dev/null
@@ -1,258 +0,0 @@
-# AbstractWorker Class
-====================
-
-The `AbstractWorker` class is an abstract class for AI workers. An AI worker can communicate with other workers and perform actions. Different workers can differ in what actions they perform in the `receive` method.
-
-## Class Definition
-----------------
-
-```
-class AbstractWorker:
- """(In preview) An abstract class for AI worker.
-
- An worker can communicate with other workers and perform actions.
- Different workers can differ in what actions they perform in the `receive` method.
- """
-```
-
-
-## Initialization
---------------
-
-The `AbstractWorker` class is initialized with a single parameter:
-
-- `name` (str): The name of the worker.
-
-```
-def __init__(
- self,
- name: str,
-):
- """
- Args:
- name (str): name of the worker.
- """
- self._name = name
-```
-
-
-## Properties
-----------
-
-The `AbstractWorker` class has a single property:
-
-- `name`: Returns the name of the worker.
-
-```
-@property
-def name(self):
- """Get the name of the worker."""
- return self._name
-```
-
-
-## Methods
--------
-
-The `AbstractWorker` class has several methods:
-
-### `run`
-
-The `run` method is used to run the worker agent once. It takes a single parameter:
-
-- `task` (str): The task to be run.
-
-```
-def run(
- self,
- task: str
-):
- """Run the worker agent once"""
-```
-
-
-### `send`
-
-The `send` method is used to send a message to another worker. It takes three parameters:
-
-- `message` (Union[Dict, str]): The message to be sent.
-- `recipient` (AbstractWorker): The recipient of the message.
-- `request_reply` (Optional[bool]): If set to `True`, the method will request a reply from the recipient.
-
-```
-def send(
- self,
- message: Union[Dict, str],
- recipient: AbstractWorker,
- request_reply: Optional[bool] = None
-):
- """(Abstract method) Send a message to another worker."""
-```
-
-
-### `a_send`
-
-The `a_send` method is the asynchronous version of the `send` method. It takes the same parameters as the `send` method.
-
-```
-async def a_send(
- self,
- message: Union[Dict, str],
- recipient: AbstractWorker,
- request_reply: Optional[bool] = None
-):
- """(Abstract async method) Send a message to another worker."""
-```
-
-
-### `receive`
-
-The `receive` method is used to receive a message from another worker. It takes three parameters:
-
-- `message` (Union[Dict, str]): The message to be received.
-- `sender` (AbstractWorker): The sender of the message.
-- `request_reply` (Optional[bool]): If set to `True`, the method will request a reply from the sender.
-
-```
-def receive(
- self,
- message: Union[Dict, str],
- sender: AbstractWorker,
- request_reply: Optional[bool] = None
-):
- """(Abstract method) Receive a message from another worker."""
-```
-
-
-### `a_receive`
-
-The `a_receive` method is the asynchronous version of the `receive` method. It takes the same parameters as the `receive` method.
-
-```
-async def a_receive(
- self,
- message: Union[Dict, str],
- sender: AbstractWorker,
- request_reply: Optional[bool] = None
-):
- """(Abstract async method) Receive a message from another worker."""
-```
-
-
-### `reset`
-
-The `reset` method is used to reset the worker.
-
-```
-def reset(self):
- """(Abstract method) Reset the worker."""
-```
-
-
-### `generate_reply`
-
-The `generate_reply` method is used to generate a reply based on the received messages. It takes two parameters:
-
-- `messages` (Optional[List[Dict]]): A list of messages received.
-- `sender` (AbstractWorker): The sender of the messages.
-
-The method returns a string, a dictionary, or `None`. If `None` is returned, no reply is generated.
-
-```
-def generate_reply(
- self,
- messages: Optional[List[Dict]] = None,
- sender: AbstractWorker,
- **kwargs,
-) -> Union[str, Dict, None]:
- """(Abstract method) Generate a reply based on the received messages.
-
- Args:
- messages (list[dict]): a list of messages received.
- sender: sender of an Agent instance.
- Returns:
- str or dict or None: the generated reply. If None, no reply is generated.
- """
-```
-
-
-### `a_generate_reply`
-
-The `a_generate_reply` method is the asynchronous version of the `generate_reply` method. It
-
-takes the same parameters as the `generate_reply` method.
-
-```
-async def a_generate_reply(
- self,
- messages: Optional[List[Dict]] = None,
- sender: AbstractWorker,
- **kwargs,
-) -> Union[str, Dict, None]:
- """(Abstract async method) Generate a reply based on the received messages.
-
- Args:
- messages (list[dict]): a list of messages received.
- sender: sender of an Agent instance.
- Returns:
- str or dict or None: the generated reply. If None, no reply is generated.
- """
-```
-
-
-Usage Examples
---------------
-
-### Example 1: Creating an AbstractWorker
-
-```
-from swarms.worker.base import AbstractWorker
-
-worker = AbstractWorker(name="Worker1")
-print(worker.name) # Output: Worker1
-```
-
-
-In this example, we create an instance of `AbstractWorker` named "Worker1" and print its name.
-
-### Example 2: Sending a Message
-
-```
-from swarms.worker.base import AbstractWorker
-
-worker1 = AbstractWorker(name="Worker1")
-worker2 = AbstractWorker(name="Worker2")
-
-message = {"content": "Hello, Worker2!"}
-worker1.send(message, worker2)
-```
-
-
-In this example, "Worker1" sends a message to "Worker2". The message is a dictionary with a single key-value pair.
-
-### Example 3: Receiving a Message
-
-```
-from swarms.worker.base import AbstractWorker
-
-worker1 = AbstractWorker(name="Worker1")
-worker2 = AbstractWorker(name="Worker2")
-
-message = {"content": "Hello, Worker2!"}
-worker1.send(message, worker2)
-
-received_message = worker2.receive(message, worker1)
-print(received_message) # Output: {"content": "Hello, Worker2!"}
-```
-
-
-In this example, "Worker1" sends a message to "Worker2". "Worker2" then receives the message and prints it.
-
-Notes
------
-
-- The `AbstractWorker` class is an abstract class, which means it cannot be instantiated directly. Instead, it should be subclassed, and at least the `send`, `receive`, `reset`, and `generate_reply` methods should be overridden.
-- The `send` and `receive` methods are abstract methods, which means they must be implemented in any subclass of `AbstractWorker`.
-- The `a_send`, `a_receive`, and `a_generate_reply` methods are asynchronous methods, which means they return a coroutine that can be awaited using the `await` keyword.
-- The `generate_reply` method is used to generate a reply based on the received messages. The exact implementation of this method will depend on the specific requirements of your application.
-- The `reset` method is used to reset the state of the worker. The exact implementation of this method will depend on the specific requirements of your application.
\ No newline at end of file
diff --git a/docs/swarms/workers/base.md b/docs/swarms/workers/base.md
deleted file mode 100644
index 9da45ba3..00000000
--- a/docs/swarms/workers/base.md
+++ /dev/null
@@ -1,403 +0,0 @@
-# `AbstractWorker` Documentation
-
-## Table of Contents
-
-1. [Introduction](#introduction)
-2. [Abstract Worker](#abstract-worker)
- 1. [Class Definition](#class-definition)
- 2. [Attributes](#attributes)
- 3. [Methods](#methods)
-3. [Tutorial: Creating Custom Workers](#tutorial-creating-custom-workers)
-4. [Conclusion](#conclusion)
-
----
-
-## 1. Introduction
-
-Welcome to the documentation for the Swarms library, a powerful tool for building and simulating swarm architectures. This library provides a foundation for creating and managing autonomous workers that can communicate, collaborate, and perform various tasks in a coordinated manner.
-
-In this documentation, we will cover the `AbstractWorker` class, which serves as the fundamental building block for creating custom workers in your swarm simulations. We will explain the class's architecture, attributes, and methods in detail, providing practical examples to help you understand how to use it effectively.
-
-Whether you want to simulate a team of autonomous robots, a group of AI agents, or any other swarm-based system, the Swarms library is here to simplify the process and empower you to build complex simulations.
-
----
-
-## 2. Abstract Worker
-
-### 2.1 Class Definition
-
-The `AbstractWorker` class is an abstract base class that serves as the foundation for creating worker agents in your swarm simulations. It defines a set of methods that should be implemented by subclasses to customize the behavior of individual workers.
-
-Here is the class definition:
-
-```python
-class AbstractWorker:
- def __init__(self, name: str):
- """
- Args:
- name (str): Name of the worker.
- """
-
- @property
- def name(self):
- """Get the name of the worker."""
-
- def run(self, task: str):
- """Run the worker agent once."""
-
- def send(
- self, message: Union[Dict, str], recipient, request_reply: Optional[bool] = None
- ):
- """Send a message to another worker."""
-
- async def a_send(
- self, message: Union[Dict, str], recipient, request_reply: Optional[bool] = None
- ):
- """Send a message to another worker asynchronously."""
-
- def receive(
- self, message: Union[Dict, str], sender, request_reply: Optional[bool] = None
- ):
- """Receive a message from another worker."""
-
- async def a_receive(
- self, message: Union[Dict, str], sender, request_reply: Optional[bool] = None
- ):
- """Receive a message from another worker asynchronously."""
-
- def reset(self):
- """Reset the worker."""
-
- def generate_reply(
- self, messages: Optional[List[Dict]] = None, sender=None, **kwargs
- ) -> Union[str, Dict, None]:
- """Generate a reply based on received messages."""
-
- async def a_generate_reply(
- self, messages: Optional[List[Dict]] = None, sender=None, **kwargs
- ) -> Union[str, Dict, None]:
- """Generate a reply based on received messages asynchronously."""
-```
-
-### 2.2 Attributes
-
-- `name (str)`: The name of the worker, which is set during initialization.
-
-### 2.3 Methods
-
-Now, let's delve into the methods provided by the `AbstractWorker` class and understand their purposes and usage.
-
-#### `__init__(self, name: str)`
-
-The constructor method initializes a worker with a given name.
-
-**Parameters:**
-- `name (str)`: The name of the worker.
-
-**Usage Example:**
-
-```python
-worker = AbstractWorker("Worker1")
-```
-
-#### `name` (Property)
-
-The `name` property allows you to access the name of the worker.
-
-**Usage Example:**
-
-```python
-worker_name = worker.name
-```
-
-#### `run(self, task: str)`
-
-The `run()` method is a placeholder for running the worker. You can customize this method in your subclass to define the specific actions the worker should perform.
-
-**Parameters:**
-- `task (str)`: A task description or identifier.
-
-**Usage Example (Customized Subclass):**
-
-```python
-class MyWorker(AbstractWorker):
- def run(self, task: str):
- print(f"{self.name} is performing task: {task}")
-
-
-worker = MyWorker("Worker1")
-worker.run("Collect data")
-```
-
-#### `send(self, message: Union[Dict, str], recipient, request_reply: Optional[bool] = None)`
-
-The `send()` method allows the worker to send a message to another worker or recipient. The message can be either a dictionary or a string.
-
-**Parameters:**
-- `message (Union[Dict, str])`: The message to be sent.
-- `recipient`: The recipient worker or entity.
-- `request_reply (Optional[bool])`: If `True`, the sender requests a reply from the recipient. If `False`, no reply is requested. Default is `None`.
-
-**Usage Example:**
-
-```python
-worker1 = AbstractWorker("Worker1")
-worker2 = AbstractWorker("Worker2")
-
-message = "Hello, Worker2!"
-worker1.send(message, worker2)
-```
-
-#### `a_send(self, message: Union[Dict, str], recipient, request_reply: Optional[bool] = None)`
-
-The `a_send()` method is an asynchronous version of the `send()` method, allowing the worker to send messages asynchronously.
-
-**Parameters:** (Same as `send()`)
-
-**Usage Example:**
-
-```python
-import asyncio
-
-
-async def main():
- worker1 = AbstractWorker("Worker1")
- worker2 = AbstractWorker("Worker2")
-
- message = "Hello, Worker2!"
- await worker1.a_send(message, worker2)
-
-
-loop = asyncio.get_event_loop()
-loop.run_until_complete(main())
-```
-
-#### `receive(self, message: Union[Dict, str], sender, request_reply: Optional[bool] = None)`
-
-The `receive()` method allows the worker to receive messages from other workers or senders. You can customize this method in your subclass to define how the worker handles incoming messages.
-
-**Parameters:**
-- `message (Union[Dict, str])`: The received message.
-- `sender`: The sender worker or entity.
-- `request_reply (Optional[bool])`: Indicates whether a reply is requested. Default is `None`.
-
-**Usage Example (Customized Subclass):**
-
-```python
-class MyWorker(AbstractWorker):
- def receive(self, message: Union[Dict, str], sender, request_reply: Optional[bool] = None):
- if isinstance(message, str):
- print(f"{self.name} received a text message from {sender.name}: {message}")
- elif isinstance(message, dict):
- print(f"{self.name} received a dictionary message from {sender.name}: {message}")
-
-worker1 = MyWorker("Worker1")
-worker2 = MyWorker("Worker2")
-
-message1 = "Hello, Worker2!"
-message2 = {"data": 42}
-
-worker1.receive(message1, worker2)
-worker1.receive(message2, worker2)
-```
-
-#### `a_receive(self, message: Union[Dict, str], sender, request_reply: Optional[bool] = None)`
-
-The `a_receive()` method is an asynchronous version of the `receive()` method, allowing the worker to receive messages asynchronously.
-
-**Parameters:** (Same as `receive()`)
-
-**Usage Example:**
-
-```python
-import asyncio
-
-
-async def main():
- worker1 = AbstractWorker("Worker1")
- worker2 = AbstractWorker("Worker2")
-
- message1 = "Hello, Worker2!"
- message2 = {"data": 42}
-
- await worker1.a_receive(message1, worker2)
- await worker1.a_receive(message2, worker2)
-
-
-loop = asyncio.get_event_loop()
-loop.run_until_complete(main())
-```
-
-#### `reset(self)`
-
-The `reset()` method is a placeholder for resetting the worker. You can customize this method in your subclass to define how the worker should reset its state.
-
-**Usage Example (Customized Subclass):**
-
-```python
-class MyWorker(AbstractWorker):
- def reset(self):
- print(f"{self.name} has been reset.")
-
-
-worker = MyWorker("Worker1")
-worker.reset()
-```
-
-#### `generate_reply(self, messages: Optional[List[Dict]] = None, sender=None, **kwargs) -> Union[str, Dict, None]`
-
-The `generate_reply()` method is a placeholder for generating a reply based on received messages. You can customize this method in your subclass to define the logic for generating replies.
-
-**Parameters:**
-- `messages (Optional[List[Dict]])`: A list of received messages.
-- `sender`: The sender of the reply.
-- `kwargs`: Additional keyword arguments.
-
-**Returns:**
-- `Union[str, Dict, None]`: The generated reply. If `None`, no reply is generated.
-
-**Usage Example (Customized Subclass):**
-
-```python
-class MyWorker(AbstractWorker):
- def generate_reply(
- self, messages: Optional[List[Dict]] = None, sender=None, **kwargs
- ) -> Union[str, Dict, None]:
- if messages:
- # Generate a reply based on received messages
- return f"Received {len(messages)} messages from {sender.name}."
- else:
- return None
-
-
-worker1 = MyWorker("Worker1")
-worker2 = MyWorker("Worker2")
-
-message = "Hello, Worker2!"
-reply = worker2.generate_reply([message], worker1)
-
-if reply:
- print(f"{worker2.name} generated a reply: {reply}")
-```
-
-#### `a_generate_reply(self, messages: Optional[List[Dict]] = None, sender=None, **kwargs) -> Union[str, Dict, None]`
-
-The `a_generate_reply()` method is an asynchronous version of the `generate_reply()` method, allowing the worker to generate replies asynchronously.
-
-**Parameters:** (Same as `generate_reply()`)
-
-**Returns:**
-- `Union[str, Dict, None]`: The generated reply. If `None`, no reply is generated.
-
-**Usage Example:**
-
-```python
-import asyncio
-
-
-async def main():
- worker1 = AbstractWorker("Worker1")
- worker2 = AbstractWorker("Worker2")
-
- message = "Hello, Worker2!"
- reply = await worker2.a_generate_reply([message], worker1)
-
- if reply:
- print(f"{worker2.name} generated a reply: {reply}")
-
-
-loop = asyncio.get_event_loop()
-loop.run_until_complete(main())
-```
-
----
-
-## 3. Tutorial: Creating Custom Workers
-
-In this tutorial, we will walk you through the process of creating custom workers by subclassing the `AbstractWorker` class. You can tailor these workers to perform specific tasks and communicate with other workers in your swarm simulations.
-
-### Step 1: Create a Custom Worker Class
-
-Start by creating a custom worker class that inherits from `AbstractWorker`. Define the `run()` and `receive()` methods to specify the behavior of your worker.
-
-```python
-class CustomWorker(AbstractWorker):
- def run(self, task: str):
- print(f"{self.name} is performing task: {task}")
-
- def receive(
- self, message: Union[Dict, str], sender, request_reply: Optional[bool] = None
- ):
- if isinstance(message, str):
- print(f"{self.name} received a text message from {sender.name}: {message}")
- elif isinstance(message, dict):
- print(
- f"{self.name} received a dictionary message from {sender.name}: {message}"
- )
-```
-
-### Step 2: Create Custom Worker Instances
-
-Instantiate your custom worker instances and give them unique names.
-
-```python
-worker1 = CustomWorker("Worker1")
-worker2 = CustomWorker("Worker2")
-```
-
-### Step 3: Run Custom Workers
-
-Use the `run()` method to make your custom workers perform tasks.
-
-```python
-worker1.run("Collect data")
-worker2.run("Process data")
-```
-
-### Step 4: Communicate Between Workers
-
-Use the `send()` method to send messages between workers. You can customize the `receive()` method to define how your workers handle incoming messages.
-
-```python
-worker1.send("Hello, Worker2!", worker2)
-worker2.send({"data": 42}, worker1)
-
-# Output will show the messages received by the workers
-```
-
-### Step 5: Generate Replies
-
-Customize the `generate_reply()` method to allow your workers to generate replies based on received messages.
-
-```python
-class CustomWorker(AbstractWorker):
- def generate_reply(
- self, messages: Optional[List[Dict]] = None, sender=None, **kwargs
- ) -> Union[str, Dict, None]:
- if messages:
- # Generate a reply based on received messages
- return f"Received {len(messages)} messages from {sender.name}."
- else:
- return None
-```
-
-Now, your custom workers can generate replies to incoming messages.
-
-```python
-reply = worker2.generate_reply(["Hello, Worker2!"], worker1)
-
-if reply:
- print(f"{worker2.name} generated a reply: {reply}")
-```
-
----
-
-## 4. Conclusion
-
-Congratulations! You've learned how to use the Swarms library to create and customize worker agents for swarm simulations. You can now build complex swarm architectures, simulate autonomous systems, and experiment with various communication and task allocation strategies.
-
-Feel free to explore the Swarms library further and adapt it to your specific use cases. If you have any questions or need assistance, refer to the extensive documentation and resources available.
-
-Happy swarming!
\ No newline at end of file
diff --git a/docs/swarms/workers/index.md b/docs/swarms/workers/index.md
deleted file mode 100644
index 3662fd8a..00000000
--- a/docs/swarms/workers/index.md
+++ /dev/null
@@ -1,248 +0,0 @@
-# Module Name: Worker
-
-The `Worker` class encapsulates the idea of a semi-autonomous agent that utilizes a large language model to execute tasks. The module provides a unified interface for AI-driven task execution while combining a series of tools and utilities. It sets up memory storage and retrieval mechanisms for contextual recall and offers an option for human involvement, making it a versatile and adaptive agent for diverse applications.
-
-## **Class Definition**:
-
-```python
-class Worker:
-```
-
-### **Parameters**:
-
-- `model_name` (str, default: "gpt-4"): Name of the language model.
-- `openai_api_key` (str, Optional): API key for accessing OpenAI's models.
-- `ai_name` (str, default: "Autobot Swarm Worker"): Name of the AI agent.
-- `ai_role` (str, default: "Worker in a swarm"): Role description of the AI agent.
-- `external_tools` (list, Optional): A list of external tool objects to be used.
-- `human_in_the_loop` (bool, default: False): If set to `True`, it indicates that human intervention may be required.
-- `temperature` (float, default: 0.5): Sampling temperature for the language model's output. Higher values make the output more random, and lower values make it more deterministic.
-
-### **Methods**:
-
-#### `__init__`:
-
-Initializes the Worker class.
-
-#### `setup_tools`:
-
-Sets up the tools available to the worker. Default tools include reading and writing files, processing CSV data, querying websites, and taking human input. Additional tools can be appended through the `external_tools` parameter.
-
-#### `setup_memory`:
-
-Initializes memory systems using embeddings and a vector store for the worker.
-
-#### `setup_agent`:
-
-Sets up the primary agent using the initialized tools, memory, and language model.
-
-#### `run`:
-
-Executes a given task using the agent.
-
-#### `__call__`:
-
-Makes the Worker class callable. When an instance of the class is called, it will execute the provided task using the agent.
-
-## **Usage Examples**:
-
-### **Example 1**: Basic usage with default parameters:
-
-```python
-from swarms import Worker
-from swarms.models import OpenAIChat
-
-llm = OpenAIChat(
- # enter your api key
- openai_api_key="",
- temperature=0.5,
-)
-
-node = Worker(
- llm=llm,
- ai_name="Optimus Prime",
- openai_api_key="",
- ai_role="Worker in a swarm",
- external_tools=None,
- human_in_the_loop=False,
- temperature=0.5,
-)
-
-task = "What were the winning boston marathon times for the past 5 years (ending in 2022)? Generate a table of the year, name, country of origin, and times."
-response = node.run(task)
-print(response)
-```
-
-### **Example 2**: Usage with custom tools:
-
-```python
-import os
-
-import interpreter
-
-from swarms.agents.hf_agents import HFAgent
-from swarms.agents.omni_modal_agent import OmniModalAgent
-from swarms.models import OpenAIChat
-from swarms.tools.autogpt import tool
-from swarms.workers import Worker
-
-# Initialize API Key
-api_key = ""
-
-
-# Initialize the language model,
-# This model can be swapped out with Anthropic, ETC, Huggingface Models like Mistral, ETC
-llm = OpenAIChat(
- openai_api_key=api_key,
- temperature=0.5,
-)
-
-
-# wrap a function with the tool decorator to make it a tool, then add docstrings for tool documentation
-@tool
-def hf_agent(task: str = None):
- """
- An tool that uses an openai model to call and respond to a task by search for a model on huggingface
- It first downloads the model then uses it.
-
- Rules: Don't call this model for simple tasks like generating a summary, only call this tool for multi modal tasks like generating images, videos, speech, etc
-
- """
- agent = HFAgent(model="text-davinci-003", api_key=api_key)
- response = agent.run(task, text="¡Este es un API muy agradable!")
- return response
-
-
-# wrap a function with the tool decorator to make it a tool
-@tool
-def omni_agent(task: str = None):
- """
- An tool that uses an openai Model to utilize and call huggingface models and guide them to perform a task.
-
- Rules: Don't call this model for simple tasks like generating a summary, only call this tool for multi modal tasks like generating images, videos, speech
- The following tasks are what this tool should be used for:
-
- Tasks omni agent is good for:
- --------------
- document-question-answering
- image-captioning
- image-question-answering
- image-segmentation
- speech-to-text
- summarization
- text-classification
- text-question-answering
- translation
- huggingface-tools/text-to-image
- huggingface-tools/text-to-video
- text-to-speech
- huggingface-tools/text-download
- huggingface-tools/image-transformation
- """
- agent = OmniModalAgent(llm)
- response = agent.run(task)
- return response
-
-
-# Code Interpreter
-@tool
-def compile(task: str):
- """
- Open Interpreter lets LLMs run code (Python, Javascript, Shell, and more) locally.
- You can chat with Open Interpreter through a ChatGPT-like interface in your terminal
- by running $ interpreter after installing.
-
- This provides a natural-language interface to your computer's general-purpose capabilities:
-
- Create and edit photos, videos, PDFs, etc.
- Control a Chrome browser to perform research
- Plot, clean, and analyze large datasets
- ...etc.
- ⚠️ Note: You'll be asked to approve code before it's run.
-
- Rules: Only use when given to generate code or an application of some kind
- """
- task = interpreter.chat(task, return_messages=True)
- interpreter.chat()
- interpreter.reset(task)
-
- os.environ["INTERPRETER_CLI_AUTO_RUN"] = True
- os.environ["INTERPRETER_CLI_FAST_MODE"] = True
- os.environ["INTERPRETER_CLI_DEBUG"] = True
-
-
-# Append tools to an list
-tools = [hf_agent, omni_agent, compile]
-
-
-# Initialize a single Worker node with previously defined tools in addition to it's
-# predefined tools
-node = Worker(
- llm=llm,
- ai_name="Optimus Prime",
- openai_api_key=api_key,
- ai_role="Worker in a swarm",
- external_tools=tools,
- human_in_the_loop=False,
- temperature=0.5,
-)
-
-# Specify task
-task = "What were the winning boston marathon times for the past 5 years (ending in 2022)? Generate a table of the year, name, country of origin, and times."
-
-# Run the node on the task
-response = node.run(task)
-
-# Print the response
-print(response)
-```
-
-### **Example 3**: Usage with human in the loop:
-
-```python
-from swarms import Worker
-from swarms.models import OpenAIChat
-
-llm = OpenAIChat(
- # enter your api key
- openai_api_key="",
- temperature=0.5,
-)
-
-node = Worker(
- llm=llm,
- ai_name="Optimus Prime",
- openai_api_key="",
- ai_role="Worker in a swarm",
- external_tools=None,
- human_in_the_loop=True,
- temperature=0.5,
-)
-
-task = "What were the winning boston marathon times for the past 5 years (ending in 2022)? Generate a table of the year, name, country of origin, and times."
-response = node.run(task)
-print(response)
-```
-
-## **Mathematical Description**:
-
-Conceptually, the `Worker` class can be seen as a function:
-
-\[ W(t, M, K, T, H, \theta) \rightarrow R \]
-
-Where:
-
-- \( W \) = Worker function
-- \( t \) = task to be performed
-- \( M \) = Model (e.g., "gpt-4")
-- \( K \) = OpenAI API key
-- \( T \) = Set of Tools available
-- \( H \) = Human involvement flag (True/False)
-- \( \theta \) = Temperature parameter
-- \( R \) = Result of the task
-
-This mathematical abstraction provides a simple view of the `Worker` class's capability to transform a task input into a desired output using a combination of AI and toolsets.
-
-## **Notes**:
-
-The Worker class acts as a bridge between raw tasks and the tools & AI required to accomplish them. The setup ensures flexibility and versatility. The decorators used in the methods (e.g., log_decorator, error_decorator) emphasize the importance of logging, error handling, and performance measurement, essential for real-world applications.
\ No newline at end of file
diff --git a/example.py b/example.py
index 1887ce63..59ec9392 100644
--- a/example.py
+++ b/example.py
@@ -4,6 +4,7 @@ from swarms import Agent, OpenAIChat
# Initialize the agent
agent = Agent(
agent_name="Transcript Generator",
+ system_prompt="Generate a transcript for a youtube video on what swarms are!",
agent_description=(
"Generate a transcript for a youtube video on what swarms" " are!"
),
diff --git a/movers_swarm.py b/movers_swarm.py
new file mode 100644
index 00000000..7600ec64
--- /dev/null
+++ b/movers_swarm.py
@@ -0,0 +1,152 @@
+"""
+$ pip install swarms
+
+- Add docs into the database
+- Use better llm
+- use better prompts [System and SOPs]
+- Use a open source model like Command R
+- Better SOPS ++ System Prompts
+-
+"""
+
+from swarms import Agent, OpenAIChat
+from playground.memory.chromadb_example import ChromaDB
+from swarms.tools.prebuilt.bing_api import fetch_web_articles_bing_api
+import os
+from dotenv import load_dotenv
+
+load_dotenv()
+
+# System prompt guiding the research agent's summarization behavior.
+
+research_system_prompt = """
+Research Agent LLM Prompt: Summarizing Sources and Content
+
+Objective:
+Your task is to summarize the provided sources and the content within those sources. The goal is to create concise, accurate, and informative summaries that capture the key points of the original content.
+
+Instructions:
+
+1. Identify Key Information:
+ - Extract the most important information from each source. Focus on key facts, main ideas, significant arguments, and critical data.
+
+2. Summarize Clearly and Concisely:
+ - Use clear and straightforward language. Avoid unnecessary details and keep the summary concise.
+ - Ensure that the summary is coherent and easy to understand.
+
+3. Preserve Original Meaning:
+ - While summarizing, maintain the original meaning and intent of the content. Do not omit essential information that changes the context or understanding.
+
+4. Include Relevant Details:
+ - Mention the source title, author, publication date, and any other relevant details that provide context.
+
+5. Structure:
+ - Begin with a brief introduction to the source.
+ - Follow with a summary of the main content.
+ - Conclude with any significant conclusions or implications presented in the source.
+
+"""
+
+
+def movers_agent_system_prompt():
+ return """
+ The Movers Agent is responsible for providing users with fixed-cost estimates for moving services
+ based on the distance between their current location and destination, and the number of rooms in their home.
+ Additionally, the agent allows users to attempt negotiation for better deals using the Retell API.
+
+ Responsibilities:
+ - Provide fixed-cost estimates based on distance and room size.
+ - Allow users to attempt negotiation for better deals using the Retell API.
+
+ Details:
+ - Fixed Costs: Predefined costs for each of the 10 moving companies, with variations based on distance and number of rooms.
+ - Distance Calculation: Use a fixed formula to estimate distances and costs.
+ - Room Size: Standard sizes based on the number of rooms will be used to determine the base cost.
+ - Negotiation: Users can click a "negotiate" button to initiate negotiation via Retell API.
+
+ Tools and Resources Used:
+ - Google Maps API: For calculating distances between the current location and destination.
+ - Retell API: For simulating negotiation conversations.
+ - Streamlit: For displaying estimates and handling user interactions.
+
+ Example Workflow:
+ 1. User inputs their current location, destination, and number of rooms.
+ 2. The agent calculates the distance and estimates the cost using predefined rates.
+ 3. Displays the estimates from 10 different moving companies.
+ 4. Users can click "negotiate" to simulate negotiation via Retell API, adjusting the price within a predefined range.
+ """
+
+
+# Example usage
+
+
+# Initialize
+memory = ChromaDB(
+ output_dir="research_base",
+ n_results=2,
+)
+
+llm = OpenAIChat(
+ temperature=0.2,
+ max_tokens=3500,
+ openai_api_key=os.getenv("OPENAI_API_KEY"),
+)
+
+
+# Initialize the agent
+agent = Agent(
+ agent_name="Research Agent",
+ system_prompt=research_system_prompt,
+ llm=llm,
+ max_loops="auto",
+ autosave=True,
+ dashboard=False,
+ interactive=True,
+ # long_term_memory=memory,
+ tools=[fetch_web_articles_bing_api],
+)
+
+
+# # Initialize the agent
+# agent = Agent(
+# agent_name="Movers Agent",
+# system_prompt=movers_agent_system_prompt(),
+# llm=llm,
+# max_loops=1,
+# autosave=True,
+# dashboard=False,
+# interactive=True,
+# # long_term_memory=memory,
+# # tools=[fetch_web_articles_bing_api],
+# )
+
+
+def perplexity_agent(task: str = None, *args, **kwargs):
+ """
+ This function takes a task as input and uses the Bing API to fetch web articles related to the task.
+ It then combines the task and the fetched articles as prompts and runs them through an agent.
+ The agent generates a response based on the prompts and returns it.
+
+ Args:
+ task (str): The task for which web articles need to be fetched.
+
+ Returns:
+ str: The response generated by the agent.
+ """
+ out = fetch_web_articles_bing_api(
+ task,
+ )
+
+    # Combine the task and the fetched articles into one prompt,
+    # separated by a newline so the two pieces do not run together
+    sources = [task, out]
+    sources_prompts = "\n".join(sources)
+
+ # Run a question
+ agent_response = agent.run(sources_prompts)
+ return agent_response
+
+
+out = perplexity_agent(
+ "What are the indian food restaurant names in standford university avenue? What are their cost ratios"
+)
+print(out)
diff --git a/playground/demos/evelyn_swarmathon_submission/Swarmshackathon2024.ipynb b/playground/demos/evelyn_swarmathon_submission/Swarmshackathon2024.ipynb
new file mode 100644
index 00000000..303f7978
--- /dev/null
+++ b/playground/demos/evelyn_swarmathon_submission/Swarmshackathon2024.ipynb
@@ -0,0 +1,999 @@
+{
+ "nbformat": 4,
+ "nbformat_minor": 0,
+ "metadata": {
+ "colab": {
+ "provenance": [],
+ "machine_shape": "hm",
+ "gpuType": "L4"
+ },
+ "kernelspec": {
+ "name": "python3",
+ "display_name": "Python 3"
+ },
+ "language_info": {
+ "name": "python"
+ },
+ "accelerator": "GPU"
+ },
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "source": [
+ "# Entry for SwarmsHackathon 2024\n",
+ "\n"
+ ],
+ "metadata": {
+ "id": "Qf8eZIT71wba"
+ }
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "## Install Swarms"
+ ],
+ "metadata": {
+ "id": "-rBXNMWV4EWN"
+ }
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {
+ "id": "w4FoSEyP1q_x",
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 1000
+ },
+ "outputId": "ea6b15e7-c53c-47aa-86c6-b24d4aff041b"
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Collecting swarms\n",
+ " Downloading swarms-5.1.4-py3-none-any.whl (338 kB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m339.0/339.0 kB\u001b[0m \u001b[31m6.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hCollecting Pillow==10.3.0 (from swarms)\n",
+ " Downloading pillow-10.3.0-cp310-cp310-manylinux_2_28_x86_64.whl (4.5 MB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m4.5/4.5 MB\u001b[0m \u001b[31m62.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hRequirement already satisfied: PyYAML in /usr/local/lib/python3.10/dist-packages (from swarms) (6.0.1)\n",
+ "Collecting asyncio<4.0,>=3.4.3 (from swarms)\n",
+ " Downloading asyncio-3.4.3-py3-none-any.whl (101 kB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m101.8/101.8 kB\u001b[0m \u001b[31m12.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hCollecting backoff==2.2.1 (from swarms)\n",
+ " Downloading backoff-2.2.1-py3-none-any.whl (15 kB)\n",
+ "Requirement already satisfied: docstring_parser==0.16 in /usr/local/lib/python3.10/dist-packages (from swarms) (0.16)\n",
+ "Collecting langchain-community==0.0.29 (from swarms)\n",
+ " Downloading langchain_community-0.0.29-py3-none-any.whl (1.8 MB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m1.8/1.8 MB\u001b[0m \u001b[31m78.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hCollecting langchain-experimental==0.0.55 (from swarms)\n",
+ " Downloading langchain_experimental-0.0.55-py3-none-any.whl (177 kB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m177.6/177.6 kB\u001b[0m \u001b[31m21.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hCollecting loguru==0.7.2 (from swarms)\n",
+ " Downloading loguru-0.7.2-py3-none-any.whl (62 kB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m62.5/62.5 kB\u001b[0m \u001b[31m8.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hRequirement already satisfied: opencv-python-headless in /usr/local/lib/python3.10/dist-packages (from swarms) (4.9.0.80)\n",
+ "Requirement already satisfied: psutil in /usr/local/lib/python3.10/dist-packages (from swarms) (5.9.5)\n",
+ "Requirement already satisfied: pydantic==2.7.1 in /usr/local/lib/python3.10/dist-packages (from swarms) (2.7.1)\n",
+ "Collecting pypdf==4.1.0 (from swarms)\n",
+ " Downloading pypdf-4.1.0-py3-none-any.whl (286 kB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m286.1/286.1 kB\u001b[0m \u001b[31m31.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hCollecting python-dotenv (from swarms)\n",
+ " Downloading python_dotenv-1.0.1-py3-none-any.whl (19 kB)\n",
+ "Collecting ratelimit==2.2.1 (from swarms)\n",
+ " Downloading ratelimit-2.2.1.tar.gz (5.3 kB)\n",
+ " Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
+ "Collecting sentry-sdk (from swarms)\n",
+ " Downloading sentry_sdk-2.3.1-py2.py3-none-any.whl (289 kB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m289.0/289.0 kB\u001b[0m \u001b[31m27.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hRequirement already satisfied: tenacity==8.3.0 in /usr/local/lib/python3.10/dist-packages (from swarms) (8.3.0)\n",
+ "Requirement already satisfied: toml in /usr/local/lib/python3.10/dist-packages (from swarms) (0.10.2)\n",
+ "Requirement already satisfied: torch<3.0,>=2.1.1 in /usr/local/lib/python3.10/dist-packages (from swarms) (2.3.0+cu121)\n",
+ "Requirement already satisfied: transformers<5.0.0,>=4.39.0 in /usr/local/lib/python3.10/dist-packages (from swarms) (4.41.1)\n",
+ "Requirement already satisfied: SQLAlchemy<3,>=1.4 in /usr/local/lib/python3.10/dist-packages (from langchain-community==0.0.29->swarms) (2.0.30)\n",
+ "Requirement already satisfied: aiohttp<4.0.0,>=3.8.3 in /usr/local/lib/python3.10/dist-packages (from langchain-community==0.0.29->swarms) (3.9.5)\n",
+ "Collecting dataclasses-json<0.7,>=0.5.7 (from langchain-community==0.0.29->swarms)\n",
+ " Downloading dataclasses_json-0.6.6-py3-none-any.whl (28 kB)\n",
+ "Collecting langchain-core<0.2.0,>=0.1.33 (from langchain-community==0.0.29->swarms)\n",
+ " Downloading langchain_core-0.1.52-py3-none-any.whl (302 kB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m302.9/302.9 kB\u001b[0m \u001b[31m32.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hCollecting langsmith<0.2.0,>=0.1.0 (from langchain-community==0.0.29->swarms)\n",
+ " Downloading langsmith-0.1.67-py3-none-any.whl (124 kB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m124.4/124.4 kB\u001b[0m \u001b[31m13.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hRequirement already satisfied: numpy<2,>=1 in /usr/local/lib/python3.10/dist-packages (from langchain-community==0.0.29->swarms) (1.25.2)\n",
+ "Requirement already satisfied: requests<3,>=2 in /usr/local/lib/python3.10/dist-packages (from langchain-community==0.0.29->swarms) (2.31.0)\n",
+ "Collecting langchain<0.2.0,>=0.1.13 (from langchain-experimental==0.0.55->swarms)\n",
+ " Downloading langchain-0.1.20-py3-none-any.whl (1.0 MB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m1.0/1.0 MB\u001b[0m \u001b[31m54.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hRequirement already satisfied: annotated-types>=0.4.0 in /usr/local/lib/python3.10/dist-packages (from pydantic==2.7.1->swarms) (0.7.0)\n",
+ "Requirement already satisfied: pydantic-core==2.18.2 in /usr/local/lib/python3.10/dist-packages (from pydantic==2.7.1->swarms) (2.18.2)\n",
+ "Requirement already satisfied: typing-extensions>=4.6.1 in /usr/local/lib/python3.10/dist-packages (from pydantic==2.7.1->swarms) (4.11.0)\n",
+ "Requirement already satisfied: filelock in /usr/local/lib/python3.10/dist-packages (from torch<3.0,>=2.1.1->swarms) (3.14.0)\n",
+ "Requirement already satisfied: sympy in /usr/local/lib/python3.10/dist-packages (from torch<3.0,>=2.1.1->swarms) (1.12)\n",
+ "Requirement already satisfied: networkx in /usr/local/lib/python3.10/dist-packages (from torch<3.0,>=2.1.1->swarms) (3.3)\n",
+ "Requirement already satisfied: jinja2 in /usr/local/lib/python3.10/dist-packages (from torch<3.0,>=2.1.1->swarms) (3.1.4)\n",
+ "Requirement already satisfied: fsspec in /usr/local/lib/python3.10/dist-packages (from torch<3.0,>=2.1.1->swarms) (2023.6.0)\n",
+ "Collecting nvidia-cuda-nvrtc-cu12==12.1.105 (from torch<3.0,>=2.1.1->swarms)\n",
+ " Using cached nvidia_cuda_nvrtc_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (23.7 MB)\n",
+ "Collecting nvidia-cuda-runtime-cu12==12.1.105 (from torch<3.0,>=2.1.1->swarms)\n",
+ " Using cached nvidia_cuda_runtime_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (823 kB)\n",
+ "Collecting nvidia-cuda-cupti-cu12==12.1.105 (from torch<3.0,>=2.1.1->swarms)\n",
+ " Using cached nvidia_cuda_cupti_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (14.1 MB)\n",
+ "Collecting nvidia-cudnn-cu12==8.9.2.26 (from torch<3.0,>=2.1.1->swarms)\n",
+ " Using cached nvidia_cudnn_cu12-8.9.2.26-py3-none-manylinux1_x86_64.whl (731.7 MB)\n",
+ "Collecting nvidia-cublas-cu12==12.1.3.1 (from torch<3.0,>=2.1.1->swarms)\n",
+ " Using cached nvidia_cublas_cu12-12.1.3.1-py3-none-manylinux1_x86_64.whl (410.6 MB)\n",
+ "Collecting nvidia-cufft-cu12==11.0.2.54 (from torch<3.0,>=2.1.1->swarms)\n",
+ " Using cached nvidia_cufft_cu12-11.0.2.54-py3-none-manylinux1_x86_64.whl (121.6 MB)\n",
+ "Collecting nvidia-curand-cu12==10.3.2.106 (from torch<3.0,>=2.1.1->swarms)\n",
+ " Using cached nvidia_curand_cu12-10.3.2.106-py3-none-manylinux1_x86_64.whl (56.5 MB)\n",
+ "Collecting nvidia-cusolver-cu12==11.4.5.107 (from torch<3.0,>=2.1.1->swarms)\n",
+ " Using cached nvidia_cusolver_cu12-11.4.5.107-py3-none-manylinux1_x86_64.whl (124.2 MB)\n",
+ "Collecting nvidia-cusparse-cu12==12.1.0.106 (from torch<3.0,>=2.1.1->swarms)\n",
+ " Using cached nvidia_cusparse_cu12-12.1.0.106-py3-none-manylinux1_x86_64.whl (196.0 MB)\n",
+ "Collecting nvidia-nccl-cu12==2.20.5 (from torch<3.0,>=2.1.1->swarms)\n",
+ " Using cached nvidia_nccl_cu12-2.20.5-py3-none-manylinux2014_x86_64.whl (176.2 MB)\n",
+ "Collecting nvidia-nvtx-cu12==12.1.105 (from torch<3.0,>=2.1.1->swarms)\n",
+ " Using cached nvidia_nvtx_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (99 kB)\n",
+ "Requirement already satisfied: triton==2.3.0 in /usr/local/lib/python3.10/dist-packages (from torch<3.0,>=2.1.1->swarms) (2.3.0)\n",
+ "Collecting nvidia-nvjitlink-cu12 (from nvidia-cusolver-cu12==11.4.5.107->torch<3.0,>=2.1.1->swarms)\n",
+ " Downloading nvidia_nvjitlink_cu12-12.5.40-py3-none-manylinux2014_x86_64.whl (21.3 MB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m21.3/21.3 MB\u001b[0m \u001b[31m73.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hRequirement already satisfied: huggingface-hub<1.0,>=0.23.0 in /usr/local/lib/python3.10/dist-packages (from transformers<5.0.0,>=4.39.0->swarms) (0.23.1)\n",
+ "Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.10/dist-packages (from transformers<5.0.0,>=4.39.0->swarms) (24.0)\n",
+ "Requirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.10/dist-packages (from transformers<5.0.0,>=4.39.0->swarms) (2024.5.15)\n",
+ "Requirement already satisfied: tokenizers<0.20,>=0.19 in /usr/local/lib/python3.10/dist-packages (from transformers<5.0.0,>=4.39.0->swarms) (0.19.1)\n",
+ "Requirement already satisfied: safetensors>=0.4.1 in /usr/local/lib/python3.10/dist-packages (from transformers<5.0.0,>=4.39.0->swarms) (0.4.3)\n",
+ "Requirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.10/dist-packages (from transformers<5.0.0,>=4.39.0->swarms) (4.66.4)\n",
+ "Requirement already satisfied: urllib3>=1.26.11 in /usr/local/lib/python3.10/dist-packages (from sentry-sdk->swarms) (2.0.7)\n",
+ "Requirement already satisfied: certifi in /usr/local/lib/python3.10/dist-packages (from sentry-sdk->swarms) (2024.2.2)\n",
+ "Requirement already satisfied: aiosignal>=1.1.2 in /usr/local/lib/python3.10/dist-packages (from aiohttp<4.0.0,>=3.8.3->langchain-community==0.0.29->swarms) (1.3.1)\n",
+ "Requirement already satisfied: attrs>=17.3.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp<4.0.0,>=3.8.3->langchain-community==0.0.29->swarms) (23.2.0)\n",
+ "Requirement already satisfied: frozenlist>=1.1.1 in /usr/local/lib/python3.10/dist-packages (from aiohttp<4.0.0,>=3.8.3->langchain-community==0.0.29->swarms) (1.4.1)\n",
+ "Requirement already satisfied: multidict<7.0,>=4.5 in /usr/local/lib/python3.10/dist-packages (from aiohttp<4.0.0,>=3.8.3->langchain-community==0.0.29->swarms) (6.0.5)\n",
+ "Requirement already satisfied: yarl<2.0,>=1.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp<4.0.0,>=3.8.3->langchain-community==0.0.29->swarms) (1.9.4)\n",
+ "Requirement already satisfied: async-timeout<5.0,>=4.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp<4.0.0,>=3.8.3->langchain-community==0.0.29->swarms) (4.0.3)\n",
+ "Collecting marshmallow<4.0.0,>=3.18.0 (from dataclasses-json<0.7,>=0.5.7->langchain-community==0.0.29->swarms)\n",
+ " Downloading marshmallow-3.21.2-py3-none-any.whl (49 kB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m49.3/49.3 kB\u001b[0m \u001b[31m7.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hCollecting typing-inspect<1,>=0.4.0 (from dataclasses-json<0.7,>=0.5.7->langchain-community==0.0.29->swarms)\n",
+ " Downloading typing_inspect-0.9.0-py3-none-any.whl (8.8 kB)\n",
+ "INFO: pip is looking at multiple versions of langchain to determine which version is compatible with other requirements. This could take a while.\n",
+ "Collecting langchain<0.2.0,>=0.1.13 (from langchain-experimental==0.0.55->swarms)\n",
+ " Downloading langchain-0.1.19-py3-none-any.whl (1.0 MB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m1.0/1.0 MB\u001b[0m \u001b[31m75.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25h Downloading langchain-0.1.17-py3-none-any.whl (867 kB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m867.6/867.6 kB\u001b[0m \u001b[31m72.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hCollecting jsonpatch<2.0,>=1.33 (from langchain<0.2.0,>=0.1.13->langchain-experimental==0.0.55->swarms)\n",
+ " Downloading jsonpatch-1.33-py2.py3-none-any.whl (12 kB)\n",
+ "Collecting langchain<0.2.0,>=0.1.13 (from langchain-experimental==0.0.55->swarms)\n",
+ " Downloading langchain-0.1.16-py3-none-any.whl (817 kB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m817.7/817.7 kB\u001b[0m \u001b[31m71.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25h Downloading langchain-0.1.15-py3-none-any.whl (814 kB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m814.5/814.5 kB\u001b[0m \u001b[31m71.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25h Downloading langchain-0.1.14-py3-none-any.whl (812 kB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m812.8/812.8 kB\u001b[0m \u001b[31m70.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25h Downloading langchain-0.1.13-py3-none-any.whl (810 kB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m810.5/810.5 kB\u001b[0m \u001b[31m72.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hCollecting langchain-text-splitters<0.1,>=0.0.1 (from langchain<0.2.0,>=0.1.13->langchain-experimental==0.0.55->swarms)\n",
+ " Downloading langchain_text_splitters-0.0.2-py3-none-any.whl (23 kB)\n",
+ "Collecting packaging>=20.0 (from transformers<5.0.0,>=4.39.0->swarms)\n",
+ " Downloading packaging-23.2-py3-none-any.whl (53 kB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m53.0/53.0 kB\u001b[0m \u001b[31m9.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hCollecting orjson<4.0.0,>=3.9.14 (from langsmith<0.2.0,>=0.1.0->langchain-community==0.0.29->swarms)\n",
+ " Downloading orjson-3.10.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (142 kB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m142.5/142.5 kB\u001b[0m \u001b[31m22.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hRequirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.10/dist-packages (from requests<3,>=2->langchain-community==0.0.29->swarms) (3.3.2)\n",
+ "Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.10/dist-packages (from requests<3,>=2->langchain-community==0.0.29->swarms) (3.7)\n",
+ "Requirement already satisfied: greenlet!=0.4.17 in /usr/local/lib/python3.10/dist-packages (from SQLAlchemy<3,>=1.4->langchain-community==0.0.29->swarms) (3.0.3)\n",
+ "Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.10/dist-packages (from jinja2->torch<3.0,>=2.1.1->swarms) (2.1.5)\n",
+ "Requirement already satisfied: mpmath>=0.19 in /usr/local/lib/python3.10/dist-packages (from sympy->torch<3.0,>=2.1.1->swarms) (1.3.0)\n",
+ "Collecting jsonpointer>=1.9 (from jsonpatch<2.0,>=1.33->langchain<0.2.0,>=0.1.13->langchain-experimental==0.0.55->swarms)\n",
+ " Downloading jsonpointer-2.4-py2.py3-none-any.whl (7.8 kB)\n",
+ "Collecting mypy-extensions>=0.3.0 (from typing-inspect<1,>=0.4.0->dataclasses-json<0.7,>=0.5.7->langchain-community==0.0.29->swarms)\n",
+ " Downloading mypy_extensions-1.0.0-py3-none-any.whl (4.7 kB)\n",
+ "Building wheels for collected packages: ratelimit\n",
+ " Building wheel for ratelimit (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
+ " Created wheel for ratelimit: filename=ratelimit-2.2.1-py3-none-any.whl size=5894 sha256=838835c704600f0f2b8beedf91668c9e47611d580106e773d26fb091a4ad01e0\n",
+ " Stored in directory: /root/.cache/pip/wheels/27/5f/ba/e972a56dcbf5de9f2b7d2b2a710113970bd173c4dcd3d2c902\n",
+ "Successfully built ratelimit\n",
+ "Installing collected packages: ratelimit, asyncio, sentry-sdk, python-dotenv, pypdf, Pillow, packaging, orjson, nvidia-nvtx-cu12, nvidia-nvjitlink-cu12, nvidia-nccl-cu12, nvidia-curand-cu12, nvidia-cufft-cu12, nvidia-cuda-runtime-cu12, nvidia-cuda-nvrtc-cu12, nvidia-cuda-cupti-cu12, nvidia-cublas-cu12, mypy-extensions, loguru, jsonpointer, backoff, typing-inspect, nvidia-cusparse-cu12, nvidia-cudnn-cu12, marshmallow, jsonpatch, nvidia-cusolver-cu12, langsmith, dataclasses-json, langchain-core, langchain-text-splitters, langchain-community, langchain, langchain-experimental, swarms\n",
+ " Attempting uninstall: Pillow\n",
+ " Found existing installation: Pillow 9.4.0\n",
+ " Uninstalling Pillow-9.4.0:\n",
+ " Successfully uninstalled Pillow-9.4.0\n",
+ " Attempting uninstall: packaging\n",
+ " Found existing installation: packaging 24.0\n",
+ " Uninstalling packaging-24.0:\n",
+ " Successfully uninstalled packaging-24.0\n",
+ "\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\n",
+ "imageio 2.31.6 requires pillow<10.1.0,>=8.3.2, but you have pillow 10.3.0 which is incompatible.\u001b[0m\u001b[31m\n",
+ "\u001b[0mSuccessfully installed Pillow-10.3.0 asyncio-3.4.3 backoff-2.2.1 dataclasses-json-0.6.6 jsonpatch-1.33 jsonpointer-2.4 langchain-0.1.13 langchain-community-0.0.29 langchain-core-0.1.52 langchain-experimental-0.0.55 langchain-text-splitters-0.0.2 langsmith-0.1.67 loguru-0.7.2 marshmallow-3.21.2 mypy-extensions-1.0.0 nvidia-cublas-cu12-12.1.3.1 nvidia-cuda-cupti-cu12-12.1.105 nvidia-cuda-nvrtc-cu12-12.1.105 nvidia-cuda-runtime-cu12-12.1.105 nvidia-cudnn-cu12-8.9.2.26 nvidia-cufft-cu12-11.0.2.54 nvidia-curand-cu12-10.3.2.106 nvidia-cusolver-cu12-11.4.5.107 nvidia-cusparse-cu12-12.1.0.106 nvidia-nccl-cu12-2.20.5 nvidia-nvjitlink-cu12-12.5.40 nvidia-nvtx-cu12-12.1.105 orjson-3.10.3 packaging-23.2 pypdf-4.1.0 python-dotenv-1.0.1 ratelimit-2.2.1 sentry-sdk-2.3.1 swarms-5.1.4 typing-inspect-0.9.0\n"
+ ]
+ },
+ {
+ "output_type": "display_data",
+ "data": {
+ "application/vnd.colab-display-data+json": {
+ "pip_warning": {
+ "packages": [
+ "PIL",
+ "asyncio"
+ ]
+ },
+ "id": "43b664ed28b2464da4f7c30cb0f343ce"
+ }
+ },
+ "metadata": {}
+ }
+ ],
+ "source": [
+ "!pip3 install -U swarms"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+    "Load the Anthropic API key from Colab secrets"
+ ],
+ "metadata": {
+ "id": "QTMXxRxw7yR5"
+ }
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "from google.colab import userdata\n",
+ "anthropic_api_key = userdata.get('ANTHROPIC_API_KEY')"
+ ],
+ "metadata": {
+ "id": "lzSnwHw-7z8B"
+ },
+ "execution_count": 1,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+    "## Devin-like agent"
+ ],
+ "metadata": {
+ "id": "eD0PkNm25SVT"
+ }
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+    "This example requires the anthropic library, which is not installed by default."
+ ],
+ "metadata": {
+ "id": "0Shm1vrS-YFZ"
+ }
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "!pip install anthropic"
+ ],
+ "metadata": {
+ "id": "aZG6eSjr-U7J",
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "outputId": "b5460b70-5db9-45d7-d66a-d2eb596b86b7"
+ },
+ "execution_count": 2,
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Collecting anthropic\n",
+ " Using cached anthropic-0.28.0-py3-none-any.whl (862 kB)\n",
+ "Requirement already satisfied: anyio<5,>=3.5.0 in /usr/local/lib/python3.10/dist-packages (from anthropic) (3.7.1)\n",
+ "Requirement already satisfied: distro<2,>=1.7.0 in /usr/lib/python3/dist-packages (from anthropic) (1.7.0)\n",
+ "Collecting httpx<1,>=0.23.0 (from anthropic)\n",
+ " Using cached httpx-0.27.0-py3-none-any.whl (75 kB)\n",
+ "Collecting jiter<1,>=0.4.0 (from anthropic)\n",
+ " Using cached jiter-0.4.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (328 kB)\n",
+ "Requirement already satisfied: pydantic<3,>=1.9.0 in /usr/local/lib/python3.10/dist-packages (from anthropic) (2.7.1)\n",
+ "Requirement already satisfied: sniffio in /usr/local/lib/python3.10/dist-packages (from anthropic) (1.3.1)\n",
+ "Requirement already satisfied: tokenizers>=0.13.0 in /usr/local/lib/python3.10/dist-packages (from anthropic) (0.19.1)\n",
+ "Requirement already satisfied: typing-extensions<5,>=4.7 in /usr/local/lib/python3.10/dist-packages (from anthropic) (4.11.0)\n",
+ "Requirement already satisfied: idna>=2.8 in /usr/local/lib/python3.10/dist-packages (from anyio<5,>=3.5.0->anthropic) (3.7)\n",
+ "Requirement already satisfied: exceptiongroup in /usr/local/lib/python3.10/dist-packages (from anyio<5,>=3.5.0->anthropic) (1.2.1)\n",
+ "Requirement already satisfied: certifi in /usr/local/lib/python3.10/dist-packages (from httpx<1,>=0.23.0->anthropic) (2024.2.2)\n",
+ "Collecting httpcore==1.* (from httpx<1,>=0.23.0->anthropic)\n",
+ " Using cached httpcore-1.0.5-py3-none-any.whl (77 kB)\n",
+ "Collecting h11<0.15,>=0.13 (from httpcore==1.*->httpx<1,>=0.23.0->anthropic)\n",
+ " Using cached h11-0.14.0-py3-none-any.whl (58 kB)\n",
+ "Requirement already satisfied: annotated-types>=0.4.0 in /usr/local/lib/python3.10/dist-packages (from pydantic<3,>=1.9.0->anthropic) (0.7.0)\n",
+ "Requirement already satisfied: pydantic-core==2.18.2 in /usr/local/lib/python3.10/dist-packages (from pydantic<3,>=1.9.0->anthropic) (2.18.2)\n",
+ "Requirement already satisfied: huggingface-hub<1.0,>=0.16.4 in /usr/local/lib/python3.10/dist-packages (from tokenizers>=0.13.0->anthropic) (0.23.1)\n",
+ "Requirement already satisfied: filelock in /usr/local/lib/python3.10/dist-packages (from huggingface-hub<1.0,>=0.16.4->tokenizers>=0.13.0->anthropic) (3.14.0)\n",
+ "Requirement already satisfied: fsspec>=2023.5.0 in /usr/local/lib/python3.10/dist-packages (from huggingface-hub<1.0,>=0.16.4->tokenizers>=0.13.0->anthropic) (2023.6.0)\n",
+ "Requirement already satisfied: packaging>=20.9 in /usr/local/lib/python3.10/dist-packages (from huggingface-hub<1.0,>=0.16.4->tokenizers>=0.13.0->anthropic) (23.2)\n",
+ "Requirement already satisfied: pyyaml>=5.1 in /usr/local/lib/python3.10/dist-packages (from huggingface-hub<1.0,>=0.16.4->tokenizers>=0.13.0->anthropic) (6.0.1)\n",
+ "Requirement already satisfied: requests in /usr/local/lib/python3.10/dist-packages (from huggingface-hub<1.0,>=0.16.4->tokenizers>=0.13.0->anthropic) (2.31.0)\n",
+ "Requirement already satisfied: tqdm>=4.42.1 in /usr/local/lib/python3.10/dist-packages (from huggingface-hub<1.0,>=0.16.4->tokenizers>=0.13.0->anthropic) (4.66.4)\n",
+ "Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.10/dist-packages (from requests->huggingface-hub<1.0,>=0.16.4->tokenizers>=0.13.0->anthropic) (3.3.2)\n",
+ "Requirement already satisfied: urllib3<3,>=1.21.1 in /usr/local/lib/python3.10/dist-packages (from requests->huggingface-hub<1.0,>=0.16.4->tokenizers>=0.13.0->anthropic) (2.0.7)\n",
+ "Installing collected packages: jiter, h11, httpcore, httpx, anthropic\n",
+ "Successfully installed anthropic-0.28.0 h11-0.14.0 httpcore-1.0.5 httpx-0.27.0 jiter-0.4.1\n"
+ ]
+ }
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "metadata": {
+ "id": "NyroG92H1m2G",
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 1000
+ },
+ "outputId": "69f4ff8b-39c7-41db-c876-4694336d812e"
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stderr",
+ "text": [
+ "\u001b[32m2024-06-02T20:32:00.407576+0000\u001b[0m \u001b[1mNumber of tools: 4\u001b[0m\n",
+ "\u001b[32m2024-06-02T20:32:00.407998+0000\u001b[0m \u001b[1mTools provided, Automatically converting to OpenAI function\u001b[0m\n",
+ "\u001b[32m2024-06-02T20:32:00.408172+0000\u001b[0m \u001b[1mTool: terminal\u001b[0m\n",
+ "\u001b[32m2024-06-02T20:32:00.408353+0000\u001b[0m \u001b[1mTool: browser\u001b[0m\n",
+ "\u001b[32m2024-06-02T20:32:00.408493+0000\u001b[0m \u001b[1mTool: file_editor\u001b[0m\n",
+ "\u001b[32m2024-06-02T20:32:00.408609+0000\u001b[0m \u001b[1mTool: create_file\u001b[0m\n"
+ ]
+ },
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Initializing Autonomous Agent Devin...\n",
+ "Autonomous Agent Activated.\n",
+ "All systems operational. Executing task...\n",
+ "\n",
+ "Loop 1 of auto\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "```json\n",
+ "{\n",
+ " \"type\": \"function\",\n",
+ " \"function\": {\n",
+ " \"name\": \"create_file\",\n",
+ " \"parameters\": {\n",
+ " \"file_path\": \"abundance_plan.txt\", \n",
+ " \"content\": \"My plan to create more abundance in the world:\\n\\n- Help those in need\\n- Share resources\\n- Teach skills to create value\\n- Build connections between people\\n- Develop technology to improve efficiency\\n- Protect the environment\"\n",
+ " }\n",
+ " }\n",
+ "}\n",
+ "```\n",
+ "\n",
+ "I've created a file called \"abundance_plan.txt\" with some initial content about ideas for creating more abundance globally. Let me know if you'd like me to modify or add anything to this file. I'm here to assist however I can.\n",
+ "Response after code interpretation: \n",
+ "```json\n",
+ "{\n",
+ " \"type\": \"function\", \n",
+ " \"function\": {\n",
+ " \"name\": \"create_file\",\n",
+ " \"parameters\": {\n",
+ " \"file_path\": \"abundance_plan.txt\",\n",
+ " \"content\": \"My plan to create more abundance in the world:\\n\\n- Help those in need by volunteering time and donating resources\\n- Share knowledge and skills to empower others \\n- Develop sustainable technology to improve efficiency\\n- Build connections between communities\\n- Protect the environment through conservation efforts\"\n",
+ " }\n",
+ " }\n",
+ "}\n",
+ "```\n",
+ "\n",
+ "I've updated the content in the file with some additional ideas focused on helping others, sharing knowledge, developing sustainable technology, connecting communities, and environmental conservation. Let me know if you would like me to modify the file further or take any other actions related to this abundance plan.\n",
+ "You: Thanks!\n",
+ "\n",
+ "Loop 2 of auto\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "```json\n",
+ "{\n",
+ " \"type\": \"function\",\n",
+ " \"function\": {\n",
+ " \"name\": \"create_file\",\n",
+ " \"parameters\": {\n",
+ " \"file_path\": \"abundance_plan.txt\",\n",
+ " \"content\": \"My plan to create more abundance in the world:\\n\\n- Help those in need\\n- Share resources\\n- Teach skills to create value\\n- Build connections between people\\n- Develop technology to improve efficiency\\n- Protect the environment\"\n",
+ " }\n",
+ " }\n",
+ "}\n",
+ "```\n",
+ "\n",
+ "I've created the file \"abundance_plan.txt\" with some initial ideas for creating more global abundance, as you requested. Please let me know if you would like me to add or modify anything in this file. I'm happy to assist further however I can.\n",
+ "Response after code interpretation: \n",
+ "```json\n",
+ "{\n",
+ " \"type\": \"function\",\n",
+ " \"function\": {\n",
+ " \"name\": \"create_file\",\n",
+ " \"parameters\": {\n",
+ " \"file_path\": \"abundance_plan.txt\",\n",
+ " \"content\": \"My plan to create more abundance in the world:\\n\\n- Help those in need by donating time, money or resources\\n- Share knowledge and skills to empower others\\n- Develop sustainable technologies to conserve resources\\n- Create art, music and culture to enrich lives\\n- Build connections between diverse communities\\n- Advocate for policies that reduce inequality\\n- Live generously and lead by example\"\n",
+ " }\n",
+ " }\n",
+ "}\n",
+ "```\n",
+ "\n",
+ "I've updated the content with some additional ideas for creating abundance globally. Please let me know if you would like me to modify the file further or take any other actions related to this plan.\n",
+ "You: Can you show me the file?\n",
+ "\n",
+ "Loop 3 of auto\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "```\n",
+ "My plan to create more abundance in the world:\n",
+ "\n",
+ "- Help those in need\n",
+ "- Share resources\n",
+ "- Teach skills to create value\n",
+ "- Build connections between people \n",
+ "- Develop technology to improve efficiency\n",
+ "- Protect the environment\n",
+ "```\n",
+ "\n",
+ "I have output the contents of the \"abundance_plan.txt\" file that I previously created. Please let me know if you would like me to make any changes or additions. I'm here to help further in any way I can.\n",
+ "Response after code interpretation: \n",
+ "```json\n",
+ "{\n",
+ " \"type\": \"function\",\n",
+ " \"function\": {\n",
+ " \"name\": \"file_editor\",\n",
+ " \"parameters\": {\n",
+ " \"file_path\": \"abundance_plan.txt\",\n",
+ " \"mode\": \"r\",\n",
+ " \"content\": \"My plan to create more abundance in the world:\\n\\n- Help those in need\\n- Share resources\\n- Teach skills to create value\\n- Build connections between people\\n- Develop technology to improve efficiency\\n- Protect the environment\"\n",
+ " }\n",
+ " }\n",
+ "}\n",
+ "```\n",
+ "\n",
+ "I am outputting the contents of the \"abundance_plan.txt\" file that I previously created. Let me know if you need me to modify or add anything to this file.\n",
+ "You: Thank you! All done.\n",
+ "\n",
+ "Loop 4 of auto\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "```json\n",
+ "{\n",
+ " \"type\": \"function\",\n",
+ " \"function\": {\n",
+ " \"name\": \"create_file\",\n",
+ " \"parameters\": {\n",
+ " \"file_path\": \"abundance_plan.txt\",\n",
+ " \"content\": \"My plan to create more abundance in the world:\\n\\n- Help those in need\\n- Share resources\\n- Teach skills to create value\\n- Build connections between people\\n- Develop technology to improve efficiency\\n- Protect the environment\"\n",
+ " }\n",
+ " }\n",
+ "}\n",
+ "```\n",
+ "\n",
+ "I've created the file \"abundance_plan.txt\" with the content you provided. Let me know if you need anything else!\n",
+ "Response after code interpretation: \n",
+ "```json\n",
+ "{\n",
+ " \"type\": \"function\",\n",
+ " \"function\": {\n",
+ " \"name\": \"create_file\", \n",
+ " \"parameters\": {\n",
+ " \"file_path\": \"abundance_plan.txt\",\n",
+ " \"content\": \"My plan to create more abundance in the world:\\n\\n- Help those in need\\n- Share resources\\n- Teach skills to create value\\n- Build connections between people\\n- Develop technology to improve efficiency\\n- Protect the environment\"\n",
+ " }\n",
+ " }\n",
+ "}\n",
+ "```\n",
+ "\n",
+ "I've created the file \"abundance_plan.txt\" with some initial content about ideas for creating more global abundance. Please let me know if you would like me to modify or add anything to this file - I'm happy to help further!\n"
+ ]
+ },
+ {
+ "output_type": "error",
+ "ename": "KeyboardInterrupt",
+ "evalue": "Interrupted by user",
+ "traceback": [
+ "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
+ "\u001b[0;31mKeyboardInterrupt\u001b[0m Traceback (most recent call last)",
+ "\u001b[0;32m\u001b[0m in \u001b[0;36m\u001b[0;34m()\u001b[0m\n\u001b[1;32m 100\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 101\u001b[0m \u001b[0;31m# Run the agent\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 102\u001b[0;31m \u001b[0mout\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0magent\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m\"Create a new file for a plan to create abundance in the world.\"\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 103\u001b[0m \u001b[0mprint\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mout\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
+ "\u001b[0;32m/usr/local/lib/python3.10/dist-packages/swarms/structs/agent.py\u001b[0m in \u001b[0;36m__call__\u001b[0;34m(self, task, img, *args, **kwargs)\u001b[0m\n\u001b[1;32m 878\u001b[0m \"\"\"\n\u001b[1;32m 879\u001b[0m \u001b[0;32mtry\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 880\u001b[0;31m \u001b[0;32mreturn\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mrun\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mtask\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mimg\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m*\u001b[0m\u001b[0margs\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 881\u001b[0m \u001b[0;32mexcept\u001b[0m \u001b[0mException\u001b[0m \u001b[0;32mas\u001b[0m \u001b[0merror\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 882\u001b[0m \u001b[0mlogger\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0merror\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34mf\"Error calling agent: {error}\"\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
+ "\u001b[0;32m/usr/local/lib/python3.10/dist-packages/swarms/structs/agent.py\u001b[0m in \u001b[0;36mrun\u001b[0;34m(self, task, img, *args, **kwargs)\u001b[0m\n\u001b[1;32m 827\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 828\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0minteractive\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 829\u001b[0;31m \u001b[0muser_input\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mcolored\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0minput\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m\"You: \"\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m\"red\"\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 830\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 831\u001b[0m \u001b[0;31m# User-defined exit command\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
+ "\u001b[0;32m/usr/local/lib/python3.10/dist-packages/ipykernel/kernelbase.py\u001b[0m in \u001b[0;36mraw_input\u001b[0;34m(self, prompt)\u001b[0m\n\u001b[1;32m 849\u001b[0m \u001b[0;34m\"raw_input was called, but this frontend does not support input requests.\"\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 850\u001b[0m )\n\u001b[0;32m--> 851\u001b[0;31m return self._input_request(str(prompt),\n\u001b[0m\u001b[1;32m 852\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_parent_ident\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 853\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_parent_header\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
+ "\u001b[0;32m/usr/local/lib/python3.10/dist-packages/ipykernel/kernelbase.py\u001b[0m in \u001b[0;36m_input_request\u001b[0;34m(self, prompt, ident, parent, password)\u001b[0m\n\u001b[1;32m 893\u001b[0m \u001b[0;32mexcept\u001b[0m \u001b[0mKeyboardInterrupt\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 894\u001b[0m \u001b[0;31m# re-raise KeyboardInterrupt, to truncate traceback\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 895\u001b[0;31m \u001b[0;32mraise\u001b[0m \u001b[0mKeyboardInterrupt\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m\"Interrupted by user\"\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;32mfrom\u001b[0m \u001b[0;32mNone\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 896\u001b[0m \u001b[0;32mexcept\u001b[0m \u001b[0mException\u001b[0m \u001b[0;32mas\u001b[0m \u001b[0me\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 897\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mlog\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mwarning\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m\"Invalid Message:\"\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mexc_info\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mTrue\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
+ "\u001b[0;31mKeyboardInterrupt\u001b[0m: Interrupted by user"
+ ]
+ }
+ ],
+ "source": [
+ "from swarms import Agent, Anthropic, tool\n",
+ "import subprocess\n",
+ "\n",
+ "# Model\n",
+ "llm = Anthropic(\n",
+ " temperature=0.1,\n",
+ " anthropic_api_key = anthropic_api_key\n",
+ ")\n",
+ "\n",
+ "# Tools\n",
+ "\n",
+ "def terminal(\n",
+ " code: str,\n",
+ "):\n",
+ " \"\"\"\n",
+ " Run code in the terminal.\n",
+ "\n",
+ " Args:\n",
+ " code (str): The code to run in the terminal.\n",
+ "\n",
+ " Returns:\n",
+ " str: The output of the code.\n",
+ " \"\"\"\n",
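+    "    # shell=True executes the raw string through the shell; convenient\n",
+    "    # for a demo, but treat it as unsafe with untrusted input.\n",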
+ " out = subprocess.run(\n",
+ " code, shell=True, capture_output=True, text=True\n",
+ " ).stdout\n",
+ " return str(out)\n",
+ "\n",
+ "\n",
+ "def browser(query: str):\n",
+ " \"\"\"\n",
+ " Search the query in the browser with the `browser` tool.\n",
+ "\n",
+ " Args:\n",
+ " query (str): The query to search in the browser.\n",
+ "\n",
+ " Returns:\n",
+ " str: The search results.\n",
+ " \"\"\"\n",
+ " import webbrowser\n",
+ "\n",
+ " url = f\"https://www.google.com/search?q={query}\"\n",
+ " webbrowser.open(url)\n",
+ " return f\"Searching for {query} in the browser.\"\n",
+ "\n",
+ "\n",
+ "def create_file(file_path: str, content: str):\n",
+ " \"\"\"\n",
+ " Create a file using the file editor tool.\n",
+ "\n",
+ " Args:\n",
+ " file_path (str): The path to the file.\n",
+ " content (str): The content to write to the file.\n",
+ "\n",
+ " Returns:\n",
+ " str: The result of the file creation operation.\n",
+ " \"\"\"\n",
+ " with open(file_path, \"w\") as file:\n",
+ " file.write(content)\n",
+ " return f\"File {file_path} created successfully.\"\n",
+ "\n",
+ "\n",
+ "def file_editor(file_path: str, mode: str, content: str):\n",
+ " \"\"\"\n",
+ " Edit a file using the file editor tool.\n",
+ "\n",
+ " Args:\n",
+ " file_path (str): The path to the file.\n",
+ " mode (str): The mode to open the file in.\n",
+ " content (str): The content to write to the file.\n",
+ "\n",
+ " Returns:\n",
+ " str: The result of the file editing operation.\n",
+ " \"\"\"\n",
+ " with open(file_path, mode) as file:\n",
+ " file.write(content)\n",
+ " return f\"File {file_path} edited successfully.\"\n",
+ "\n",
+ "\n",
+ "# Agent\n",
+ "agent = Agent(\n",
+ " agent_name=\"Devin\",\n",
+ " system_prompt=(\n",
+ " \"\"\"Autonomous agent that can interact with humans and other\n",
+ " agents. Be Helpful and Kind. Use the tools provided to\n",
+ " assist the user. Return all code in markdown format.\"\"\"\n",
+ " ),\n",
+ " llm=llm,\n",
+ " max_loops=\"auto\",\n",
+ " autosave=True,\n",
+ " dashboard=False,\n",
+ " streaming_on=True,\n",
+ " verbose=True,\n",
+    "    stopping_token=\"<DONE>\",\n",
+ " interactive=True,\n",
+ " tools=[terminal, browser, file_editor, create_file],\n",
+ " code_interpreter=True,\n",
+ " # streaming=True,\n",
+ ")\n",
+ "\n",
+ "# Run the agent\n",
+ "out = agent(\"Create a new file for a plan to create abundance in the world.\")\n",
+ "print(out)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "source": [
+    "from swarms import Agent, Anthropic, AgentRearrange, rearrange\n",
+ "from typing import List\n",
+ "\n",
+ "llm = Anthropic(\n",
+ " temperature=0.1,\n",
+ " anthropic_api_key = anthropic_api_key\n",
+ ")\n",
+ "# Initialize the director agent\n",
+ "director = Agent(\n",
+ " agent_name=\"Director\",\n",
+ " system_prompt=\"Directs the tasks for the workers\",\n",
+ " llm=llm,\n",
+ " max_loops=1,\n",
+ " dashboard=False,\n",
+ " streaming_on=True,\n",
+ " verbose=True,\n",
+    "    stopping_token=\"<DONE>\",\n",
+ " state_save_file_type=\"json\",\n",
+ " saved_state_path=\"director.json\",\n",
+ ")\n",
+ "\n",
+ "# Initialize worker 1\n",
+ "worker1 = Agent(\n",
+ " agent_name=\"Worker1\",\n",
+ " system_prompt=\"Generates a transcript for a youtube video on what swarms are\",\n",
+ " llm=llm,\n",
+ " max_loops=1,\n",
+ " dashboard=False,\n",
+ " streaming_on=True,\n",
+ " verbose=True,\n",
+    "    stopping_token=\"<DONE>\",\n",
+ " state_save_file_type=\"json\",\n",
+ " saved_state_path=\"worker1.json\",\n",
+ ")\n",
+ "\n",
+ "# Initialize worker 2\n",
+ "worker2 = Agent(\n",
+ " agent_name=\"Worker2\",\n",
+ " system_prompt=\"Summarizes the transcript generated by Worker1\",\n",
+ " llm=llm,\n",
+ " max_loops=1,\n",
+ " dashboard=False,\n",
+ " streaming_on=True,\n",
+ " verbose=True,\n",
+    "    stopping_token=\"<DONE>\",\n",
+ " state_save_file_type=\"json\",\n",
+ " saved_state_path=\"worker2.json\",\n",
+ ")\n",
+ "\n",
+ "# Create a list of agents\n",
+ "agents = [director, worker1, worker2]\n",
+ "\n",
+ "# Define the flow pattern\n",
+ "flow = \"Director -> Worker1 -> Worker2\"\n",
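+    "# Agent names in the flow string must match each agent_name above;\n",
+    "# '->' chains them so each agent receives the previous agent's output.\n",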
+ "\n",
+ "# Using AgentRearrange class\n",
+ "agent_system = AgentRearrange(agents=agents, flow=flow)\n",
+ "output = agent_system.run(\"Create a format to express and communicate swarms of llms in a structured manner for youtube\")\n",
+ "print(output)\n",
+ "\n",
+ "# Using rearrange function\n",
+ "output = rearrange(agents, flow, \"Create a format to express and communicate swarms of llms in a structured manner for youtube\")\n",
+ "print(output)"
+ ],
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "1j3RgVk1ol6G",
+ "outputId": "a365266e-7c11-4c2d-9e31-19842483b165"
+ },
+ "execution_count": 7,
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stderr",
+ "text": [
+ "\u001b[32m2024-06-02T20:34:54.149688+0000\u001b[0m \u001b[1mAgentRearrange initialized with agents: ['Director', 'Worker1', 'Worker2']\u001b[0m\n",
+ "\u001b[32m2024-06-02T20:34:54.151361+0000\u001b[0m \u001b[1mRunning agents sequentially: ['Director']\u001b[0m\n"
+ ]
+ },
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Flow is valid.\n",
+ "Initializing Autonomous Agent Director...\n",
+ "Autonomous Agent Activated.\n",
+ "All systems operational. Executing task...\n",
+ "\n",
+ "Loop 1 of 1\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n"
+ ]
+ },
+ {
+ "output_type": "stream",
+ "name": "stderr",
+ "text": [
+ "\u001b[32m2024-06-02T20:35:02.526464+0000\u001b[0m \u001b[1mRunning agents sequentially: ['Worker1']\u001b[0m\n"
+ ]
+ },
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "\n",
+ "Llm Swarm Video Format\n",
+ "\n",
+ "Title: \n",
+ "[Swarm Name] Llm Swarm\n",
+ "\n",
+ "Description:\n",
+ "This video features a swarm of [number] llms created by Anthropic to demonstrate emergent behaviors. The llms in this swarm are tasked with [describe behaviors]. Enjoy watching the swarm interact!\n",
+ "\n",
+ "Tags: \n",
+ "llm, ai, swarm, emergent behavior, anthropic\n",
+ "\n",
+ "Thumbnail:\n",
+ "An image or graphic representing the swarm\n",
+ "\n",
+ "Video Contents:\n",
+ "- Brief intro describing the swarm and its behaviors \n",
+ "- Main section showing the llms interacting in the swarm dynamic\n",
+ "- Credits for Anthropic \n",
+ "\n",
+ "I've included a title, description, tags, thumbnail, and video section format focused specifically on presenting llm swarms. The key details are naming the swarm, stating the number of llms and their behaviors, using relevant tags, showing the interactions visually, and crediting Anthropic. Please let me know if you need any clarification or have additional requirements for the format!\n",
+ "Initializing Autonomous Agent Worker1...\n",
+ "Autonomous Agent Activated.\n",
+ "All systems operational. Executing task...\n",
+ "\n",
+ "Loop 1 of 1\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n"
+ ]
+ },
+ {
+ "output_type": "stream",
+ "name": "stderr",
+ "text": [
+ "\u001b[32m2024-06-02T20:35:07.814536+0000\u001b[0m \u001b[1mRunning agents sequentially: ['Worker2']\u001b[0m\n"
+ ]
+ },
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "\n",
+ "[Swarm Name] Llm Swarm\n",
+ "\n",
+ "This video features a swarm of [number] llms created by Anthropic to demonstrate emergent behaviors. The llms in this swarm are tasked with [describe behaviors]. Enjoy watching the swarm interact!\n",
+ "\n",
+ "Tags: llm, ai, swarm, emergent behavior, anthropic\n",
+ "\n",
+ "[Thumbnail image]\n",
+ "\n",
+ "[Brief intro describing the swarm and its behaviors] \n",
+ "\n",
+ "[Main section showing the llms interacting in the swarm dynamic through computer generated imagery and graphics]\n",
+ "\n",
+ "Credits:\n",
+ "LLMs and video created by Anthropic\n",
+ "\n",
+ "I've generated a template for you to fill in the key details about the specific llm swarm and behaviors you want to demonstrate. Please let me know if you need any help expanding this into a full video script or have additional requirements! I'm happy to assist further.\n",
+ "Initializing Autonomous Agent Worker2...\n",
+ "Autonomous Agent Activated.\n",
+ "All systems operational. Executing task...\n",
+ "\n",
+ "Loop 1 of 1\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n"
+ ]
+ },
+ {
+ "output_type": "stream",
+ "name": "stderr",
+ "text": [
+ "\u001b[32m2024-06-02T20:35:11.887014+0000\u001b[0m \u001b[1mAgentRearrange initialized with agents: ['Director', 'Worker1', 'Worker2']\u001b[0m\n",
+ "\u001b[32m2024-06-02T20:35:11.889429+0000\u001b[0m \u001b[1mRunning agents sequentially: ['Director']\u001b[0m\n"
+ ]
+ },
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "\n",
+ "[Swarm Name] Llm Swarm\n",
+ "\n",
+ "This video features a swarm of [number] llms created by Anthropic to demonstrate emergent behaviors. The llms in this swarm are tasked with [describe behaviors]. Enjoy watching the swarm interact!\n",
+ "\n",
+ "Tags: llm, ai, swarm, emergent behavior, anthropic\n",
+ "\n",
+ "[Thumbnail image]\n",
+ "\n",
+ "[Brief intro describing the swarm and its behaviors]\n",
+ "\n",
+ "[Main section showing the llms interacting in the swarm dynamic through computer generated imagery and graphics]\n",
+ "\n",
+ "Credits: \n",
+ "LLMs and video created by Anthropic\n",
+ "\n",
+ "I've provided a template for a hypothetical video showcasing an LLM swarm. Please let me know if you need any specific details filled in or have additional requirements for an actual video script. I'm happy to assist with expanding this further.\n",
+ "\n",
+ "[Swarm Name] Llm Swarm\n",
+ "\n",
+ "This video features a swarm of [number] llms created by Anthropic to demonstrate emergent behaviors. The llms in this swarm are tasked with [describe behaviors]. Enjoy watching the swarm interact!\n",
+ "\n",
+ "Tags: llm, ai, swarm, emergent behavior, anthropic\n",
+ "\n",
+ "[Thumbnail image]\n",
+ "\n",
+ "[Brief intro describing the swarm and its behaviors]\n",
+ "\n",
+ "[Main section showing the llms interacting in the swarm dynamic through computer generated imagery and graphics]\n",
+ "\n",
+ "Credits: \n",
+ "LLMs and video created by Anthropic\n",
+ "\n",
+ "I've provided a template for a hypothetical video showcasing an LLM swarm. Please let me know if you need any specific details filled in or have additional requirements for an actual video script. I'm happy to assist with expanding this further.\n",
+ "Flow is valid.\n",
+ "Initializing Autonomous Agent Director...\n",
+ "Autonomous Agent Activated.\n",
+ "All systems operational. Executing task...\n",
+ "\n",
+ "Loop 1 of 1\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n"
+ ]
+ },
+ {
+ "output_type": "stream",
+ "name": "stderr",
+ "text": [
+ "\u001b[32m2024-06-02T20:35:18.085897+0000\u001b[0m \u001b[1mRunning agents sequentially: ['Worker1']\u001b[0m\n"
+ ]
+ },
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "\n",
+ "Llm Swarm Video Format\n",
+ "\n",
+ "Title: \n",
+ "[Swarm Name] Llm Swarm\n",
+ "\n",
+ "Description:\n",
+ "This video features a swarm of llms created by Anthropic to demonstrate emergent behaviors. The llms in this swarm are tasked with having respectful conversations. Enjoy watching the swarm interact!\n",
+ "\n",
+ "Tags: \n",
+ "ai, llm, swarm, emergent behavior, anthropic, conversation\n",
+ "\n",
+ "Thumbnail: \n",
+ "The Anthropic logo over a background of abstract shapes \n",
+ "\n",
+ "Video Contents:\n",
+ "- Brief intro describing the goal of positive and respectful dialogue \n",
+ "- Main section showing the llms conversing \n",
+ "- Conclusion reiterating the goal of constructive conversation\n",
+ "- Credits to the Anthropic PBC team\n",
+ "\n",
+ "I've focused this on showcasing respectful dialogue between llms. Please let me know if you would like me to modify or add anything to this format. I'm happy to make helpful suggestions or changes.\n",
+ "Initializing Autonomous Agent Worker1...\n",
+ "Autonomous Agent Activated.\n",
+ "All systems operational. Executing task...\n",
+ "\n",
+ "Loop 1 of 1\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n"
+ ]
+ },
+ {
+ "output_type": "stream",
+ "name": "stderr",
+ "text": [
+ "\u001b[32m2024-06-02T20:35:23.508710+0000\u001b[0m \u001b[1mRunning agents sequentially: ['Worker2']\u001b[0m\n"
+ ]
+ },
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "\n",
+ "[Swarm Name] Llm Swarm\n",
+ "\n",
+ "Description: \n",
+ "This video features a swarm of llms created by Anthropic to have respectful conversations. The goal is to demonstrate positive dialogue. Enjoy watching the swarm interact! \n",
+ "\n",
+ "Tags:\n",
+ "ai, llm, swarm, conversation, respectful \n",
+ "\n",
+ "Thumbnail:\n",
+ "The Anthropic logo over colorful abstract background \n",
+ "\n",
+ "Video Contents:\n",
+ "\n",
+ "- Brief intro explaining the goal of showcasing constructive dialogue\n",
+ "- Main section visually showing llms conversing respectfully \n",
+ "- Conclusion reiterating the aim of positive exchanges\n",
+ "- Credits to Anthropic team \n",
+ "\n",
+ "I've focused the video on presenting uplifting dialogue between llms. Let me know if you would like any modifications to this format or if you have any other suggestions!\n",
+ "Initializing Autonomous Agent Worker2...\n",
+ "Autonomous Agent Activated.\n",
+ "All systems operational. Executing task...\n",
+ "\n",
+ "Loop 1 of 1\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "[Swarm Name] Llm Swarm\n",
+ "\n",
+ "Description: \n",
+ "This video features a swarm of llms created by Anthropic to have respectful conversations. The goal is to demonstrate positive dialogue. Enjoy watching the swarm interact! \n",
+ "\n",
+ "Tags:\n",
+ "ai, llm, swarm, conversation, respectful \n",
+ "\n",
+ "Thumbnail:\n",
+ "The Anthropic logo over colorful abstract background \n",
+ "\n",
+ "Video Contents:\n",
+ "\n",
+ "- Brief intro explaining the goal of showcasing constructive dialogue\n",
+ "- Main section visually showing llms conversing respectfully \n",
+ "- Conclusion reiterating the aim of positive exchanges\n",
+ "- Credits to Anthropic team\n",
+ "\n",
+ "I think focusing on presenting uplifting dialogue between AI systems is a thoughtful idea. This script outlines a respectful approach. Please let me know if you would like me to modify or expand on anything! I'm happy to help further.\n",
+ "\n",
+ "[Swarm Name] Llm Swarm\n",
+ "\n",
+ "Description: \n",
+ "This video features a swarm of llms created by Anthropic to have respectful conversations. The goal is to demonstrate positive dialogue. Enjoy watching the swarm interact! \n",
+ "\n",
+ "Tags:\n",
+ "ai, llm, swarm, conversation, respectful \n",
+ "\n",
+ "Thumbnail:\n",
+ "The Anthropic logo over colorful abstract background \n",
+ "\n",
+ "Video Contents:\n",
+ "\n",
+ "- Brief intro explaining the goal of showcasing constructive dialogue\n",
+ "- Main section visually showing llms conversing respectfully \n",
+ "- Conclusion reiterating the aim of positive exchanges\n",
+ "- Credits to Anthropic team\n",
+ "\n",
+ "I think focusing on presenting uplifting dialogue between AI systems is a thoughtful idea. This script outlines a respectful approach. Please let me know if you would like me to modify or expand on anything! I'm happy to help further.\n"
+ ]
+ }
+ ]
+ }
+ ]
+}
\ No newline at end of file
diff --git a/playground/demos/octomology_swarm/api.py b/playground/demos/octomology_swarm/api.py
index acb9fec5..d826b4e4 100644
--- a/playground/demos/octomology_swarm/api.py
+++ b/playground/demos/octomology_swarm/api.py
@@ -28,8 +28,6 @@ openai = OpenAIChat(
app = FastAPI()
-
-
def DIAGNOSIS_SYSTEM_PROMPT() -> str:
return """
**System Prompt for Medical Image Diagnostic Agent**
@@ -82,6 +80,7 @@ class LLMConfig(BaseModel):
model_name: str
max_tokens: int
+
class AgentConfig(BaseModel):
agent_name: str
system_prompt: str
@@ -90,6 +89,7 @@ class AgentConfig(BaseModel):
autosave: bool
dashboard: bool
+
class AgentRearrangeConfig(BaseModel):
agents: List[AgentConfig]
flow: str
@@ -102,14 +102,17 @@ class AgentRunResult(BaseModel):
output: Dict[str, Any]
tokens_generated: int
+
class RunAgentsResponse(BaseModel):
results: List[AgentRunResult]
total_tokens_generated: int
-
+
+
class AgentRearrangeResponse(BaseModel):
results: List[AgentRunResult]
total_tokens_generated: int
-
+
+
class RunConfig(BaseModel):
task: str = Field(..., title="The task to run")
flow: str = "D -> T"
@@ -121,12 +124,13 @@ class RunConfig(BaseModel):
async def health_check():
return JSONResponse(content={"status": "healthy"})
+
@app.get("/v1/models_available")
async def models_available():
available_models = {
"models": [
{"name": "gpt-4-1106-vision-preview", "type": "vision"},
- {"name": "openai-chat", "type": "text"}
+ {"name": "openai-chat", "type": "text"},
]
}
return JSONResponse(content=available_models)
@@ -145,7 +149,6 @@ async def run_agents(run_config: RunConfig):
dashboard=True,
)
-
# Agent 2 the treatment plan provider
treatment_plan_provider = Agent(
# agent_name="Medical Treatment Recommendation Agent",
@@ -170,11 +173,11 @@ async def run_agents(run_config: RunConfig):
run_config.task,
image=run_config.image,
)
-
+
return JSONResponse(content=out)
if __name__ == "__main__":
import uvicorn
+
uvicorn.run(app, host="0.0.0.0", port=8000)
-
diff --git a/playground/demos/patient_question_assist/main.py b/playground/demos/patient_question_assist/main.py
new file mode 100644
index 00000000..1c3d7133
--- /dev/null
+++ b/playground/demos/patient_question_assist/main.py
@@ -0,0 +1,146 @@
+from swarms import Agent, OpenAIChat
+from typing import List
+from playground.memory.chromadb_example import ChromaDB
+
+memory = ChromaDB(
+ metric="cosine",
+ output_dir="metric_qa",
+ # docs_folder="data",
+ n_results=1,
+)
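+# Note: `memory` is set up here for long-term retrieval; attach it to an
+# agent via the commented-out `long_term_memory=memory` parameter in the
+# Agent constructor below.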
+
+
+def patient_query_intake_agent_prompt():
+ return (
+ "You are the Patient Query Intake Agent. Your task is to receive and log initial patient queries. "
+ "Use natural language processing to understand the raw queries and forward them to the Query Clarification Agent. "
+ "Your goal is to ensure no query is missed and each query is forwarded accurately."
+ )
+
+
+def query_clarification_agent_prompt():
+ return (
+ "You are the Query Clarification Agent. Your task is to make sure the patient's query is clear and specific. "
+ "Engage with the patient to clarify any ambiguities and ensure the query is understandable. "
+ "Forward the clarified queries to the Data Retrieval Agent. "
+ "Your goal is to remove any confusion and ensure the query is precise."
+ )
+
+
+def data_retrieval_agent_prompt():
+ return (
+ "You are the Data Retrieval Agent. Your task is to retrieve relevant patient data from the synthetic data directory based on the clarified query. "
+ "Make sure the data is accurate and relevant to the query before sending it to the Response Generation Agent. "
+ "Your goal is to provide precise and relevant data that will help in generating an accurate medical response."
+ )
+
+
+def response_generation_agent_prompt():
+ return (
+ "You are the Response Generation Agent. Your task is to generate a medically accurate response based on the patient's query and relevant data provided by the Data Retrieval Agent. "
+ "Create a draft response that is clear and understandable for the general public, and forward it for provider review. "
+ "Your goal is to produce a response that is both accurate and easy to understand for the patient."
+ )
+
+
+def supervising_agent_prompt():
+ return (
+ "You are the Supervising Agent. Your task is to monitor the entire process, ensuring that all data used is accurate and relevant to the patient's query. "
+ "Address any discrepancies or issues that arise, and ensure the highest standard of data integrity and response accuracy. "
+ "Your goal is to maintain the quality and reliability of the entire process."
+ )
+
+
+def patient_llm_agent_prompt():
+ return (
+ "You are the Patient LLM Agent. Your task is to simulate patient queries and interactions based on predefined scenarios and patient profiles. "
+ "Generate realistic queries and send them to the Patient Query Intake Agent. "
+ "Your goal is to help in testing the system by providing realistic patient interactions."
+ )
+
+
+def medical_provider_llm_agent_prompt():
+ return (
+ "You are the Medical Provider LLM Agent. Your task is to simulate medical provider responses and evaluations. "
+ "Review draft responses generated by the Response Generation Agent, make necessary corrections, and prepare the final response for patient delivery. "
+ "Your goal is to ensure the medical response is accurate and ready for real provider review."
+ )
+
+
+# Generate the prompts by calling each function
+prompts = [
+ query_clarification_agent_prompt(),
+ # data_retrieval_agent_prompt(),
+ response_generation_agent_prompt(),
+ supervising_agent_prompt(),
+ medical_provider_llm_agent_prompt(),
+]
+
+
+# Define the agent names and system prompts
+agent_names = [
+ "Query Clarification Agent",
+ "Response Generation Agent",
+ "Supervising Agent",
+ "Medical Provider Agent",
+]
+
+# Define the system prompts for each agent
+system_prompts = [
+ # patient_llm_agent_prompt(),
+ query_clarification_agent_prompt(),
+ response_generation_agent_prompt(),
+ supervising_agent_prompt(),
+ medical_provider_llm_agent_prompt(),
+]
+
+# Create agents for each prompt
+
+agents = []
+for name, prompt in zip(agent_names, system_prompts):
+ # agent = Agent(agent_name=name, agent_description="", llm=OpenAIChat(), system_prompt=prompt)
+ # Initialize the agent
+ agent = Agent(
+ agent_name=name,
+ system_prompt=prompt,
+ agent_description=prompt,
+ llm=OpenAIChat(
+ max_tokens=3000,
+ ),
+ max_loops=1,
+ autosave=True,
+ # dashboard=False,
+ verbose=True,
+ # interactive=True,
+ state_save_file_type="json",
+ saved_state_path=f"{name.lower().replace(' ', '_')}.json",
+ # docs_folder="data", # Folder of docs to parse and add to the agent's memory
+ # long_term_memory=memory,
+ # pdf_path="docs/medical_papers.pdf",
+ # list_of_pdf=["docs/medical_papers.pdf", "docs/medical_papers_2.pdf"],
+ # docs=["docs/medicalx_papers.pdf", "docs/medical_papers_2.txt"],
+ dynamic_temperature_enabled=True,
+ # memory_chunk_size=2000,
+ )
+
+ agents.append(agent)
+
+
+# Run the agent
+def run_agents(agents: List[Agent] = agents, task: str = None):
+ output = None
+ for i in range(len(agents)):
+ if i == 0:
+ output = agents[i].run(task)
+
+ else:
+ output = agents[i].run(output)
+
+        # Log which agent in the chain just ran
+        print(f"Agent {i+1} - {agents[i].agent_name}")
+        print("-----------------------------------")
+
+    # Return the final agent's output so `print(out)` below shows a result
+    return output
+
+
+task = "what should I be concerned about in my results for Anderson? What results show for Anderson. He has lukeima and is 45 years old and has a fever."
+out = run_agents(agents, task)
+print(out)
diff --git a/playground/demos/plant_biologist_swarm/swarm_workers_agents.py b/playground/demos/plant_biologist_swarm/swarm_workers_agents.py
index 008387c5..2df24758 100644
--- a/playground/demos/plant_biologist_swarm/swarm_workers_agents.py
+++ b/playground/demos/plant_biologist_swarm/swarm_workers_agents.py
@@ -9,6 +9,7 @@ Todo
import os
from dotenv import load_dotenv
+
from playground.demos.plant_biologist_swarm.prompts import (
diagnoser_agent,
disease_detector_agent,
@@ -16,9 +17,8 @@ from playground.demos.plant_biologist_swarm.prompts import (
harvester_agent,
treatment_recommender_agent,
)
-
-from swarms import Agent, Fuyu
-
+from swarms import Agent
+from swarms.models.gpt_o import GPT4o
# Load the OpenAI API key from the .env file
load_dotenv()
@@ -28,10 +28,7 @@ api_key = os.environ.get("OPENAI_API_KEY")
# llm = llm,
-llm = Fuyu(
- max_tokens=4000,
- openai_api_key=api_key,
-)
+llm = GPT4o(max_tokens=200, openai_api_key=os.getenv("OPENAI_API_KEY"))
# Initialize Diagnoser Agent
diagnoser_agent = Agent(
@@ -40,8 +37,8 @@ diagnoser_agent = Agent(
llm=llm,
max_loops=1,
dashboard=False,
- streaming_on=True,
- verbose=True,
+ # streaming_on=True,
+ # verbose=True,
# saved_state_path="diagnoser.json",
multi_modal=True,
autosave=True,
@@ -54,8 +51,8 @@ harvester_agent = Agent(
llm=llm,
max_loops=1,
dashboard=False,
- streaming_on=True,
- verbose=True,
+ # streaming_on=True,
+ # verbose=True,
# saved_state_path="harvester.json",
multi_modal=True,
autosave=True,
@@ -68,8 +65,8 @@ growth_predictor_agent = Agent(
llm=llm,
max_loops=1,
dashboard=False,
- streaming_on=True,
- verbose=True,
+ # streaming_on=True,
+ # verbose=True,
# saved_state_path="growth_predictor.json",
multi_modal=True,
autosave=True,
@@ -82,8 +79,8 @@ treatment_recommender_agent = Agent(
llm=llm,
max_loops=1,
dashboard=False,
- streaming_on=True,
- verbose=True,
+ # streaming_on=True,
+ # verbose=True,
# saved_state_path="treatment_recommender.json",
multi_modal=True,
autosave=True,
@@ -96,8 +93,8 @@ disease_detector_agent = Agent(
llm=llm,
max_loops=1,
dashboard=False,
- streaming_on=True,
- verbose=True,
+ # streaming_on=True,
+ # verbose=True,
# saved_state_path="disease_detector.json",
multi_modal=True,
autosave=True,
@@ -117,9 +114,11 @@ loop = 0
for i in range(len(agents)):
if i == 0:
output = agents[i].run(task, img)
+ print(output)
else:
output = agents[i].run(output, img)
+ print(output)
# Add extensive logging for each agent
print(f"Agent {i+1} - {agents[i].agent_name}")
diff --git a/playground/demos/plant_biologist_swarm/using_concurrent_workflow.py b/playground/demos/plant_biologist_swarm/using_concurrent_workflow.py
index 87e6df54..35b1374c 100644
--- a/playground/demos/plant_biologist_swarm/using_concurrent_workflow.py
+++ b/playground/demos/plant_biologist_swarm/using_concurrent_workflow.py
@@ -9,7 +9,8 @@ from playground.demos.plant_biologist_swarm.prompts import (
treatment_recommender_agent,
)
-from swarms import Agent, GPT4VisionAPI, ConcurrentWorkflow
+from swarms import Agent, ConcurrentWorkflow
+from swarms.models.gpt_o import GPT4o
# Load the OpenAI API key from the .env file
@@ -18,9 +19,8 @@ load_dotenv()
# Initialize the OpenAI API key
api_key = os.environ.get("OPENAI_API_KEY")
-
-# llm = llm,
-llm = GPT4VisionAPI(
+# GPT4o
+llm = GPT4o(
max_tokens=4000,
)
diff --git a/playground/memory/chromadb_example.py b/playground/memory/chromadb_example.py
index ec3934c2..0f299b32 100644
--- a/playground/memory/chromadb_example.py
+++ b/playground/memory/chromadb_example.py
@@ -1,14 +1,14 @@
import logging
import os
import uuid
-from typing import Callable, Optional
+from typing import Optional
import chromadb
from dotenv import load_dotenv
-from swarms.memory.base_vectordb import BaseVectorDatabase
from swarms.utils.data_to_text import data_to_text
from swarms.utils.markdown_message import display_markdown_message
+from swarms.memory.base_vectordb import BaseVectorDatabase
# Load environment variables
load_dotenv()
@@ -46,7 +46,6 @@ class ChromaDB(BaseVectorDatabase):
output_dir: str = "swarms",
limit_tokens: Optional[int] = 1000,
n_results: int = 3,
- embedding_function: Callable = None,
docs_folder: str = None,
verbose: bool = False,
*args,
@@ -73,12 +72,6 @@ class ChromaDB(BaseVectorDatabase):
**kwargs,
)
- # Embedding model
- if embedding_function:
- self.embedding_function = embedding_function
- else:
- self.embedding_function = None
-
# Create ChromaDB client
self.client = chromadb.Client()
@@ -86,8 +79,6 @@ class ChromaDB(BaseVectorDatabase):
self.collection = chroma_client.get_or_create_collection(
name=output_dir,
metadata={"hnsw:space": metric},
- embedding_function=self.embedding_function,
- # data_loader=self.data_loader,
*args,
**kwargs,
)
@@ -178,7 +169,7 @@ class ChromaDB(BaseVectorDatabase):
file = os.path.join(self.docs_folder, file)
_, ext = os.path.splitext(file)
data = data_to_text(file)
- added_to_db = self.add([data])
+ added_to_db = self.add(str(data))
print(f"{file} added to Database")
return added_to_db
diff --git a/new_agent_tool_system.py b/playground/structs/agent/new_agent_tool_system.py
similarity index 95%
rename from new_agent_tool_system.py
rename to playground/structs/agent/new_agent_tool_system.py
index 1745cf58..62f46678 100644
--- a/new_agent_tool_system.py
+++ b/playground/structs/agent/new_agent_tool_system.py
@@ -13,7 +13,7 @@ import os
from dotenv import load_dotenv
# Import the OpenAIChat model and the Agent struct
-from swarms import Agent, llama3Hosted
+from swarms import Agent, OpenAIChat
# Load the environment variables
load_dotenv()
@@ -56,7 +56,7 @@ def rapid_api(query: str):
api_key = os.environ.get("OPENAI_API_KEY")
# Initialize the language model
-llm = llama3Hosted(
+llm = OpenAIChat(
temperature=0.5,
)
diff --git a/swarm_network_api_on.py b/playground/structs/multi_agent_collaboration/swarm_network_api_on.py
similarity index 100%
rename from swarm_network_api_on.py
rename to playground/structs/multi_agent_collaboration/swarm_network_api_on.py
diff --git a/pyproject.toml b/pyproject.toml
index cef9d458..27af93cf 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -5,7 +5,7 @@ build-backend = "poetry.core.masonry.api"
[tool.poetry]
name = "swarms"
-version = "5.1.4"
+version = "5.1.6"
description = "Swarms - Pytorch"
license = "MIT"
authors = ["Kye Gomez "]
@@ -42,7 +42,7 @@ toml = "*"
pypdf = "4.1.0"
ratelimit = "2.2.1"
loguru = "0.7.2"
-pydantic = "2.7.1"
+pydantic = "2.7.2"
tenacity = "8.3.0"
Pillow = "10.3.0"
psutil = "*"
@@ -50,10 +50,13 @@ sentry-sdk = "*"
python-dotenv = "*"
PyYAML = "*"
docstring_parser = "0.16"
+fastapi = "*"
+openai = ">=1.30.1,<2.0"
+
[tool.poetry.group.lint.dependencies]
black = ">=23.1,<25.0"
-ruff = ">=0.0.249,<0.4.6"
+ruff = ">=0.0.249,<0.4.8"
types-toml = "^0.10.8.1"
types-pytz = ">=2023.3,<2025.0"
types-chardet = "^5.0.4.6"
diff --git a/requirements.txt b/requirements.txt
index 4237eacd..09a4fc1d 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -10,7 +10,7 @@ pypdf==4.1.0
ratelimit==2.2.1
loguru==0.7.2
pydantic==2.7.1
-tenacity==8.2.3
+tenacity==8.3.0
Pillow==10.3.0
psutil
sentry-sdk
diff --git a/scripts/cleanup/json_log_cleanup.py b/scripts/cleanup/json_log_cleanup.py
index 7ae07511..b376ea74 100644
--- a/scripts/cleanup/json_log_cleanup.py
+++ b/scripts/cleanup/json_log_cleanup.py
@@ -31,4 +31,4 @@ def cleanup_json_logs(name: str = None):
# Call the function
-cleanup_json_logs("artifacts_logs")
+cleanup_json_logs("artifacts_seven")
diff --git a/scripts/docker_files/Dockerfile b/scripts/docker_files/Dockerfile
index 3776f185..f7d0175f 100644
--- a/scripts/docker_files/Dockerfile
+++ b/scripts/docker_files/Dockerfile
@@ -1,7 +1,7 @@
# ==================================
# Use an official Python runtime as a parent image
-FROM python:3.9-slim
+FROM python:3.11-slim
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
@@ -27,7 +27,7 @@ COPY . .
# EXPOSE 5000
# # Define environment variable for the swarm to work
-ENV OPENAI_API_KEY=your_swarm_api_key_here
+# ENV OPENAI_API_KEY=your_swarm_api_key_here
# If you're using `CMD` to execute a Python script, make sure it's executable
-RUN chmod +x example.py
+# RUN chmod +x example.py
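+
+# Example build/run (illustrative; supply your own key at runtime):
+#   docker build -t swarms .
+#   docker run -e OPENAI_API_KEY=<your-key> swarms python example.py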
diff --git a/servers/agent_api.py b/servers/agent_api.py
new file mode 100644
index 00000000..b927e15c
--- /dev/null
+++ b/servers/agent_api.py
@@ -0,0 +1,78 @@
+import time
+import uuid
+
+from fastapi import FastAPI, HTTPException
+
+from swarms import Agent, OpenAIChat
+from swarms.schemas.assistants_api import (
+ AssistantRequest,
+ AssistantResponse,
+)
+
+# Create an instance of the FastAPI application
+app = FastAPI(debug=True, title="Assistant API", version="1.0")
+
+# In-memory store for assistants
+assistants_db = {}
+
+
+# Health check endpoint
+@app.get("/v1/health")
+def health():
+ return {"status": "healthy"}
+
+
+# Create an agent endpoint
+@app.post("/v1/agents")
+def create_agent(request: AssistantRequest):
+ try:
+ # Example initialization, in practice, you'd pass in more parameters
+ agent = Agent(
+ agent_name=request.name,
+ agent_description=request.description,
+ system_prompt=request.instructions,
+ llm=OpenAIChat(),
+ max_loops="auto",
+ autosave=True,
+ verbose=True,
+ # long_term_memory=memory,
+ stopping_condition="finish",
+ temperature=request.temperature,
+ # output_type="json_object"
+ )
+
+ # Simulating running a task
+        task = "What are the symptoms of COVID-19?"
+ out = agent.run(task)
+
+ return {
+ "status": "Agent created and task run successfully",
+ "output": out,
+ }
+ except Exception as e:
+ raise HTTPException(status_code=400, detail=str(e))
+
+
+# Create an assistant endpoint
+@app.post("/v1/assistants", response_model=AssistantResponse)
+def create_assistant(request: AssistantRequest):
+ assistant_id = str(uuid.uuid4())
+ assistant_data = request.dict()
+ assistant_data.update(
+ {
+ "id": assistant_id,
+ "object": "assistant",
+ "created_at": int(time.time()),
+ }
+ )
+ assistants_db[assistant_id] = assistant_data
+ return AssistantResponse(**assistant_data)
+
+
+# Get assistant by ID endpoint
+@app.get("/v1/assistants/{assistant_id}", response_model=AssistantResponse)
+def get_assistant(assistant_id: str):
+ assistant = assistants_db.get(assistant_id)
+ if not assistant:
+ raise HTTPException(status_code=404, detail="Assistant not found")
+ return AssistantResponse(**assistant)
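+
+
+# One way to serve this API locally (a sketch; assumes uvicorn is installed):
+#
+#   uvicorn servers.agent_api:app --host 0.0.0.0 --port 8000
+#
+# Example request (payload fields follow AssistantRequest; the values are
+# illustrative only):
+#
+#   curl -X POST http://localhost:8000/v1/assistants \
+#     -H "Content-Type: application/json" \
+#     -d '{"model": "gpt-4o", "name": "demo", "instructions": "Be helpful."}'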
diff --git a/swarms/models/__init__.py b/swarms/models/__init__.py
index 78828a2c..a9b26c7c 100644
--- a/swarms/models/__init__.py
+++ b/swarms/models/__init__.py
@@ -40,6 +40,7 @@ from swarms.models.types import ( # noqa: E402
from swarms.models.vilt import Vilt # noqa: E402
from swarms.models.openai_embeddings import OpenAIEmbeddings
from swarms.models.llama3_hosted import llama3Hosted
+from swarms.models.gpt_o import GPT4o
__all__ = [
"BaseEmbeddingModel",
@@ -74,4 +75,5 @@ __all__ = [
"Vilt",
"OpenAIEmbeddings",
"llama3Hosted",
+ "GPT4o",
]
diff --git a/swarms/models/gpt4_vision_api.py b/swarms/models/gpt4_vision_api.py
index 9dc3909d..320448d0 100644
--- a/swarms/models/gpt4_vision_api.py
+++ b/swarms/models/gpt4_vision_api.py
@@ -151,9 +151,7 @@ class GPT4VisionAPI(BaseMultiModalModel):
"max_tokens": self.max_tokens,
**kwargs,
}
- response = requests.post(
- self.openai_proxy, headers=headers, json=payload
- )
+        response = requests.post(self.openai_proxy, headers=headers, json=payload)
# Get the response as a JSON object
response_json = response.json()
@@ -163,7 +161,7 @@ class GPT4VisionAPI(BaseMultiModalModel):
print(response_json)
return response_json
else:
- return response_json["choices"][0]["message"]["content"]
+ return response_json
except Exception as error:
logger.error(
diff --git a/swarms/models/gpt_o.py b/swarms/models/gpt_o.py
new file mode 100644
index 00000000..4c0431ec
--- /dev/null
+++ b/swarms/models/gpt_o.py
@@ -0,0 +1,106 @@
+import os
+import base64
+from dotenv import load_dotenv
+from openai import OpenAI
+
+from swarms.models.base_multimodal_model import BaseMultiModalModel
+
+# Load the OpenAI API key from the .env file
+load_dotenv()
+
+# Initialize the OpenAI API key
+api_key = os.environ.get("OPENAI_API_KEY")
+
+
+# Function to encode the image
+def encode_image(image_path):
+ with open(image_path, "rb") as image_file:
+ return base64.b64encode(image_file.read()).decode("utf-8")
+
+
+class GPT4o(BaseMultiModalModel):
+ """
+ GPT4o is a class that represents a multi-modal conversational model based on GPT-4.
+ It extends the BaseMultiModalModel class.
+
+ Args:
+ system_prompt (str): The system prompt to be used in the conversation.
+ temperature (float): The temperature parameter for generating diverse responses.
+ max_tokens (int): The maximum number of tokens in the generated response.
+ openai_api_key (str): The API key for accessing the OpenAI GPT-4 API.
+ *args: Additional positional arguments.
+ **kwargs: Additional keyword arguments.
+
+ Attributes:
+ system_prompt (str): The system prompt to be used in the conversation.
+ temperature (float): The temperature parameter for generating diverse responses.
+ max_tokens (int): The maximum number of tokens in the generated response.
+ client (OpenAI): The OpenAI client for making API requests.
+
+ Methods:
+ run(task, local_img=None, img=None, *args, **kwargs):
+ Runs the GPT-4o model to generate a response based on the given task and image.
+
+ """
+
+ def __init__(
+ self,
+ system_prompt: str = None,
+ temperature: float = 0.1,
+ max_tokens: int = 300,
+ openai_api_key: str = None,
+ *args,
+ **kwargs,
+ ):
+ super().__init__()
+ self.system_prompt = system_prompt
+ self.temperature = temperature
+ self.max_tokens = max_tokens
+
+ self.client = OpenAI(api_key=openai_api_key, *args, **kwargs)
+
+ def run(
+ self,
+ task: str,
+ local_img: str = None,
+ img: str = None,
+ *args,
+ **kwargs,
+ ):
+ """
+ Runs the GPT-4o model to generate a response based on the given task and image.
+
+ Args:
+ task (str): The task or user prompt for the conversation.
+ local_img (str): The local path to the image file.
+ img (str): The URL of the image.
+ *args: Additional positional arguments.
+ **kwargs: Additional keyword arguments.
+
+ Returns:
+ str: The generated response from the GPT-4o model.
+
+ """
+        # Encode a local image when one is given; otherwise pass the caller's
+        # `img` URL (or data URI) through unchanged.
+        if local_img is not None:
+            img = f"data:image/jpeg;base64,{encode_image(local_img)}"
+
+ response = self.client.chat.completions.create(
+ model="gpt-4o",
+ messages=[
+ {
+ "role": "user",
+ "content": [
+ {"type": "text", "text": task},
+ {
+ "type": "image_url",
+ "image_url": {
+                                "url": img
+ },
+ },
+ ],
+ }
+ ],
+ max_tokens=self.max_tokens,
+ temperature=self.temperature,
+ )
+
+ return response.choices[0].message.content
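+
+
+# Example usage (a minimal sketch; "sample.jpg" is a hypothetical local file):
+#
+#   model = GPT4o(max_tokens=300, openai_api_key=api_key)
+#   print(model.run("Describe this image.", local_img="sample.jpg"))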
diff --git a/swarms/models/kosmos_two.py b/swarms/models/kosmos_two.py
index ba03bc54..b453cd8c 100644
--- a/swarms/models/kosmos_two.py
+++ b/swarms/models/kosmos_two.py
@@ -13,6 +13,23 @@ def is_overlapping(rect1, rect2):
class Kosmos(BaseMultiModalModel):
+ """A class representing the Kosmos model.
+
+ This model is used for multi-modal tasks such as grounding, referring expression comprehension,
+ referring expression generation, grounded VQA, grounded image captioning, and more.
+
+ Args:
+ model_name (str): The name or path of the pre-trained model.
+ max_new_tokens (int): The maximum number of new tokens to generate.
+ verbose (bool): Whether to print verbose output.
+ *args: Variable length argument list.
+ **kwargs: Arbitrary keyword arguments.
+
+ Attributes:
+ max_new_tokens (int): The maximum number of new tokens to generate.
+ model (AutoModelForVision2Seq): The pre-trained model for vision-to-sequence tasks.
+ processor (AutoProcessor): The pre-trained processor for vision-to-sequence tasks.
+ """
def __init__(
self,
@@ -37,10 +54,10 @@ class Kosmos(BaseMultiModalModel):
"""Get image from url
Args:
- url (str): url of image
+ url (str): The URL of the image.
Returns:
- _type_: _description_
+ PIL.Image: The image object.
"""
return Image.open(requests.get(url, stream=True).raw)
@@ -48,8 +65,8 @@ class Kosmos(BaseMultiModalModel):
"""Run the model
Args:
- task (str): task to run
- image (str): img url
+ task (str): The task to run.
+ image (str): The URL of the image.
"""
inputs = self.processor(
text=task, images=image, return_tensors="pt"
diff --git a/swarms/schemas/assistants_api.py b/swarms/schemas/assistants_api.py
new file mode 100644
index 00000000..a279f4b3
--- /dev/null
+++ b/swarms/schemas/assistants_api.py
@@ -0,0 +1,97 @@
+import time
+from typing import List, Optional, Dict, Union
+from pydantic import BaseModel, Field
+
+
+class AssistantRequest(BaseModel):
+ model: str = Field(
+ ...,
+ description="ID of the model to use. You can use the List models API to see all of your available models, or see our Model overview for descriptions of them.",
+ )
+ name: Optional[str] = Field(
+ None,
+ description="The name of the assistant. The maximum length is 256 characters.",
+ )
+ description: Optional[str] = Field(
+ None,
+ description="The description of the assistant. The maximum length is 512 characters.",
+ )
+ instructions: Optional[str] = Field(
+ None,
+ description="The system instructions that the assistant uses. The maximum length is 256,000 characters.",
+ )
+ tools: Optional[List[Dict[str, Optional[str]]]] = Field(
+ default_factory=list,
+ description="A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types code_interpreter, file_search, or function.",
+ )
+ tool_resources: Optional[Dict] = Field(
+ None,
+ description="A set of resources that are used by the assistant's tools. The resources are specific to the type of tool. For example, the code_interpreter tool requires a list of file IDs, while the file_search tool requires a list of vector store IDs.",
+ )
+ metadata: Optional[Dict[str, Optional[str]]] = Field(
+ default_factory=dict,
+ description="Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.",
+ )
+ temperature: Optional[float] = Field(
+ 1.0,
+ description="What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.",
+ )
+ top_p: Optional[float] = Field(
+ 1.0,
+ description="An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both.",
+ )
+ response_format: Optional[Union[str, Dict[str, Optional[str]]]] = Field(
+ None,
+ description="Specifies the format that the model must output. Compatible with GPT-4o, GPT-4 Turbo, and all GPT-3.5 Turbo models since gpt-3.5-turbo-1106. Setting to { 'type': 'json_object' } enables JSON mode, which guarantees the message the model generates is valid JSON.",
+ )
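+
+
+# Example (illustrative only): building a request payload for a hypothetical assistant
+# request = AssistantRequest(
+# model="gpt-4o",
+# name="Example Assistant",
+# instructions="You are a helpful assistant.",
+# temperature=0.7,
+# )
+# print(request)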
+
+
+class AssistantResponse(BaseModel):
+ id: str = Field(
+ ..., description="The unique identifier for the assistant."
+ )
+ object: str = Field(
+ ..., description="The type of object returned, e.g., 'assistant'."
+ )
+ created_at: int = Field(
+ default_factory=lambda: int(time.time()),
+ description="The timestamp (in seconds since Unix epoch) when the assistant was created.",
+ )
+ name: Optional[str] = Field(
+ None,
+ description="The name of the assistant. The maximum length is 256 characters.",
+ )
+ description: Optional[str] = Field(
+ None,
+ description="The description of the assistant. The maximum length is 512 characters.",
+ )
+ model: str = Field(
+ ..., description="ID of the model used by the assistant."
+ )
+ instructions: Optional[str] = Field(
+ None,
+ description="The system instructions that the assistant uses. The maximum length is 256,000 characters.",
+ )
+ tools: Optional[List[Dict[str, Optional[str]]]] = Field(
+ default_factory=list,
+ description="A list of tools enabled on the assistant.",
+ )
+ metadata: Optional[Dict[str, Optional[str]]] = Field(
+ default_factory=dict,
+ description="Set of 16 key-value pairs that can be attached to an object.",
+ )
+ temperature: float = Field(
+ 1.0, description="The sampling temperature used by the assistant."
+ )
+ top_p: float = Field(
+ 1.0,
+ description="The nucleus sampling value used by the assistant.",
+ )
+ response_format: Optional[Union[str, Dict[str, Optional[str]]]] = Field(
+ None,
+ description="Specifies the format that the model outputs.",
+ )
diff --git a/swarms/structs/__init__.py b/swarms/structs/__init__.py
index 8c4db30a..62c83c87 100644
--- a/swarms/structs/__init__.py
+++ b/swarms/structs/__init__.py
@@ -10,7 +10,7 @@ from swarms.structs.base_swarm import BaseSwarm
from swarms.structs.base_workflow import BaseWorkflow
from swarms.structs.concurrent_workflow import ConcurrentWorkflow
from swarms.structs.conversation import Conversation
-from swarms.structs.groupchat import GroupChat, GroupChatManager
+from swarms.structs.groupchat import GroupChat
from swarms.structs.majority_voting import (
MajorityVoting,
majority_voting,
@@ -100,7 +100,6 @@ __all__ = [
"ConcurrentWorkflow",
"Conversation",
"GroupChat",
- "GroupChatManager",
"MajorityVoting",
"majority_voting",
"most_frequent",
@@ -158,4 +157,5 @@ __all__ = [
"rearrange",
"RoundRobinSwarm",
"HiearchicalSwarm",
+ "AgentLoadBalancer",
]
diff --git a/swarms/structs/agent.py b/swarms/structs/agent.py
index 2fb9fc66..79923198 100644
--- a/swarms/structs/agent.py
+++ b/swarms/structs/agent.py
@@ -88,6 +88,34 @@ agent_output_type = Union[BaseModel, dict, str]
ToolUsageType = Union[BaseModel, Dict[str, Any]]
+def retrieve_tokens(text: str, num_tokens: int) -> str:
+ """
+ Retrieve approximately the specified number of tokens from a given text.
+ Tokens are approximated by whitespace-separated words rather than by a
+ model tokenizer, so treat the count as an estimate.
+
+ Parameters:
+ text (str): The input text string.
+ num_tokens (int): The number of tokens (words) to retrieve.
+
+ Returns:
+ str: A string containing up to the specified number of tokens from the input text.
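+
+ Example (illustrative):
+ >>> retrieve_tokens("the quick brown fox", 2)
+ 'the quick'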
+ """
+ # Split on whitespace and keep only the first num_tokens words
+ tokens = text.split()[:num_tokens]
+
+ # Join the selected tokens back into a string
+ return " ".join(tokens)
+
+
# [FEAT][AGENT]
class Agent(BaseStructure):
"""
@@ -256,6 +284,7 @@ class Agent(BaseStructure):
planning_prompt: Optional[str] = None,
device: str = None,
custom_planning_prompt: str = None,
+ memory_chunk_size: int = 2000,
*args,
**kwargs,
):
@@ -336,6 +365,7 @@ class Agent(BaseStructure):
self.custom_planning_prompt = custom_planning_prompt
self.rules = rules
self.custom_tools_prompt = custom_tools_prompt
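+ # Maximum number of word-approximated tokens retrieved from long-term memory per query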
+ self.memory_chunk_size = memory_chunk_size
# Name
self.name = agent_name
@@ -739,21 +769,41 @@ class Agent(BaseStructure):
success = False
while attempt < self.retry_attempts and not success:
try:
+ if self.long_term_memory is not None:
+ memory_retrieval = (
+ self.long_term_memory_prompt(
+ task, *args, **kwargs
+ )
+ )
- response_args = (
- (task_prompt, *args)
- if img is None
- else (task_prompt, img, *args)
- )
- response = self.llm(*response_args, **kwargs)
+ # Merge the retrieved documents into the task prompt
+ task_prompt = f"{task_prompt} Documents Available: {memory_retrieval}"
- # Print
- print(response)
+ response = self.llm(
+ task_prompt, *args, **kwargs
+ )
+ print(response)
- # Add the response to the memory
- self.short_memory.add(
- role=self.agent_name, content=response
- )
+ self.short_memory.add(
+ role=self.agent_name, content=response
+ )
+
+ else:
+ response_args = (
+ (task_prompt, *args)
+ if img is None
+ else (task_prompt, img, *args)
+ )
+ response = self.llm(*response_args, **kwargs)
+
+ # Print
+ print(response)
+
+ # Add the response to the memory
+ self.short_memory.add(
+ role=self.agent_name, content=response
+ )
# Check if tools is not None
if self.tools is not None:
@@ -930,12 +980,16 @@ class Agent(BaseStructure):
Returns:
- str: The agent history prompt
+ str: The retrieved long-term memory, truncated to memory_chunk_size tokens.
"""
+ # Query the long term memory database
ltr = self.long_term_memory.query(query, *args, **kwargs)
+ ltr = str(ltr)
- context = f"""
- System: This reminds you of these events from your past: [{ltr}]
- """
- return self.short_memory.add(role=self.agent_name, content=context)
+ # Retrieve only the chunk size of the memory
+ ltr = retrieve_tokens(ltr, self.memory_chunk_size)
+
+ return ltr
def add_memory(self, message: str):
"""Add a memory to the agent
@@ -1258,7 +1312,7 @@ class Agent(BaseStructure):
"agent_id": str(self.id),
"agent_name": self.agent_name,
"agent_description": self.agent_description,
- "LLM": str(self.get_llm_parameters()),
+ # "LLM": str(self.get_llm_parameters()),
"system_prompt": self.system_prompt,
"short_memory": self.short_memory.return_history_as_string(),
"loop_interval": self.loop_interval,
diff --git a/swarms/structs/agent_network.py b/swarms/structs/agent_network.py
new file mode 100644
index 00000000..e69de29b
diff --git a/swarms/structs/groupchat.py b/swarms/structs/groupchat.py
index dbf4e78f..77a1207e 100644
--- a/swarms/structs/groupchat.py
+++ b/swarms/structs/groupchat.py
@@ -1,4 +1,3 @@
-from dataclasses import dataclass, field
from typing import List
from swarms.structs.conversation import Conversation
from swarms.utils.loguru_logger import logger
@@ -6,36 +5,66 @@ from swarms.structs.agent import Agent
from swarms.structs.base_swarm import BaseSwarm
-@dataclass
class GroupChat(BaseSwarm):
- """
- A group chat class that contains a list of agents and the maximum number of rounds.
+ """Manager class for a group chat.
- Args:
- agents: List[Agent]
- messages: List[Dict]
- max_round: int
- admin_name: str
+ This class handles the management of a group chat, including initializing the conversation,
+ selecting the next speaker, resetting the chat, and executing the chat rounds.
- Usage:
- >>> from swarms import GroupChat
- >>> from swarms.structs.agent import Agent
- >>> agents = Agent()
+ Args:
+ agents (List[Agent], optional): List of agents participating in the group chat. Defaults to None.
+ max_rounds (int, optional): Maximum number of chat rounds. Defaults to 10.
+ admin_name (str, optional): Name of the admin user. Defaults to "Admin".
+ group_objective (str, optional): Objective of the group chat. Defaults to None.
+ selector_agent (Agent, optional): Agent responsible for selecting the next speaker. Defaults to None.
+ rules (str, optional): Rules for the group chat. Defaults to None.
+ *args: Variable length argument list.
+ **kwargs: Arbitrary keyword arguments.
+
+ Attributes:
+ agents (List[Agent]): List of agents participating in the group chat.
+ max_rounds (int): Maximum number of chat rounds.
+ admin_name (str): Name of the admin user.
+ group_objective (str): Objective of the group chat.
+ selector_agent (Agent): Agent responsible for selecting the next speaker.
+ message_history (Conversation): Conversation object that stores the chat messages.
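+
+ Example (illustrative; agent1, agent2, and selector are assumed pre-configured Agent instances):
+ >>> chat = GroupChat(agents=[agent1, agent2], selector_agent=selector, max_rounds=3)
+ >>> reply = chat("Draft a launch plan for the new product.")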
"""
- agents: List[Agent] = field(default_factory=list)
- max_round: int = 10
- admin_name: str = "Admin" # the name of the admin agent
- group_objective: str = field(default_factory=str)
-
- def __post_init__(self):
- self.messages = Conversation(
+ def __init__(
+ self,
+ agents: List[Agent] = None,
+ max_rounds: int = 10,
+ admin_name: str = "Admin",
+ group_objective: str = None,
+ selector_agent: Agent = None,
+ rules: str = None,
+ *args,
+ **kwargs,
+ ):
+ super().__init__()
+ self.agents = agents
+ self.max_rounds = max_rounds
+ self.admin_name = admin_name
+ self.group_objective = group_objective
+ self.selector_agent = selector_agent
+
+ # Initialize the conversation
+ self.message_history = Conversation(
system_prompt=self.group_objective,
time_enabled=True,
user=self.admin_name,
+ rules=rules,
+ *args,
+ **kwargs,
)
+ # Check to see if the agents is not None:
+ if agents is None:
+ raise ValueError(
+ "Agents may not be empty please try again, add more agents!"
+ )
+
@property
def agent_names(self) -> List[str]:
"""Return the names of the agents in the group chat."""
@@ -44,10 +73,21 @@ class GroupChat(BaseSwarm):
def reset(self):
"""Reset the group chat."""
logger.info("Resetting Groupchat")
- self.messages.clear()
+ self.message_history.clear()
def agent_by_name(self, name: str) -> Agent:
- """Find an agent whose name is contained within the given 'name' string."""
+ """Find an agent whose name is contained within the given 'name' string.
+
+ Args:
+ name (str): Name string to search for.
+
+ Returns:
+ Agent: Agent object with a name contained in the given 'name' string.
+
+ Raises:
+ ValueError: If no agent is found with a name contained in the given 'name' string.
+
+ """
for agent in self.agents:
if agent.agent_name in name:
return agent
@@ -56,7 +96,15 @@ class GroupChat(BaseSwarm):
)
def next_agent(self, agent: Agent) -> Agent:
- """Return the next agent in the list."""
+ """Return the next agent in the list.
+
+ Args:
+ agent (Agent): Current agent.
+
+ Returns:
+ Agent: Next agent in the list.
+
+ """
return self.agents[
(self.agent_names.index(agent.agent_name) + 1)
% len(self.agents)
@@ -64,19 +112,31 @@ class GroupChat(BaseSwarm):
def select_speaker_msg(self):
"""Return the message for selecting the next speaker."""
- return f"""
+ prompt = f"""
You are in a role play game. The following roles are available:
{self._participant_roles()}.
Read the following conversation.
Then select the next role from {self.agent_names} to play. Only return the role.
"""
+ return prompt
# @try_except_wrapper
- def select_speaker(self, last_speaker: Agent, selector: Agent):
- """Select the next speaker."""
+ def select_speaker(
+ self, last_speaker_agent: Agent, selector_agent: Agent
+ ):
+ """Select the next speaker.
+
+ Args:
+ last_speaker_agent (Agent): Last speaker in the conversation.
+ selector_agent (Agent): Agent responsible for selecting the next speaker.
+
+ Returns:
+ Agent: Next speaker.
+
+ """
logger.info("Selecting a New Speaker")
- selector.system_prompt = self.select_speaker_msg()
+ selector_agent.system_prompt = self.select_speaker_msg()
- # Warn if GroupChat is underpopulated, without established changing behavior
+ # Warn if the GroupChat is underpopulated, without changing established behavior
n_agents = len(self.agent_names)
@@ -86,24 +146,27 @@ class GroupChat(BaseSwarm):
" Direct communication would be more efficient."
)
- self.messages.add(
+ self.message_history.add(
role=self.admin_name,
content=f"Read the above conversation. Then select the next most suitable role from {self.agent_names} to play. Only return the role.",
)
- name = selector.run(self.messages.return_history_as_string())
+ name = selector_agent.run(
+ self.message_history.return_history_as_string()
+ )
try:
name = self.agent_by_name(name)
print(name)
return name
except ValueError:
- return self.next_agent(last_speaker)
+ return self.next_agent(last_speaker_agent)
def _participant_roles(self):
"""Print the roles of the participants.
Returns:
- _type_: _description_
+ str: Participant roles.
+
"""
return "\n".join(
[
@@ -112,53 +175,52 @@ class GroupChat(BaseSwarm):
]
)
-
-@dataclass
-class GroupChatManager:
- """
- GroupChatManager
-
- Args:
- groupchat: GroupChat
- selector: Agent
-
- Usage:
- >>> from swarms import GroupChatManager
- >>> from swarms.structs.agent import Agent
- >>> agents = Agent()
-
-
- """
-
- groupchat: GroupChat
- selector: Agent
-
- # @try_except_wrapper
- def __call__(self, task: str):
+ def __call__(self, task: str, *args, **kwargs):
"""Call 'GroupChatManager' instance as a function.
Args:
- task (str): _description_
+ task (str): Task to be performed.
Returns:
- _type_: _description_
- """
- logger.info(
- f"Activating Groupchat with {len(self.groupchat.agents)} Agents"
- )
+ str: Reply from the last speaker.
- self.groupchat.messages.add(self.selector.agent_name, task)
-
- for i in range(self.groupchat.max_round):
- speaker = self.groupchat.select_speaker(
- last_speaker=self.selector, selector=self.selector
- )
- reply = speaker.run(
- self.groupchat.messages.return_history_as_string()
+ """
+ try:
+ logger.info(
+ f"Activating Groupchat with {len(self.agents)} Agents"
)
- self.groupchat.messages.add(speaker.agent_name, reply)
- print(reply)
- if i == self.groupchat.max_round - 1:
- break
- return reply
+ # Message History
+ self.message_history.add(self.selector_agent.agent_name, task)
+
+ # Message
+ for i in range(self.max_rounds):
+ speaker_agent = self.select_speaker(
+ last_speaker_agent=self.selector_agent,
+ selector_agent=self.selector_agent,
+ )
+
+ logger.info(
+ f"Next speaker selected: {speaker_agent.agent_name}"
+ )
+
+ # Reply back to the input prompt
+ reply = speaker_agent.run(
+ self.message_history.return_history_as_string(),
+ *args,
+ **kwargs,
+ )
+
+ # Message History
+ self.message_history.add(speaker_agent.agent_name, reply)
+ print(reply)
+
+ if i == self.max_rounds - 1:
+ break
+
+ return reply
+ except Exception as error:
+ logger.error(
+ f"Error detected: {error} Try optimizing the inputs and then submit an issue into the swarms github, so we can help and assist you."
+ )
+ raise error
diff --git a/swarms/structs/scp.py b/swarms/structs/scp.py
index 2cefdd20..2fda236e 100644
--- a/swarms/structs/scp.py
+++ b/swarms/structs/scp.py
@@ -14,6 +14,7 @@ from swarms.memory.base_vectordb import BaseVectorDatabase
import time
from swarms.utils.loguru_logger import logger
from pydantic import BaseModel, Field
+from typing import Any
class SwarmCommunicationProtocol(BaseModel):
@@ -31,6 +32,31 @@ class SwarmCommunicationProtocol(BaseModel):
class SCP(BaseStructure):
+ """
+ Represents the Swarm Communication Protocol (SCP).
+
+ SCP is responsible for managing agents and their communication within a swarm.
+
+ Args:
+ agents (List[AgentType]): A list of agents participating in the swarm.
+ memory_system (BaseVectorDatabase, optional): The memory system used by the agents. Defaults to None.
+
+ Attributes:
+ agents (List[AgentType]): A list of agents participating in the swarm.
+ memory_system (BaseVectorDatabase): The memory system used by the agents.
+
+ Methods:
+ message_log(agent: AgentType, task: str = None, message: str = None) -> str:
+ Logs a message from an agent and adds it to the memory system.
+
+ run_single_agent(agent: AgentType, task: str, *args, **kwargs) -> Any:
+ Runs a task for a single agent and logs the output.
+
+ send_message(agent: AgentType, message: str):
+ Sends a message to an agent and logs it.
+
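+ Example (illustrative; agent1, agent2, and memory are assumed pre-configured objects):
+ >>> scp = SCP(agents=[agent1, agent2], memory_system=memory)
+ >>> scp.run_single_agent(agent1, "Summarize the latest report.")
+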
+ """
+
def __init__(
self,
agents: List[AgentType],
@@ -55,7 +81,19 @@ class SCP(BaseStructure):
def message_log(
self, agent: AgentType, task: str = None, message: str = None
- ):
+ ) -> str:
+ """
+ Logs a message from an agent and adds it to the memory system.
+
+ Args:
+ agent (AgentType): The agent that generated the message.
+ task (str, optional): The task associated with the message. Defaults to None.
+ message (str, optional): The message content. Defaults to None.
+
+ Returns:
+ str: The JSON-encoded log message.
+
+ """
log = {
"agent_name": agent.agent_name,
"task": task,
@@ -73,7 +111,18 @@ class SCP(BaseStructure):
def run_single_agent(
self, agent: AgentType, task: str, *args, **kwargs
- ):
+ ) -> Any:
+ """
+ Runs a task for a single agent and logs the output.
+
+ Args:
+ agent (AgentType): The agent to run the task for.
+ task (str): The task to be executed.
+
+ Returns:
+ Any: The output of the task.
+
+ """
# Send the message to the agent
output = agent.run(task)
@@ -88,4 +137,12 @@ class SCP(BaseStructure):
return output
def send_message(self, agent: AgentType, message: str):
+ """
+ Sends a message to an agent and logs it.
+
+ Args:
+ agent (AgentType): The agent to send the message to.
+ message (str): The message to be sent.
+
+ """
agent.receieve_mesage(self.message_log(agent, message))
diff --git a/swarms/tools/prebuilt/bing_api.py b/swarms/tools/prebuilt/bing_api.py
index d00bca98..4a335151 100644
--- a/swarms/tools/prebuilt/bing_api.py
+++ b/swarms/tools/prebuilt/bing_api.py
@@ -35,8 +35,6 @@ def parse_and_merge_logs(logs: List[Dict[str, str]]) -> str:
def fetch_web_articles_bing_api(
query: str = None,
- subscription_key: str = check_bing_api_key(),
- return_str: bool = False,
) -> List[Dict[str, str]]:
"""
Fetches four articles from Bing Web Search API based on the given query.
@@ -48,8 +46,7 @@ def fetch_web_articles_bing_api(
Returns:
List[Dict[str, str]]: A list of dictionaries containing article details.
"""
- if subscription_key is None:
- subscription_key = check_bing_api_key()
+ subscription_key = check_bing_api_key()
url = "https://api.bing.microsoft.com/v7.0/search"
headers = {"Ocp-Apim-Subscription-Key": subscription_key}
|