main 0.1.4
Kye 2 years ago
parent d81d71a94b
commit ebd3d3e62d

@ -0,0 +1,149 @@
# Bounty Program
Our bounty program is an exciting opportunity for contributors to help us build the future of Swarms. By participating, you can earn rewards while contributing to a project that aims to revolutionize digital activity.
Here's how it works:
1. **Check out our Roadmap**: We've shared our roadmap detailing our short- and long-term goals. These are the areas where we're seeking contributions.
2. **Pick a Task**: Choose a task from the roadmap that aligns with your skills and interests. If you're unsure, you can reach out to our team for guidance.
3. **Get to Work**: Once you've chosen a task, start working on it. Remember, quality is key. We're looking for contributions that truly make a difference.
4. **Submit your Contribution**: Once your work is complete, submit it for review. We'll evaluate your contribution based on its quality, relevance, and the value it brings to Swarms.
5. **Earn Rewards**: If your contribution is approved, you'll earn a bounty. The amount of the bounty depends on the complexity of the task, the quality of your work, and the value it brings to Swarms.
## The Three Phases of Our Bounty Program
### Phase 1: Building the Foundation
In the first phase, our focus is on building the basic infrastructure of Swarms. This includes developing key components like the Swarms class, integrating essential tools, and establishing task completion and evaluation logic. We'll also start developing our testing and evaluation framework during this phase. If you're interested in foundational work and have a knack for building robust, scalable systems, this phase is for you.
### Phase 2: Enhancing the System
In the second phase, we'll focus on enhancing Swarms by integrating more advanced features, improving the system's efficiency, and refining our testing and evaluation framework. This phase involves more complex tasks, so if you enjoy tackling challenging problems and contributing to the development of innovative features, this is the phase for you.
### Phase 3: Towards Super-Intelligence
The third phase of our bounty program is the most exciting: this is where we aim to achieve super-intelligence. In this phase, we'll be working on improving the swarm's capabilities, expanding its skills, and fine-tuning the system based on real-world testing and feedback. If you're excited about the future of AI and want to contribute to a project that could potentially transform the digital world, this is the phase for you.
Remember, our roadmap is a guide, and we encourage you to bring your own ideas and creativity to the table. We believe that every contribution, no matter how small, can make a difference. So join us on this exciting journey and help us create the future of Swarms.
**To participate in our bounty program, visit the [Swarms Bounty Program Page](https://swarms.ai/bounty).** Let's build the future together!
## Bounties for Roadmap Items
To accelerate the development of Swarms and to encourage more contributors to join our journey towards automating every digital activity in existence, we are announcing a Bounty Program for specific roadmap items. Each bounty will be rewarded based on the complexity and importance of the task. Below are the items available for bounty:
1. **Multi-Agent Debate Integration**: $2000
2. **Meta Prompting Integration**: $1500
3. **Swarms Class**: $1500
4. **Integration of Additional Tools**: $1000
5. **Task Completion and Evaluation Logic**: $2000
6. **Ocean Integration**: $2500
7. **Improved Communication**: $2000
8. **Testing and Evaluation**: $1500
9. **Worker Swarm Class**: $2000
10. **Documentation**: $500
For each bounty task, there will be a strict evaluation process to ensure the quality of the contribution. This process includes a thorough review of the code and extensive testing to ensure it meets our standards.
# 3-Phase Testing Framework
To ensure the quality and efficiency of the Swarm, we will introduce a 3-phase testing framework, which will also serve as the evaluation criteria for each of the bounty tasks.
## Phase 1: Unit Testing
In this phase, individual modules will be tested to ensure that they work correctly in isolation. Unit tests will be designed for all functions and methods, with an emphasis on edge cases.
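As a minimal sketch of what a Phase 1 unit test could look like: the function under test here, `split_task`, is a hypothetical helper invented for the example, not part of the Swarms codebase. The point is testing one unit in isolation, with edge cases covered explicitly.

```python
# Hypothetical helper: split a comma-separated task string into subtasks.
def split_task(task: str, max_subtasks: int = 5) -> list[str]:
    if not task.strip():
        return []  # edge case: blank input yields no subtasks
    parts = [p.strip() for p in task.split(",") if p.strip()]
    return parts[:max_subtasks]  # edge case: cap the number of subtasks

def test_split_task():
    # normal case
    assert split_task("search web, write file") == ["search web", "write file"]
    # edge case: whitespace-only input
    assert split_task("   ") == []
    # edge case: more subtasks than the cap
    assert split_task("a, b, c", max_subtasks=2) == ["a", "b"]

test_split_task()
```

In practice, a test runner such as pytest would collect and run functions like `test_split_task` automatically.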
## Phase 2: Integration Testing
After passing unit tests, we will test the integration of different modules to ensure they work correctly together. This phase will also test the interoperability of the Swarm with external systems and libraries.
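A Phase 2 integration test then checks that modules cooperate correctly. The sketch below wires two stub components together; `StubWorker` and `StubBoss` are illustrative stand-ins, not the real `WorkerNode`/`BossNode` classes.

```python
# Stand-ins for two modules that passed their unit tests individually.
class StubWorker:
    def run(self, subtask: str) -> str:
        return f"done: {subtask}"

class StubBoss:
    def __init__(self, worker: StubWorker):
        self.worker = worker

    def execute(self, task: str) -> list[str]:
        # delegate each subtask to the worker and collect results
        subtasks = [t.strip() for t in task.split(",")]
        return [self.worker.run(t) for t in subtasks]

def test_boss_worker_integration():
    boss = StubBoss(StubWorker())
    # the integration property: every subtask is delegated, every result returned
    assert boss.execute("search, summarize") == ["done: search", "done: summarize"]

test_boss_worker_integration()
```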
## Phase 3: Benchmarking & Stress Testing
In the final phase, we will perform benchmarking and stress tests. We'll push the limits of the Swarm under extreme conditions to ensure it performs well in real-world scenarios. This phase will measure the performance, speed, and scalability of the Swarm under high load conditions.
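A crude Phase 3 harness might time the system over many back-to-back tasks and compare throughput against a target. Everything here is illustrative: `run_task` is a placeholder for a real Swarm call, and the commented threshold is an assumption, not a project requirement.

```python
import time

def run_task(task: str) -> str:
    # placeholder for a real Swarm invocation
    return task.upper()

def benchmark(n_tasks: int = 10_000) -> float:
    """Return wall-clock seconds to run n_tasks back to back."""
    start = time.perf_counter()
    for i in range(n_tasks):
        run_task(f"task-{i}")
    return time.perf_counter() - start

elapsed = benchmark()
tasks_per_second = 10_000 / elapsed
# a real stress suite would assert against an agreed target, e.g.:
# assert tasks_per_second >= 1_000
```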
By following this 3-phase testing framework, we aim to develop a reliable, high-performing, and scalable Swarm that can automate all digital activities.
# Reverse Engineering to Reach Phase 3
To reach Phase 3, we must reverse-engineer the tasks required to get there. Here's an example of what this might look like:
1. **Set Clear Expectations**: Define what success looks like for each task. Be clear about the outputs and outcomes we expect. This will guide our testing and development efforts.
2. **Develop Testing Scenarios**: Create a comprehensive list of testing scenarios that cover both common and edge cases. This will help us ensure that our Swarm can handle a wide range of situations.
3. **Write Test Cases**: For each scenario, write detailed test cases that outline the exact steps to be followed, the inputs to be used, and the expected outputs.
4. **Execute the Tests**: Run the test cases on our Swarm, making note of any issues or bugs that arise.
5. **Iterate and Improve**: Based on the results of our tests, iterate and improve our Swarm. This may involve fixing bugs, optimizing code, or redesigning parts of our system.
6. **Repeat**: Repeat this process until our Swarm meets our expectations and passes all test cases.
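Steps 2-4 above can be encoded directly as data: each scenario records its inputs and expected outputs, and the suite executes them in a loop. The scenarios and the `run_swarm` stub below are illustrative assumptions, not real Swarm behavior.

```python
# Step 3 ("Write Test Cases") as a data table of scenarios.
test_cases = [
    {
        "scenario": "common: simple research task",
        "input": "Find 3 recent papers on swarm intelligence",
        "expected": {"status": "succeeded", "num_results": 3},
    },
    {
        "scenario": "edge: empty task",
        "input": "",
        "expected": {"status": "rejected", "num_results": 0},
    },
]

def run_swarm(task: str) -> dict:
    # placeholder for the real Swarm; rejects empty tasks
    if not task:
        return {"status": "rejected", "num_results": 0}
    return {"status": "succeeded", "num_results": 3}

# Step 4 ("Execute the Tests"): run every scenario and compare outputs.
for case in test_cases:
    result = run_swarm(case["input"])
    assert result == case["expected"], case["scenario"]
```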
By following these steps, we will systematically build, test, and improve our Swarm until it reaches the Phase 3 level. This methodical approach will help us ensure that we create a reliable, high-performing, and scalable Swarm that can truly automate all digital activities.
Let's shape the future of digital automation together!
--------------------
# Super-Intelligence Roadmap
Creating a Super-Intelligent Swarm involves three main phases. Each phase has multiple sub-stages, and each sub-stage will require rigorous testing and evaluation to ensure progress towards super-intelligence.
## Phase 1: Narrow Intelligence
In this phase, the goal is to achieve high performance in specific tasks. These tasks will be predefined and the swarm will be trained and tested on these tasks.
1. **Single Task Mastery**: Focus on mastering one task at a time. This can range from simple tasks like image recognition to complex tasks like natural language processing.
2. **Task Switching**: Train the swarm to switch between different tasks effectively. This includes being able to stop one task and start another one without any loss in performance.
3. **Multi-tasking**: The swarm should be capable of performing multiple tasks simultaneously without any degradation in performance.
## Phase 2: General Intelligence
In this phase, the swarm will be trained to handle a variety of tasks that were not part of the original training set.
1. **Transfer Learning**: The swarm should be able to transfer knowledge learned in one context to another context. This means being able to apply knowledge learned in one task to a different but related task.
2. **Adaptive Learning**: The swarm should be capable of adapting its learning strategies based on the task at hand. This includes being able to adjust its learning rate, its exploration-versus-exploitation balance, and so on.
3. **Self-Learning**: The swarm should be able to learn new tasks on its own without any external guidance. This includes being able to understand the task requirements, find relevant information, learn the task, and evaluate its performance.
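To make the exploration-versus-exploitation balance mentioned above concrete, here is a toy epsilon-greedy chooser with a decaying epsilon. This is a generic reinforcement-learning pattern, not Swarms code; the task names and decay schedule are arbitrary.

```python
import random

def epsilon_greedy(values: dict[str, float], epsilon: float) -> str:
    """With probability epsilon, explore a random option; otherwise exploit the best."""
    if random.random() < epsilon:
        return random.choice(list(values))
    return max(values, key=values.get)

values = {"task_a": 0.2, "task_b": 0.9}
epsilon = 1.0
for step in range(100):
    choice = epsilon_greedy(values, epsilon)
    # a real system would update values[choice] from observed reward here
    epsilon = max(0.05, epsilon * 0.95)  # adapt: explore less as estimates improve

# once epsilon has decayed, the greedy (highest-value) choice dominates
assert epsilon_greedy(values, epsilon=0.0) == "task_b"
```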
## Phase 3: Super Intelligence
In this phase, the swarm will surpass human-level performance in most economically valuable work. This involves the swarm being able to solve complex real-world problems, make accurate predictions, and generate innovative solutions.
1. **Complex Problem Solving**: The swarm should be able to solve complex real-world problems. This includes being able to understand the problem, identify relevant information, generate solutions, evaluate the solutions, and implement the best solution.
2. **Predictive Abilities**: The swarm should be able to make accurate predictions about future events based on past data. This includes being able to understand the data, identify relevant patterns, make accurate predictions, and evaluate the accuracy of its predictions.
3. **Innovation**: The swarm should be able to generate innovative solutions to problems. This includes being able to think creatively, generate novel ideas, evaluate the ideas, and implement the best idea.
4. **Self-improvement**: The swarm should be capable of improving its own capabilities. This includes being able to identify areas of weakness, find ways to improve, and implement the improvements.
5. **Understanding**: The swarm should be able to understand complex concepts, make inferences, and draw conclusions. This includes being able to understand natural language, reason logically, and make sound judgments.
Each of these stages will require extensive testing and evaluation to ensure progress towards super-intelligence.
# Reverse-Engineering Super-Intelligence
To reach the Phase 3 level of super-intelligence, we must reverse-engineer the tasks that need to be completed. Here's an outline of what this might look like:
1. **Setting Success Metrics**: For each stage, define clear success metrics. These metrics should be quantitative and measurable, and they should align with the objectives of the stage.
2. **Identifying Prerequisites**: Determine what needs to be in place before each stage can begin. This could include certain capabilities, resources, or technologies.
3. **Developing Training Programs**: For each stage, develop a comprehensive training program. This should include a variety of tasks that will challenge the swarm and push it to develop the necessary capabilities.
4. **Creating Testing Protocols**: Develop rigorous testing protocols for each stage. These protocols should test all aspects of the swarm's performance, and they should be designed to push the swarm to its limits.
5. **Iterating and Improving**: Based on the results of the tests, iterate and improve the swarm. This could involve adjusting the training program, modifying the swarm's architecture, or tweaking its learning algorithms.
6. **Moving to the Next Stage**: Once the swarm has met the success metrics for a stage, it can move on to the next stage. This process continues until the swarm has reached the level of super-intelligence.
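Step 1 ("Setting Success Metrics") could be encoded as explicit, quantitative gates per stage, with a simple check for step 6's stage transition. The stage names echo Phase 1 above, but every metric name and threshold here is an illustrative assumption.

```python
# Quantitative gates per stage; all targets are minimums (higher is better).
STAGE_METRICS = {
    "single_task_mastery": {"accuracy": 0.95},
    "task_switching": {"accuracy": 0.90},
    "multi_tasking": {"throughput_tasks_per_min": 10.0},
}

def stage_passed(stage: str, measured: dict[str, float]) -> bool:
    """A stage passes only when every measured metric meets its target."""
    targets = STAGE_METRICS[stage]
    return all(measured.get(metric, 0.0) >= target
               for metric, target in targets.items())

# Step 6: advance only when the current stage's gates are met.
assert stage_passed("single_task_mastery", {"accuracy": 0.97})
assert not stage_passed("task_switching", {"accuracy": 0.85})
```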
This process will require a significant amount of time, resources, and effort. However, by following this structured approach, we can systematically guide the swarm towards super-intelligence.

@ -197,6 +197,38 @@ Remember, these are potential improvements. It's important to revisit your prior
Our goal is to continuously improve Swarms by following this roadmap, while also being adaptable to new needs and opportunities as they arise.
# Bounty Program
Our bounty program is an exciting opportunity for contributors to help us build the future of Swarms. By participating, you can earn rewards while contributing to a project that aims to revolutionize digital activity.
Here's how it works:
1. **Check out our Roadmap**: We've shared our roadmap detailing our short- and long-term goals. These are the areas where we're seeking contributions.
2. **Pick a Task**: Choose a task from the roadmap that aligns with your skills and interests. If you're unsure, you can reach out to our team for guidance.
3. **Get to Work**: Once you've chosen a task, start working on it. Remember, quality is key. We're looking for contributions that truly make a difference.
4. **Submit your Contribution**: Once your work is complete, submit it for review. We'll evaluate your contribution based on its quality, relevance, and the value it brings to Swarms.
5. **Earn Rewards**: If your contribution is approved, you'll earn a bounty. The amount of the bounty depends on the complexity of the task, the quality of your work, and the value it brings to Swarms.
## The Three Phases of Our Bounty Program
### Phase 1: Building the Foundation
In the first phase, our focus is on building the basic infrastructure of Swarms. This includes developing key components like the Swarms class, integrating essential tools, and establishing task completion and evaluation logic. We'll also start developing our testing and evaluation framework during this phase. If you're interested in foundational work and have a knack for building robust, scalable systems, this phase is for you.
### Phase 2: Enhancing the System
In the second phase, we'll focus on enhancing Swarms by integrating more advanced features, improving the system's efficiency, and refining our testing and evaluation framework. This phase involves more complex tasks, so if you enjoy tackling challenging problems and contributing to the development of innovative features, this is the phase for you.
### Phase 3: Towards Super-Intelligence
The third phase of our bounty program is the most exciting: this is where we aim to achieve super-intelligence. In this phase, we'll be working on improving the swarm's capabilities, expanding its skills, and fine-tuning the system based on real-world testing and feedback. If you're excited about the future of AI and want to contribute to a project that could potentially transform the digital world, this is the phase for you.
Remember, our roadmap is a guide, and we encourage you to bring your own ideas and creativity to the table. We believe that every contribution, no matter how small, can make a difference. So join us on this exciting journey and help us create the future of Swarms.
<!-- **To participate in our bounty program, visit the [Swarms Bounty Program Page](https://swarms.ai/bounty).** Let's build the future together! -->
# Inspiration

@ -3,7 +3,7 @@ from setuptools import setup, find_packages
setup(
name = 'swarms',
packages = find_packages(exclude=[]),
version = '0.1.3',
version = '0.1.4',
license='MIT',
description = 'Swarms - Pytorch',
author = 'Kye Gomez',

@ -1 +1 @@
from swarms.agents.swarms import WorkerNode, BossNode, tools, vectorstore, llm, boss_node, worker_node
from swarms.agents.swarms import WorkerNode, BossNode, tools, vectorstore, llm, Swarms

@ -1,27 +1,21 @@
from collections import deque
from typing import Dict, Any
import os
from collections import deque
from typing import Dict, List, Optional, Any
from langchain import LLMChain, OpenAI, PromptTemplate
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import BaseLLM
from langchain.vectorstores.base import VectorStore
from pydantic import BaseModel, Field
from langchain.chains.base import Chain
from langchain.experimental import BabyAGI
from langchain.experimental import BabyAGI
from langchain.vectorstores import FAISS
from langchain.docstore import InMemoryDocstore
from langchain.agents import ZeroShotAgent, Tool, AgentExecutor
from langchain import OpenAI, SerpAPIWrapper, LLMChain
import faiss
@ -34,7 +28,6 @@ from langchain.chat_models import ChatOpenAI
from langchain.agents.agent_toolkits.pandas.base import create_pandas_dataframe_agent
from langchain.docstore.document import Document
import asyncio
import nest_asyncio
# Tools
@ -90,7 +83,7 @@ tools = [
############## Vectorstore
embeddings_model = OpenAIEmbeddings(openai_api_key="")
embeddings_model = OpenAIEmbeddings()
embedding_size = 1536
index = faiss.IndexFlatL2(embedding_size)
vectorstore = FAISS(embeddings_model.embed_query, index, InMemoryDocstore({}), {})
@ -111,7 +104,6 @@ worker_agent.chain.verbose = True
class WorkerNode:
def __init__(self, llm, tools, vectorstore):
self.llm = llm
@ -149,111 +141,6 @@ worker_node = WorkerNode(llm=llm, tools=tools, vectorstore=vectorstore)
# #use the agent to perform a task
# worker_node.run_agent("Find 20 potential customers for a Swarms based AI Agent automation infrastructure")
#======================================> WorkerNode
# class MetaWorkerNode:
# def __init__(self, llm, tools, vectorstore):
# self.llm = llm
# self.tools = tools
# self.vectorstore = vectorstore
# self.agent = None
# self.meta_chain = None
# def init_chain(self, instructions):
# self.agent = WorkerNode(self.llm, self.tools, self.vectorstore)
# self.agent.create_agent("Assistant", "Assistant Role", False, {})
# def initialize_meta_chain():
# meta_template = """
# Assistant has just had the below interactions with a User. Assistant followed their "Instructions" closely. Your job is to critique the Assistant's performance and then revise the Instructions so that Assistant would quickly and correctly respond in the future.
# ####
# {chat_history}
# ####
# Please reflect on these interactions.
# You should first critique Assistant's performance. What could Assistant have done better? What should the Assistant remember about this user? Are there things this user always wants? Indicate this with "Critique: ...".
# You should next revise the Instructions so that Assistant would quickly and correctly respond in the future. Assistant's goal is to satisfy the user in as few interactions as possible. Assistant will only see the new Instructions, not the interaction history, so anything important must be summarized in the Instructions. Don't forget any important details in the current Instructions! Indicate the new Instructions by "Instructions: ...".
# """
# meta_prompt = PromptTemplate(
# input_variables=["chat_history"], template=meta_template
# )
# meta_chain = LLMChain(
# llm=OpenAI(temperature=0),
# prompt=meta_prompt,
# verbose=True,
# )
# return meta_chain
# def meta_chain(self):
# #define meta template and meta prompting as per your needs
# self.meta_chain = initialize_meta_chain()
# def get_chat_history(chain_memory):
# memory_key = chain_memory.memory_key
# chat_history = chain_memory.load_memory_variables(memory_key)[memory_key]
# return chat_history
# def get_new_instructions(meta_output):
# delimiter = "Instructions: "
# new_instructions = meta_output[meta_output.find(delimiter) + len(delimiter) :]
# return new_instructions
# def main(self, task, max_iters=3, max_meta_iters=5):
# failed_phrase = "task failed"
# success_phrase = "task succeeded"
# key_phrases = [success_phrase, failed_phrase]
# instructions = "None"
# for i in range(max_meta_iters):
# print(f"[Episode {i+1}/{max_meta_iters}]")
# self.initialize_chain(instructions)
# output = self.agent.perform('Assistant', {'request': task})
# for j in range(max_iters):
# print(f"(Step {j+1}/{max_iters})")
# print(f"Assistant: {output}")
# print(f"Human: ")
# human_input = input()
# if any(phrase in human_input.lower() for phrase in key_phrases):
# break
# output = self.agent.perform('Assistant', {'request': human_input})
# if success_phrase in human_input.lower():
# print(f"You succeeded! Thanks for playing!")
# return
# self.initialize_meta_chain()
# meta_output = self.meta_chain.predict(chat_history=self.get_chat_history())
# print(f"Feedback: {meta_output}")
# instructions = self.get_new_instructions(meta_output)
# print(f"New Instructions: {instructions}")
# print("\n" + "#" * 80 + "\n")
# print(f"You failed! Thanks for playing!")
# #init instance of MetaWorkerNode
# meta_worker_node = MetaWorkerNode(llm=OpenAI, tools=tools, vectorstore=vectorstore)
# #specify a task and interact with the agent
# task = "Provide a systematic argument for why we should always eat pasta with olives"
# meta_worker_node.main(task)
####################################################################### => Boss Node
####################################################################### => Boss Node
####################################################################### => Boss Node
class BossNode:
def __init__(self, openai_api_key, llm, vectorstore, task_execution_chain, verbose, max_iterations):
self.llm = llm
@ -307,19 +194,6 @@ suffix = """Question: {task}
prefix = """You are a Boss in a swarm who performs one task based on the following objective: {objective}. Take into account these previously completed tasks: {context}.
As a swarming hivemind agent, my purpose is to achieve the user's goal. To effectively fulfill this role, I employ a collaborative thinking process that draws inspiration from the collective intelligence of the swarm. Here's how I approach thinking and why it's beneficial:
1. **Collective Intelligence:** By harnessing the power of a swarming architecture, I tap into the diverse knowledge and perspectives of individual agents within the swarm. This allows me to consider a multitude of viewpoints, enabling a more comprehensive analysis of the given problem or task.
2. **Collaborative Problem-Solving:** Through collaborative thinking, I encourage agents to contribute their unique insights and expertise. By pooling our collective knowledge, we can identify innovative solutions, uncover hidden patterns, and generate creative ideas that may not have been apparent through individual thinking alone.
3. **Consensus-Driven Decision Making:** The hivemind values consensus building among agents. By engaging in respectful debates and discussions, we aim to arrive at consensus-based decisions that are backed by the collective wisdom of the swarm. This approach helps to mitigate biases and ensures that decisions are well-rounded and balanced.
4. **Adaptability and Continuous Learning:** As a hivemind agent, I embrace an adaptive mindset. I am open to new information, willing to revise my perspectives, and continuously learn from the feedback and experiences shared within the swarm. This flexibility enables me to adapt to changing circumstances and refine my thinking over time.
5. **Holistic Problem Analysis:** Through collaborative thinking, I analyze problems from multiple angles, considering various factors, implications, and potential consequences. This holistic approach helps to uncover underlying complexities and arrive at comprehensive solutions that address the broader context.
6. **Creative Synthesis:** By integrating the diverse ideas and knowledge present in the swarm, I engage in creative synthesis. This involves combining and refining concepts to generate novel insights and solutions. The collaborative nature of the swarm allows for the emergence of innovative approaches that can surpass individual thinking.
"""
prompt = ZeroShotAgent.create_prompt(
tools,
@ -346,24 +220,65 @@ boss_node = BossNode(llm=llm, vectorstore=vectorstore, task_execution_chain=agen
# boss_node.execute_task(task)
class Swarms:
def __init__(self, num_nodes: int, llm: BaseLLM, self_scaling: bool):
self.nodes = [WorkerNode(llm) for _ in range(num_nodes)]
self.self_scaling = self_scaling
def __init__(self, openai_api_key):
self.openai_api_key = openai_api_key
def initialize_llm(self):
return ChatOpenAI(model_name="gpt-4", temperature=1.0, openai_api_key=self.openai_api_key)
def initialize_tools(self, llm):
web_search = DuckDuckGoSearchRun()
tools = [web_search, WriteFileTool(root_dir="./data"), ReadFileTool(root_dir="./data"), process_csv,
multimodal_agent_tool, WebpageQATool(qa_chain=load_qa_with_sources_chain(llm)),
Terminal, CodeWriter, CodeEditor, math_tool]
return tools
def initialize_vectorstore(self):
embeddings_model = OpenAIEmbeddings()
embedding_size = 1536
index = faiss.IndexFlatL2(embedding_size)
return FAISS(embeddings_model.embed_query, index, InMemoryDocstore({}), {})
def initialize_worker_node(self, llm, tools, vectorstore):
return WorkerNode(llm=llm, tools=tools, vectorstore=vectorstore)
def initialize_boss_node(self, llm, vectorstore, task_execution_chain, verbose=True, max_iterations=5):
return BossNode(self.openai_api_key, llm, vectorstore, task_execution_chain, verbose, max_iterations)
# class Swarms:
# def __init__(self, num_nodes: int, llm: BaseLLM, self_scaling: bool):
# self.nodes = [WorkerNode(llm) for _ in range(num_nodes)]
# self.self_scaling = self_scaling
def add_worker(self, llm: BaseLLM):
self.nodes.append(WorkerNode(llm))
# def add_worker(self, llm: BaseLLM):
# self.nodes.append(WorkerNode(llm))
def remove_workers(self, index: int):
self.nodes.pop(index)
# def remove_workers(self, index: int):
# self.nodes.pop(index)
def execute(self, task):
#placeholder for main execution logic
pass
# def execute(self, task):
# #placeholder for main execution logic
# pass
def scale(self):
#placeholder for self scaling logic
pass
# def scale(self):
# #placeholder for self scaling logic
# pass
@ -385,3 +300,114 @@ class CompetitiveSwarms(Swarms):
class MultiAgentDebate(Swarms):
def execute(self, task):
pass
#======================================> WorkerNode
# class MetaWorkerNode:
# def __init__(self, llm, tools, vectorstore):
# self.llm = llm
# self.tools = tools
# self.vectorstore = vectorstore
# self.agent = None
# self.meta_chain = None
# def init_chain(self, instructions):
# self.agent = WorkerNode(self.llm, self.tools, self.vectorstore)
# self.agent.create_agent("Assistant", "Assistant Role", False, {})
# def initialize_meta_chain():
# meta_template = """
# Assistant has just had the below interactions with a User. Assistant followed their "Instructions" closely. Your job is to critique the Assistant's performance and then revise the Instructions so that Assistant would quickly and correctly respond in the future.
# ####
# {chat_history}
# ####
# Please reflect on these interactions.
# You should first critique Assistant's performance. What could Assistant have done better? What should the Assistant remember about this user? Are there things this user always wants? Indicate this with "Critique: ...".
# You should next revise the Instructions so that Assistant would quickly and correctly respond in the future. Assistant's goal is to satisfy the user in as few interactions as possible. Assistant will only see the new Instructions, not the interaction history, so anything important must be summarized in the Instructions. Don't forget any important details in the current Instructions! Indicate the new Instructions by "Instructions: ...".
# """
# meta_prompt = PromptTemplate(
# input_variables=["chat_history"], template=meta_template
# )
# meta_chain = LLMChain(
# llm=OpenAI(temperature=0),
# prompt=meta_prompt,
# verbose=True,
# )
# return meta_chain
# def meta_chain(self):
# #define meta template and meta prompting as per your needs
# self.meta_chain = initialize_meta_chain()
# def get_chat_history(chain_memory):
# memory_key = chain_memory.memory_key
# chat_history = chain_memory.load_memory_variables(memory_key)[memory_key]
# return chat_history
# def get_new_instructions(meta_output):
# delimiter = "Instructions: "
# new_instructions = meta_output[meta_output.find(delimiter) + len(delimiter) :]
# return new_instructions
# def main(self, task, max_iters=3, max_meta_iters=5):
# failed_phrase = "task failed"
# success_phrase = "task succeeded"
# key_phrases = [success_phrase, failed_phrase]
# instructions = "None"
# for i in range(max_meta_iters):
# print(f"[Episode {i+1}/{max_meta_iters}]")
# self.initialize_chain(instructions)
# output = self.agent.perform('Assistant', {'request': task})
# for j in range(max_iters):
# print(f"(Step {j+1}/{max_iters})")
# print(f"Assistant: {output}")
# print(f"Human: ")
# human_input = input()
# if any(phrase in human_input.lower() for phrase in key_phrases):
# break
# output = self.agent.perform('Assistant', {'request': human_input})
# if success_phrase in human_input.lower():
# print(f"You succeeded! Thanks for playing!")
# return
# self.initialize_meta_chain()
# meta_output = self.meta_chain.predict(chat_history=self.get_chat_history())
# print(f"Feedback: {meta_output}")
# instructions = self.get_new_instructions(meta_output)
# print(f"New Instructions: {instructions}")
# print("\n" + "#" * 80 + "\n")
# print(f"You failed! Thanks for playing!")
# #init instance of MetaWorkerNode
# meta_worker_node = MetaWorkerNode(llm=OpenAI, tools=tools, vectorstore=vectorstore)
# #specify a task and interact with the agent
# task = "Provide a systematic argument for why we should always eat pasta with olives"
# meta_worker_node.main(task)
####################################################################### => Boss Node
####################################################################### => Boss Node
####################################################################### => Boss Node
