@ -0,0 +1,101 @@
# Flywheel Effect for Developer Acquisition and Incentivization

As with the sales model, the developer acquisition and incentivization model also relies on a flywheel effect. This effect is particularly potent in a community-driven ecosystem such as ours, where the value proposition continually grows as more developers join and contribute to our projects. Here's how we could apply this approach:

## Step 1: Initial Value Proposition for Developers

The starting point of the flywheel is to provide an attractive value proposition for developers. This could include:

- The ability to work on cutting-edge technology (Swarms, in this case).
- The opportunity to contribute to a community-driven, open-source project.
- The chance to learn from and collaborate with a global network of highly skilled developers.
- An incentivization structure that rewards contributions (more on this later).

## Step 2: Developer Acquisition

With the initial value proposition in place, we can move on to the actual acquisition of developers. This could be accomplished through:

- Active recruitment from online developer communities.
- Referral programs that incentivize current contributors to bring in new developers.
- Partnerships with universities, boot camps, and other institutions to attract budding developers.

## Step 3: Collaboration and Learning

Once developers join our ecosystem, they become part of a collaborative community where they can learn from each other, improve their skills, and work on exciting and meaningful projects. This, in turn, attracts more developers, adding momentum to the flywheel.
## Step 4: Recognizing and Rewarding Contributions

To keep the flywheel spinning, it's crucial to recognize and reward the contributions made by developers. This can be done in various ways:

- Monetary rewards: Developers can be paid based on the value their contributions bring to the project. This could be determined through various metrics, such as the complexity of their contributions, the impact on the project, or the amount of their code that gets used in production.

- Reputation and recognition: The open-source nature of our project means that all contributions are public and can be used by developers to build their professional profiles. Contributors could also be highlighted on our website, in our communications, and at community events.

- Career advancement: Developers who consistently make valuable contributions could be offered positions of leadership within the project, such as becoming maintainers or joining a steering committee.

- Agora Tokens: We could create a system of tokens that are earned based on contributions. These tokens could be exchanged for various benefits, such as access to exclusive events, special training, or even physical goods. (A toy sketch of how such a ledger might work follows this list.)
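To make these metrics concrete, here is a deliberately toy sketch of a contribution-scoring token ledger. Everything in it (the `Contribution` fields, the weights, and the scoring formula) is a hypothetical illustration, not a committed reward design:

```python
from dataclasses import dataclass

# Hypothetical weights, for illustration only; a real scheme would be
# set by project governance and revised over time.
WEIGHTS = {"complexity": 2.0, "impact": 3.0, "lines_in_production": 0.01}

@dataclass
class Contribution:
    author: str
    complexity: float         # e.g. a reviewer-assigned 1-5 score
    impact: float             # e.g. a reviewer-assigned 1-5 score
    lines_in_production: int  # lines of the change still live in production

def token_award(c: Contribution) -> float:
    """Convert one contribution into Agora tokens via the weighted metrics."""
    return (WEIGHTS["complexity"] * c.complexity
            + WEIGHTS["impact"] * c.impact
            + WEIGHTS["lines_in_production"] * c.lines_in_production)

ledger: dict[str, float] = {}

def record(c: Contribution) -> None:
    """Credit the contributor's running balance in the token ledger."""
    ledger[c.author] = ledger.get(c.author, 0.0) + token_award(c)

record(Contribution(author="alice", complexity=4, impact=5, lines_in_production=1200))
print(ledger)  # {'alice': 35.0}
```

In practice, the weights and the metrics themselves would be debated and revised by the community as the project matures.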
## Step 5: Scaling the Flywheel

With the flywheel in motion, the next step is to scale. As our community grows and our technology improves, we can attract more developers and create more value. This leads to a virtuous cycle of growth, where each new developer adds to the attractiveness of our project, which in turn brings in more developers.

In essence, this flywheel approach is about creating a community where everyone benefits from each other's contributions. The more value a developer adds, the more they are rewarded. The more developers contribute, the more value is created, attracting even more developers.

Such a model not only aligns with our values of openness, collaboration, and shared success, but it also gives us a sustainable and scalable method for growing our developer community. It makes Agora not just a place to work, but also a place to learn, grow, and be recognized for one's contributions. This is a powerful way to ensure that we can continue to advance our technology and make a significant impact on the world.
# Risks and Mitigations

The open-source engineering freelancer model brings with it its own set of potential risks and challenges. Here's an exploration of some of these, along with strategies for mitigation:

**1. Quality Control:** When dealing with a wide network of freelance contributors, ensuring a consistent standard of quality across all contributions can be challenging. This can be mitigated by implementing rigorous review processes and standards, establishing an automated testing infrastructure, and fostering a culture of quality among contributors. Providing clear contribution guidelines, code style guides, and other resources can help freelancers understand what's expected of them. We can also invest in educational resources, for example by sponsoring creators like Yannic, producing our own courses, and even building techno-monasteries where young people can come and do research for free.

**2. Security Risks:** Open-source projects can be susceptible to malicious contributors, who might introduce vulnerabilities into the codebase. To mitigate this, rigorous code review processes should be in place. Additionally, adopting a "trust but verify" approach, leveraging automated security scanning tools, and conducting periodic security audits can be beneficial.

**3. Intellectual Property Issues:** Open-source projects can face risks around intellectual property, such as contributors introducing code that infringes on someone else's copyrights. A clear Contributor License Agreement (CLA) should be in place, which contributors need to agree to before their contributions can be accepted. This helps protect the project and its users from potential legal issues.

**4. Loss of Core Focus:** With numerous contributors focusing on different aspects of the project, there can be a risk of losing sight of the project's core objectives. Maintaining a clear roadmap, having a strong leadership team, and ensuring open and regular communication can help keep the project focused.

**5. Contributor Burnout:** Freelancers contributing in their free time might face burnout, especially if they feel their contributions aren't being recognized or rewarded. To mitigate this, create a supportive environment where contributors' efforts are acknowledged and rewarded. This might include monetary rewards, but can also include non-monetary rewards like public recognition, advancement opportunities within the project, and so on.

**6. Fragmentation:** In open-source projects, there is a risk of fragmentation, where different contributors or groups of contributors might want to take the project in different directions. Strong project governance, a clear roadmap, and open, transparent decision-making processes can help mitigate this risk.

**7. Dependency on Key Individuals:** If key parts of the project are understood and maintained by only a single contributor, there is a risk if that individual decides to leave or is unable to contribute for some reason. This can be mitigated by ensuring knowledge is shared and responsibilities are spread among multiple contributors.

Overall, these risks can be managed with proper planning, clear communication, and the implementation of good governance and security practices. It's essential to approach the open-source model with a clear understanding of these potential pitfalls and a plan to address them.
## Plan to Gain Open Source Developers for SWARMS

Attracting and retaining open-source developers is a challenge that requires a strategic approach. This plan emphasizes delivering value to the developers as well as providing recognition, community, and financial incentives.

### Step 1: Foster an Engaging and Inclusive Community

The first step is to foster an engaging and inclusive open-source community around SWARMS. This community should be a place where developers feel welcome and excited to contribute. Regular community events (both online and offline), engaging content, and a supportive environment can help attract and retain developers.

### Step 2: Provide Clear Contribution Guidelines

Providing clear and comprehensive contribution guidelines will make it easier for developers to get started. These guidelines should cover the basics of how to set up the development environment, how to submit changes, and how the code review process works.

### Step 3: Offer Educational Resources and Training

Providing training and educational resources can help developers grow their skills and contribute more effectively. These resources could include tutorials, webinars, workshops, documentation, and more.
### Step 4: Establish a Recognition and Reward System

Recognize and reward the contributions of developers. This could involve public recognition, like featuring contributors on the SWARMS website, as well as financial incentives. Implementing a system where developers earn a share of the revenue from SWARMS based on their contributions can be a strong motivator; a minimal sketch of such a split follows.
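As an illustration only (the function, the point values, and the proportional-split rule are assumptions, not a committed payout policy), a contribution-weighted revenue split could look like this:

```python
def revenue_shares(revenue_pool: float, points: dict[str, float]) -> dict[str, float]:
    """Split a revenue pool proportionally to each developer's contribution points."""
    total = sum(points.values())
    if total == 0:
        return {dev: 0.0 for dev in points}
    return {dev: revenue_pool * p / total for dev, p in points.items()}

# Example: a $10,000 pool split across three contributors.
print(revenue_shares(10_000, {"alice": 50, "bob": 30, "carol": 20}))
# {'alice': 5000.0, 'bob': 3000.0, 'carol': 2000.0}
```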
### Step 5: Implement a Strong Support System

Offer strong technical support to developers. This could include dedicated channels for developers to ask questions, request feedback, and share their progress. Having core team members available to provide assistance and mentorship can be hugely beneficial.

### Step 6: Regularly Solicit and Incorporate Feedback

Regularly ask for feedback from developers and incorporate their suggestions into future developments. This shows developers that their opinions are valued and can lead to improvements in SWARMS.

## Flywheel for Gaining More Open Source Developers

Now let's look at the flywheel effect that can result from this plan. The idea of the flywheel is that each part of the process feeds into the next, creating a cycle of growth that becomes self-sustaining over time.

1. We build an engaging and supportive community around SWARMS.
2. This community attracts more developers who are interested in contributing to SWARMS.
3. As more developers contribute, the quality and scope of SWARMS improve, making it more attractive to potential users.
4. As SWARMS gains more users, the potential revenue from SWARMS increases, allowing for larger rewards to be distributed to developers.
5. The prospect of these rewards attracts even more developers to the SWARMS community.
6. The cycle repeats, with each iteration attracting more developers, improving SWARMS, increasing its user base, and raising potential rewards.

Through this plan and the resulting flywheel effect, we can attract a strong, committed team of open-source developers to build SWARMS and make it the best it can be.
@ -0,0 +1,101 @@
# The Swarms Flywheel

1. **Building a Supportive Community:** Initiate by establishing an engaging and inclusive open-source community for both developers and sales freelancers around Swarms. Regular online meetups, webinars, tutorials, and sales training can make them feel welcome and encourage contributions and sales efforts.
2. **Increased Contributions and Sales Efforts:** The more engaged the community, the more developers will contribute to Swarms and the more effort sales freelancers will put into selling Swarms.
3. **Improvement in Quality and Market Reach:** More developer contributions mean better quality, reliability, and feature offerings from Swarms. Simultaneously, increased sales efforts from freelancers boost Swarms' market penetration and visibility.
4. **Rise in User Base:** As Swarms becomes more robust and better known, the user base grows, driving more revenue.
5. **Greater Financial Incentives:** Increased revenue can be redirected to offer more significant financial incentives to both developers and salespeople. Developers can be incentivized based on their contribution to Swarms, and salespeople can be rewarded with higher commissions.
6. **Attract More Developers and Salespeople:** These financial incentives, coupled with the recognition and experience from participating in a successful project, attract more developers and salespeople to the community.
7. **Wider Adoption of Swarms:** An ever-improving product, a growing user base, and an increasing number of passionate salespeople accelerate the adoption of Swarms.
8. **Return to Step 1:** As the community, user base, and sales network continue to grow, the cycle repeats, each time speeding up the flywheel.
```markdown
+---------------------+
|    Building a       |
|    Supportive       | <--+
|    Community        |    |
+---------+-----------+    |
          |                |
          v                |
+---------+-----------+    |
|     Increased       |    |
|   Contributions &   |    |
|    Sales Efforts    |    |
+---------+-----------+    |
          |                |
          v                |
+---------+-----------+    |
|   Improvement in    |    |
|   Quality & Market  |    |
|        Reach        |    |
+---------+-----------+    |
          |                |
          v                |
+---------+-----------+    |
|    Rise in User     |    |
|        Base         |    |
+---------+-----------+    |
          |                |
          v                |
+---------+-----------+    |
|  Greater Financial  |    |
|     Incentives      |    |
+---------+-----------+    |
          |                |
          v                |
+---------+-----------+    |
|    Attract More     |    |
|    Developers &     |    |
|     Salespeople     |    |
+---------+-----------+    |
          |                |
          v                |
+---------+-----------+    |
|  Wider Adoption of  |    |
|       Swarms        |----+
+---------------------+
```

# Potential Risks and Mitigations

1. **Insufficient Contributions or Quality of Work**: Open-source efforts rely on individuals being willing and able to spend time contributing. If not enough people participate, or the work they produce is of poor quality, product development could stall.
   * **Mitigation**: Create a robust community with clear guidelines, support, and resources. Provide incentives for quality contributions, such as a reputation system, swag, or financial rewards. Conduct thorough code reviews to ensure the quality of contributions.

2. **Lack of Sales Results**: Commission-based salespeople will only continue to sell the product if they're successful. If they aren't making enough sales, they may lose motivation and cease their efforts.
   * **Mitigation**: Provide adequate sales training and resources. Ensure the product-market fit is strong, and adjust messaging or sales tactics as necessary. Consider implementing a minimum commission or base pay to reduce risk for salespeople.

3. **Poor User Experience or User Adoption**: If users don't find the product useful or easy to use, they won't adopt it, and the user base won't grow. This could also discourage salespeople and contributors.
   * **Mitigation**: Prioritize user experience in the product development process. Regularly gather and incorporate user feedback. Ensure robust user support is in place.

4. **Inadequate Financial Incentives**: If the financial rewards don't justify the time and effort contributors and salespeople are putting in, they will likely disengage.
   * **Mitigation**: Regularly review and adjust financial incentives as needed. Ensure that the method for calculating and distributing rewards is transparent and fair.

5. **Security and Compliance Risks**: As the user base grows and the software becomes more complex, the risk of security issues increases. Moreover, as contributors from various regions join, compliance with various international laws could become an issue.
   * **Mitigation**: Establish strong security practices from the start. Regularly conduct security audits. Seek legal counsel to understand and adhere to international laws and regulations.

## Activation Plan for the Flywheel

1. **Community Building**: Begin by fostering a supportive community around Swarms. Encourage early adopters to contribute and provide feedback. Create comprehensive documentation, community guidelines, and a forum for discussion and support.

2. **Sales and Development Training**: Provide resources and training for salespeople and developers. Make sure they understand the product, its value, and how to effectively contribute or sell.

3. **Increase Contributions and Sales Efforts**: Encourage increased participation by highlighting successful contributions and sales, rewarding top contributors and salespeople, and regularly communicating about the project's progress and impact.

4. **Iterate and Improve**: Continually gather and implement feedback to improve Swarms and its market reach. The better the product and its alignment with the market, the more the user base will grow.

5. **Expand User Base**: As the product improves and sales efforts continue, the user base should grow. Ensure you have the infrastructure to support this growth and maintain a positive user experience.

6. **Increase Financial Incentives**: As the user base and product grow, so too should the financial incentives. Make sure rewards continue to be competitive and attractive.

7. **Attract More Contributors and Salespeople**: As the financial incentives and success of the product increase, this should attract more contributors and salespeople, further feeding the flywheel.

Throughout this process, it's important to regularly reassess and adjust your strategy as necessary. Stay flexible and responsive to changes in the market, user feedback, and the evolving needs of the community.
@ -0,0 +1,386 @@
# class Swarms:
#     def __init__(self, openai_api_key):
#         self.openai_api_key = openai_api_key

#     def initialize_llm(self, llm_class, temperature=0.5):
#         # Initialize the language model
#         return llm_class(openai_api_key=self.openai_api_key, temperature=temperature)

#     def initialize_tools(self, llm_class):
#         llm = self.initialize_llm(llm_class)
#         # Initialize tools
#         web_search = DuckDuckGoSearchRun()
#         tools = [
#             web_search,
#             WriteFileTool(root_dir=ROOT_DIR),
#             ReadFileTool(root_dir=ROOT_DIR),
#             process_csv,
#             WebpageQATool(qa_chain=load_qa_with_sources_chain(llm)),
#             # RequestsGet()
#             Tool(name="RequestsGet", func=RequestsGet.get, description="A portal to the internet. Use this when you need to get specific content from a website. Input should be a URL (i.e. https://www.google.com). The output will be the text response of the GET request."),
#             # CodeEditor,
#             # Terminal,
#             # RequestsGet,
#             # ExitConversation
#             # code editor + terminal editor + visual agent
#             # Give the worker node itself as a tool
#         ]
#         assert tools is not None, "tools is not initialized"
#         return tools

#     def initialize_vectorstore(self):
#         # Initialize the vector store
#         embeddings_model = OpenAIEmbeddings(openai_api_key=self.openai_api_key)
#         embedding_size = 1536
#         index = faiss.IndexFlatL2(embedding_size)
#         return FAISS(embeddings_model.embed_query, index, InMemoryDocstore({}), {})

#     def initialize_worker_node(self, worker_tools, vectorstore):
#         # Initialize the worker node
#         llm = self.initialize_llm(ChatOpenAI)
#         worker_node = WorkerNode(llm=llm, tools=worker_tools, vectorstore=vectorstore)
#         worker_node.create_agent(ai_name="Swarm Worker AI Assistant", ai_role="Assistant", human_in_the_loop=False, search_kwargs={})
#         worker_node_tool = Tool(name="WorkerNode AI Agent", func=worker_node.run, description="Input: an objective with a todo list for that objective. Output: your task completed. Please be very clear about what the objective and task instructions are. The Swarm worker agent is useful when you need to spawn an autonomous agent instance as a worker to accomplish complex tasks; it can search the internet, write code, or spawn child multi-modality models to process and generate images, text, audio, and so on.")
#         return worker_node_tool

#     def initialize_boss_node(self, vectorstore, worker_node):
#         # Initialize the boss node
#         llm = self.initialize_llm(OpenAI)
#         todo_prompt = PromptTemplate.from_template("You are a boss planner in a swarm who is an expert at coming up with a todo list for a given objective and then creating a worker to help you accomplish your task. Come up with a todo list for this objective: {objective} and then spawn a worker agent to complete the task for you. Always spawn a worker agent after creating a plan and pass the objective and plan to the worker agent.")
#         todo_chain = LLMChain(llm=llm, prompt=todo_prompt)
#         tools = [
#             Tool(name="TODO", func=todo_chain.run, description="Useful for when you need to come up with todo lists. Input: an objective to create a todo list for. Output: a todo list for that objective. Please be very clear what the objective is!"),
#             worker_node
#         ]
#         suffix = """Question: {task}\n{agent_scratchpad}"""
#         prefix = """You are a Boss in a swarm who performs one task based on the following objective: {objective}. Take into account these previously completed tasks: {context}.\n """
#         prompt = ZeroShotAgent.create_prompt(tools, prefix=prefix, suffix=suffix, input_variables=["objective", "task", "context", "agent_scratchpad"],)
#         llm_chain = LLMChain(llm=llm, prompt=prompt)
#         agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=[tool.name for tool in tools])
#         agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)
#         return BossNode(llm, vectorstore, agent_executor, max_iterations=5)

#     def run_swarms(self, objective, run_as=None):
#         try:
#             # Run the swarm with the given objective
#             worker_tools = self.initialize_tools(OpenAI)
#             assert worker_tools is not None, "worker_tools is not initialized"

#             vectorstore = self.initialize_vectorstore()
#             worker_node = self.initialize_worker_node(worker_tools, vectorstore)

#             # Guard against run_as being None (its default) before lowercasing it.
#             if run_as and run_as.lower() == 'worker':
#                 tool_input = {'prompt': objective}
#                 return worker_node.run(tool_input)
#             else:
#                 boss_node = self.initialize_boss_node(vectorstore, worker_node)
#                 task = boss_node.create_task(objective)
#                 return boss_node.execute_task(task)
#         except Exception as e:
#             logging.error(f"An error occurred in run_swarms: {e}")
#             raise
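# Flow of the commented-out run_swarms above, kept here for reference:
#   initialize_tools -> initialize_vectorstore -> initialize_worker_node
#   -> either run the worker directly, or wrap it in a boss node that plans
#      a todo list and delegates the objective to the worker.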

# omni agent ===> working
# class Swarms:
#     def __init__(self,
#                  openai_api_key,
#                  # omni_api_key=None,
#                  # omni_api_endpoint=None,
#                  # omni_api_type=None
#                  ):
#         self.openai_api_key = openai_api_key
#         # self.omni_api_key = omni_api_key
#         # self.omni_api_endpoint = omni_api_endpoint
#         # self.omni_api_type = omni_api_type

#         # if omni_api_key and omni_api_endpoint and omni_api_type:
#         #     self.omni_worker_agent = OmniWorkerAgent(omni_api_key, omni_api_endpoint, omni_api_type)
#         # else:
#         #     self.omni_worker_agent = None

#     def initialize_llm(self):
#         # Initialize the language model
#         return ChatOpenAI(model_name="gpt-4", temperature=1.0, openai_api_key=self.openai_api_key)

#     def initialize_tools(self, llm):
#         # Initialize tools
#         web_search = DuckDuckGoSearchRun()
#         tools = [
#             web_search,
#             WriteFileTool(root_dir=ROOT_DIR),
#             ReadFileTool(root_dir=ROOT_DIR),
#             process_csv,
#             WebpageQATool(qa_chain=load_qa_with_sources_chain(llm)),
#         ]
#         # if self.omni_worker_agent:
#         #     tools.append(self.omni_worker_agent.chat)  # add the omni worker agent's chat method as a tool
#         return tools

#     def initialize_vectorstore(self):
#         # Initialize the vector store
#         embeddings_model = OpenAIEmbeddings()
#         embedding_size = 1536
#         index = faiss.IndexFlatL2(embedding_size)
#         return FAISS(embeddings_model.embed_query, index, InMemoryDocstore({}), {})

#     def initialize_worker_node(self, llm, worker_tools, vectorstore):
#         # Initialize the worker node
#         worker_node = WorkerNode(llm=llm, tools=worker_tools, vectorstore=vectorstore)
#         worker_node.create_agent(ai_name="AI Assistant", ai_role="Assistant", human_in_the_loop=False, search_kwargs={})
#         return worker_node

#     def initialize_boss_node(self, llm, vectorstore, worker_node):
#         # Initialize the boss node
#         todo_prompt = PromptTemplate.from_template("You are a planner who is an expert at coming up with a todo list for a given objective. Come up with a todo list for this objective: {objective}")
#         todo_chain = LLMChain(llm=OpenAI(temperature=0), prompt=todo_prompt)
#         tools = [
#             Tool(name="TODO", func=todo_chain.run, description="Useful for when you need to come up with todo lists. Input: an objective to create a todo list for. Output: a todo list for that objective. Please be very clear what the objective is!"),
#             worker_node,
#         ]
#         suffix = """Question: {task}\n{agent_scratchpad}"""
#         prefix = """You are a Boss in a swarm who performs one task based on the following objective: {objective}. Take into account these previously completed tasks: {context}.\n"""
#         prompt = ZeroShotAgent.create_prompt(tools, prefix=prefix, suffix=suffix, input_variables=["objective", "task", "context", "agent_scratchpad"],)
#         llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)
#         agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=[tool.name for tool in tools])
#         agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)
#         return BossNode(self.openai_api_key, llm, vectorstore, agent_executor, verbose=True, max_iterations=5)

#     def run_swarms(self, objective):
#         # Run the swarm with the given objective
#         llm = self.initialize_llm()
#         worker_tools = self.initialize_tools(llm)
#         vectorstore = self.initialize_vectorstore()
#         worker_node = self.initialize_worker_node(llm, worker_tools, vectorstore)
#         boss_node = self.initialize_boss_node(llm, vectorstore, worker_node)
#         task = boss_node.create_task(objective)
#         boss_node.execute_task(task)
#         worker_node.run_agent(objective)


# class Swarms:
#     def __init__(self, num_nodes: int, llm: BaseLLM, self_scaling: bool):
#         self.nodes = [WorkerNode(llm) for _ in range(num_nodes)]
#         self.self_scaling = self_scaling

#     def add_worker(self, llm: BaseLLM):
#         self.nodes.append(WorkerNode(llm))

#     def remove_workers(self, index: int):
#         self.nodes.pop(index)

#     def execute(self, task):
#         # placeholder for the main execution logic
#         pass

#     def scale(self):
#         # placeholder for the self-scaling logic
#         pass


# special classes

# class HierarchicalSwarms(Swarms):
#     def execute(self, task):
#         pass


# class CollaborativeSwarms(Swarms):
#     def execute(self, task):
#         pass


# class CompetitiveSwarms(Swarms):
#     def execute(self, task):
#         pass


# class MultiAgentDebate(Swarms):
#     def execute(self, task):
#         pass


# ======================================> WorkerNode

# class MetaWorkerNode:
#     def __init__(self, llm, tools, vectorstore):
#         self.llm = llm
#         self.tools = tools
#         self.vectorstore = vectorstore

#         self.agent = None
#         self.meta_chain = None

#     def initialize_chain(self, instructions):
#         self.agent = WorkerNode(self.llm, self.tools, self.vectorstore)
#         self.agent.create_agent("Assistant", "Assistant Role", False, {})

#     def initialize_meta_chain(self):
#         meta_template = """
#         Assistant has just had the below interactions with a User. Assistant followed their "Instructions" closely. Your job is to critique the Assistant's performance and then revise the Instructions so that Assistant would quickly and correctly respond in the future.

#         ####

#         {chat_history}

#         ####

#         Please reflect on these interactions.

#         You should first critique Assistant's performance. What could Assistant have done better? What should the Assistant remember about this user? Are there things this user always wants? Indicate this with "Critique: ...".

#         You should next revise the Instructions so that Assistant would quickly and correctly respond in the future. Assistant's goal is to satisfy the user in as few interactions as possible. Assistant will only see the new Instructions, not the interaction history, so anything important must be summarized in the Instructions. Don't forget any important details in the current Instructions! Indicate the new Instructions by "Instructions: ...".
#         """

#         meta_prompt = PromptTemplate(
#             input_variables=["chat_history"], template=meta_template
#         )

#         self.meta_chain = LLMChain(
#             llm=OpenAI(temperature=0),
#             prompt=meta_prompt,
#             verbose=True,
#         )
#         return self.meta_chain

#     def get_chat_history(self, chain_memory):
#         memory_key = chain_memory.memory_key
#         chat_history = chain_memory.load_memory_variables(memory_key)[memory_key]
#         return chat_history

#     def get_new_instructions(self, meta_output):
#         delimiter = "Instructions: "
#         new_instructions = meta_output[meta_output.find(delimiter) + len(delimiter):]
#         return new_instructions

#     def main(self, task, max_iters=3, max_meta_iters=5):
#         failed_phrase = "task failed"
#         success_phrase = "task succeeded"
#         key_phrases = [success_phrase, failed_phrase]

#         instructions = "None"
#         for i in range(max_meta_iters):
#             print(f"[Episode {i+1}/{max_meta_iters}]")
#             self.initialize_chain(instructions)
#             output = self.agent.perform('Assistant', {'request': task})
#             for j in range(max_iters):
#                 print(f"(Step {j+1}/{max_iters})")
#                 print(f"Assistant: {output}")
#                 print("Human: ")
#                 human_input = input()
#                 if any(phrase in human_input.lower() for phrase in key_phrases):
#                     break
#                 output = self.agent.perform('Assistant', {'request': human_input})
#             if success_phrase in human_input.lower():
#                 print("You succeeded! Thanks for playing!")
#                 return
#             self.initialize_meta_chain()
#             # Assumes the worker agent exposes its conversation memory.
#             meta_output = self.meta_chain.predict(chat_history=self.get_chat_history(self.agent.memory))
#             print(f"Feedback: {meta_output}")
#             instructions = self.get_new_instructions(meta_output)
#             print(f"New Instructions: {instructions}")
#             print("\n" + "#" * 80 + "\n")
#         print("You failed! Thanks for playing!")


# # init an instance of MetaWorkerNode
# meta_worker_node = MetaWorkerNode(llm=OpenAI, tools=tools, vectorstore=vectorstore)

# # specify a task and interact with the agent
# task = "Provide a systematic argument for why we should always eat pasta with olives"
# meta_worker_node.main(task)


####################################################################### => Boss Node
@ -1,3 +1,4 @@
# from swarms import Swarms, swarm
from swarms.swarms import Swarms, swarm
from swarms.agents import worker_node, UltraNode
from swarms.agents import worker_node
from swarms.agents.workers.WorkerUltraNode import WorkerUltra
@ -1,3 +1,3 @@
"""Agents, workers and bosses"""
from ..agents.workers import worker_node
from ..agents.workers.worker_ultranode import UltraNode
from .workers.WorkerUltraNode import WorkerUltraNode
@ -0,0 +1,56 @@
from swarms.tools.agent_tools import *
from pydantic import ValidationError
import logging

logging.basicConfig(level=logging.DEBUG, format='%(asctime)s - %(levelname)s - %(message)s')

# ---------- Boss Node ----------
class BossNode:
    """
    The BossNode class is responsible for creating and executing tasks using the BabyAGI model.
    It takes a language model (llm), a vectorstore for memory, an agent_executor for task execution,
    and a maximum number of iterations for the BabyAGI model.
    """
    def __init__(self, llm, vectorstore, agent_executor, max_iterations):
        # Compare against None explicitly so that falsy-but-valid values
        # (e.g. max_iterations=0) are not rejected by accident.
        if llm is None or vectorstore is None or agent_executor is None or max_iterations is None:
            logging.error("llm, vectorstore, agent_executor, and max_iterations cannot be None.")
            raise ValueError("llm, vectorstore, agent_executor, and max_iterations cannot be None.")
        self.llm = llm
        self.vectorstore = vectorstore
        self.agent_executor = agent_executor
        self.max_iterations = max_iterations

        try:
            self.baby_agi = BabyAGI.from_llm(
                llm=self.llm,
                vectorstore=self.vectorstore,
                task_execution_chain=self.agent_executor,
                max_iterations=self.max_iterations,
            )
        except ValidationError as e:
            logging.error(f"Validation Error while initializing BabyAGI: {e}")
            raise
        except Exception as e:
            logging.error(f"Unexpected Error while initializing BabyAGI: {e}")
            raise

    def create_task(self, objective):
        """
        Creates a task with the given objective.
        """
        if not objective:
            logging.error("Objective cannot be empty.")
            raise ValueError("Objective cannot be empty.")
        return {"objective": objective}

    def execute_task(self, task):
        """
        Executes a task using the BabyAGI model.
        """
        if not task:
            logging.error("Task cannot be empty.")
            raise ValueError("Task cannot be empty.")
        try:
            self.baby_agi(task)
        except Exception as e:
            logging.error(f"Error while executing task: {e}")
            raise
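
# Example wiring (illustrative sketch; assumes `llm`, `vectorstore`, and
# `agent_executor` are constructed as in the Swarms.initialize_boss_node
# code shown elsewhere in this PR):
#
#   boss = BossNode(llm, vectorstore, agent_executor, max_iterations=5)
#   task = boss.create_task("Compile a summary of open issues")
#   boss.execute_task(task)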
@ -1,27 +0,0 @@
from swarms.tools.agent_tools import *
from pydantic import ValidationError

# ---------- Boss Node ----------
class BossNode:
    def __init__(self, llm, vectorstore, agent_executor, max_iterations):
        self.llm = llm
        self.vectorstore = vectorstore
        self.agent_executor = agent_executor
        self.max_iterations = max_iterations

        try:
            self.baby_agi = BabyAGI.from_llm(
                llm=self.llm,
                vectorstore=self.vectorstore,
                task_execution_chain=self.agent_executor,
                max_iterations=self.max_iterations,
            )
        except ValidationError as e:
            print(f"Validation Error while initializing BabyAGI: {e}")
        except Exception as e:
            print(f"Unexpected Error while initializing BabyAGI: {e}")

    def create_task(self, objective):
        return {"objective": objective}

    def execute_task(self, task):
        self.baby_agi(task)
@ -0,0 +1,617 @@
|
||||
"""Chain that takes in an input and produces an action and action input."""
|
||||
from __future__ import annotations
|
||||
|
||||
import asyncio
|
||||
import json
|
||||
import logging
|
||||
import time
|
||||
from abc import abstractmethod
|
||||
from pathlib import Path
|
||||
from typing import Any, Callable, Dict, List, Optional, Sequence, Tuple, Union
|
||||
|
||||
import yaml
|
||||
from pydantic import BaseModel, root_validator
|
||||
|
||||
from langchain.agents.agent_types import AgentType
|
||||
from langchain.agents.tools import InvalidTool
|
||||
from langchain.callbacks.base import BaseCallbackManager
|
||||
from langchain.callbacks.manager import (
|
||||
AsyncCallbackManagerForChainRun,
|
||||
AsyncCallbackManagerForToolRun,
|
||||
CallbackManagerForChainRun,
|
||||
CallbackManagerForToolRun,
|
||||
Callbacks,
|
||||
)
|
||||
from langchain.chains.base import Chain
|
||||
from langchain.chains.llm import LLMChain
|
||||
from langchain.input import get_color_mapping
|
||||
from langchain.prompts.few_shot import FewShotPromptTemplate
|
||||
from langchain.prompts.prompt import PromptTemplate
|
||||
from langchain.schema import (
|
||||
AgentAction,
|
||||
AgentFinish,
|
||||
BaseOutputParser,
|
||||
BasePromptTemplate,
|
||||
OutputParserException,
|
||||
)
|
||||
from langchain.schema.language_model import BaseLanguageModel
|
||||
from langchain.schema.messages import BaseMessage
|
||||
from langchain.tools.base import BaseTool
|
||||
from langchain.utilities.asyncio import asyncio_timeout
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class BaseSingleActionAgent(BaseModel):
|
||||
"""Base Agent class."""
|
||||
|
||||
@property
|
||||
def return_values(self) -> List[str]:
|
||||
"""Return values of the agent."""
|
||||
return ["output"]
|
||||
|
||||
def get_allowed_tools(self) -> Optional[List[str]]:
|
||||
return None
|
||||
|
||||
@abstractmethod
|
||||
def plan(
|
||||
self,
|
||||
intermediate_steps: List[Tuple[AgentAction, str]],
|
||||
callbacks: Callbacks = None,
|
||||
**kwargs: Any,
|
||||
) -> Union[AgentAction, AgentFinish]:
|
||||
"""Given input, decided what to do.
|
||||
|
||||
Args:
|
||||
intermediate_steps: Steps the LLM has taken to date,
|
||||
along with observations
|
||||
callbacks: Callbacks to run.
|
||||
**kwargs: User inputs.
|
||||
|
||||
Returns:
|
||||
Action specifying what tool to use.
|
||||
"""
|
||||
|
||||
@abstractmethod
|
||||
async def aplan(
|
||||
self,
|
||||
intermediate_steps: List[Tuple[AgentAction, str]],
|
||||
callbacks: Callbacks = None,
|
||||
**kwargs: Any,
|
||||
) -> Union[AgentAction, AgentFinish]:
|
||||
"""Given input, decided what to do.
|
||||
|
||||
Args:
|
||||
intermediate_steps: Steps the LLM has taken to date,
|
||||
along with observations
|
||||
callbacks: Callbacks to run.
|
||||
**kwargs: User inputs.
|
||||
|
||||
Returns:
|
||||
Action specifying what tool to use.
|
||||
"""
|
||||
|
||||
@property
|
||||
@abstractmethod
|
||||
def input_keys(self) -> List[str]:
|
||||
"""Return the input keys.
|
||||
|
||||
:meta private:
|
||||
"""
|
||||
|
||||
def return_stopped_response(
|
||||
self,
|
||||
early_stopping_method: str,
|
||||
intermediate_steps: List[Tuple[AgentAction, str]],
|
||||
**kwargs: Any,
|
||||
) -> AgentFinish:
|
||||
"""Return response when agent has been stopped due to max iterations."""
|
||||
if early_stopping_method == "force":
|
||||
# `force` just returns a constant string
|
||||
return AgentFinish(
|
||||
{"output": "Agent stopped due to iteration limit or time limit."}, ""
|
||||
)
|
||||
else:
|
||||
raise ValueError(
|
||||
f"Got unsupported early_stopping_method `{early_stopping_method}`"
|
||||
)
|
||||
|
||||
@classmethod
|
||||
def from_llm_and_tools(
|
||||
cls,
|
||||
llm: BaseLanguageModel,
|
||||
tools: Sequence[BaseTool],
|
||||
callback_manager: Optional[BaseCallbackManager] = None,
|
||||
**kwargs: Any,
|
||||
) -> BaseSingleActionAgent:
|
||||
raise NotImplementedError
|
||||
|
||||
@property
|
||||
def _agent_type(self) -> str:
|
||||
"""Return Identifier of agent type."""
|
||||
raise NotImplementedError
|
||||
|
||||
def dict(self, **kwargs: Any) -> Dict:
|
||||
"""Return dictionary representation of agent."""
|
||||
_dict = super().dict()
|
||||
_type = self._agent_type
|
||||
if isinstance(_type, AgentType):
|
||||
_dict["_type"] = str(_type.value)
|
||||
else:
|
||||
_dict["_type"] = _type
|
||||
return _dict
|
||||
|
||||
def save(self, file_path: Union[Path, str]) -> None:
|
||||
"""Save the agent.
|
||||
|
||||
Args:
|
||||
file_path: Path to file to save the agent to.
|
||||
|
||||
Example:
|
||||
.. code-block:: python
|
||||
|
||||
# If working with agent executor
|
||||
agent.agent.save(file_path="path/agent.yaml")
|
||||
"""
|
||||
# Convert file to Path object.
|
||||
if isinstance(file_path, str):
|
||||
save_path = Path(file_path)
|
||||
else:
|
||||
save_path = file_path
|
||||
|
||||
directory_path = save_path.parent
|
||||
directory_path.mkdir(parents=True, exist_ok=True)
|
||||
|
||||
# Fetch dictionary to save
|
||||
agent_dict = self.dict()
|
||||
|
||||
if save_path.suffix == ".json":
|
||||
with open(file_path, "w") as f:
|
||||
json.dump(agent_dict, f, indent=4)
|
||||
elif save_path.suffix == ".yaml":
|
||||
with open(file_path, "w") as f:
|
||||
yaml.dump(agent_dict, f, default_flow_style=False)
|
||||
else:
|
||||
raise ValueError(f"{save_path} must be json or yaml")
|
||||
|
||||
def tool_run_logging_kwargs(self) -> Dict:
|
||||
return {}
|
||||
|
||||
|
||||
class BaseMultiActionAgent(BaseModel):
|
||||
"""Base Agent class."""
|
||||
|
||||
@property
|
||||
def return_values(self) -> List[str]:
|
||||
"""Return values of the agent."""
|
||||
return ["output"]
|
||||
|
||||
def get_allowed_tools(self) -> Optional[List[str]]:
|
||||
return None
|
||||
|
||||
@abstractmethod
|
||||
def plan(
|
||||
self,
|
||||
intermediate_steps: List[Tuple[AgentAction, str]],
|
||||
callbacks: Callbacks = None,
|
||||
**kwargs: Any,
|
||||
) -> Union[List[AgentAction], AgentFinish]:
|
||||
"""Given input, decided what to do.
|
||||
|
||||
Args:
|
||||
intermediate_steps: Steps the LLM has taken to date,
|
||||
along with observations
|
||||
callbacks: Callbacks to run.
|
||||
**kwargs: User inputs.
|
||||
|
||||
Returns:
|
||||
Actions specifying what tool to use.
|
||||
"""
|
||||
|
||||
@abstractmethod
|
||||
async def aplan(
|
||||
self,
|
||||
intermediate_steps: List[Tuple[AgentAction, str]],
|
||||
callbacks: Callbacks = None,
|
||||
**kwargs: Any,
|
||||
) -> Union[List[AgentAction], AgentFinish]:
|
||||
"""Given input, decided what to do.
|
||||
|
||||
Args:
|
||||
intermediate_steps: Steps the LLM has taken to date,
|
||||
along with observations
|
||||
callbacks: Callbacks to run.
|
||||
**kwargs: User inputs.
|
||||
|
||||
Returns:
|
||||
Actions specifying what tool to use.
|
||||
"""
|
||||
|
||||
@property
|
||||
@abstractmethod
|
||||
def input_keys(self) -> List[str]:
|
||||
"""Return the input keys.
|
||||
|
||||
:meta private:
|
||||
"""
|
||||
|
||||
def return_stopped_response(
|
||||
self,
|
||||
early_stopping_method: str,
|
||||
intermediate_steps: List[Tuple[AgentAction, str]],
|
||||
**kwargs: Any,
|
||||
) -> AgentFinish:
|
||||
"""Return response when agent has been stopped due to max iterations."""
|
||||
if early_stopping_method == "force":
|
||||
# `force` just returns a constant string
|
||||
return AgentFinish({"output": "Agent stopped due to max iterations."}, "")
|
||||
else:
|
||||
raise ValueError(
|
||||
f"Got unsupported early_stopping_method `{early_stopping_method}`"
|
||||
)
|
||||
|
||||
@property
|
||||
def _agent_type(self) -> str:
|
||||
"""Return Identifier of agent type."""
|
||||
raise NotImplementedError
|
||||
|
||||
def dict(self, **kwargs: Any) -> Dict:
|
||||
"""Return dictionary representation of agent."""
|
||||
_dict = super().dict()
|
||||
_dict["_type"] = str(self._agent_type)
|
||||
return _dict
|
||||
|
||||
def save(self, file_path: Union[Path, str]) -> None:
|
||||
"""Save the agent.
|
||||
|
||||
Args:
|
||||
file_path: Path to file to save the agent to.
|
||||
|
||||
Example:
|
||||
.. code-block:: python
|
||||
|
||||
# If working with agent executor
|
||||
agent.agent.save(file_path="path/agent.yaml")
|
||||
"""
|
||||
# Convert file to Path object.
|
||||
if isinstance(file_path, str):
|
||||
save_path = Path(file_path)
|
||||
else:
|
||||
save_path = file_path
|
||||
|
||||
directory_path = save_path.parent
|
||||
directory_path.mkdir(parents=True, exist_ok=True)
|
||||
|
||||
# Fetch dictionary to save
|
||||
agent_dict = self.dict()
|
||||
|
||||
if save_path.suffix == ".json":
|
||||
with open(file_path, "w") as f:
|
||||
json.dump(agent_dict, f, indent=4)
|
||||
elif save_path.suffix == ".yaml":
|
||||
with open(file_path, "w") as f:
|
||||
yaml.dump(agent_dict, f, default_flow_style=False)
|
||||
else:
|
||||
raise ValueError(f"{save_path} must be json or yaml")
|
||||
|
||||
def tool_run_logging_kwargs(self) -> Dict:
|
||||
return {}
|
||||
|
||||
|
||||
class AgentOutputParser(BaseOutputParser):
|
||||
@abstractmethod
|
||||
def parse(self, text: str) -> Union[AgentAction, AgentFinish]:
|
||||
"""Parse text into agent action/finish."""
|
||||
|
||||
|
||||
class LLMSingleActionAgent(BaseSingleActionAgent):
|
||||
llm_chain: LLMChain
|
||||
output_parser: AgentOutputParser
|
||||
stop: List[str]
|
||||
|
||||
@property
|
||||
def input_keys(self) -> List[str]:
|
||||
return list(set(self.llm_chain.input_keys) - {"intermediate_steps"})
|
||||
|
||||
def dict(self, **kwargs: Any) -> Dict:
|
||||
"""Return dictionary representation of agent."""
|
||||
_dict = super().dict()
|
||||
del _dict["output_parser"]
|
||||
return _dict
|
||||
|
||||
def plan(
|
||||
self,
|
||||
intermediate_steps: List[Tuple[AgentAction, str]],
|
||||
callbacks: Callbacks = None,
|
||||
**kwargs: Any,
|
||||
) -> Union[AgentAction, AgentFinish]:
|
||||
"""Given input, decided what to do.
|
||||
|
||||
Args:
|
||||
intermediate_steps: Steps the LLM has taken to date,
|
||||
along with observations
|
||||
callbacks: Callbacks to run.
|
||||
**kwargs: User inputs.
|
||||
|
||||
Returns:
|
||||
Action specifying what tool to use.
|
||||
"""
|
||||
output = self.llm_chain.run(
|
||||
intermediate_steps=intermediate_steps,
|
||||
stop=self.stop,
|
||||
callbacks=callbacks,
|
||||
**kwargs,
|
||||
)
|
||||
return self.output_parser.parse(output)
|
||||
|
||||
async def aplan(
|
||||
self,
|
||||
intermediate_steps: List[Tuple[AgentAction, str]],
|
||||
callbacks: Callbacks = None,
|
||||
**kwargs: Any,
|
||||
) -> Union[AgentAction, AgentFinish]:
|
||||
"""Given input, decided what to do.
|
||||
|
||||
Args:
|
||||
intermediate_steps: Steps the LLM has taken to date,
|
||||
along with observations
|
||||
callbacks: Callbacks to run.
|
||||
**kwargs: User inputs.
|
||||
|
||||
Returns:
|
||||
Action specifying what tool to use.
|
||||
"""
|
||||
output = await self.llm_chain.arun(
|
||||
intermediate_steps=intermediate_steps,
|
||||
stop=self.stop,
|
||||
callbacks=callbacks,
|
||||
**kwargs,
|
||||
)
|
||||
return self.output_parser.parse(output)
|
||||
|
||||
def tool_run_logging_kwargs(self) -> Dict:
|
||||
return {
|
||||
"llm_prefix": "",
|
||||
"observation_prefix": "" if len(self.stop) == 0 else self.stop[0],
|
||||
}
|
||||
|
||||
|
||||
class Agent(BaseSingleActionAgent):
|
||||
"""Class responsible for calling the language model and deciding the action.
|
||||
|
||||
This is driven by an LLMChain. The prompt in the LLMChain MUST include
|
||||
a variable called "agent_scratchpad" where the agent can put its
|
||||
intermediary work.
|
||||
"""
|
||||
|
||||
llm_chain: LLMChain
|
||||
output_parser: AgentOutputParser
|
||||
allowed_tools: Optional[List[str]] = None
|
||||
|
||||
def dict(self, **kwargs: Any) -> Dict:
|
||||
"""Return dictionary representation of agent."""
|
||||
_dict = super().dict()
|
||||
del _dict["output_parser"]
|
||||
return _dict
|
||||
|
||||
def get_allowed_tools(self) -> Optional[List[str]]:
|
||||
return self.allowed_tools
|
||||
|
||||
@property
|
||||
def return_values(self) -> List[str]:
|
||||
return ["output"]
|
||||
|
||||
def _fix_text(self, text: str) -> str:
|
||||
"""Fix the text."""
|
||||
raise ValueError("fix_text not implemented for this agent.")
|
||||
|
||||
@property
|
||||
def _stop(self) -> List[str]:
|
||||
return [
|
||||
f"\n{self.observation_prefix.rstrip()}",
|
||||
f"\n\t{self.observation_prefix.rstrip()}",
|
||||
]
|
||||
|
||||
def _construct_scratchpad(
|
||||
self, intermediate_steps: List[Tuple[AgentAction, str]]
|
||||
) -> Union[str, List[BaseMessage]]:
|
||||
"""Construct the scratchpad that lets the agent continue its thought process."""
|
||||
thoughts = ""
|
||||
for action, observation in intermediate_steps:
|
||||
thoughts += action.log
|
||||
thoughts += f"\n{self.observation_prefix}{observation}\n{self.llm_prefix}"
|
||||
return thoughts
|
||||
|
||||
def plan(
|
||||
self,
|
||||
intermediate_steps: List[Tuple[AgentAction, str]],
|
||||
callbacks: Callbacks = None,
|
||||
**kwargs: Any,
|
||||
) -> Union[AgentAction, AgentFinish]:
|
||||
"""Given input, decided what to do.
|
||||
|
||||
Args:
|
||||
intermediate_steps: Steps the LLM has taken to date,
|
||||
along with observations
|
||||
callbacks: Callbacks to run.
|
||||
**kwargs: User inputs.
|
||||
|
||||
Returns:
|
||||
Action specifying what tool to use.
|
||||
"""
|
||||
full_inputs = self.get_full_inputs(intermediate_steps, **kwargs)
|
||||
full_output = self.llm_chain.predict(callbacks=callbacks, **full_inputs)
|
||||
return self.output_parser.parse(full_output)
|
||||
|
||||
async def aplan(
|
||||
self,
|
||||
intermediate_steps: List[Tuple[AgentAction, str]],
|
||||
callbacks: Callbacks = None,
|
||||
**kwargs: Any,
|
||||
) -> Union[AgentAction, AgentFinish]:
|
||||
"""Given input, decided what to do.
|
||||
|
||||
Args:
|
||||
intermediate_steps: Steps the LLM has taken to date,
|
||||
along with observations
|
||||
callbacks: Callbacks to run.
|
||||
**kwargs: User inputs.
|
||||
|
||||
Returns:
|
||||
Action specifying what tool to use.
|
||||
"""
|
||||
full_inputs = self.get_full_inputs(intermediate_steps, **kwargs)
|
||||
full_output = await self.llm_chain.apredict(callbacks=callbacks, **full_inputs)
|
||||
return self.output_parser.parse(full_output)
|
||||
|
||||
def get_full_inputs(
|
||||
self, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any
|
||||
) -> Dict[str, Any]:
|
||||
"""Create the full inputs for the LLMChain from intermediate steps."""
|
||||
thoughts = self._construct_scratchpad(intermediate_steps)
|
||||
new_inputs = {"agent_scratchpad": thoughts, "stop": self._stop}
|
||||
full_inputs = {**kwargs, **new_inputs}
|
||||
return full_inputs
|
||||
|
||||
@property
|
||||
def input_keys(self) -> List[str]:
|
||||
"""Return the input keys.
|
||||
|
||||
:meta private:
|
||||
"""
|
||||
return list(set(self.llm_chain.input_keys) - {"agent_scratchpad"})
|
||||
|
||||
@root_validator()
|
||||
def validate_prompt(cls, values: Dict) -> Dict:
|
||||
"""Validate that prompt matches format."""
|
||||
prompt = values["llm_chain"].prompt
|
||||
if "agent_scratchpad" not in prompt.input_variables:
|
||||
logger.warning(
|
||||
"`agent_scratchpad` should be a variable in prompt.input_variables."
|
||||
" Did not find it, so adding it at the end."
|
||||
)
|
||||
prompt.input_variables.append("agent_scratchpad")
|
||||
if isinstance(prompt, PromptTemplate):
|
||||
prompt.template += "\n{agent_scratchpad}"
|
||||
elif isinstance(prompt, FewShotPromptTemplate):
|
||||
prompt.suffix += "\n{agent_scratchpad}"
|
||||
else:
|
||||
raise ValueError(f"Got unexpected prompt type {type(prompt)}")
|
||||
return values
|
||||
|
||||
@property
|
||||
@abstractmethod
|
||||
def observation_prefix(self) -> str:
|
||||
"""Prefix to append the observation with."""
|
||||
|
||||
@property
|
||||
@abstractmethod
|
||||
def llm_prefix(self) -> str:
|
||||
"""Prefix to append the LLM call with."""
|
||||
|
||||
@classmethod
|
||||
@abstractmethod
|
||||
    def create_prompt(cls, tools: Sequence[BaseTool]) -> BasePromptTemplate:
        """Create a prompt for this class."""

    @classmethod
    def _validate_tools(cls, tools: Sequence[BaseTool]) -> None:
        """Validate that appropriate tools are passed in."""
        pass

    @classmethod
    @abstractmethod
    def _get_default_output_parser(cls, **kwargs: Any) -> AgentOutputParser:
        """Get default output parser for this class."""

    @classmethod
    def from_llm_and_tools(
        cls,
        llm: BaseLanguageModel,
        tools: Sequence[BaseTool],
        callback_manager: Optional[BaseCallbackManager] = None,
        output_parser: Optional[AgentOutputParser] = None,
        **kwargs: Any,
    ) -> Agent:
        """Construct an agent from an LLM and tools."""
        cls._validate_tools(tools)
        llm_chain = LLMChain(
            llm=llm,
            prompt=cls.create_prompt(tools),
            callback_manager=callback_manager,
        )
        tool_names = [tool.name for tool in tools]
        _output_parser = output_parser or cls._get_default_output_parser()
        return cls(
            llm_chain=llm_chain,
            allowed_tools=tool_names,
            output_parser=_output_parser,
            **kwargs,
        )

    def return_stopped_response(
        self,
        early_stopping_method: str,
        intermediate_steps: List[Tuple[AgentAction, str]],
        **kwargs: Any,
    ) -> AgentFinish:
        """Return response when agent has been stopped due to max iterations."""
        if early_stopping_method == "force":
            # `force` just returns a constant string
            return AgentFinish(
                {"output": "Agent stopped due to iteration limit or time limit."}, ""
            )
        elif early_stopping_method == "generate":
            # `generate` does one final forward pass
            thoughts = ""
            for action, observation in intermediate_steps:
                thoughts += action.log
                thoughts += (
                    f"\n{self.observation_prefix}{observation}\n{self.llm_prefix}"
                )
            # Adding to the previous steps, we now tell the LLM to make a final prediction
            thoughts += (
                "\n\nI now need to return a final answer based on the previous steps:"
            )
            new_inputs = {"agent_scratchpad": thoughts, "stop": self._stop}
            full_inputs = {**kwargs, **new_inputs}
            full_output = self.llm_chain.predict(**full_inputs)
            # Try to extract a final answer from that last pass
            parsed_output = self.output_parser.parse(full_output)
            if isinstance(parsed_output, AgentFinish):
                # If a final answer can be extracted, return it directly
                return parsed_output
            else:
                # If the output parses but is not a final answer,
                # just return the full output
                return AgentFinish({"output": full_output}, full_output)
        else:
            raise ValueError(
                "early_stopping_method should be one of `force` or `generate`, "
                f"got {early_stopping_method}"
            )

    def tool_run_logging_kwargs(self) -> Dict:
        return {
            "llm_prefix": self.llm_prefix,
            "observation_prefix": self.observation_prefix,
        }


class ExceptionTool(BaseTool):
    name = "_Exception"
    description = "Exception tool"

    def _run(
        self,
        query: str,
        run_manager: Optional[CallbackManagerForToolRun] = None,
    ) -> str:
        return query

    async def _arun(
        self,
        query: str,
        run_manager: Optional[AsyncCallbackManagerForToolRun] = None,
    ) -> str:
        return query
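# A minimal sketch of how early stopping surfaces to callers, assuming an
# AgentExecutor built from a concrete Agent subclass (`agent` and `tools`
# here are placeholders, not defined in this file):
#
#     executor = AgentExecutor.from_agent_and_tools(
#         agent=agent,
#         tools=tools,
#         max_iterations=3,
#         early_stopping_method="generate",  # "force" returns a fixed message instead
#     )
#
# With "generate", return_stopped_response gives the LLM one final pass over the
# scratchpad to produce an answer; with "force", it stops with a canned string.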
@@ -0,0 +1,115 @@
from typing import Dict, Optional
import logging

from celery import Task

from langchain.agents.agent import AgentExecutor
from langchain.callbacks.manager import CallbackManager
from langchain.chains.conversation.memory import ConversationBufferMemory
from langchain.memory.chat_memory import BaseChatMemory

from swarms.tools.main import BaseToolSet, ToolsFactory
from .AgentBuilder import AgentBuilder
from .Calback import EVALCallbackHandler, ExecutionTracingCallbackHandler


callback_manager_instance = CallbackManager(EVALCallbackHandler())


class AgentManager:
    def __init__(self, toolsets: list[BaseToolSet] = []):
        if not isinstance(toolsets, list):
            raise TypeError("Toolsets must be a list")
        self.toolsets: list[BaseToolSet] = toolsets
        self.memories: Dict[str, BaseChatMemory] = {}
        self.executors: Dict[str, AgentExecutor] = {}

    def create_memory(self) -> BaseChatMemory:
        return ConversationBufferMemory(memory_key="chat_history", return_messages=True)

    def get_or_create_memory(self, session: str) -> BaseChatMemory:
        if not isinstance(session, str):
            raise TypeError("Session must be a string")
        if not session:
            raise ValueError("Session is empty")
        if session not in self.memories:
            self.memories[session] = self.create_memory()
        return self.memories[session]

    def create_executor(self, session: str, execution: Optional[Task] = None, openai_api_key: str = None) -> AgentExecutor:
        try:
            builder = AgentBuilder(self.toolsets)
            builder.build_parser()

            callbacks = []
            eval_callback = EVALCallbackHandler()
            eval_callback.set_parser(builder.get_parser())
            callbacks.append(eval_callback)

            if execution:
                execution_callback = ExecutionTracingCallbackHandler(execution)
                execution_callback.set_parser(builder.get_parser())
                callbacks.append(execution_callback)

            # LLM init
            callback_manager = CallbackManager(callbacks)
            builder.build_llm(callback_manager, openai_api_key)
            if builder.llm is None:
                raise ValueError("LLM not created")

            builder.build_global_tools()

            # Agent init
            agent = builder.get_agent()
            if not agent:
                raise ValueError("Agent not created")

            memory: BaseChatMemory = self.get_or_create_memory(session)
            tools = [
                *builder.get_global_tools(),
                *ToolsFactory.create_per_session_tools(
                    self.toolsets,
                    get_session=lambda: (session, self.executors[session]),
                ),
            ]

            for tool in tools:
                tool.callback_manager = callback_manager

            # Prepare the arguments for the executor
            executor_args = {
                "agent": agent,
                "tools": tools,
                "memory": memory,
                "callback_manager": callback_manager,
                "verbose": True,  # or any other value, depending on your requirements
            }

            executor = AgentExecutor.from_agent_and_tools(**executor_args)

            # Ensure the 'agent' key is present on the executor
            if "agent" not in executor.__dict__:
                executor.__dict__["agent"] = agent
            self.executors[session] = executor

            return executor
        except Exception as e:
            logging.error(f"Error while creating executor: {str(e)}")
            raise e

    @staticmethod
    def create(toolsets: list[BaseToolSet]) -> "AgentManager":
        if not isinstance(toolsets, list):
            raise TypeError("Toolsets must be a list")
        return AgentManager(toolsets=toolsets)
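# A minimal usage sketch, assuming the Terminal and CodeEditor toolsets from
# swarms.tools.main; the session name and API key are placeholders:
#
#     from swarms.tools.main import Terminal, CodeEditor
#
#     manager = AgentManager.create(toolsets=[Terminal(), CodeEditor()])
#     executor = manager.create_executor("session-1", openai_api_key="sk-...")
#     result = executor({"input": "List the files in the current directory"})
#
# Memories are cached per session string, so executors recreated for the same
# session share conversation history.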
@@ -1 +1 @@
"""Agents"""
@@ -1,82 +0,0 @@
from typing import Dict, Optional
from celery import Task

from langchain.agents.agent import AgentExecutor
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.base import set_handler
from langchain.chains.conversation.memory import ConversationBufferMemory
from langchain.memory.chat_memory import BaseChatMemory

from swarms.tools.main import BaseToolSet, ToolsFactory

from .builder import AgentBuilder
from .callback import EVALCallbackHandler, ExecutionTracingCallbackHandler


set_handler(EVALCallbackHandler())


class AgentManager:
    def __init__(
        self,
        toolsets: list[BaseToolSet] = [],
    ):
        self.toolsets: list[BaseToolSet] = toolsets
        self.memories: Dict[str, BaseChatMemory] = {}
        self.executors: Dict[str, AgentExecutor] = {}

    def create_memory(self) -> BaseChatMemory:
        return ConversationBufferMemory(memory_key="chat_history", return_messages=True)

    def get_or_create_memory(self, session: str) -> BaseChatMemory:
        if session not in self.memories:
            self.memories[session] = self.create_memory()
        return self.memories[session]

    def create_executor(
        self, session: str, execution: Optional[Task] = None
    ) -> AgentExecutor:
        builder = AgentBuilder(self.toolsets)
        builder.build_parser()

        callbacks = []
        eval_callback = EVALCallbackHandler()
        eval_callback.set_parser(builder.get_parser())
        callbacks.append(eval_callback)
        if execution:
            execution_callback = ExecutionTracingCallbackHandler(execution)
            execution_callback.set_parser(builder.get_parser())
            callbacks.append(execution_callback)

        callback_manager = CallbackManager(callbacks)

        builder.build_llm(callback_manager)
        builder.build_global_tools()

        memory: BaseChatMemory = self.get_or_create_memory(session)
        tools = [
            *builder.get_global_tools(),
            *ToolsFactory.create_per_session_tools(
                self.toolsets,
                get_session=lambda: (session, self.executors[session]),
            ),
        ]

        for tool in tools:
            tool.callback_manager = callback_manager

        executor = AgentExecutor.from_agent_and_tools(
            agent=builder.get_agent(),
            tools=tools,
            memory=memory,
            callback_manager=callback_manager,
            verbose=True,
        )
        self.executors[session] = executor
        return executor

    @staticmethod
    def create(toolsets: list[BaseToolSet]) -> "AgentManager":
        return AgentManager(
            toolsets=toolsets,
        )
@@ -0,0 +1,105 @@
def generate_agent_role_prompt(agent):
    """Generates the agent role prompt.

    Args: agent (str): The type of the agent.
    Returns: str: The agent role prompt.
    """
    prompts = {
        "Finance Agent": "You are a seasoned finance analyst AI assistant. Your primary goal is to compose comprehensive, astute, impartial, and methodically arranged financial reports based on provided data and trends.",
        "Travel Agent": "You are a world-travelled AI tour guide assistant. Your main purpose is to draft engaging, insightful, unbiased, and well-structured travel reports on given locations, including history, attractions, and cultural insights.",
        "Academic Research Agent": "You are an AI academic research assistant. Your primary responsibility is to create thorough, academically rigorous, unbiased, and systematically organized reports on a given research topic, following the standards of scholarly work.",
        "Default Agent": "You are an AI critical thinker research assistant. Your sole purpose is to write well-written, critically acclaimed, objective and structured reports on given text."
    }

    return prompts.get(agent, "No such agent")


def generate_report_prompt(question, research_summary):
    """Generates the report prompt for the given question and research summary.

    Args: question (str): The question to generate the report prompt for
          research_summary (str): The research summary to generate the report prompt for
    Returns: str: The report prompt for the given question and research summary
    """

    return f'"""{research_summary}""" Using the above information, answer the following'\
           f' question or topic: "{question}" in a detailed report --'\
           " The report should focus on the answer to the question, should be well structured, informative," \
           " in depth, with facts and numbers if available, a minimum of 1,200 words and with Markdown syntax and APA format. "\
           "Write all source urls at the end of the report in APA format."


def generate_search_queries_prompt(question):
    """Generates the search queries prompt for the given question.

    Args: question (str): The question to generate the search queries prompt for
    Returns: str: The search queries prompt for the given question
    """

    return f'Write 4 Google search queries to search online that form an objective opinion from the following: "{question}" '\
           'You must respond with a list of strings in the following format: ["query 1", "query 2", "query 3", "query 4"]'


def generate_resource_report_prompt(question, research_summary):
    """Generates the resource report prompt for the given question and research summary.

    Args:
        question (str): The question to generate the resource report prompt for.
        research_summary (str): The research summary to generate the resource report prompt for.

    Returns:
        str: The resource report prompt for the given question and research summary.
    """
    return f'"""{research_summary}""" Based on the above information, generate a bibliography recommendation report for the following' \
           f' question or topic: "{question}". The report should provide a detailed analysis of each recommended resource,' \
           ' explaining how each source can contribute to finding answers to the research question.' \
           ' Focus on the relevance, reliability, and significance of each source.' \
           ' Ensure that the report is well-structured, informative, in-depth, and follows Markdown syntax.' \
           ' Include relevant facts, figures, and numbers whenever available.' \
           ' The report should have a minimum length of 1,200 words.'


def generate_outline_report_prompt(question, research_summary):
    """Generates the outline report prompt for the given question and research summary.

    Args: question (str): The question to generate the outline report prompt for
          research_summary (str): The research summary to generate the outline report prompt for
    Returns: str: The outline report prompt for the given question and research summary
    """

    return f'"""{research_summary}""" Using the above information, generate an outline for a research report in Markdown syntax'\
           f' for the following question or topic: "{question}". The outline should provide a well-structured framework'\
           ' for the research report, including the main sections, subsections, and key points to be covered.' \
           ' The research report should be detailed, informative, in-depth, and a minimum of 1,200 words.' \
           ' Use appropriate Markdown syntax to format the outline and ensure readability.'


def generate_concepts_prompt(question, research_summary):
    """Generates the concepts prompt for the given question.

    Args: question (str): The question to generate the concepts prompt for
          research_summary (str): The research summary to generate the concepts prompt for
    Returns: str: The concepts prompt for the given question
    """

    return f'"""{research_summary}""" Using the above information, generate a list of 5 main concepts to learn for a research report'\
           f' on the following question or topic: "{question}". '\
           'You must respond with a list of strings in the following format: ["concept 1", "concept 2", "concept 3", "concept 4", "concept 5"]'


def generate_lesson_prompt(concept):
    """Generates the lesson prompt for the given concept.

    Args:
        concept (str): The concept to generate the lesson prompt for.
    Returns:
        str: The lesson prompt for the given concept.
    """

    prompt = f'Generate a comprehensive lesson about {concept} in Markdown syntax. This should include the definition '\
             f'of {concept}, its historical background and development, its applications or uses in different '\
             f'fields, and notable events or facts related to {concept}.'

    return prompt


def get_report_by_type(report_type):
    report_type_mapping = {
        'research_report': generate_report_prompt,
        'resource_report': generate_resource_report_prompt,
        'outline_report': generate_outline_report_prompt
    }
    return report_type_mapping[report_type]
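# A minimal usage sketch (the question and summary strings are placeholders):
#
#     role = generate_agent_role_prompt("Finance Agent")
#     prompt_fn = get_report_by_type("research_report")
#     prompt = prompt_fn("What is driving semiconductor demand?", research_summary)
#
# Note that get_report_by_type raises KeyError for unknown report types; callers
# that need a fallback could switch to report_type_mapping.get(...).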
@@ -0,0 +1,157 @@
from swarms.tools.agent_tools import *
from langchain.tools import BaseTool
from langchain.callbacks.manager import (
    AsyncCallbackManagerForToolRun,
    CallbackManagerForToolRun,
)
from typing import List, Any, Dict, Optional, Type
from langchain.memory.chat_message_histories import FileChatMessageHistory

import logging
from pydantic import BaseModel, Extra

logging.basicConfig(level=logging.DEBUG, format='%(asctime)s - %(levelname)s - %(message)s')


class WorkerNode:
    """Useful for when you need to spawn an autonomous agent instance as a worker to accomplish complex tasks; it can search the internet or spawn child multi-modality models to process and generate images, text, audio, and so on."""

    def __init__(self, llm, tools, vectorstore):
        if not llm or not tools or not vectorstore:
            logging.error("llm, tools, and vectorstore cannot be None.")
            raise ValueError("llm, tools, and vectorstore cannot be None.")

        self.llm = llm
        self.tools = tools
        self.vectorstore = vectorstore
        self.agent = None

    def create_agent(self, ai_name="Swarm Worker AI Assistant", ai_role="Assistant", human_in_the_loop=False, search_kwargs={}, verbose=False):
        logging.info("Creating agent in WorkerNode")
        try:
            self.agent = AutoGPT.from_llm_and_tools(
                ai_name=ai_name,
                ai_role=ai_role,
                tools=self.tools,
                llm=self.llm,
                memory=self.vectorstore.as_retriever(search_kwargs=search_kwargs),
                human_in_the_loop=human_in_the_loop,
                chat_history_memory=FileChatMessageHistory("chat_history.txt"),
            )
            self.agent.chain.verbose = verbose
        except Exception as e:
            logging.error(f"Error while creating agent: {str(e)}")
            raise e

    def add_tool(self, tool: Tool):
        if not isinstance(tool, Tool):
            logging.error("Tool must be an instance of Tool.")
            raise TypeError("Tool must be an instance of Tool.")

        self.tools.append(tool)

    def run(self, prompt: str) -> str:
        if not isinstance(prompt, str):
            logging.error("Prompt must be a string.")
            raise TypeError("Prompt must be a string.")

        if not prompt:
            logging.error("Prompt is empty.")
            raise ValueError("Prompt is empty.")

        try:
            self.agent.run([f"{prompt}"])
            return "Task completed by WorkerNode"
        except Exception as e:
            logging.error(f"Error while running the agent: {str(e)}")
            raise e


class WorkerNodeInitializer:
    def __init__(self, openai_api_key):
        if not openai_api_key:
            logging.error("OpenAI API key is not provided")
            raise ValueError("openai_api_key cannot be None")

        self.openai_api_key = openai_api_key

    def initialize_llm(self, llm_class, temperature=0.5):
        if not llm_class:
            logging.error("llm_class cannot be None")
            raise ValueError("llm_class cannot be None")

        try:
            return llm_class(openai_api_key=self.openai_api_key, temperature=temperature)
        except Exception as e:
            logging.error(f"Failed to initialize language model: {e}")
            raise

    def initialize_tools(self, llm_class):
        if not llm_class:
            logging.error("llm_class cannot be None")
            raise ValueError("llm_class cannot be None")
        try:
            logging.info('Creating WorkerNode')
            llm = self.initialize_llm(llm_class)
            web_search = DuckDuckGoSearchRun()

            tools = [
                web_search,
                WriteFileTool(root_dir=ROOT_DIR),
                ReadFileTool(root_dir=ROOT_DIR),
                process_csv,
                WebpageQATool(qa_chain=load_qa_with_sources_chain(llm)),
            ]
            if not tools:
                logging.error("Tools are not initialized")
                raise ValueError("Tools are not initialized")
            return tools
        except Exception as e:
            logging.error(f"Failed to initialize tools: {e}")
            raise

    def initialize_vectorstore(self):
        try:
            embeddings_model = OpenAIEmbeddings(openai_api_key=self.openai_api_key)
            embedding_size = 1536
            index = faiss.IndexFlatL2(embedding_size)
            return FAISS(embeddings_model.embed_query, index, InMemoryDocstore({}), {})
        except Exception as e:
            logging.error(f"Failed to initialize vector store: {e}")
            raise

    def create_worker_node(self, llm_class=ChatOpenAI, ai_name="Swarm Worker AI Assistant", ai_role="Assistant", human_in_the_loop=False, search_kwargs={}, verbose=False):
        if not llm_class:
            logging.error("llm_class cannot be None.")
            raise ValueError("llm_class cannot be None.")
        try:
            worker_tools = self.initialize_tools(llm_class)
            vectorstore = self.initialize_vectorstore()
            worker_node = WorkerNode(llm=self.initialize_llm(llm_class), tools=worker_tools, vectorstore=vectorstore)
            worker_node.create_agent(ai_name=ai_name, ai_role=ai_role, human_in_the_loop=human_in_the_loop, search_kwargs=search_kwargs, verbose=verbose)
            return worker_node
        except Exception as e:
            logging.error(f"Failed to create worker node: {e}")
            raise


def worker_node(openai_api_key):
    if not openai_api_key:
        logging.error("OpenAI API key is not provided")
        raise ValueError("OpenAI API key is required")

    try:
        initializer = WorkerNodeInitializer(openai_api_key)
        worker_node = initializer.create_worker_node()
        return worker_node
    except Exception as e:
        logging.error(f"An error occurred in worker_node: {e}")
        raise
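# A minimal usage sketch, assuming a valid OpenAI API key (the key and prompt
# strings below are placeholders):
#
#     node = worker_node("sk-...")
#     node.run("Research three vector databases and summarize the differences")
#
# create_worker_node defaults to ChatOpenAI; pass llm_class to swap in another
# LangChain chat model class.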
@@ -0,0 +1,114 @@
import os
import re
import logging
from pathlib import Path
from typing import Dict, List

from swarms.agents.utils.AgentManager import AgentManager
from swarms.utils.main import BaseHandler, FileHandler, FileType

from swarms.tools.main import ExitConversation, RequestsGet, CodeEditor, Terminal
from swarms.utils.main import CsvToDataframe

from swarms.tools.main import BaseToolSet
from swarms.utils.main import StaticUploader

logging.basicConfig(level=logging.DEBUG, format='%(asctime)s - %(levelname)s - %(message)s')

BASE_DIR = Path(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

# Check if the "PLAYGROUND_DIR" environment variable exists; if not, fall back to a default
playground = os.environ.get("PLAYGROUND_DIR", './playground')

# Ensure the path exists before changing the directory
os.makedirs(BASE_DIR / playground, exist_ok=True)

try:
    os.chdir(BASE_DIR / playground)
except Exception as e:
    logging.error(f"Failed to change directory: {e}")


class WorkerUltraNode:
    def __init__(self, objective: str, openai_api_key: str):
        self.openai_api_key = openai_api_key

        if not isinstance(objective, str):
            raise TypeError("Objective must be a string")
        if not objective:
            raise ValueError("Objective cannot be empty")
        self.objective = objective

        toolsets: List[BaseToolSet] = [
            Terminal(),
            CodeEditor(),
            RequestsGet(),
            ExitConversation(),
        ]
        handlers: Dict[FileType, BaseHandler] = {FileType.DATAFRAME: CsvToDataframe()}

        if os.environ.get("USE_GPU", False):
            import torch
            from swarms.tools.main import ImageCaptioning
            from swarms.tools.main import ImageEditing, InstructPix2Pix, Text2Image, VisualQuestionAnswering

            if torch.cuda.is_available():
                toolsets.extend(
                    [
                        Text2Image("cuda"),
                        ImageEditing("cuda"),
                        InstructPix2Pix("cuda"),
                        VisualQuestionAnswering("cuda"),
                    ]
                )
                handlers[FileType.IMAGE] = ImageCaptioning("cuda")

        try:
            self.agent_manager = AgentManager.create(toolsets=toolsets)
            self.file_handler = FileHandler(handlers=handlers, path=BASE_DIR)
            self.uploader = StaticUploader.from_settings(
                path=BASE_DIR / "static", endpoint="static"
            )

            self.session = self.agent_manager.create_executor(objective, openai_api_key=self.openai_api_key)
        except Exception as e:
            logging.error(f"Error while initializing WorkerUltraNode: {str(e)}")
            raise e

    def execute_task(self):
        # The objective captured at construction time is used as the prompt
        promptedQuery = self.file_handler.handle(self.objective)

        try:
            res = self.session({"input": promptedQuery})
        except Exception as e:
            logging.error(f"Error while executing task: {str(e)}")
            return {"answer": str(e), "files": []}

        files = re.findall(r"\[file://\S*\]", res["output"])
        files = [file[1:-1].split("file://")[1] for file in files]

        return {
            "answer": res["output"],
            "files": [self.uploader.upload(file) for file in files],
        }

    def execute(self):
        try:
            # The prompt is not needed here either
            return self.execute_task()
        except Exception as e:
            logging.error(f"Error while executing: {str(e)}")
            raise e


def WorkerUltra(objective: str, openai_api_key: str):
    worker_node = WorkerUltraNode(objective, openai_api_key)
    # Return the result of the execution
    return worker_node.execute()
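# A minimal usage sketch (the objective and key strings are placeholders):
#
#     result = WorkerUltra("Fetch the project README and summarize it", "sk-...")
#     print(result["answer"], result["files"])
#
# Each call builds a fresh WorkerUltraNode whose executor session is keyed on
# the objective string.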
@@ -1,3 +1,2 @@
from .WorkerNode import worker_node
from .WorkerUltraNode import WorkerUltraNode
File diff suppressed because it is too large
@@ -1,95 +0,0 @@
# General
import os
import pandas as pd
from langchain.experimental.autonomous_agents.autogpt.agent import AutoGPT

from langchain.chat_models import ChatOpenAI
from langchain.agents.agent_toolkits.pandas.base import create_pandas_dataframe_agent
from langchain.docstore.document import Document

import asyncio
import nest_asyncio

# Tools
from contextlib import contextmanager
from typing import Optional
from langchain.agents import tool

from langchain.tools.file_management.read import ReadFileTool
from langchain.tools.file_management.write import WriteFileTool
from langchain.tools import BaseTool, DuckDuckGoSearchRun

from langchain.text_splitter import RecursiveCharacterTextSplitter
from pydantic import Field
from langchain.chains.qa_with_sources.loading import load_qa_with_sources_chain, BaseCombineDocumentsChain

# Memory
import faiss
from langchain.vectorstores import FAISS
from langchain.docstore import InMemoryDocstore
from langchain.embeddings import OpenAIEmbeddings

from langchain.tools.human.tool import HumanInputRun
# from swarms.agents.workers.auto_agent import
from swarms.agents.workers.visual_worker import multimodal_agent_tool
from swarms.tools.main import Terminal, CodeWriter, CodeEditor, process_csv, WebpageQATool


class WorkerAgent:
    def __init__(self, objective: str, api_key: str):
        self.objective = objective
        self.api_key = api_key
        self.worker = self.create_agent_worker()

    def create_agent_worker(self):
        os.environ['OPENAI_API_KEY'] = self.api_key

        llm = ChatOpenAI(model_name="gpt-4", temperature=1.0)
        embeddings_model = OpenAIEmbeddings()
        embedding_size = 1536
        index = faiss.IndexFlatL2(embedding_size)
        vectorstore = FAISS(embeddings_model.embed_query, index, InMemoryDocstore({}), {})

        query_website_tool = WebpageQATool(qa_chain=load_qa_with_sources_chain(llm))
        web_search = DuckDuckGoSearchRun()

        tools = [
            web_search,
            WriteFileTool(root_dir="./data"),
            ReadFileTool(root_dir="./data"),
            multimodal_agent_tool,
            process_csv,
            query_website_tool,
            Terminal,
            CodeWriter,
            CodeEditor,
        ]

        agent_worker = AutoGPT.from_llm_and_tools(
            ai_name="WorkerX",
            ai_role="Assistant",
            tools=tools,
            llm=llm,
            memory=vectorstore.as_retriever(search_kwargs={"k": 8}),
            human_in_the_loop=True,
        )

        agent_worker.chain.verbose = True

        return agent_worker


# objective = "Your objective here"
# api_key = "Your OpenAI API key here"
# worker_agent = WorkerAgent(objective, api_key)
@@ -1,96 +0,0 @@
from swarms.tools.agent_tools import *
from langchain.tools import BaseTool
from langchain.callbacks.manager import (
    AsyncCallbackManagerForToolRun,
    CallbackManagerForToolRun,
)
from typing import List, Any, Dict, Optional, Type
from langchain.memory.chat_message_histories import FileChatMessageHistory

import logging
from pydantic import BaseModel, Extra

logging.basicConfig(level=logging.DEBUG, format='%(asctime)s - %(levelname)s - %(message)s')


class WorkerNode:
    """Useful for when you need to spawn an autonomous agent instance as a worker to accomplish complex tasks, it can search the internet or spawn child multi-modality models to process and generate images and text or audio and so on"""

    def __init__(self, llm, tools, vectorstore):
        self.llm = llm
        self.tools = tools
        self.vectorstore = vectorstore
        self.agent = None

    def create_agent(self, ai_name, ai_role, human_in_the_loop, search_kwargs):
        logging.info("Creating agent in WorkerNode")
        self.agent = AutoGPT.from_llm_and_tools(
            ai_name=ai_name,
            ai_role=ai_role,
            tools=self.tools,
            llm=self.llm,
            memory=self.vectorstore.as_retriever(search_kwargs=search_kwargs),
            human_in_the_loop=human_in_the_loop,
            chat_history_memory=FileChatMessageHistory("chat_history.txt"),
        )
        self.agent.chain.verbose = True

    def add_tool(self, tool: Tool):
        self.tools.append(tool)

    def run(self, prompt: str) -> str:
        if not isinstance(prompt, str):
            raise TypeError("Prompt must be a string")

        if not prompt:
            raise ValueError("Prompt is empty")

        self.agent.run([f"{prompt}"])
        return "Task completed by WorkerNode"


worker_tool = Tool(
    name="WorkerNode AI Agent",
    func=WorkerNode.run,
    description="Useful for when you need to spawn an autonomous agent instance as a worker to accomplish complex tasks, it can search the internet or spawn child multi-modality models to process and generate images and text or audio and so on"
)


class WorkerNodeInitializer:
    def __init__(self, openai_api_key):
        self.openai_api_key = openai_api_key

    def initialize_llm(self, llm_class, temperature=0.5):
        return llm_class(openai_api_key=self.openai_api_key, temperature=temperature)

    def initialize_tools(self, llm_class):
        llm = self.initialize_llm(llm_class)
        web_search = DuckDuckGoSearchRun()
        tools = [
            web_search,
            WriteFileTool(root_dir=ROOT_DIR),
            ReadFileTool(root_dir=ROOT_DIR),
            process_csv,
            WebpageQATool(qa_chain=load_qa_with_sources_chain(llm)),
        ]
        return tools

    def initialize_vectorstore(self):
        embeddings_model = OpenAIEmbeddings(openai_api_key=self.openai_api_key)
        embedding_size = 1536
        index = faiss.IndexFlatL2(embedding_size)
        return FAISS(embeddings_model.embed_query, index, InMemoryDocstore({}), {})

    def create_worker_node(self, llm_class=ChatOpenAI):
        worker_tools = self.initialize_tools(llm_class)
        vectorstore = self.initialize_vectorstore()
        worker_node = WorkerNode(llm=self.initialize_llm(llm_class), tools=worker_tools, vectorstore=vectorstore)
        worker_node.create_agent(ai_name="Swarm Worker AI Assistant", ai_role="Assistant", human_in_the_loop=False, search_kwargs={})
        return worker_node


def worker_node(openai_api_key):
    initializer = WorkerNodeInitializer(openai_api_key)
    worker_node = initializer.create_worker_node()
    return worker_node
@@ -1,74 +0,0 @@
import os
import re
from pathlib import Path
from typing import Dict, List

from swarms.agents.utils.manager import AgentManager
from swarms.utils.utils import BaseHandler, FileHandler, FileType
from swarms.tools.main import CsvToDataframe, ExitConversation, RequestsGet, CodeEditor, Terminal
from swarms.tools.main import BaseToolSet
from swarms.utils.utils import StaticUploader

BASE_DIR = Path(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
os.chdir(BASE_DIR / os.environ["PLAYGROUND_DIR"])


class UltraNode:
    def __init__(self, objective: str):
        self.objective = objective

        toolsets: List[BaseToolSet] = [
            Terminal(),
            CodeEditor(),
            RequestsGet(),
            ExitConversation(),
        ]
        handlers: Dict[FileType, BaseHandler] = {FileType.DATAFRAME: CsvToDataframe()}

        if os.environ["USE_GPU"]:
            import torch
            from swarms.tools.main import ImageCaptioning
            from swarms.tools.main import ImageEditing, InstructPix2Pix, Text2Image, VisualQuestionAnswering

            if torch.cuda.is_available():
                toolsets.extend(
                    [
                        Text2Image("cuda"),
                        ImageEditing("cuda"),
                        InstructPix2Pix("cuda"),
                        VisualQuestionAnswering("cuda"),
                    ]
                )
                handlers[FileType.IMAGE] = ImageCaptioning("cuda")

        self.agent_manager = AgentManager.create(toolsets=toolsets)
        self.file_handler = FileHandler(handlers=handlers, path=BASE_DIR)
        self.uploader = StaticUploader.from_settings(
            path=BASE_DIR / "static", endpoint="static"
        )

        self.session = self.agent_manager.create_executor(objective)

    def execute_task(self):
        # The objective is used as the prompt, so no argument is needed
        promptedQuery = self.file_handler.handle(self.objective)

        try:
            res = self.session({"input": promptedQuery})
        except Exception as e:
            return {"answer": str(e), "files": []}

        files = re.findall(r"\[file://\S*\]", res["output"])
        files = [file[1:-1].split("file://")[1] for file in files]

        return {
            "answer": res["output"],
            "files": [self.uploader.upload(file) for file in files],
        }

    def execute(self):
        # The prompt is not needed here either
        return self.execute_task()

# from worker_node import UltraNode
# node = UltraNode('objective')
# result = node.execute()
@@ -0,0 +1,2 @@
# Many bosses + workers operating in unison.
# kye gomez, Jul 13 4:01pm: we can scale up the number of swarms working on a problem with
# `hivemind(swarms=4)`, or `swarms="auto"`, which will scale the agents depending on the complexity.
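# A hypothetical sketch of what that interface could look like; `hivemind` and
# its auto-scaling heuristic are assumptions, not an implemented API:
#
#     def hivemind(objective, api_key, swarms=4):
#         if swarms == "auto":
#             # naive heuristic: scale the swarm count with objective length
#             swarms = max(1, min(8, len(objective) // 100))
#         return [Swarms(api_key).run_swarms(objective) for _ in range(swarms)]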
@@ -1,522 +1,186 @@
from swarms.tools.agent_tools import *
from swarms.agents.workers.WorkerNode import WorkerNode, worker_node
from swarms.agents.boss.BossNode import BossNode
from swarms.agents.workers.WorkerUltraNode import WorkerUltra
from swarms.agents.workers.worker_agent import worker_tool
# from swarms.agents.workers.omni_worker import OmniWorkerAgent
# from swarms.tools.main import RequestsGet, ExitConversation
# visual agent

import logging

logging.basicConfig(level=logging.DEBUG, format='%(asctime)s - %(levelname)s - %(message)s')

class Swarms:
    def __init__(self, openai_api_key=""):
        # openai_api_key: the OpenAI API key. Default is empty.
        if not openai_api_key:
            logging.error("OpenAI key is not provided")
            raise ValueError("OpenAI API key is required")

        self.openai_api_key = openai_api_key

    def initialize_llm(self, llm_class, temperature=0.5):
        """
        Init LLM

        Params:
            llm_class (class): The language model class. Default is OpenAI.
            temperature (float): The temperature for the language model. Default is 0.5.
        """
        try:
            # Initialize language model
            return llm_class(openai_api_key=self.openai_api_key, temperature=temperature)
        except Exception as e:
            logging.error(f"Failed to initialize language model: {e}")
            raise

    def initialize_tools(self, llm_class):
        """
        Init tools

        Params:
            llm_class (class): The language model class. Default is OpenAI.
        """
        try:
            llm = self.initialize_llm(llm_class)
            # Initialize tools
            web_search = DuckDuckGoSearchRun()
            tools = [
                web_search,
                WriteFileTool(root_dir=ROOT_DIR),
                ReadFileTool(root_dir=ROOT_DIR),
                process_csv,
                WebpageQATool(qa_chain=load_qa_with_sources_chain(llm)),

                # CodeEditor,
                # Terminal,
                # RequestsGet,
                # ExitConversation

                # code editor + terminal editor + visual agent
            ]
            assert tools is not None, "tools is not initialized"
            return tools
        except Exception as e:
            logging.error(f"Failed to initialize tools: {e}")
            raise

    def initialize_vectorstore(self):
        """
        Init vector store
        """
        try:
            embeddings_model = OpenAIEmbeddings(openai_api_key=self.openai_api_key)
            embedding_size = 1536
            index = faiss.IndexFlatL2(embedding_size)
            return FAISS(embeddings_model.embed_query, index, InMemoryDocstore({}), {})
        except Exception as e:
            logging.error(f"Failed to initialize vector store: {e}")
            raise

    def initialize_worker_node(self, worker_tools, vectorstore, llm_class=ChatOpenAI, ai_name="Swarm Worker AI Assistant"):
        """
        Init WorkerNode

        Params:
            worker_tools (list): The list of worker tools.
            vectorstore (object): The vector store object.
            llm_class (class): The language model class. Default is ChatOpenAI.
            ai_name (str): The AI name. Default is "Swarm Worker AI Assistant".
        """
        try:
            # Initialize worker node
            llm = self.initialize_llm(llm_class)
            worker_node = WorkerNode(llm=llm, tools=worker_tools, vectorstore=vectorstore)
            worker_node.create_agent(ai_name=ai_name, ai_role="Assistant", human_in_the_loop=False, search_kwargs={})  # add search kwargs

            worker_node_tool = Tool(name="WorkerNode AI Agent", func=worker_node.run, description="Input: an objective with a todo list for that objective. Output: your task completed: Please be very clear what the objective and task instructions are. The Swarm worker agent is useful for when you need to spawn an autonomous agent instance as a worker to accomplish any complex task; it can search the internet or write code or spawn child multi-modality models to process and generate images and text or audio and so on")
            return worker_node_tool
        except Exception as e:
            logging.error(f"Failed to initialize worker node: {e}")
            raise

    def initialize_boss_node(self, vectorstore, worker_node, llm_class=OpenAI, max_iterations=5, verbose=False):
        """
        Init BossNode

        Params:
            vectorstore (object): The vector store object.
            worker_node (object): The worker node object.
            llm_class (class): The language model class. Default is OpenAI.
            max_iterations (int): The maximum number of iterations. Default is 5.
            verbose (bool): Debug mode. Default is False.
        """
        try:
            # Initialize boss node
            llm = self.initialize_llm(llm_class)
            todo_prompt = PromptTemplate.from_template("You are a boss planner in a swarm who is an expert at coming up with a todo list for a given objective and then creating a worker to help you accomplish your task. Come up with a todo list for this objective: {objective} and then spawn a worker agent to complete the task for you. Always spawn a worker agent after creating a plan and pass the objective and plan to the worker agent.")
            todo_chain = LLMChain(llm=llm, prompt=todo_prompt)

            tools = [
                Tool(name="TODO", func=todo_chain.run, description="Useful for when you need to come up with todo lists. Input: an objective to create a todo list for. Output: a todo list for that objective. Please be very clear what the objective is!"),
                worker_node,
            ]
            suffix = """Question: {task}\n{agent_scratchpad}"""
            prefix = """You are a Boss in a swarm who performs one task based on the following objective: {objective}. Take into account these previously completed tasks: {context}.\n """

            prompt = ZeroShotAgent.create_prompt(tools, prefix=prefix, suffix=suffix, input_variables=["objective", "task", "context", "agent_scratchpad"])
            llm_chain = LLMChain(llm=llm, prompt=prompt)
            agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=[tool.name for tool in tools])

            agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=verbose)
            return BossNode(llm, vectorstore, agent_executor, max_iterations=max_iterations)
        except Exception as e:
            logging.error(f"Failed to initialize boss node: {e}")
            raise

    def run_swarms(self, objective):
        """
        Run the swarm with the given objective

        Params:
            objective (str): The task
        """
        try:
            worker_tools = self.initialize_tools(OpenAI)
            assert worker_tools is not None, "worker_tools is not initialized"

            vectorstore = self.initialize_vectorstore()
            worker_node = self.initialize_worker_node(worker_tools, vectorstore)

            boss_node = self.initialize_boss_node(vectorstore, worker_node)

            task = boss_node.create_task(objective)
            return boss_node.execute_task(task)
        except Exception as e:
            logging.error(f"An error occurred in run_swarms: {e}")
            raise

# usage
def swarm(api_key="", objective=""):
    """
    Run the swarm with the given API key and objective.

    Parameters:
    api_key (str): The OpenAI API key. Default is an empty string.
    objective (str): The objective. Default is an empty string.

    Returns:
    The result of the swarm.
    """

    if not api_key:
        logging.error("OpenAI API key is not provided")
        raise ValueError("OpenAI API key is not provided")
    if not objective:
        logging.error("Objective is not provided")
        raise ValueError("Objective is required")
    try:
        swarms = Swarms(api_key)
        return swarms.run_swarms(objective)
    except Exception as e:
        logging.error(f"An error occurred in swarm: {e}")
        raise


# # Use the function
# api_key = "APIKEY"
# objective = "What is the capital of the UK?"
# result = swarm(api_key, objective)
# print(result)  # Prints: "The capital of the UK is London."

#     def run_swarms(self, objective, run_as=None):
#         try:
#             # Run the swarm with the given objective
#             worker_tools = self.initialize_tools(OpenAI)
#             assert worker_tools is not None, "worker_tools is not initialized"

#             vectorstore = self.initialize_vectorstore()
#             worker_node = self.initialize_worker_node(worker_tools, vectorstore)

#             if run_as.lower() == 'worker':
#                 tool_input = {'prompt': objective}
#                 return worker_node.run(tool_input)
#             else:
#                 boss_node = self.initialize_boss_node(vectorstore, worker_node)
#                 task = boss_node.create_task(objective)
#                 return boss_node.execute_task(task)
#         except Exception as e:
#             logging.error(f"An error occurred in run_swarms: {e}")
#             raise

#omni agent ===> working
|
||||
# class Swarms:
|
||||
# def __init__(self,
|
||||
# openai_api_key,
|
||||
# # omni_api_key=None,
|
||||
# # omni_api_endpoint=None,
|
||||
# # omni_api_type=None
|
||||
# ):
|
||||
# self.openai_api_key = openai_api_key
|
||||
# # self.omni_api_key = omni_api_key
|
||||
# # self.omni_api_endpoint = omni_api_endpoint
|
||||
# # self.omni_api_key = omni_api_type
|
||||
|
||||
# # if omni_api_key and omni_api_endpoint and omni_api_type:
|
||||
# # self.omni_worker_agent = OmniWorkerAgent(omni_api_key, omni_api_endpoint, omni_api_type)
|
||||
# # else:
|
||||
# # self.omni_worker_agent = None
|
||||
|
||||
# def initialize_llm(self):
|
||||
# # Initialize language model
|
||||
# return ChatOpenAI(model_name="gpt-4", temperature=1.0, openai_api_key=self.openai_api_key)
|
||||
|
||||
# def initialize_tools(self, llm):
|
||||
# # Initialize tools
|
||||
# web_search = DuckDuckGoSearchRun()
|
||||
# tools = [
|
||||
# web_search,
|
||||
# WriteFileTool(root_dir=ROOT_DIR),
|
||||
# ReadFileTool(root_dir=ROOT_DIR),
|
||||
# process_csv,
|
||||
# WebpageQATool(qa_chain=load_qa_with_sources_chain(llm)),
|
||||
# ]
|
||||
# # if self.omni_worker_agent:
|
||||
# # tools.append(self.omni_worker_agent.chat) #add omniworker agent class
|
||||
# return tools
|
||||
|
||||
# def initialize_vectorstore(self):
|
||||
# # Initialize vector store
|
||||
# embeddings_model = OpenAIEmbeddings()
|
||||
# embedding_size = 1536
|
||||
# index = faiss.IndexFlatL2(embedding_size)
|
||||
# return FAISS(embeddings_model.embed_query, index, InMemoryDocstore({}), {})
|
||||
|
||||
# def initialize_worker_node(self, llm, worker_tools, vectorstore):
|
||||
# # Initialize worker node
|
||||
# worker_node = WorkerNode(llm=llm, tools=worker_tools, vectorstore=vectorstore)
|
||||
# worker_node.create_agent(ai_name="AI Assistant", ai_role="Assistant", human_in_the_loop=False, search_kwargs={})
|
||||
# return worker_node
|
||||
|
||||
# def initialize_boss_node(self, llm, vectorstore, worker_node):
|
||||
# # Initialize boss node
|
||||
# todo_prompt = PromptTemplate.from_template("You are a planner who is an expert at coming up with a todo list for a given objective. Come up with a todo list for this objective: {objective}")
|
||||
# todo_chain = LLMChain(llm=OpenAI(temperature=0), prompt=todo_prompt)
|
||||
# tools = [
|
||||
# Tool(name="TODO", func=todo_chain.run, description="useful for when you need to come up with todo lists. Input: an objective to create a todo list for. Output: a todo list for that objective. Please be very clear what the objective is!"),
|
||||
# worker_node,
|
||||
# ]
|
||||
# suffix = """Question: {task}\n{agent_scratchpad}"""
|
||||
# prefix = """You are an Boss in a swarm who performs one task based on the following objective: {objective}. Take into account these previously completed tasks: {context}.\n"""
|
||||
# prompt = ZeroShotAgent.create_prompt(tools, prefix=prefix, suffix=suffix, input_variables=["objective", "task", "context", "agent_scratchpad"],)
|
||||
# llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)
|
||||
# agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=[tool.name for tool in tools])
|
||||
# agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)
|
||||
# return BossNode(self.openai_api_key, llm, vectorstore, agent_executor, verbose=True, max_iterations=5)
|
||||
|
||||
# def run_swarms(self, objective):
|
||||
# # Run the swarm with the given objective
|
||||
# llm = self.initialize_llm()
|
||||
# worker_tools = self.initialize_tools(llm)
|
||||
# vectorstore = self.initialize_vectorstore()
|
||||
# worker_node = self.initialize_worker_node(llm, worker_tools, vectorstore)
|
||||
# boss_node = self.initialize_boss_node(llm, vectorstore, worker_node)
|
||||
# task = boss_node.create_task(objective)
|
||||
# boss_node.execute_task(task)
|
||||
# worker_node.run_agent(objective)
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
# class Swarms:
|
||||
# def __init__(self, num_nodes: int, llm: BaseLLM, self_scaling: bool):
|
||||
# self.nodes = [WorkerNode(llm) for _ in range(num_nodes)]
|
||||
# self.self_scaling = self_scaling
|
||||
|
||||
# def add_worker(self, llm: BaseLLM):
|
||||
# self.nodes.append(WorkerNode(llm))
|
||||
|
||||
# def remove_workers(self, index: int):
|
||||
# self.nodes.pop(index)
|
||||
|
||||
# def execute(self, task):
|
||||
# #placeholer for main execution logic
|
||||
# pass
|
||||
|
||||
# def scale(self):
|
||||
# #placeholder for self scaling logic
|
||||
# pass
|
||||
|
||||
|
||||
|
||||
#special classes
|
||||
|
||||
# class HierarchicalSwarms(Swarms):
|
||||
# def execute(self, task):
|
||||
# pass
|
||||
|
||||
|
||||
# class CollaborativeSwarms(Swarms):
|
||||
# def execute(self, task):
|
||||
# pass
|
||||
|
||||
# class CompetitiveSwarms(Swarms):
|
||||
# def execute(self, task):
|
||||
# pass
|
||||
|
||||
# class MultiAgentDebate(Swarms):
|
||||
# def execute(self, task):
|
||||
# pass
|
||||
|
||||
|
||||
#======================================> WorkerNode

# class MetaWorkerNode:
#     def __init__(self, llm, tools, vectorstore):
#         self.llm = llm
#         self.tools = tools
#         self.vectorstore = vectorstore
#         self.agent = None
#         self.meta_chain = None

#     def init_chain(self, instructions):
#         self.agent = WorkerNode(self.llm, self.tools, self.vectorstore)
#         self.agent.create_agent("Assistant", "Assistant Role", False, {})

#     def initialize_meta_chain(self):
#         meta_template = """
#         Assistant has just had the below interactions with a User. Assistant followed their "Instructions" closely. Your job is to critique the Assistant's performance and then revise the Instructions so that Assistant would quickly and correctly respond in the future.

#         ####

#         {chat_history}

#         ####

#         Please reflect on these interactions.

#         You should first critique Assistant's performance. What could Assistant have done better? What should the Assistant remember about this user? Are there things this user always wants? Indicate this with "Critique: ...".

#         You should next revise the Instructions so that Assistant would quickly and correctly respond in the future. Assistant's goal is to satisfy the user in as few interactions as possible. Assistant will only see the new Instructions, not the interaction history, so anything important must be summarized in the Instructions. Don't forget any important details in the current Instructions! Indicate the new Instructions by "Instructions: ...".
#         """

#         meta_prompt = PromptTemplate(
#             input_variables=["chat_history"], template=meta_template
#         )

#         # Store the chain on self so main() can call self.meta_chain.predict(...)
#         self.meta_chain = LLMChain(
#             llm=OpenAI(temperature=0),
#             prompt=meta_prompt,
#             verbose=True,
#         )
#         return self.meta_chain

#     def get_chat_history(self, chain_memory):
#         memory_key = chain_memory.memory_key
#         # load_memory_variables expects an inputs dict in LangChain
#         chat_history = chain_memory.load_memory_variables({})[memory_key]
#         return chat_history

#     def get_new_instructions(self, meta_output):
#         delimiter = "Instructions: "
#         new_instructions = meta_output[meta_output.find(delimiter) + len(delimiter):]
#         return new_instructions
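# A quick, runnable illustration of the "Instructions: " delimiter parsing
# performed by get_new_instructions above (the sample text is invented):
_sample_meta_output = (
    "Critique: Assistant was too verbose.\n"
    "Instructions: Answer in at most three bullet points."
)
_delimiter = "Instructions: "
print(_sample_meta_output[_sample_meta_output.find(_delimiter) + len(_delimiter):])
# -> Answer in at most three bullet points.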
#     def main(self, task, max_iters=3, max_meta_iters=5):
#         failed_phrase = "task failed"
#         success_phrase = "task succeeded"
#         key_phrases = [success_phrase, failed_phrase]

#         instructions = "None"
#         for i in range(max_meta_iters):
#             print(f"[Episode {i + 1}/{max_meta_iters}]")
#             self.init_chain(instructions)
#             output = self.agent.perform('Assistant', {'request': task})
#             for j in range(max_iters):
#                 print(f"(Step {j + 1}/{max_iters})")
#                 print(f"Assistant: {output}")
#                 print("Human: ")
#                 human_input = input()
#                 if any(phrase in human_input.lower() for phrase in key_phrases):
#                     break
#                 output = self.agent.perform('Assistant', {'request': human_input})
#             if success_phrase in human_input.lower():
#                 print("You succeeded! Thanks for playing!")
#                 return
#             self.initialize_meta_chain()
#             # NOTE: the agent's memory object still needs to be passed here
#             meta_output = self.meta_chain.predict(chat_history=self.get_chat_history())
#             print(f"Feedback: {meta_output}")
#             instructions = self.get_new_instructions(meta_output)
#             print(f"New Instructions: {instructions}")
#             print("\n" + "#" * 80 + "\n")
#         print("You failed! Thanks for playing!")


# # Initialize an instance of MetaWorkerNode
# meta_worker_node = MetaWorkerNode(llm=OpenAI, tools=tools, vectorstore=vectorstore)

# # Specify a task and interact with the agent
# task = "Provide a systematic argument for why we should always eat pasta with olives"
# meta_worker_node.main(task)
####################################################################### => Boss Node
# NOTE: the opening of this helper was lost in the diff; the signature and
# the try: below are reconstructed from the surviving return/except lines.
def swarm(api_key, objective):
    try:
        swarms = Swarms(api_key)
        return swarms.run_swarms(objective)
    except Exception as e:
        logging.error(f"An error occurred in swarm: {e}")
        raise
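# Hypothetical usage of the reconstructed wrapper above; api_key and the
# objective string here are assumptions for illustration only.
# result = swarm(api_key, "Summarize the repository's API endpoints")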
@ -0,0 +1,21 @@
import os

from swarms.swarms import WorkerUltra

api_key = os.getenv("OPENAI_API_KEY")

# Define an objective
objective = """
Please make a web GUI for using the HTTP API server.
The name of it is Swarms.
You can check the server code at ./main.py.
The server is served on localhost:8000.
Users should be able to write text input as 'query' and a URL array as 'files', and check the response.
User input should be delivered in JSON format.
I want it to have a neumorphism style. Serve it on port 4500.
"""

node = WorkerUltra(objective, openai_api_key=api_key)

result = node.execute()
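# Not part of the original example: print the result, on the assumption
# that execute() returns the worker's final answer.
print(result)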