Former-commit-id: fee6539b367169ab719ef7634e2776d89f61e022
pull/133/head
parent 374efe3411
commit 672c8bc2e8
@@ -0,0 +1,3 @@
.env
__pycache__
.venv
@@ -0,0 +1,2 @@
.gritmodules
*.log
@@ -0,0 +1,75 @@
# Swarms Documentation

## Overview

The Swarm module includes the implementation of two classes, `WorkerNode` and `BossNode`, which respectively represent a worker agent and a boss agent. A worker agent is responsible for completing given tasks, while a boss agent is responsible for creating and managing tasks for the worker agent(s).

## Key Classes

### WorkerNode

```python
class WorkerNode:
```

The WorkerNode class represents an autonomous worker agent that can perform a range of tasks.

__Methods__:

- `create_agent(ai_name: str, ai_role: str, human_in_the_loop: bool, search_kwargs: dict) -> None`:

This method creates a new autonomous agent that can complete tasks. The agent utilizes several tools, such as search engines, a file writer/reader, and a multi-modal visual tool. The agent's configuration is customizable through the method parameters.

```python
# Example usage
worker_node = WorkerNode(llm, tools, vectorstore)
worker_node.create_agent('test_agent', 'test_role', False, {})
```

- `run_agent(prompt: str) -> None`:

This method runs the agent on a given task, defined by the `prompt` parameter.

```python
# Example usage
worker_node = WorkerNode(llm, tools, vectorstore)
worker_node.create_agent('test_agent', 'test_role', False, {})
worker_node.run_agent('Calculate the square root of 144.')
```

### BossNode

```python
class BossNode:
```

The BossNode class represents a manager agent that can create tasks and control their execution.

__Methods__:

- `create_task(objective: str) -> dict`:

This method creates a new task based on the provided `objective`. The created task is a dictionary with the objective as its value.

```python
# Example usage
boss_node = BossNode(llm, vectorstore, task_execution_chain, False, 3)
task = boss_node.create_task('Find the square root of 144.')
```

- `execute_task(task: dict) -> None`:

This method triggers the execution of a given task.

```python
# Example usage
boss_node = BossNode(llm, vectorstore, task_execution_chain, False, 3)
task = boss_node.create_task('Find the square root of 144.')
boss_node.execute_task(task)
```

### Note

Before creating a WorkerNode or BossNode, make sure to initialize the lower-level model (`llm`), tools, and vectorstore, which are used as parameters in the constructors of the two classes.

In addition, the WorkerNode class uses the MultiModalVisualAgentTool, a custom tool that enables the worker agent to run multi-modal visual tasks. Ensure that this tool is correctly initialized before running the WorkerNode.

This documentation provides an overview of the main functionalities of the Swarm module. For additional details and advanced functionality, please review the source code and the accompanying comments.
@@ -0,0 +1,214 @@
## Swarming Architectures

Here are four examples of swarming architectures that could be applied in this context.

1. **Hierarchical Swarms**: In this architecture, a 'lead' agent coordinates the efforts of other agents, distributing tasks based on each agent's unique strengths. The lead agent might be equipped with additional functionality or decision-making capabilities to effectively manage the swarm.

2. **Collaborative Swarms**: Here, each agent in the swarm works in parallel, potentially on different aspects of a task. They then collectively determine the best output, often through a voting or consensus mechanism.

3. **Competitive Swarms**: In this setup, multiple agents work on the same task independently. The output from the agent that produces the highest-confidence or highest-quality result is then selected. This can often lead to more robust outputs, as the competition drives each agent to perform at its best.

4. **Multi-Agent Debate**: Here, multiple agents debate a topic. The output from the agent that produces the highest-confidence or highest-quality result is then selected. This can lead to more robust outputs, as the debate drives each agent to perform at its best.
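A minimal sketch of the selection step shared by the collaborative and competitive patterns above: each agent returns a candidate answer with a self-reported confidence score, and the swarm picks a winner either by majority vote or by highest confidence. The agent outputs and scores below are hypothetical, not produced by the Swarms classes.

```python
from collections import Counter

def select_by_vote(candidates):
    """Collaborative swarm: majority vote over the agents' answers."""
    answers = [c["answer"] for c in candidates]
    return Counter(answers).most_common(1)[0][0]

def select_by_confidence(candidates):
    """Competitive swarm: keep the single highest-confidence answer."""
    return max(candidates, key=lambda c: c["confidence"])["answer"]

# Hypothetical outputs from three agents working on the same task
candidates = [
    {"agent": "a1", "answer": "12", "confidence": 0.9},
    {"agent": "a2", "answer": "12", "confidence": 0.6},
    {"agent": "a3", "answer": "13", "confidence": 0.7},
]

print(select_by_vote(candidates))        # majority answer
print(select_by_confidence(candidates))  # highest-confidence answer
```

In this toy case both strategies agree; in general, voting favors consistency across agents while confidence selection favors a single strong agent.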
# Ideas

A swarm, particularly in the context of distributed computing, refers to a large number of coordinated agents or nodes that work together to solve a problem. The specific requirements of a swarm might vary depending on the task at hand, but some general requirements include:

1. **Distributed Nature**: The swarm should consist of multiple individual units or nodes, each capable of functioning independently.

2. **Coordination**: The nodes in the swarm need to coordinate with each other to ensure they're working together effectively. This might involve communication between nodes, or it could be achieved through a central orchestrator.

3. **Scalability**: A well-designed swarm system should be able to scale up or down as needed, adding or removing nodes based on the task load.

4. **Resilience**: If a node in the swarm fails, it shouldn't bring down the entire system. Instead, other nodes should be able to pick up the slack.

5. **Load Balancing**: Tasks should be distributed evenly across the nodes in the swarm to avoid overloading any single node.

6. **Interoperability**: Each node should be able to interact with others, regardless of differences in underlying hardware or software.

Integrating these requirements with Large Language Models (LLMs) can be done as follows:

1. **Distributed Nature**: Each LLM agent can be considered a node in the swarm. These agents can be distributed across multiple servers or even geographically dispersed data centers.

2. **Coordination**: An orchestrator can manage the LLM agents, assigning tasks, coordinating responses, and ensuring effective collaboration between agents.

3. **Scalability**: As the demand for processing power increases or decreases, the number of LLM agents can be adjusted accordingly.

4. **Resilience**: If an LLM agent goes offline or fails, the orchestrator can assign its tasks to other agents, ensuring the swarm continues functioning smoothly.

5. **Load Balancing**: The orchestrator can also handle load balancing, ensuring tasks are evenly distributed amongst the LLM agents.

6. **Interoperability**: By standardizing the input and output formats of the LLM agents, they can effectively communicate and collaborate, regardless of the specific model or configuration of each agent.
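One hedged illustration of the standardization point: a shared message envelope that every agent serializes to and from, so heterogeneous models can still interoperate. The field names and message kinds here are assumptions for illustration, not part of the Swarms codebase.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class AgentMessage:
    """Standardized envelope exchanged between LLM agents and the orchestrator."""
    sender: str    # agent or orchestrator id
    task_id: str   # task this message belongs to
    kind: str      # e.g. "task_assignment" or "task_completion"
    content: str   # prompt text or result text

    def to_json(self) -> str:
        return json.dumps(asdict(self))

    @staticmethod
    def from_json(raw: str) -> "AgentMessage":
        return AgentMessage(**json.loads(raw))

# Round trip: any agent can parse what any other agent emits
msg = AgentMessage("orchestrator", "t-1", "task_assignment", "Summarize the report.")
decoded = AgentMessage.from_json(msg.to_json())
print(decoded == msg)  # True
```

Because every agent speaks this one format, swapping out the underlying model behind an agent does not change the communication contract.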
In terms of architecture, the swarm might look something like this:

```
                                (Orchestrator)
                                /            \
Tools + Vector DB -- (LLM Agent)---(Communication Layer)    (Communication Layer)---(LLM Agent)-- Tools + Vector DB
               /           |                                     |            \
   (Task Assignment)  (Task Completion)              (Task Assignment)   (Task Completion)
```

Each LLM agent communicates with the orchestrator through a dedicated communication layer. The orchestrator assigns tasks to each LLM agent, which the agents then complete and return. This setup allows for a high degree of flexibility, scalability, and robustness.
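A minimal sketch of this assign-complete-return loop, with plain Python callables standing in for LLM agents and a direct call standing in for the communication layer; the agent functions and task list are hypothetical.

```python
def agent_a(task: str) -> str:
    # Stand-in for an LLM agent call
    return f"a:{task}"

def agent_b(task: str) -> str:
    return f"b:{task}"

def orchestrate(tasks, agents):
    """Assign each task to an agent round-robin and collect the completions."""
    results = {}
    for i, task in enumerate(tasks):
        agent = agents[i % len(agents)]   # simple load balancing across agents
        results[task] = agent(task)       # the "communication layer" is a direct call here
    return results

results = orchestrate(["t1", "t2", "t3"], [agent_a, agent_b])
print(results)  # {'t1': 'a:t1', 't2': 'b:t2', 't3': 'a:t3'}
```

In a real deployment the direct call would be replaced by whichever communication layer is chosen below (a message queue, REST, gRPC, or a shared database).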
## Communication Layer

Communication layers play a critical role in distributed systems, enabling interaction between different nodes (agents) and the orchestrator. Here are three potential communication layers for a distributed system, including their strengths and weaknesses:

1. **Message Queuing Systems (like RabbitMQ, Kafka)**:

    - Strengths: They are highly scalable, reliable, and designed for high-throughput systems. They also ensure delivery of messages and can persist them if necessary. Furthermore, they support various messaging patterns like publish/subscribe, which can be highly beneficial in a distributed system. They also have robust community support.

    - Weaknesses: They can add complexity to the system, including maintenance of the message broker. Moreover, they require careful configuration to perform optimally, and handling failures can sometimes be challenging.

2. **RESTful APIs**:

    - Strengths: REST is widely adopted, and most programming languages have libraries to easily create RESTful APIs. They leverage standard HTTP(S) protocols and methods and are straightforward to use. Also, they can be stateless, meaning each request contains all the necessary information, enabling scalability.

    - Weaknesses: For real-time applications, REST may not be the best fit due to its synchronous nature. Additionally, handling a large number of API requests can put a strain on the system, causing slowdowns or timeouts.

3. **gRPC (Google Remote Procedure Call)**:

    - Strengths: gRPC uses Protocol Buffers as its interface definition language, leading to smaller payloads and faster serialization/deserialization compared to JSON (commonly used in RESTful APIs). It supports bidirectional streaming and can use HTTP/2 features, making it excellent for real-time applications.

    - Weaknesses: gRPC is more complex to set up compared to REST. Protocol Buffers' binary format can be more challenging to debug than JSON. It's also not as widely adopted as REST, so tooling and support might be limited in some environments.

In the context of swarm LLMs, one could consider an **Omni-Vector Embedding Database** for communication. This database could store and manage the high-dimensional vectors produced by each LLM agent.

- Strengths: This approach would allow for similarity-based lookup and matching of LLM-generated vectors, which can be particularly useful for tasks that involve finding similar outputs or recognizing patterns.

- Weaknesses: An Omni-Vector Embedding Database might add complexity to the system in terms of setup and maintenance. It might also require significant computational resources, depending on the volume of data being handled and the complexity of the vectors. The handling and transmission of high-dimensional vectors could also pose challenges in terms of network load.
# Technical Analysis Document: Particle Swarm of AI Agents using Ocean Database

## Overview

The goal is to create a particle swarm of AI agents using the OpenAI API for the agents and the Ocean database as the communication space, where the embeddings act as particles. The swarm will work collectively to perform tasks and optimize their behavior based on the interaction with the Ocean database.

## Algorithmic Overview

1. Initialize the AI agents and the Ocean database.
2. Assign tasks to the AI agents.
3. AI agents use the OpenAI API to perform tasks and generate embeddings.
4. AI agents store their embeddings in the Ocean database.
5. AI agents query the Ocean database for relevant embeddings.
6. AI agents update their positions based on the retrieved embeddings.
7. Evaluate the performance of the swarm and update the agents' behavior accordingly.
8. Repeat steps 3-7 until a stopping criterion is met.

## Python Implementation Logic

1. **Initialize the AI agents and the Ocean database.**

```python
import openai
import oceandb
from oceandb.utils.embedding_functions import ImageBindEmbeddingFunction

# Initialize Ocean database
client = oceandb.Client()
text_embedding_function = ImageBindEmbeddingFunction(modality="text")
collection = client.create_collection("all-my-documents", embedding_function=text_embedding_function)

# Initialize AI agents
agents = initialize_agents(...)
```
2. **Assign tasks to the AI agents.**

```python
tasks = assign_tasks_to_agents(agents, ...)
```

3. **AI agents use the OpenAI API to perform tasks and generate embeddings.**

```python
def agent_perform_task(agent, task):
    # Perform the task using the OpenAI API
    result = perform_task_with_openai_api(agent, task)
    # Generate the embedding
    embedding = generate_embedding(result)
    return embedding

embeddings = [agent_perform_task(agent, task) for agent, task in zip(agents, tasks)]
```

4. **AI agents store their embeddings in the Ocean database.**

```python
def store_embeddings_in_database(embeddings, collection):
    for i, embedding in enumerate(embeddings):
        document_id = f"agent_{i}"
        collection.add(documents=[embedding], ids=[document_id])

store_embeddings_in_database(embeddings, collection)
```

5. **AI agents query the Ocean database for relevant embeddings.**

```python
def query_database_for_embeddings(agent, collection, n_results=1):
    query_result = collection.query(query_texts=[agent], n_results=n_results)
    return query_result

queried_embeddings = [query_database_for_embeddings(agent, collection) for agent in agents]
```

6. **AI agents update their positions based on the retrieved embeddings.**

```python
def update_agent_positions(agents, queried_embeddings):
    for agent, embedding in zip(agents, queried_embeddings):
        agent.update_position(embedding)

update_agent_positions(agents, queried_embeddings)
```

7. **Evaluate the performance of the swarm and update the agents' behavior accordingly.**

```python
def evaluate_swarm_performance(agents, ...):
    # Evaluate the performance of the swarm
    performance = compute_performance_metric(agents, ...)
    return performance

def update_agent_behavior(agents, performance):
    # Update agents' behavior based on swarm performance
    for agent in agents:
        agent.adjust_behavior(performance)

performance = evaluate_swarm_performance(agents, ...)
update_agent_behavior(agents, performance)
```

8. **Repeat steps 3-7 until a stopping criterion is met.**

```python
while not stopping_criterion_met():
    # Perform tasks and generate embeddings
    embeddings = [agent_perform_task(agent, task) for agent, task in zip(agents, tasks)]

    # Store embeddings in the Ocean database
    store_embeddings_in_database(embeddings, collection)

    # Query the Ocean database for relevant embeddings
    queried_embeddings = [query_database_for_embeddings(agent, collection) for agent in agents]

    # Update AI agent positions based on the retrieved embeddings
    update_agent_positions(agents, queried_embeddings)

    # Evaluate the performance of the swarm and update the agents' behavior accordingly
    performance = evaluate_swarm_performance(agents, ...)
    update_agent_behavior(agents, performance)
```

This code demonstrates the complete loop to repeat steps 3-7 until a stopping criterion is met. You will need to define the `stopping_criterion_met()` function, which could be based on a predefined number of iterations, a target performance level, or any other condition that indicates that the swarm has reached a desired state.
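As one hedged example of such a criterion, the check below stops after a maximum number of iterations or once performance crosses a target threshold; the iteration budget, threshold, and stand-in performance metric are illustrative assumptions, not part of the code above.

```python
MAX_ITERATIONS = 50
TARGET_PERFORMANCE = 0.95

iteration = 0
best_performance = 0.0

def stopping_criterion_met() -> bool:
    """Stop on an iteration budget, or earlier once the swarm is good enough."""
    return iteration >= MAX_ITERATIONS or best_performance >= TARGET_PERFORMANCE

# Simulate only the loop bookkeeping, without real agents or a real metric
while not stopping_criterion_met():
    iteration += 1
    best_performance = max(best_performance, iteration / 100)  # stand-in metric

print(iteration, best_performance)  # stops at the iteration budget: 50 0.5
```

In the real loop, `best_performance` would be updated from `evaluate_swarm_performance(...)` each pass instead of from this stand-in metric.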
@@ -0,0 +1,13 @@
Today, we stand at the verge of a revolution in artificial intelligence and machine learning. Individual models have accomplished incredible feats, achieving unprecedented levels of understanding and generating incredibly human-like text. But this is just the beginning.

In the future, we should expect more. These models, which we've seen perform so admirably in isolation, should be able to work together, as a team, a swarm. However, this kind of collaborative intelligence doesn't exist today. That's because the technology to seamlessly integrate these models and foster true inter-model collaboration has been missing, until now.

In attempting to create this swarm, we face numerous challenges, such as developing the necessary infrastructure, ensuring seamless integration between the agents, and overcoming the practical limitations of our current computing capabilities. These are daunting tasks, and many have shied away from them because of the sheer complexity of the problem. But if we can overcome these challenges, the rewards will be unimaginable: all digital activities will be automated.

We envision a future where swarms of Large Language Model (LLM) agents revolutionize fields like customer support, content creation, and research. Imagine an AI system that could work cohesively, understand complex problems, and deliver multi-faceted solutions. We estimate this could lead to a 100-fold improvement in AI effectiveness, and up to a trillion-dollar impact on the global economy.

The secret to achieving this lies in our open-source approach and the power of the collective. By embracing open source, we are enabling hundreds of thousands of minds worldwide to contribute to this vision, each bringing unique insights and solutions. Our bug bounty program and automated testing environments will act as catalysts, motivating and rewarding contributors while ensuring the robustness and reliability of our technology.

At Agora, we believe in the transformative potential of this technology, and we are committed to making it a reality. Our world-class team of researchers, engineers, and AI enthusiasts is singularly focused on this mission. With a proven track record of success and the tenacity to tackle the most complex problems, we are best positioned to lead this charge.

We invite you to join us on this exciting journey. Let's come together to create swarms, advance humanity, and redefine what is possible with artificial intelligence. Our future is in our hands. Let's shape it together.
@@ -0,0 +1,149 @@
# Bounty Program

Our bounty program is an exciting opportunity for contributors to help us build the future of Swarms. By participating, you can earn rewards while contributing to a project that aims to revolutionize digital activity.

Here's how it works:

1. **Check out our Roadmap**: We've shared our roadmap detailing our short- and long-term goals. These are the areas where we're seeking contributions.

2. **Pick a Task**: Choose a task from the roadmap that aligns with your skills and interests. If you're unsure, you can reach out to our team for guidance.

3. **Get to Work**: Once you've chosen a task, start working on it. Remember, quality is key. We're looking for contributions that truly make a difference.

4. **Submit your Contribution**: Once your work is complete, submit it for review. We'll evaluate your contribution based on its quality, relevance, and the value it brings to Swarms.

5. **Earn Rewards**: If your contribution is approved, you'll earn a bounty. The amount of the bounty depends on the complexity of the task, the quality of your work, and the value it brings to Swarms.

## The Three Phases of Our Bounty Program

### Phase 1: Building the Foundation
In the first phase, our focus is on building the basic infrastructure of Swarms. This includes developing key components like the Swarms class, integrating essential tools, and establishing task completion and evaluation logic. We'll also start developing our testing and evaluation framework during this phase. If you're interested in foundational work and have a knack for building robust, scalable systems, this phase is for you.

### Phase 2: Enhancing the System
In the second phase, we'll focus on enhancing Swarms by integrating more advanced features, improving the system's efficiency, and refining our testing and evaluation framework. This phase involves more complex tasks, so if you enjoy tackling challenging problems and contributing to the development of innovative features, this is the phase for you.

### Phase 3: Towards Super-Intelligence
The third phase of our bounty program is the most exciting: this is where we aim to achieve super-intelligence. In this phase, we'll be working on improving the swarm's capabilities, expanding its skills, and fine-tuning the system based on real-world testing and feedback. If you're excited about the future of AI and want to contribute to a project that could potentially transform the digital world, this is the phase for you.

Remember, our roadmap is a guide, and we encourage you to bring your own ideas and creativity to the table. We believe that every contribution, no matter how small, can make a difference. So join us on this exciting journey and help us create the future of Swarms.

**To participate in our bounty program, visit the [Swarms Bounty Program Page](https://swarms.ai/bounty).** Let's build the future together!

## Bounties for Roadmap Items

To accelerate the development of Swarms and to encourage more contributors to join our journey towards automating every digital activity in existence, we are announcing a Bounty Program for specific roadmap items. Each bounty will be rewarded based on the complexity and importance of the task. Below are the items available for bounty:

1. **Multi-Agent Debate Integration**: $2000
2. **Meta Prompting Integration**: $1500
3. **Swarms Class**: $1500
4. **Integration of Additional Tools**: $1000
5. **Task Completion and Evaluation Logic**: $2000
6. **Ocean Integration**: $2500
7. **Improved Communication**: $2000
8. **Testing and Evaluation**: $1500
9. **Worker Swarm Class**: $2000
10. **Documentation**: $500

For each bounty task, there will be a strict evaluation process to ensure the quality of the contribution. This process includes a thorough review of the code and extensive testing to ensure it meets our standards.

# 3-Phase Testing Framework

To ensure the quality and efficiency of the Swarm, we will introduce a 3-phase testing framework, which will also serve as our evaluation criteria for each of the bounty tasks.

## Phase 1: Unit Testing
In this phase, individual modules will be tested to ensure that they work correctly in isolation. Unit tests will be designed for all functions and methods, with an emphasis on edge cases.
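To make the unit-testing phase concrete, here is a hedged sketch of an edge-case test for a hypothetical task-creation function; `create_task` and its validation behavior are illustrative assumptions, not the actual Swarms API.

```python
def create_task(objective: str) -> dict:
    """Hypothetical stand-in for a Swarms task factory."""
    if not objective or not objective.strip():
        raise ValueError("objective must be a non-empty string")
    return {"objective": objective.strip()}

def test_create_task_wraps_objective():
    assert create_task("Find the square root of 144.") == {"objective": "Find the square root of 144."}

def test_create_task_rejects_empty_objective():
    # Edge case: whitespace-only objectives should be rejected
    try:
        create_task("   ")
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")

test_create_task_wraps_objective()
test_create_task_rejects_empty_objective()
```

Real contributions would express these as pytest test functions, but the shape is the same: one test for the common path, one for each edge case.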
|
||||||
|
|
||||||
|
## Phase 2: Integration Testing
|
||||||
|
After passing unit tests, we will test the integration of different modules to ensure they work correctly together. This phase will also test the interoperability of the Swarm with external systems and libraries.
|
||||||
|
|
||||||
|
## Phase 3: Benchmarking & Stress Testing
|
||||||
|
In the final phase, we will perform benchmarking and stress tests. We'll push the limits of the Swarm under extreme conditions to ensure it performs well in real-world scenarios. This phase will measure the performance, speed, and scalability of the Swarm under high load conditions.
|
||||||
|
|
||||||
|
By following this 3-phase testing framework, we aim to develop a reliable, high-performing, and scalable Swarm that can automate all digital activities.
|
||||||
|
|
||||||
|
# Reverse Engineering to Reach Phase 3
|
||||||
|
|
||||||
|
To reach the Phase 3 level, we need to reverse engineer the tasks we need to complete. Here's an example of what this might look like:
|
||||||
|
|
||||||
|
1. **Set Clear Expectations**: Define what success looks like for each task. Be clear about the outputs and outcomes we expect. This will guide our testing and development efforts.
|
||||||
|
|
||||||
|
2. **Develop Testing Scenarios**: Create a comprehensive list of testing scenarios that cover both common and edge cases. This will help us ensure that our Swarm can handle a wide range of situations.
|
||||||
|
|
||||||
|
3. **Write Test Cases**: For each scenario, write detailed test cases that outline the exact steps to be followed, the inputs to be used, and the expected outputs.
|
||||||
|
|
||||||
|
4. **Execute the Tests**: Run the test cases on our Swarm, making note of any issues or bugs that arise.
|
||||||
|
|
||||||
|
5. **Iterate and Improve**: Based on the results of our tests, iterate and improve our Swarm. This may involve fixing bugs, optimizing code, or redesigning parts of our system.
|
||||||
|
|
||||||
|
6. **Repeat**: Repeat this process until our Swarm meets our expectations and passes all test cases.
|
||||||
|
|
||||||
|
By following these steps, we will systematically build, test, and improve our Swarm until it reaches the Phase 3 level. This methodical approach will help us ensure that we create a reliable, high-performing, and scalable Swarm that can truly automate all digital activities.
|
||||||
|
|
||||||
|
Let's shape the future of digital automation together!
|
||||||
|
|
||||||
|
|
||||||
|
--------------------
|
||||||
|
# Super-Intelligence Roadmap
|
||||||
|
|
||||||
|
Creating a Super-Intelligent Swarm involves three main phases, where each phase has multiple sub-stages, each of which will require rigorous testing and evaluation to ensure progress towards super-intelligence.
|
||||||
|
|
||||||
|
## Phase 1: Narrow Intelligence
|
||||||
|
|
||||||
|
In this phase, the goal is to achieve high performance in specific tasks. These tasks will be predefined and the swarm will be trained and tested on these tasks.
|
||||||
|
|
||||||
|
1. **Single Task Mastery**: Focus on mastering one task at a time. This can range from simple tasks like image recognition to complex tasks like natural language processing.
|
||||||
|
|
||||||
|
2. **Task Switching**: Train the swarm to switch between different tasks effectively. This includes being able to stop one task and start another one without any loss in performance.
|
||||||
|
|
||||||
|
3. **Multi-tasking**: The swarm should be capable of performing multiple tasks simultaneously without any degradation in performance.
|
||||||
|
|
||||||
|
## Phase 2: General Intelligence
|
||||||
|
|
||||||
|
In this phase, the swarm will be trained to handle a variety of tasks that were not part of the original training set.
|
||||||
|
|
||||||
|
1. **Transfer Learning**: The swarm should be able to transfer knowledge learned in one context to another context. This means being able to apply knowledge learned in one task to a different but related task.
|
||||||
|
|
||||||
|
2. **Adaptive Learning**: The swarm should be capable of adapting its learning strategies based on the task at hand. This includes being able to adjust its learning rate, exploration vs exploitation balance, etc.
|
||||||
|
|
||||||
|
3. **Self-Learning**: The swarm should be able to learn new tasks on its own without any external guidance. This includes being able to understand the task requirements, find relevant information, learn the task, and evaluate its performance.
|
||||||
|
|
||||||
|
## Phase 3: Super Intelligence
|
||||||
|
|
||||||
|
In this phase, the swarm will surpass human-level performance in most economically valuable work. This involves the swarm being able to solve complex real-world problems, make accurate predictions, and generate innovative solutions.
|
||||||
|
|
||||||
|
1. **Complex Problem Solving**: The swarm should be able to solve complex real-world problems. This includes being able to understand the problem, identify relevant information, generate solutions, evaluate the solutions, and implement the best solution.
|
||||||
|
|
||||||
|
2. **Predictive Abilities**: The swarm should be able to make accurate predictions about future events based on past data. This includes being able to understand the data, identify relevant patterns, make accurate predictions, and evaluate the accuracy of its predictions.
|
||||||
|
|
||||||
|
3. **Innovation**: The swarm should be able to generate innovative solutions to problems. This includes being able to think creatively, generate novel ideas, evaluate the ideas, and implement the best idea.
|
||||||
|
|
||||||
|
4. **Self-improvement**: The swarm should be capable of improving its own capabilities. This includes being able to identify areas of weakness, find ways to improve, and implement the improvements.
|
||||||
|
|
||||||
|
5. **Understanding**: The swarm should be able to understand complex concepts, make inferences, and draw conclusions. This includes being able to understand natural language, reason logically, and make sound judgments.
|
||||||
|
|
||||||
|
Each of these stages will require extensive testing and evaluation to ensure progress towards super-intelligence.
|
||||||
|
|
||||||
|
# Reverse-Engineering Super-Intelligence
|
||||||
|
|
||||||
|
To reach the Phase 3 level of super-intelligence, we need to reverse engineer the tasks that need to be completed. Here's an outline of what this might look like:

1. **Setting Success Metrics**: For each stage, define clear success metrics. These metrics should be quantitative and measurable, and they should align with the objectives of the stage.

2. **Identifying Prerequisites**: Determine what needs to be in place before each stage can begin. This could include certain capabilities, resources, or technologies.

3. **Developing Training Programs**: For each stage, develop a comprehensive training program. This should include a variety of tasks that will challenge the swarm and push it to develop the necessary capabilities.

4. **Creating Testing Protocols**: Develop rigorous testing protocols for each stage. These protocols should test all aspects of the swarm's performance, and they should be designed to push the swarm to its limits.

5. **Iterating and Improving**: Based on the results of the tests, iterate and improve the swarm. This could involve adjusting the training program, modifying the swarm's architecture, or tweaking its learning algorithms.

6. **Moving to the Next Stage**: Once the swarm has met the success metrics for a stage, it can move on to the next stage. This process continues until the swarm has reached the level of super-intelligence.

This process will require a significant amount of time, resources, and effort. However, by following this structured approach, we can systematically guide the swarm towards super-intelligence.
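The stage-gating idea in steps 1-6 can be sketched in code. This is a minimal, hypothetical sketch; `Stage`, `run_stage_gate`, and the metric names are illustrative inventions, not part of the Swarms framework:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple


@dataclass
class Stage:
    """One stage of the roadmap, gated by quantitative success metrics."""
    name: str
    # metric name -> (measurement function, threshold the swarm must meet)
    metrics: Dict[str, Tuple[Callable[[], float], float]]


def run_stage_gate(stages: List[Stage]) -> List[str]:
    """Advance through stages only while every metric meets its threshold."""
    completed = []
    for stage in stages:
        scores = {name: measure() for name, (measure, _) in stage.metrics.items()}
        if all(scores[name] >= threshold
               for name, (_, threshold) in stage.metrics.items()):
            completed.append(stage.name)
        else:
            break  # iterate and improve (step 5) before retrying this stage
    return completed
```

A stage whose only metric scores 0.92 against a 0.9 threshold passes; the loop stops at the first stage whose metrics fall short, which is where the iterate-and-improve step would kick in.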
@ -0,0 +1,91 @@
Jeff Bezos, the founder of Amazon.com, is known for his customer-centric approach and long-term strategic thinking. Leveraging his methodology, here are five ways you could monetize the Swarms framework:

1. **Platform as a Service (PaaS):** Create a cloud-based platform that allows users to build, run, and manage applications without the complexity of maintaining the infrastructure. You could charge users a subscription fee for access to the platform and provide different pricing tiers based on usage levels. This could be an attractive solution for businesses that do not have the capacity to build or maintain their own swarm intelligence solutions.

2. **Professional Services:** Offer consultancy and implementation services to businesses looking to utilize the Swarm technology. This could include assisting with integration into existing systems, offering custom development services, or helping customers build specific solutions using the framework.

3. **Education and Training:** Create a certification program for developers or companies looking to become proficient with the Swarms framework. This could be sold as standalone courses or bundled with other services.

4. **Managed Services:** Some companies may prefer to outsource the management of their Swarm-based systems. A managed services solution could take care of all the technical aspects, from hosting the solution to ensuring it runs smoothly, allowing the customer to focus on their core business.

5. **Data Analysis and Insights:** Swarm intelligence can generate valuable data and insights. By anonymizing and aggregating this data, you could provide industry reports, trend analysis, and other valuable insights to businesses.

As for the type of platform, Swarms can be offered as a cloud-based solution given its scalability and flexibility. This would also allow you to apply a SaaS/PaaS type monetization model, which provides recurring revenue.

Potential customers could range from small to large enterprises in various sectors such as logistics, eCommerce, finance, and technology, who are interested in leveraging artificial intelligence and machine learning for complex problem solving, optimization, and decision-making.

**Product Brief Monetization Strategy:**

Product Name: Swarms.AI Platform

Product Description: A cloud-based AI and ML platform harnessing the power of swarm intelligence.

1. **Platform as a Service (PaaS):** Offer tiered subscription plans (Basic, Premium, Enterprise) to accommodate different usage levels and business sizes.

2. **Professional Services:** Offer consultancy and custom development services to tailor the Swarms solution to the specific needs of the business.

3. **Education and Training:** Launch an online Swarms.AI Academy with courses and certifications for developers and businesses.

4. **Managed Services:** Provide a premium, fully managed service offering that includes hosting, maintenance, and 24/7 support.

5. **Data Analysis and Insights:** Offer industry reports and customized insights generated from aggregated and anonymized Swarm data.

Potential Customers: Enterprises in sectors such as logistics, eCommerce, finance, and technology. This can be sold globally, provided there's an internet connection.

Marketing Channels: Online marketing (SEO, content marketing, social media), partnerships with tech companies, and direct sales to enterprises.

This strategy is designed to provide multiple revenue streams while ensuring the Swarms.AI platform is accessible and useful to a range of potential customers.

1. **AI Solution as a Service:** By offering the Swarms framework as a service, businesses can access and utilize the power of multiple LLM agents without the need to maintain the infrastructure themselves. Subscriptions can be tiered based on usage and additional features.

2. **Integration and Custom Development:** Offer integration services to businesses wanting to incorporate the Swarms framework into their existing systems. You could also provide custom development for businesses with specific needs not met by the standard framework.

3. **Training and Certification:** Develop an educational platform offering courses, webinars, and certifications on using the Swarms framework. This can serve both developers seeking to broaden their skills and businesses aiming to train their in-house teams.

4. **Managed Swarms Solutions:** For businesses that prefer to outsource their AI needs, provide a complete solution that includes the development, maintenance, and continuous improvement of swarms-based applications.

5. **Data Analytics Services:** Leveraging the aggregated insights from the AI swarms, you could offer data analytics services. Businesses can use these insights to make informed decisions and predictions.

**Type of Platform:**

A cloud-based platform or Software as a Service (SaaS) model would be a suitable fit. It offers accessibility, scalability, and ease of updates.

**Target Customers:**

The technology can benefit businesses across sectors like eCommerce, technology, logistics, finance, healthcare, and education, among others.
**Product Brief Monetization Strategy:**

Product Name: Swarms.AI

1. **AI Solution as a Service:** Offer tiered subscriptions (Standard, Premium, and Enterprise), each with varying levels of usage and features.

2. **Integration and Custom Development:** Offer custom development and integration services, priced based on the scope and complexity of the project.

3. **Training and Certification:** Launch the Swarms.AI Academy with courses and certifications, available for a fee.

4. **Managed Swarms Solutions:** Offer fully managed solutions tailored to business needs, priced based on scope and service-level agreements.

5. **Data Analytics Services:** Provide insightful reports and data analyses, which can be purchased on a one-off basis or through a subscription.

By offering a variety of services and payment models, Swarms.AI can cater to a diverse range of business needs, from small start-ups to large enterprises. Marketing channels would include digital marketing, partnerships with technology companies, presence at tech events, and direct sales to targeted industries.

# Roadmap

* Create a landing page for Swarms: apac.ai/product/swarms

* Create a hosted Swarms API that anybody can use without needing their own GPU infrastructure, with usage-based pricing. Prerequisites for success: Swarms has to be extremely reliable, and we need world-class documentation and many daily users. How do we get many daily users? We provide a seamless and fluid experience. How do we create a seamless and fluid experience? We write good code that is modular, provides feedback to the user in times of distress, and ultimately accomplishes the user's tasks.

* Hosted consumer and enterprise subscription as a service on The Domain, where users can interact with 1000s of APIs and ingest 1000s of different data streams.

* Hosted dedicated-capacity deals with mega enterprises to automate many operations with Swarms, for a monthly subscription of $300,000+.

* Partnerships with enterprises: massive contracts with performance-based fees.

* Discord bot and/or Slack bot with users' personal data, charge a subscription; plus a browser extension.

* Each user gets a dedicated ocean instance of all their data so the swarm can query it as needed.

*

@ -0,0 +1,26 @@
<!-- Thank you for contributing to Swarms!

Replace this comment with:
- Description: a description of the change,
- Issue: the issue # it fixes (if applicable),
- Dependencies: any dependencies required for this change,
- Tag maintainer: for a quicker response, tag the relevant maintainer (see below),
- Twitter handle: we announce bigger features on Twitter. If your PR gets announced and you'd like a mention, we'll gladly shout you out!

If you're adding a new integration, please include:
1. a test for the integration, preferably unit tests that do not rely on network access,
2. an example notebook showing its use.

Maintainer responsibilities:
- General / Misc / if you don't know who to tag: kye@apac.ai
- DataLoaders / VectorStores / Retrievers: kye@apac.ai
- Models / Prompts: kye@apac.ai
- Memory: kye@apac.ai
- Agents / Tools / Toolkits: kye@apac.ai
- Tracing / Callbacks: kye@apac.ai
- Async: kye@apac.ai

If no one reviews your PR within a few days, feel free to email kye@apac.ai.

See contribution guidelines for more information on how to write/run tests, lint, etc.: https://github.com/hwchase17/langchain/blob/master/.github/CONTRIBUTING.md
-->
@ -0,0 +1,61 @@
import os
import re
from pathlib import Path
from typing import Dict, List

from fastapi.templating import Jinja2Templates

from swarms.agents.workers.agents import AgentManager
from swarms.utils.utils import BaseHandler, FileHandler, FileType, StaticUploader, CsvToDataframe

from swarms.tools.main import BaseToolSet, ExitConversation, RequestsGet, CodeEditor, Terminal

from env import settings


BASE_DIR = Path(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
# os.environ is the mapping; os.getenv is a function and cannot be subscripted
os.chdir(BASE_DIR / os.environ["PLAYGROUND_DIR"])


toolsets: List[BaseToolSet] = [
    Terminal(),
    CodeEditor(),
    RequestsGet(),
    ExitConversation(),
]
handlers: Dict[FileType, BaseHandler] = {FileType.DATAFRAME: CsvToDataframe()}

if os.getenv("USE_GPU"):
    import torch

    from swarms.tools.main import ImageCaptioning
    from swarms.tools.main import (
        ImageEditing,
        InstructPix2Pix,
        Text2Image,
        VisualQuestionAnswering,
    )

    if torch.cuda.is_available():
        toolsets.extend(
            [
                Text2Image("cuda"),
                ImageEditing("cuda"),
                InstructPix2Pix("cuda"),
                VisualQuestionAnswering("cuda"),
            ]
        )
        handlers[FileType.IMAGE] = ImageCaptioning("cuda")

agent_manager = AgentManager.create(toolsets=toolsets)

file_handler = FileHandler(handlers=handlers, path=BASE_DIR)

templates = Jinja2Templates(directory=BASE_DIR / "api" / "templates")

uploader = StaticUploader.from_settings(
    settings, path=BASE_DIR / "static", endpoint="static"
)

reload_dirs = [BASE_DIR / "swarms", BASE_DIR / "api"]
@ -0,0 +1,130 @@
import os
import re
from multiprocessing import Process
from tempfile import NamedTemporaryFile
from typing import List, TypedDict

import uvicorn
from fastapi import FastAPI, Request, UploadFile
from fastapi.responses import HTMLResponse
from fastapi.staticfiles import StaticFiles
from pydantic import BaseModel

from api.container import agent_manager, file_handler, reload_dirs, templates, uploader
from api.worker import get_task_result, start_worker, task_execute


app = FastAPI()

app.mount("/static", StaticFiles(directory=uploader.path), name="static")


class ExecuteRequest(BaseModel):
    session: str
    prompt: str
    files: List[str]


class ExecuteResponse(TypedDict):
    answer: str
    files: List[str]


@app.get("/", response_class=HTMLResponse)
async def index(request: Request):
    return templates.TemplateResponse("index.html", {"request": request})


@app.get("/dashboard", response_class=HTMLResponse)
async def dashboard(request: Request):
    return templates.TemplateResponse("dashboard.html", {"request": request})


@app.post("/upload")
async def create_upload_file(files: List[UploadFile]):
    urls = []
    for file in files:
        extension = "." + file.filename.split(".")[-1]
        with NamedTemporaryFile(suffix=extension) as tmp_file:
            tmp_file.write(file.file.read())
            tmp_file.flush()
            urls.append(uploader.upload(tmp_file.name))
    return {"urls": urls}


@app.post("/api/execute")
async def execute(request: ExecuteRequest) -> ExecuteResponse:
    query = request.prompt
    files = request.files
    session = request.session

    executor = agent_manager.create_executor(session)

    prompted_query = "\n".join([file_handler.handle(file) for file in files])
    prompted_query += query

    try:
        res = executor({"input": prompted_query})
    except Exception as e:
        return {"answer": str(e), "files": []}

    files = re.findall(r"\[file://\S*\]", res["output"])
    files = [file[1:-1].split("file://")[1] for file in files]

    return {
        "answer": res["output"],
        "files": [uploader.upload(file) for file in files],
    }


@app.post("/api/execute/async")
async def execute_async(request: ExecuteRequest):
    query = request.prompt
    files = request.files
    session = request.session

    prompted_query = "\n".join([file_handler.handle(file) for file in files])
    prompted_query += query

    execution = task_execute.delay(session, prompted_query)
    return {"id": execution.id}


@app.get("/api/execute/async/{execution_id}")
async def execute_async_result(execution_id: str):
    execution = get_task_result(execution_id)

    result = {}
    if execution.status == "SUCCESS" and execution.result:
        output = execution.result.get("output", "")
        files = re.findall(r"\[file://\S*\]", output)
        files = [file[1:-1].split("file://")[1] for file in files]
        result = {
            "answer": output,
            "files": [uploader.upload(file) for file in files],
        }

    return {
        "status": execution.status,
        "info": execution.info,
        "result": result,
    }


def serve():
    p = Process(target=start_worker, args=[])
    p.start()
    uvicorn.run("api.main:app", host="0.0.0.0", port=int(os.environ["EVAL_PORT"]))


def dev():
    p = Process(target=start_worker, args=[])
    p.start()
    uvicorn.run(
        "api.main:app",
        host="0.0.0.0",
        port=int(os.environ["EVAL_PORT"]),
        reload=True,
        reload_dirs=reload_dirs,
    )
@ -0,0 +1,46 @@
import os

from celery import Celery
from celery.result import AsyncResult

from api.container import agent_manager
# from env import settings

celery_broker = os.environ["CELERY_BROKER_URL"]


celery_app = Celery(__name__)
celery_app.conf.broker_url = celery_broker
celery_app.conf.result_backend = celery_broker
celery_app.conf.update(
    task_track_started=True,
    task_serializer="json",
    accept_content=["json"],  # Ignore other content
    result_serializer="json",
    enable_utc=True,
)


@celery_app.task(name="task_execute", bind=True)
def task_execute(self, session: str, prompt: str):
    executor = agent_manager.create_executor(session, self)
    response = executor({"input": prompt})
    result = {"output": response["output"]}

    previous = AsyncResult(self.request.id)
    if previous and previous.info:
        result.update(previous.info)

    return result


def get_task_result(task_id):
    return AsyncResult(task_id)


def start_worker():
    celery_app.worker_main(
        [
            "worker",
            "--loglevel=INFO",
        ]
    )
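The `[file://...]` marker parsing that `api/main.py` performs in both execute handlers can be isolated as a small helper. A sketch of just that step (the helper name and the sample strings are illustrative, not part of the codebase):

```python
import re
from typing import List


def extract_file_paths(output: str) -> List[str]:
    """Pull local paths out of [file://...] markers in an agent's output,
    mirroring the findall-then-strip steps used by the /api/execute handlers."""
    markers = re.findall(r"\[file://\S*\]", output)
    # drop the surrounding brackets, then everything up to and including the scheme
    return [marker[1:-1].split("file://")[1] for marker in markers]
```

For example, `extract_file_paths("done [file:///tmp/chart.png]")` yields `["/tmp/chart.png"]`; outputs with no markers yield an empty list rather than raising.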
File diff suppressed because it is too large