# Agora

## Swarming Language Models (Swarms)

Welcome to Swarms - the future of AI, where we leverage the power of autonomous agents to create 'swarms' of Language Models (LLMs) that work together, forming a dynamic and interactive AI system.
## Vision
In the world of AI and machine learning, individual models have made significant strides in understanding and generating human-like text. But imagine the possibilities when these models are no longer solitary units, but part of a cooperative and communicative swarm. This is the future we envision.
Just as a swarm of bees works together, communicating and coordinating their actions for the betterment of the hive, swarming LLM agents can work together to create richer, more nuanced outputs. By harnessing the strengths of individual agents and combining them through a swarming architecture, we can unlock a new level of performance and responsiveness in AI systems. We envision swarms of LLM agents revolutionizing fields like customer support, content creation, research, and much more.
## Installation

```shell
git clone https://github.com/kyegomez/swarms.git
cd swarms
pip install -r requirements.txt
```
## Usage

There are two ways to get started: a quick script to try things out, and a deeper path that explores the agents themselves. Check out the DOCUMENTATION.md file to understand the classes.

### Method 1

Run the simple example:

```shell
python3 example.py
```
### Method 2

The primary agent in this repository is the `AutoAgent`, defined in `./swarms/agents/workers/auto_agent.py`. This `AutoAgent` is used to create the `MultiModalVisualAgent`, an autonomous agent that can process tasks in a multi-modal environment, handling both text and visual data.
To use this agent, import it and instantiate it. Here is a brief guide:

```python
from swarms.agents.auto_agent import MultiModalVisualAgent

# Initialize the agent
multimodal_agent = MultiModalVisualAgent()
```
## Working with MultiModalVisualAgentTool

The `MultiModalVisualAgentTool` class is a tool wrapper around the `MultiModalVisualAgent`. It simplifies working with the agent by encapsulating agent-related logic within its methods. Here's a brief guide on how to use it:

```python
from swarms.agents.auto_agent import MultiModalVisualAgent, MultiModalVisualAgentTool

# Initialize the agent
multimodal_agent = MultiModalVisualAgent()

# Initialize the tool with the agent
multimodal_agent_tool = MultiModalVisualAgentTool(multimodal_agent)

# Use the tool to perform tasks; `run` is one of its methods
result = multimodal_agent_tool.run('Your text here')
```
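The wrapper pattern above can be sketched in isolation. The class and method names below are hypothetical stand-ins for illustration, not the repository's actual API:

```python
class VisualAgent:
    """Stand-in for an agent that processes text tasks."""

    def process(self, task: str) -> str:
        return f"processed: {task}"


class VisualAgentTool:
    """Tool wrapper: hides agent-specific calls behind a uniform run()."""

    def __init__(self, agent: VisualAgent):
        self.agent = agent

    def run(self, text: str) -> str:
        # Encapsulate agent-related logic (input cleanup, dispatch, etc.)
        return self.agent.process(text.strip())


tool = VisualAgentTool(VisualAgent())
print(tool.run("  summarize this image  "))  # → processed: summarize this image
```

The point of the wrapper is that callers only ever see `run(text)`, so the underlying agent can change without touching calling code.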
## Notes

- The `AutoAgent` makes use of several helper tools and context managers for tasks such as processing CSV files, browsing web pages, and querying web pages. Understanding these tools is crucial for making the best use of this agent.
- The agent uses ChatOpenAI, a large language model (LLM) interface, to perform its tasks, so you need to provide an OpenAI API key.
- The agent relies on FAISS, a library for efficient similarity search and clustering of dense vectors, for memory storage and retrieval, so familiarity with it helps.
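What FAISS provides for memory is nearest-neighbor search over dense vectors. The snippet below illustrates that idea in plain NumPy (FAISS implements the same search far more efficiently and at much larger scale); the dimensions and random vectors here are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Memory": 100 stored embeddings of dimension 8
memory = rng.standard_normal((100, 8)).astype("float32")


def retrieve(query: np.ndarray, k: int = 3) -> np.ndarray:
    """Return indices of the k stored vectors closest to `query` (L2 distance)."""
    dists = np.linalg.norm(memory - query, axis=1)
    return np.argsort(dists)[:k]


query = rng.standard_normal(8).astype("float32")
print(retrieve(query))  # indices of the 3 nearest stored memories
```

In FAISS the equivalent would be building an index over `memory` and calling its search method with `query`; the retrieval semantics are the same.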
## Contribute
We're always looking for contributors to help us improve and expand this project. If you're interested, please check out our Contributing Guidelines.
Thank you for being a part of our project!
## To do

- Integrate multi-agent debate
- Integrate multi-agent2 debate
- Integrate meta prompting into all worker agents
- Create one main swarms class: `swarms('Increase sales by 40$', workers=4)`
- Integrate Jarvis as worker nodes
- Integrate guidance and token healing
- Add text-to-speech, WhisperX, YouTube script, and text-to-speech code models as tools
- Add task completion logic with meta prompting, task evaluation as a state from 0.0 to 1.0, and critiquing for meta prompting
- Integrate meta prompting for every agent, boss and worker
- Get BabyAGI set up with the AutoGPT instance as a tool
- Integrate Ocean vector DB as the main embedding database for all agents, boss and/or worker
- Communication: a universal vector database that is used only when a task is completed, in the format `[TASK][COMPLETED]`
- Create unit tests
- Create benchmarks
- Create evaluations
- Add a new tool that uses WhisperX to transcribe a YouTube video