Kye 2 years ago
parent 7fb1374238
commit ed2587c368

@@ -7,6 +7,68 @@ In the world of AI and machine learning, individual models have made significant
Just as a swarm of bees works together, communicating and coordinating their actions for the betterment of the hive, swarming LLM agents can work together to create richer, more nuanced outputs. By harnessing the strengths of individual agents and combining them through a swarming architecture, we can unlock a new level of performance and responsiveness in AI systems. We envision swarms of LLM agents revolutionizing fields like customer support, content creation, research, and much more.
## README.md Update
---
# swarms
`swarms` is a package that offers solutions for swarming language models. With a focus on Large Language Models (LLMs) like GPT-4, it provides functionality for coordinating swarming agents, opening the door to new kinds of multi-agent AI systems.
This repository is open to anyone who wishes to contribute, share, or learn about swarming agents. In this README, you will find an installation guide, a usage guide for `./swarms/agents/auto_agent.py`, and details on how you can share the project with your friends.
## Table of Contents
1. [Installation](#installation)
2. [Usage](#usage)
3. [Sharing](#sharing)
## Installation
```bash
git clone https://github.com/kyegomez/swarms.git
cd swarms
pip install -r requirements.txt
```
## Usage
The primary agent in this repository is the `AutoAgent` from `./swarms/agents/auto_agent.py`.
This `AutoAgent` is used to create the `MultiModalVisualAgent`, an autonomous agent that can process tasks in a multi-modal environment, handling both text and visual data.
To use the agent, import and instantiate it:
```python
from swarms.agents.auto_agent import MultiModalVisualAgent
# Initialize the agent
multimodal_agent = MultiModalVisualAgent()
```
### Working with MultiModalVisualAgentTool
The `MultiModalVisualAgentTool` class is a tool wrapper around the `MultiModalVisualAgent`. It simplifies working with the agent by encapsulating agent-related logic within its methods. Here's a brief guide on how to use it:
```python
from swarms.agents.auto_agent import MultiModalVisualAgent, MultiModalVisualAgentTool
# Initialize the agent
multimodal_agent = MultiModalVisualAgent()
# Initialize the tool with the agent
multimodal_agent_tool = MultiModalVisualAgentTool(multimodal_agent)
# Use the tool to perform tasks; `run` executes the given text prompt
result = multimodal_agent_tool.run('Your text here')
```
## Note
- The `AutoAgent` makes use of several helper tools and context managers for tasks such as processing CSV files and browsing and querying web pages. Understanding these tools is important for getting the most out of the agent.
- Additionally, the agent uses `ChatOpenAI`, a chat-based large language model (LLM) interface, to perform its tasks, so you need to provide an OpenAI API key.
- Detailed knowledge of FAISS, a library for efficient similarity search and clustering of dense vectors, is also helpful, as it is used for memory storage and retrieval (see the sketch after this list).
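For context, the snippet below is a minimal, illustrative sketch of that setup rather than the repository's exact code: it assumes the OpenAI API key is supplied via the `OPENAI_API_KEY` environment variable and uses LangChain's FAISS wrapper with an empty index sized for OpenAI's `text-embedding-ada-002` embeddings (1536 dimensions).
```python
import os

import faiss
from langchain.docstore import InMemoryDocstore
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

# The agent authenticates through the environment; export OPENAI_API_KEY before running.
assert "OPENAI_API_KEY" in os.environ, "Set OPENAI_API_KEY first"

# Build an empty FAISS index to serve as the agent's vector memory.
embedding_size = 1536  # dimensionality of OpenAI ada-002 embeddings (assumption)
embeddings = OpenAIEmbeddings()
index = faiss.IndexFlatL2(embedding_size)
vectorstore = FAISS(embeddings.embed_query, index, InMemoryDocstore({}), {})

# The vector store (or vectorstore.as_retriever()) can then be handed to an agent as its memory.
```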
## Swarming Architectures
Here are a few examples of swarming architectures that could be applied in this context.
@@ -17,6 +79,9 @@ Here are three examples of swarming architectures that could be applied in this
3. **Competitive Swarms**: In this setup, multiple agents work on the same task independently, and the output from the agent that produces the highest-confidence or highest-quality result is selected. This often leads to more robust outputs, as the competition drives each agent to perform at its best (a toy version of this pattern is sketched below).
4. **Multi-Agent Debate**: Here, multiple agents debate a topic, and the answer from the agent that produces the highest-confidence or highest-quality result is selected. The back-and-forth of the debate pushes each agent to perform at its best, which can lead to more robust outputs.
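As a rough illustration of the competitive pattern, here is a minimal, self-contained sketch that is not part of the `swarms` package: a set of stand-in agents all answer the same task and a scoring function picks the winner. The `competitive_swarm` helper and the length-based `score` are hypothetical placeholders for real agents and a real quality metric.
```python
from typing import Callable, List


def competitive_swarm(
    agents: List[Callable[[str], str]],
    score: Callable[[str], float],
    task: str,
) -> str:
    """Run every agent on the same task independently and return the best-scoring output."""
    outputs = [agent(task) for agent in agents]
    return max(outputs, key=score)


# Stand-in "agents" and a naive length-based score, purely for illustration.
agents = [
    lambda t: f"Short answer to: {t}",
    lambda t: f"A longer, more detailed answer to: {t}",
]
best = competitive_swarm(agents, score=len, task="Summarize the project README.")
print(best)
```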
## Share with your Friends
If you love what we're building here, please consider sharing our project with your friends and colleagues! You can use the following buttons to share on social media.

@@ -221,8 +221,6 @@ Input: use 4 numbers and basic arithmetic operations (+-*/) to obtain 24 in 1 eq
Possible next steps:
"""
agent.run([f"{tree_of_thoughts_prompt} {input_problem}"])

@@ -0,0 +1,99 @@
from langchain import OpenAI, LLMChain, PromptTemplate
from langchain.memory import ConversationBufferWindowMemory


def initialize_chain(instructions, memory=None):
    """Build an assistant chain that follows `instructions` and keeps a rolling chat history."""
    if memory is None:
        memory = ConversationBufferWindowMemory()
        memory.ai_prefix = "Assistant"

    template = f"""
    Instructions: {instructions}
    {{{memory.memory_key}}}
    Human: {{human_input}}
    Assistant:"""

    prompt = PromptTemplate(
        input_variables=["history", "human_input"], template=template
    )

    chain = LLMChain(
        llm=OpenAI(temperature=0),
        prompt=prompt,
        verbose=True,
        memory=memory,  # reuse the configured memory so the "Assistant" prefix is preserved
    )
    return chain


def initialize_meta_chain():
    """Build the meta chain that critiques the assistant's transcript and rewrites its instructions."""
    meta_template = """
    Assistant has just had the below interactions with a User. Assistant followed their "Instructions" closely. Your job is to critique the Assistant's performance and then revise the Instructions so that Assistant would quickly and correctly respond in the future.

    ####

    {chat_history}

    ####

    Please reflect on these interactions.

    You should first critique Assistant's performance. What could Assistant have done better? What should the Assistant remember about this user? Are there things this user always wants? Indicate this with "Critique: ...".

    You should next revise the Instructions so that Assistant would quickly and correctly respond in the future. Assistant's goal is to satisfy the user in as few interactions as possible. Assistant will only see the new Instructions, not the interaction history, so anything important must be summarized in the Instructions. Don't forget any important details in the current Instructions! Indicate the new Instructions by "Instructions: ...".
    """

    meta_prompt = PromptTemplate(
        input_variables=["chat_history"], template=meta_template
    )

    meta_chain = LLMChain(
        llm=OpenAI(temperature=0),
        prompt=meta_prompt,
        verbose=True,
    )
    return meta_chain


def get_chat_history(chain_memory):
    """Return the formatted conversation stored in the chain's memory."""
    memory_key = chain_memory.memory_key
    chat_history = chain_memory.load_memory_variables(memory_key)[memory_key]
    return chat_history


def get_new_instructions(meta_output):
    """Extract the text that follows the "Instructions: " delimiter in the meta chain's output."""
    delimiter = "Instructions: "
    new_instructions = meta_output[meta_output.find(delimiter) + len(delimiter):]
    return new_instructions


def meta_agent(task, max_iters=3, max_meta_iters=5):
    """Run up to `max_meta_iters` episodes, revising the instructions after each failed episode."""
    failed_phrase = "task failed"
    success_phrase = "task succeeded"
    key_phrases = [success_phrase, failed_phrase]

    instructions = "None"
    for i in range(max_meta_iters):
        print(f"[Episode {i + 1}/{max_meta_iters}]")
        chain = initialize_chain(instructions, memory=None)
        output = chain.predict(human_input=task)
        for j in range(max_iters):
            print(f"(Step {j + 1}/{max_iters})")
            print(f"Assistant: {output}")
            human_input = input("Human: ")
            if any(phrase in human_input.lower() for phrase in key_phrases):
                break
            output = chain.predict(human_input=human_input)
        if success_phrase in human_input.lower():
            print("You succeeded! Thanks for playing!")
            return
        # The episode failed: critique the transcript and revise the instructions for the next episode.
        meta_chain = initialize_meta_chain()
        meta_output = meta_chain.predict(chat_history=get_chat_history(chain.memory))
        print(f"Feedback: {meta_output}")
        instructions = get_new_instructions(meta_output)
        print(f"New Instructions: {instructions}")
        print("\n" + "#" * 80 + "\n")
    print("You failed! Thanks for playing!")


if __name__ == "__main__":
    task = "Provide a systematic argument for why we should always eat pasta with olives."
    meta_agent(task)