Swarms is a modular framework that enables reliable and useful multi-agent collaboration at scale to automate real-world tasks.
## Vision

At Swarms, we're transforming the landscape of AI from siloed AI agents to a unified 'swarm' of intelligence. Through relentless iteration and the power of collective insight from our 1500+ Agora researchers, we're developing a groundbreaking framework for AI collaboration. Our mission is to catalyze a paradigm shift, advancing humanity with the power of unified autonomous AI agent swarms.
## 🤝 Schedule a 1-on-1 Session

Book a 1-on-1 session with Kye, the creator, to discuss any issues, provide feedback, or explore how we can improve Swarms for you.
## Installation

```bash
pip3 install --upgrade swarms
```
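If you want to verify the install, a quick sanity check is simply importing the package (nothing here beyond the package name is assumed):

```python
# Minimal sanity check: the package imports cleanly after installation
import swarms

print("swarms imported successfully")
```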
## Usage

We have a small gallery of examples to run here; for more, check out the docs to build your own agents and swarms!
### MultiAgentDebate

`MultiAgentDebate` is a simple class that enables multi-agent collaboration.
```python
from swarms.workers import Worker
from swarms.swarms import MultiAgentDebate, select_speaker
from swarms.models import OpenAIChat

api_key = "sk-"  # your OpenAI API key

llm = OpenAIChat(
    model_name="gpt-4",
    openai_api_key=api_key,
    temperature=0.5,
)

node = Worker(
    llm=llm,
    openai_api_key=api_key,
    ai_name="Optimus Prime",
    ai_role="Worker in a swarm",
    external_tools=None,
    human_in_the_loop=False,
    temperature=0.5,
)

node2 = Worker(
    llm=llm,
    openai_api_key=api_key,
    ai_name="Bumble Bee",
    ai_role="Worker in a swarm",
    external_tools=None,
    human_in_the_loop=False,
    temperature=0.5,
)

node3 = Worker(
    llm=llm,
    openai_api_key=api_key,
    ai_name="Megatron",
    ai_role="Worker in a swarm",
    external_tools=None,
    human_in_the_loop=False,
    temperature=0.5,
)

agents = [node, node2, node3]

# Initialize the multi-agent debate with the speaker-selection function
debate = MultiAgentDebate(agents, select_speaker)

# Run the task
task = "What were the winning Boston Marathon times for the past 5 years (ending in 2022)? Generate a table of the year, name, country of origin, and times."
results = debate.run(task, max_iters=4)

# Print the results
for result in results:
    print(f"Agent {result['agent']} responded: {result['response']}")
```
### Worker

- The `Worker` is a fully feature-complete agent with an LLM, tools, and a vectorstore for long-term memory!
- Pass your API key as a parameter to the LLM if you choose.
- Then pass the OpenAI API key to the `Worker` for the OpenAI embedding model.
```python
from swarms.models import OpenAIChat
from swarms import Worker

api_key = ""  # your OpenAI API key

llm = OpenAIChat(
    openai_api_key=api_key,
    temperature=0.5,
)

node = Worker(
    llm=llm,
    ai_name="Optimus Prime",
    openai_api_key=api_key,
    ai_role="Worker in a swarm",
    external_tools=None,
    human_in_the_loop=False,
    temperature=0.5,
)

task = "What were the winning Boston Marathon times for the past 5 years (ending in 2022)? Generate a table of the year, name, country of origin, and times."
response = node.run(task)
print(response)
```
### OmniModalAgent

- `OmniModalAgent` is an LLM agent with access to 10+ multi-modal encoders and diffusers! It can generate images, videos, speech, music, and much more. Get started with:
```python
from swarms.models import OpenAIChat
from swarms.agents import OmniModalAgent

api_key = "SK-"  # your OpenAI API key

llm = OpenAIChat(model_name="gpt-4", openai_api_key=api_key)

agent = OmniModalAgent(llm)
agent.run("Create a video of a swarm of fish")
```
### Flow

- `Flow` is a superior iteration of the `LLMChain` from LangChain. Our intent with `Flow` is to create the most reliable loop structure, one that gives agents their "autonomy" through three main methods of interaction: user-specified loops, a dynamic mode where the agent parses for a stop token, an interactive human-input version, or a mix of all three.

Example:
```python
from swarms.models import OpenAIChat
from swarms.structs import Flow

api_key = ""  # your OpenAI API key

# Initialize the language model
# This model can be swapped out for Anthropic, Hugging Face models like Mistral, etc.
llm = OpenAIChat(
    openai_api_key=api_key,
    temperature=0.5,
)

# Initialize the flow
flow = Flow(
    llm=llm,
    max_loops=5,
)

out = flow.run("Generate a 10,000 word blog, say Stop when done")
print(out)
```
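The description above also mentions a dynamic mode (the agent parses for a stop token) and an interactive human-input mode. Below is a hedged sketch of how those might be wired up, continuing from the snippet above; the `stopping_condition` and `interactive` parameters are assumptions about this snapshot of the codebase, so check the `Flow` docs for the exact names.

```python
# Continuing from the snippet above: `llm` is the OpenAIChat instance.
# `stopping_condition` and `interactive` are assumed parameters -- verify
# them against the Flow documentation before relying on this.
def stop_when_done(response: str) -> bool:
    """Stop looping once the agent emits the 'Stop' token it was asked to use."""
    return "Stop" in response

flow = Flow(
    llm=llm,
    max_loops=5,
    stopping_condition=stop_when_done,  # dynamic: agent-parsed stop token
    interactive=True,                   # pause for human input each loop
)

out = flow.run("Generate a 10,000 word blog, say Stop when done")
print(out)
```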
## Documentation

- For documentation, go to swarms.apac.ai
## Focus

We are radically devoted to creating the outcomes our users want, and we believe this is only possible by focusing extensively on reliability, scalability, and agility.

An agent's purpose is to satisfy your wants and needs, so that is our only focus. We believe this is only possible by investing impeccable detail into agent structure design: in other words, gluing together an LLM with tools and memory in a way that delights users and executes tasks exactly how users want them executed.

The reliability of communication in a swarm is also critical to your success, and with this in mind we carefully craft and extensively test our structures.

- Reliability
- Scalability
- Speed
- Power
- Agility
## Contribute

We're always looking for contributors to help us improve and expand this project. If you're interested, please check out our Contributing Guidelines.
## License

MIT