From f87110c4016993b1b6d3a7fd6056c9c7f7c90926 Mon Sep 17 00:00:00 2001
From: Kye Gomez <98760976+kyegomez@users.noreply.github.com>
Date: Mon, 17 Jun 2024 11:46:54 -0700
Subject: [PATCH] Update README.md

---
 README.md | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 1614253f..4a457bee 100644
--- a/README.md
+++ b/README.md
@@ -1444,8 +1444,7 @@ Coming soon...
 This is an implementation from the paper: "Mixture-of-Agents Enhances Large Language Model Capabilities" by together.ai, it achieves SOTA on AlpacaEval 2.0, MT-Bench and FLASK, surpassing GPT-4 Omni. Great for tasks that need to be parallelized and then sequentially fed into another loop
 
 ```python
-from swarms import Agent, OpenAIChat
-from swarms.structs.mixture_of_agents import MixtureOfAgents
+from swarms import Agent, OpenAIChat, MixtureOfAgents
 
 # Initialize the director agent
 director = Agent(
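
For reference, a minimal sketch of how the unified import introduced by this patch might be used. The constructor arguments (`agent_name`, `llm`, `max_loops`, `agents`, `final_agent`, `layers`) and the `run` call are assumptions about the swarms API at the time of this change, not part of the patch itself.

```python
# Minimal sketch, not part of the patch: exercises the single-line import
# that this change introduces. All constructor arguments below are assumed,
# not taken from the diff.
from swarms import Agent, OpenAIChat, MixtureOfAgents

llm = OpenAIChat()  # assumes OPENAI_API_KEY is set in the environment

# Worker agents that run the task in parallel (assumed roles and names).
workers = [
    Agent(agent_name=f"Worker-{i}", llm=llm, max_loops=1)
    for i in range(3)
]

# A director agent that aggregates the parallel outputs.
director = Agent(agent_name="Director", llm=llm, max_loops=1)

# Fan the task out to the workers, then feed their responses to the
# director for a final answer (parameter names are assumptions).
moa = MixtureOfAgents(
    agents=workers,
    final_agent=director,
    layers=2,
)

print(moa.run("Summarize the trade-offs of parallel agent orchestration."))
```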