Merge branch 'master' into fix-ci-2

pull/388/head
Wyatt Stanke 11 months ago committed by GitHub
commit 1124ad3271

.gitignore

@ -15,7 +15,7 @@ Unit Testing Agent_state.json
swarms/__pycache__
venv
.DS_Store
Cargo.lock
.DS_STORE
Cargo.lock
swarms/agents/.DS_Store

@ -1,14 +1,15 @@
[package]
name = "engine"
version = "0.1.0"
edition = "2018"
[lib]
name = "engine"
path = "runtime/concurrent_exec.rs"
crate-type = ["cdylib"]
name = "swarms-runtime" # The name of your project
version = "0.1.0" # The current version, adhering to semantic versioning
edition = "2021" # Specifies which edition of Rust you're using, e.g., 2018 or 2021
authors = ["Your Name <your.email@example.com>"] # Optional: specify the package authors
license = "MIT" # Optional: the license for your project
description = "A brief description of my project" # Optional: a short description of your project
[dependencies]
pyo3 = { version = "0.15", features = ["extension-module"] }
rayon = "1.5.1"
log = "0.4.14"
cpython = "0.5"
rayon = "1.5"
[dependencies.pyo3]
version = "0.20.3"
features = ["extension-module", "auto-initialize"]

@ -94,6 +94,7 @@ Swarms is an open source framework for developers in python to enable seamless,
[Here is the official Swarms Github Page:](https://github.com/kyegomez/swarms)
### Product Growth Metrics
| Name | Description | Link |
|-----------------------------------|---------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------|
| Total Downloads of all time | Total number of downloads for the product over its entire lifespan. | [![Downloads](https://static.pepy.tech/badge/swarms)](https://pepy.tech/project/swarms) |
@ -107,4 +108,6 @@ Swarms is an open source framework for developers in python to enable seamless,
| Github Traffic Metrics | Metrics related to traffic, such as views and clones on Github. | [Github Traffic Metrics](https://github.com/kyegomez/swarms/graphs/traffic) |
| Issues with the framework | Current open issues for the product on Github. | [![GitHub issues](https://img.shields.io/github/issues/kyegomez/swarms)](https://github.com/kyegomez/swarms/issues) |
-------

@ -0,0 +1,51 @@
# The Limits of Individual Agents
- Context Window Limits
- Single Task Threading
- Hallucination
- Lack of Collaboration
In the rapidly evolving field of artificial intelligence, individual agents have pushed the boundaries of what machines can learn and accomplish. However, despite their impressive capabilities, these agents face inherent limitations that can hinder their effectiveness in complex, real-world applications. This discussion explores the critical constraints of individual agents, such as context window limits, hallucination, single-task threading, and lack of collaboration, and illustrates how multi-agent collaboration can address these limitations.
#### Context Window Limits
One of the most significant constraints of individual agents, particularly in the domain of language models, is the context window limit. This limitation refers to the maximum amount of information an agent can consider at any given time. For instance, many language models can only process a fixed number of tokens (words or characters) in a single inference, restricting their ability to understand and generate responses based on longer texts. This limitation can lead to a lack of coherence in longer compositions and an inability to maintain context in extended conversations or documents.
#### Hallucination
Hallucination in AI refers to the phenomenon where an agent generates information that is not grounded in the input data or real-world facts. This can manifest as making up facts, entities, or events that do not exist or are incorrect. Hallucinations pose a significant challenge in ensuring the reliability and trustworthiness of AI-generated content, particularly in critical applications such as news generation, academic research, and legal advice.
#### Single Task Threading
Individual agents are often designed to excel at specific tasks, leveraging their architecture and training data to optimize performance in a narrowly defined domain. However, this specialization can also be a drawback, as it limits the agent's ability to multitask or adapt to tasks that fall outside its primary domain. Single-task threading means an agent may excel in language translation but struggle with image recognition or vice versa, necessitating the deployment of multiple specialized agents for comprehensive AI solutions.
#### Lack of Collaboration
Traditional AI agents operate in isolation, processing inputs and generating outputs independently. This isolation limits their ability to leverage diverse perspectives, share knowledge, or build upon the insights of other agents. In complex problem-solving scenarios, where multiple facets of a problem need to be addressed simultaneously, this lack of collaboration can lead to suboptimal solutions or an inability to tackle multifaceted challenges effectively.
# The Elegant yet Simple Solution
## Multi-Agent Collaboration
Recognizing the limitations of individual agents, researchers and practitioners have explored the potential of multi-agent collaboration as a means to transcend these constraints. Multi-agent systems comprise several agents that can interact, communicate, and collaborate to achieve common goals or solve complex problems. This collaborative approach offers several advantages:
#### Overcoming Context Window Limits
By dividing a large task among multiple agents, each focusing on different segments of the problem, multi-agent systems can effectively overcome the context window limits of individual agents. For instance, in processing a long document, different agents could be responsible for understanding and analyzing different sections, pooling their insights to generate a coherent understanding of the entire text.
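As a rough illustration, this split-and-pool pattern can be sketched with the `Agent` class used elsewhere in this repository. The chunk size, prompts, and the `AzureOpenAI` wrapper are illustrative assumptions, not a prescribed API:

```python
from swarms import Agent, AzureOpenAI

llm = AzureOpenAI()  # any LLM wrapper supported by swarms would do here


def summarize_long_document(text: str, chunk_size: int = 4000) -> str:
    # Slice the document so each piece fits inside a single agent's context window
    chunks = [text[i : i + chunk_size] for i in range(0, len(text), chunk_size)]

    # One worker agent per chunk, each producing a partial summary
    partial_summaries = []
    for idx, chunk in enumerate(chunks):
        worker = Agent(agent_name=f"chunk-reader-{idx}", llm=llm, max_loops=1)
        partial_summaries.append(worker(f"Summarize this section:\n{chunk}"))

    # A final agent pools the partial results into one coherent answer
    aggregator = Agent(agent_name="aggregator", llm=llm, max_loops=1)
    return aggregator(
        "Combine these section summaries into a single, coherent summary:\n"
        + "\n".join(partial_summaries)
    )
```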
#### Mitigating Hallucination
Through collaboration, agents can cross-verify facts and information, reducing the likelihood of hallucinations. If one agent generates a piece of information, other agents can provide checks and balances, verifying the accuracy against known data or through consensus mechanisms.
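A minimal cross-verification sketch, again reusing the `Agent` pattern from this repo; the prompts and agent roles are assumptions:

```python
from swarms import Agent, AzureOpenAI

llm = AzureOpenAI()

writer = Agent(agent_name="writer", llm=llm, max_loops=1)
fact_checker = Agent(agent_name="fact-checker", llm=llm, max_loops=1)

# One agent drafts, a second agent audits the draft for unsupported claims
draft = writer("Write a short overview of quantum error correction.")
review = fact_checker(
    "List any claims in the following text that look unsupported or incorrect, "
    f"and suggest corrections:\n{draft}"
)
print(review)
```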
#### Enhancing Multitasking Capabilities
Multi-agent systems can tackle tasks that require a diverse set of skills by leveraging the specialization of individual agents. For example, in a complex project that involves both natural language processing and image analysis, one agent specialized in text can collaborate with another specialized in visual data, enabling a comprehensive approach to the task.
#### Facilitating Collaboration and Knowledge Sharing
Multi-agent collaboration inherently encourages the sharing of knowledge and insights, allowing agents to learn from each other and improve their collective performance. This can be particularly powerful in scenarios where iterative learning and adaptation are crucial, such as dynamic environments or tasks that evolve over time.
### Conclusion
While individual AI agents have made remarkable strides in various domains, their inherent limitations necessitate innovative approaches to unlock the full potential of artificial intelligence. Multi-agent collaboration emerges as a compelling solution, offering a pathway to transcend individual constraints through collective intelligence. By harnessing the power of collaborative AI, we can address more complex, multifaceted problems, paving the way for more versatile, efficient, and effective AI systems in the future.

@ -178,4 +178,4 @@ nav:
- Hiring: "corporate/hiring.md"
- SwarmCloud: "corporate/swarm_cloud.md"
- SwarmMemo: "corporate/swarm_memo.md"
- Data Room: "corporate/data_room.md"
- Data Room: "corporate/data_room.md"

@ -5,6 +5,7 @@ from swarms import Agent, ConcurrentWorkflow, Task
# model
model = MultiOnAgent(multion_api_key="api-key")
# out = model.run("search for a recipe")
agent = Agent(
    agent_name="MultiOnAgent",

@ -14,7 +14,6 @@ docs = loader.load()
qdrant_client = qdrant.Qdrant(
    host="https://697ea26c-2881-4e17-8af4-817fcb5862e8.europe-west3-0.gcp.cloud.qdrant.io",
    collection_name="qdrant",
    api_key="BhG2_yINqNU-aKovSEBadn69Zszhbo5uaqdJ6G_qDkdySjAljvuPqQ",
)
qdrant_client.add_vectors(docs)
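Since this change removes a hard-coded Qdrant key, a safer pattern is to read the credential from the environment at runtime. The sketch below uses the upstream `qdrant-client` package rather than the wrapper in this example, purely as an illustration:

```python
import os

from qdrant_client import QdrantClient

# Keep the credential out of source control and read it at runtime instead
client = QdrantClient(
    url="https://697ea26c-2881-4e17-8af4-817fcb5862e8.europe-west3-0.gcp.cloud.qdrant.io",
    api_key=os.getenv("QDRANT_API_KEY"),
)
```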

@ -0,0 +1,14 @@
from swarms import Agent, AzureOpenAI
## Initialize the workflow
agent = Agent(
    llm=AzureOpenAI(),
    max_loops="auto",
    autosave=True,
    dashboard=False,
    streaming_on=True,
    verbose=True,
)
# Run the workflow on a task
agent("Understand the risk profile of this account")

@ -1,12 +1,11 @@
[build-system]
requires = ["poetry-core>=1.0.0", "maturin"]
requires = ["poetry-core>=1.0.0"]
build-backend = "poetry.core.masonry.api"
[tool.poetry]
name = "swarms"
version = "4.1.9"
version = "4.2.0"
description = "Swarms - Pytorch"
license = "MIT"
authors = ["Kye Gomez <kye@apac.ai>"]
@ -51,7 +50,7 @@ httpx = "0.24.1"
tiktoken = "0.4.0"
attrs = "22.2.0"
ratelimit = "2.2.1"
loguru = "*"
loguru = "0.7.2"
cohere = "4.24"
huggingface-hub = "*"
pydantic = "1.10.12"
@ -76,7 +75,6 @@ supervision = "*"
scikit-image = "*"
pinecone-client = "*"
roboflow = "*"
multion = "*"

@ -16,8 +16,7 @@ sentencepiece==0.1.98
requests_mock
pypdf==4.0.1
accelerate==0.22.0
loguru
multion
loguru==0.7.2
chromadb
tensorflow
optimum

@ -44,5 +44,5 @@ workflow.add(tasks=[task1, task2])
workflow.run()
# # Output the results
# for task in workflow.tasks:
#     print(f"Task: {task.description}, Result: {task.result}")
for task in workflow.tasks:
    print(f"Task: {task.description}, Result: {task.result}")

@ -0,0 +1,59 @@
use pyo3::prelude::*;
use pyo3::types::PyList;
use rayon::prelude::*;
use std::fs;
use std::time::Instant;
// Define the new execute function
fn exec_concurrently(script_path: &str, threads: usize) -> PyResult<()> {
    (0..threads).into_par_iter().for_each(|_| {
        Python::with_gil(|py| {
            let sys = py.import("sys").unwrap();
            let path: &PyList = match sys.getattr("path") {
                Ok(path) => match path.downcast() {
                    Ok(path) => path,
                    Err(e) => {
                        eprintln!("Failed to downcast path: {:?}", e);
                        return;
                    }
                },
                Err(e) => {
                    eprintln!("Failed to get path attribute: {:?}", e);
                    return;
                }
            };
            if let Err(e) = path.append("lib/python3.11/site-packages") {
                eprintln!("Failed to append path: {:?}", e);
            }
            let script = fs::read_to_string(script_path).unwrap();
            py.run(&script, None, None).unwrap();
        });
    });
    Ok(())
}

fn main() -> PyResult<()> {
    let args: Vec<String> = std::env::args().collect();
    let threads = 20;
    if args.len() < 2 {
        eprintln!("Usage: {} <path_to_python_script>", args[0]);
        std::process::exit(1);
    }
    let script_path = &args[1];
    let start = Instant::now();

    // Call the execute function
    exec_concurrently(script_path, threads)?;

    let duration = start.elapsed();
    match fs::write("/tmp/elapsed.time", format!("booting time: {:?}", duration)) {
        Ok(_) => println!("Successfully wrote elapsed time to /tmp/elapsed.time"),
        Err(e) => eprintln!("Failed to write elapsed time: {:?}", e),
    }
    Ok(())
}

@ -16,7 +16,6 @@ from swarms.agents.stopping_conditions import (
)
from swarms.agents.tool_agent import ToolAgent
from swarms.agents.worker_agent import Worker
from swarms.agents.multion_agent import MultiOnAgent
__all__ = [
    "AbstractAgent",
@ -35,5 +34,4 @@ __all__ = [
    "check_end",
    "Worker",
    "agent_wrapper",
    "MultiOnAgent",
]

@ -4,6 +4,7 @@ from typing import Any, Callable, List, Optional
from swarms.structs.task import Task
from swarms.utils.logger import logger
from swarms.structs.agent import Agent
@dataclass
@ -42,6 +43,7 @@ class AsyncWorkflow:
    results: List[Any] = field(default_factory=list)
    loop: Optional[asyncio.AbstractEventLoop] = None
    stopping_condition: Optional[Callable] = None
    agents: List[Agent] = None

    async def add(self, task: Any = None, tasks: List[Any] = None):
        """Add tasks to the workflow"""

@ -1,11 +1,15 @@
import json
from dataclasses import dataclass, field
from dataclasses import dataclass
from typing import Any, Dict, List, Optional
from termcolor import colored
from swarms.structs.task import Task
from swarms.utils.logger import logger
# from swarms.utils.logger import logger
from swarms.structs.agent import Agent
from swarms.structs.conversation import Conversation
from swarms.utils.loguru_logger import logger
# SequentialWorkflow class definition using dataclasses
@ -39,7 +43,7 @@ class SequentialWorkflow:
    name: str = None
    description: str = None
    task_pool: List[Task] = field(default_factory=list)
    task_pool: List[Task] = None
    max_loops: int = 1
    autosave: bool = False
    saved_state_filepath: Optional[str] = (
@ -47,6 +51,18 @@ class SequentialWorkflow:
    )
    restore_state_filepath: Optional[str] = None
    dashboard: bool = False
    agents: List[Agent] = None

    def __post_init__(self):
        self.conversation = Conversation(
            system_prompt=f"Objective: {self.description}",
            time_enabled=True,
            autosave=True,
        )

        # Logging
        logger.info(f"Number of agents activated: {len(self.agents)}")
        logger.info(f"Task Pool Size: {self.task_pool}")

    def add(
        self,
@ -65,35 +81,44 @@ class SequentialWorkflow:
            *args: Additional arguments to pass to the task execution.
            **kwargs: Additional keyword arguments to pass to the task execution.
        """
        try:
            # If the agent is a Task instance, we include the task in kwargs for Agent.run()
            # Append the task to the task_pool list
            if task:
                self.task_pool.append(task)
                logger.info(
                    f"[INFO][SequentialWorkflow] Added task {task} to"
                    " workflow"
                )
            elif tasks:
                for task in tasks:
                    self.task_pool.append(task)
                    logger.info(
                        "[INFO][SequentialWorkflow] Added task"
                        f" {task} to workflow"
                    )
            else:
                if task and tasks is not None:
                    # Add the task and list of tasks to the task_pool at the same time
                    self.task_pool.append(task)
                    for task in tasks:
                        self.task_pool.append(task)
        except Exception as error:
            logger.error(
                colored(
                    f"Error adding task to workflow: {error}", "red"
                ),
            )
        logger.info("A")
        for agent in self.agents:
            out = agent(str(self.description))
            self.conversation.add(agent.agent_name, out)
            prompt = self.conversation.return_history_as_string()
            out = agent(prompt)
        return out
        # try:
        #     # If the agent is a Task instance, we include the task in kwargs for Agent.run()
        #     # Append the task to the task_pool list
        #     if task:
        #         self.task_pool.append(task)
        #         logger.info(
        #             f"[INFO][SequentialWorkflow] Added task {task} to"
        #             " workflow"
        #         )
        #     elif tasks:
        #         for task in tasks:
        #             self.task_pool.append(task)
        #             logger.info(
        #                 "[INFO][SequentialWorkflow] Added task"
        #                 f" {task} to workflow"
        #             )
        #     else:
        #         if task and tasks is not None:
        #             # Add the task and list of tasks to the task_pool at the same time
        #             self.task_pool.append(task)
        #             for task in tasks:
        #                 self.task_pool.append(task)
        # except Exception as error:
        #     logger.error(
        #         colored(
        #             f"Error adding task to workflow: {error}", "red"
        #         ),
        #     )
    def reset_workflow(self) -> None:
        """Resets the workflow by clearing the results of each task."""
