parent
9d28535393
commit
9cbbdc9a10
@@ -0,0 +1 @@
# Swarms Structs Documentation

## Module: MajorityVoting

### Overview

The `MajorityVoting` module in the Swarms Structs library represents a majority voting system for agents. It allows multiple agents to respond to the same task and aggregates their responses to determine the majority vote, leveraging the collective intelligence of several agents to reach a consensus decision or answer.

### Class Definition

```python
class MajorityVoting:
    """
    Class representing a majority voting system for agents.

    Args:
        agents (list): A list of agents to be used in the majority voting system.
        output_parser (function, optional): A function used to parse the output of the agents.
            If not provided, the default majority voting function is used.
        autosave (bool, optional): A boolean indicating whether to autosave the conversation to a file.
        verbose (bool, optional): A boolean indicating whether to enable verbose logging.

    Examples:
        >>> from swarms.structs.agent import Agent
        >>> from swarms.structs.majority_voting import MajorityVoting
        >>> agents = [
        ...     Agent("GPT-3"),
        ...     Agent("Codex"),
        ...     Agent("Tabnine"),
        ... ]
        >>> majority_voting = MajorityVoting(agents)
        >>> majority_voting.run("What is the capital of France?")
        'Paris'
    """

    def __init__(
        self,
        agents: List[Agent],
        output_parser: Optional[Callable] = majority_voting,
        autosave: bool = False,
        verbose: bool = False,
        *args,
        **kwargs,
    ):
        # Constructor code here
```

### Functionality and Usage

The `MajorityVoting` class creates a majority voting system for a given set of agents. Its key features are:

- **Initialization**: The class is initialized with a list of agents, an optional output parser function, and flags for autosave and verbose logging.
- **Run Method**: The `run` method executes the majority voting system. It takes a task as input, routes the task to each agent concurrently, aggregates the agent responses, and performs a majority vote.

#### Examples

1. **Basic Usage**:

```python
from swarms.structs.agent import Agent
from swarms.structs.majority_voting import MajorityVoting

agents = [
    Agent("GPT-3"),
    Agent("Codex"),
    Agent("Tabnine"),
]
majority_voting = MajorityVoting(agents)
result = majority_voting.run("What is the capital of France?")
print(result)  # Output: 'Paris'
```

2. **Custom Output Parser**:

```python
from collections import Counter


def custom_parser(responses):
    # Return the answer given by the largest number of agents
    return Counter(responses).most_common(1)[0][0]


majority_voting = MajorityVoting(agents, output_parser=custom_parser)
result = majority_voting.run("What is the capital of France?")
```

3. **Autosave Conversation**:

```python
majority_voting = MajorityVoting(agents, autosave=True)
result = majority_voting.run("What is the capital of France?")
```

### Additional Information

- **Threaded Execution**: The `MajorityVoting` class uses a `ThreadPoolExecutor` for concurrent execution of the agents' tasks (a rough sketch of this pattern appears after the references below).
- **Logging**: Verbose logging can be enabled to track agent responses and system operations.
- **Output Parsing**: Custom output parsers can be provided to handle different response aggregation strategies.

### References

For more information on multi-agent systems and consensus algorithms, refer to the following resources:

1. [Multi-Agent Systems: A Modern Approach to Distributed Artificial Intelligence](https://www.cambridge.org/highereducation/books/multi-agent-systems/7A7C7D3E4E0B6A53C4C4C24E7E5F1B58)
2. [Consensus Algorithms in Multi-Agent Systems](https://link.springer.com/chapter/10.1007/978-3-319-02813-0_22)

---

This documentation covers the class definition, functionality, usage examples, and references needed to understand and apply the `MajorityVoting` module in the Swarms Structs library.
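### Appendix: Concurrent Fan-Out Sketch

The following is a minimal, illustrative sketch of the fan-out-and-vote pattern described under *Threaded Execution*. It is not the library's internal implementation; it only assumes that each agent exposes a `run(task)` method returning a string, as in the examples above.

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor


def fan_out_and_vote(agents, task: str) -> str:
    """Run the task on every agent concurrently and return the most common answer."""
    with ThreadPoolExecutor(max_workers=len(agents)) as executor:
        responses = list(executor.map(lambda agent: agent.run(task), agents))
    # Majority vote: the most frequent response wins; ties are broken arbitrarily.
    return Counter(responses).most_common(1)[0][0]
```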
@@ -0,0 +1 @@
# Module/Class Name: TaskQueueBase

## Overview
The `TaskQueueBase` class is an abstract base class (ABC) that defines the interface for a thread-safe task queue. It declares methods for adding tasks to a queue, getting the next task from the queue, marking tasks as completed, and resetting tasks that were not completed successfully.

## Class Definition
```python
class TaskQueueBase(ABC):
    def __init__(self):
        self.lock = threading.Lock()

    @synchronized_queue
    @abstractmethod
    def add(self, task: Task) -> bool:
        """Adds a task to the queue.

        Args:
            task (Task): The task to be added to the queue.

        Returns:
            bool: True if the task was successfully added, False otherwise.
        """

    @synchronized_queue
    @abstractmethod
    def get(self, agent: Agent) -> Task:
        """Gets the next task from the queue.

        Args:
            agent (Agent): The agent requesting the task.

        Returns:
            Task: The next task from the queue.
        """

    @synchronized_queue
    @abstractmethod
    def complete_task(self, task_id: str):
        """Sets the task as completed.

        Args:
            task_id (str): The ID of the task to be marked as completed.
        """

    @synchronized_queue
    @abstractmethod
    def reset(self, task_id: str):
        """Resets the task if the agent failed to complete it.

        Args:
            task_id (str): The ID of the task to be reset.
        """
```

## Functionality and Usage
The `TaskQueueBase` class is designed to be used as a base class for task queue implementations. It provides a set of abstract methods that must be implemented by subclasses to define the specific behavior of the task queue.

### Adding a Task
To add a task to the queue, implement the `add` method in a subclass of `TaskQueueBase`. This method takes a `Task` object as input and returns a boolean value indicating whether the task was successfully added to the queue.

```python
class MyTaskQueue(TaskQueueBase):
    def add(self, task: Task) -> bool:
        # Custom implementation to add task to the queue
        pass
```

### Getting the Next Task
The `get` method retrieves the next task from the queue. Implement it in a subclass of `TaskQueueBase` to define how tasks are handed out to the requesting agent.

```python
class MyTaskQueue(TaskQueueBase):
    def get(self, agent: Agent) -> Task:
        # Custom implementation to get the next task from the queue
        pass
```

### Completing and Resetting Tasks
The `complete_task` method marks a task as completed, while the `reset` method resets a task if the agent failed to complete it. Subclasses of `TaskQueueBase` should provide implementations for these methods based on the specific requirements of the task queue.

```python
class MyTaskQueue(TaskQueueBase):
    def complete_task(self, task_id: str):
        # Custom implementation to mark task as completed
        pass

    def reset(self, task_id: str):
        # Custom implementation to reset task
        pass
```

## Additional Information
- Ensure thread safety when implementing the methods of `TaskQueueBase`; the class provides a `threading.Lock` object for this purpose.
- Subclasses of `TaskQueueBase` should provide detailed implementations for each abstract method to define the behavior of the task queue.
- Consider using the `synchronized_queue` decorator to ensure that the queue operations are thread-safe.
- A minimal in-memory implementation is sketched after the references below.

## References
- Python threading documentation: [Python Threading](https://docs.python.org/3/library/threading.html)
- Abstract Base Classes in Python: [ABCs in Python](https://docs.python.org/3/library/abc.html)
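## Appendix: Minimal In-Memory Implementation

The sketch below shows one way to satisfy the `TaskQueueBase` contract with a simple in-memory queue. It is illustrative only: the `Task` and `Agent` types are assumed to come from the library as described above, and tasks are assumed to expose an `id` attribute matching the `task_id` strings used by `complete_task` and `reset`.

```python
from collections import deque


class InMemoryTaskQueue(TaskQueueBase):
    """Illustrative subclass: pending tasks in a deque, in-progress tasks keyed by id."""

    def __init__(self):
        super().__init__()  # sets up self.lock
        self.pending = deque()
        self.in_progress = {}

    def add(self, task: Task) -> bool:
        with self.lock:
            self.pending.append(task)
            return True

    def get(self, agent: Agent) -> Task:
        with self.lock:
            task = self.pending.popleft()  # raises IndexError if the queue is empty
            self.in_progress[task.id] = task  # assumes tasks carry an `id` attribute
            return task

    def complete_task(self, task_id: str):
        with self.lock:
            self.in_progress.pop(task_id, None)

    def reset(self, task_id: str):
        with self.lock:
            task = self.in_progress.pop(task_id, None)
            if task is not None:
                self.pending.appendleft(task)  # put the failed task back at the front
```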
@@ -0,0 +1,27 @@
import os

from dotenv import load_dotenv

# Import the Agent struct and the HuggingfaceLLM model
from swarms import Agent, HuggingfaceLLM

# Load the environment variables
load_dotenv()

# Get the API key from the environment (not used by HuggingfaceLLM in this example)
api_key = os.environ.get("OPENAI_API_KEY")

# Initialize the language model
llm = HuggingfaceLLM(model_id="meta-llama/Meta-Llama-3-8B").cuda()

# Initialize the workflow
agent = Agent(
    llm=llm,
    max_loops="auto",
    autosave=True,
    dashboard=True,
    interactive=True,
)

# Run the workflow on a task
agent.run("Generate a 10,000 word blog on health and wellness.")
@@ -0,0 +1,90 @@
from swarms import BaseSwarm, AutoSwarm, AutoSwarmRouter, Agent, Anthropic


class MarketingSwarm(BaseSwarm):
    """
    A class representing a marketing swarm.

    Attributes:
        name (str): The name of the marketing swarm.
        market_trend_analyzer (Agent): An agent for analyzing market trends.
        content_idea_generator (Agent): An agent for generating content ideas.
        campaign_optimizer (Agent): An agent for optimizing marketing campaigns.

    Methods:
        run(task: str, *args, **kwargs) -> Any: Runs the marketing swarm for the given task.
    """

    def __init__(self, name="kyegomez/marketingswarm", *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.name = name

        # Agent for market trend analysis
        self.market_trend_analyzer = Agent(
            agent_name="Market Trend Analyzer",
            system_prompt="Analyze market trends to identify opportunities for marketing campaigns.",
            llm=Anthropic(),
            max_loops=1,
            autosave=True,
            dashboard=False,
            streaming_on=True,
            verbose=True,
            stopping_token="<DONE>",
        )

        # Agent for content idea generation
        self.content_idea_generator = Agent(
            agent_name="Content Idea Generator",
            system_prompt="Generate content ideas based on market trends.",
            llm=Anthropic(),
            max_loops=1,
            autosave=True,
            dashboard=False,
            streaming_on=True,
            verbose=True,
            stopping_token="<DONE>",
        )

        # Agent for campaign optimization
        self.campaign_optimizer = Agent(
            agent_name="Campaign Optimizer",
            system_prompt="Optimize marketing campaigns based on content ideas and market trends.",
            llm=Anthropic(),
            max_loops=1,
            autosave=True,
            dashboard=False,
            streaming_on=True,
            verbose=True,
            stopping_token="<DONE>",
        )

    def run(self, task: str, *args, **kwargs):
        """
        Runs the marketing swarm for the given task.

        Args:
            task (str): The task to be performed by the marketing swarm.
            *args: Additional positional arguments.
            **kwargs: Additional keyword arguments.

        Returns:
            Any: The result of running the marketing swarm.
        """
        # Analyze market trends
        analyzed_trends = self.market_trend_analyzer.run(
            task, *args, **kwargs
        )

        # Generate content ideas based on market trends
        content_ideas = self.content_idea_generator.run(
            task, analyzed_trends, *args, **kwargs
        )

        # Optimize marketing campaigns based on content ideas and market trends
        optimized_campaigns = self.campaign_optimizer.run(
            task, content_ideas, analyzed_trends, *args, **kwargs
        )

        return optimized_campaigns
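

# Usage sketch (an illustrative addition, not part of the original example):
# running the swarm end to end assumes valid Anthropic credentials in the
# environment; the task string below is a hypothetical example.
if __name__ == "__main__":
    swarm = MarketingSwarm()
    campaigns = swarm.run("Plan a launch campaign for a new fitness product")
    print(campaigns)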
@@ -0,0 +1,49 @@
from swarms import AutoSwarm, AutoSwarmRouter, BaseSwarm, Agent, Anthropic


class MySwarm(BaseSwarm):
    def __init__(self, name="kyegomez/myswarm", *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.name = name

        # Define and add your agents here
        self.agent1 = Agent(
            agent_name="Agent 1",
            system_prompt="A specialized agent for task 1.",
            llm=Anthropic(),
            max_loops=1,
            autosave=True,
            dashboard=False,
            streaming_on=True,
            verbose=True,
            stopping_token="<DONE>",
        )
        self.agent2 = Agent(
            agent_name="Agent 2",
            system_prompt="A specialized agent for task 2.",
            llm=Anthropic(),
            max_loops=1,
            autosave=True,
            dashboard=False,
            streaming_on=True,
            verbose=True,
            stopping_token="<DONE>",
        )
        self.agent3 = Agent(
            agent_name="Agent 3",
            system_prompt="A specialized agent for task 3.",
            llm=Anthropic(),
            max_loops=1,
            autosave=True,
            dashboard=False,
            streaming_on=True,
            verbose=True,
            stopping_token="<DONE>",
        )

    def run(self, task: str, *args, **kwargs):
        # Add your multi-agent logic here
        output1 = self.agent1.run(task, *args, **kwargs)
        output2 = self.agent2.run(task, output1, *args, **kwargs)
        output3 = self.agent3.run(task, output2, *args, **kwargs)
        return output3
@@ -1,34 +0,0 @@
import pandas as pd


# CSV to dataframe
def csv_to_dataframe(file_path):
    """
    Read a CSV file and return a pandas DataFrame.

    Parameters:
        file_path (str): The path to the CSV file.

    Returns:
        pandas.DataFrame: The DataFrame containing the data from the CSV file.
    """
    df = pd.read_csv(file_path)
    return df


# Dataframe to strings
def dataframe_to_strings(df):
    """
    Converts a pandas DataFrame to a list of string representations of each row.

    Args:
        df (pandas.DataFrame): The DataFrame to convert.

    Returns:
        list: A list of string representations of each row in the DataFrame.
    """
    row_strings = []
    for index, row in df.iterrows():
        row_string = row.to_string()
        row_strings.append(row_string)
    return row_strings
@@ -1,57 +0,0 @@
import torch
from torch import nn


def load_model_torch(
    model_path: str = None,
    device: torch.device = None,
    model: nn.Module = None,
    strict: bool = True,
    map_location=None,
    *args,
    **kwargs,
) -> nn.Module:
    """
    Load a PyTorch model from a given path and move it to the specified device.

    Args:
        model_path (str): Path to the saved model file.
        device (torch.device): Device to move the model to.
        model (nn.Module): The model architecture, if the model file only contains the state dictionary.
        strict (bool): Whether to strictly enforce that the keys in the state dictionary match the keys returned by the model's `state_dict()` function.
        map_location (callable): A function to remap the storage locations of the loaded model.
        *args: Additional arguments to pass to `torch.load`.
        **kwargs: Additional keyword arguments to pass to `torch.load`.

    Returns:
        nn.Module: The loaded model.

    Raises:
        FileNotFoundError: If the model file is not found.
        RuntimeError: If there is an error while loading the model.
    """
    if device is None:
        device = torch.device(
            "cuda" if torch.cuda.is_available() else "cpu"
        )

    try:
        if model is None:
            model = torch.load(
                model_path, map_location=map_location, *args, **kwargs
            )
        else:
            model.load_state_dict(
                torch.load(
                    model_path,
                    map_location=map_location,
                    *args,
                    **kwargs,
                ),
                strict=strict,
            )
        return model.to(device)
    except FileNotFoundError:
        raise FileNotFoundError(f"Model file not found: {model_path}")
    except RuntimeError as e:
        raise RuntimeError(f"Error loading model: {str(e)}")
@@ -1,49 +0,0 @@
import pandas as pd


def dataframe_to_text(
    df: pd.DataFrame,
    parsing_func: callable = None,
) -> str:
    """
    Convert a pandas DataFrame to a string representation.

    Args:
        df (pd.DataFrame): The pandas DataFrame to convert.
        parsing_func (callable, optional): A function to parse the resulting text. Defaults to None.

    Returns:
        str: The string representation of the DataFrame.

    Example:
        >>> df = pd.DataFrame({
        ...     'A': [1, 2, 3],
        ...     'B': [4, 5, 6],
        ...     'C': [7, 8, 9],
        ... })
        >>> print(dataframe_to_text(df))
    """
    # Get a string representation of the dataframe
    df_str = df.to_string()

    # Get a string representation of the column names
    info_str = df.info()

    # Combine the dataframe string and the info string
    text = f"DataFrame:\n{df_str}\n\nInfo:\n{info_str}"

    if parsing_func:
        text = parsing_func(text)

    return text


# Example usage:
# df = pd.DataFrame({
#     'A': [1, 2, 3],
#     'B': [4, 5, 6],
#     'C': [7, 8, 9],
# })

# print(dataframe_to_text(df))