diff --git a/DOCS/agents/README.md b/DOCS/agents/README.md
index c45d390f..c30fd7d2 100644
--- a/DOCS/agents/README.md
+++ b/DOCS/agents/README.md
@@ -1,7 +1,30 @@
+# Agents
+
+Agents are the individual building blocks of a swarm. Each agent has a driving force (in our case, a language model), long-term memory, and the capacity to use tools. In other words, an agent is:
+
+LLM => Long Term Memory => Tools
+
+That's it.
+
+That's as simple as it can get.
+
+Our Agent classes have to be as simple as humanly possible: they should be plug-and-play with any of our language model classes, vector stores, and tools. (A minimal sketch of this composition appears at the end of this document.)
+
+## File structure
+```
+* memory
+* models
+* tools
+* utils
+* mem
+```
+
+
 # Swarms Documentation
-====================
 
 ## Language Models
----------------
 
 Language models are the driving force of our agents. They are responsible for generating text based on a given prompt. We currently support two types of language models: Anthropic and HuggingFace.
 
@@ -13,55 +36,72 @@ The `Anthropic` class is a wrapper for the Anthropic large language models.
 
 #### Initialization
 
 ```
+
 Anthropic(model="claude-2", max_tokens_to_sample=256, temperature=None, top_k=None, top_p=None, streaming=False, default_request_timeout=None)
+
 ```
-Copy code
+
 
 ##### Parameters
 
-- `model` (str, optional): The name of the model to use. Default is "claude-2".
-- `max_tokens_to_sample` (int, optional): The maximum number of tokens to sample. Default is 256.
-- `temperature` (float, optional): The temperature to use for the generation. Higher values result in more random outputs.
-- `top_k` (int, optional): The number of top tokens to consider for the generation.
-- `top_p` (float, optional): The cumulative probability of parameter highest probability vocabulary tokens to keep for nucleus sampling.
-- `streaming` (bool, optional): Whether to use streaming mode. Default is False.
-- `default_request_timeout` (int, optional): The default request timeout in seconds. Default is 600.
+- `model` (str, optional): The name of the model to use. Default is "claude-2".
+
+- `max_tokens_to_sample` (int, optional): The maximum number of tokens to sample. Default is 256.
+
+- `temperature` (float, optional): The sampling temperature. Higher values result in more random outputs.
+
+- `top_k` (int, optional): The number of highest-probability tokens to consider during generation.
+
+- `top_p` (float, optional): The cumulative probability of the highest-probability vocabulary tokens to keep for nucleus sampling.
+
+- `streaming` (bool, optional): Whether to use streaming mode. Default is False.
+
+- `default_request_timeout` (int, optional): The default request timeout in seconds. Default is 600.
 
 ##### Example
 
 ```
+
 anthropic = Anthropic(model="claude-2", max_tokens_to_sample=100, temperature=0.8)
+
 ```
-Copy code
+
 
 #### Generation
 
 ```
+
 anthropic.generate(prompt, stop=None)
+
 ```
-Copy code
+
 
 ##### Parameters
 
-- `prompt` (str): The prompt to use for the generation.
-- `stop` (list, optional): A list of stop sequences. The generation will stop if one of these sequences is encountered.
+- `prompt` (str): The prompt to use for the generation.
+
+- `stop` (list, optional): A list of stop sequences. Generation stops if one of these sequences is encountered.
 
 ##### Returns
 
-- `str`: The generated text.
+- `str`: The generated text.
 
 ##### Example
 
 ```
+
 prompt = "Once upon a time"
+
 stop = ["The end"]
+
 print(anthropic.generate(prompt, stop))
+
 ```
-Copy code
+
 
 ### HuggingFaceLLM
 
@@ -70,47 +110,98 @@ The `HuggingFaceLLM` class is a wrapper for the HuggingFace language models.
 
 #### Initialization
 
 ```
+
 HuggingFaceLLM(model_id: str, device: str = None, max_length: int = 20, quantize: bool = False, quantization_config: dict = None)
+
 ```
-Copy code
+
 
 ##### Parameters
 
-- `model_id` (str): The ID of the model to use.
-- `device` (str, optional): The device to use for the generation. Default is "cuda" if available, otherwise "cpu".
-- `max_length` (int, optional): The maximum length of the generated text. Default is 20.
-- `quantize` (bool, optional): Whether to quantize the model. Default is False.
-- `quantization_config` (dict, optional): The configuration for the quantization.
+- `model_id` (str): The ID of the model to use.
+
+- `device` (str, optional): The device to use for generation. Default is "cuda" if available, otherwise "cpu".
+
+- `max_length` (int, optional): The maximum length of the generated text. Default is 20.
+
+- `quantize` (bool, optional): Whether to quantize the model. Default is False.
+
+- `quantization_config` (dict, optional): The configuration for quantization (a sketch appears after the Full Examples section).
 
 ##### Example
 
 ```
+
 huggingface = HuggingFaceLLM(model_id="gpt2", device="cpu", max_length=50)
+
 ```
-Copy code
+
 
 #### Generation
 
 ```
+
 huggingface.generate(prompt_text: str, max_length: int = None)
+
 ```
-Copy code
+
 
 ##### Parameters
 
-- `prompt_text` (str): The prompt to use for the generation.
-- `max_length` (int, optional): The maximum length of the generated text. If not provided, the default value specified during initialization is used.
+- `prompt_text` (str): The prompt to use for the generation.
+
+- `max_length` (int, optional): The maximum length of the generated text. If not provided, the default specified during initialization is used.
 
 ##### Returns
 
-- `str`: The generated text.
+- `str`: The generated text.
 
 ##### Example
 
 ```
+
+prompt = "Once upon a time"
+
+print(huggingface.generate(prompt))
+
+```
+
+
+### Full Examples
+
+```python
+# Import the necessary classes
+
+from swarms import Anthropic, HuggingFaceLLM
+
+# Create an instance of the Anthropic class
+
+anthropic = Anthropic(model="claude-2", max_tokens_to_sample=100, temperature=0.8)
+
+# Use the Anthropic instance to generate text
+
 prompt = "Once upon a time"
+
+stop = ["The end"]
+
+print("Anthropic output:")
+
+print(anthropic.generate(prompt, stop))
+
+# Create an instance of the HuggingFaceLLM class
+
+huggingface = HuggingFaceLLM(model_id="gpt2", device="cpu", max_length=50)
+
+# Use the HuggingFaceLLM instance to generate text
+
+prompt = "Once upon a time"
+
+print("\nHuggingFaceLLM output:")
+
 print(huggingface.generate(prompt))
-```
\ No newline at end of file
+
+```
+
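+### Quantization sketch
+
+How `quantization_config` is consumed depends on the implementation; the snippet below is a sketch, not a definitive reference. It assumes the dict is used together with `quantize=True` and forwarded to a bitsandbytes-style 8-bit loader in `transformers`, so the keys shown are illustrative and may differ in your installed version.
+
+```python
+# Sketch: loading an 8-bit quantized model. Assumes `quantization_config`
+# is forwarded to the underlying transformers loader; the keys follow
+# bitsandbytes-style options and may differ across versions.
+from swarms import HuggingFaceLLM
+
+quantization_config = {
+    "load_in_8bit": True,       # store weights in 8-bit precision
+    "llm_int8_threshold": 6.0,  # outlier threshold for int8 matmuls
+}
+
+huggingface = HuggingFaceLLM(
+    model_id="gpt2",
+    quantize=True,
+    quantization_config=quantization_config,
+)
+
+print(huggingface.generate("Once upon a time", max_length=50))
+```
+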
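+### Minimal Agent sketch
+
+Tying this back to the Agents section at the top: an agent is just LLM => Long Term Memory => Tools. The `Agent` class below is not part of the codebase; it is a minimal sketch of how those three pieces could compose, using a plain list as a stand-in for a vector-store memory and a dict of callables as tools.
+
+```python
+# Sketch only: `Agent` is illustrative, not a class shipped by swarms.
+# It wires together the three pieces named above: an LLM, long-term
+# memory, and tools.
+from swarms import HuggingFaceLLM
+
+
+class Agent:
+    """A minimal agent: a language model plus memory plus tools."""
+
+    def __init__(self, llm, memory=None, tools=None):
+        self.llm = llm              # driving force: any LLM wrapper
+        self.memory = memory or []  # stand-in for a vector store
+        self.tools = tools or {}    # tool name -> callable
+
+    def run(self, prompt):
+        # Recall: prepend everything remembered so far to the prompt.
+        context = "\n".join(self.memory)
+        output = self.llm.generate(f"{context}\n{prompt}".strip())
+        # Remember: persist the exchange for later turns.
+        self.memory.append(f"{prompt} -> {output}")
+        return output
+
+
+agent = Agent(llm=HuggingFaceLLM(model_id="gpt2", max_length=50))
+print(agent.run("Once upon a time"))
+```
+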