pull/522/head
Kye Gomez 7 months ago
parent 1aa9a84a5e
commit aff2220b54

@@ -158,10 +158,10 @@ nav:
- AutoSwarmRouter: "swarms/structs/auto_swarm_router.md"
- AutoSwarm: "swarms/structs/auto_swarm.md"
- GroupChat: "swarms/structs/group_chat.md"
- Swarms Cloud API:
- Overview: "swarms_cloud/main.md"
- Available Models: "swarms_cloud/available_models.md"
- Agent API: "swarms_cloud/agent_api.md"
- Migrate from OpenAI to Swarms in 3 lines of code: "swarms_cloud/migrate_openai.md"
- Getting Started with SOTA Vision Language Models VLM: "swarms_cloud/getting_started.md"
- Enterprise Guide to High-Performance Multi-Agent LLM Deployments: "swarms_cloud/production_deployment.md"

@@ -113,7 +113,7 @@ rearrange(agents, flow, task)
Here's an example of how to use the `AgentRearrange` class and the `rearrange` function:
```python
from swarms import Agent, AgentRearrange, rearrange
from swarms import Agent, AgentRearrange
from typing import List
# Initialize the director agent
@@ -169,10 +169,6 @@ agent_system = AgentRearrange(agents=agents, flow=flow)
output = agent_system.run("Process monthly financial statements")
print(output)
# Using rearrange function
output = rearrange(agents, flow, "Process monthly financial statements")
print(output)
```
In this example, we first initialize three agents: `director`, `worker1`, and `worker2`. Then, we create a list of these agents and define the flow pattern `"Director -> Worker1 -> Worker2"`.

@@ -0,0 +1,220 @@
# Swarms FastAPI Documentation
The Swarms FastAPI module is designed to manage and interact with multiple language models through a RESTful API interface. This documentation covers the classes, functions, and endpoints provided by the module and gives comprehensive examples of how to use them effectively.
### Purpose
The purpose of this module is to provide a flexible, scalable API service that can interface with various language models, including OpenAIChat, GPT4o, GPT4VisionAPI, and Anthropic models. This allows for dynamic model selection, efficient token counting, and handling of user requests for AI-generated content.
### Key Features
- **Dynamic Model Switching**: Easily switch between different language models based on the user input.
- **Token Counting**: Efficiently count tokens using the `tiktoken` library.
- **Agent Configuration**: Configure and run agents with detailed settings for various tasks.
- **CORS Handling**: Support for Cross-Origin Resource Sharing (CORS) to allow web-based clients to interact with the API.
## Class Definitions
### `AgentInput`
The `AgentInput` class defines the structure of the input data required to configure and run an agent.
| Parameter | Type | Default | Description |
|-----------------------------|-----------------|---------------|-----------------------------------------------------------------------------|
| `agent_name` | `str` | "Swarm Agent" | The name of the agent. |
| `system_prompt` | `str` or `None` | `None` | The system prompt to guide the agent's behavior. |
| `agent_description` | `str` or `None` | `None` | A description of the agent's purpose. |
| `model_name` | `str` | "OpenAIChat" | The name of the language model to use. |
| `max_loops` | `int` | 1 | The maximum number of loops the agent should perform. |
| `autosave` | `bool` | `False` | Whether to enable autosave functionality. |
| `dynamic_temperature_enabled` | `bool` | `False` | Whether dynamic temperature adjustment is enabled. |
| `dashboard` | `bool` | `False` | Whether to enable the dashboard feature. |
| `verbose` | `bool` | `False` | Whether to enable verbose logging. |
| `streaming_on` | `bool` | `True` | Whether to enable streaming of responses. |
| `saved_state_path` | `str` or `None` | `None` | Path to save the agent's state. |
| `sop` | `str` or `None` | `None` | Standard operating procedures for the agent. |
| `sop_list` | `List[str]` or `None` | `None` | A list of standard operating procedures. |
| `user_name` | `str` | "User" | The name of the user interacting with the agent. |
| `retry_attempts` | `int` | 3 | Number of retry attempts for failed operations. |
| `context_length` | `int` | 8192 | Maximum context length for the model's input. |
| `task` | `str` or `None` | `None` | The task description for the agent to perform. |
### `AgentOutput`
The `AgentOutput` class defines the structure of the output data returned by the agent after processing a request.
| Parameter | Type | Description |
|-----------------|-----------------|-----------------------------------------------------------------------------|
| `agent` | `AgentInput` | The input configuration used to create the agent. |
| `completions` | `ChatCompletionResponse` | The response generated by the agent, including completion data. |
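The `ChatCompletionResponse` schema is not reproduced in this documentation, so the sketch below uses a simplified stand-in for it; only the overall shape of `AgentOutput` is meant to match the table above, and the field names inside the stand-in are assumptions for illustration.
```python
from typing import Optional
from pydantic import BaseModel

class ChatCompletionResponse(BaseModel):
    # Simplified stand-in; the real schema lives in the Swarms codebase.
    model: str
    content: Optional[str] = None

class AgentInput(BaseModel):
    # Trimmed to a few fields; see the full parameter table above.
    agent_name: str = "Swarm Agent"
    model_name: str = "OpenAIChat"
    task: Optional[str] = None

class AgentOutput(BaseModel):
    agent: AgentInput                      # input configuration used to create the agent
    completions: ChatCompletionResponse    # completion data returned by the agent
```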
## Functions
### `count_tokens`
The `count_tokens` function counts the number of tokens in a given text using the `tiktoken` library.
**Parameters:**
- `text` (`str`): The text to be tokenized and counted.
**Returns:**
- `int`: The number of tokens in the text.
**Example Usage:**
```python
text = "This is a sample text to count tokens."
token_count = count_tokens(text)
print(f"Token count: {token_count}")
```
### `model_router`
The `model_router` function switches to the specified language model based on the provided model name.
**Parameters:**
- `model_name` (`str`): The name of the model to switch to.
**Returns:**
- An instance of the specified language model.
**Example Usage:**
```python
model_name = "OpenAIChat"
model_instance = model_router(model_name)
```
## FastAPI Endpoints
### `/v1/models`
This endpoint lists the available models.
**Method:** `GET`
**Response Model:** `List[str]`
**Description:**
Returns a list of available model names so that clients can discover which options are supported.
**Example Usage:**
```python
import requests

# Query the API for the list of available models
response = requests.get("http://api.swarms.world/v1/models")
print(response.json())
```
### `/v1/agent/completions`
This endpoint handles the completion request for an agent configured with the given input parameters.
**Method:** `POST`
**Request Model:** `AgentInput`
**Response Model:** `AgentOutput`
**Description:**
Receives an `AgentInput` configuration, sets up the agent, processes the request, and returns the completion results.
**Example Usage:**
```python
import requests
from pydantic import BaseModel
from typing import List, Optional

class AgentInput(BaseModel):
    agent_name: str = "Swarm Agent"
    system_prompt: Optional[str] = None
    agent_description: Optional[str] = None
    model_name: str = "OpenAIChat"
    max_loops: int = 1
    autosave: bool = False
    dynamic_temperature_enabled: bool = False
    dashboard: bool = False
    verbose: bool = False
    streaming_on: bool = True
    saved_state_path: Optional[str] = None
    sop: Optional[str] = None
    sop_list: Optional[List[str]] = None
    user_name: str = "User"
    retry_attempts: int = 3
    context_length: int = 8192
    task: Optional[str] = None

agent_input = AgentInput(task="Generate a summary of the provided text.")
response = requests.post(
    "http://api.swarms.world/v1/agent/completions",
    json=agent_input.dict(),  # use agent_input.model_dump() on Pydantic v2
)
print(response.json())
```
## Implementation Details
### FastAPI App Initialization
The FastAPI application is initialized with CORS middleware to allow cross-origin requests. This is essential for enabling web-based clients to interact with the API without facing CORS issues.
```python
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
app = FastAPI(debug=True)
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)
```
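The wildcard configuration above is convenient for development, but production deployments usually restrict origins to an explicit allow-list. A minimal sketch, using the placeholder origin `https://app.example.com`:
```python
# Restrict CORS to known origins in production; the origin below is a placeholder.
app.add_middleware(
    CORSMiddleware,
    allow_origins=["https://app.example.com"],
    allow_credentials=True,
    allow_methods=["GET", "POST"],
    allow_headers=["*"],
)
```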
### Model Switching Logic
The `model_router` function encapsulates the logic for switching between different language models based on the input model name. This is crucial for dynamic model selection based on user preferences.
```python
def model_router(model_name: str):
    if model_name == "OpenAIChat":
        llm = OpenAIChat()
    elif model_name == "GPT4o":
        llm = GPT4o(openai_api_key=os.getenv("OPENAI_API_KEY"))
    elif model_name == "GPT4VisionAPI":
        llm = GPT4VisionAPI()
    elif model_name == "Anthropic":
        llm = Anthropic(anthropic_api_key=os.getenv("ANTHROPIC_API_KEY"))
    else:
        raise ValueError("Invalid model name provided.")
    return llm
```
### Token Counting Function
The `count_tokens` function uses the `tiktoken` library to encode and count the number of tokens in a given text. This is essential for managing token limits and understanding the cost implications of API requests.
```python
def count_tokens(text: str):
    try:
        # "gpt-4o" is a model name rather than an encoding name, so use
        # encoding_for_model to resolve the correct tiktoken encoding.
        encoding = tiktoken.encoding_for_model("gpt-4o")
        tokens = encoding.encode(text)
        return len(tokens)
    except Exception as e:
        raise HTTPException(status_code=400, detail=str(e))
```
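### Putting the Pieces Together
The module's actual endpoint handlers are not reproduced in this documentation. The sketch below only illustrates how `model_router`, `count_tokens`, and the `Agent` class from `swarms` could plausibly be wired into the two endpoints; the `Agent` keyword arguments and the simplified dictionary response are assumptions, not the module's real implementation.
```python
from typing import List

from swarms import Agent

@app.get("/v1/models", response_model=List[str])
async def list_models():
    # Names accepted by model_router; extend this when adding new backends.
    return ["OpenAIChat", "GPT4o", "GPT4VisionAPI", "Anthropic"]

@app.post("/v1/agent/completions")
async def agent_completions(agent_input: AgentInput):
    # Resolve the requested backend and run the agent on the supplied task.
    llm = model_router(agent_input.model_name)
    agent = Agent(
        agent_name=agent_input.agent_name,
        system_prompt=agent_input.system_prompt,
        llm=llm,
        max_loops=agent_input.max_loops,
    )
    output = agent.run(agent_input.task)
    # Simplified response shape standing in for the module's AgentOutput schema.
    return {
        "agent": agent_input.dict(),
        "completions": {
            "model": agent_input.model_name,
            "content": str(output),
            "completion_tokens": count_tokens(str(output)),
        },
    }
```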
## Additional Information and Tips
- **Error Handling**: Ensure robust error handling by catching exceptions and returning meaningful HTTP status codes and messages.
- **Model Selection**: When adding new models, update the `model_router` function and the `/v1/models` endpoint to include the new model names (see the sketch after this list).
- **Token Management**: Keep track of token usage to optimize API costs and manage rate limits effectively.
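To illustrate the model-selection tip above, one way to keep `model_router` and the `/v1/models` endpoint in sync is a small registry dict. This is an alternative sketch rather than the module's actual implementation, and it assumes each backend can be constructed as shown earlier:
```python
import os

# Alternative registry pattern (sketch): map model names to factory callables.
MODEL_REGISTRY = {
    "OpenAIChat": lambda: OpenAIChat(),
    "GPT4o": lambda: GPT4o(openai_api_key=os.getenv("OPENAI_API_KEY")),
    "GPT4VisionAPI": lambda: GPT4VisionAPI(),
    "Anthropic": lambda: Anthropic(anthropic_api_key=os.getenv("ANTHROPIC_API_KEY")),
    # Add new backends here; /v1/models can then return list(MODEL_REGISTRY).
}

def model_router(model_name: str):
    try:
        return MODEL_REGISTRY[model_name]()
    except KeyError:
        raise ValueError("Invalid model name provided.")
```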
## References and Resources
- [FastAPI Documentation](https://fastapi.tiangolo.com/)
- [Pydantic Documentation](https://pydantic-docs.helpmanual.io/)
- [tiktoken Library](https://github.com/openai/tiktoken)
- [OpenAI API Documentation](https://beta.openai.com/docs/)
This documentation provides a comprehensive guide to using the Swarms FastAPI module, with detailed descriptions, examples, and implementation insights to help developers effectively utilize the provided functionalities.