updated swarms docs

pull/791/head
Kye Gomez 2 months ago
parent c240e0c734
commit 5477635441

@ -43,20 +43,27 @@ Run a single swarm with specified agents and tasks.
#### Request Parameters

| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| name | string | Optional | "swarms-01" | Name of the swarm (max 100 chars) |
| description | string | Optional | - | Description of the swarm (max 500 chars) |
| agents | array | Required | - | Array of agent configurations |
| max_loops | integer | Optional | 1 | Maximum number of iterations |
| swarm_type | string | Optional | - | Type of swarm workflow |
| task | string | Required | - | The task to be performed |
| img | string | Optional | - | Image URL if relevant |
| return_history | boolean | Optional | true | Whether to return the full conversation history |
| rules | string | Optional | - | Rules for the swarm to follow |
| rearrange_flow | string | Optional | - | Flow pattern for agent rearrangement |
| output_type | string | Optional | "str" | Output format ("str", "json", "dict", "yaml", "list") |
| schedule | object | Optional | - | Scheduling information for the swarm |
#### Schedule Configuration Parameters

| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| scheduled_time | datetime | Required | - | When to execute the swarm (UTC) |
| timezone | string | Optional | "UTC" | Timezone for the scheduled time |
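For illustration, a request body that leans on the defaults above can be assembled as follows. This is a minimal sketch: the `build_swarm_payload` helper, the agent, and the task are illustrative, not part of the API.

```python
# Hypothetical helper assembling a request body with the documented defaults.
# The agent and task values are illustrative only.
def build_swarm_payload(task, agents, name="swarms-01", max_loops=1, output_type="str"):
    return {
        "name": name,                # optional, defaults to "swarms-01"
        "agents": agents,            # required: array of agent configurations
        "task": task,                # required
        "max_loops": max_loops,      # optional, defaults to 1
        "output_type": output_type,  # one of "str", "json", "dict", "yaml", "list"
        "return_history": True,      # optional, defaults to true
    }

payload = build_swarm_payload(
    task="Summarize today's AI news",
    agents=[{"agent_name": "Researcher", "model_name": "gpt-4o", "role": "worker"}],
)
```

Optional fields such as `rules`, `swarm_type`, or `schedule` can be added to the same dictionary when needed.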
#### Agent Configuration Parameters
@ -65,114 +72,255 @@ Run a single swarm with specified agents and tasks.
| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| agent_name | string | Required | - | Name of the agent (max 100 chars) |
| description | string | Optional | - | Description of the agent (max 500 chars) |
| system_prompt | string | Optional | - | System prompt for the agent (max 500 chars) |
| model_name | string | Optional | "gpt-4o" | Model to be used by the agent |
| auto_generate_prompt | boolean | Optional | false | Whether to auto-generate prompts |
| max_tokens | integer | Optional | - | Maximum tokens for response |
| temperature | float | Optional | 0.5 | Temperature for response generation |
| role | string | Optional | "worker" | Role of the agent |
| max_loops | integer | Optional | 1 | Maximum iterations for this agent |
## Available Swarm Types

| Swarm Type | Description |
|------------|-------------|
| AgentRearrange | Rearranges agents dynamically to optimize task execution |
| MixtureOfAgents | Combines different agents to leverage their unique capabilities |
| SpreadSheetSwarm | Utilizes spreadsheet-like operations for data manipulation |
| SequentialWorkflow | Executes tasks in a predefined sequential order |
| ConcurrentWorkflow | Runs tasks concurrently to improve efficiency |
| GroupChat | Facilitates communication among agents in a chat format |
| MultiAgentRouter | Routes tasks to agents based on their expertise |
| AutoSwarmBuilder | Automatically constructs swarms based on task requirements |
| HiearchicalSwarm | Organizes agents in a hierarchy for complex tasks |
| auto | Automatically selects the most suitable swarm type |
| MajorityVoting | Uses majority voting to reach consensus on outcomes |
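Since `swarm_type` is passed verbatim in the request body, a client can validate it against this table before sending. A sketch (the helper is illustrative, not part of the API; note that "HiearchicalSwarm" is spelled exactly as the API expects it):

```python
# Values copied from the table above.
SWARM_TYPES = {
    "AgentRearrange", "MixtureOfAgents", "SpreadSheetSwarm", "SequentialWorkflow",
    "ConcurrentWorkflow", "GroupChat", "MultiAgentRouter", "AutoSwarmBuilder",
    "HiearchicalSwarm", "auto", "MajorityVoting",
}

def check_swarm_type(swarm_type: str) -> str:
    # "auto" lets the API choose a workflow; anything else must match the table.
    if swarm_type not in SWARM_TYPES:
        raise ValueError(f"Unknown swarm_type: {swarm_type}")
    return swarm_type
```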
## Job Scheduling Endpoints

### Schedule a Swarm

Schedule a swarm to run at a specific time.

**Endpoint:** `POST /v1/swarm/schedule`

**Authentication Required:** Yes

#### Request Format

Same as single swarm completion, with an additional `schedule` object:
```json
{
  "name": "Scheduled Swarm",
  "agents": [...],
  "task": "Perform analysis",
  "schedule": {
    "scheduled_time": "2024-03-20T15:00:00Z",
    "timezone": "America/New_York"
  }
}
```
### List Scheduled Jobs

Get all scheduled swarm jobs.

**Endpoint:** `GET /v1/swarm/schedule`

**Authentication Required:** Yes

#### Response Format

```json
{
  "status": "success",
  "scheduled_jobs": [
    {
      "job_id": "swarm_analysis_1234567890",
      "swarm_name": "Analysis Swarm",
      "scheduled_time": "2024-03-20T15:00:00Z",
      "timezone": "America/New_York"
    }
  ]
}
```
### Cancel Scheduled Job

Cancel a scheduled swarm job.

**Endpoint:** `DELETE /v1/swarm/schedule/{job_id}`

**Authentication Required:** Yes

#### Response Format

```json
{
  "status": "success",
  "message": "Scheduled job cancelled successfully",
  "job_id": "swarm_analysis_1234567890"
}
```
## Billing and Credits
The API uses a credit-based billing system with the following components:
### Cost Calculation
| Component | Cost |
|-----------|------|
| Base cost per agent | $0.01 |
| Input tokens (per 1M) | $2.00 |
| Output tokens (per 1M) | $6.00 |
Special pricing:

- California night-time hours (8 PM to 6 AM PT): 75% discount on token costs

Credits are deducted in the following order:

1. Free credits
2. Regular credits
Costs are calculated based on:
- Number of agents used
- Total input tokens (including system prompts and agent memory)
- Total output tokens generated
- Execution time
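As a worked example of the cost table above (a client-side estimate only; actual billing is computed server-side and may differ in rounding or edge cases):

```python
AGENT_COST = 0.01      # base cost per agent, USD
INPUT_PER_1M = 2.00    # USD per 1M input tokens
OUTPUT_PER_1M = 6.00   # USD per 1M output tokens

def estimate_cost(num_agents, input_tokens, output_tokens, night_discount=False):
    token_cost = (input_tokens / 1_000_000) * INPUT_PER_1M + (
        output_tokens / 1_000_000
    ) * OUTPUT_PER_1M
    if night_discount:  # 8 PM to 6 AM PT: 75% off token costs only
        token_cost *= 0.25
    return num_agents * AGENT_COST + token_cost

# 2 agents, 1.5M input tokens, 3M output tokens:
# 2 * 0.01 + 1.5 * 2.00 + 3 * 6.00 = 0.02 + 3.00 + 18.00 = 21.02 USD
cost = estimate_cost(2, 1_500_000, 3_000_000)
```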
## Best Practices
1. Start with small swarms and gradually increase complexity
2. Monitor credit usage and token counts
3. Use appropriate max_loops values to control execution
4. Implement proper error handling for API responses
5. Consider using batch completions for multiple related tasks
## Response Structures
### Single Swarm Response
```json
{
  "status": "success",
  "swarm_name": "Test Swarm",
  "description": "A test swarm",
  "swarm_type": "ConcurrentWorkflow",
  "task": "Write a blog post",
  "output": {
    // Swarm output here
  },
  "metadata": {
    "max_loops": 1,
    "num_agents": 2,
    "execution_time_seconds": 5.23,
    "completion_time": 1647123456.789,
    "billing_info": {
      "cost_breakdown": {
        "agent_cost": 0.02,
        "input_token_cost": 0.015,
        "output_token_cost": 0.045,
        "token_counts": {
          "total_input_tokens": 1500,
          "total_output_tokens": 3000,
          "total_tokens": 4500,
          "per_agent": {
            "agent1": {
              "input_tokens": 750,
              "output_tokens": 1500,
              "total_tokens": 2250
            },
            "agent2": {
              "input_tokens": 750,
              "output_tokens": 1500,
              "total_tokens": 2250
            }
          }
        },
        "num_agents": 2,
        "execution_time_seconds": 5.23
      },
      "total_cost": 0.08
    }
  }
}
```
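Given a response of this shape, the billing totals can be pulled out of the parsed dictionary like so (a sketch; field names are taken from the structure above and assumed stable):

```python
def summarize_billing(result: dict) -> dict:
    # Navigate the nested structure shown above.
    billing = result["metadata"]["billing_info"]
    breakdown = billing["cost_breakdown"]
    return {
        "total_cost": billing["total_cost"],
        "total_tokens": breakdown["token_counts"]["total_tokens"],
        "num_agents": breakdown["num_agents"],
    }

# Trimmed sample following the response structure above.
sample = {
    "metadata": {
        "billing_info": {
            "cost_breakdown": {
                "token_counts": {"total_tokens": 4500},
                "num_agents": 2,
            },
            "total_cost": 0.08,
        }
    }
}
summary = summarize_billing(sample)
```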
### Batch Swarm Response
```json
[
  {
    "status": "success",
    "swarm_name": "Batch Swarm 1",
    "output": {},
    "metadata": {}
  },
  {
    "status": "success",
    "swarm_name": "Batch Swarm 2",
    "output": {},
    "metadata": {}
  }
]
```
## Logs Endpoint

### Get Swarm Logs

Retrieve execution logs for your API key.

**Endpoint:** `GET /v1/swarm/logs`

**Authentication Required:** Yes

#### Response Format

```json
{
  "status": "success",
  "count": 2,
  "logs": [
    {
      "api_key": "masked",
      "data": {
        "swarm_name": "Test Swarm",
        "task": "Write a blog post",
        "execution_time": "2024-03-19T15:30:00Z",
        "status": "success"
      }
    }
  ]
}
```
## Error Handling

The API uses standard HTTP status codes and provides detailed error messages:

| HTTP Status Code | Description | Example Response |
|-----------------|-------------|------------------|
| 400 | Bad Request - Invalid parameters | `{"detail": "Invalid swarm configuration"}` |
| 401 | Unauthorized - Missing API key | `{"detail": "API key is required"}` |
| 402 | Payment Required - Insufficient credits | `{"detail": "Insufficient credits"}` |
| 403 | Forbidden - Invalid API key | `{"detail": "Invalid API key"}` |
| 404 | Not Found - Resource not found | `{"detail": "Resource not found"}` |
| 429 | Too Many Requests - Rate limit exceeded | `{"detail": "Rate limit exceeded"}` |
| 500 | Internal Server Error | `{"detail": "Internal server error"}` |
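A client can map these codes onto exceptions before retrying or surfacing the error. A sketch (`SwarmAPIError` is a hypothetical helper, not part of the API):

```python
class SwarmAPIError(Exception):
    # Hypothetical client-side error type wrapping the status codes above.
    def __init__(self, status_code, detail):
        super().__init__(f"{status_code}: {detail}")
        self.status_code = status_code
        self.detail = detail
        self.retryable = status_code in {429, 500}  # transient, worth retrying

def raise_for_swarm_status(status_code: int, body: dict):
    # No-op on success; otherwise raise with the server's "detail" message.
    if status_code < 400:
        return None
    raise SwarmAPIError(status_code, body.get("detail", "Unknown error"))
```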
## Rate Limiting
The API implements rate limiting to ensure fair usage:
- **Rate Limit:** 100 requests per minute per IP address
- **Time Window:** 60 seconds
- **Response on Limit Exceeded:** HTTP 429 with retry-after header
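On a 429, the response carries a retry-after header; a client-side retry loop might honor it like this (a sketch; `send` stands in for any callable returning status, headers, and body, and is an assumption of this example, not an API feature):

```python
import time

def with_retries(send, max_attempts=3, default_wait=1.0, sleep=time.sleep):
    """Retry `send` on HTTP 429, honoring the retry-after header when present."""
    for attempt in range(max_attempts):
        status, headers, body = send()
        if status != 429:
            return status, body
        if attempt < max_attempts - 1:
            # Prefer the server's retry-after; fall back to exponential backoff.
            wait = float(headers.get("retry-after", default_wait * (2 ** attempt)))
            sleep(wait)
    return status, body
```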
# Code Examples

## Python

### Using requests

```python
import requests
from datetime import datetime, timedelta
import pytz

API_KEY = "your_api_key_here"
BASE_URL = "https://swarms-api-285321057562.us-east1.run.app"
@ -198,7 +346,9 @@ def run_single_swarm():
        ],
        "max_loops": 1,
        "swarm_type": "SequentialWorkflow",
        "task": "Analyze current market trends in tech sector",
        "return_history": True,
        "rules": "Focus on major market indicators"
    }

    response = requests.post(
@ -209,11 +359,12 @@ def run_single_swarm():
    return response.json()
def schedule_swarm():
    # Schedule for 1 hour from now
    scheduled_time = datetime.now(pytz.UTC) + timedelta(hours=1)

    payload = {
        "name": "Scheduled Analysis",
        "agents": [
            {
                "agent_name": "Analyst",
@ -222,43 +373,41 @@ def run_batch_swarms():
                "role": "worker"
            }
        ],
        "task": "Analyze tech trends",
        "schedule": {
            "scheduled_time": scheduled_time.isoformat(),
            "timezone": "America/New_York"
        }
    }

    response = requests.post(
        f"{BASE_URL}/v1/swarm/schedule",
        headers=headers,
        json=payload
    )
    return response.json()


def get_scheduled_jobs():
    response = requests.get(
        f"{BASE_URL}/v1/swarm/schedule",
        headers=headers
    )
    return response.json()


def cancel_scheduled_job(job_id: str):
    response = requests.delete(
        f"{BASE_URL}/v1/swarm/schedule/{job_id}",
        headers=headers
    )
    return response.json()


def get_swarm_logs():
    response = requests.get(
        f"{BASE_URL}/v1/swarm/logs",
        headers=headers
    )
    return response.json()
```
## Node.js
@ -273,28 +422,28 @@ const headers = {
  'Content-Type': 'application/json'
};
// Schedule a swarm
async function scheduleSwarm() {
  const scheduledTime = new Date();
  scheduledTime.setHours(scheduledTime.getHours() + 1);

  const payload = {
    name: 'Scheduled Analysis',
    agents: [{
      agent_name: 'Analyst',
      system_prompt: 'You are a market analyst.',
      model_name: 'gpt-4o',
      role: 'worker'
    }],
    task: 'Analyze tech trends',
    schedule: {
      scheduled_time: scheduledTime.toISOString(),
      timezone: 'America/New_York'
    }
  };

  try {
    const response = await fetch(`${BASE_URL}/v1/swarm/schedule`, {
      method: 'POST',
      headers,
      body: JSON.stringify(payload)
@ -307,415 +456,77 @@ async function runSingleSwarm() {
  }
}
// Get scheduled jobs
async function getScheduledJobs() {
  try {
    const response = await fetch(`${BASE_URL}/v1/swarm/schedule`, {
      headers
    });
    return await response.json();
  } catch (error) {
    console.error('Error:', error);
    throw error;
  }
}

// Cancel scheduled job
async function cancelScheduledJob(jobId) {
  try {
    const response = await fetch(`${BASE_URL}/v1/swarm/schedule/${jobId}`, {
      method: 'DELETE',
      headers
    });
    return await response.json();
  } catch (error) {
    console.error('Error:', error);
    throw error;
  }
}
```
## Go
```go
package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "io/ioutil"
    "net/http"
)

const (
    baseURL = "https://swarms-api-285321057562.us-east1.run.app"
    apiKey  = "your_api_key_here"
)

type Agent struct {
    AgentName    string `json:"agent_name"`
    Description  string `json:"description"`
    SystemPrompt string `json:"system_prompt"`
    ModelName    string `json:"model_name"`
    Role         string `json:"role"`
    MaxLoops     int    `json:"max_loops"`
}

type SwarmRequest struct {
    Name        string  `json:"name"`
    Description string  `json:"description"`
    Agents      []Agent `json:"agents"`
    MaxLoops    int     `json:"max_loops"`
    SwarmType   string  `json:"swarm_type"`
    Task        string  `json:"task"`
}

func runSingleSwarm() ([]byte, error) {
    payload := SwarmRequest{
        Name:        "Financial Analysis",
        Description: "Market analysis swarm",
        Agents: []Agent{
            {
                AgentName:    "Market Analyst",
                Description:  "Analyzes market trends",
                SystemPrompt: "You are a financial analyst expert.",
                ModelName:    "gpt-4o",
                Role:         "worker",
                MaxLoops:     1,
            },
        },
        MaxLoops:  1,
        SwarmType: "SequentialWorkflow",
        Task:      "Analyze current market trends",
    }

    jsonPayload, err := json.Marshal(payload)
    if err != nil {
        return nil, err
    }

    client := &http.Client{}
    req, err := http.NewRequest("POST", baseURL+"/v1/swarm/completions", bytes.NewBuffer(jsonPayload))
    if err != nil {
        return nil, err
    }

    req.Header.Set("x-api-key", apiKey)
    req.Header.Set("Content-Type", "application/json")

    resp, err := client.Do(req)
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()

    return ioutil.ReadAll(resp.Body)
}

func main() {
    response, err := runSingleSwarm()
    if err != nil {
        fmt.Printf("Error: %v\n", err)
        return
    }
    fmt.Printf("Response: %s\n", response)
}
```
## Rust
```rust
use reqwest::Client;
use serde::{Deserialize, Serialize};

const BASE_URL: &str = "https://swarms-api-285321057562.us-east1.run.app";
const API_KEY: &str = "your_api_key_here";

#[derive(Serialize, Deserialize)]
struct Agent {
    agent_name: String,
    description: String,
    system_prompt: String,
    model_name: String,
    role: String,
    max_loops: i32,
}

#[derive(Serialize, Deserialize)]
struct SwarmRequest {
    name: String,
    description: String,
    agents: Vec<Agent>,
    max_loops: i32,
    swarm_type: String,
    task: String,
}

async fn run_single_swarm() -> Result<String, Box<dyn std::error::Error>> {
    let client = Client::new();

    let payload = SwarmRequest {
        name: "Financial Analysis".to_string(),
        description: "Market analysis swarm".to_string(),
        agents: vec![Agent {
            agent_name: "Market Analyst".to_string(),
            description: "Analyzes market trends".to_string(),
            system_prompt: "You are a financial analyst expert.".to_string(),
            model_name: "gpt-4o".to_string(),
            role: "worker".to_string(),
            max_loops: 1,
        }],
        max_loops: 1,
        swarm_type: "SequentialWorkflow".to_string(),
        task: "Analyze current market trends".to_string(),
    };

    let response = client
        .post(format!("{}/v1/swarm/completions", BASE_URL))
        .header("x-api-key", API_KEY)
        .header("Content-Type", "application/json")
        .json(&payload)
        .send()
        .await?;

    let result = response.text().await?;
    Ok(result)
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let response = run_single_swarm().await?;
    println!("Response: {}", response);
    Ok(())
}
```
## C#
```csharp
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Text.Json.Serialization;
using System.Threading.Tasks;

public class SwarmClient
{
    private readonly HttpClient _client;
    private const string BaseUrl = "https://swarms-api-285321057562.us-east1.run.app";
    private readonly string _apiKey;

    public SwarmClient(string apiKey)
    {
        _apiKey = apiKey;
        _client = new HttpClient();
        _client.DefaultRequestHeaders.Add("x-api-key", apiKey);
    }

    // JsonPropertyName maps the C# properties to the snake_case names the API expects.
    public class Agent
    {
        [JsonPropertyName("agent_name")]
        public string AgentName { get; set; }
        [JsonPropertyName("description")]
        public string Description { get; set; }
        [JsonPropertyName("system_prompt")]
        public string SystemPrompt { get; set; }
        [JsonPropertyName("model_name")]
        public string ModelName { get; set; }
        [JsonPropertyName("role")]
        public string Role { get; set; }
        [JsonPropertyName("max_loops")]
        public int MaxLoops { get; set; }
    }

    public class SwarmRequest
    {
        [JsonPropertyName("name")]
        public string Name { get; set; }
        [JsonPropertyName("description")]
        public string Description { get; set; }
        [JsonPropertyName("agents")]
        public List<Agent> Agents { get; set; }
        [JsonPropertyName("max_loops")]
        public int MaxLoops { get; set; }
        [JsonPropertyName("swarm_type")]
        public string SwarmType { get; set; }
        [JsonPropertyName("task")]
        public string Task { get; set; }
    }

    public async Task<string> RunSingleSwarm()
    {
        var payload = new SwarmRequest
        {
            Name = "Financial Analysis",
            Description = "Market analysis swarm",
            Agents = new List<Agent>
            {
                new Agent
                {
                    AgentName = "Market Analyst",
                    Description = "Analyzes market trends",
                    SystemPrompt = "You are a financial analyst expert.",
                    ModelName = "gpt-4o",
                    Role = "worker",
                    MaxLoops = 1
                }
            },
            MaxLoops = 1,
            SwarmType = "SequentialWorkflow",
            Task = "Analyze current market trends"
        };

        var content = new StringContent(
            JsonSerializer.Serialize(payload),
            Encoding.UTF8,
            "application/json"
        );

        var response = await _client.PostAsync(
            $"{BaseUrl}/v1/swarm/completions",
            content
        );

        return await response.Content.ReadAsStringAsync();
    }
}

// Usage
class Program
{
    static async Task Main(string[] args)
    {
        var client = new SwarmClient("your_api_key_here");
        var response = await client.RunSingleSwarm();
        Console.WriteLine($"Response: {response}");
    }
}
```
## Shell (cURL)

### Schedule a Swarm

```bash
curl -X POST "https://swarms-api-285321057562.us-east1.run.app/v1/swarm/schedule" \
  -H "x-api-key: your_api_key_here" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Scheduled Analysis",
    "agents": [
      {
        "agent_name": "Analyst",
        "system_prompt": "You are a market analyst.",
        "model_name": "gpt-4o",
        "role": "worker"
      }
    ],
    "task": "Analyze tech trends",
    "schedule": {
      "scheduled_time": "2024-03-20T15:00:00Z",
      "timezone": "America/New_York"
    }
  }'
```

### Get Scheduled Jobs

```bash
curl -X GET "https://swarms-api-285321057562.us-east1.run.app/v1/swarm/schedule" \
  -H "x-api-key: your_api_key_here"
```
### Cancel Scheduled Job

```bash
curl -X DELETE "https://swarms-api-285321057562.us-east1.run.app/v1/swarm/schedule/job_id_here" \
  -H "x-api-key: your_api_key_here"
```

### Get Swarm Logs

```bash
curl -X GET "https://swarms-api-285321057562.us-east1.run.app/v1/swarm/logs" \
  -H "x-api-key: your_api_key_here"
```

@ -0,0 +1,91 @@
# tools - search, code executor, create api
import os
import json

import requests
from dotenv import load_dotenv
from swarms_tools import coin_gecko_coin_api

load_dotenv()

API_KEY = os.getenv("SWARMS_API_KEY")
BASE_URL = "https://swarms-api-285321057562.us-east1.run.app"

headers = {"x-api-key": API_KEY, "Content-Type": "application/json"}

data = coin_gecko_coin_api("bitcoin")
print(data)


def run_health_check():
    response = requests.get(f"{BASE_URL}/health", headers=headers)
    return response.json()


def run_single_swarm():
    payload = {
        "name": "Hedge Fund Analysis Swarm",
        "description": "A highly customized swarm for hedge fund analysis, focusing on market trends, risk assessment, and investment strategies.",
        "agents": [
            {
                "agent_name": "Hedge Fund Analyst",
                "description": "Analyzes market trends and investment opportunities.",
                "system_prompt": "You are a hedge fund analyst with expertise in cryptocurrency. Analyze current market conditions for Bitcoin and major cryptocurrencies. Identify investment opportunities by evaluating volatility and performance. Provide a report with technical and fundamental analysis.",
                "model_name": "openai/gpt-4o",
                "role": "worker",
                "max_loops": 1,
                "max_tokens": 8192,
            },
            {
                "agent_name": "Risk Assessment Agent",
                "description": "Evaluates risks in investment strategies.",
                "system_prompt": "You are a risk assessment expert in cryptocurrency. Identify and evaluate risks related to investment strategies, including market and credit risks. Provide a risk analysis report with assessments and mitigation strategies.",
                "model_name": "gpt-4o",
                "role": "worker",
                "max_loops": 1,
                "max_tokens": 8192,
            },
            {
                "agent_name": "Portfolio Manager",
                "description": "Manages and optimizes investment portfolios.",
                "system_prompt": "You are a portfolio manager for a crypto hedge fund. Optimize asset allocation based on market conditions. Analyze existing assets, suggest adjustments, and provide diversification strategies.",
                "model_name": "gpt-4o",
                "role": "worker",
                "max_loops": 1,
                "max_tokens": 8192,
            },
            {
                "agent_name": "Market Sentiment Analyst",
                "description": "Analyzes market sentiment for trading strategies.",
                "system_prompt": "You are a market sentiment analyst in cryptocurrency. Assess current sentiment by analyzing news and social media. Provide insights on how sentiment impacts investment decisions and summarize key indicators.",
                "model_name": "gpt-4o",
                "role": "worker",
                "max_loops": 1,
                "max_tokens": 8192,
            },
        ],
        "max_loops": 1,
        "swarm_type": "ConcurrentWorkflow",
        "task": "Analyze Bitcoin right now and provide a detailed report on the current market conditions, including technical and fundamental analysis, and then suggest potential trades with buy and sell recommendations based on the analysis",
        "output_type": "dict",
    }

    response = requests.post(
        f"{BASE_URL}/v1/swarm/completions",
        headers=headers,
        json=payload,
    )

    print(response.json())
    print(response.status_code)

    output = response.json()
    return json.dumps(output, indent=4)


if __name__ == "__main__":
    result = run_single_swarm()
    print("Swarm Result:")
    print(result)

@ -0,0 +1,910 @@
import os
from collections import defaultdict
from datetime import datetime
from decimal import Decimal
from functools import lru_cache
from threading import Thread
from time import sleep, time
from typing import Any, Dict, List, Optional, Union
import pytz
import supabase
from dotenv import load_dotenv
from fastapi import (
Depends,
FastAPI,
Header,
HTTPException,
Request,
status,
)
from fastapi.middleware.cors import CORSMiddleware
from loguru import logger
from pydantic import BaseModel, Field
from swarms import Agent, SwarmRouter, SwarmType
from swarms.utils.litellm_tokenizer import count_tokens
import asyncio
load_dotenv()
# Define rate limit parameters
RATE_LIMIT = 100 # Max requests
TIME_WINDOW = 60 # Time window in seconds
# In-memory store for tracking requests
request_counts = defaultdict(lambda: {"count": 0, "start_time": time()})
# In-memory store for scheduled jobs
scheduled_jobs: Dict[str, Dict] = {}
def rate_limiter(request: Request):
client_ip = request.client.host
current_time = time()
client_data = request_counts[client_ip]
# Reset count if time window has passed
if current_time - client_data["start_time"] > TIME_WINDOW:
client_data["count"] = 0
client_data["start_time"] = current_time
# Increment request count
client_data["count"] += 1
# Check if rate limit is exceeded
if client_data["count"] > RATE_LIMIT:
raise HTTPException(
status_code=429, detail="Rate limit exceeded. Please try again later."
)
class AgentSpec(BaseModel):
agent_name: Optional[str] = Field(None, description="Agent Name", max_length=100)
description: Optional[str] = Field(None, description="Description", max_length=500)
system_prompt: Optional[str] = Field(
None, description="System Prompt", max_length=500
)
model_name: Optional[str] = Field(
"gpt-4o", description="Model Name", max_length=500
)
auto_generate_prompt: Optional[bool] = Field(
False, description="Auto Generate Prompt"
)
max_tokens: Optional[int] = Field(None, description="Max Tokens")
temperature: Optional[float] = Field(0.5, description="Temperature")
role: Optional[str] = Field("worker", description="Role")
max_loops: Optional[int] = Field(1, description="Max Loops")
# class ExternalAgent(BaseModel):
# base_url: str = Field(..., description="Base URL")
# parameters: Dict[str, Any] = Field(..., description="Parameters")
# headers: Dict[str, Any] = Field(..., description="Headers")
class ScheduleSpec(BaseModel):
scheduled_time: datetime = Field(..., description="When to execute the swarm (UTC)")
timezone: Optional[str] = Field(
"UTC", description="Timezone for the scheduled time"
)
class SwarmSpec(BaseModel):
name: Optional[str] = Field(None, description="Swarm Name", max_length=100)
description: Optional[str] = Field(None, description="Description")
agents: Optional[Union[List[AgentSpec], Any]] = Field(None, description="Agents")
max_loops: Optional[int] = Field(None, description="Max Loops")
swarm_type: Optional[SwarmType] = Field(None, description="Swarm Type")
rearrange_flow: Optional[str] = Field(None, description="Flow")
task: Optional[str] = Field(None, description="Task")
img: Optional[str] = Field(None, description="Img")
return_history: Optional[bool] = Field(True, description="Return History")
rules: Optional[str] = Field(None, description="Rules")
schedule: Optional[ScheduleSpec] = Field(None, description="Scheduling information")
class ScheduledJob(Thread):
def __init__(
self, job_id: str, scheduled_time: datetime, swarm: SwarmSpec, api_key: str
):
super().__init__()
self.job_id = job_id
# Normalize to an aware UTC datetime so the comparison in run()
# cannot fail against datetime.now(pytz.UTC)
self.scheduled_time = (
scheduled_time
if scheduled_time.tzinfo
else pytz.UTC.localize(scheduled_time)
)
self.swarm = swarm
self.api_key = api_key
self.daemon = True # Allow the thread to be terminated when main program exits
self.cancelled = False
def run(self):
while not self.cancelled:
now = datetime.now(pytz.UTC)
if now >= self.scheduled_time:
try:
# Execute the swarm
asyncio.run(run_swarm_completion(self.swarm, self.api_key))
except Exception as e:
logger.error(
f"Error executing scheduled swarm {self.job_id}: {str(e)}"
)
finally:
# Remove the job from scheduled_jobs after execution
scheduled_jobs.pop(self.job_id, None)
break
sleep(1) # Check every second
def get_supabase_client():
supabase_url = os.getenv("SUPABASE_URL")
supabase_key = os.getenv("SUPABASE_KEY")
return supabase.create_client(supabase_url, supabase_key)
@lru_cache(maxsize=1000)  # NOTE: caches validity for the process lifetime; a revoked key stays "valid" until restart
def check_api_key(api_key: str) -> bool:
supabase_client = get_supabase_client()
response = (
supabase_client.table("swarms_cloud_api_keys")
.select("*")
.eq("key", api_key)
.execute()
)
return bool(response.data)
# class ExternalAgent:
# def __init__(self, base_url: str, parameters: Dict[str, Any], headers: Dict[str, Any]):
# self.base_url = base_url
# self.parameters = parameters
# self.headers = headers
# def run(self, task: str) -> Dict[str, Any]:
# response = requests.post(self.base_url, json=self.parameters, headers=self.headers)
# return response.json()
@lru_cache(maxsize=1000)
def get_user_id_from_api_key(api_key: str) -> str:
"""
Maps an API key to its associated user ID.
Args:
api_key (str): The API key to look up
Returns:
str: The user ID associated with the API key
Raises:
ValueError: If the API key is invalid or not found
"""
supabase_client = get_supabase_client()
response = (
supabase_client.table("swarms_cloud_api_keys")
.select("user_id")
.eq("key", api_key)
.execute()
)
if not response.data:
raise ValueError("Invalid API key")
return response.data[0]["user_id"]
def verify_api_key(x_api_key: str = Header(...)) -> None:
"""
Dependency to verify the API key.
"""
if not check_api_key(x_api_key):
raise HTTPException(status_code=403, detail="Invalid API Key")
async def get_api_key_logs(api_key: str) -> List[Dict[str, Any]]:
"""
Retrieve all API request logs for a specific API key.
Args:
api_key: The API key to query logs for
Returns:
List[Dict[str, Any]]: List of log entries for the API key
"""
try:
supabase_client = get_supabase_client()
# Query swarms_api_logs table for entries matching the API key
response = (
supabase_client.table("swarms_api_logs")
.select("*")
.eq("api_key", api_key)
.execute()
)
return response.data
except Exception as e:
logger.error(f"Error retrieving API logs: {str(e)}")
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=f"Failed to retrieve API logs: {str(e)}",
)
def create_swarm(swarm_spec: SwarmSpec) -> Any:
"""Build agents and a SwarmRouter from the spec, run the task, and return the output."""
try:
# Validate swarm_spec
if not swarm_spec.agents:
raise ValueError("Swarm specification must include at least one agent.")
agents = []
for agent_spec in swarm_spec.agents:
try:
# Handle both dict and AgentSpec objects
if isinstance(agent_spec, dict):
# Convert dict to AgentSpec
agent_spec = AgentSpec(**agent_spec)
# Validate agent_spec fields
if not agent_spec.agent_name:
raise ValueError("Agent name is required.")
if not agent_spec.model_name:
raise ValueError("Model name is required.")
# Create the agent
agent = Agent(
agent_name=agent_spec.agent_name,
description=agent_spec.description,
system_prompt=agent_spec.system_prompt,
model_name=agent_spec.model_name,
auto_generate_prompt=agent_spec.auto_generate_prompt,
max_tokens=agent_spec.max_tokens,
temperature=agent_spec.temperature,
role=agent_spec.role,
max_loops=agent_spec.max_loops,
)
agents.append(agent)
logger.info(
"Successfully created agent: {}",
agent_spec.agent_name,
)
except ValueError as ve:
logger.error(
"Validation error for agent {}: {}",
getattr(agent_spec, 'agent_name', 'unknown'),
str(ve),
)
raise
except Exception as agent_error:
logger.error(
"Error creating agent {}: {}",
getattr(agent_spec, 'agent_name', 'unknown'),
str(agent_error),
)
raise
if not agents:
raise ValueError(
"No valid agents could be created from the swarm specification."
)
# Create and configure the swarm
swarm = SwarmRouter(
name=swarm_spec.name,
description=swarm_spec.description,
agents=agents,
max_loops=swarm_spec.max_loops,
swarm_type=swarm_spec.swarm_type,
output_type="dict",
return_entire_history=False,
rules=swarm_spec.rules,
rearrange_flow=swarm_spec.rearrange_flow,
)
# Run the swarm task
output = swarm.run(task=swarm_spec.task)
return output
except Exception as e:
logger.error("Error creating swarm: {}", str(e))
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=f"Failed to create swarm: {str(e)}",
)
async def log_api_request(api_key: str, data: Dict[str, Any]) -> None:
"""
Log API request data to Supabase swarms_api_logs table.
Args:
api_key: The API key used for the request
data: Dictionary containing request data to log
"""
try:
supabase_client = get_supabase_client()
# Create log entry
log_entry = {
"api_key": api_key,
"data": data,
}
# Insert into swarms_api_logs table
response = supabase_client.table("swarms_api_logs").insert(log_entry).execute()
if not response.data:
logger.error("Failed to log API request")
except Exception as e:
logger.error(f"Error logging API request: {str(e)}")
async def run_swarm_completion(
swarm: SwarmSpec, x_api_key: str = None
) -> Dict[str, Any]:
"""
Run a swarm with the specified task.
"""
try:
swarm_name = swarm.name
agents = swarm.agents
await log_api_request(x_api_key, swarm.model_dump())
# Log start of swarm execution
logger.info(f"Starting swarm {swarm_name} with {len(agents)} agents")
start_time = time()
# Create and run the swarm
# Create the swarm and run the task (create_swarm executes swarm.run)
logger.debug(f"Creating and running swarm {swarm_name} on task: {swarm.task}")
result = create_swarm(swarm)
# Calculate execution time
execution_time = time() - start_time
logger.info(
f"Swarm {swarm_name} executed in {round(execution_time, 2)} seconds"
)
# Calculate costs
logger.debug(f"Calculating costs for swarm {swarm_name}")
cost_info = calculate_swarm_cost(
agents=agents,
input_text=swarm.task,
agent_outputs=result,
execution_time=execution_time,
)
logger.info(f"Cost calculation completed for swarm {swarm_name}: {cost_info}")
# Deduct credits based on calculated cost
logger.debug(
f"Deducting credits for swarm {swarm_name} with cost {cost_info['total_cost']}"
)
deduct_credits(
x_api_key,
cost_info["total_cost"],
f"swarm_execution_{swarm_name}",
)
# Format the response
response = {
"status": "success",
"swarm_name": swarm_name,
"description": swarm.description,
"swarm_type": swarm.swarm_type,
"task": swarm.task,
"output": result,
"metadata": {
"max_loops": swarm.max_loops,
"num_agents": len(agents),
"execution_time_seconds": round(execution_time, 2),
"completion_time": time(),
"billing_info": cost_info,
},
}
logger.info(response)
await log_api_request(x_api_key, response)
return response
except HTTPException as http_exc:
logger.error("HTTPException occurred: {}", http_exc.detail)
raise
except Exception as e:
logger.error("Error running swarm {}: {}", swarm.name, str(e))
logger.exception(e)
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=f"Failed to run swarm: {str(e)}",
)
def deduct_credits(api_key: str, amount: float, product_name: str) -> None:
"""
Deducts the specified amount of credits for the user identified by api_key,
preferring to use free_credit before using regular credit, and logs the transaction.
"""
supabase_client = get_supabase_client()
user_id = get_user_id_from_api_key(api_key)
# 1. Retrieve the user's credit record
response = (
supabase_client.table("swarms_cloud_users_credits")
.select("*")
.eq("user_id", user_id)
.execute()
)
if not response.data:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail="User credits record not found.",
)
record = response.data[0]
# Use Decimal for precise arithmetic
available_credit = Decimal(record["credit"])
free_credit = Decimal(record.get("free_credit", "0"))
deduction = Decimal(str(amount))
logger.debug(
f"Available credit: {available_credit}, Free credit: {free_credit}, Deduction: {deduction}"
)
# 2. Verify sufficient total credits are available
if (available_credit + free_credit) < deduction:
raise HTTPException(
status_code=status.HTTP_402_PAYMENT_REQUIRED,
detail="Insufficient credits.",
)
# 3. Log the transaction
log_response = (
supabase_client.table("swarms_cloud_services")
.insert(
{
"user_id": user_id,
"api_key": api_key,
"charge_credit": int(
deduction
), # Assuming credits are stored as integers
"product_name": product_name,
}
)
.execute()
)
if not log_response.data:
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail="Failed to log the credit transaction.",
)
# 4. Deduct credits: use free_credit first, then deduct the remainder from available_credit
if free_credit >= deduction:
free_credit -= deduction
else:
remainder = deduction - free_credit
free_credit = Decimal("0")
available_credit -= remainder
update_response = (
supabase_client.table("swarms_cloud_users_credits")
.update(
{
"credit": str(available_credit),
"free_credit": str(free_credit),
}
)
.eq("user_id", user_id)
.execute()
)
if not update_response.data:
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail="Failed to update credits.",
)
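The free-credit-first ordering in `deduct_credits` reduces to pure `Decimal` arithmetic, which is easy to check in isolation. The sketch below (a hypothetical `split_deduction` helper, not part of this module) applies the same rule:

```python
from decimal import Decimal


def split_deduction(
    credit: Decimal, free_credit: Decimal, amount: Decimal
) -> tuple[Decimal, Decimal]:
    """Apply deduct_credits()'s ordering: drain free_credit first,
    then take the remainder from the paid balance."""
    if (credit + free_credit) < amount:
        raise ValueError("Insufficient credits.")
    if free_credit >= amount:
        return credit, free_credit - amount
    remainder = amount - free_credit
    return credit - remainder, Decimal("0")


# Example: 10 paid + 5 free, deduct 7 -> free is drained, 2 comes from paid
new_credit, new_free = split_deduction(Decimal("10"), Decimal("5"), Decimal("7"))
```

Using `Decimal` throughout (as the endpoint code does) avoids binary floating-point drift when balances are updated repeatedly.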
def calculate_swarm_cost(
agents: List[Agent],
input_text: str,
execution_time: float,
agent_outputs: Optional[Union[List[Dict[str, str]], str]] = None,
) -> Dict[str, Any]:
"""
Calculate the cost of running a swarm based on agents, tokens, and execution time.
Includes system prompts, agent memory, and scaled output costs.
Args:
agents: List of agents used in the swarm
input_text: The input task/prompt text
execution_time: Time taken to execute in seconds
agent_outputs: List of output texts from each agent or a list of dictionaries
Returns:
Dict containing cost breakdown and total cost
"""
# Base costs per unit (these could be moved to environment variables)
COST_PER_AGENT = 0.01 # Base cost per agent
COST_PER_1M_INPUT_TOKENS = 2.00 # Cost per 1M input tokens
COST_PER_1M_OUTPUT_TOKENS = 6.00 # Cost per 1M output tokens
# Get current time in California timezone
california_tz = pytz.timezone("America/Los_Angeles")
current_time = datetime.now(california_tz)
is_night_time = current_time.hour >= 20 or current_time.hour < 6 # 8 PM to 6 AM
try:
# Calculate input tokens for task
task_tokens = count_tokens(input_text)
# Calculate total input tokens including system prompts and memory for each agent
total_input_tokens = 0
total_output_tokens = 0
per_agent_tokens = {}
for agent in agents:
agent_input_tokens = task_tokens # Base task tokens
# Add system prompt tokens if present
if agent.system_prompt:
agent_input_tokens += count_tokens(agent.system_prompt)
# Add memory tokens if available
try:
memory = agent.short_memory.return_history_as_string()
if memory:
memory_tokens = count_tokens(str(memory))
agent_input_tokens += memory_tokens
except Exception as e:
logger.warning(
f"Could not get memory for agent {agent.agent_name}: {str(e)}"
)
# Calculate actual output tokens if available, otherwise estimate
if agent_outputs:
if isinstance(agent_outputs, list):
# Sum tokens for each dictionary's content
agent_output_tokens = sum(
count_tokens(message["content"]) for message in agent_outputs
)
elif isinstance(agent_outputs, str):
agent_output_tokens = count_tokens(agent_outputs)
else:
agent_output_tokens = int(
agent_input_tokens * 2.5
) # Estimated output tokens
else:
agent_output_tokens = int(
agent_input_tokens * 2.5
) # Estimated output tokens
# Store per-agent token counts
per_agent_tokens[agent.agent_name] = {
"input_tokens": agent_input_tokens,
"output_tokens": agent_output_tokens,
"total_tokens": agent_input_tokens + agent_output_tokens,
}
# Add to totals
total_input_tokens += agent_input_tokens
total_output_tokens += agent_output_tokens
# Calculate costs (convert to millions of tokens)
agent_cost = len(agents) * COST_PER_AGENT
input_token_cost = (
(total_input_tokens / 1_000_000) * COST_PER_1M_INPUT_TOKENS * len(agents)
)
output_token_cost = (
(total_output_tokens / 1_000_000) * COST_PER_1M_OUTPUT_TOKENS * len(agents)
)
# Apply discount during California night time hours
if is_night_time:
input_token_cost *= 0.25 # 75% discount
output_token_cost *= 0.25 # 75% discount
# Calculate total cost
total_cost = agent_cost + input_token_cost + output_token_cost
output = {
"cost_breakdown": {
"agent_cost": round(agent_cost, 6),
"input_token_cost": round(input_token_cost, 6),
"output_token_cost": round(output_token_cost, 6),
"token_counts": {
"total_input_tokens": total_input_tokens,
"total_output_tokens": total_output_tokens,
"total_tokens": total_input_tokens + total_output_tokens,
"per_agent": per_agent_tokens,
},
"num_agents": len(agents),
"execution_time_seconds": round(execution_time, 2),
},
"total_cost": round(total_cost, 6),
}
return output
except Exception as e:
logger.error(f"Error calculating swarm cost: {str(e)}")
raise ValueError(f"Failed to calculate swarm cost: {str(e)}")
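Stripped of token counting, the pricing in `calculate_swarm_cost` is a simple formula: a flat per-agent fee, plus per-million-token rates scaled by the number of agents, with a 75% discount on the token portion at night. The hypothetical `swarm_price` helper below reproduces that arithmetic with the same rates:

```python
def swarm_price(
    num_agents: int,
    input_tokens: int,
    output_tokens: int,
    night: bool = False,
) -> float:
    """Reproduce calculate_swarm_cost()'s arithmetic with the same rates."""
    COST_PER_AGENT = 0.01  # flat fee per agent
    COST_PER_1M_INPUT = 2.00
    COST_PER_1M_OUTPUT = 6.00

    agent_cost = num_agents * COST_PER_AGENT
    input_cost = (input_tokens / 1_000_000) * COST_PER_1M_INPUT * num_agents
    output_cost = (output_tokens / 1_000_000) * COST_PER_1M_OUTPUT * num_agents
    if night:
        # 8 PM - 6 AM Pacific: 75% off the token costs only
        input_cost *= 0.25
        output_cost *= 0.25
    return round(agent_cost + input_cost + output_cost, 6)


# 2 agents, 1M input + 1M output tokens, daytime:
# 0.02 + (2 * 2.00) + (2 * 6.00) = 16.02
daytime = swarm_price(2, 1_000_000, 1_000_000)
nighttime = swarm_price(2, 1_000_000, 1_000_000, night=True)
```

Note that token costs scale with `len(agents)` on top of the per-agent token totals, so adding agents increases cost superlinearly.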
# --- FastAPI Application Setup ---
app = FastAPI(
title="Swarm Agent API",
description="API for managing and executing Python agents in the cloud without Docker/Kubernetes.",
version="1.0.0",
debug=True,
)
# Enable CORS (adjust origins as needed)
app.add_middleware(
CORSMiddleware,
allow_origins=["*"], # In production, restrict this to specific domains
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
@app.get("/", dependencies=[Depends(rate_limiter)])
def root():
return {
"status": "Welcome to the Swarm API. Check out the docs at https://docs.swarms.world"
}
@app.get("/health", dependencies=[Depends(rate_limiter)])
def health():
return {"status": "ok"}
@app.post(
"/v1/swarm/completions",
dependencies=[
Depends(verify_api_key),
Depends(rate_limiter),
],
)
async def run_swarm(swarm: SwarmSpec, x_api_key: str = Header(...)) -> Dict[str, Any]:
"""
Run a swarm with the specified task.
"""
return await run_swarm_completion(swarm, x_api_key)
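A client for the endpoint above builds a `SwarmSpec`-shaped payload and posts it with the `x-api-key` header. The sketch below is a hypothetical client; the base URL is a placeholder, and only the request shape comes from this file:

```python
import requests  # third-party: pip install requests

API_URL = "https://api.example.com"  # placeholder; substitute your deployment

# Payload fields mirror SwarmSpec / AgentSpec
payload = {
    "name": "Financial Analysis Swarm",
    "agents": [
        {
            "agent_name": "Market Analyst",
            "model_name": "gpt-4o",
            "system_prompt": "You are a financial analyst expert.",
            "role": "worker",
            "max_loops": 1,
        }
    ],
    "swarm_type": "SequentialWorkflow",
    "task": "What are the best etfs and index funds for ai and tech?",
}


def submit_swarm(api_key: str) -> dict:
    """POST the spec to /v1/swarm/completions and return the JSON response."""
    response = requests.post(
        f"{API_URL}/v1/swarm/completions",
        headers={"x-api-key": api_key},
        json=payload,
    )
    response.raise_for_status()
    return response.json()
```

On success the server returns the `{"status": "success", ..., "output": ..., "metadata": ...}` envelope assembled in `run_swarm_completion`.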
@app.post(
"/v1/swarm/batch/completions",
dependencies=[
Depends(verify_api_key),
Depends(rate_limiter),
],
)
async def run_batch_completions(
swarms: List[SwarmSpec], x_api_key: str = Header(...)
) -> List[Dict[str, Any]]:
"""
Run a batch of swarms with the specified tasks.
"""
results = []
for swarm in swarms:
try:
# Call the existing run_swarm function for each swarm
result = await run_swarm_completion(swarm, x_api_key)
results.append(result)
except HTTPException as http_exc:
logger.error("HTTPException occurred: {}", http_exc.detail)
results.append(
{
"status": "error",
"swarm_name": swarm.name,
"detail": http_exc.detail,
}
)
except Exception as e:
logger.error("Error running swarm {}: {}", swarm.name, str(e))
logger.exception(e)
results.append(
{
"status": "error",
"swarm_name": swarm.name,
"detail": f"Failed to run swarm: {str(e)}",
}
)
return results
@app.get(
"/v1/swarm/logs",
dependencies=[
Depends(verify_api_key),
Depends(rate_limiter),
],
)
async def get_logs(x_api_key: str = Header(...)) -> Dict[str, Any]:
"""
Get all API request logs for the provided API key.
"""
try:
logs = await get_api_key_logs(x_api_key)
return {"status": "success", "count": len(logs), "logs": logs}
except Exception as e:
logger.error(f"Error in get_logs endpoint: {str(e)}")
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR, detail=str(e)
)
# @app.post("/v1/swarm/cost-prediction")
# async def cost_prediction(swarm: SwarmSpec) -> Dict[str, Any]:
# """
# Predict the cost of running a swarm.
# """
# return {"status": "success", "cost": calculate_swarm_cost(swarm)}
@app.post(
"/v1/swarm/schedule",
dependencies=[
Depends(verify_api_key),
Depends(rate_limiter),
],
)
async def schedule_swarm(
swarm: SwarmSpec, x_api_key: str = Header(...)
) -> Dict[str, Any]:
"""
Schedule a swarm to run at a specific time.
"""
if not swarm.schedule:
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail="Schedule information is required",
)
try:
# Generate a unique job ID
job_id = f"swarm_{swarm.name}_{int(time())}"
# Create and start the scheduled job
job = ScheduledJob(
job_id=job_id,
scheduled_time=swarm.schedule.scheduled_time,
swarm=swarm,
api_key=x_api_key,
)
job.start()
# Store the job information
scheduled_jobs[job_id] = {
"job": job,
"swarm_name": swarm.name,
"scheduled_time": swarm.schedule.scheduled_time,
"timezone": swarm.schedule.timezone,
}
# Log the scheduling
await log_api_request(
x_api_key,
{
"action": "schedule_swarm",
"swarm_name": swarm.name,
"scheduled_time": swarm.schedule.scheduled_time.isoformat(),
"job_id": job_id,
},
)
return {
"status": "success",
"message": "Swarm scheduled successfully",
"job_id": job_id,
"scheduled_time": swarm.schedule.scheduled_time,
"timezone": swarm.schedule.timezone,
}
except Exception as e:
logger.error(f"Error scheduling swarm: {str(e)}")
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=f"Failed to schedule swarm: {str(e)}",
)
@app.get(
"/v1/swarm/schedule",
dependencies=[
Depends(verify_api_key),
Depends(rate_limiter),
],
)
async def get_scheduled_jobs(x_api_key: str = Header(...)) -> Dict[str, Any]:
"""
Get all scheduled swarm jobs.
"""
try:
jobs_list = []
current_time = datetime.now(pytz.UTC)
# Clean up completed jobs
completed_jobs = [
job_id
for job_id, job_info in scheduled_jobs.items()
if current_time >= job_info["scheduled_time"]
]
for job_id in completed_jobs:
scheduled_jobs.pop(job_id, None)
# Get active jobs
for job_id, job_info in scheduled_jobs.items():
jobs_list.append(
{
"job_id": job_id,
"swarm_name": job_info["swarm_name"],
"scheduled_time": job_info["scheduled_time"].isoformat(),
"timezone": job_info["timezone"],
}
)
return {"status": "success", "scheduled_jobs": jobs_list}
except Exception as e:
logger.error(f"Error retrieving scheduled jobs: {str(e)}")
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=f"Failed to retrieve scheduled jobs: {str(e)}",
)
@app.delete(
"/v1/swarm/schedule/{job_id}",
dependencies=[
Depends(verify_api_key),
Depends(rate_limiter),
],
)
async def cancel_scheduled_job(
job_id: str, x_api_key: str = Header(...)
) -> Dict[str, Any]:
"""
Cancel a scheduled swarm job.
"""
try:
if job_id not in scheduled_jobs:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND, detail="Scheduled job not found"
)
# Cancel and remove the job
job_info = scheduled_jobs[job_id]
job_info["job"].cancelled = True
scheduled_jobs.pop(job_id)
await log_api_request(
x_api_key, {"action": "cancel_scheduled_job", "job_id": job_id}
)
return {
"status": "success",
"message": "Scheduled job cancelled successfully",
"job_id": job_id,
}
except HTTPException:
raise
except Exception as e:
logger.error(f"Error cancelling scheduled job: {str(e)}")
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=f"Failed to cancel scheduled job: {str(e)}",
)
# --- Main Entrypoint ---
if __name__ == "__main__":
import uvicorn
# NOTE: uvicorn only spawns multiple workers when given an import string
# (e.g. "main:app"); passed an app object, it runs a single process.
uvicorn.run(app, host="0.0.0.0", port=8080)