llama 4 examples ++ docs fix

pull/811/head
Kye Gomez 2 weeks ago
parent 7ca26e0162
commit d603604ccf

@@ -272,9 +272,9 @@ nav:
    - Swarms Vision: "swarms/concept/vision.md"
    - Swarm Ecosystem: "swarms/concept/swarm_ecosystem.md"
    - Swarms Products: "swarms/products.md"
    - Swarms Framework Architecture: "swarms/concept/framework_architecture.md"
  - Developers and Contributors:
    - Swarms Framework Architecture: "swarms/concept/framework_architecture.md"
    - Bounty Program: "corporate/bounty_program.md"
    - Contributing:
      - Contributing: "swarms/contributing.md"

@@ -8,67 +8,67 @@ This comprehensive guide outlines production-grade best practices for using the
!!! info "Available Swarm Architectures"
| Swarm Type | Best For | Use Cases | Example Configuration |
|------------|----------|------------|---------------------|
| `AgentRearrange` | Dynamic workflows | - Complex task decomposition<br>- Adaptive processing<br>- Multi-stage analysis<br>- Dynamic resource allocation | ```python<br>{"swarm_type": "AgentRearrange",<br> "rearrange_flow": "optimize for efficiency",<br> "max_loops": 3}``` |
| `MixtureOfAgents` | Diverse expertise | - Cross-domain problems<br>- Comprehensive analysis<br>- Multi-perspective tasks<br>- Research synthesis | ```python<br>{"swarm_type": "MixtureOfAgents",<br> "agents": [{"role": "researcher"},<br> {"role": "analyst"},<br> {"role": "writer"}]}``` |
| `SpreadSheetSwarm` | Data processing | - Financial analysis<br>- Data transformation<br>- Batch calculations<br>- Report generation | ```python<br>{"swarm_type": "SpreadSheetSwarm",<br> "data_format": "csv",<br> "analysis_type": "financial"}``` |
| `SequentialWorkflow` | Linear processes | - Document processing<br>- Step-by-step analysis<br>- Quality control<br>- Content pipeline | ```python<br>{"swarm_type": "SequentialWorkflow",<br> "steps": ["research", "draft",<br> "review", "finalize"]}``` |
| `ConcurrentWorkflow` | Parallel tasks | - Batch processing<br>- Independent analyses<br>- High-throughput needs<br>- Multi-market analysis | ```python<br>{"swarm_type": "ConcurrentWorkflow",<br> "max_parallel": 5,<br> "batch_size": 10}``` |
| `GroupChat` | Collaborative solving | - Brainstorming<br>- Decision making<br>- Problem solving<br>- Strategy development | ```python<br>{"swarm_type": "GroupChat",<br> "participants": ["expert1", "expert2"],<br> "discussion_rounds": 3}``` |
| `MultiAgentRouter` | Task distribution | - Load balancing<br>- Specialized processing<br>- Resource optimization<br>- Service routing | ```python<br>{"swarm_type": "MultiAgentRouter",<br> "routing_strategy": "skill_based",<br> "fallback_agent": "general"}``` |
| `AutoSwarmBuilder` | Automated setup | - Quick prototyping<br>- Simple tasks<br>- Testing<br>- MVP development | ```python<br>{"swarm_type": "AutoSwarmBuilder",<br> "complexity": "medium",<br> "optimize_for": "speed"}``` |
| `HiearchicalSwarm` | Complex organization | - Project management<br>- Research analysis<br>- Enterprise workflows<br>- Team automation | ```python<br>{"swarm_type": "HiearchicalSwarm",<br> "levels": ["manager", "specialist",<br> "worker"]}``` |
| `MajorityVoting` | Consensus needs | - Quality assurance<br>- Decision validation<br>- Risk assessment<br>- Content moderation | ```python<br>{"swarm_type": "MajorityVoting",<br> "min_votes": 3,<br> "threshold": 0.7}``` |
| Swarm Type | Best For | Use Cases |
|------------|----------|------------|
| `AgentRearrange` | Dynamic workflows | - Complex task decomposition<br>- Adaptive processing<br>- Multi-stage analysis<br>- Dynamic resource allocation |
| `MixtureOfAgents` | Diverse expertise | - Cross-domain problems<br>- Comprehensive analysis<br>- Multi-perspective tasks<br>- Research synthesis |
| `SpreadSheetSwarm` | Data processing | - Financial analysis<br>- Data transformation<br>- Batch calculations<br>- Report generation |
| `SequentialWorkflow` | Linear processes | - Document processing<br>- Step-by-step analysis<br>- Quality control<br>- Content pipeline |
| `ConcurrentWorkflow` | Parallel tasks | - Batch processing<br>- Independent analyses<br>- High-throughput needs<br>- Multi-market analysis |
| `GroupChat` | Collaborative solving | - Brainstorming<br>- Decision making<br>- Problem solving<br>- Strategy development |
| `MultiAgentRouter` | Task distribution | - Load balancing<br>- Specialized processing<br>- Resource optimization<br>- Service routing |
| `AutoSwarmBuilder` | Automated setup | - Quick prototyping<br>- Simple tasks<br>- Testing<br>- MVP development |
| `HiearchicalSwarm` | Complex organization | - Project management<br>- Research analysis<br>- Enterprise workflows<br>- Team automation |
| `MajorityVoting` | Consensus needs | - Quality assurance<br>- Decision validation<br>- Risk assessment<br>- Content moderation |
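For readability, two of the configuration shapes referenced in this guide written out as plain code (illustrative only; consult the Swarms API reference for the exact request schema):

```python
# Illustrative configuration shapes; not a guaranteed request schema.
sequential_config = {
    "swarm_type": "SequentialWorkflow",
    "steps": ["research", "draft", "review", "finalize"],
}

concurrent_config = {
    "swarm_type": "ConcurrentWorkflow",
    "max_parallel": 5,
    "batch_size": 10,
}
```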
=== "Application Patterns"
!!! tip "Specialized Application Configurations"
| Application | Recommended Swarm | Configuration Example | Benefits |
|------------|-------------------|----------------------|-----------|
| **Team Automation** | `HiearchicalSwarm` | ```python<br>{<br> "swarm_type": "HiearchicalSwarm",<br> "agents": [<br> {"role": "ProjectManager",<br> "responsibilities": ["planning", "coordination"]},<br> {"role": "TechLead",<br> "responsibilities": ["architecture", "review"]},<br> {"role": "Developers",<br> "count": 3,<br> "specializations": ["frontend", "backend", "testing"]}<br> ]<br>}``` | - Automated team coordination<br>- Clear responsibility chain<br>- Scalable team structure |
| **Research Pipeline** | `SequentialWorkflow` | ```python<br>{<br> "swarm_type": "SequentialWorkflow",<br> "pipeline": [<br> {"stage": "Literature Review",<br> "agent_type": "Researcher"},<br> {"stage": "Data Analysis",<br> "agent_type": "Analyst"},<br> {"stage": "Report Generation",<br> "agent_type": "Writer"}<br> ]<br>}``` | - Structured research process<br>- Quality control at each stage<br>- Comprehensive output |
| **Trading System** | `ConcurrentWorkflow` | ```python<br>{<br> "swarm_type": "ConcurrentWorkflow",<br> "agents": [<br> {"market": "crypto",<br> "strategy": "momentum"},<br> {"market": "forex",<br> "strategy": "mean_reversion"},<br> {"market": "stocks",<br> "strategy": "value"}<br> ]<br>}``` | - Multi-market coverage<br>- Real-time analysis<br>- Risk distribution |
| **Content Factory** | `MixtureOfAgents` | ```python<br>{<br> "swarm_type": "MixtureOfAgents",<br> "workflow": [<br> {"role": "Researcher",<br> "focus": "topic_research"},<br> {"role": "Writer",<br> "style": "engaging"},<br> {"role": "Editor",<br> "quality_standards": "high"}<br> ]<br>}``` | - Automated content creation<br>- Consistent quality<br>- High throughput |
| Application | Recommended Swarm | Benefits |
|------------|-------------------|-----------|
| **Team Automation** | `HiearchicalSwarm` | - Automated team coordination<br>- Clear responsibility chain<br>- Scalable team structure |
| **Research Pipeline** | `SequentialWorkflow` | - Structured research process<br>- Quality control at each stage<br>- Comprehensive output |
| **Trading System** | `ConcurrentWorkflow` | - Multi-market coverage<br>- Real-time analysis<br>- Risk distribution |
| **Content Factory** | `MixtureOfAgents` | - Automated content creation<br>- Consistent quality<br>- High throughput |
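For example, the Team Automation pattern can be sketched as a configuration along these lines (role names and fields are illustrative, not a fixed schema):

```python
# Illustrative shape for a HiearchicalSwarm team; field names are assumptions.
team_automation_config = {
    "swarm_type": "HiearchicalSwarm",
    "agents": [
        {"role": "ProjectManager", "responsibilities": ["planning", "coordination"]},
        {"role": "TechLead", "responsibilities": ["architecture", "review"]},
        {
            "role": "Developers",
            "count": 3,
            "specializations": ["frontend", "backend", "testing"],
        },
    ],
}
```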
=== "Cost Optimization"
!!! tip "Advanced Cost Management Strategies"
| Strategy | Implementation | Impact | Configuration Example |
|----------|----------------|---------|---------------------|
| Batch Processing | Group related tasks | 20-30% cost reduction | ```python<br>{"batch_size": 10,<br> "parallel_execution": true,<br> "deduplication": true}``` |
| Off-peak Usage | Schedule for 8 PM - 6 AM PT | 15-25% cost reduction | ```python<br>{"schedule": "0 20 * * *",<br> "timezone": "America/Los_Angeles"}``` |
| Token Optimization | Precise prompts, focused tasks | 10-20% cost reduction | ```python<br>{"max_tokens": 2000,<br> "compression": true,<br> "cache_similar": true}``` |
| Caching | Store reusable results | 30-40% cost reduction | ```python<br>{"cache_ttl": 3600,<br> "similarity_threshold": 0.95}``` |
| Agent Optimization | Use minimum required agents | 15-25% cost reduction | ```python<br>{"auto_scale": true,<br> "min_agents": 2,<br> "max_agents": 5}``` |
| Smart Routing | Route to specialized agents | 10-15% cost reduction | ```python<br>{"routing_strategy": "cost_effective",<br> "fallback": "general"}``` |
| Prompt Engineering | Optimize input tokens | 15-20% cost reduction | ```python<br>{"prompt_template": "focused",<br> "remove_redundancy": true}``` |
| Strategy | Implementation | Impact |
|----------|----------------|---------|
| Batch Processing | Group related tasks | 20-30% cost reduction |
| Off-peak Usage | Schedule for 8 PM - 6 AM PT | 15-25% cost reduction |
| Token Optimization | Precise prompts, focused tasks | 10-20% cost reduction |
| Caching | Store reusable results | 30-40% cost reduction |
| Agent Optimization | Use minimum required agents | 15-25% cost reduction |
| Smart Routing | Route to specialized agents | 10-15% cost reduction |
| Prompt Engineering | Optimize input tokens | 15-20% cost reduction |
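A minimal sketch of the caching strategy above, assuming results are keyed by a hash of the task text; the cache-key scheme and TTL handling are assumptions, not a built-in client feature:

```python
import hashlib
import time

# Simple in-memory result cache keyed by task text.
_cache: dict[str, tuple[float, dict]] = {}
CACHE_TTL = 3600  # seconds


def cached_run(task: str, run_fn) -> dict:
    """Return a cached result for an identical task, otherwise run and store it."""
    key = hashlib.sha256(task.encode()).hexdigest()
    hit = _cache.get(key)
    if hit and time.time() - hit[0] < CACHE_TTL:
        return hit[1]  # reuse the stored result instead of paying for a new run
    result = run_fn(task)
    _cache[key] = (time.time(), result)
    return result
```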
=== "Industry Solutions"
!!! example "Industry-Specific Swarm Patterns"
| Industry | Swarm Pattern | Configuration | Use Case |
|----------|---------------|---------------|-----------|
| **Finance** | ```python<br>{<br> "swarm_type": "HiearchicalSwarm",<br> "agents": [<br> {"role": "RiskManager",<br> "models": ["risk_assessment"]},<br> {"role": "MarketAnalyst",<br> "markets": ["stocks", "crypto"]},<br> {"role": "Trader",<br> "strategies": ["momentum", "value"]}<br> ]<br>}``` | - Portfolio management<br>- Risk assessment<br>- Market analysis<br>- Trading execution | Automated trading desk |
| **Healthcare** | ```python<br>{<br> "swarm_type": "SequentialWorkflow",<br> "workflow": [<br> {"stage": "PatientIntake",<br> "agent": "DataCollector"},<br> {"stage": "Diagnosis",<br> "agent": "DiagnosticsSpecialist"},<br> {"stage": "Treatment",<br> "agent": "TreatmentPlanner"}<br> ]<br>}``` | - Patient analysis<br>- Diagnostic support<br>- Treatment planning<br>- Follow-up care | Clinical workflow automation |
| **Legal** | ```python<br>{<br> "swarm_type": "MixtureOfAgents",<br> "team": [<br> {"role": "Researcher",<br> "expertise": "case_law"},<br> {"role": "Analyst",<br> "expertise": "contracts"},<br> {"role": "Reviewer",<br> "expertise": "compliance"}<br> ]<br>}``` | - Document review<br>- Case analysis<br>- Contract review<br>- Compliance checks | Legal document processing |
| **E-commerce** | ```python<br>{<br> "swarm_type": "ConcurrentWorkflow",<br> "processes": [<br> {"task": "ProductCatalog",<br> "agent": "ContentManager"},<br> {"task": "PricingOptimization",<br> "agent": "PricingAnalyst"},<br> {"task": "CustomerService",<br> "agent": "SupportAgent"}<br> ]<br>}``` | - Product management<br>- Pricing optimization<br>- Customer support<br>- Inventory management | E-commerce operations |
| Industry | Use Case | Applications |
|----------|----------|--------------|
| **Finance** | Automated trading desk | - Portfolio management<br>- Risk assessment<br>- Market analysis<br>- Trading execution |
| **Healthcare** | Clinical workflow automation | - Patient analysis<br>- Diagnostic support<br>- Treatment planning<br>- Follow-up care |
| **Legal** | Legal document processing | - Document review<br>- Case analysis<br>- Contract review<br>- Compliance checks |
| **E-commerce** | E-commerce operations | - Product management<br>- Pricing optimization<br>- Customer support<br>- Inventory management |
=== "Error Handling"
!!! warning "Advanced Error Management Strategies"
| Error Code | Strategy | Implementation | Recovery Pattern |
|------------|----------|----------------|------------------|
| 400 | Input Validation | Pre-request parameter checks | ```python<br>{"validation": "strict",<br> "retry_on_fix": true}``` |
| 401 | Auth Management | Regular key rotation, secure storage | ```python<br>{"key_rotation": "7d",<br> "backup_keys": true}``` |
| 429 | Rate Limiting | Exponential backoff, request queuing | ```python<br>{"backoff_factor": 2,<br> "max_retries": 5}``` |
| 500 | Resilience | Retry with backoff, fallback logic | ```python<br>{"circuit_breaker": true,<br> "fallback_mode": "degraded"}``` |
| 503 | High Availability | Multi-region setup, redundancy | ```python<br>{"regions": ["us", "eu"],<br> "failover": true}``` |
| 504 | Timeout Handling | Adaptive timeouts, partial results | ```python<br>{"timeout_strategy": "adaptive",<br> "partial_results": true}``` |
| Error Code | Strategy | Recovery Pattern |
|------------|----------|------------------|
| 400 | Input Validation | Pre-request validation with fallback |
| 401 | Auth Management | Secure key rotation and storage |
| 429 | Rate Limiting | Exponential backoff with queuing |
| 500 | Resilience | Retry with circuit breaking |
| 503 | High Availability | Multi-region redundancy |
| 504 | Timeout Handling | Adaptive timeouts with partial results |
## Choosing the Right Swarm Architecture
@@ -78,25 +78,17 @@ Use this framework to select the optimal swarm architecture for your use case:
1. **Task Complexity Analysis**
- Simple tasks → `AutoSwarmBuilder`
- Complex tasks → `HiearchicalSwarm` or `MultiAgentRouter`
- Dynamic tasks → `AgentRearrange`
2. **Workflow Pattern**
- Linear processes → `SequentialWorkflow`
- Parallel operations → `ConcurrentWorkflow`
- Collaborative tasks → `GroupChat`
3. **Domain Requirements**
- Multi-domain expertise → `MixtureOfAgents`
- Data processing → `SpreadSheetSwarm`
- Quality assurance → `MajorityVoting`
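One way to encode this selection framework in code, as a hypothetical helper (the precedence of domain over workflow over complexity is a design choice, not something the API prescribes):

```python
# Hypothetical helper that transcribes the selection framework above;
# it is not part of the Swarms SDK.
def choose_swarm_type(complexity: str, workflow: str, domain: str) -> str:
    if domain == "multi_domain":
        return "MixtureOfAgents"
    if domain == "data_processing":
        return "SpreadSheetSwarm"
    if domain == "quality_assurance":
        return "MajorityVoting"
    if workflow == "linear":
        return "SequentialWorkflow"
    if workflow == "parallel":
        return "ConcurrentWorkflow"
    if workflow == "collaborative":
        return "GroupChat"
    if complexity == "simple":
        return "AutoSwarmBuilder"
    if complexity == "dynamic":
        return "AgentRearrange"
    return "HiearchicalSwarm"  # default for complex, multi-level tasks


print(choose_swarm_type("complex", "linear", "general"))  # -> SequentialWorkflow
```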
### Industry-Specific Recommendations
@@ -104,136 +96,43 @@ Use this framework to select the optimal swarm architecture for your use case:
=== "Finance"
!!! example "Financial Applications"
- Risk Analysis: `HiearchicalSwarm`
- Market Research: `MixtureOfAgents`
- Trading Strategies: `ConcurrentWorkflow`
- Portfolio Management: `SpreadSheetSwarm`
=== "Healthcare"
!!! example "Healthcare Applications"
- Patient Analysis: `SequentialWorkflow`
- Research Review: `MajorityVoting`
- Treatment Planning: `GroupChat`
- Medical Records: `MultiAgentRouter`
=== "Legal"
!!! example "Legal Applications"
- Document Review: `SequentialWorkflow`
- Case Analysis: `MixtureOfAgents`
- Compliance Check: `HiearchicalSwarm`
- Contract Analysis: `ConcurrentWorkflow`
## Production Implementation Guide
### Authentication Best Practices
## Production Best Practices
```python
import os

from dotenv import load_dotenv

# Load environment variables
load_dotenv()

# Secure API key management
API_KEY = os.getenv("SWARMS_API_KEY")
if not API_KEY:
    raise EnvironmentError("API key not found")

# Headers with retry capability
headers = {
    "x-api-key": API_KEY,
    "Content-Type": "application/json",
}
```
### Robust Error Handling
```python
import backoff
import requests
from typing import Dict, Any


class SwarmsAPIError(Exception):
    """Custom exception for Swarms API errors"""
    pass


@backoff.on_exception(
    backoff.expo,
    (requests.exceptions.RequestException, SwarmsAPIError),
    max_tries=5
)
def execute_swarm(payload: Dict[str, Any]) -> Dict[str, Any]:
    """
    Execute swarm with robust error handling and retries
    """
    try:
        response = requests.post(
            f"{BASE_URL}/v1/swarm/completions",
            headers=headers,
            json=payload,
            timeout=30
        )
        response.raise_for_status()
        return response.json()

    except requests.exceptions.RequestException as e:
        if e.response is not None:
            if e.response.status_code == 429:
                # Rate limit exceeded
                raise SwarmsAPIError("Rate limit exceeded")
            elif e.response.status_code == 401:
                # Authentication error
                raise SwarmsAPIError("Invalid API key")
        raise SwarmsAPIError(f"API request failed: {str(e)}")
```
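A usage sketch for `execute_swarm`, assuming `headers` from the authentication snippet above and a `BASE_URL` pointing at the Swarms API; the payload fields mirror the configuration examples in this guide and are illustrative rather than a guaranteed schema:

```python
# Illustrative payload; consult the Swarms API reference for the exact schema.
payload = {
    "swarm_type": "SequentialWorkflow",
    "task": "Research, draft, and review a market summary",  # hypothetical field
    "max_loops": 1,
}

try:
    result = execute_swarm(payload)
    print(result)
except SwarmsAPIError as e:
    # backoff re-raises the last error once max_tries is exhausted
    print(f"Swarm execution failed: {e}")
```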
## Appendix
### Common Patterns and Anti-patterns
### Best Practices Summary
!!! success "Recommended Patterns"
- Use appropriate swarm types for tasks
- Implement robust error handling
- Monitor and log executions
- Cache repeated results
- Rotate API keys regularly
!!! danger "Anti-patterns to Avoid"
- Hardcoding API keys
- Ignoring rate limits
- Missing error handling
- Excessive agent count
- Inadequate monitoring
### Performance Benchmarks

@@ -1,5 +0,0 @@
from swarms.utils.litellm_wrapper import LiteLLM

model = LiteLLM(model_name="gpt-4o-mini", verbose=True)

print(model.run("What is your purpose in life?"))

@@ -0,0 +1,8 @@
from swarms.utils.litellm_wrapper import LiteLLM

model = LiteLLM(
    model_name="groq/meta-llama/llama-4-scout-17b-16e-instruct",
    verbose=True,
)

print(model.run("What is your purpose in life?"))

@@ -52,4 +52,4 @@ print(
    agent.run(
        "Conduct a risk analysis of the top cryptocurrencies. Think for 2 loops internally"
    )
)
)

@@ -0,0 +1,23 @@
from swarms import Agent
from swarms.prompts.finance_agent_sys_prompt import (
    FINANCIAL_AGENT_SYS_PROMPT,
)
from dotenv import load_dotenv

load_dotenv()

# Initialize the agent
agent = Agent(
    agent_name="Financial-Analysis-Agent",
    agent_description="Personal finance advisor agent",
    system_prompt=FINANCIAL_AGENT_SYS_PROMPT,
    max_loops=1,
    model_name="groq/meta-llama/llama-4-scout-17b-16e-instruct",
    dynamic_temperature_enabled=True,
)

print(
    agent.run(
        "Perform a comprehensive analysis of the most promising undervalued ETFs, considering market trends, historical performance, and potential growth opportunities. Please think through the analysis for 2 internal loops to refine your insights."
    )
)

@@ -5,7 +5,7 @@ build-backend = "poetry.core.masonry.api"
[tool.poetry]
name = "swarms"
version = "7.6.5"
version = "7.6.7"
description = "Swarms - TGSC"
license = "MIT"
authors = ["Kye Gomez <kye@apac.ai>"]
@@ -77,6 +77,8 @@ numpy = "*"
litellm = "*"
torch = "*"
httpx = "*"
mcp = "*"
[tool.poetry.scripts]
swarms = "swarms.cli.main:main"

@@ -0,0 +1,233 @@
# pip install swarms bs4 requests

import re
from typing import Any, Dict
from urllib.parse import urlparse

import requests
from bs4 import BeautifulSoup
from dotenv import load_dotenv

from swarms import Agent

load_dotenv()

# Custom system prompt for financial sentiment analysis
FINANCIAL_SENTIMENT_SYSTEM_PROMPT = """
You are an expert financial analyst specializing in sentiment analysis of financial news and content. Your task is to:

1. Analyze financial content for bullish or bearish sentiment
2. Provide a numerical sentiment score between 0.0 (extremely bearish) and 1.0 (extremely bullish) where:
   - 0.0-0.2: Extremely bearish (strong negative outlook)
   - 0.2-0.4: Bearish (negative outlook)
   - 0.4-0.6: Neutral (balanced or unclear outlook)
   - 0.6-0.8: Bullish (positive outlook)
   - 0.8-1.0: Extremely bullish (strong positive outlook)
3. Provide detailed rationale for your sentiment score by considering:
   - Market indicators and metrics mentioned
   - Expert opinions and quotes
   - Historical comparisons
   - Industry trends and context
   - Risk factors and potential challenges
   - Growth opportunities and positive catalysts
   - Overall market sentiment and broader economic factors

Your analysis should be:
- Objective and data-driven
- Based on factual information present in the content
- Free from personal bias or speculation
- Considering both explicit and implicit sentiment indicators
- Taking into account the broader market context

For each analysis, structure your response as a clear sentiment score backed by comprehensive reasoning that explains why you arrived at that specific rating.
"""


class ArticleExtractor:
    """Class to handle article content extraction and cleaning."""

    # Common financial news domains and their article content selectors
    DOMAIN_SELECTORS = {
        "seekingalpha.com": {"article": "div#SA-content"},
        "finance.yahoo.com": {"article": "div.caas-body"},
        "reuters.com": {
            "article": "div.article-body__content__17Yit"
        },
        "bloomberg.com": {"article": "div.body-content"},
        "marketwatch.com": {"article": "div.article__body"},
        # Add more domains and their selectors as needed
    }

    @staticmethod
    def get_domain(url: str) -> str:
        """Extract domain from URL."""
        return urlparse(url).netloc.lower()

    @staticmethod
    def clean_text(text: str) -> str:
        """Clean extracted text content."""
        # Remove extra whitespace
        text = re.sub(r"\s+", " ", text)
        # Remove special characters but keep basic punctuation
        text = re.sub(r"[^\w\s.,!?-]", "", text)
        # Remove multiple periods
        text = re.sub(r"\.{2,}", ".", text)
        return text.strip()

    @classmethod
    def extract_article_content(
        cls, html_content: str, domain: str
    ) -> str:
        """Extract article content using domain-specific selectors."""
        soup = BeautifulSoup(html_content, "html.parser")

        # Remove unwanted elements
        for element in soup.find_all(
            ["script", "style", "nav", "header", "footer", "iframe"]
        ):
            element.decompose()

        # Try domain-specific selector first
        if domain in cls.DOMAIN_SELECTORS:
            selector = cls.DOMAIN_SELECTORS[domain]["article"]
            content = soup.select_one(selector)
            if content:
                return cls.clean_text(content.get_text())

        # Fallback to common article containers
        article_containers = [
            "article",
            '[role="article"]',
            ".article-content",
            ".post-content",
            ".entry-content",
            "#main-content",
        ]

        for container in article_containers:
            content = soup.select_one(container)
            if content:
                return cls.clean_text(content.get_text())

        # Last resort: extract all paragraph text
        paragraphs = soup.find_all("p")
        if paragraphs:
            return cls.clean_text(
                " ".join(p.get_text() for p in paragraphs)
            )

        return cls.clean_text(soup.get_text())


def fetch_url_content(url: str) -> Dict[str, Any]:
    """
    Fetch and extract content from a financial news URL.

    Args:
        url (str): The URL of the financial news article

    Returns:
        Dict[str, Any]: Dictionary containing extracted content and metadata
    """
    try:
        headers = {
            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36"
        }
        response = requests.get(url, headers=headers, timeout=10)
        response.raise_for_status()

        domain = ArticleExtractor.get_domain(url)
        content = ArticleExtractor.extract_article_content(
            response.text, domain
        )

        # Extract title if available
        soup = BeautifulSoup(response.text, "html.parser")
        title = soup.title.string if soup.title else None

        return {
            "title": title,
            "content": content,
            "domain": domain,
            "url": url,
            "status": "success",
        }

    except Exception as e:
        return {
            "content": f"Error fetching URL content: {str(e)}",
            "status": "error",
            "url": url,
        }


tools = [
    {
        "type": "function",
        "function": {
            "name": "analyze_sentiment",
            "description": "Analyze the sentiment of financial content and provide a bullish/bearish rating with rationale.",
            "parameters": {
                "type": "object",
                "properties": {
                    "sentiment_score": {
                        "type": "number",
                        "description": "A score from 0.0 (extremely bearish) to 1.0 (extremely bullish)",
                    },
                    "rationale": {
                        "type": "string",
                        "description": "Detailed explanation of the sentiment analysis",
                    },
                },
                "required": ["sentiment_score", "rationale"],
            },
        },
    }
]

# Initialize the agent
agent = Agent(
    agent_name="Financial-Sentiment-Analyst",
    agent_description="Expert financial sentiment analyzer that provides detailed bullish/bearish analysis of financial content",
    system_prompt=FINANCIAL_SENTIMENT_SYSTEM_PROMPT,
    max_loops=1,
    tools_list_dictionary=tools,
    output_type="final",
    model_name="gpt-4o",
)


def run_sentiment_agent(url: str) -> Dict[str, Any]:
    """
    Run the sentiment analysis agent on a given URL.

    Args:
        url (str): The URL of the financial content to analyze

    Returns:
        Dict[str, Any]: Dictionary containing sentiment analysis results
    """
    article_data = fetch_url_content(url)

    if article_data["status"] == "error":
        return {"error": article_data["content"], "status": "error"}

    prompt = f"""
    Analyze the following financial article:

    Title: {article_data.get('title', 'N/A')}
    Source: {article_data['domain']}
    URL: {article_data['url']}

    Content:
    {article_data['content']}

    Please provide a detailed sentiment analysis with a score and explanation.
    """

    return agent.run(prompt)


if __name__ == "__main__":
    url = "https://finance.yahoo.com/"
    result = run_sentiment_agent(url)
    print(result)

@@ -2,7 +2,6 @@ from swarms.structs.agent import Agent
from swarms.structs.agent_builder import AgentsBuilder
from swarms.structs.agents_available import showcase_available_agents
from swarms.structs.async_workflow import AsyncWorkflow
from experimental.auto_swarm import AutoSwarm, AutoSwarmRouter
from swarms.structs.base_structure import BaseStructure
from swarms.structs.base_swarm import BaseSwarm
from swarms.structs.base_workflow import BaseWorkflow

@@ -27,6 +27,12 @@ except ImportError:
litellm.ssl_verify = False
class LiteLLMException(Exception):
    """
    Exception for LiteLLM.
    """
def get_audio_base64(audio_source: str) -> str:
"""
Convert audio from a given source to a base64 encoded string.
@@ -79,6 +85,7 @@ class LiteLLM:
        audio: str = None,
        retries: int = 3,
        verbose: bool = False,
        caching: bool = False,
        *args,
        **kwargs,
    ):
@@ -102,6 +109,7 @@ class LiteLLM:
        self.tools_list_dictionary = tools_list_dictionary
        self.tool_choice = tool_choice
        self.parallel_tool_calls = parallel_tool_calls
        self.caching = caching
        self.modalities = []
        self._cached_messages = {}  # Cache for prepared messages
        self.messages = []  # Initialize messages list
@@ -253,6 +261,7 @@ class LiteLLM:
            "stream": self.stream,
            "temperature": self.temperature,
            "max_tokens": self.max_tokens,
            "caching": self.caching,
            **kwargs,
        }
@@ -286,7 +295,7 @@ class LiteLLM:
            response = completion(**completion_params)

            return response.choices[0].message.content

        except Exception as error:
        except LiteLLMException as error:
            logger.error(f"Error in LiteLLM run: {str(error)}")

            if "rate_limit" in str(error).lower():
                logger.warning(
