diff --git a/docs/mkdocs.yml b/docs/mkdocs.yml
index 807792c3..2d14fbdc 100644
--- a/docs/mkdocs.yml
+++ b/docs/mkdocs.yml
@@ -272,9 +272,9 @@ nav:
- Swarms Vision: "swarms/concept/vision.md"
- Swarm Ecosystem: "swarms/concept/swarm_ecosystem.md"
- Swarms Products: "swarms/products.md"
- - Swarms Framework Architecture: "swarms/concept/framework_architecture.md"
- Developers and Contributors:
+ - Swarms Framework Architecture: "swarms/concept/framework_architecture.md"
- Bounty Program: "corporate/bounty_program.md"
- Contributing:
- Contributing: "swarms/contributing.md"
diff --git a/docs/swarms_cloud/best_practices.md b/docs/swarms_cloud/best_practices.md
index 9e17a7c2..9a33263f 100644
--- a/docs/swarms_cloud/best_practices.md
+++ b/docs/swarms_cloud/best_practices.md
@@ -8,67 +8,67 @@ This comprehensive guide outlines production-grade best practices for using the
!!! info "Available Swarm Architectures"
- | Swarm Type | Best For | Use Cases | Example Configuration |
- |------------|----------|------------|---------------------|
- | `AgentRearrange` | Dynamic workflows | - Complex task decomposition<br>- Adaptive processing<br>- Multi-stage analysis<br>- Dynamic resource allocation | `{"swarm_type": "AgentRearrange", "rearrange_flow": "optimize for efficiency", "max_loops": 3}` |
- | `MixtureOfAgents` | Diverse expertise | - Cross-domain problems<br>- Comprehensive analysis<br>- Multi-perspective tasks<br>- Research synthesis | `{"swarm_type": "MixtureOfAgents", "agents": [{"role": "researcher"}, {"role": "analyst"}, {"role": "writer"}]}` |
- | `SpreadSheetSwarm` | Data processing | - Financial analysis<br>- Data transformation<br>- Batch calculations<br>- Report generation | `{"swarm_type": "SpreadSheetSwarm", "data_format": "csv", "analysis_type": "financial"}` |
- | `SequentialWorkflow` | Linear processes | - Document processing<br>- Step-by-step analysis<br>- Quality control<br>- Content pipeline | `{"swarm_type": "SequentialWorkflow", "steps": ["research", "draft", "review", "finalize"]}` |
- | `ConcurrentWorkflow` | Parallel tasks | - Batch processing<br>- Independent analyses<br>- High-throughput needs<br>- Multi-market analysis | `{"swarm_type": "ConcurrentWorkflow", "max_parallel": 5, "batch_size": 10}` |
- | `GroupChat` | Collaborative solving | - Brainstorming<br>- Decision making<br>- Problem solving<br>- Strategy development | `{"swarm_type": "GroupChat", "participants": ["expert1", "expert2"], "discussion_rounds": 3}` |
- | `MultiAgentRouter` | Task distribution | - Load balancing<br>- Specialized processing<br>- Resource optimization<br>- Service routing | `{"swarm_type": "MultiAgentRouter", "routing_strategy": "skill_based", "fallback_agent": "general"}` |
- | `AutoSwarmBuilder` | Automated setup | - Quick prototyping<br>- Simple tasks<br>- Testing<br>- MVP development | `{"swarm_type": "AutoSwarmBuilder", "complexity": "medium", "optimize_for": "speed"}` |
- | `HiearchicalSwarm` | Complex organization | - Project management<br>- Research analysis<br>- Enterprise workflows<br>- Team automation | `{"swarm_type": "HiearchicalSwarm", "levels": ["manager", "specialist", "worker"]}` |
- | `MajorityVoting` | Consensus needs | - Quality assurance<br>- Decision validation<br>- Risk assessment<br>- Content moderation | `{"swarm_type": "MajorityVoting", "min_votes": 3, "threshold": 0.7}` |
+ | Swarm Type | Best For | Use Cases |
+ |------------|----------|------------|
+ | `AgentRearrange` | Dynamic workflows | - Complex task decomposition<br>- Adaptive processing<br>- Multi-stage analysis<br>- Dynamic resource allocation |
+ | `MixtureOfAgents` | Diverse expertise | - Cross-domain problems<br>- Comprehensive analysis<br>- Multi-perspective tasks<br>- Research synthesis |
+ | `SpreadSheetSwarm` | Data processing | - Financial analysis<br>- Data transformation<br>- Batch calculations<br>- Report generation |
+ | `SequentialWorkflow` | Linear processes | - Document processing<br>- Step-by-step analysis<br>- Quality control<br>- Content pipeline |
+ | `ConcurrentWorkflow` | Parallel tasks | - Batch processing<br>- Independent analyses<br>- High-throughput needs<br>- Multi-market analysis |
+ | `GroupChat` | Collaborative solving | - Brainstorming<br>- Decision making<br>- Problem solving<br>- Strategy development |
+ | `MultiAgentRouter` | Task distribution | - Load balancing<br>- Specialized processing<br>- Resource optimization<br>- Service routing |
+ | `AutoSwarmBuilder` | Automated setup | - Quick prototyping<br>- Simple tasks<br>- Testing<br>- MVP development |
+ | `HiearchicalSwarm` | Complex organization | - Project management<br>- Research analysis<br>- Enterprise workflows<br>- Team automation |
+ | `MajorityVoting` | Consensus needs | - Quality assurance<br>- Decision validation<br>- Risk assessment<br>- Content moderation |
=== "Application Patterns"
!!! tip "Specialized Application Configurations"
- | Application | Recommended Swarm | Configuration Example | Benefits |
- |------------|-------------------|----------------------|-----------|
- | **Team Automation** | `HiearchicalSwarm` | `{"swarm_type": "HiearchicalSwarm", "agents": [{"role": "ProjectManager", "responsibilities": ["planning", "coordination"]}, {"role": "TechLead", "responsibilities": ["architecture", "review"]}, {"role": "Developers", "count": 3, "specializations": ["frontend", "backend", "testing"]}]}` | - Automated team coordination<br>- Clear responsibility chain<br>- Scalable team structure |
- | **Research Pipeline** | `SequentialWorkflow` | `{"swarm_type": "SequentialWorkflow", "pipeline": [{"stage": "Literature Review", "agent_type": "Researcher"}, {"stage": "Data Analysis", "agent_type": "Analyst"}, {"stage": "Report Generation", "agent_type": "Writer"}]}` | - Structured research process<br>- Quality control at each stage<br>- Comprehensive output |
- | **Trading System** | `ConcurrentWorkflow` | `{"swarm_type": "ConcurrentWorkflow", "agents": [{"market": "crypto", "strategy": "momentum"}, {"market": "forex", "strategy": "mean_reversion"}, {"market": "stocks", "strategy": "value"}]}` | - Multi-market coverage<br>- Real-time analysis<br>- Risk distribution |
- | **Content Factory** | `MixtureOfAgents` | `{"swarm_type": "MixtureOfAgents", "workflow": [{"role": "Researcher", "focus": "topic_research"}, {"role": "Writer", "style": "engaging"}, {"role": "Editor", "quality_standards": "high"}]}` | - Automated content creation<br>- Consistent quality<br>- High throughput |
+ | Application | Recommended Swarm | Benefits |
+ |------------|-------------------|-----------|
+ | **Team Automation** | `HiearchicalSwarm` | - Automated team coordination<br>- Clear responsibility chain<br>- Scalable team structure |
+ | **Research Pipeline** | `SequentialWorkflow` | - Structured research process<br>- Quality control at each stage<br>- Comprehensive output |
+ | **Trading System** | `ConcurrentWorkflow` | - Multi-market coverage<br>- Real-time analysis<br>- Risk distribution |
+ | **Content Factory** | `MixtureOfAgents` | - Automated content creation<br>- Consistent quality<br>- High throughput |
=== "Cost Optimization"
!!! tip "Advanced Cost Management Strategies"
- | Strategy | Implementation | Impact | Configuration Example |
- |----------|----------------|---------|---------------------|
- | Batch Processing | Group related tasks | 20-30% cost reduction | `{"batch_size": 10, "parallel_execution": true, "deduplication": true}` |
- | Off-peak Usage | Schedule for 8 PM - 6 AM PT | 15-25% cost reduction | `{"schedule": "0 20 * * *", "timezone": "America/Los_Angeles"}` |
- | Token Optimization | Precise prompts, focused tasks | 10-20% cost reduction | `{"max_tokens": 2000, "compression": true, "cache_similar": true}` |
- | Caching | Store reusable results | 30-40% cost reduction | `{"cache_ttl": 3600, "similarity_threshold": 0.95}` |
- | Agent Optimization | Use minimum required agents | 15-25% cost reduction | `{"auto_scale": true, "min_agents": 2, "max_agents": 5}` |
- | Smart Routing | Route to specialized agents | 10-15% cost reduction | `{"routing_strategy": "cost_effective", "fallback": "general"}` |
- | Prompt Engineering | Optimize input tokens | 15-20% cost reduction | `{"prompt_template": "focused", "remove_redundancy": true}` |
+ | Strategy | Implementation | Impact |
+ |----------|----------------|---------|
+ | Batch Processing | Group related tasks | 20-30% cost reduction |
+ | Off-peak Usage | Schedule for 8 PM - 6 AM PT | 15-25% cost reduction |
+ | Token Optimization | Precise prompts, focused tasks | 10-20% cost reduction |
+ | Caching | Store reusable results | 30-40% cost reduction |
+ | Agent Optimization | Use minimum required agents | 15-25% cost reduction |
+ | Smart Routing | Route to specialized agents | 10-15% cost reduction |
+ | Prompt Engineering | Optimize input tokens | 15-20% cost reduction |
=== "Industry Solutions"
!!! example "Industry-Specific Swarm Patterns"
- | Industry | Swarm Pattern | Configuration | Use Case |
- |----------|---------------|---------------|-----------|
- | **Finance** | `{"swarm_type": "HiearchicalSwarm", "agents": [{"role": "RiskManager", "models": ["risk_assessment"]}, {"role": "MarketAnalyst", "markets": ["stocks", "crypto"]}, {"role": "Trader", "strategies": ["momentum", "value"]}]}` | - Portfolio management<br>- Risk assessment<br>- Market analysis<br>- Trading execution | Automated trading desk |
- | **Healthcare** | `{"swarm_type": "SequentialWorkflow", "workflow": [{"stage": "PatientIntake", "agent": "DataCollector"}, {"stage": "Diagnosis", "agent": "DiagnosticsSpecialist"}, {"stage": "Treatment", "agent": "TreatmentPlanner"}]}` | - Patient analysis<br>- Diagnostic support<br>- Treatment planning<br>- Follow-up care | Clinical workflow automation |
- | **Legal** | `{"swarm_type": "MixtureOfAgents", "team": [{"role": "Researcher", "expertise": "case_law"}, {"role": "Analyst", "expertise": "contracts"}, {"role": "Reviewer", "expertise": "compliance"}]}` | - Document review<br>- Case analysis<br>- Contract review<br>- Compliance checks | Legal document processing |
- | **E-commerce** | `{"swarm_type": "ConcurrentWorkflow", "processes": [{"task": "ProductCatalog", "agent": "ContentManager"}, {"task": "PricingOptimization", "agent": "PricingAnalyst"}, {"task": "CustomerService", "agent": "SupportAgent"}]}` | - Product management<br>- Pricing optimization<br>- Customer support<br>- Inventory management | E-commerce operations |
+ | Industry | Use Case | Applications |
+ |----------|----------|--------------|
+ | **Finance** | Automated trading desk | - Portfolio management<br>- Risk assessment<br>- Market analysis<br>- Trading execution |
+ | **Healthcare** | Clinical workflow automation | - Patient analysis<br>- Diagnostic support<br>- Treatment planning<br>- Follow-up care |
+ | **Legal** | Legal document processing | - Document review<br>- Case analysis<br>- Contract review<br>- Compliance checks |
+ | **E-commerce** | E-commerce operations | - Product management<br>- Pricing optimization<br>- Customer support<br>- Inventory management |
=== "Error Handling"
!!! warning "Advanced Error Management Strategies"
- | Error Code | Strategy | Implementation | Recovery Pattern |
- |------------|----------|----------------|------------------|
- | 400 | Input Validation | Pre-request parameter checks | `{"validation": "strict", "retry_on_fix": true}` |
- | 401 | Auth Management | Regular key rotation, secure storage | `{"key_rotation": "7d", "backup_keys": true}` |
- | 429 | Rate Limiting | Exponential backoff, request queuing | `{"backoff_factor": 2, "max_retries": 5}` |
- | 500 | Resilience | Retry with backoff, fallback logic | `{"circuit_breaker": true, "fallback_mode": "degraded"}` |
- | 503 | High Availability | Multi-region setup, redundancy | `{"regions": ["us", "eu"], "failover": true}` |
- | 504 | Timeout Handling | Adaptive timeouts, partial results | `{"timeout_strategy": "adaptive", "partial_results": true}` |
+ | Error Code | Strategy | Recovery Pattern |
+ |------------|----------|------------------|
+ | 400 | Input Validation | Pre-request validation with fallback |
+ | 401 | Auth Management | Secure key rotation and storage |
+ | 429 | Rate Limiting | Exponential backoff with queuing |
+ | 500 | Resilience | Retry with circuit breaking |
+ | 503 | High Availability | Multi-region redundancy |
+ | 504 | Timeout Handling | Adaptive timeouts with partial results |
## Choosing the Right Swarm Architecture
@@ -78,25 +78,17 @@ Use this framework to select the optimal swarm architecture for your use case:
1. **Task Complexity Analysis**
- Simple tasks → `AutoSwarmBuilder`
-
- Complex tasks → `HiearchicalSwarm` or `MultiAgentRouter`
-
- Dynamic tasks → `AgentRearrange`
2. **Workflow Pattern**
-
- Linear processes → `SequentialWorkflow`
-
- Parallel operations → `ConcurrentWorkflow`
-
- Collaborative tasks → `GroupChat`
3. **Domain Requirements**
-
- Multi-domain expertise → `MixtureOfAgents`
-
- Data processing → `SpreadSheetSwarm`
-
- Quality assurance → `MajorityVoting`
### Industry-Specific Recommendations
@@ -104,136 +96,43 @@ Use this framework to select the optimal swarm architecture for your use case:
=== "Finance"
!!! example "Financial Applications"
-
-
- Risk Analysis: `HiearchicalSwarm`
-
- Market Research: `MixtureOfAgents`
-
- Trading Strategies: `ConcurrentWorkflow`
-
- Portfolio Management: `SpreadSheetSwarm`
=== "Healthcare"
!!! example "Healthcare Applications"
-
-
- Patient Analysis: `SequentialWorkflow`
-
- Research Review: `MajorityVoting`
-
- Treatment Planning: `GroupChat`
-
- Medical Records: `MultiAgentRouter`
=== "Legal"
!!! example "Legal Applications"
-
-
- Document Review: `SequentialWorkflow`
-
- Case Analysis: `MixtureOfAgents`
-
- Compliance Check: `HiearchicalSwarm`
-
- Contract Analysis: `ConcurrentWorkflow`
-## Production Implementation Guide
-
-### Authentication Best Practices
+## Production Best Practices
-```python
-import os
-from dotenv import load_dotenv
-
-# Load environment variables
-load_dotenv()
-
-# Secure API key management
-API_KEY = os.getenv("SWARMS_API_KEY")
-if not API_KEY:
-    raise EnvironmentError("API key not found")
-
-# Headers with retry capability
-headers = {
-    "x-api-key": API_KEY,
-    "Content-Type": "application/json",
-}
-```
-
-### Robust Error Handling
-
-```python
-import backoff
-import requests
-from typing import Dict, Any
-
-class SwarmsAPIError(Exception):
-    """Custom exception for Swarms API errors"""
-    pass
-
-@backoff.on_exception(
-    backoff.expo,
-    (requests.exceptions.RequestException, SwarmsAPIError),
-    max_tries=5
-)
-def execute_swarm(payload: Dict[str, Any]) -> Dict[str, Any]:
-    """
-    Execute swarm with robust error handling and retries
-    """
-    try:
-        response = requests.post(
-            f"{BASE_URL}/v1/swarm/completions",
-            headers=headers,
-            json=payload,
-            timeout=30
-        )
-
-        response.raise_for_status()
-        return response.json()
-
-    except requests.exceptions.RequestException as e:
-        if e.response is not None:
-            if e.response.status_code == 429:
-                # Rate limit exceeded
-                raise SwarmsAPIError("Rate limit exceeded")
-            elif e.response.status_code == 401:
-                # Authentication error
-                raise SwarmsAPIError("Invalid API key")
-        raise SwarmsAPIError(f"API request failed: {str(e)}")
-```
-
-
-## Appendix
-
-### Common Patterns and Anti-patterns
+### Best Practices Summary
!!! success "Recommended Patterns"
-
- Use appropriate swarm types for tasks
-
- Implement robust error handling
-
- Monitor and log executions
-
- Cache repeated results
-
- Rotate API keys regularly
!!! danger "Anti-patterns to Avoid"
-
-
- Hardcoding API keys
-
- Ignoring rate limits
-
- Missing error handling
-
-
- Excessive agent count
-
- Inadequate monitoring
### Performance Benchmarks
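The best-practices doc above names "exponential backoff with queuing" as the 429 recovery pattern without showing it after the code examples were removed. A minimal dependency-free sketch of that pattern follows; `with_backoff` and `flaky` are hypothetical illustration names, not part of the Swarms API:

```python
import time

def with_backoff(fn, max_retries=5, backoff_factor=2, base_delay=0.01):
    """Retry fn, doubling the wait after each failure (exponential backoff)."""
    for attempt in range(max_retries):
        try:
            return fn()
        except RuntimeError:
            if attempt == max_retries - 1:
                raise  # retries exhausted: surface the error
            time.sleep(base_delay * (backoff_factor ** attempt))

# Simulated flaky endpoint: fails twice with a rate-limit error, then succeeds
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("429: rate limit")
    return "ok"

print(with_backoff(flaky))  # → ok, succeeds on the third attempt
```

The same shape underlies the `backoff_factor`/`max_retries` configuration shown in the error-handling table.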
diff --git a/litellm_example.py b/litellm_example.py
deleted file mode 100644
index 63b297ef..00000000
--- a/litellm_example.py
+++ /dev/null
@@ -1,5 +0,0 @@
-from swarms.utils.litellm_wrapper import LiteLLM
-
-model = LiteLLM(model_name="gpt-4o-mini", verbose=True)
-
-print(model.run("What is your purpose in life?"))
diff --git a/llama4_examples/litellm_example.py b/llama4_examples/litellm_example.py
new file mode 100644
index 00000000..fe210a66
--- /dev/null
+++ b/llama4_examples/litellm_example.py
@@ -0,0 +1,8 @@
+from swarms.utils.litellm_wrapper import LiteLLM
+
+model = LiteLLM(
+ model_name="groq/meta-llama/llama-4-scout-17b-16e-instruct",
+ verbose=True,
+)
+
+print(model.run("What is your purpose in life?"))
diff --git a/llama_4.py b/llama4_examples/llama_4.py
similarity index 99%
rename from llama_4.py
rename to llama4_examples/llama_4.py
index df4a08b7..6ece57ee 100644
--- a/llama_4.py
+++ b/llama4_examples/llama_4.py
@@ -52,4 +52,4 @@ print(
agent.run(
"Conduct a risk analysis of the top cryptocurrencies. Think for 2 loops internally"
)
-)
\ No newline at end of file
+)
diff --git a/llama4_examples/simple_agent.py b/llama4_examples/simple_agent.py
new file mode 100644
index 00000000..76c44251
--- /dev/null
+++ b/llama4_examples/simple_agent.py
@@ -0,0 +1,23 @@
+from swarms import Agent
+from swarms.prompts.finance_agent_sys_prompt import (
+ FINANCIAL_AGENT_SYS_PROMPT,
+)
+from dotenv import load_dotenv
+
+load_dotenv()
+
+# Initialize the agent
+agent = Agent(
+ agent_name="Financial-Analysis-Agent",
+ agent_description="Personal finance advisor agent",
+ system_prompt=FINANCIAL_AGENT_SYS_PROMPT,
+ max_loops=1,
+ model_name="groq/meta-llama/llama-4-scout-17b-16e-instruct",
+ dynamic_temperature_enabled=True,
+)
+
+print(
+ agent.run(
+ "Perform a comprehensive analysis of the most promising undervalued ETFs, considering market trends, historical performance, and potential growth opportunities. Please think through the analysis for 2 internal loops to refine your insights."
+ )
+)
diff --git a/pyproject.toml b/pyproject.toml
index be5f833d..f2fd216d 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -5,7 +5,7 @@ build-backend = "poetry.core.masonry.api"
[tool.poetry]
name = "swarms"
-version = "7.6.5"
+version = "7.6.7"
description = "Swarms - TGSC"
license = "MIT"
authors = ["Kye Gomez "]
@@ -77,6 +77,8 @@ numpy = "*"
litellm = "*"
torch = "*"
httpx = "*"
+mcp = "*"
+
[tool.poetry.scripts]
swarms = "swarms.cli.main:main"
diff --git a/sentiment_news_analysis.py b/sentiment_news_analysis.py
new file mode 100644
index 00000000..bc283e6e
--- /dev/null
+++ b/sentiment_news_analysis.py
@@ -0,0 +1,233 @@
+# pip install swarms bs4 requests
+
+import re
+from typing import Any, Dict
+from urllib.parse import urlparse
+
+import requests
+from bs4 import BeautifulSoup
+from dotenv import load_dotenv
+
+from swarms import Agent
+
+load_dotenv()
+
+# Custom system prompt for financial sentiment analysis
+FINANCIAL_SENTIMENT_SYSTEM_PROMPT = """
+You are an expert financial analyst specializing in sentiment analysis of financial news and content. Your task is to:
+
+1. Analyze financial content for bullish or bearish sentiment
+2. Provide a numerical sentiment score between 0.0 (extremely bearish) and 1.0 (extremely bullish) where:
+ - 0.0-0.2: Extremely bearish (strong negative outlook)
+ - 0.2-0.4: Bearish (negative outlook)
+ - 0.4-0.6: Neutral (balanced or unclear outlook)
+ - 0.6-0.8: Bullish (positive outlook)
+ - 0.8-1.0: Extremely bullish (strong positive outlook)
+
+3. Provide detailed rationale for your sentiment score by considering:
+ - Market indicators and metrics mentioned
+ - Expert opinions and quotes
+ - Historical comparisons
+ - Industry trends and context
+ - Risk factors and potential challenges
+ - Growth opportunities and positive catalysts
+ - Overall market sentiment and broader economic factors
+
+Your analysis should be:
+- Objective and data-driven
+- Based on factual information present in the content
+- Free from personal bias or speculation
+- Considering both explicit and implicit sentiment indicators
+- Taking into account the broader market context
+
+For each analysis, structure your response as a clear sentiment score backed by comprehensive reasoning that explains why you arrived at that specific rating.
+"""
+
+
+class ArticleExtractor:
+ """Class to handle article content extraction and cleaning."""
+
+ # Common financial news domains and their article content selectors
+ DOMAIN_SELECTORS = {
+ "seekingalpha.com": {"article": "div#SA-content"},
+ "finance.yahoo.com": {"article": "div.caas-body"},
+ "reuters.com": {
+ "article": "div.article-body__content__17Yit"
+ },
+ "bloomberg.com": {"article": "div.body-content"},
+ "marketwatch.com": {"article": "div.article__body"},
+ # Add more domains and their selectors as needed
+ }
+
+ @staticmethod
+ def get_domain(url: str) -> str:
+ """Extract domain from URL."""
+ return urlparse(url).netloc.lower()
+
+ @staticmethod
+ def clean_text(text: str) -> str:
+ """Clean extracted text content."""
+ # Remove extra whitespace
+ text = re.sub(r"\s+", " ", text)
+ # Remove special characters but keep basic punctuation
+ text = re.sub(r"[^\w\s.,!?-]", "", text)
+ # Remove multiple periods
+ text = re.sub(r"\.{2,}", ".", text)
+ return text.strip()
+
+ @classmethod
+ def extract_article_content(
+ cls, html_content: str, domain: str
+ ) -> str:
+ """Extract article content using domain-specific selectors."""
+ soup = BeautifulSoup(html_content, "html.parser")
+
+ # Remove unwanted elements
+ for element in soup.find_all(
+ ["script", "style", "nav", "header", "footer", "iframe"]
+ ):
+ element.decompose()
+
+ # Try domain-specific selector first
+ if domain in cls.DOMAIN_SELECTORS:
+ selector = cls.DOMAIN_SELECTORS[domain]["article"]
+ content = soup.select_one(selector)
+ if content:
+ return cls.clean_text(content.get_text())
+
+ # Fallback to common article containers
+ article_containers = [
+ "article",
+ '[role="article"]',
+ ".article-content",
+ ".post-content",
+ ".entry-content",
+ "#main-content",
+ ]
+
+ for container in article_containers:
+ content = soup.select_one(container)
+ if content:
+ return cls.clean_text(content.get_text())
+
+ # Last resort: extract all paragraph text
+ paragraphs = soup.find_all("p")
+ if paragraphs:
+ return cls.clean_text(
+ " ".join(p.get_text() for p in paragraphs)
+ )
+
+ return cls.clean_text(soup.get_text())
+
+
+def fetch_url_content(url: str) -> Dict[str, Any]:
+ """
+ Fetch and extract content from a financial news URL.
+
+ Args:
+ url (str): The URL of the financial news article
+
+ Returns:
+ Dict[str, Any]: Dictionary containing extracted content and metadata
+ """
+ try:
+ headers = {
+ "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36"
+ }
+ response = requests.get(url, headers=headers, timeout=10)
+ response.raise_for_status()
+
+ domain = ArticleExtractor.get_domain(url)
+ content = ArticleExtractor.extract_article_content(
+ response.text, domain
+ )
+
+ # Extract title if available
+ soup = BeautifulSoup(response.text, "html.parser")
+ title = soup.title.string if soup.title else None
+
+ return {
+ "title": title,
+ "content": content,
+ "domain": domain,
+ "url": url,
+ "status": "success",
+ }
+ except Exception as e:
+ return {
+ "content": f"Error fetching URL content: {str(e)}",
+ "status": "error",
+ "url": url,
+ }
+
+
+tools = [
+ {
+ "type": "function",
+ "function": {
+ "name": "analyze_sentiment",
+ "description": "Analyze the sentiment of financial content and provide a bullish/bearish rating with rationale.",
+ "parameters": {
+ "type": "object",
+ "properties": {
+ "sentiment_score": {
+ "type": "number",
+ "description": "A score from 0.0 (extremely bearish) to 1.0 (extremely bullish)",
+ },
+ "rationale": {
+ "type": "string",
+ "description": "Detailed explanation of the sentiment analysis",
+ },
+ },
+ "required": ["sentiment_score", "rationale"],
+ },
+ },
+ }
+]
+
+# Initialize the agent
+agent = Agent(
+ agent_name="Financial-Sentiment-Analyst",
+ agent_description="Expert financial sentiment analyzer that provides detailed bullish/bearish analysis of financial content",
+ system_prompt=FINANCIAL_SENTIMENT_SYSTEM_PROMPT,
+ max_loops=1,
+ tools_list_dictionary=tools,
+ output_type="final",
+ model_name="gpt-4o",
+)
+
+
+def run_sentiment_agent(url: str) -> Dict[str, Any]:
+ """
+ Run the sentiment analysis agent on a given URL.
+
+ Args:
+ url (str): The URL of the financial content to analyze
+
+ Returns:
+ Dict[str, Any]: Dictionary containing sentiment analysis results
+ """
+ article_data = fetch_url_content(url)
+
+ if article_data["status"] == "error":
+ return {"error": article_data["content"], "status": "error"}
+
+ prompt = f"""
+ Analyze the following financial article:
+ Title: {article_data.get('title', 'N/A')}
+ Source: {article_data['domain']}
+ URL: {article_data['url']}
+
+ Content:
+ {article_data['content']}
+
+ Please provide a detailed sentiment analysis with a score and explanation.
+ """
+
+ return agent.run(prompt)
+
+
+if __name__ == "__main__":
+ url = "https://finance.yahoo.com/"
+ result = run_sentiment_agent(url)
+ print(result)
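The three regex passes in `ArticleExtractor.clean_text` above can be exercised standalone. This sketch reproduces the same pipeline outside the class to show what each pass does to scraped article text:

```python
import re

def clean_text(text: str) -> str:
    # Collapse runs of whitespace (including newlines) into single spaces
    text = re.sub(r"\s+", " ", text)
    # Drop special characters, keeping word chars and basic punctuation
    text = re.sub(r"[^\w\s.,!?-]", "", text)
    # Collapse repeated periods (ellipses, leaders) into one
    text = re.sub(r"\.{2,}", ".", text)
    return text.strip()

print(clean_text("Stocks  rallied...  up 3%!!\n(analysts cheered)"))
# → Stocks rallied. up 3!! analysts cheered
```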
diff --git a/swarms/structs/__init__.py b/swarms/structs/__init__.py
index 5e8db7de..5484dae1 100644
--- a/swarms/structs/__init__.py
+++ b/swarms/structs/__init__.py
@@ -2,7 +2,6 @@ from swarms.structs.agent import Agent
from swarms.structs.agent_builder import AgentsBuilder
from swarms.structs.agents_available import showcase_available_agents
from swarms.structs.async_workflow import AsyncWorkflow
-from experimental.auto_swarm import AutoSwarm, AutoSwarmRouter
from swarms.structs.base_structure import BaseStructure
from swarms.structs.base_swarm import BaseSwarm
from swarms.structs.base_workflow import BaseWorkflow
diff --git a/swarms/utils/litellm_wrapper.py b/swarms/utils/litellm_wrapper.py
index 2e899b80..7c1a5faa 100644
--- a/swarms/utils/litellm_wrapper.py
+++ b/swarms/utils/litellm_wrapper.py
@@ -27,6 +27,12 @@ except ImportError:
litellm.ssl_verify = False
+class LiteLLMException(Exception):
+ """
+ Exception for LiteLLM.
+ """
+
+
def get_audio_base64(audio_source: str) -> str:
"""
Convert audio from a given source to a base64 encoded string.
@@ -79,6 +85,7 @@ class LiteLLM:
audio: str = None,
retries: int = 3,
verbose: bool = False,
+ caching: bool = False,
*args,
**kwargs,
):
@@ -102,6 +109,7 @@ class LiteLLM:
self.tools_list_dictionary = tools_list_dictionary
self.tool_choice = tool_choice
self.parallel_tool_calls = parallel_tool_calls
+ self.caching = caching
self.modalities = []
self._cached_messages = {} # Cache for prepared messages
self.messages = [] # Initialize messages list
@@ -253,6 +261,7 @@ class LiteLLM:
"stream": self.stream,
"temperature": self.temperature,
"max_tokens": self.max_tokens,
+ "caching": self.caching,
**kwargs,
}
@@ -286,7 +295,7 @@ class LiteLLM:
response = completion(**completion_params)
return response.choices[0].message.content
- except Exception as error:
+ except LiteLLMException as error:
logger.error(f"Error in LiteLLM run: {str(error)}")
if "rate_limit" in str(error).lower():
logger.warning(
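The new `caching` flag is simply threaded from the constructor into the completion-parameter dict shown in the hunk above. A sketch of that flow; `build_completion_params` is a hypothetical stand-in for the wrapper's internal dict construction, not an actual method:

```python
def build_completion_params(model, temperature, max_tokens, caching=False, **kwargs):
    # Mirror the dict built in LiteLLM.run: the flag rides along with
    # the other settings passed to litellm's completion() call.
    return {
        "model": model,
        "temperature": temperature,
        "max_tokens": max_tokens,
        "caching": caching,
        **kwargs,
    }

params = build_completion_params("gpt-4o-mini", 0.1, 4000, caching=True)
print(params["caching"])  # → True
```

Note that `caching` defaults to `False`, so existing callers of `LiteLLM` are unaffected.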