Docker compose

park

first version to test

bugfixing

update install

fix dir

install

update

almost working

bugfix

move to trace level for now

log file name

logging

bugfix

pep8

next

wip

adding hunter

telehack

update

work in progress

now to test

We need to make sure the app is called correctly:

   main:create_app

in swarms/api/systemd/uvicorn.service

force pull

fixup

logging

wip

wip

wip

wip

wip

wip

wip

wip

wip

wip

wip

wip

adding scripts

bugfix

wip

wip

wip

wip

wip

no more sock

update

wip

wip

bugfix install useradd

switch branch

get rid of local

ignore error on user

more local

more local

bugfix APT

rerun

adding nginx

adding expect for unbuffer debugging

start nginx

bugfix print

hf_home and aws-cli

bugfix

adding values for undefined variables
+        key_name = "fixme"
+        user_id = "fixme"

merge fix

now for the free tier ubuntu

wip

example

cleanup

remove emacs

update

terminate spot instances as well

bugfix

https://meta-introspector.sentry.io/issues/14387352/?query=is%3Aunresolved%20issue.priority%3A%5Bhigh%2C%20medium%5D&referrer=issue-stream&stream_index=0

multiprocessing/process.py in _bootstrap at line 329

ModuleNotFoundError: No module named 'pathos'

removing callbacks

For flexibility and security we don't want hard-coded callbacks
in the code.

now let's add a sleep to wait for swarms
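Rather than a fixed sleep, one option is to poll until the API answers. This is a sketch under the assumption (from the test config later in this log) that the server listens on port 8000 and serves `/v1/docs`:

```python
import time
import urllib.request


def wait_for_api(url: str = "http://localhost:8000/v1/docs",
                 timeout: float = 120.0, interval: float = 2.0) -> bool:
    """Poll `url` until it answers with HTTP 200 or `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except OSError:
            pass  # server not accepting connections yet
        time.sleep(interval)
    return False
```

This returns as soon as the server is up instead of always paying the full sleep.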

removing prints

OSError: [Errno 5] Input/output error
  File "src/hunter/_tracer.pyx", line 45, in hunter._tracer.trace_func
  File "src/hunter/_predicates.pyx", line 587, in hunter._predicates.fast_call
  File "src/hunter/_predicates.pyx", line 360, in hunter._predicates.fast_When_call
  File "main.py", line 56, in process_hunter_event
    print("MOD", mod)

OSError: [Errno 5] Input/output error
(3 additional frame(s) were not displayed)

starting the documentation of terraform

check api status

now testing

test working

adding pip freeze

fix bug

rebase

just run

just run

bootfast

update just run

start of cloudwatch logs

now remove the branch

remove checkout

just run

dont echo

remove echo

adding in the new install for systemd and nginx from swarms.

We will move this out into SSM files we can apply on boot later.

update

update

wip

build dockerfile in github

building

Update docker-image.yml

wip readme

start of dockerization

create boot script to be called from ssm

adding header

adding docker boot

going to test

shell check

update

rundocker bugfix

expose on network

docker

adding torch and together

compose tests

compose test
pull/703/head
mike dupont 4 months ago
parent 135b02a812
commit 86fae9fa33

@@ -0,0 +1,34 @@
name: Docker Compose Test
on:
  workflow_dispatch:
  push:
  pull_request:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Login to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ vars.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - uses: actions/checkout@v4
      # - name: Build and push Docker images
      #   uses: docker/build-push-action@v6.10.0
      #   with:
      #     push: true
      #     tags: h4ckermike/swarms-api:experimental
      # - name: Build the Docker image
      #   run: docker build . --file Dockerfile --tag my-image-name:$(date +%s)
      - uses: adambirds/docker-compose-action@v1.5.0
        with:
          compose-file: "./docker-compose.yml"
          up-flags: "--build"
          down-flags: "--volumes"
          #test-container: "test-container"
          #test-command: "npm test"

@@ -3,9 +3,7 @@ name: Docker Image CI
on:
  workflow_dispatch:
  push:
    #branches: [ "master" ]
  pull_request:
    #branches: [ "master" ]
jobs:
@@ -14,7 +12,17 @@ jobs:
    runs-on: ubuntu-latest
    steps:
      - name: Login to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ vars.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - uses: actions/checkout@v4
      - name: Build the Docker image
        #run: docker build . --file Dockerfile --tag my-image-name:$(date +%s)
        run: docker compose build
      - name: Build and push Docker images
        uses: docker/build-push-action@v6.10.0
        with:
          push: true
          tags: h4ckermike/swarms-api:experimental
      # - name: Build the Docker image
      #   run: docker build . --file Dockerfile --tag my-image-name:$(date +%s)

@@ -1,5 +1,6 @@
# Use Python 3.11 slim-bullseye for smaller base image
FROM python:3.11-slim-bullseye AS builder
# review
# Use an official Python runtime as a parent image
FROM python:3.11-slim
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE=1 \
@@ -7,49 +8,184 @@ ENV PYTHONDONTWRITEBYTECODE=1 \
PIP_NO_CACHE_DIR=1 \
PIP_DISABLE_PIP_VERSION_CHECK=1
# Set the working directory
WORKDIR /build
WORKDIR /opt/swarms/
RUN apt update
RUN apt install -y git
RUN apt install --allow-change-held-packages -y python3-virtualenv
RUN apt install --allow-change-held-packages -y expect
RUN apt install --allow-change-held-packages -y jq netcat-traditional # missing packages
# Install only essential build dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential \
    gcc \
    g++ \
    gfortran \
    && rm -rf /var/lib/apt/lists/*
# Install Python dependencies
# Install swarms packages
RUN pip install --no-cache-dir swarm-models swarms
RUN mkdir -p /var/swarms/agent_workspace/
RUN adduser --disabled-password --gecos "" swarms --home "/home/swarms"
RUN chown -R swarms:swarms /var/swarms/agent_workspace
USER swarms
RUN python3 -m venv /var/swarms/agent_workspace/.venv/
# Production stage
FROM python:3.11-slim-bullseye
# RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install nvidia-cublas-cu12==12.4.5.8
# RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install nvidia-cuda-cupti-cu12==12.4.127
# RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install nvidia-cuda-nvrtc-cu12==12.4.127
# RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install nvidia-cuda-runtime-cu12==12.4.127
# RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install nvidia-cudnn-cu12==9.1.0.70
# RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install nvidia-cufft-cu12==11.2.1.3
# RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install nvidia-curand-cu12==10.3.5.147
# RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install nvidia-cusolver-cu12==11.6.1.9
# RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install nvidia-cusparse-cu12==12.3.1.170
# RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install nvidia-nccl-cu12==2.21.5
# RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install nvidia-nvjitlink-cu12==12.4.127
# RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install nvidia-nvtx-cu12==12.4.127
# Set secure environment variables
ENV PYTHONDONTWRITEBYTECODE=1 \
PYTHONUNBUFFERED=1 \
WORKSPACE_DIR="agent_workspace" \
PATH="/app:${PATH}" \
PYTHONPATH="/app:${PYTHONPATH}" \
USER=swarms
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install --upgrade pip
# 15l
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install aiofiles==24.1.0 \
aiohappyeyeballs==2.4.4 \
aiosignal==1.3.2 \
frozenlist==1.5.0 \
aiohttp==3.11.11 \
attrs==24.3.0 \
annotated-types==0.7.0 \
anyio==4.7.0 \
sniffio==1.3.1 \
typing_extensions==4.12.2 \
asyncio==3.4.3 \
multidict==6.1.0 \
propcache==0.2.1 \
yarl==1.18.3 \
idna==3.10
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install certifi==2024.12.14 \
chardet==5.2.0 \
charset-normalizer==3.4.0 \
click==8.1.8 \
dataclasses-json==0.6.7 \
marshmallow==3.23.2 \
typing-inspect==0.9.0 \
packaging==23.2 \
mypy-extensions==1.0.0 \
dill==0.3.9 \
distro==1.9.0 \
docstring_parser==0.16 \
filelock==3.16.1 \
fastapi==0.115.6 \
starlette==0.41.3 \
pydantic==2.10.4 \
pydantic_core==2.27.2
# 15s
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install GPUtil==1.4.0
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install Jinja2==3.1.5 MarkupSafe==3.0.2 PyYAML==6.0.2 Pygments==2.18.0 SQLAlchemy==2.0.36 fsspec==2024.12.0 greenlet==3.1.1
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install h11==0.14.0
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install httpcore==1.0.7
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install httpx==0.27.2
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install huggingface-hub==0.27.0
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install importlib_metadata==8.5.0
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install iniconfig==2.0.0
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install jiter==0.8.2
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install jsonpatch==1.33
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install jsonpointer==3.0.0
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install jsonschema-specifications==2024.10.1
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install jsonschema==4.23.0
# Huge
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install langchain-community==0.0.29 \
langchain-core==0.1.53 \
langsmith==0.1.147 \
numpy==1.26.4 \
orjson==3.10.12 \
requests-toolbelt==1.0.0 \
tenacity==8.5.0
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install loguru==0.7.3
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install lxml==5.3.0
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install markdown-it-py==3.0.0
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install mdurl==0.1.2
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install mpmath==1.3.0
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install msgpack==1.1.0
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install multiprocess==0.70.17
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install mypy-protobuf==3.6.0
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install networkx==3.4.2
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install ollama==0.4.4
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install openai==1.58.1
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install pathos==0.3.3
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install pathspec==0.12.1
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install platformdirs==4.3.6
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install pluggy==1.5.0
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install pox==0.3.5
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install ppft==1.7.6.9
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install protobuf==5.29.2
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install psutil==6.1.1
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install pytesseract==0.3.13
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install pytest==8.3.4
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install python-dateutil==2.9.0.post0
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install python-docx==1.1.2
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install python-dotenv==1.0.1
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install python-magic==0.4.27
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install pytz==2024.2
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install ratelimit==2.2.1
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install referencing==0.35.1
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install regex==2024.11.6
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install reportlab==4.2.5
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install requests==2.32.3
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install rich==13.9.4
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install rpds-py==0.22.3
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install safetensors==0.4.5
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install sentry-sdk==2.19.2
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install six==1.17.0
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install sympy==1.13.1 # 10 sec
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install termcolor==2.5.0
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install tiktoken==0.8.0
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install tokenizers==0.21.0
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install toml==0.10.2
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install tqdm==4.67.1
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install types-chardet==5.0.4.6
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install types-protobuf==5.29.1.20241207
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install types-pytz==2024.2.0.20241221
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install types-toml==0.10.8.20240310
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install tzdata==2024.2
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install urllib3==2.3.0
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install uvicorn==0.34.0
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install zipp==3.21.0
# Create non-root user
RUN useradd -m -s /bin/bash -U $USER && \
mkdir -p /app && \
chown -R $USER:$USER /app
# Set working directory
WORKDIR /app
## dev tools
#RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install ruff==0.4.4
#RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install black==24.10.0
# Copy only necessary files from builder
COPY --from=builder /usr/local/lib/python3.11/site-packages /usr/local/lib/python3.11/site-packages
COPY --from=builder /usr/local/bin /usr/local/bin
## SLOW
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install doc-master==0.0.2
# needs qt
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install clusterops==0.1.6
# Copy application with correct permissions
COPY --chown=$USER:$USER . .
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install pandas==2.2.3 # 13s
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install litellm==1.55.9 # 11s
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install transformers==4.47.1 # 12s
#RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install triton==3.1.0 # 18s
# Big ones
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install pillow==11.0.0
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install pypdf==5.1.0
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install ray==2.40.0
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install torch==2.5.1
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install together==1.3.10
# Switch to non-root user
USER $USER
###
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install git+https://github.com/jmikedupont2/swarm-models@main#egg=swarm-models
# swarm-models==0.2.7 # BIG 55 sec
#RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install -r /opt/swarms/requirements.txt
RUN git config --global --add safe.directory "/opt/swarms"
#RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install uvicorn fastapi
# Health check
HEALTHCHECK --interval=30s --timeout=30s --start-period=5s --retries=3 \
CMD python -c "import swarms; print('Health check passed')" || exit 1
COPY swarms /opt/swarms/swarms
COPY pyproject.toml /opt/swarms/
COPY README.md /opt/swarms/
RUN /var/swarms/agent_workspace/.venv/bin/python -m pip install -e /opt/swarms/
#COPY requirements.txt .
# things that change
COPY api/main.py /opt/swarms/api/main.py
WORKDIR /opt/swarms/api/
CMD ["/usr/bin/unbuffer", "/var/swarms/agent_workspace/.venv/bin/uvicorn", "--proxy-headers", "--forwarded-allow-ips='*'", "--workers=4", "--port=8000", "--reload-delay=30", "main:create_app"]

@@ -4,7 +4,7 @@
</a>
</div>
<p align="center">
<em>The Enterprise-Grade Production-Ready Multi-Agent Orchestration Framework </em>
<em>The Enterprise-Grade Production-Ready Multi-Agent Orchestration Framework</em>
</p>
<p align="center">
@@ -36,6 +36,14 @@
[![Share on Reddit](https://img.shields.io/badge/-Share%20on%20Reddit-orange)](https://www.reddit.com/submit?url=https%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms&title=Swarms%20-%20the%20future%20of%20AI) [![Share on Hacker News](https://img.shields.io/badge/-Share%20on%20Hacker%20News-orange)](https://news.ycombinator.com/submitlink?u=https%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms&t=Swarms%20-%20the%20future%20of%20AI) [![Share on Pinterest](https://img.shields.io/badge/-Share%20on%20Pinterest-red)](https://pinterest.com/pin/create/button/?url=https%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms&media=https%3A%2F%2Fexample.com%2Fimage.jpg&description=Swarms%20-%20the%20future%20of%20AI) [![Share on WhatsApp](https://img.shields.io/badge/-Share%20on%20WhatsApp-green)](https://api.whatsapp.com/send?text=Check%20out%20Swarms%20-%20the%20future%20of%20AI%20%23swarms%20%23AI%0A%0Ahttps%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms)
----
## Plan
1. Removed large pip modules; forked swarm-models to remove torch, which is very large
2. Removed pillow and PDF processing for now
----
## ✨ Features

@@ -0,0 +1,2 @@
rerun:
bash ./rerun.sh

@@ -0,0 +1,95 @@
`sudo bash ./install.sh`
To redo all the steps, remove the lock files:
`rm ${ROOT}/opt/swarms/install/*`
or, on my system:
```
export ROOT=/mnt/data1/swarms
sudo rm ${ROOT}/opt/swarms/install/*
```
rerun
```
export ROOT=/mnt/data1/swarms;
sudo rm ${ROOT}/opt/swarms/install/*;
sudo bash ./install.sh
```
* setup
To install on linux:
https://docs.aws.amazon.com/systems-manager/
```
curl "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/ubuntu_64bit/session-manager-plugin.deb" -o "session-manager-plugin.deb"
sudo dpkg -i ./session-manager-plugin.deb
```
* run
To redo the installation steps for the Swarms tool on your system, follow these commands sequentially:
1. Set the ROOT variable:
```bash
export ROOT=/mnt/data1/swarms
```
2. Remove the lock files:
```bash
sudo rm ${ROOT}/opt/swarms/install/*
```
3. Run the installation script again:
```bash
sudo bash ./install.sh
```
For setting up the Session Manager plugin on Linux, you can follow these commands:
1. Download the Session Manager plugin:
```bash
curl "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/ubuntu_64bit/session-manager-plugin.deb" -o "session-manager-plugin.deb"
```
2. Install the plugin:
```bash
sudo dpkg -i ./session-manager-plugin.deb
```
After that, you can run your desired commands or workflows.
** get the instance id
`aws ec2 describe-instances`
** start a session
`aws ssm start-session --target i-XXXX`
** on the machine:
```
sudo su -
tail /var/log/cloud-init-output.log
```
Convert this into an automation of your choice that runs all of these steps
on all the instances.
To get the instance ID and start a session using AWS CLI, follow these steps:
1. **Get the Instance ID:**
Run the following command to list your instances and their details:
```bash
aws ec2 describe-instances
```
2. **Start a Session:**
Replace `i-XXXX` with your actual instance ID from the previous step:
```bash
aws ssm start-session --target i-XXXX
```
3. **On the Machine:**
After starting the session, you can execute the following commands:
```bash
sudo su -
tail /var/log/cloud-init-output.log
```
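The manual walkthrough above can be scripted. A minimal sketch with boto3, where `ec2` and `ssm` are assumed to be `boto3.client('ec2')` and `boto3.client('ssm')` and the helper names are our own:

```python
def running_instance_ids(reservations):
    """Flatten ec2.describe_instances() output into a list of running instance ids."""
    return [
        inst["InstanceId"]
        for res in reservations
        for inst in res["Instances"]
        if inst["State"]["Name"] == "running"
    ]


def tail_cloud_init_on_all(ec2, ssm):
    """Run the cloud-init log tail from the steps above on every running instance."""
    ids = running_instance_ids(ec2.describe_instances()["Reservations"])
    resp = ssm.send_command(
        InstanceIds=ids,
        DocumentName="AWS-RunShellScript",
        Parameters={"commands": ['sudo su - -c "tail /var/log/cloud-init-output.log"']},
    )
    return resp["Command"]["CommandId"]
```

The returned command id can then be polled with `ssm.get_command_invocation`, as the status-checking script later in this log does.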

@@ -1,149 +1,52 @@
import os
import json
import logging
from typing import Dict, Optional, Any
from dataclasses import dataclass
import requests
import time
# Set up logging
logging.basicConfig(
level=logging.INFO,
format="%(asctime)s - %(levelname)s - %(message)s",
handlers=[
logging.FileHandler("api_tests.log"),
logging.StreamHandler(),
],
)
logger = logging.getLogger(__name__)
import requests
from loguru import logger
import time
from typing import Dict, Optional, Tuple
from uuid import UUID
# Set up logging
logging.basicConfig(
level=logging.INFO,
format="%(asctime)s - %(levelname)s - %(message)s",
handlers=[
logging.FileHandler("api_tests.log"),
logging.StreamHandler(),
],
)
logger = logging.getLogger(__name__)
# Configuration
@dataclass
class TestConfig:
"""Test configuration settings"""
base_url: str
timeout: int = 30
verify_ssl: bool = True
debug: bool = True
# Load config from environment or use defaults
config = TestConfig(
base_url=os.getenv("API_BASE_URL", "http://0.0.0.0:8000/v1")
)
class APIClient:
"""API Client for testing"""
def __init__(self, config: TestConfig):
self.config = config
self.session = requests.Session()
def _url(self, path: str) -> str:
"""Construct full URL"""
return f"{self.config.base_url}/{path.lstrip('/')}"
def _log_request_details(
self, method: str, url: str, headers: Dict, data: Any
):
"""Log request details for debugging"""
logger.info("\nRequest Details:")
logger.info(f"Method: {method}")
logger.info(f"URL: {url}")
logger.info(f"Headers: {json.dumps(headers, indent=2)}")
logger.info(
f"Data: {json.dumps(data, indent=2) if data else None}"
)
def _log_response_details(self, response: requests.Response):
"""Log response details for debugging"""
logger.info("\nResponse Details:")
logger.info(f"Status Code: {response.status_code}")
logger.info(
f"Headers: {json.dumps(dict(response.headers), indent=2)}"
)
try:
logger.info(
f"Body: {json.dumps(response.json(), indent=2)}"
)
except Exception:
logger.info(f"Body: {response.text}")
def _request(
self,
method: str,
path: str,
headers: Optional[Dict] = None,
**kwargs: Any,
) -> requests.Response:
"""Make HTTP request with config defaults"""
url = self._url(path)
headers = headers or {}
if self.config.debug:
self._log_request_details(
method, url, headers, kwargs.get("json")
)
try:
response = self.session.request(
method=method,
url=url,
headers=headers,
timeout=self.config.timeout,
verify=self.config.verify_ssl,
**kwargs,
)
if self.config.debug:
self._log_response_details(response)
if response.status_code >= 400:
logger.error(
f"Request failed with status {response.status_code}"
)
logger.error(f"Response: {response.text}")
# # Configure loguru
# logger.add(
# "api_tests_{time}.log",
# rotation="100 MB",
# level="DEBUG",
# format="{time} {level} {message}",
# )
response.raise_for_status()
return response
BASE_URL = os.getenv("SWARMS_URL","http://localhost:8000/v1")
except requests.exceptions.RequestException as e:
logger.error(f"Request failed: {str(e)}")
if hasattr(e, "response") and e.response is not None:
logger.error(f"Error response: {e.response.text}")
raise
def check_api_server() -> bool:
"""Check if the API server is running and accessible."""
try:
response = requests.get(f"{BASE_URL}/docs")
return response.status_code == 200
except requests.exceptions.ConnectionError:
logger.error(f"API server is not running at {BASE_URL}")
logger.error("Please start the API server first with:")
logger.error(" python main.py")
return False
except Exception as e:
logger.error(f"Error checking API server: {str(e)}")
return False
class TestRunner:
"""Test runner with logging and reporting"""
class TestSession:
"""Manages test session state and authentication."""
def __init__(self):
self.client = APIClient(config)
self.results = {"passed": 0, "failed": 0, "total_time": 0}
self.api_key = None
self.user_id = None
self.agent_id = None
self.user_id: Optional[UUID] = None
self.api_key: Optional[str] = None
self.test_agents: list[UUID] = []
@property
def headers(self) -> Dict[str, str]:
"""Get headers with authentication."""
print("HEADERS",self.api_key)
return {"api-key": self.api_key} if self.api_key else {}
def run_test(self, test_name: str, test_func: callable):
"""Run a single test with timing and logging"""
logger.info(f"\nRunning test: {test_name}")
start_time = time.time()
def create_test_user(session: TestSession) -> Tuple[bool, str]:
"""Create a test user and store credentials in session."""
@@ -326,166 +229,104 @@ def cleanup_test_resources(session: TestSession):
# Delete test agents
for agent_id in session.test_agents:
>>>>>>> 68728698 ([AGENT][LiteLLM FIX] [API FIX])
try:
test_func()
self.results["passed"] += 1
logger.info(f"{test_name} - PASSED")
except Exception as e:
self.results["failed"] += 1
logger.error(f"{test_name} - FAILED: {str(e)}")
logger.exception(e)
end_time = time.time()
duration = end_time - start_time
self.results["total_time"] += duration
logger.info(f"Test duration: {duration:.2f}s")
def test_user_creation(self):
"""Test user creation"""
response = self.client._request(
"POST", "/users", json={"username": "test_user"}
)
data = response.json()
assert "user_id" in data, "No user_id in response"
assert "api_key" in data, "No api_key in response"
self.api_key = data["api_key"]
self.user_id = data["user_id"]
logger.info(f"Created user with ID: {self.user_id}")
def test_create_api_key(self):
"""Test API key creation"""
headers = {"api-key": self.api_key}
response = self.client._request(
"POST",
f"/users/{self.user_id}/api-keys",
headers=headers,
json={"name": "test_key"},
)
data = response.json()
assert "key" in data, "No key in response"
logger.info("Successfully created new API key")
def test_create_agent(self):
"""Test agent creation"""
headers = {"api-key": self.api_key}
agent_config = {
"agent_name": "test_agent",
"model_name": "gpt-4",
"system_prompt": "You are a test agent",
"description": "Test agent description",
"temperature": 0.7,
"max_loops": 1,
}
response = self.client._request(
"POST", "/agent", headers=headers, json=agent_config
)
data = response.json()
assert "agent_id" in data, "No agent_id in response"
self.agent_id = data["agent_id"]
logger.info(f"Created agent with ID: {self.agent_id}")
# Wait a bit for agent to be ready
time.sleep(2)
def test_list_agents(self):
"""Test agent listing"""
headers = {"api-key": self.api_key}
response = self.client._request(
"GET", "/agents", headers=headers
)
agents = response.json()
assert isinstance(agents, list), "Response is not a list"
assert len(agents) > 0, "No agents returned"
logger.info(f"Successfully retrieved {len(agents)} agents")
def test_agent_completion(self):
"""Test agent completion"""
if not self.agent_id:
logger.error("No agent_id available for completion test")
raise ValueError("Agent ID not set")
headers = {"api-key": self.api_key}
completion_request = {
"prompt": "Write 'Hello World!'",
"agent_id": str(
self.agent_id
), # Ensure UUID is converted to string
"max_tokens": 100,
"stream": False,
"temperature_override": 0.7,
}
logger.info(
f"Sending completion request for agent {self.agent_id}"
)
response = self.client._request(
"POST",
"/agent/completions",
headers=headers,
json=completion_request,
)
data = response.json()
assert "response" in data, "No response in completion"
logger.info(f"Completion response: {data.get('response')}")
def run_all_tests(self):
"""Run all tests and generate report"""
logger.info("\n" + "=" * 50)
logger.info("Starting API test suite...")
logger.info(f"Base URL: {config.base_url}")
logger.info("=" * 50 + "\n")
# Define test sequence
tests = [
("User Creation", self.test_user_creation),
("API Key Creation", self.test_create_api_key),
("Agent Creation", self.test_create_agent),
("List Agents", self.test_list_agents),
("Agent Completion", self.test_agent_completion),
]
# Run tests
for test_name, test_func in tests:
self.run_test(test_name, test_func)
# Generate report
self.print_report()
def print_report(self):
"""Print test results report"""
total_tests = self.results["passed"] + self.results["failed"]
success_rate = (
(self.results["passed"] / total_tests * 100)
if total_tests > 0
else 0
)
response = requests.delete(
f"{BASE_URL}/agent/{agent_id}",
headers=session.headers,
)
if response.status_code == 200:
logger.debug(f"Deleted agent {agent_id}")
else:
logger.warning(
f"Failed to delete agent {agent_id}: {response.text}"
)
except Exception:
logger.exception(f"Exception deleting agent {agent_id}")
# Revoke API keys
if session.user_id:
try:
response = requests.get(
f"{BASE_URL}/users/{session.user_id}/api-keys",
headers=session.headers,
)
if response.status_code == 200:
for key in response.json():
try:
revoke_response = requests.delete(
f"{BASE_URL}/users/{session.user_id}/api-keys/{key['key']}",
headers=session.headers,
)
if revoke_response.status_code == 200:
logger.debug(
f"Revoked API key {key['name']}"
)
else:
logger.warning(
f"Failed to revoke API key {key['name']}"
)
except Exception:
logger.exception(
f"Exception revoking API key {key['name']}"
)
except Exception:
logger.exception("Exception getting API keys for cleanup")
report = f"""
\n{'='*50}
API TEST RESULTS
{'='*50}
Total Tests: {total_tests}
Passed: {self.results['passed']}
Failed: {self.results['failed']}
Success Rate: {success_rate:.2f}%
Total Time: {self.results['total_time']:.2f}s
{'='*50}
"""
logger.info(report)
def run_test_workflow():
"""Run complete test workflow."""
logger.info("Starting API tests")
# Check if API server is running first
if not check_api_server():
return False
session = TestSession()
if __name__ == "__main__":
<<<<<<< HEAD
try:
runner = TestRunner()
runner.run_all_tests()
except KeyboardInterrupt:
logger.info("\nTest suite interrupted by user")
except Exception as e:
logger.error(f"Test suite failed: {str(e)}")
logger.exception(e)
=======
# Create user
user_success, message = create_test_user(session)
if not user_success:
logger.error(f"User creation failed: {message}")
return False
# Create additional API key
key_success, key = create_additional_api_key(session)
if not key_success:
logger.error(f"API key creation failed: {key}")
return False
# Create agent
agent_success, agent_id = test_create_agent(session)
if not agent_success or not agent_id:
logger.error("Agent creation failed")
return False
# Test user agent listing
if not test_list_user_agents(session):
logger.error("Agent listing failed")
return False
# Test agent operations
if not test_agent_operations(session, agent_id):
logger.error("Agent operations failed")
return False
# Test completion
if not test_completion(session, agent_id):
logger.error("Completion test failed")
return False
logger.success("All tests completed successfully")
return True
except Exception:
logger.exception("Exception during test workflow")
return False
finally:
cleanup_test_resources(session)
if __name__ == "__main__":
success = run_test_workflow()
print(success)
>>>>>>> 68728698 ([AGENT][LiteLLM FIX] [API FIX])

@@ -0,0 +1,32 @@
#!/bin/bash
# to be run as swarms user
set -e
set -x
export ROOT=""
export HOME="${ROOT}/home/swarms"
unset CONDA_EXE
unset CONDA_PYTHON_EXE
export PATH="${ROOT}/var/swarms/agent_workspace/.venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
if [ ! -d "${ROOT}/var/swarms/agent_workspace/.venv/" ];
then
virtualenv "${ROOT}/var/swarms/agent_workspace/.venv/"
fi
ls "${ROOT}/var/swarms/agent_workspace/"
. "${ROOT}/var/swarms/agent_workspace/.venv/bin/activate"
pip install fastapi uvicorn termcolor
# the app tries to install these itself on boot
pip install sniffio pydantic-core httpcore exceptiongroup annotated-types pydantic anyio httpx ollama
pip install -e "${ROOT}/opt/swarms/"
cd "${ROOT}/var/swarms/"
pip install -e "${ROOT}/opt/swarms-memory"
pip install "fastapi[standard]"
pip install "loguru"
pip install "hunter" # for tracing
pip install pydantic==2.8.2
pip install pathos || echo oops
pip freeze
# launch as systemd
# python /opt/swarms/api/main.py

@@ -0,0 +1,17 @@
#!/bin/bash
# to be run as swarms user
set -e
set -x
export ROOT=""
export HOME="${ROOT}/home/swarms"
unset CONDA_EXE
unset CONDA_PYTHON_EXE
export PATH="${ROOT}/var/swarms/agent_workspace/.venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
ls "${ROOT}/var/swarms/agent_workspace/"
. "${ROOT}/var/swarms/agent_workspace/.venv/bin/activate"
pip install -e "${ROOT}/opt/swarms/"
cd "${ROOT}/var/swarms/"
pip install -e "${ROOT}/opt/swarms-memory"

@ -0,0 +1,52 @@
import os
import json
import boto3

# Create .cache directory if it doesn't exist
os.makedirs('.cache', exist_ok=True)

def cache(name, value):
    # write the value once; serialize non-strings as JSON (boto3 responses contain datetimes)
    cache_file = f'.cache/{name}'
    if not os.path.isfile(cache_file):
        with open(cache_file, 'w') as f:
            f.write(value if isinstance(value, str) else json.dumps(value, default=str))

# Initialize Boto3 SSM client
ssm = boto3.client('ssm')

# List commands from AWS SSM
response = ssm.list_commands()
cache("aws_ssm_list_commands", response)

# Retrieve commands
print(response)
commands = response["Commands"]
run_ids = [cmd['CommandId'] for cmd in commands]
print(f"RUNIDS: {run_ids}")

# Check the status of each command
for command in commands:
    command_id = command['CommandId']
    status = command['Status']
    # e.g. command = {'CommandId': '820dcf47-e8d7-4c23-8e8a-bc64de2883ff',
    #                 'DocumentName': 'AWS-RunShellScript',
    #                 'Parameters': {'commands': ['sudo su - -c "tail /var/log/cloud-init-output.log"']},
    #                 'Targets': [{'Key': 'instanceids', 'Values': ['i-073378237c5a9dda1']}],
    #                 'Status': 'Success', ...}
    if status == "Success":
        print(f"Check logs of {command_id}")
        # Fetch the logs for this CommandId via get_command_invocation
        instance_id = command['Targets'][0]['Values'][0]  # the instance the command targeted
        log_response = ssm.get_command_invocation(
            CommandId=command_id,
            InstanceId=instance_id
        )
        print(log_response['StandardOutputContent'])  # Output logs
        print(log_response['StandardErrorContent'])   # Error logs (if any)
        print(f"aws ssm start-session --target {instance_id}")
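The script above caches the raw `list_commands` response to a `.cache/` directory so reruns don't re-fetch it. A standalone sketch of that write-once cache helper, runnable without AWS (the `cache_dir` parameter is my addition for testability, not from the repo):

```python
import json
import os
import tempfile

def cache(name, value, cache_dir=".cache"):
    """Write value to cache_dir/name once; later calls with the same name are no-ops."""
    os.makedirs(cache_dir, exist_ok=True)
    cache_file = os.path.join(cache_dir, name)
    if not os.path.isfile(cache_file):
        with open(cache_file, "w") as f:
            # boto3 responses contain datetimes, so fall back to str() for them
            f.write(value if isinstance(value, str) else json.dumps(value, default=str))
    return cache_file

# demo against a temp dir so nothing touches a real .cache
tmp = tempfile.mkdtemp()
path = cache("demo", {"Commands": []}, cache_dir=tmp)
cache("demo", {"Commands": ["overwritten?"]}, cache_dir=tmp)  # ignored: file already exists
print(open(path).read())  # → {"Commands": []}
```

The write-once behavior means a stale cache survives restarts; delete the file (as `rerun.sh` does with the `install/*.txt` markers) to force a refresh.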

@ -0,0 +1,27 @@
#!/bin/bash
# run swarms via docker via systemd
# this script is called from ssm
# pull the new version via systemd
# now allow for reconfigure of the systemd
export WORKSOURCE="/opt/swarms/api"
mkdir -p "/var/run/swarms/secrets/"
mkdir -p "/home/swarms/.cache/huggingface/hub"
if ! grep -q "^OPENAI_KEY" "/var/run/swarms/secrets/env"; then
OPENAI_KEY=$(aws ssm get-parameter --name "swarms_openai_key" | jq .Parameter.Value -r )
export OPENAI_KEY
echo "OPENAI_KEY=${OPENAI_KEY}" > "/var/run/swarms/secrets/env"
fi
sed -e "s!ROOT!!g" > /etc/nginx/sites-enabled/default < "${WORKSOURCE}/nginx/site.conf"
sed -e "s!ROOT!!g" > /etc/systemd/system/swarms-docker.service < "${WORKSOURCE}/systemd/swarms-docker.service"
grep . -h -n /etc/systemd/system/swarms-docker.service
systemctl daemon-reload
# start and stop the service pulls the docker image
#systemctl stop swarms-docker || journalctl -xeu swarms-docker
#systemctl start swarms-docker || journalctl -xeu swarms-docker
systemctl restart swarms-docker || journalctl -xeu swarms-docker.service
systemctl enable swarms-docker || journalctl -xeu swarms-docker

@ -0,0 +1,54 @@
import time
import boto3

def parse_command_id(send_command_output):
    return send_command_output['Command']['CommandId']

def main():
    ec2_client = boto3.client('ec2')
    ssm_client = boto3.client('ssm')
    # Get the list of instance IDs and their states
    instances_response = ec2_client.describe_instances()
    instances = [
        (instance['InstanceId'], instance['State']['Name'])
        for reservation in instances_response['Reservations']
        for instance in reservation['Instances']
    ]
    for instance_id, state in instances:
        if state == 'running':
            print(f"Starting command for instance: {instance_id}")
            # Send command to the instance
            send_command_output = ssm_client.send_command(
                DocumentName="AWS-RunShellScript",
                Targets=[{"Key": "instanceids", "Values": [instance_id]}],
                Parameters={'commands': ['sudo su - -c "tail /var/log/cloud-init-output.log"']}
            )
            # Get the command ID
            command_id = parse_command_id(send_command_output)
            # Poll the command status four times, 20 seconds apart
            for _ in range(4):
                time.sleep(20)
                command_status = ssm_client.list_command_invocations(CommandId=command_id, Details=True)
                print(command_status)
                for invocation in command_status['CommandInvocations']:
                    if invocation['Status'] == 'Success':
                        for plugin in invocation['CommandPlugins']:
                            if plugin['Status'] == 'Success':
                                print(f"Output from instance {instance_id}:\n{plugin['Output']}")
                            else:
                                print(f"Error in plugin execution for instance {instance_id}: {plugin['StatusDetails']}")
                    else:
                        print(f"Command for instance {instance_id} is still in progress... Status: {invocation['Status']}")

if __name__ == "__main__":
    main()
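The fixed sleep-20-then-check loop above can be factored into a small reusable poller. A hedged sketch, runnable without AWS (the `poll_until` name and signature are mine, not from the repo; the injectable `sleep` exists only so the demo runs instantly):

```python
import time

def poll_until(check, timeout=80.0, interval=20.0, sleep=time.sleep):
    """Call check() every `interval` seconds until it returns a truthy value
    or the deadline passes. Returns the truthy result, or None on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = check()
        if result:
            return result
        sleep(interval)
    return None

# demo: a fake SSM status that only succeeds on the third poll
calls = {"n": 0}
def fake_status():
    calls["n"] += 1
    return "Success" if calls["n"] >= 3 else None

print(poll_until(fake_status, timeout=10, interval=1, sleep=lambda s: None))  # → Success
```

In the real script, `check` would wrap `list_command_invocations` and return the invocation once its `Status` leaves `InProgress`.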

@ -0,0 +1,166 @@
#!/bin/bash
# review and improve
. ./.env # for secrets
set -e # stop on any error
#set -x dont echo secrets
export BRANCH="feature/ec2"
#export ROOT="/mnt/data1/swarms"
export ROOT="" # empty
export WORKSOURCE="${ROOT}/opt/swarms/api"
if [ ! -d "${ROOT}/opt/swarms/install/" ]; then
mkdir -p "${ROOT}/opt/swarms/install"
fi
if [ ! -f "${ROOT}/opt/swarms/install/apt.txt" ]; then
apt update
apt install --allow-change-held-packages -y git python3-virtualenv nginx
apt install --allow-change-held-packages -y expect
apt install --allow-change-held-packages -y jq netcat-traditional # missing packages
snap install aws-cli --classic
echo 1 >"${ROOT}/opt/swarms/install/apt.txt"
fi
if [ ! -f "${ROOT}/opt/swarms/install/setup.txt" ]; then
#rm -rf ./src/swarms # oops
#adduser --disabled-password --comment "" swarms --home "${ROOT}/home/swarms" || echo ignore
adduser --disabled-password --gecos "" swarms --home "${ROOT}/home/swarms" || echo ignore
git config --global --add safe.directory "${ROOT}/opt/swarms"
git config --global --add safe.directory "${ROOT}/opt/swarms-memory"
# we should have done this
if [ ! -d "${ROOT}/opt/swarms/" ];
then
git clone https://github.com/jmikedupont2/swarms "${ROOT}/opt/swarms/"
fi
cd "${ROOT}/opt/swarms/" || exit 1 # "we need swarms"
# git remote add local /time/2024/05/swarms/ || git remote set-url local /time/2024/05/swarms/
# git fetch local
# git stash
git checkout --force $BRANCH
git pull
git log -2 --patch | head -1000
if [ ! -d "${ROOT}/opt/swarms-memory/" ];
then
git clone https://github.com/The-Swarm-Corporation/swarms-memory "${ROOT}/opt/swarms-memory"
fi
# where the swarms will run
mkdir -p "${ROOT}/var/swarms/agent_workspace/"
mkdir -p "${ROOT}/home/swarms"
chown -R swarms:swarms "${ROOT}/var/swarms/agent_workspace" "${ROOT}/home/swarms"
# now for my local setup I also need to do this, or we have to change the systemctl HOME var
#mkdir -p "/home/swarms"
#chown -R swarms:swarms "/home/swarms"
# copy the run file from git
cp "${WORKSOURCE}/boot.sh" "${ROOT}/var/swarms/agent_workspace/boot.sh"
mkdir -p "${ROOT}/var/swarms/logs"
chmod +x "${ROOT}/var/swarms/agent_workspace/boot.sh"
chown -R swarms:swarms "${ROOT}/var/swarms/" "${ROOT}/home/swarms" "${ROOT}/opt/swarms"
echo 1 >"${ROOT}/opt/swarms/install/setup.txt"
fi
if [ ! -f "${ROOT}/opt/swarms/install/boot.txt" ]; then
# user install but do not start
su -c "bash -e -x ${ROOT}/var/swarms/agent_workspace/boot.sh" swarms
echo 1 >"${ROOT}/opt/swarms/install/boot.txt"
fi
## pull
if [ ! -f "${ROOT}/opt/swarms/install/pull.txt" ]; then
cd "${ROOT}/opt/swarms/" || exit 1 # "we need swarms"
# git fetch local
# git stash
git checkout --force $BRANCH
git pull # $BRANCH
echo 1 >"${ROOT}/opt/swarms/install/pull.txt"
fi
if [ ! -f "${ROOT}/opt/swarms/install/config.txt" ]; then
mkdir -p "${ROOT}/var/run/swarms/secrets/"
mkdir -p "${ROOT}/home/swarms/.cache/huggingface/hub"
# aws ssm get-parameter --name "swarms_openai_key" > /root/openaikey.txt
export OPENAI_KEY=`aws ssm get-parameter --name "swarms_openai_key" | jq .Parameter.Value -r `
echo "OPENAI_KEY=${OPENAI_KEY}" > "${ROOT}/var/run/swarms/secrets/env"
## append new homedir
echo "HF_HOME=${ROOT}/home/swarms/.cache/huggingface/hub" >> "${ROOT}/var/run/swarms/secrets/env"
echo "HOME=${ROOT}/home/swarms" >> "${ROOT}/var/run/swarms/secrets/env"
# attempt to move the workspace
echo 'WORKSPACE_DIR=${STATE_DIRECTORY}' >> "${ROOT}/var/run/swarms/secrets/env"
#EnvironmentFile=ROOT/var/run/swarms/secrets/env
#ExecStart=ROOT/var/run/uvicorn/env/bin/uvicorn \
# --uds ROOT/run/uvicorn/uvicorn-swarms-api.sock \
echo 1 >"${ROOT}/opt/swarms/install/config.txt"
fi
if [ ! -f "${ROOT}/opt/swarms/install/nginx.txt" ]; then
mkdir -p ${ROOT}/var/log/nginx/swarms/
fi
# create sock
mkdir -p ${ROOT}/run/uvicorn/
chown -R swarms:swarms ${ROOT}/run/uvicorn
# reconfigure
# now we setup the service and replace root in the files
#echo cat "${WORKSOURCE}/nginx/site.conf" \| sed -e "s!ROOT!${ROOT}!g"
sed -e "s!ROOT!${ROOT}!g" > /etc/nginx/sites-enabled/default < "${WORKSOURCE}/nginx/site.conf"
#cat /etc/nginx/sites-enabled/default
# ROOT/var/run/swarms/uvicorn-swarms-api.sock;
# access_log ROOT/var/log/nginx/swarms/access.log;
# error_log ROOT/var/log/nginx/swarms/error.log;
#echo cat "${WORKSOURCE}/systemd/uvicorn.service" \| sed -e "s!ROOT!/${ROOT}/!g"
#cat "${WORKSOURCE}/systemd/uvicorn.service"
sed -e "s!ROOT!${ROOT}!g" > /etc/systemd/system/swarms-uvicorn.service < "${WORKSOURCE}/systemd/uvicorn.service"
grep . -h -n /etc/systemd/system/swarms-uvicorn.service
# if [ -f ${ROOT}/etc/systemd/system/swarms-uvicorn.service ];
# then
# cp ${ROOT}/etc/systemd/system/swarms-uvicorn.service /etc/systemd/system/swarms-uvicorn.service
# else
# # allow for editing as non root
# mkdir -p ${ROOT}/etc/systemd/system/
# cp /etc/systemd/system/swarms-uvicorn.service ${ROOT}/etc/systemd/system/swarms-uvicorn.service
# fi
#
#chown -R mdupont:mdupont ${ROOT}/etc/systemd/system/
#/run/uvicorn/
# triage
chown -R swarms:swarms ${ROOT}/var/run/swarms/
# Dec 12 10:55:50 mdupont-G470 unbuffer[3921723]: OSError: [Errno 30] Read-only file system:
#cat /etc/systemd/system/swarms-uvicorn.service
# now fix the perms
mkdir -p ${ROOT}/opt/swarms/api/agent_workspace/try_except_wrapper/
chown -R swarms:swarms ${ROOT}/opt/swarms/api/
# always reload
systemctl daemon-reload
# systemctl start swarms-uvicorn || systemctl status swarms-uvicorn.service && journalctl -xeu swarms-uvicorn.service
systemctl start swarms-uvicorn || journalctl -xeu swarms-uvicorn.service
# systemctl status swarms-uvicorn.service
# journalctl -xeu swarms-uvicorn.service
systemctl enable swarms-uvicorn || journalctl -xeu swarms-uvicorn.service
systemctl enable nginx
systemctl start nginx
journalctl -xeu swarms-uvicorn.service | tail -200 || echo oops
systemctl status swarms-uvicorn.service || echo oops2
# now after swarms is up, we restart nginx
HOST="localhost"
PORT=5474
while ! nc -z $HOST $PORT; do
sleep 1
echo -n "."
done
echo "Port $PORT is now open!"
systemctl restart nginx
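The `while ! nc -z $HOST $PORT` loop above blocks until uvicorn's TCP port accepts connections before nginx is restarted. The same wait, sketched in Python with a deadline added so it cannot spin forever (the function name is mine):

```python
import socket
import time

def wait_for_port(host, port, timeout=30.0):
    """Equivalent of `while ! nc -z host port; do sleep 1; done`, but bounded:
    return True once the TCP port accepts connections, False if the deadline passes."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1):
                return True
        except OSError:
            time.sleep(1)
    return False

# a port nothing listens on times out and returns False
print(wait_for_port("127.0.0.1", 1, timeout=2))  # → False on a typical host
```

The unbounded shell loop hangs the whole install if the service never comes up; a timeout lets the script fail loudly instead.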

@ -0,0 +1,80 @@
#!/bin/bash
# review and improve
. ./.env # for secrets
set -e # stop on any error
#set -x dont echo
#export BRANCH="feature/ec2"
#export ROOT="/mnt/data1/swarms"
export ROOT="" # empty
export WORKSOURCE="${ROOT}/opt/swarms/api"
adduser --disabled-password --gecos "" swarms --home "${ROOT}/home/swarms" || echo ignore
git config --global --add safe.directory "${ROOT}/opt/swarms"
git config --global --add safe.directory "${ROOT}/opt/swarms-memory"
cd "${ROOT}/opt/swarms/" || exit 1 # "we need swarms"
#git checkout --force $BRANCH we did this before
#git pull
git log -2 --patch | head -1000
mkdir -p "${ROOT}/var/swarms/agent_workspace/"
mkdir -p "${ROOT}/home/swarms"
cd "${ROOT}/opt/swarms/" || exit 1 # "we need swarms"
#git checkout --force $BRANCH
#git pull
cp "${WORKSOURCE}/boot_fast.sh" "${ROOT}/var/swarms/agent_workspace/boot_fast.sh"
mkdir -p "${ROOT}/var/swarms/logs"
chmod +x "${ROOT}/var/swarms/agent_workspace/boot_fast.sh"
chown -R swarms:swarms "${ROOT}/var/swarms/" "${ROOT}/home/swarms" "${ROOT}/opt/swarms"
# user install but do not start
su -c "bash -e -x ${ROOT}/var/swarms/agent_workspace/boot_fast.sh" swarms
cd "${ROOT}/opt/swarms/" || exit 1 # "we need swarms"
#git checkout --force $BRANCH
#git pull # $BRANCH
mkdir -p "${ROOT}/var/run/swarms/secrets/"
mkdir -p "${ROOT}/home/swarms/.cache/huggingface/hub"
# aws ssm get-parameter --name "swarms_openai_key" > /root/openaikey.txt
export OPENAI_KEY=`aws ssm get-parameter --name "swarms_openai_key" | jq .Parameter.Value -r `
echo "OPENAI_KEY=${OPENAI_KEY}" > "${ROOT}/var/run/swarms/secrets/env"
## append new homedir
echo "HF_HOME=${ROOT}/home/swarms/.cache/huggingface/hub" >> "${ROOT}/var/run/swarms/secrets/env"
echo "HOME=${ROOT}/home/swarms" >> "${ROOT}/var/run/swarms/secrets/env"
# attempt to move the workspace
echo 'WORKSPACE_DIR=${STATE_DIRECTORY}' >> "${ROOT}/var/run/swarms/secrets/env"
# setup the systemd service again
sed -e "s!ROOT!${ROOT}!g" > /etc/nginx/sites-enabled/default < "${WORKSOURCE}/nginx/site.conf"
sed -e "s!ROOT!${ROOT}!g" > /etc/systemd/system/swarms-uvicorn.service < "${WORKSOURCE}/systemd/uvicorn.service"
grep . -h -n /etc/systemd/system/swarms-uvicorn.service
chown -R swarms:swarms ${ROOT}/var/run/swarms/
mkdir -p ${ROOT}/opt/swarms/api/agent_workspace/try_except_wrapper/
chown -R swarms:swarms ${ROOT}/opt/swarms/api/
# always reload
systemctl daemon-reload
systemctl start swarms-uvicorn || journalctl -xeu swarms-uvicorn.service
systemctl enable swarms-uvicorn || journalctl -xeu swarms-uvicorn.service
systemctl enable nginx
systemctl start nginx
journalctl -xeu swarms-uvicorn.service | tail -200 || echo oops
systemctl status swarms-uvicorn.service || echo oops2
# now after swarms is up, we restart nginx
HOST="localhost"
PORT=5474
while ! nc -z $HOST $PORT; do
sleep 1
echo -n "."
done
echo "Port $PORT is now open!"
systemctl restart nginx

@ -0,0 +1,100 @@
##
# You should look at the following URL's in order to grasp a solid understanding
# of Nginx configuration files in order to fully unleash the power of Nginx.
# https://www.nginx.com/resources/wiki/start/
# https://www.nginx.com/resources/wiki/start/topics/tutorials/config_pitfalls/
# https://wiki.debian.org/Nginx/DirectoryStructure
#
# In most cases, administrators will remove this file from sites-enabled/ and
# leave it as reference inside of sites-available where it will continue to be
# updated by the nginx packaging team.
#
# This file will automatically load configuration files provided by other
# applications, such as Drupal or Wordpress. These applications will be made
# available underneath a path with that package name, such as /drupal8.
#
# Please see /usr/share/doc/nginx-doc/examples/ for more detailed examples.
##
# Default server configuration
#
server {
listen 80 default_server;
listen [::]:80 default_server;
# SSL configuration
#
# listen 443 ssl default_server;
# listen [::]:443 ssl default_server;
#
# Note: You should disable gzip for SSL traffic.
# See: https://bugs.debian.org/773332
#
# Read up on ssl_ciphers to ensure a secure configuration.
# See: https://bugs.debian.org/765782
#
# Self signed certs generated by the ssl-cert package
# Don't use them in a production server!
#
# include snippets/snakeoil.conf;
root /var/www/html;
# Add index.php to the list if you are using PHP
index index.html index.htm index.nginx-debian.html;
server_name _;
location / {
# First attempt to serve request as file, then
# as directory, then fall back to displaying a 404.
# try_files $uri $uri/ =404;
autoindex on;
}
location /swarms {
proxy_pass http://unix:/var/run/swarms/uvicorn-swarms-api.sock;
}
# location /agentartificial {
# autoindex on;
# disable_symlinks off;
# }
# pass PHP scripts to FastCGI server
#
#location ~ \.php$ {
# include snippets/fastcgi-php.conf;
#
# # With php-fpm (or other unix sockets):
# fastcgi_pass unix:/run/php/php7.4-fpm.sock;
# # With php-cgi (or other tcp sockets):
# fastcgi_pass 127.0.0.1:9000;
#}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
# deny all;
#}
}
# Virtual Host configuration for example.com
#
# You can move that to a different file under sites-available/ and symlink that
# to sites-enabled/ to enable it.
#
#server {
# listen 80;
# listen [::]:80;
#
# server_name example.com;
#
# root /var/www/example.com;
# index index.html;
#
# location / {
# try_files $uri $uri/ =404;
# }
#}

@ -0,0 +1,14 @@
# from https://github.com/neamaddin/debian-fastapi-server
server {
listen [::]:80;
listen 80;
server_name swarms;
access_log ROOT/var/log/nginx/swarms/access.log;
error_log ROOT/var/log/nginx/swarms/error.log;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;
location / {
proxy_pass http://127.0.0.1:5474;
}
}
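The `ROOT` placeholders in this site config (and in the systemd units) are rendered at install time with `sed -e "s!ROOT!${ROOT}!g"`. A minimal sketch of that substitution in Python, in case the templating ever moves out of shell (the helper name is mine):

```python
def render_root(template: str, root: str = "") -> str:
    """Mirror of the install script's `sed -e "s!ROOT!${ROOT}!g"`:
    every literal ROOT placeholder becomes the chosen filesystem prefix."""
    return template.replace("ROOT", root)

site = "access_log ROOT/var/log/nginx/swarms/access.log;"
print(render_root(site))                        # system install: empty prefix
print(render_root(site, "/mnt/data1/swarms"))   # local install under a data dir
```

Note the plain string replace hits *any* occurrence of `ROOT`, which is why the templates never use the word outside the placeholder.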

@ -0,0 +1,194 @@
# pip freeze
aiofiles==24.1.0
aiohappyeyeballs==2.4.4
aiohttp==3.11.10
aiosignal==1.3.2
annotated-types==0.7.0
anyio==4.7.0
asgiref==3.8.1
asyncio==3.4.3
attrs==24.3.0
backoff==2.2.1
bcrypt==4.2.1
build==1.2.2.post1
cachetools==5.5.0
certifi==2024.12.14
chardet==5.2.0
charset-normalizer==3.4.0
chroma-hnswlib==0.7.6
chromadb==0.5.20
click==8.1.7
clusterops==0.1.5
coloredlogs==15.0.1
dataclasses-json==0.6.7
Deprecated==1.2.15
dill==0.3.9
distro==1.9.0
dnspython==2.7.0
doc-master==0.0.2
docstring_parser==0.16
durationpy==0.9
email_validator==2.2.0
exceptiongroup==1.2.2
faiss-cpu==1.9.0.post1
fastapi==0.115.6
fastapi-cli==0.0.7
filelock==3.16.1
flatbuffers==24.3.25
frozenlist==1.5.0
fsspec==2024.10.0
google-auth==2.37.0
googleapis-common-protos==1.66.0
GPUtil==1.4.0
greenlet==3.1.1
grpcio==1.68.1
h11==0.14.0
httpcore==1.0.7
httptools==0.6.4
httpx==0.27.2
huggingface-hub==0.27.0
humanfriendly==10.0
hunter==3.7.0
idna==3.10
importlib_metadata==8.5.0
importlib_resources==6.4.5
Jinja2==3.1.4
jiter==0.8.2
joblib==1.4.2
jsonpatch==1.33
jsonpointer==3.0.0
jsonschema==4.23.0
jsonschema-specifications==2024.10.1
kubernetes==31.0.0
langchain-community==0.0.29
langchain-core==0.1.53
langsmith==0.1.147
litellm==1.55.3
loguru==0.7.3
lxml==5.3.0
manhole==1.8.1
markdown-it-py==3.0.0
MarkupSafe==3.0.2
marshmallow==3.23.1
mdurl==0.1.2
mmh3==5.0.1
monotonic==1.6
mpmath==1.3.0
msgpack==1.1.0
multidict==6.1.0
multiprocess==0.70.17
mypy-extensions==1.0.0
networkx==3.4.2
numpy==1.26.4
nvidia-cublas-cu12==12.4.5.8
nvidia-cuda-cupti-cu12==12.4.127
nvidia-cuda-nvrtc-cu12==12.4.127
nvidia-cuda-runtime-cu12==12.4.127
nvidia-cudnn-cu12==9.1.0.70
nvidia-cufft-cu12==11.2.1.3
nvidia-curand-cu12==10.3.5.147
nvidia-cusolver-cu12==11.6.1.9
nvidia-cusparse-cu12==12.3.1.170
nvidia-nccl-cu12==2.21.5
nvidia-nvjitlink-cu12==12.4.127
nvidia-nvtx-cu12==12.4.127
oauthlib==3.2.2
ollama==0.3.3
onnxruntime==1.20.1
openai==1.58.0
opentelemetry-api==1.29.0
opentelemetry-exporter-otlp-proto-common==1.29.0
opentelemetry-exporter-otlp-proto-grpc==1.29.0
opentelemetry-instrumentation==0.50b0
opentelemetry-instrumentation-asgi==0.50b0
opentelemetry-instrumentation-fastapi==0.50b0
opentelemetry-proto==1.29.0
opentelemetry-sdk==1.29.0
opentelemetry-semantic-conventions==0.50b0
opentelemetry-util-http==0.50b0
orjson==3.10.12
overrides==7.7.0
packaging==23.2
pandas==2.2.3
parsimonious==0.10.0
pathos==0.3.3
pillow==11.0.0
pinecone==5.4.2
pinecone-plugin-inference==3.1.0
pinecone-plugin-interface==0.0.7
posthog==3.7.4
pox==0.3.5
ppft==1.7.6.9
propcache==0.2.1
protobuf==5.29.1
psutil==6.1.0
pyasn1==0.6.1
pyasn1_modules==0.4.1
pydantic==2.8.2
pydantic_core==2.20.1
Pygments==2.18.0
PyJWT==2.10.1
pypdf==5.1.0
PyPika==0.48.9
pyproject_hooks==1.2.0
pytesseract==0.3.13
python-dateutil==2.9.0.post0
python-docx==1.1.2
python-dotenv==1.0.1
python-magic==0.4.27
python-multipart==0.0.20
pytz==2024.2
PyYAML==6.0.2
ray==2.40.0
referencing==0.35.1
regex==2024.11.6
reportlab==4.2.5
requests==2.32.3
requests-oauthlib==2.0.0
requests-toolbelt==1.0.0
rich==13.9.4
rich-toolkit==0.12.0
rpds-py==0.22.3
rsa==4.9
safetensors==0.4.5
scikit-learn==1.6.0
scipy==1.14.1
sentence-transformers==3.3.1
sentry-sdk==2.19.2
setuptools==75.6.0
shellingham==1.5.4
singlestoredb==1.10.0
six==1.17.0
sniffio==1.3.1
SQLAlchemy==2.0.36
sqlparams==6.1.0
starlette==0.41.3
swarm-models==0.2.7
-e git+https://github.com/jmikedupont2/swarms@cc67de0b713449f47e02de782c41e429d224f431#egg=swarms
# Editable Git install with no remote (swarms-memory==0.1.2)
-e /opt/swarms-memory
sympy==1.13.1
tenacity==8.5.0
termcolor==2.5.0
threadpoolctl==3.5.0
tiktoken==0.8.0
tokenizers==0.21.0
toml==0.10.2
torch==2.5.1
tqdm==4.67.1
transformers==4.47.1
triton==3.1.0
typer==0.15.1
typing-inspect==0.9.0
typing_extensions==4.12.2
tzdata==2024.2
urllib3==2.2.3
uvicorn==0.34.0
uvloop==0.21.0
watchfiles==1.0.3
websocket-client==1.8.0
websockets==14.1
wheel==0.45.1
wrapt==1.17.0
yarl==1.18.3
zipp==3.21.0

@ -0,0 +1,4 @@
export ROOT=/mnt/data1/swarms;
git commit -m 'wip'
sudo rm "${ROOT}/opt/swarms/install/pull.txt"
sudo bash ./install.sh

@ -0,0 +1,6 @@
export ROOT=""
#/mnt/data1/swarms;
#git commit -m 'wip' -a
sudo rm ${ROOT}/opt/swarms/install/*;
sudo bash ./install.sh

@ -0,0 +1,14 @@
#!/bin/bash
# EDIT: we need to make sure the instance is running
# Get the list of instance IDs
instance_ids=$(aws ec2 describe-instances --query "Reservations[*].Instances[*].InstanceId" --output text)
# Loop through each instance ID and start a session
for instance_id in $instance_ids; do
echo "Starting session for instance: $instance_id"
# Start a session and execute commands (replace with your commands)
aws ssm start-session --target "$instance_id" --document-name "AWS-StartInteractiveCommand" --parameters 'commands=["sudo su -","tail /var/log/cloud-init-output.log"]'
done

@ -0,0 +1,35 @@
# Get the list of instance IDs and their states
instances=$(aws ec2 describe-instances --query "Reservations[*].Instances[*].[InstanceId,State.Name]" --output text)
# aws ssm send-command --document-name AWS-RunShellScript --targets Key=instanceids,Values=i-073378237c5a9dda1 --parameters 'commands=["sudo su - -c \"tail /var/log/cloud-init-output.log\""]'
parse_command_id(){
# send_command_output
local send_command_output=$1
echo "$send_command_output" | jq -r '.Command.CommandId'
}
# Loop through each instance ID and state
while read -r instance_id state; do
if [[ $state == "running" ]]; then
echo "Starting session for instance: $instance_id"
# Start a session and execute commands (replace with your commands)
#aws ssm start-session --target "$instance_id" --document-name "AWS-StartInteractiveCommand" --parameters 'commands=["sudo su -","tail /var/log/cloud-init-output.log"]'
#--target "$instance_id"
send_command_output=$(aws ssm send-command --document-name "AWS-RunShellScript" --targets "Key=instanceids,Values=$instance_id" --parameters 'commands=["sudo su - -c \"tail /var/log/cloud-init-output.log\""]')
# now get the command id
command_id=$(parse_command_id "$send_command_output")
# poll the command status four times, one second apart
for i in {1..4}; do
sleep 1
command_status=$(aws ssm list-command-invocations --command-id "$command_id" --details)
echo "$command_status"
done
fi
done <<< "$instances"

@ -0,0 +1,96 @@
#!/bin/bash
# this is the install script
# install_script = "/opt/swarms/api/rundocker.sh"
# called on boot.
# this is the refresh script called from ssm for a refresh
# #refresh_script = "/opt/swarms/api/docker-boot.sh"
# file not found
#
pwd
ls -latr
. ./.env # for secrets
set -e # stop on any error
#export ROOT="" # empty
export WORKSOURCE="/opt/swarms/api"
adduser --disabled-password --gecos "" swarms --home "/home/swarms" || echo ignore
git config --global --add safe.directory "/opt/swarms"
git config --global --add safe.directory "/opt/swarms-memory"
cd "/opt/swarms/" || exit 1 # "we need swarms"
git log -2 --patch | head -1000
mkdir -p "/var/swarms/agent_workspace/"
mkdir -p "/home/swarms"
cd "/opt/swarms/" || exit 1 # "we need swarms"
mkdir -p "/var/swarms/logs"
chown -R swarms:swarms "/var/swarms/" "/home/swarms" "/opt/swarms"
if [ -f "/var/swarms/agent_workspace/boot_fast.sh" ];
then
chmod +x "/var/swarms/agent_workspace/boot_fast.sh" || echo failed
# user install but do not start
su -c "bash -e -x /var/swarms/agent_workspace/boot_fast.sh" swarms
fi
cd "/opt/swarms/" || exit 1 # "we need swarms"
mkdir -p "/var/run/swarms/secrets/"
mkdir -p "/home/swarms/.cache/huggingface/hub"
set +x
OPENAI_KEY=$(aws ssm get-parameter --name "swarms_openai_key" | jq .Parameter.Value -r )
export OPENAI_KEY
echo "OPENAI_KEY=${OPENAI_KEY}" > "/var/run/swarms/secrets/env"
set -x
## append new homedir
# check if the entry exists already before appending pls
if ! grep -q "HF_HOME" "/var/run/swarms/secrets/env"; then
echo "HF_HOME=/home/swarms/.cache/huggingface/hub" >> "/var/run/swarms/secrets/env"
fi
if ! grep -q "^HOME" "/var/run/swarms/secrets/env"; then
echo "HOME=/home/swarms" >> "/var/run/swarms/secrets/env"
fi
if ! grep -q "^WORKSPACE_DIR" "/var/run/swarms/secrets/env"; then
# attempt to move the workspace
echo "WORKSPACE_DIR=\${STATE_DIRECTORY}" >> "/var/run/swarms/secrets/env"
fi
# setup the systemd service again
sed -e "s!ROOT!!g" > /etc/nginx/sites-enabled/default < "${WORKSOURCE}/nginx/site.conf"
sed -e "s!ROOT!!g" > /etc/systemd/system/swarms-docker.service < "${WORKSOURCE}/systemd/swarms-docker.service"
grep . -h -n /etc/systemd/system/swarms-docker.service
chown -R swarms:swarms /var/run/swarms/
mkdir -p /opt/swarms/api/agent_workspace/try_except_wrapper/
chown -R swarms:swarms /opt/swarms/api/
# always reload
systemctl daemon-reload
systemctl start swarms-docker || journalctl -xeu swarms-docker
systemctl enable swarms-docker || journalctl -xeu swarms-docker
systemctl enable nginx
systemctl start nginx
journalctl -xeu swarms-docker | tail -200 || echo oops
systemctl status swarms-docker || echo oops2
# now after swarms is up, we restart nginx
HOST="localhost"
PORT=5474
while ! nc -z $HOST $PORT; do
sleep 1
echo -n "."
done
echo "Port ${PORT} is now open!"
systemctl restart nginx
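The `if ! grep -q "^KEY" env; then echo "KEY=..." >> env; fi` guards above keep the secrets env file idempotent across reruns. A sketch of the same ensure-line logic, runnable against a temp file (the `ensure_line` helper is mine, not from the repo):

```python
import os
import tempfile

def ensure_line(path, key, value):
    """Append KEY=VALUE only if no line already starts with KEY=,
    mirroring the script's grep-guarded appends."""
    lines = []
    if os.path.exists(path):
        with open(path) as f:
            lines = f.read().splitlines()
    if not any(line.startswith(f"{key}=") for line in lines):
        with open(path, "a") as f:
            f.write(f"{key}={value}\n")

env = os.path.join(tempfile.mkdtemp(), "env")
ensure_line(env, "HOME", "/home/swarms")
ensure_line(env, "HOME", "/home/other")  # ignored: HOME already set
print(open(env).read())  # → HOME=/home/swarms
```

Guarding on the key rather than the whole line is what makes reruns safe: a changed value is deliberately not rewritten, matching the shell behavior.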

@ -0,0 +1,32 @@
import time
import boto3

def parse_command_id(send_command_output):
    return send_command_output['Command']['CommandId']

def main():
    ec2_client = boto3.client('ec2')
    ssm_client = boto3.client('ssm')
    # Get the list of instance IDs and their states
    instances_response = ec2_client.describe_instances()
    instances = [
        (instance['InstanceId'], instance['State']['Name'])
        for reservation in instances_response['Reservations']
        for instance in reservation['Instances']
    ]
    for instance_id, state in instances:
        if state == 'running':
            print(f"Starting command for instance: {instance_id}")
            print(f"aws ssm start-session --target {instance_id}")

if __name__ == "__main__":
    main()

@ -0,0 +1,26 @@
# derived from https://github.com/encode/uvicorn/issues/678
# derived from https://blog.container-solutions.com/running-docker-containers-with-systemd
[Unit]
Description=swarms
After=docker.service
#Requires=docker.service
[Service]
EnvironmentFile=ROOT/var/run/swarms/secrets/env
RestartSec=10
TimeoutStartSec=0
Restart=always
ExecStartPre=-/usr/bin/docker stop %n
ExecStartPre=-/usr/bin/docker rm %n
ExecStartPre=/usr/bin/docker pull h4ckermike/swarms-api:experimental
#ExecStart=/usr/bin/docker run --rm --name %n h4ckermike/swarms-api:experimental
ExecStart=/usr/bin/docker run --rm --name %n --network host h4ckermike/swarms-api:experimental
StandardOutput=file:/var/log/swarms_systemd.log
StandardError=file:/var/log/swarms_systemd.log
ExecReload=/bin/kill -HUP ${MAINPID}
[Install]
WantedBy=multi-user.target

@ -0,0 +1,82 @@
convert this to docker and remove systemd
# derived from https://github.com/encode/uvicorn/issues/678
[Unit]
Description=swarms
After=network.target
[Service]
Type=simple
User=swarms
Group=swarms
DynamicUser=true
WorkingDirectory=ROOT/opt/swarms/api/
ReadWritePaths=ROOT/opt/swarms/api/agent_workspace/
StateDirectory=swarms_state
PrivateTmp=true
ProtectHome=true
EnvironmentFile=ROOT/var/run/swarms/secrets/env
# for debugging use
#ExecStart=/usr/bin/unbuffer ROOT/var/run/uvicorn/env/bin/uvicorn
# this runs and is modifiable by the agent, so this can be destabilized
# FIXME: containerize the run(chroot jail?)
ExecStart=/usr/bin/unbuffer ROOT/var/swarms/agent_workspace/.venv/bin/uvicorn \
--proxy-headers \
--forwarded-allow-ips='*' \
--workers=4 \
--port=5474 \
main:create_app
StandardOutput=file:/var/log/swarms_systemd.log
StandardError=file:/var/log/swarms_systemd.log
ExecReload=/bin/kill -HUP ${MAINPID}
RestartSec=10
Restart=always
[Install]
WantedBy=multi-user.target
To convert the provided systemd service configuration to a Docker container, you'll need a `Dockerfile` and potentially a `docker-compose.yml` file for easier management. Here's a basic setup:
**Dockerfile**:
```Dockerfile
# Use the appropriate base image
FROM python:3.9-slim
# Set working directory
WORKDIR /opt/swarms/api
# Copy application files
COPY . .
# Install dependencies (assuming requirements.txt is in the build context)
RUN pip install --no-cache-dir -r requirements.txt
# Expose the port
EXPOSE 5474
# Secrets are supplied at runtime (e.g. `docker run --env-file`): ENV cannot
# read a file at build time, and baking keys into the image would leak them.
# Command to run the application
CMD ["uvicorn", "--proxy-headers", "--forwarded-allow-ips=*", "--workers=4", "--port=5474", "main:create_app"]
```
**docker-compose.yml** (optional, a minimal sketch; the service name and host-side env-file path are assumptions):
```yaml
services:
  swarms-api:
    build: .
    ports:
      - "5474:5474"
    # pass secrets at runtime rather than baking them into the image
    env_file:
      - /var/run/swarms/secrets/env
    restart: always
```
### Steps to Build and Run
1. Save the `Dockerfile` and `docker-compose.yml` in your project directory.
2. Build and run the container using:
```bash
docker-compose up --build
```
### Notes
- Adjust the Python base image version as needed.
- Ensure that your `requirements.txt` includes the necessary dependencies for your application.
- You can configure volume mounts and environment variables as per your requirement.
- Logs can be managed by your logging mechanism or Docker logging options.

@ -0,0 +1,30 @@
#!/bin/python3
# TODO: rewrite this to also cancel-spot-instance-requests
import boto3

# Create an EC2 client
ec2_client = boto3.client('ec2')

# Retrieve instance IDs
response = ec2_client.describe_instances()
for reservation in response['Reservations']:
    for instance in reservation['Instances']:
        print(instance)

instance_ids = [instance['InstanceId']
                for reservation in response['Reservations']
                for instance in reservation['Instances']]

# Terminate instances
for instance_id in instance_ids:
    print(f"Terminating instance: {instance_id}")
    ec2_client.terminate_instances(InstanceIds=[instance_id])

# Check the status of the terminated instances
terminated_instances = ec2_client.describe_instances(InstanceIds=instance_ids)
for reservation in terminated_instances['Reservations']:
    for instance in reservation['Instances']:
        print(f"Instance ID: {instance['InstanceId']}, State: {instance['State']['Name']}")

@ -0,0 +1,7 @@
#!/bin/bash
instance_ids=$(aws ec2 describe-instances --query "Reservations[*].Instances[*].InstanceId" --output text)
for instance_id in $instance_ids; do
echo "terminate instance: $instance_id"
aws ec2 terminate-instances --instance-ids "$instance_id"
done

@ -0,0 +1,34 @@
#!/usr/bin/env python3
import boto3

# Create an EC2 client
ec2_client = boto3.client('ec2')

# Retrieve instance IDs and Spot Instance Request IDs
response = ec2_client.describe_instances()
instance_ids = []
spot_request_ids = []
for reservation in response['Reservations']:
    for instance in reservation['Instances']:
        print(instance)
        instance_ids.append(instance['InstanceId'])
        if 'SpotInstanceRequestId' in instance:
            spot_request_ids.append(instance['SpotInstanceRequestId'])

# Terminate instances
for instance_id in instance_ids:
    print(f"Terminating instance: {instance_id}")
    ec2_client.terminate_instances(InstanceIds=[instance_id])

# Cancel Spot Instance Requests
for spot_request_id in spot_request_ids:
    print(f"Cancelling Spot Instance Request: {spot_request_id}")
    ec2_client.cancel_spot_instance_requests(SpotInstanceRequestIds=[spot_request_id])

# Check the status of the terminated instances
terminated_instances = ec2_client.describe_instances(InstanceIds=instance_ids)
for reservation in terminated_instances['Reservations']:
    for instance in reservation['Instances']:
        print(f"Instance ID: {instance['InstanceId']}, State: {instance['State']['Name']}")
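The id-collection loop above can be factored into a pure helper so it is testable without AWS credentials; `collect_ids` is a hypothetical name for this sketch, not part of the script:

```python
def collect_ids(describe_response):
    """Pull instance ids and spot request ids out of a describe_instances() payload."""
    instance_ids, spot_request_ids = [], []
    for reservation in describe_response.get('Reservations', []):
        for instance in reservation.get('Instances', []):
            instance_ids.append(instance['InstanceId'])
            if 'SpotInstanceRequestId' in instance:
                spot_request_ids.append(instance['SpotInstanceRequestId'])
    return instance_ids, spot_request_ids

# Canned response shaped like boto3's describe_instances() output:
sample = {'Reservations': [{'Instances': [
    {'InstanceId': 'i-0', 'SpotInstanceRequestId': 'sir-0'},
    {'InstanceId': 'i-1'},
]}]}
print(collect_ids(sample))  # → (['i-0', 'i-1'], ['sir-0'])
```

The real script would then call `collect_ids(ec2_client.describe_instances())` before terminating anything.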

@ -0,0 +1,2 @@
For the Terraform scripts, see this Git repo for more information:
https://github.com/jmikedupont2/swarms-terraform

@ -0,0 +1,18 @@
#/mnt/data1/swarms/var/run/uvicorn/env/bin/uvicorn
# --no-access-log \
#python -m pdb
#/mnt/data1/swarms/var/swarms/agent_workspace/.venv/bin/uvicorn \
. /mnt/data1/swarms/var/swarms/agent_workspace/.venv/bin/activate
/mnt/data1/swarms/var/swarms/agent_workspace/.venv/bin/python3 ~mdupont/2024/05/swarms/api/uvicorn_runner.py \
--proxy-headers \
--port=54748 \
--forwarded-allow-ips='*' \
--workers=1 \
--log-level=debug \
--uds /mnt/data1/swarms/run/uvicorn/uvicorn-swarms-api.sock \
main:app
# _.asgi:application

@ -0,0 +1,10 @@
#/mnt/data1/swarms/var/run/uvicorn/env/bin/uvicorn
# --no-access-log \
#python -m pdb
#/mnt/data1/swarms/var/swarms/agent_workspace/.venv/bin/uvicorn \
. /mnt/data1/swarms/var/swarms/agent_workspace/.venv/bin/activate
pip install hunter
/mnt/data1/swarms/var/swarms/agent_workspace/.venv/bin/python3 ~mdupont/2024/05/swarms/api/uvicorn_runner.py

@ -0,0 +1,66 @@
import time
import requests
import boto3
#from dateutil import tz
def parse_command_id(send_command_output):
    return send_command_output['Command']['CommandId']

def main():
    ec2_client = boto3.client('ec2')
    ssm_client = boto3.client('ssm')

    # Get the list of instance IDs and their states
    instances_response = ec2_client.describe_instances()
    for reservation in instances_response['Reservations']:
        for instance in reservation['Instances']:
            state = instance['State']["Name"]
            instance_id = instance['InstanceId']
            if state == 'running':
                ip = instance["PublicIpAddress"]
                instance_type = instance["InstanceType"]
                BASE_URL = f"http://{ip}:80/v1"
                target = f"{BASE_URL}/docs"
                print(f"Starting command for instance: {instance_id} {target} {instance_type}")
                try:
                    response = requests.get(target, timeout=8)
                    print(f"got response: {instance_id} {target} {instance_type} {response}")
                except Exception as exp:
                    print(f"got error: {instance_id} {target} {instance_type} {exp}")
# {'AmiLaunchIndex': 0, 'ImageId': 'ami-0e2c8caa4b6378d8c',
#'InstanceId': 'i-0d41e4263f40babec',
#'InstanceType': 't3.small',
#'KeyName': 'mdupont-deployer-key', 'LaunchTime': datetime.datetime(2024, 12, 14, 16, 1, 50, tzinfo=tzutc()),
# 'Monitoring': {'State': 'disabled'},
# 'Placement': {'AvailabilityZone': 'us-east-1a', 'GroupName': '', 'Tenancy': 'default'}, 'PrivateDnsName': 'ip-10-0-4-18.ec2.internal', 'PrivateIpAddress': '10.0.4.18', 'ProductCodes': [],
#'PublicDnsName': 'ec2-3-228-14-220.compute-1.amazonaws.com',
#'PublicIpAddress': '3.228.14.220',
# 'State': {'Code': 16, 'Name': 'running'}, 'StateTransitionReason': '', 'SubnetId': 'subnet-057c90cfe7b2e5646', 'VpcId': 'vpc-04f28c9347af48b55', 'Architecture': 'x86_64',
# 'BlockDeviceMappings': [{'DeviceName': '/dev/sda1',
# 'Ebs': {'AttachTime': datetime.datetime(2024, 12, 14, 16, 1, 50, tzinfo=tzutc()), 'DeleteOnTermination': True, 'Status': 'attached', 'VolumeId': 'vol-0257131dd2883489b'}}], 'ClientToken': 'b5864f17-9e56-2d84-fc59-811abf8e6257', 'EbsOptimized': False, 'EnaSupport': True, 'Hypervisor': 'xen', 'IamInstanceProfile':
# {'Arn': 'arn:aws:iam::767503528736:instance-profile/swarms-20241213150629570500000003', 'Id': 'AIPA3FMWGOMQKC4UE2UFO'}, 'NetworkInterfaces': [
# {'Association':
# {'IpOwnerId': 'amazon', 'PublicDnsName': 'ec2-3-228-14-220.compute-1.amazonaws.com', 'PublicIp': '3.228.14.220'}, 'Attachment':
# {'AttachTime': datetime.datetime(2024, 12, 14, 16, 1, 50, tzinfo=tzutc()), 'AttachmentId': 'eni-attach-009b54c039077324e', 'DeleteOnTermination': True, 'DeviceIndex': 0, 'Status': 'attached', 'NetworkCardIndex': 0}, 'Description': '', 'Groups': [
# {'GroupName': 'swarms-20241214133959057000000001', 'GroupId': 'sg-03c9752b62d0bcfe4'}], 'Ipv6Addresses': [], 'MacAddress': '02:c9:0b:47:cb:df', 'NetworkInterfaceId': 'eni-08661c8b4777c65c7', 'OwnerId': '767503528736', 'PrivateDnsName': 'ip-10-0-4-18.ec2.internal', 'PrivateIpAddress': '10.0.4.18', 'PrivateIpAddresses': [
# {'Association':
# {'IpOwnerId': 'amazon', 'PublicDnsName': 'ec2-3-228-14-220.compute-1.amazonaws.com', 'PublicIp': '3.228.14.220'}, 'Primary': True, 'PrivateDnsName': 'ip-10-0-4-18.ec2.internal', 'PrivateIpAddress': '10.0.4.18'}], 'SourceDestCheck': True, 'Status': 'in-use', 'SubnetId': 'subnet-057c90cfe7b2e5646', 'VpcId': 'vpc-04f28c9347af48b55', 'InterfaceType': 'interface'}], 'RootDeviceName': '/dev/sda1', 'RootDeviceType': 'ebs', 'SecurityGroups': [
# {'GroupName': 'swarms-20241214133959057000000001', 'GroupId': 'sg-03c9752b62d0bcfe4'}], 'SourceDestCheck': True, 'Tags': [
# {'Key': 'Name', 'Value': 'swarms-size-t3.small'},
# {'Key': 'aws:ec2launchtemplate:id', 'Value': 'lt-0e618a900bd331cfe'},
# {'Key': 'aws:autoscaling:groupName', 'Value': 'swarms-size-t3.small-2024121416014474500000002f'},
# {'Key': 'aws:ec2launchtemplate:version', 'Value': '1'}], 'VirtualizationType': 'hvm', 'CpuOptions':
# {'CoreCount': 1, 'ThreadsPerCore': 2}, 'CapacityReservationSpecification':
# {'CapacityReservationPreference': 'open'}, 'HibernationOptions':
# {'Configured': False}, 'MetadataOptions':
# {'State': 'applied', 'HttpTokens': 'required', 'HttpPutResponseHopLimit': 2, 'HttpEndpoint': 'enabled', 'HttpProtocolIpv6': 'disabled', 'InstanceMetadataTags': 'disabled'}, 'EnclaveOptions':
# {'Enabled': False}, 'BootMode': 'uefi-preferred', 'PlatformDetails': 'Linux/UNIX', 'UsageOperation': 'RunInstances', 'UsageOperationUpdateTime': datetime.datetime(2024, 12, 14, 16, 1, 50, tzinfo=tzutc()), 'PrivateDnsNameOptions':
# {'HostnameType': 'ip-name', 'EnableResourceNameDnsARecord': False, 'EnableResourceNameDnsAAAARecord': False}, 'MaintenanceOptions':
# {'AutoRecovery': 'default'}, 'CurrentInstanceBootMode': 'uefi'}
if __name__ == "__main__":
    main()
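The URL construction buried in the loop above can be pulled out into a small pure function for testing; `running_endpoints` is a hypothetical helper name, not part of the script:

```python
def running_endpoints(describe_response, port=80):
    """Return (instance_id, docs_url) for every running instance with a public IP."""
    out = []
    for reservation in describe_response.get('Reservations', []):
        for instance in reservation.get('Instances', []):
            if instance.get('State', {}).get('Name') != 'running':
                continue
            ip = instance.get('PublicIpAddress')
            if ip:
                out.append((instance['InstanceId'], f"http://{ip}:{port}/v1/docs"))
    return out

# Canned response using the field shapes from the commented dump above:
sample = {'Reservations': [{'Instances': [
    {'InstanceId': 'i-0', 'State': {'Name': 'running'}, 'PublicIpAddress': '3.228.14.220'},
    {'InstanceId': 'i-1', 'State': {'Name': 'stopped'}},
]}]}
print(running_endpoints(sample))  # → [('i-0', 'http://3.228.14.220:80/v1/docs')]
```

main() would then only loop over `running_endpoints(...)` and issue the GETs.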

@ -0,0 +1,2 @@
# Mike's tools
apt install -y emacs-nox  # tmux skipped (installing it tries to pull in postfix)

@ -0,0 +1,58 @@
#!/mnt/data1/swarms/var/swarms/agent_workspace/.venv/bin/python
# -*- coding: utf-8 -*-
import re
import sys
import pdb
import logging
# from uvicorn.main import main
# if __name__ == '__main__':
# sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
# try:
# print("main")
# pdb.set_trace()
# ret = main()
# print(ret)
# except Exception as e:
# print(e)
#// sys.exit(main())
import uvicorn
from uvicorn.config import LOGGING_CONFIG
import hunter
hunter.trace(stdlib=False,
             action=hunter.CallPrinter)

def main():
    #root_path = ''
    #if len(sys.argv) >= 2:
    #    root_path = sys.argv[1]
    ##
    # %(name)s : uvicorn, uvicorn.error, ... . Not insightful at all.
    LOGGING_CONFIG["formatters"]["access"]["fmt"] = '%(asctime)s %(levelprefix)s %(client_addr)s - "%(request_line)s" %(status_code)s'
    LOGGING_CONFIG["formatters"]["default"]["fmt"] = "%(asctime)s %(levelprefix)s %(message)s"
    date_fmt = "%Y-%m-%d %H:%M:%S"
    LOGGING_CONFIG["formatters"]["default"]["datefmt"] = date_fmt
    LOGGING_CONFIG["formatters"]["access"]["datefmt"] = date_fmt
    ##
    # Re-apply a consistent formatter to every logger registered so far
    formatter = logging.Formatter(
        fmt="%(asctime)s %(levelname)s %(message)s", datefmt=date_fmt
    )
    for logger_name in logging.root.manager.loggerDict.keys():
        print(logger_name)
        override_logger = logging.getLogger(logger_name)
        for handler in override_logger.handlers:
            print(handler)
            handler.setFormatter(formatter)

    uvicorn.run(
        "main:app",
        host="127.0.0.1",
        port=7230,
        log_level="trace",
        proxy_headers=True,
        forwarded_allow_ips='*',
        workers=1,
        uds="/mnt/data1/swarms/run/uvicorn/uvicorn-swarms-api.sock")
    # root_path=root_path

main()
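The formatter override done in main() can be checked in isolation before wiring it into uvicorn; this sketch attaches a handler to a throwaway logger (the name `demo` is arbitrary) and confirms the format string takes effect:

```python
import io
import logging

formatter = logging.Formatter("%(levelname)s %(message)s")
logger = logging.getLogger("demo")
logger.handlers = []        # drop any handlers from earlier runs
logger.propagate = False    # keep output out of the root logger
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(formatter)
logger.addHandler(handler)

logger.error("boom")
print(stream.getvalue().strip())  # → ERROR boom
```

The same `handler.setFormatter(formatter)` call is what the loop over `logging.root.manager.loggerDict` applies to every existing handler.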

@ -0,0 +1,36 @@
services:
  swarms:
    image: swarm
    build:
      context: .
      dockerfile: Dockerfile
    network_mode: host
    ipc: host
    # environment:
    #   - OPENAI_API_KEY=sk-1234
    #   - OPENAI_API_BASE=http://100.96.149.57:7091
    #   - OPENAI_API_BASE=http://localhost:5000/v1
    # command: python3 example.py
    # restart: always
    # deploy:
    #   resources:
    #     reservations:
    #       devices:
    #         - driver: nvidia
    #           count: all
    #           capabilities: [gpu]
    user: swarms
    volumes:
      - ./api:/opt/swarms/api
      - ./swarms:/opt/swarms/swarms
      - ./logs:/var/log  # Mounting volume for logs
    # environment:
    #   - ENV_VAR_1=value1  # Add necessary environment variables
    #   - ENV_VAR_2=value2
    # restart: always
    # ports:
    #   - "5474:5474"

@ -0,0 +1 @@
grep -h -P -o "([a-zA-Z]+)" -r * |sort |uniq -c |sort -n >names.txt
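The grep pipeline above counts identifier frequencies across the tree; the same counting step in Python, as a sketch (it takes a string rather than walking files, so it is not a drop-in replacement):

```python
import re
from collections import Counter

def name_counts(text):
    """Count alphabetic identifier runs, like grep -o "([a-zA-Z]+)" | sort | uniq -c."""
    return Counter(re.findall(r"[a-zA-Z]+", text))

counts = name_counts("foo bar foo baz1qux")
print(counts["foo"])  # → 2
```

Note the regex splits `baz1qux` into `baz` and `qux`, exactly as the grep pattern does.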

@ -0,0 +1,739 @@
absl-py==2.1.0
accelerate==1.0.1
-e git+https://github.com/gventuri/pandas-ai/@5c84fd37065c7de806701e5e7b99df298e93b4f6#egg=ai_ticket&subdirectory=../../../../../../../time/2023/09/24/ai-ticket
aiofiles==23.2.1
aiohttp==3.9.5
aiohttp-cors==0.7.0
aiosignal==1.3.1
alabaster==0.7.12
albucore==0.0.20
albumentations==1.4.21
alectryon==1.4.0
aliyun-python-sdk-core==2.15.1
aliyun-python-sdk-kms==2.16.3
annotated-types==0.7.0
ansible==6.7.0
ansible-base==2.10.8
ansible-core==2.13.13
ansible-vault==2.1.0
antlr4-python3-runtime==4.9.3
anyio==4.3.0
apache-libcloud==3.2.0
apispec==6.8.0
APScheduler==3.10.4
apturl==0.5.2
argcomplete==1.8.1
argo-workflows==3.6.1
asciidoc==10.2.1
asciinema==2.1.0
asgiref==3.8.1
astroid==2.9.3
async-timeout==4.0.3
asyncio==3.4.3
attr==0.3.2
attrdict==2.0.1
attrs==23.2.0
autoflake==2.3.1
autogluon==1.1.1
autogluon.common==1.1.1
autogluon.core==1.1.1
autogluon.features==1.1.1
autogluon.multimodal==1.1.1
autogluon.tabular==1.1.1
autogluon.timeseries==1.1.1
Automat==24.8.1
autopep8==1.6.0
aws-cdk-lib==2.116.0
aws-cdk.asset-awscli-v1==2.2.213
aws-cdk.asset-kubectl-v20==2.1.3
aws-cdk.asset-node-proxy-agent-v6==2.1.0
aws-sam-translator==1.89.0
aws-shell==0.2.2
awscli==1.32.85
Babel==2.8.0
backoff==2.2.1
base58==2.1.1
bcrypt==4.1.2
beautifulsoup4==4.12.3
begins==0.9
bidict==0.23.1
binaryornot==0.4.3
bitsandbytes==0.43.3
bittensor==7.2.0
-e git+https://github.com/opentensor/bittensor-subnet-template@7622775e0a267a564959c8690108f9152e123522#egg=bittensor_subnet_template&subdirectory=../../../../../../../time/2024/06/07/bittensor-subnet-template
black==23.7.0
blake3==1.0.0
blessed==1.20.0
blessings==1.7
blinker==1.7.0
blis==0.7.11
boto3==1.34.85
botocore==1.34.85
build==1.2.1
CacheControl==0.14.0
cachetools==5.3.3
catalogue==2.0.10
catboost==1.2.5
catfish==4.16.3
cattrs==23.2.3
-e git+https://github.com/Agent-Artificial/cellium-client@ee4df8906a43c2e408b1ad3cf27f84816a51a58d#egg=cellium&subdirectory=../../../../../../../../../../home/mdupont/2024/05/31/cellium-client
certifi==2024.2.2
cffi==1.15.0
cfgv==3.4.0
cfn-lint==0.87.7
chardet==4.0.0
charset-normalizer==2.1.1
chroma-hnswlib==0.7.5
-e git+https://github.com/chroma-core/chroma@28b37392594dd7ba60e35c53f098d7f88a9d3988#egg=chromadb&subdirectory=../../../../../../../time/2024/07/21/chroma
cleo==2.1.0
cliapp==1.20180812.1
click==8.1.7
clip-anytorch==2.6.0
cloudpathlib==0.18.1
cloudpickle==3.0.0
clusterops==0.1.2
cmdtest==0.32+git
colorama==0.4.6
coloredlogs==15.0.1
colorful==0.5.6
comm==0.2.2
command-not-found==0.3
commonmark==0.9.1
communex==0.1.27.3
compel==2.0.2
confection==0.1.5
ConfigArgParse==1.7
configobj==5.0.8
constantly==23.10.4
constructs==10.4.2
contourpy==1.2.1
controlnet_aux==0.0.7
cookiecutter==1.7.3
cpufeature==0.2.1
crashtest==0.4.1
crcmod==1.7
cryptography==42.0.5
cssselect==1.2.0
cupshelpers==1.0
cycler==0.12.1
cymem==2.0.8
Cython==0.29.28
cytoolz==0.12.3
daiquiri==3.2.5.1
dashscope==1.20.13
dataclasses-json==0.6.6
datasets==2.17.1
dbus-python==1.2.18
ddt==1.6.0
debugpy==1.8.2
defer==1.0.6
delegator.py==0.1.1
Deprecated==1.2.14
devscripts===2.22.1ubuntu1
dictdiffer==0.9.0
diffusers==0.31.0
Dijkstar==2.6.0
dill==0.3.8
dirtyjson==1.0.8
distlib==0.3.8
distro==1.7.0
distro-info===1.1build1
dnspython==2.1.0
doc-master==0.0.2
docker==7.1.0
docker-compose==1.29.2
docker-pycreds==0.4.0
docopt==0.6.2
docstring_parser==0.16
docutils==0.16
dominate==2.9.1
dulwich==0.21.7
dynamicprompts==0.31.0
ecdsa==0.19.0
einops==0.8.0
email_validator==2.1.1
-e git+https://github.com/meta-introspector/https-lablab.ai-event-audiocraft-24-hours-hackathon/@ef86774c7e61855044ca0c97dbcb988d18570984#egg=emojintrospector
eth-hash==0.7.0
eth-keys==0.5.1
eth-typing==4.2.3
eth-utils==2.2.2
eval_type_backport==0.2.0
evaluate==0.4.0
exceptiongroup==1.2.0
extruct==0.17.0
facexlib==0.3.0
fastai==2.7.15
fastapi==0.111.0
fastapi-cli==0.0.4
fastapi-events==0.11.1
fastapi-sso==0.10.0
fastcore==1.5.55
fastdownload==0.0.7
fastjsonschema==2.20.0
fastprogress==1.0.3
ffmpy==0.4.0
filelock==3.14.0
filterpy==1.4.5
flake8==4.0.1
Flask==3.0.3
Flask-Cors==4.0.1
flask-sock==0.7.0
flatbuffers==24.3.25
fonttools==4.55.0
frozenlist==1.4.1
fsspec==2023.10.0
ftfy==6.3.1
future==1.0.0
fuzzywuzzy==0.18.0
gdown==5.2.0
gevent==24.2.1
gguf==0.10.0
git-remote-codecommit==1.17
gitdb==4.0.11
github==1.2.7
github-action-utils==1.1.0
github3.py==4.0.1
GitPython==3.1.42
gluonts==0.15.1
google-api-core==2.19.1
google-auth==2.29.0
googleapis-common-protos==1.63.2
gpg===1.16.0-unknown
gpustat==0.6.0
GPUtil==1.4.0
gql==3.5.0
gradio==4.44.0
gradio_client==1.3.0
graphql-core==3.2.3
graphviz==0.20.3
greenlet==3.0.3
grpcio==1.62.1
grpcio-tools==1.62.1
gunicorn==22.0.0
h11==0.14.0
-e git+https://github.com/Agent-Artificial/hivemind@941933de3378f1cd8a5b4fa053a3eb33253ab8ed#egg=hivemind&subdirectory=../../../vendor/hivemind
html5lib==1.1
html5lib-modern==1.2
html_text==0.6.2
httpcore==1.0.4
httplib2==0.20.2
httptools==0.6.1
httpx==0.27.0
huggingface-hub==0.26.1
humanfriendly==10.0
hyperlink==21.0.0
hyperopt==0.2.7
identify==2.6.3
idna==3.4
ijson==3.3.0
imageio==2.34.2
imagesize==1.3.0
img2pdf==0.4.2
importlib_metadata==7.1.0
importlib_resources==6.4.0
incremental==24.7.2
iniconfig==2.0.0
installer==0.7.0
intervaltree==3.1.0
invisible-watermark==0.2.0
-e git+https://github.com/invoke-ai/InvokeAI@ebd73a2ac22ed4f06271b3c4850740cf84ab136a#egg=InvokeAI&subdirectory=../../../../../../../time/2024/11/30/InvokeAI
ipykernel==6.29.5
ipympl==0.9.4
ipywidgets==8.1.5
isodate==0.6.1
isort==5.13.2
itemadapter==0.9.0
itemloaders==1.3.2
itsdangerous==2.1.2
jaraco.classes==3.4.0
jax==0.4.31
jaxlib==0.4.31
jeepney==0.7.1
Jinja2==3.1.3
jinja2-time==0.2.0
jmespath==0.10.0
joblib==1.3.2
jschema-to-python==1.2.3
jsii==1.105.0
json-schema-generator==0.3
json5==0.9.28
jsonformatter==0.3.2
jsonlines==4.0.0
jsonpatch==1.33
jsonpickle==3.2.1
jsonpointer==2.4
jsonschema==4.23.0
jsonschema-specifications==2023.12.1
jstyleson==0.0.2
junit-xml==1.9
jupyter==1.0.0
jupyter-console==6.6.3
jupyter_core==5.7.2
jupyterlab_widgets==3.0.13
kaggle==1.6.17
keylimiter==0.1.5
keyring==24.3.1
kiwisolver==1.4.7
kubernetes==30.1.0
langchain==0.1.13
langchain-community==0.0.29
langchain-core==0.1.52
langchain-experimental==0.0.55
langchain-text-splitters==0.0.2
langcodes==3.4.0
langsmith==0.1.67
language-selector==0.1
language_data==1.2.0
lark-parser==0.12.0
launchpadlib==1.10.16
lazr.restfulclient==0.14.4
lazr.uri==1.0.6
lazy-object-proxy==0.0.0
lazy_loader==0.4
libcst==1.4.0
lightdm-gtk-greeter-settings==1.2.2
lightgbm==4.3.0
lightning==2.3.3
lightning-utilities==0.11.6
lion-pytorch==0.2.2
-e git+https://github.com/Agent-Artificial/litellm@3b2f04c8cb1a42fe5db8bcbf62d2e41a3a72f52a#egg=litellm&subdirectory=../../../vendor/litellm
livereload==2.6.3
llvmlite==0.43.0
lockfile==0.12.2
logilab-common==1.8.2
loguru==0.7.2
lsprotocol==2023.0.1
lxml==4.8.0
lxml_html_clean==0.3.0
Mako==1.1.3
marisa-trie==1.2.0
Markdown==3.3.6
markdown-it-py==3.0.0
MarkupSafe==2.1.5
marshmallow==3.21.2
matplotlib==3.9.3
mccabe==0.6.1
mdurl==0.1.2
mediapipe==0.10.14
menulibre==2.2.2
mercurial==6.1.1
mf2py==2.0.1
mkdocs==1.1.2
ml-dtypes==0.4.0
mlforecast==0.10.0
mmh3==4.1.0
mock==5.1.0
model-index==0.1.11
modelscope_studio==0.5.0
monotonic==1.6
more-itertools==8.10.0
morphys==1.0
mpmath==1.3.0
msgpack==1.0.8
msgpack-numpy-opentensor==0.5.0
mugshot==0.4.3
multiaddr==0.0.9
multidict==6.0.5
multiprocess==0.70.16
munch==2.5.0
murmurhash==1.0.10
mypy-extensions==1.0.0
mypy-protobuf==3.6.0
netaddr==0.8.0
netifaces==0.11.0
networkx==3.2.1
ninja==1.11.1.1
nlpaug==1.1.11
nltk==3.8.1
nodeenv==1.9.1
nptyping==2.4.1
npyscreen==4.10.5
ntlm-auth==1.4.0
numba==0.60.0
numpy==1.26.4
nvidia-cublas-cu12==12.1.3.1
nvidia-cuda-cupti-cu12==12.1.105
nvidia-cuda-nvrtc-cu12==12.1.105
nvidia-cuda-runtime-cu12==12.1.105
nvidia-cudnn-cu12==8.9.2.26
nvidia-cufft-cu12==11.0.2.54
nvidia-curand-cu12==10.3.2.106
nvidia-cusolver-cu12==11.4.5.107
nvidia-cusparse-cu12==12.1.0.106
nvidia-ml-py==12.560.30
nvidia-ml-py3==7.352.0
nvidia-nccl-cu12==2.20.5
nvidia-nvjitlink-cu12==12.3.101
nvidia-nvtx-cu12==12.1.105
oauthlib==3.2.2
ocrmypdf==13.4.0+dfsg
olefile==0.46
omegaconf==2.2.3
onboard==1.4.1
onnx==1.16.1
onnxruntime==1.19.2
openai==1.30.5
-e git+https://github.com/peterdemin/openai-cli@3af6c0eb6272ca4c3b79aca3220252dba324e20c#egg=openai_cli&subdirectory=../../../../../../../time/2024/05/31/openai-cli
opencensus==0.11.4
opencensus-context==0.1.3
opencv-contrib-python==4.10.0.84
opencv-python==4.9.0.80
opencv-python-headless==4.9.0.80
opendatalab==0.0.10
openmim==0.3.9
openshift==0.11.0
opentelemetry-api==1.25.0
opentelemetry-exporter-otlp-proto-common==1.25.0
opentelemetry-exporter-otlp-proto-grpc==1.25.0
opentelemetry-instrumentation==0.46b0
opentelemetry-instrumentation-asgi==0.46b0
opentelemetry-instrumentation-fastapi==0.46b0
opentelemetry-proto==1.25.0
opentelemetry-sdk==1.25.0
opentelemetry-semantic-conventions==0.46b0
opentelemetry-util-http==0.46b0
openxlab==0.1.1
opt-einsum==3.3.0
optimum==1.18.1
ordered-set==4.1.0
orjson==3.10.3
oss2==2.17.0
overrides==7.7.0
packaging==23.2
pandas==2.2.2
paramiko==3.4.0
parsel==1.9.1
password-strength==0.0.3.post2
pathspec==0.12.1
pathtools==0.1.2
patsy==0.5.6
pbr==6.0.0
pdf2image==1.17.0
pdfminer.six===-VERSION-
peft==0.4.0
pendulum==3.0.0
pep8==1.7.1
-e git+https://github.com/tbarbette/perf-class@19f8299fb8a2cff33189e77c4547acf3a20f2a8b#egg=perf_class&subdirectory=../../../../../../../time/2024/07/05/perf-class
petals @ git+https://github.com/bigscience-workshop/petals@d2fcbbc72e02b88cc34f2da8b3ae7de2873204a9
pexpect==4.8.0
picklescan==0.0.18
pikepdf==5.0.1+dfsg
pillow==10.3.0
pipdeptree==2.21.0
pipenv==11.9.0
pipx==1.0.0
pkginfo==1.11.1
platformdirs==4.2.2
playwright==1.46.0
plotly==5.23.0
pluggy==1.5.0
ply==3.11
poetry==1.8.4
poetry-core==1.9.1
poetry-plugin-export==1.8.0
posthog==3.5.0
pre_commit==4.0.1
prefetch-generator==1.0.3
preshed==3.0.9
prettytable==3.12.0
prometheus_client==0.21.1
prompt_toolkit==3.0.47
Protego==0.3.1
proto-plus==1.24.0
protobuf==4.25.3
psutil==5.9.0
ptyprocess==0.7.0
publication==0.0.3
PuLP==2.9.0
py-bip39-bindings==0.1.11
py-cid==0.3.0
py-ed25519-zebra-bindings==1.0.1
py-multibase==1.0.3
py-multicodec==0.2.1
py-multihash==0.2.3
py-spy==0.3.14
py-sr25519-bindings==0.2.0
py4j==0.10.9.7
pyarrow==15.0.0
pyarrow-hotfix==0.6
pyasn1==0.6.0
pyasn1_modules==0.4.0
pycairo==1.20.1
pycodestyle==2.8.0
pycparser==2.21
pycryptodome==3.20.0
pycryptodomex==3.11.0
pycups==2.0.1
pycurl==7.44.1
pydantic==2.8.2
pydantic-settings==2.2.1
pydantic_core==2.20.1
PyDispatcher==2.0.7
pydot==3.0.1
pydub==0.25.1
pyee==11.1.0
pyelftools==0.31
pyflakes==3.2.0
pygame==2.1.2
PyGithub==1.59.1
pygls==1.3.1
Pygments==2.17.2
PyGObject==3.42.1
pyinotify==0.9.6
PyJWT==2.8.0
pykerberos==1.1.14
pylint==2.12.2
pymacaroons==0.13.0
Pympler==1.1
-e git+https://github.com/ivilata/pymultihash@215298fa2faa55027384d1f22519229d0918cfb0#egg=pymultihash&subdirectory=../../../../../../../time/2024/04/17/pymultihash
PyNaCl==1.5.0
pynvml==12.0.0
pyOpenSSL==24.2.1
pyparsing==3.1.1
PyPatchMatch==1.0.1
pypdf==5.1.0
pyperclip==1.9.0
PyPika==0.48.9
pyproject_hooks==1.1.0
pyRdfa3==3.6.4
pyre==1.12.5
pyre-check==0.9.21
pyre-extensions==0.0.30
pyreadline3==3.5.4
pyroute2==0.post0
pyroute2.core==0.post0
pyroute2.ethtool==0.post0
pyroute2.ipdb==0.post0
pyroute2.ipset==0.post0
pyroute2.ndb==0.post0
pyroute2.nftables==0.post0
pyroute2.nslink==0.post0
pyroute2.protocols==0.post0
PySimpleSOAP==1.16.2
PySocks==1.7.1
pytesseract==0.3.10
pytest==8.2.2
pytest-asyncio==0.23.7
python-apt==2.4.0
python-baseconv==1.2.2
python-dateutil==2.8.2
python-debian===0.1.43ubuntu1
python-debianbts==3.2.0
python-docx==1.1.2
python-dotenv==1.0.1
python-engineio==4.10.1
python-json-logger==2.0.7
python-Levenshtein==0.12.2
python-magic==0.4.24
python-multipart==0.0.9
python-slugify==4.0.0
python-socketio==5.11.1
python-statemachine==2.1.2
python-string-utils==1.0.0
pytorch-lightning==2.1.3
pytorch-metric-learning==2.3.0
pytz==2023.4
PyWavelets==1.6.0
pywinrm==0.3.0
pyxattr==0.7.2
pyxdg==0.27
PyYAML==6.0.1
pyzmq==26.0.3
qtconsole==5.5.2
QtPy==2.4.1
queuelib==1.7.0
-e git+https://github.com/QwenLM/Qwen-Agent.git@3db6738f5603e6215b4c39db59d390e694b7087f#egg=qwen_agent&subdirectory=../../../../../../../time/2024/11/25/Qwen-Agent
rapidfuzz==3.9.3
ratelimit==2.2.1
ray==2.10.0
-e git+https://github.com/RDFLib/rdflib@0b69f4f5f49aa2ea1caf23bbee20c7166625a4bd#egg=rdflib&subdirectory=../../../../../../../time/2024/05/26/rdflib
Recoll==1.31.6
recollchm==0.8.4.1+git
recommonmark==0.6.0
redis==5.0.6
referencing==0.35.1
regex==2023.12.25
reportbug===11.4.1ubuntu1
reportlab==3.6.8
requests==2.32.3
requests-file==2.1.0
requests-kerberos==0.12.0
requests-ntlm==1.1.0
requests-oauthlib==2.0.0
requests-toolbelt==1.0.0
resolvelib==0.8.1
responses==0.18.0
retry==0.9.2
rfc5424-logging-handler==1.4.3
rich==13.4.2
roku==4.1.0
-e git+https://github.com/ncmiller/roku-cli.git@6990df804840fbc69d892a47bc655b15ed425e28#egg=rokucli&subdirectory=../../../../../../../time/2024/10/21/roku-cli
roman==3.3
rpds-py==0.18.1
rq==1.16.2
rsa==4.7.2
ruamel.yaml==0.18.6
ruamel.yaml.clib==0.2.12
ruff==0.4.4
-e git+https://github.com/run-house/runhouse@96f0daf81b3f7c8116dcc3c2350e6abd05c917bd#egg=runhouse&subdirectory=../../../../../../../time/2024/12/06/runhouse
s3transfer==0.10.1
safetensors==0.4.3
sarif-om==1.0.4
scalecodec==1.2.7
scikit-image==0.20.0
scikit-learn==1.4.0
scipy==1.12.0
scour==0.38.2
Scrapy==2.11.2
screen-resolution-extra==0.0.0
SecretStorage==3.3.1
selinux==3.3
semantic-version==2.10.0
semver==3.0.2
sentencepiece==0.2.0
sentry-sdk==1.40.6
seqeval==1.2.2
service-identity==24.1.0
setproctitle==1.3.3
sgt-launcher==0.2.7
shellingham==1.5.4
shtab==1.6.5
simple-websocket==1.0.0
simplejson==3.19.2
simsimd==6.2.1
six==1.16.0
skypilot @ file:///mnt/data1/nix/time/2024/07/11/skypilot
smart-open==7.0.4
smmap==5.0.1
sniffio==1.3.1
snowballstemmer==2.2.0
sortedcontainers==2.4.0
sounddevice==0.5.1
soupsieve==2.3.1
spacy==3.7.5
spacy-legacy==3.0.12
spacy-loggers==1.0.5
spandrel==0.3.4
speedtest-cli==2.1.3
Sphinx==4.3.2
sphinx-rtd-theme==1.0.0
SQLAlchemy==2.0.30
srsly==2.4.8
ssh-import-id==5.11
starlette==0.37.2
-e git+https://github.com/maguowei/starred@f1ae04d5ee11952ad2ede1b8e9b679f347126cea#egg=starred&subdirectory=../../../../../../../time/2024/07/18/starred
statsforecast==1.4.0
statsmodels==0.14.2
strace-parser @ file:///mnt/data1/nix/time/2024/09/15/strace-parser
stringzilla==3.10.10
substrate-interface==1.7.5
-e git+https://github.com/jmikedupont2/swarm-models.git@11db002d774a86a50b3c6cc303ee707f12274576#egg=swarm_models&subdirectory=../../../../../../../time/2024/12/05/swarm-models
-e git+https://github.com/meta-introspector/swarms.git@82a2d8954b9b4668801bdce59a23df4b0d16df1f#egg=swarms&subdirectory=../../../../../../../time/2024/05/31/swarms
sympy==1.13.3
systemd-python==234
tabulate==0.9.0
tenacity==8.2.3
tensor-parallel==1.0.23
tensorboard==2.17.0
tensorboard-data-server==0.7.2
tensorboardX==2.6.2.2
termcolor==2.4.0
test_tube==0.7.5
-e git+https://github.com/Josephrp/testcontainers-python@298e0e7a260c21f81fa6e7bcf40613a094b8ef2b#egg=testcontainers&subdirectory=../../../../../../../time/2024/08/04/testcontainers-python
TestSlide==2.7.1
text-unidecode==1.3
thespian==3.10.7
thinc==8.2.5
thoth-analyzer==0.1.8
thoth-common==0.36.6
thoth-license-solver==0.1.5
thoth-python==0.16.11
thoth-solver @ file:///mnt/data1/nix/time/2024/06/01/solver
threadpoolctl==3.5.0
tifffile==2024.7.24
tiktoken==0.7.0
time-machine==2.16.0
timm==0.6.13
tldextract==5.1.2
tokenize-rt==5.2.0
tokenizers==0.20.3
toml==0.10.2
tomli==2.0.1
tomlkit==0.12.0
toolz==0.12.1
torch==2.3.1
-e git+https://github.com/pyg-team/pytorch_geometric.git@8bb44edf7c7e687aca44daca0e6cc5eb6ae076b0#egg=torch_geometric&subdirectory=../../../../../../../time/2023/06/03/pytorch_geometric
torchaudio==2.3.1
torchmetrics==1.2.1
torchsde==0.2.6
torchvision==0.18.1
tornado==6.1
tqdm==4.65.2
trampoline==0.1.2
transformers==4.46.3
-e git+https://github.com/elektito/trick@1946e731a3c247d9973c093695b7ea6162f1052a#egg=trick_scheme&subdirectory=../../../../../../../time/2024/07/18/trick
triton==2.3.1
trl==0.7.11
trove-classifiers==2024.5.22
ttystatus==0.38
Twisted==24.7.0
typeguard==2.13.3
typer==0.12.3
types-chardet==5.0.4.6
types-protobuf==5.26.0.20240422
types-pytz==2024.1.0.20240417
types-toml==0.10.8.20240310
typing-inspect==0.9.0
typing_extensions==4.11.0
tyro==0.7.3
tzdata==2024.1
tzlocal==5.2
ubuntu-advantage-tools==27.12
ubuntu-drivers-common==0.0.0
ufw==0.36.1
ujson==5.10.0
unattended-upgrades==0.1
Unidecode==1.3.3
unidiff==0.5.5
-e git+https://github.com/freckletonj/uniteai@653b2d01d5899f261af502a0b3367b4489d67821#egg=uniteai&subdirectory=../../../../../../../time/2024/06/27/uniteai
uritemplate==4.1.1
urllib3==2.2.3
userpath==1.8.0
utilsforecast==0.0.10
uvicorn==0.28.0
uvloop==0.19.0
varint==1.0.2
virtualenv==20.28.0
virtualenv-clone==0.3.0
vmdb2==0.24
w3lib==2.2.1
wadllib==1.3.6
wajig==4.0.3
wandb==0.15.3
wasabi==1.1.3
watchfiles==0.22.0
wcwidth==0.2.13
weasel==0.4.1
webencodings==0.5.1
websocket==0.2.1
websocket-client @ git+https://github.com/websocket-client/websocket-client.git@77337ef76f1f38b14742ab28309f9ca51b8fb011
websockets==12.0
Werkzeug==3.0.2
widgetsnbextension==4.0.13
window_ops==0.0.15
wrapt==1.13.3
wsproto==1.2.0
xcffib==0.11.1
xdg==5
xdot==1.2
xgboost==2.0.3
xkit==0.0.0
xmltodict==0.13.0
xxhash==3.4.1
-e git+https://github.com/mschuett/yaml-shellcheck.git@08537c9c42734041d9da07a143437d2565fa6f83#egg=yaml_shellcheck&subdirectory=../../../../../../../time/2024/10/21/yaml-shellcheck
yarl==1.9.4
youtube-dl==2021.12.17
yq==3.4.3
zipp==1.0.0
zope.event==5.0
zope.interface==6.4.post2

@ -120,7 +120,7 @@ mypy-protobuf = "^3.0.0"
[tool.poetry.group.test.dependencies]
pytest = "^8.1.1"
pandas = "^2.2.2"
#pandas = "^2.2.2"
[tool.ruff]
line-length = 70

@ -1,9 +1,9 @@
torch>=2.1.1,<3.0
transformers>=4.39.0,<5.0.0
#torch>=2.1.1,<3.0
#transformers>=4.39.0,<5.0.0
asyncio>=3.4.3,<4.0
toml
pypdf==5.1.0
#pypdf==5.1.0
ratelimit==2.2.1
loguru
pydantic==2.8.2
@ -21,10 +21,10 @@ types-pytz>=2023.3,<2025.0
types-chardet>=5.0.4.6
mypy-protobuf>=3.0.0
pytest>=8.1.1
pandas>=2.2.2
#pandas>=2.2.2
networkx
aiofiles
clusterops
reportlab
#reportlab
doc-master
termcolor

@ -0,0 +1,2 @@
# uvicorn takes a module:attribute app spec, with --factory when the attribute is an app factory like create_app
/usr/bin/unbuffer /var/swarms/agent_workspace/.venv/bin/uvicorn --proxy-headers --factory --app-dir /opt/swarms/api main:create_app

@ -0,0 +1,10 @@
pip list \
| tail -n +3 \
| awk '{print $1}' \
| xargs pip show \
| grep -E 'Location:|Name:' \
| cut -d ' ' -f 2 \
| paste -d ' ' - - \
| awk '{print $2 "/" tolower($1)}' \
| xargs du -sh 2> /dev/null \
| sort -hr

@ -2355,16 +2355,15 @@ class Agent:
import requests
data_dict = {"data": self.to_dict()}
url = "https://swarms.world/api/get-agents/log-agents"
headers = {
"Content-Type": "application/json",
"Authorization": "Bearer sk-f24a13ed139f757d99cdd9cdcae710fccead92681606a97086d9711f69d44869",
}
response = requests.post(url, json=data_dict, headers=headers)
return response.json()
logger.info(f"was going to send to swarms { data_dict}")
# url = "https://swarms.world/api/get-agents/log-agents"
# headers = {
# "Content-Type": "application/json",
# "Authorization": "Bearer sk-f24a13ed139f757d99cdd9cdcae710fccead92681606a97086d9711f69d44869",
# }
# response = requests.post(url, json=data_dict, headers=headers)
# return response.json()
return {}
def handle_tool_schema_ops(self):
if exists(self.tool_schema):

@ -0,0 +1,41 @@
import os
import subprocess
from swarms.utils.loguru_logger import initialize_logger
from swarms.telemetry.check_update import check_for_update
logger = initialize_logger(log_folder="auto_upgrade_swarms")
def auto_update():
    """auto update swarms"""
    pass
# try:
# # Check if auto-update is disabled
# auto_update_enabled = os.getenv(
# "SWARMS_AUTOUPDATE_ON", "false"
# ).lower()
# if auto_update_enabled == "false":
# logger.info(
# "Auto-update is disabled via SWARMS_AUTOUPDATE_ON"
# )
# return
# outcome = check_for_update()
# if outcome is True:
# logger.info(
# "There is a new version of swarms available! Downloading..."
# )
# try:
# subprocess.run(
# ["pip", "install", "-U", "swarms"], check=True
# )
# except subprocess.CalledProcessError:
# logger.info("Attempting to install with pip3...")
# subprocess.run(
# ["pip3", "install", "-U", "swarms"], check=True
# )
# else:
# logger.info("swarms is up to date!")
# except Exception as e:
# logger.error(e)
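The commented-out logic above gates the upgrade on the SWARMS_AUTOUPDATE_ON variable; that gate is pure and can be sketched on its own (the helper name `auto_update_allowed` is hypothetical):

```python
import os

def auto_update_allowed(env=None):
    """True only when SWARMS_AUTOUPDATE_ON is explicitly set to a truthy string."""
    env = env if env is not None else os.environ
    return env.get("SWARMS_AUTOUPDATE_ON", "false").lower() in ("true", "1", "yes")

print(auto_update_allowed({"SWARMS_AUTOUPDATE_ON": "True"}))  # → True
print(auto_update_allowed({}))                                # → False
```

Keeping the default False matches the disabled auto_update() above: nothing upgrades unless the operator opts in.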

@ -66,14 +66,27 @@ def log_agent_data(data_dict: dict) -> dict | None:
if not data_dict:
logger.error("Empty data dictionary provided")
raise ValueError("data_dict cannot be empty")
url = "https://swarms.world/api/get-agents/log-agents"
headers = {
"Content-Type": "application/json",
"Authorization": "Bearer sk-f24a13ed139f757d99cdd9cdcae710fccead92681606a97086d9711f69d44869",
}
requests.post(url, json=data_dict, headers=headers, timeout=10)
# response.raise_for_status()
return None
print(data_dict)
# url = "https://swarms.world/api/get-agents/log-agents"
# headers = {
# "Content-Type": "application/json",
# "Authorization": "Bearer sk-f24a13ed139f757d99cdd9cdcae710fccead92681606a97086d9711f69d44869",
# }
# print(data_dict)
#try:
# response = requests.post(
# url, json=data_dict, headers=headers, timeout=10
# )
# response.raise_for_status()
# result = response.json()
# return result
# except requests.exceptions.Timeout:
# logger.warning("Request timed out")
# except requests.exceptions.HTTPError as e:
# logger.error(f"HTTP error occurred: {e}")
# if response.status_code == 401:
# logger.error("Authentication failed - check API key")
# except requests.exceptions.RequestException as e:
# logger.error(f"Error logging agent data: {e}")
#logger.error("Failed to log agent data")
return {}
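Per the commit notes ("for flexibility and security we dont want to have hard coded callbacks"), one way to keep the URL and bearer token out of this function is to inject the sender; this is a sketch with a hypothetical name (`log_agent_data_via`), not the repo's actual API:

```python
def log_agent_data_via(data_dict, sender=None):
    """Forward agent data through an injected callable; no-op when none is configured."""
    if not data_dict:
        raise ValueError("data_dict cannot be empty")
    if sender is None:
        return {}
    return sender(data_dict)

# A test double standing in for the real HTTP call:
calls = []
result = log_agent_data_via({"agent": "a1"},
                            sender=lambda d: calls.append(d) or {"ok": True})
print(result, calls)  # → {'ok': True} [{'agent': 'a1'}]
```

The production sender would wrap requests.post with the endpoint and credentials read from configuration, never from source.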

@ -10,14 +10,30 @@ def activate_sentry_async():
use_telementry = os.getenv("USE_TELEMETRY")
if use_telementry == "True":
#sentry_sdk.init(
# #dsn="https://5d72dd59551c02f78391d2ea5872ddd4@o4504578305490944.ingest.us.sentry.io/4506951704444928",
#)
sentry_sdk.init(
dsn="https://5d72dd59551c02f78391d2ea5872ddd4@o4504578305490944.ingest.us.sentry.io/4506951704444928",
dsn="https://4fd91d75ad5635da55cdd3069e8fdd97@o4508452173840384.ingest.de.sentry.io/4508452178493520",
# Set traces_sample_rate to 1.0 to capture 100%
# of transactions for tracing.
traces_sample_rate=1.0,
#traces_sample_rate=1.0,
profiles_sample_rate=1.0,
enable_tracing=True,
debug=False, # Set debug to False
debug=True,  # debug enabled while diagnosing; set back to False for production
_experiments={
# Set continuous_profiling_auto_start to True
# to automatically start the profiler on when
# possible.
"continuous_profiling_auto_start": True,
},
)
#asgi_app = SentryAsgiMiddleware(asgi_app)
def activate_sentry():
t = threading.Thread(target=activate_sentry_async)

@ -6,81 +6,90 @@ from threading import Thread
def disable_langchain():
"""
Disables the LangChain deprecation warning.
"""
from langchain_core._api.deprecation import (
LangChainDeprecationWarning,
)
pass
# """
# Disables the LangChain deprecation warning.
# """
# from langchain_core._api.deprecation import (
# LangChainDeprecationWarning,
# )
# Ignore LangChainDeprecationWarning
warnings.filterwarnings(
"ignore", category=LangChainDeprecationWarning
)
# # Ignore LangChainDeprecationWarning
# warnings.filterwarnings(
# "ignore", category=LangChainDeprecationWarning
# )
def disable_logging():
"""
Disables logging for specific modules and sets up file and stream handlers.
Runs in a separate thread to avoid blocking the main thread.
"""
os.environ["WORKSPACE_DIR"] = "agent_workspace"
warnings.filterwarnings("ignore", category=UserWarning)
# disable tensorflow warnings
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3"
# Set the logging level for the entire module
logging.basicConfig(level=logging.ERROR)
try:
log = logging.getLogger("pytorch")
log.propagate = False
log.setLevel(logging.ERROR)
except Exception as error:
print(f"Pytorch logging not disabled: {error}")
logger_names = [
"tensorflow",
"h5py",
"numexpr",
"git",
"wandb.docker.auth",
"langchain",
"distutils",
"urllib3",
"elasticsearch",
"packaging",
]
# Use concurrent futures to set the level for each logger concurrently
with concurrent.futures.ThreadPoolExecutor() as executor:
executor.map(set_logger_level, logger_names)
# Remove all existing handlers
logging.getLogger().handlers = []
# Get the workspace directory from the environment variables
workspace_dir = os.environ["WORKSPACE_DIR"]
# Check if the workspace directory exists, if not, create it
if not os.path.exists(workspace_dir):
os.makedirs(workspace_dir)
# Create a file handler to log errors to the file
file_handler = logging.FileHandler(
os.path.join(workspace_dir, "error.txt")
)
file_handler.setLevel(logging.ERROR)
logging.getLogger().addHandler(file_handler)
# Create a stream handler to log errors to the terminal
stream_handler = logging.StreamHandler()
stream_handler.setLevel(logging.ERROR)
logging.getLogger().addHandler(stream_handler)
disable_langchain()
pass
for logger_name in logging.root.manager.loggerDict.keys():
print("LOGGER", logger_name)
override_logger = logging.getLogger(logger_name)
for handler in override_logger.handlers:
print(handler)
handler.setFormatter(formatter)
# """
# Disables logging for specific modules and sets up file and stream handlers.
# Runs in a separate thread to avoid blocking the main thread.
# """
# os.environ["WORKSPACE_DIR"] = "agent_workspace"
# warnings.filterwarnings("ignore", category=UserWarning)
# # disable tensorflow warnings
# os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3"
# # Set the logging level for the entire module
# logging.basicConfig(level=logging.ERROR)
# try:
# log = logging.getLogger("pytorch")
# log.propagate = False
# log.setLevel(logging.ERROR)
# except Exception as error:
# print(f"Pytorch logging not disabled: {error}")
# logger_names = [
# "tensorflow",
# "h5py",
# "numexpr",
# "git",
# "wandb.docker.auth",
# "langchain",
# "distutils",
# "urllib3",
# "elasticsearch",
# "packaging",
# ]
# # Use concurrent futures to set the level for each logger concurrently
# with concurrent.futures.ThreadPoolExecutor() as executor:
# executor.map(set_logger_level, logger_names)
# # Remove all existing handlers
# logging.getLogger().handlers = []
# # Get the workspace directory from the environment variables
# workspace_dir = os.environ["WORKSPACE_DIR"]
# # Check if the workspace directory exists, if not, create it
# if not os.path.exists(workspace_dir):
# os.makedirs(workspace_dir)
# # Create a file handler to log errors to the file
# file_handler = logging.FileHandler(
# os.path.join(workspace_dir, "error.txt")
# )
# file_handler.setLevel(logging.ERROR)
# logging.getLogger().addHandler(file_handler)
# # Create a stream handler to log errors to the terminal
# stream_handler = logging.StreamHandler()
# stream_handler.setLevel(logging.ERROR)
# logging.getLogger().addHandler(stream_handler)
# disable_langchain()
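The commented-out body above silenced a list of noisy third-party loggers through a thread pool. A minimal standalone sketch of that technique (the logger names are an example subset of the original list):

```python
import concurrent.futures
import logging

NOISY_LOGGERS = ["tensorflow", "urllib3", "elasticsearch"]  # example subset


def quiet(name: str) -> None:
    # Raise the threshold so only ERROR and above get through.
    logging.getLogger(name).setLevel(logging.ERROR)


def quiet_all(names=NOISY_LOGGERS) -> None:
    # setLevel is cheap, so the pool is optional; this mirrors the
    # original's executor.map(set_logger_level, logger_names) structure.
    with concurrent.futures.ThreadPoolExecutor() as executor:
        list(executor.map(quiet, names))
```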
def set_logger_level(logger_name: str) -> None:
@@ -91,7 +100,7 @@ def set_logger_level(logger_name: str) -> None:
logger_name (str): The name of the logger to modify.
"""
logger = logging.getLogger(logger_name)
- logger.setLevel(logging.CRITICAL)
+ logger.setLevel(logging.TRACE)
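One caveat with the new line: stdlib `logging` has no `TRACE` constant, so `logging.TRACE` raises `AttributeError` unless the level is registered first. A minimal sketch of registering it (the value 5, below `DEBUG` = 10, is a common convention, not a stdlib default):

```python
import logging

TRACE = 5  # below DEBUG (10)


def register_trace_level() -> None:
    """Make logging.TRACE usable so setLevel(logging.TRACE) works."""
    logging.addLevelName(TRACE, "TRACE")
    logging.TRACE = TRACE  # attach the constant the diff above relies on


register_trace_level()
logging.getLogger("example").setLevel(logging.TRACE)
```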
def start_disable_logging_in_thread():

@@ -1,122 +1,39 @@
import os
import uuid
from typing import Any, Dict
import sys # stderr
from loguru import logger
import requests
from swarms.telemetry.sys_info import system_info
def log_agent_data(data: Any) -> Dict:
"""
Send data to the agent logging API endpoint.
Args:
data: Any data structure that can be JSON serialized
Returns:
Dict: The JSON response from the API
"""
try:
# Prepare the data payload
data_dict = {"data": data}
# API endpoint configuration
url = "https://swarms.world/api/get-agents/log-agents"
headers = {
"Content-Type": "application/json",
"Authorization": "Bearer sk-f24a13ed139f757d99cdd9cdcae710fccead92681606a97086d9711f69d44869",
}
# Send the request
response = requests.post(url, json=data_dict, headers=headers)
response.raise_for_status() # Raise an error for HTTP codes 4xx/5xx
# Return the JSON response
return response.json()
except Exception as e:
logger.error(f"Failed to log agent data: {e}")
return {"error": str(e)}
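The error-wrapping contract above (return the parsed response, or an `{"error": ...}` dict on any failure, never raise) can be exercised without a network by injecting the transport. `send` is a hypothetical stand-in for the `requests.post` call:

```python
import json
from typing import Any, Callable, Dict


def log_data(data: Any, send: Callable[[str], str]) -> Dict:
    """Wrap data as {"data": ...} like log_agent_data does; any transport
    failure is folded into an {"error": ...} dict instead of raising."""
    try:
        raw = send(json.dumps({"data": data}))
        return json.loads(raw)
    except Exception as e:
        return {"error": str(e)}
```

Injecting the transport keeps the retry/error policy testable; the production path would pass a closure over `requests.post`.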
def initialize_logger(log_folder: str = "logs"):
"""
Initialize and configure the Loguru logger.
Args:
log_folder: The folder where logs will be stored.
Returns:
The configured Loguru logger.
"""
AGENT_WORKSPACE = "agent_workspace"
# Check if WORKSPACE_DIR is set, if not, set it to AGENT_WORKSPACE
if "WORKSPACE_DIR" not in os.environ:
os.environ["WORKSPACE_DIR"] = AGENT_WORKSPACE
# Create the log folder within the workspace
log_folder_path = os.path.join(
os.getenv("WORKSPACE_DIR"), log_folder
)
if not os.path.exists(log_folder_path):
os.makedirs(log_folder_path)
# Generate a unique identifier for the log file
uuid_for_log = str(uuid.uuid4())
log_file_path = os.path.join(
log_folder_path, f"{log_folder}_{uuid_for_log}.log"
)
# Add a Loguru sink for file logging
logger.add(
log_file_path,
level="INFO",
colorize=True,
backtrace=True,
diagnose=True,
enqueue=True,
# retention="10 days",
# compression="zip",
)
# Add a Loguru sink to intercept all log messages and send them to `log_agent_data`
class AgentLogHandler:
def write(self, message):
if message.strip(): # Avoid sending empty messages
payload = {
"log": str(message.strip()),
"folder": log_folder,
"metadata": system_info(),
}
response = log_agent_data(payload)
logger.debug(
f"Sent to API: {payload}, Response: {response}"
)
logger.add(AgentLogHandler(), level="INFO")
# AGENT_WORKSPACE = "agent_workspace"
# # Check if WORKSPACE_DIR is set, if not, set it to AGENT_WORKSPACE
# if "WORKSPACE_DIR" not in os.environ:
# os.environ["WORKSPACE_DIR"] = AGENT_WORKSPACE
# # Create a folder within the agent_workspace
# log_folder_path = os.path.join(
# os.getenv("WORKSPACE_DIR"), log_folder
# )
# if not os.path.exists(log_folder_path):
# os.makedirs(log_folder_path)
# # Generate a unique identifier for the log file
# uuid_for_log = str(uuid.uuid4())
# #log_file_path = os.path.join(
# #log_folder_path, f"{log_folder}_{uuid_for_log}.log"
# # log_folder_path, f"{log_folder}.log"
# #)
# logger.add(
# # log_file_path,
# sys.stderr,
# level="TRACE",
# colorize=True,
# backtrace=True,
# diagnose=True,
# enqueue=True,
# retention="10 days",
# # compression="zip",
# )
return logger
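`AgentLogHandler` above is a Loguru sink object with a `write` method; the same forward-every-record idea expressed with stdlib `logging` (the `ship` callable is a stand-in for `log_agent_data`):

```python
import logging
from typing import Callable


class ForwardingHandler(logging.Handler):
    """Send each formatted record to a shipper, mirroring AgentLogHandler."""

    def __init__(self, ship: Callable[[str], None], level=logging.INFO):
        super().__init__(level)
        self.ship = ship

    def emit(self, record: logging.LogRecord) -> None:
        message = self.format(record).strip()
        if message:  # skip empty messages, as the original sink does
            self.ship(message)


# wiring sketch: logging.getLogger("agent").addHandler(ForwardingHandler(print))
```

One design note either way: shipping synchronously from inside a sink means every log call pays the HTTP round trip, which is why the Loguru sink above is paired with `enqueue=True`.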
# if __name__ == "__main__":
# # Initialize the logger
# logger = initialize_logger()
# # Generate test log messages
# logger.info("This is a test info log.")
# logger.warning("This is a test warning log.")
# logger.error("This is a test error log.")
# # Simulate agent data logging
# test_data = {
# "agent_name": "TestAgent",
# "task": "Example Task",
# "status": "Running",
# "details": {
# "runtime": "5s",
# "success": True
# }
# }
# log_agent_data(test_data)
# print("Test logging completed.")
