Commits in this pull request (pull/703/head), one message per line:
park first version to test
bugfixing update
install fix
dir install
update
almost working
bugfix
move to trace level for now
log file name logging
bugfix
pep8
next wip
adding hunter telehack
update
work in progress
now to test; we need to make sure the app is called correctly: main:create_app in swarms/api/systemd/uvicorn.service
force pull
fixup logging
wip (repeated 12 times)
adding scripts
bugfix
wip (repeated 5 times)
no more sock
update
wip (repeated 2 times)
bugfix install useradd
switch branch
get rid of local
ignore error on user
more local
more local
bugfix
APT rerun
adding nginx
adding expect for unbuffer
debugging start nginx
bugfix
print hf_home and aws-cli
bugfix
adding values for undefined variables: + key_name = "fixme" + user_id = "fixme"
merge fix
now for the free tier ubuntu
wip
example cleanup
remove emacs
update
terminate spot instances as well
bugfix: https://meta-introspector.sentry.io/issues/14387352/?query=is%3Aunresolved%20issue.priority%3A%5Bhigh%2C%20medium%5D&referrer=issue-stream&stream_index=0 (multiprocessing/process.py in _bootstrap at line 329: ModuleNotFoundError: No module named 'pathos')
removing callbacks for flexibility and security; we don't want to have hard coded callbacks in the code
now lets add a sleep to wait for swarms
removing prints (OSError: [Errno 5] Input/output error; File "src/hunter/_tracer.pyx", line 45, in hunter._tracer.trace_func; File "src/hunter/_predicates.pyx", line 587, in hunter._predicates.fast_call; File "src/hunter/_predicates.pyx", line 360, in hunter._predicates.fast_When_call; File "main.py", line 56, in process_hunter_event: print("MOD", mod); OSError: [Errno 5] Input/output error; 3 additional frame(s) were not displayed)
starting the documentation of terraform
check api status
now testing
test working
adding pip freeze
fix bug
rebase
just run
just run bootfast
update
just run
start of cloudwatch logs
now remove the branch
remove checkout
just run
dont echo
remove echo
adding in the new install for systemd and nginx from swarms; we will move this out into ssm files we can apply on boot later
update
update
wip
build dockerfile in github
building
Update docker-image.yml
wip readme
start of dockerization
create boot script to be called from ssm
adding header
adding docker boot
going to test
shell check
update rundocker
bugfix
expose on network docker
adding torch and together
compose tests
compose test
parent 135b02a812
commit 86fae9fa33
@@ -0,0 +1,34 @@
name: Docker Compose Test

on:
  workflow_dispatch:
  push:
  pull_request:

jobs:

  build:

    runs-on: ubuntu-latest

    steps:
    - name: Login to Docker Hub
      uses: docker/login-action@v3
      with:
        username: ${{ vars.DOCKERHUB_USERNAME }}
        password: ${{ secrets.DOCKERHUB_TOKEN }}
    - uses: actions/checkout@v4
    # - name: Build and push Docker images
    #   uses: docker/build-push-action@v6.10.0
    #   with:
    #     push: true
    #     tags: h4ckermike/swarms-api:experimental
    # - name: Build the Docker image
    #   run: docker build . --file Dockerfile --tag my-image-name:$(date +%s)
    - uses: adambirds/docker-compose-action@v1.5.0
      with:
        compose-file: "./docker-compose.yml"
        up-flags: "--build"
        down-flags: "--volumes"
        #test-container: "test-container"
        #test-command: "npm test"
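The job above brings up `./docker-compose.yml`, which is not part of this diff. A minimal hedged sketch of what that compose file might look like, assuming the image is built from the repository's Dockerfile and exposes the API port (5474) used elsewhere in these scripts; the service name is illustrative only:

```yaml
services:
  swarms-api:
    build: .          # build from the repo Dockerfile (assumption)
    ports:
      - "5474:5474"   # uvicorn port used by the other scripts in this PR
```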
@@ -0,0 +1,2 @@
rerun:
	bash ./rerun.sh
@@ -0,0 +1,95 @@
`sudo bash ./install.sh`

To redo all the steps, remove the lock files:

`rm ${ROOT}/opt/swarms/install/*`

or on my system:

```
export ROOT=/mnt/data1/swarms
sudo rm ${ROOT}/opt/swarms/install/*
```

rerun:

```
export ROOT=/mnt/data1/swarms;
sudo rm ${ROOT}/opt/swarms/install/*;
sudo bash ./install.sh
```

* setup

To install on Linux:
https://docs.aws.amazon.com/systems-manager/

```
curl "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/ubuntu_64bit/session-manager-plugin.deb" -o "session-manager-plugin.deb"
sudo dpkg -i ./session-manager-plugin.deb
```

* run

To redo the installation steps for the Swarms tool on your system, follow these commands sequentially:

1. Set the ROOT variable:
```bash
export ROOT=/mnt/data1/swarms
```

2. Remove the lock files:
```bash
sudo rm ${ROOT}/opt/swarms/install/*
```

3. Run the installation script again:
```bash
sudo bash ./install.sh
```

For setting up the Session Manager plugin on Linux, you can follow these commands:

1. Download the Session Manager plugin:
```bash
curl "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/ubuntu_64bit/session-manager-plugin.deb" -o "session-manager-plugin.deb"
```

2. Install the plugin:
```bash
sudo dpkg -i ./session-manager-plugin.deb
```

After that, you can run your desired commands or workflows.

** get the instance id
`aws ec2 describe-instances`

** start a session
`aws ssm start-session --target i-XXXX`

** on the machine:
```
sudo su -
tail /var/log/cloud-init-output.log
```

Convert this to an automation of your choice to run all the steps
and run this on all the instances.

To get the instance ID and start a session using the AWS CLI, follow these steps:

1. **Get the Instance ID:**
   Run the following command to list your instances and their details:
   ```bash
   aws ec2 describe-instances
   ```

2. **Start a Session:**
   Replace `i-XXXX` with your actual instance ID from the previous step:
   ```bash
   aws ssm start-session --target i-XXXX
   ```

3. **On the Machine:**
   After starting the session, you can execute the following commands:
   ```bash
   sudo su -
   tail /var/log/cloud-init-output.log
   ```
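The manual steps above can be scripted. A rough sketch (not part of this change set) that lists running instances and tails the cloud-init log on each one over SSM, using only commands that appear elsewhere in this PR:

```bash
#!/bin/bash
# For every running instance, send the same cloud-init tail command via SSM.
for instance_id in $(aws ec2 describe-instances \
    --filters "Name=instance-state-name,Values=running" \
    --query "Reservations[*].Instances[*].InstanceId" --output text); do
  echo "checking $instance_id"
  aws ssm send-command \
    --document-name "AWS-RunShellScript" \
    --targets "Key=instanceids,Values=$instance_id" \
    --parameters 'commands=["sudo su - -c \"tail /var/log/cloud-init-output.log\""]'
done
```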
@@ -0,0 +1,32 @@
#!/bin/bash

# to be run as the swarms user
set -e
set -x
export ROOT=""
export HOME="${ROOT}/home/swarms"
unset CONDA_EXE
unset CONDA_PYTHON_EXE
export PATH="${ROOT}/var/swarms/agent_workspace/.venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"

# the venv is a directory, so test with -d
if [ ! -d "${ROOT}/var/swarms/agent_workspace/.venv/" ];
then
    virtualenv "${ROOT}/var/swarms/agent_workspace/.venv/"
fi
ls "${ROOT}/var/swarms/agent_workspace/"
. "${ROOT}/var/swarms/agent_workspace/.venv/bin/activate"

pip install fastapi uvicorn termcolor
# the app tries to install these on boot
pip install sniffio pydantic-core httpcore exceptiongroup annotated-types pydantic anyio httpx ollama
pip install -e "${ROOT}/opt/swarms/"
cd "${ROOT}/var/swarms/"
pip install -e "${ROOT}/opt/swarms-memory"
pip install "fastapi[standard]"
pip install "loguru"
pip install "hunter" # for tracing
pip install pydantic==2.8.2
pip install pathos || echo oops
pip freeze
# launch as systemd
# python /opt/swarms/api/main.py
@@ -0,0 +1,17 @@
#!/bin/bash

# to be run as the swarms user
set -e
set -x
export ROOT=""
export HOME="${ROOT}/home/swarms"
unset CONDA_EXE
unset CONDA_PYTHON_EXE
export PATH="${ROOT}/var/swarms/agent_workspace/.venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"

ls "${ROOT}/var/swarms/agent_workspace/"
. "${ROOT}/var/swarms/agent_workspace/.venv/bin/activate"

pip install -e "${ROOT}/opt/swarms/"
cd "${ROOT}/var/swarms/"
pip install -e "${ROOT}/opt/swarms-memory"
@@ -0,0 +1,52 @@
import os
import json
import boto3

# Create .cache directory if it doesn't exist
os.makedirs('.cache', exist_ok=True)

def cache(name, value):
    cache_file = f'.cache/{name}'
    if not os.path.isfile(cache_file):
        with open(cache_file, 'w') as f:
            # value may be a dict (the raw API response), so serialize it before writing
            f.write(json.dumps(value, default=str))

# Initialize Boto3 SSM client
ssm = boto3.client('ssm')

# List commands from AWS SSM
response = ssm.list_commands()

cache("aws_ssm_list_commands", response)

# Retrieve commands
print(response)
commands = response["Commands"]
run_ids = [cmd['CommandId'] for cmd in commands]
print(f"RUNIDS: {run_ids}")

# Check the status of each command
for command in commands:
    #print(command)
    command_id = command['CommandId']
    status = command['Status']
    #eG: command= {'CommandId': '820dcf47-e8d7-4c23-8e8a-bc64de2883ff', 'DocumentName': 'AWS-RunShellScript', 'DocumentVersion': '$DEFAULT', 'Comment': '', 'ExpiresAfter': datetime.datetime(2024, 12, 13, 12, 41, 24, 683000, tzinfo=tzlocal()), 'Parameters': {'commands': ['sudo su - -c "tail /var/log/cloud-init-output.log"']}, 'InstanceIds': [], 'Targets': [{'Key': 'instanceids', 'Values': ['i-073378237c5a9dda1']}], 'RequestedDateTime': datetime.datetime(2024, 12, 13, 10, 41, 24, 683000, tzinfo=tzlocal()), 'Status': 'Success', 'StatusDetails': 'Success', 'OutputS3Region': 'us-east-1', 'OutputS3BucketName': '', 'OutputS3KeyPrefix': '', 'MaxConcurrency': '50', 'MaxErrors': '0', 'TargetCount': 1, 'CompletedCount': 1, 'ErrorCount': 0, 'DeliveryTimedOutCount': 0, 'ServiceRole': '', 'NotificationConfig': {'NotificationArn': '', 'NotificationEvents': [], 'NotificationType': ''}, 'CloudWatchOutputConfig': {'CloudWatchLogGroupName': '', 'CloudWatchOutputEnabled': False}, 'TimeoutSeconds': 3600, 'AlarmConfiguration': {'IgnorePollAlarmFailure': False, 'Alarms': []}, 'TriggeredAlarms': []}], 'ResponseMetadata': {'RequestId': '535839c4-9b87-4526-9c01-ed57f07d21ef', 'HTTPStatusCode': 200, 'HTTPHeaders': {'server': 'Server', 'date': 'Fri, 13 Dec 2024 16:58:53 GMT', 'content-type': 'application/x-amz-json-1.1', 'content-length': '2068', 'connection': 'keep-alive', 'x-amzn-requestid': '535839c4-9b87-4526-9c01-ed57f07d21ef'}, 'RetryAttempts': 0}}

    if status == "Success":
        print(f"Check logs of {command_id}")
        # use ssm to fetch logs using CommandId

        # Assuming you have the command_id from the previous command output
        command_id = command['CommandId']
        instance_id = command['Targets'][0]['Values'][0]  # Get the instance ID

        # Fetching logs using CommandId
        log_response = ssm.get_command_invocation(
            CommandId=command_id,
            InstanceId=instance_id
        )
        print(log_response['StandardOutputContent'])  # Output logs
        print(log_response['StandardErrorContent'])   # Error logs (if any)
        print(f"aws ssm start-session --target {instance_id}")
@@ -0,0 +1,27 @@
#!/bin/bash
# run swarms via docker via systemd
# this script is called from ssm
# pull the new version via systemd

# now allow for reconfigure of the systemd
export WORKSOURCE="/opt/swarms/api"
mkdir -p "/var/run/swarms/secrets/"
mkdir -p "/home/swarms/.cache/huggingface/hub"

if ! grep -q "^OPENAI_KEY" "/var/run/swarms/secrets/env"; then

    OPENAI_KEY=$(aws ssm get-parameter --name "swarms_openai_key" | jq .Parameter.Value -r )
    export OPENAI_KEY
    echo "OPENAI_KEY=${OPENAI_KEY}" > "/var/run/swarms/secrets/env"
fi

sed -e "s!ROOT!!g" > /etc/nginx/sites-enabled/default < "${WORKSOURCE}/nginx/site.conf"
sed -e "s!ROOT!!g" > /etc/systemd/system/swarms-docker.service < "${WORKSOURCE}/systemd/swarms-docker.service"
grep . -h -n /etc/systemd/system/swarms-docker.service

systemctl daemon-reload
# starting and stopping the service pulls the docker image
#systemctl stop swarms-docker || journalctl -xeu swarms-docker
#systemctl start swarms-docker || journalctl -xeu swarms-docker
systemctl restart swarms-docker || journalctl -xeu swarms-docker.service
systemctl enable swarms-docker || journalctl -xeu swarms-docker
@@ -0,0 +1,54 @@
import time

import boto3
#from dateutil import tz


def parse_command_id(send_command_output):
    return send_command_output['Command']['CommandId']

def main():
    ec2_client = boto3.client('ec2')
    ssm_client = boto3.client('ssm')

    # Get the list of instance IDs and their states
    instances_response = ec2_client.describe_instances()
    instances = [
        (instance['InstanceId'], instance['State']['Name'])
        for reservation in instances_response['Reservations']
        for instance in reservation['Instances']
    ]

    for instance_id, state in instances:
        if state == 'running':
            print(f"Starting command for instance: {instance_id}")

            # Send command to the instance
            send_command_output = ssm_client.send_command(
                DocumentName="AWS-RunShellScript",
                Targets=[{"Key": "instanceids", "Values": [instance_id]}],
                Parameters={'commands': ['sudo su - -c "tail /var/log/cloud-init-output.log"']}
            )

            # Get the command ID
            command_id = parse_command_id(send_command_output)

            # Poll the command status four times, 20 seconds apart
            for _ in range(4):
                time.sleep(20)
                command_status = ssm_client.list_command_invocations(CommandId=command_id, Details=True)

                print(command_status)
                for invocation in command_status['CommandInvocations']:
                    if invocation['Status'] == 'Success':
                        for plugin in invocation['CommandPlugins']:
                            if plugin['Status'] == 'Success':
                                print(f"Output from instance {instance_id}:\n{plugin['Output']}")
                            else:
                                print(f"Error in plugin execution for instance {instance_id}: {plugin['StatusDetails']}")
                    else:
                        print(f"Command for instance {instance_id} is still in progress... Status: {invocation['Status']}")


if __name__ == "__main__":
    main()
@@ -0,0 +1,166 @@
#!/bin/bash
# review and improve
. ./.env # for secrets
set -e # stop on any error
#set -x # don't echo secrets
export BRANCH="feature/ec2"
#export ROOT="/mnt/data1/swarms"
export ROOT="" # empty
export WORKSOURCE="${ROOT}/opt/swarms/api"

if [ ! -d "${ROOT}/opt/swarms/install/" ]; then
    mkdir -p "${ROOT}/opt/swarms/install"
fi

if [ ! -f "${ROOT}/opt/swarms/install/apt.txt" ]; then
    apt update
    apt install --allow-change-held-packages -y git python3-virtualenv nginx
    apt install --allow-change-held-packages -y expect
    apt install --allow-change-held-packages -y jq netcat-traditional # missing packages
    snap install aws-cli --classic
    echo 1 >"${ROOT}/opt/swarms/install/apt.txt"
fi

if [ ! -f "${ROOT}/opt/swarms/install/setup.txt" ]; then
    #rm -rf ./src/swarms # oops
    #adduser --disabled-password --comment "" swarms --home "${ROOT}/home/swarms" || echo ignore
    adduser --disabled-password --gecos "" swarms --home "${ROOT}/home/swarms" || echo ignore
    git config --global --add safe.directory "${ROOT}/opt/swarms"
    git config --global --add safe.directory "${ROOT}/opt/swarms-memory"
    # we should have done this
    if [ ! -d "${ROOT}/opt/swarms/" ];
    then
        git clone https://github.com/jmikedupont2/swarms "${ROOT}/opt/swarms/"
    fi
    cd "${ROOT}/opt/swarms/" || exit 1 # "we need swarms"
    # git remote add local /time/2024/05/swarms/ || git remote set-url local /time/2024/05/swarms/
    # git fetch local
    # git stash
    git checkout --force $BRANCH
    git pull
    git log -2 --patch | head -1000
    if [ ! -d "${ROOT}/opt/swarms-memory/" ];
    then
        git clone https://github.com/The-Swarm-Corporation/swarms-memory "${ROOT}/opt/swarms-memory"
    fi
    # where the swarms will run
    mkdir -p "${ROOT}/var/swarms/agent_workspace/"
    mkdir -p "${ROOT}/home/swarms"
    chown -R swarms:swarms "${ROOT}/var/swarms/agent_workspace" "${ROOT}/home/swarms"

    # now for my local setup I also need to do this or we have to change the systemctl home var
    #mkdir -p "/home/swarms"
    #chown -R swarms:swarms "/home/swarms"

    # copy the run file from git
    cp "${WORKSOURCE}/boot.sh" "${ROOT}/var/swarms/agent_workspace/boot.sh"
    mkdir -p "${ROOT}/var/swarms/logs"
    chmod +x "${ROOT}/var/swarms/agent_workspace/boot.sh"
    chown -R swarms:swarms "${ROOT}/var/swarms/" "${ROOT}/home/swarms" "${ROOT}/opt/swarms"

    echo 1 >"${ROOT}/opt/swarms/install/setup.txt"
fi

if [ ! -f "${ROOT}/opt/swarms/install/boot.txt" ]; then
    # user install but do not start
    su -c "bash -e -x ${ROOT}/var/swarms/agent_workspace/boot.sh" swarms
    echo 1 >"${ROOT}/opt/swarms/install/boot.txt"
fi


## pull

if [ ! -f "${ROOT}/opt/swarms/install/pull.txt" ]; then
    cd "${ROOT}/opt/swarms/" || exit 1 # "we need swarms"
    # git fetch local
    # git stash
    git checkout --force $BRANCH
    git pull # $BRANCH
    echo 1 >"${ROOT}/opt/swarms/install/pull.txt"
fi

if [ ! -f "${ROOT}/opt/swarms/install/config.txt" ]; then
    mkdir -p "${ROOT}/var/run/swarms/secrets/"
    mkdir -p "${ROOT}/home/swarms/.cache/huggingface/hub"
    # aws ssm get-parameter --name "swarms_openai_key" > /root/openaikey.txt
    export OPENAI_KEY=`aws ssm get-parameter --name "swarms_openai_key" | jq .Parameter.Value -r `
    echo "OPENAI_KEY=${OPENAI_KEY}" > "${ROOT}/var/run/swarms/secrets/env"

    ## append new homedir
    echo "HF_HOME=${ROOT}/home/swarms/.cache/huggingface/hub" >> "${ROOT}/var/run/swarms/secrets/env"
    echo "HOME=${ROOT}/home/swarms" >> "${ROOT}/var/run/swarms/secrets/env"
    # attempt to move the workspace
    echo 'WORKSPACE_DIR=${STATE_DIRECTORY}' >> "${ROOT}/var/run/swarms/secrets/env"
    #EnvironmentFile=ROOT/var/run/swarms/secrets/env
    #ExecStart=ROOT/var/run/uvicorn/env/bin/uvicorn \
    #  --uds ROOT/run/uvicorn/uvicorn-swarms-api.sock \
    echo 1 >"${ROOT}/opt/swarms/install/config.txt"
fi

if [ ! -f "${ROOT}/opt/swarms/install/nginx.txt" ]; then
    mkdir -p ${ROOT}/var/log/nginx/swarms/
fi


# create sock
mkdir -p ${ROOT}/run/uvicorn/
chown -R swarms:swarms ${ROOT}/run/uvicorn

# reconfigure
# now we set up the service and replace ROOT in the files
#echo cat "${WORKSOURCE}/nginx/site.conf" \| sed -e "s!ROOT!${ROOT}!g"
sed -e "s!ROOT!${ROOT}!g" > /etc/nginx/sites-enabled/default < "${WORKSOURCE}/nginx/site.conf"
#cat /etc/nginx/sites-enabled/default

# ROOT/var/run/swarms/uvicorn-swarms-api.sock;
# access_log ROOT/var/log/nginx/swarms/access.log;
# error_log ROOT/var/log/nginx/swarms/error.log;
#echo cat "${WORKSOURCE}/systemd/uvicorn.service" \| sed -e "s!ROOT!/${ROOT}/!g"
#cat "${WORKSOURCE}/systemd/uvicorn.service"
sed -e "s!ROOT!${ROOT}!g" > /etc/systemd/system/swarms-uvicorn.service < "${WORKSOURCE}/systemd/uvicorn.service"
grep . -h -n /etc/systemd/system/swarms-uvicorn.service

# if [ -f ${ROOT}/etc/systemd/system/swarms-uvicorn.service ];
# then
#     cp ${ROOT}/etc/systemd/system/swarms-uvicorn.service /etc/systemd/system/swarms-uvicorn.service
# else
#     # allow for editing as non root
#     mkdir -p ${ROOT}/etc/systemd/system/
#     cp /etc/systemd/system/swarms-uvicorn.service ${ROOT}/etc/systemd/system/swarms-uvicorn.service
# fi

#
#chown -R mdupont:mdupont ${ROOT}/etc/systemd/system/
#/run/uvicorn/
# triage
chown -R swarms:swarms ${ROOT}/var/run/swarms/
# Dec 12 10:55:50 mdupont-G470 unbuffer[3921723]: OSError: [Errno 30] Read-only file system:
#cat /etc/systemd/system/swarms-uvicorn.service

# now fix the perms
mkdir -p ${ROOT}/opt/swarms/api/agent_workspace/try_except_wrapper/
chown -R swarms:swarms ${ROOT}/opt/swarms/api/

# always reload
systemctl daemon-reload
# systemctl start swarms-uvicorn || systemctl status swarms-uvicorn.service && journalctl -xeu swarms-uvicorn.service
systemctl start swarms-uvicorn || journalctl -xeu swarms-uvicorn.service
# systemctl status swarms-uvicorn.service
# journalctl -xeu swarms-uvicorn.service
systemctl enable swarms-uvicorn || journalctl -xeu swarms-uvicorn.service
systemctl enable nginx
systemctl start nginx

journalctl -xeu swarms-uvicorn.service | tail -200 || echo oops
systemctl status swarms-uvicorn.service || echo oops2

# now after swarms is up, we restart nginx
HOST="localhost"
PORT=5474
while ! nc -z $HOST $PORT; do
    sleep 1
    echo -n "."
done
echo "Port $PORT is now open!"

systemctl restart nginx
@@ -0,0 +1,80 @@
#!/bin/bash
# review and improve
. ./.env # for secrets
set -e # stop on any error
#set -x # don't echo
#export BRANCH="feature/ec2"
#export ROOT="/mnt/data1/swarms"
export ROOT="" # empty
export WORKSOURCE="${ROOT}/opt/swarms/api"

adduser --disabled-password --gecos "" swarms --home "${ROOT}/home/swarms" || echo ignore
git config --global --add safe.directory "${ROOT}/opt/swarms"
git config --global --add safe.directory "${ROOT}/opt/swarms-memory"

cd "${ROOT}/opt/swarms/" || exit 1 # "we need swarms"
#git checkout --force $BRANCH # we did this before
#git pull
git log -2 --patch | head -1000

mkdir -p "${ROOT}/var/swarms/agent_workspace/"
mkdir -p "${ROOT}/home/swarms"


cd "${ROOT}/opt/swarms/" || exit 1 # "we need swarms"
#git checkout --force $BRANCH
#git pull

cp "${WORKSOURCE}/boot_fast.sh" "${ROOT}/var/swarms/agent_workspace/boot_fast.sh"
mkdir -p "${ROOT}/var/swarms/logs"
chmod +x "${ROOT}/var/swarms/agent_workspace/boot_fast.sh"
chown -R swarms:swarms "${ROOT}/var/swarms/" "${ROOT}/home/swarms" "${ROOT}/opt/swarms"

# user install but do not start
su -c "bash -e -x ${ROOT}/var/swarms/agent_workspace/boot_fast.sh" swarms

cd "${ROOT}/opt/swarms/" || exit 1 # "we need swarms"
#git checkout --force $BRANCH
#git pull # $BRANCH

mkdir -p "${ROOT}/var/run/swarms/secrets/"
mkdir -p "${ROOT}/home/swarms/.cache/huggingface/hub"
# aws ssm get-parameter --name "swarms_openai_key" > /root/openaikey.txt
export OPENAI_KEY=`aws ssm get-parameter --name "swarms_openai_key" | jq .Parameter.Value -r `
echo "OPENAI_KEY=${OPENAI_KEY}" > "${ROOT}/var/run/swarms/secrets/env"

## append new homedir
echo "HF_HOME=${ROOT}/home/swarms/.cache/huggingface/hub" >> "${ROOT}/var/run/swarms/secrets/env"
echo "HOME=${ROOT}/home/swarms" >> "${ROOT}/var/run/swarms/secrets/env"
# attempt to move the workspace
echo 'WORKSPACE_DIR=${STATE_DIRECTORY}' >> "${ROOT}/var/run/swarms/secrets/env"

# setup the systemd service again
sed -e "s!ROOT!${ROOT}!g" > /etc/nginx/sites-enabled/default < "${WORKSOURCE}/nginx/site.conf"
sed -e "s!ROOT!${ROOT}!g" > /etc/systemd/system/swarms-uvicorn.service < "${WORKSOURCE}/systemd/uvicorn.service"
grep . -h -n /etc/systemd/system/swarms-uvicorn.service

chown -R swarms:swarms ${ROOT}/var/run/swarms/
mkdir -p ${ROOT}/opt/swarms/api/agent_workspace/try_except_wrapper/
chown -R swarms:swarms ${ROOT}/opt/swarms/api/

# always reload
systemctl daemon-reload
systemctl start swarms-uvicorn || journalctl -xeu swarms-uvicorn.service
systemctl enable swarms-uvicorn || journalctl -xeu swarms-uvicorn.service
systemctl enable nginx
systemctl start nginx

journalctl -xeu swarms-uvicorn.service | tail -200 || echo oops
systemctl status swarms-uvicorn.service || echo oops2

# now after swarms is up, we restart nginx
HOST="localhost"
PORT=5474
while ! nc -z $HOST $PORT; do
    sleep 1
    echo -n "."
done
echo "Port $PORT is now open!"

systemctl restart nginx
@@ -0,0 +1,100 @@
##
# You should look at the following URL's in order to grasp a solid understanding
# of Nginx configuration files in order to fully unleash the power of Nginx.
# https://www.nginx.com/resources/wiki/start/
# https://www.nginx.com/resources/wiki/start/topics/tutorials/config_pitfalls/
# https://wiki.debian.org/Nginx/DirectoryStructure
#
# In most cases, administrators will remove this file from sites-enabled/ and
# leave it as reference inside of sites-available where it will continue to be
# updated by the nginx packaging team.
#
# This file will automatically load configuration files provided by other
# applications, such as Drupal or Wordpress. These applications will be made
# available underneath a path with that package name, such as /drupal8.
#
# Please see /usr/share/doc/nginx-doc/examples/ for more detailed examples.
##

# Default server configuration
#
server {
	listen 80 default_server;
	listen [::]:80 default_server;

	# SSL configuration
	#
	# listen 443 ssl default_server;
	# listen [::]:443 ssl default_server;
	#
	# Note: You should disable gzip for SSL traffic.
	# See: https://bugs.debian.org/773332
	#
	# Read up on ssl_ciphers to ensure a secure configuration.
	# See: https://bugs.debian.org/765782
	#
	# Self signed certs generated by the ssl-cert package
	# Don't use them in a production server!
	#
	# include snippets/snakeoil.conf;

	root /var/www/html;

	# Add index.php to the list if you are using PHP
	index index.html index.htm index.nginx-debian.html;

	server_name _;

	location / {
		# First attempt to serve request as file, then
		# as directory, then fall back to displaying a 404.
		# try_files $uri $uri/ =404;
		autoindex on;
	}
	location /swarms {
		proxy_pass http://unix:/var/run/swarms/uvicorn-swarms-api.sock;
	}

	# location /agentartificial {
	# 	autoindex on;
	# 	disable_symlinks off;
	# }

	# pass PHP scripts to FastCGI server
	#
	#location ~ \.php$ {
	#	include snippets/fastcgi-php.conf;
	#
	#	# With php-fpm (or other unix sockets):
	#	fastcgi_pass unix:/run/php/php7.4-fpm.sock;
	#	# With php-cgi (or other tcp sockets):
	#	fastcgi_pass 127.0.0.1:9000;
	#}

	# deny access to .htaccess files, if Apache's document root
	# concurs with nginx's one
	#
	#location ~ /\.ht {
	#	deny all;
	#}
}


# Virtual Host configuration for example.com
#
# You can move that to a different file under sites-available/ and symlink that
# to sites-enabled/ to enable it.
#
#server {
#	listen 80;
#	listen [::]:80;
#
#	server_name example.com;
#
#	root /var/www/example.com;
#	index index.html;
#
#	location / {
#		try_files $uri $uri/ =404;
#	}
#}
@@ -0,0 +1,14 @@
# from https://github.com/neamaddin/debian-fastapi-server
server {
	listen [::]:80;
	listen 80;
	server_name swarms;
	access_log ROOT/var/log/nginx/swarms/access.log;
	error_log ROOT/var/log/nginx/swarms/error.log;
	add_header X-Content-Type-Options "nosniff" always;
	add_header X-XSS-Protection "1; mode=block" always;

	location / {
		proxy_pass http://127.0.0.1:5474;
	}
}
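Once nginx picks up this site, a quick hedged smoke test; the exact docs path depends on how the app mounts its routes (the test script later in this PR probes `/v1/docs` on port 80):

```bash
curl -s http://localhost/v1/docs | head -5       # through nginx on port 80
curl -s http://localhost:5474/v1/docs | head -5  # directly against uvicorn
```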
@@ -0,0 +1,194 @@
# pip freeze
aiofiles==24.1.0
aiohappyeyeballs==2.4.4
aiohttp==3.11.10
aiosignal==1.3.2
annotated-types==0.7.0
anyio==4.7.0
asgiref==3.8.1
asyncio==3.4.3
attrs==24.3.0
backoff==2.2.1
bcrypt==4.2.1
build==1.2.2.post1
cachetools==5.5.0
certifi==2024.12.14
chardet==5.2.0
charset-normalizer==3.4.0
chroma-hnswlib==0.7.6
chromadb==0.5.20
click==8.1.7
clusterops==0.1.5
coloredlogs==15.0.1
dataclasses-json==0.6.7
Deprecated==1.2.15
dill==0.3.9
distro==1.9.0
dnspython==2.7.0
doc-master==0.0.2
docstring_parser==0.16
durationpy==0.9
email_validator==2.2.0
exceptiongroup==1.2.2
faiss-cpu==1.9.0.post1
fastapi==0.115.6
fastapi-cli==0.0.7
filelock==3.16.1
flatbuffers==24.3.25
frozenlist==1.5.0
fsspec==2024.10.0
google-auth==2.37.0
googleapis-common-protos==1.66.0
GPUtil==1.4.0
greenlet==3.1.1
grpcio==1.68.1
h11==0.14.0
httpcore==1.0.7
httptools==0.6.4
httpx==0.27.2
huggingface-hub==0.27.0
humanfriendly==10.0
hunter==3.7.0
idna==3.10
importlib_metadata==8.5.0
importlib_resources==6.4.5
Jinja2==3.1.4
jiter==0.8.2
joblib==1.4.2
jsonpatch==1.33
jsonpointer==3.0.0
jsonschema==4.23.0
jsonschema-specifications==2024.10.1
kubernetes==31.0.0
langchain-community==0.0.29
langchain-core==0.1.53
langsmith==0.1.147
litellm==1.55.3
loguru==0.7.3
lxml==5.3.0
manhole==1.8.1
markdown-it-py==3.0.0
MarkupSafe==3.0.2
marshmallow==3.23.1
mdurl==0.1.2
mmh3==5.0.1
monotonic==1.6
mpmath==1.3.0
msgpack==1.1.0
multidict==6.1.0
multiprocess==0.70.17
mypy-extensions==1.0.0
networkx==3.4.2
numpy==1.26.4
nvidia-cublas-cu12==12.4.5.8
nvidia-cuda-cupti-cu12==12.4.127
nvidia-cuda-nvrtc-cu12==12.4.127
nvidia-cuda-runtime-cu12==12.4.127
nvidia-cudnn-cu12==9.1.0.70
nvidia-cufft-cu12==11.2.1.3
nvidia-curand-cu12==10.3.5.147
nvidia-cusolver-cu12==11.6.1.9
nvidia-cusparse-cu12==12.3.1.170
nvidia-nccl-cu12==2.21.5
nvidia-nvjitlink-cu12==12.4.127
nvidia-nvtx-cu12==12.4.127
oauthlib==3.2.2
ollama==0.3.3
onnxruntime==1.20.1
openai==1.58.0
opentelemetry-api==1.29.0
opentelemetry-exporter-otlp-proto-common==1.29.0
opentelemetry-exporter-otlp-proto-grpc==1.29.0
opentelemetry-instrumentation==0.50b0
opentelemetry-instrumentation-asgi==0.50b0
opentelemetry-instrumentation-fastapi==0.50b0
opentelemetry-proto==1.29.0
opentelemetry-sdk==1.29.0
opentelemetry-semantic-conventions==0.50b0
opentelemetry-util-http==0.50b0
orjson==3.10.12
overrides==7.7.0
packaging==23.2
pandas==2.2.3
parsimonious==0.10.0
pathos==0.3.3
pillow==11.0.0
pinecone==5.4.2
pinecone-plugin-inference==3.1.0
pinecone-plugin-interface==0.0.7
posthog==3.7.4
pox==0.3.5
ppft==1.7.6.9
propcache==0.2.1
protobuf==5.29.1
psutil==6.1.0
pyasn1==0.6.1
pyasn1_modules==0.4.1
pydantic==2.8.2
pydantic_core==2.20.1
Pygments==2.18.0
PyJWT==2.10.1
pypdf==5.1.0
PyPika==0.48.9
pyproject_hooks==1.2.0
pytesseract==0.3.13
python-dateutil==2.9.0.post0
python-docx==1.1.2
python-dotenv==1.0.1
python-magic==0.4.27
python-multipart==0.0.20
pytz==2024.2
PyYAML==6.0.2
ray==2.40.0
referencing==0.35.1
regex==2024.11.6
reportlab==4.2.5
requests==2.32.3
requests-oauthlib==2.0.0
requests-toolbelt==1.0.0
rich==13.9.4
rich-toolkit==0.12.0
rpds-py==0.22.3
rsa==4.9
safetensors==0.4.5
scikit-learn==1.6.0
scipy==1.14.1
sentence-transformers==3.3.1
sentry-sdk==2.19.2
setuptools==75.6.0
shellingham==1.5.4
singlestoredb==1.10.0
six==1.17.0
sniffio==1.3.1
SQLAlchemy==2.0.36
sqlparams==6.1.0
starlette==0.41.3
swarm-models==0.2.7
-e git+https://github.com/jmikedupont2/swarms@cc67de0b713449f47e02de782c41e429d224f431#egg=swarms
# Editable Git install with no remote (swarms-memory==0.1.2)
-e /opt/swarms-memory
sympy==1.13.1
tenacity==8.5.0
termcolor==2.5.0
threadpoolctl==3.5.0
tiktoken==0.8.0
tokenizers==0.21.0
toml==0.10.2
torch==2.5.1
tqdm==4.67.1
transformers==4.47.1
triton==3.1.0
typer==0.15.1
typing-inspect==0.9.0
typing_extensions==4.12.2
tzdata==2024.2
urllib3==2.2.3
uvicorn==0.34.0
uvloop==0.21.0
watchfiles==1.0.3
websocket-client==1.8.0
websockets==14.1
wheel==0.45.1
wrapt==1.17.0
yarl==1.18.3
zipp==3.21.0
@@ -0,0 +1,4 @@
export ROOT=/mnt/data1/swarms;
git commit -m 'wip'
sudo rm "${ROOT}/opt/swarms/install/pull.txt"
sudo bash ./install.sh
@@ -0,0 +1,6 @@
export ROOT=""
#/mnt/data1/swarms;
#git commit -m 'wip' -a

sudo rm ${ROOT}/opt/swarms/install/*;
sudo bash ./install.sh
@@ -0,0 +1,14 @@
#!/bin/bash

# EDIT: we need to make sure the instance is running
# Get the list of instance IDs
instance_ids=$(aws ec2 describe-instances --query "Reservations[*].Instances[*].InstanceId" --output text)

# Loop through each instance ID and start a session
for instance_id in $instance_ids; do
    echo "Starting session for instance: $instance_id"

    # Start a session and execute commands (replace with your commands)
    aws ssm start-session --target "$instance_id" --document-name "AWS-StartInteractiveCommand" --parameters 'commands=["sudo su -","tail /var/log/cloud-init-output.log"]'

done
@@ -0,0 +1,35 @@
# Get the list of instance IDs and their states
instances=$(aws ec2 describe-instances --query "Reservations[*].Instances[*].[InstanceId,State.Name]" --output text)

# aws ssm send-command --document-name AWS-RunShellScript --targets Key=instanceids,Values=i-073378237c5a9dda1 --parameters 'commands=["sudo su - -c \"tail /var/log/cloud-init-output.log\""]'

parse_command_id(){
    # send_command_output
    local send_command_output=$1
    echo "$send_command_output" | jq -r '.Command.CommandId'
}

# Loop through each instance ID and state
while read -r instance_id state; do
    if [[ $state == "running" ]]; then
        echo "Starting session for instance: $instance_id"

        # Start a session and execute commands (replace with your commands)
        #aws ssm start-session --target "$instance_id" --document-name "AWS-StartInteractiveCommand" --parameters 'commands=["sudo su -","tail /var/log/cloud-init-output.log"]'

        #--target "$instance_id"
        send_command_output=$(aws ssm send-command --document-name "AWS-RunShellScript" --targets "Key=instanceids,Values=$instance_id" --parameters 'commands=["sudo su - -c \"tail /var/log/cloud-init-output.log\""]')


        # now get the command id (pass the captured output, not the literal variable name)
        command_id=$(parse_command_id "$send_command_output")

        # now for 4 times, sleep 1 sec,
        for i in {1..4}; do
            sleep 1
            command_status=$(aws ssm list-command-invocations --command-id "$command_id" --details)
            echo "$command_status"
        done

    fi
done <<< "$instances"
@@ -0,0 +1,96 @@
#!/bin/bash

# this is the install script
# install_script = "/opt/swarms/api/rundocker.sh"
# called on boot.

# this is the refresh script called from ssm for a refresh
# #refresh_script = "/opt/swarms/api/docker-boot.sh"

# file not found
#
pwd
ls -latr
. ./.env # for secrets
set -e # stop on any error
#export ROOT="" # empty
export WORKSOURCE="/opt/swarms/api"

adduser --disabled-password --gecos "" swarms --home "/home/swarms" || echo ignore
git config --global --add safe.directory "/opt/swarms"
git config --global --add safe.directory "/opt/swarms-memory"

cd "/opt/swarms/" || exit 1 # "we need swarms"
git log -2 --patch | head -1000

mkdir -p "/var/swarms/agent_workspace/"
mkdir -p "/home/swarms"


cd "/opt/swarms/" || exit 1 # "we need swarms"

mkdir -p "/var/swarms/logs"
chown -R swarms:swarms "/var/swarms/" "/home/swarms" "/opt/swarms"

if [ -f "/var/swarms/agent_workspace/boot_fast.sh" ];
then
    chmod +x "/var/swarms/agent_workspace/boot_fast.sh" || echo failed

    # user install but do not start
    su -c "bash -e -x /var/swarms/agent_workspace/boot_fast.sh" swarms
fi
cd "/opt/swarms/" || exit 1 # "we need swarms"

mkdir -p "/var/run/swarms/secrets/"
mkdir -p "/home/swarms/.cache/huggingface/hub"

set +x
OPENAI_KEY=$(aws ssm get-parameter --name "swarms_openai_key" | jq .Parameter.Value -r )
export OPENAI_KEY
echo "OPENAI_KEY=${OPENAI_KEY}" > "/var/run/swarms/secrets/env"
set -x

## append new homedir
# check if the entry exists already before appending pls
if ! grep -q "HF_HOME" "/var/run/swarms/secrets/env"; then
    echo "HF_HOME=/home/swarms/.cache/huggingface/hub" >> "/var/run/swarms/secrets/env"
fi

if ! grep -q "^HOME" "/var/run/swarms/secrets/env"; then
    echo "HOME=/home/swarms" >> "/var/run/swarms/secrets/env"
fi

if ! grep -q "^WORKSPACE_DIR" "/var/run/swarms/secrets/env"; then
    # attempt to move the workspace
    echo "WORKSPACE_DIR=\${STATE_DIRECTORY}" >> "/var/run/swarms/secrets/env"
fi

# setup the systemd service again
sed -e "s!ROOT!!g" > /etc/nginx/sites-enabled/default < "${WORKSOURCE}/nginx/site.conf"
sed -e "s!ROOT!!g" > /etc/systemd/system/swarms-docker.service < "${WORKSOURCE}/systemd/swarms-docker.service"
grep . -h -n /etc/systemd/system/swarms-docker.service

chown -R swarms:swarms /var/run/swarms/
mkdir -p /opt/swarms/api/agent_workspace/try_except_wrapper/
chown -R swarms:swarms /opt/swarms/api/

# always reload
systemctl daemon-reload
systemctl start swarms-docker || journalctl -xeu swarms-docker
systemctl enable swarms-docker || journalctl -xeu swarms-docker
systemctl enable nginx
systemctl start nginx

journalctl -xeu swarms-docker | tail -200 || echo oops
systemctl status swarms-docker || echo oops2

# now after swarms is up, we restart nginx
HOST="localhost"
PORT=5474
while ! nc -z $HOST $PORT; do
    sleep 1
    echo -n "."
done
echo "Port ${PORT} is now open!"

systemctl restart nginx
@@ -0,0 +1,32 @@
import time

import boto3
#from dateutil import tz


def parse_command_id(send_command_output):
    return send_command_output['Command']['CommandId']

def main():
    ec2_client = boto3.client('ec2')
    ssm_client = boto3.client('ssm')

    # Get the list of instance IDs and their states
    instances_response = ec2_client.describe_instances()

    instances = [
        (instance['InstanceId'], instance['State']['Name'])
        for reservation in instances_response['Reservations']
        for instance in reservation['Instances']
    ]
    for reservation in instances_response['Reservations']:
        for instance in reservation['Instances']:
            print(instance)
            instance_id = instance['InstanceId']
            state = instance['State']['Name']
            if state == 'running':
                print(f"Starting command for instance: {instance_id}")
                print(f"aws ssm start-session --target {instance_id}")

if __name__ == "__main__":
    main()
@@ -0,0 +1,26 @@
# derived from https://github.com/encode/uvicorn/issues/678
# derived from https://blog.container-solutions.com/running-docker-containers-with-systemd
[Unit]
Description=swarms
After=docker.service
#Required=docker.service

[Service]
EnvironmentFile=ROOT/var/run/swarms/secrets/env
RestartSec=10
TimeoutStartSec=0
Restart=always
ExecStartPre=-/usr/bin/docker stop %n
ExecStartPre=-/usr/bin/docker rm %n
ExecStartPre=/usr/bin/docker pull h4ckermike/swarms-api:experimental
#ExecStart=/usr/bin/docker run --rm --name %n h4ckermike/swarms-api:experimental
ExecStart=/usr/bin/docker run --rm --name %n --network host h4ckermike/swarms-api:experimental
StandardOutput=file:/var/log/swarms_systemd.log
StandardError=file:/var/log/swarms_systemd.log
ExecReload=/bin/kill -HUP ${MAINPID}

Restart=always

[Install]
WantedBy=multi-user.target
@@ -0,0 +1,82 @@
convert this to docker and remove systemd
# derived from https://github.com/encode/uvicorn/issues/678
[Unit]
Description=swarms
After=network.target

[Service]
Type=simple
User=swarms
Group=swarms
DynamicUser=true
WorkingDirectory=ROOT/opt/swarms/api/
ReadWritePaths=ROOT/opt/swarms/api/agent_workspace/
StateDirectory=swarms_state
PrivateTmp=true
ProtectHome=true
EnvironmentFile=ROOT/var/run/swarms/secrets/env

# for debugging use
#ExecStart=/usr/bin/unbuffer ROOT/var/run/uvicorn/env/bin/uvicorn

# this runs and is modifiable by the agent, so this can be destabilized
# FIXME: containerize the run (chroot jail?)
ExecStart=/usr/bin/unbuffer ROOT/var/swarms/agent_workspace/.venv/bin/uvicorn \
    --proxy-headers \
    --forwarded-allow-ips='*' \
    --workers=4 \
    --port=5474 \
    main:create_app
StandardOutput=file:/var/log/swarms_systemd.log
StandardError=file:/var/log/swarms_systemd.log
ExecReload=/bin/kill -HUP ${MAINPID}
RestartSec=10
Restart=always

[Install]
WantedBy=multi-user.target

To convert the provided systemd service configuration to a Docker container, you'll need a `Dockerfile` and potentially a `docker-compose.yml` file for easier management. Here's a basic setup:

**Dockerfile**:
```Dockerfile
# Use the appropriate base image
FROM python:3.9-slim

# Set working directory
WORKDIR /opt/swarms/api

# Copy application files
COPY . .

# Install dependencies (assuming requirements.txt is in root)
RUN pip install -r requirements.txt

# Set up environment variables
COPY ROOT/var/run/swarms/secrets/env /root/env
ENV $(cat /root/env | xargs)

# Expose the port
EXPOSE 5474

# Command to run the application
CMD ["/usr/bin/unbuffer", "/root/swarms/agent_workspace/.venv/bin/uvicorn", "--proxy-headers", "--forwarded-allow-ips='*'", "--workers=4", "--port=5474", "main:create_app"]
```

**docker-compose.yml** (optional):
```yaml

```

### Steps to Build and Run
1. Save the `Dockerfile` and `docker-compose.yml` in your project directory.
2. Build and run the container using:
```bash
docker-compose up --build
```

### Notes
- Adjust the Python base image version as needed.
- Ensure that your `requirements.txt` includes the necessary dependencies for your application.
- You can configure volume mounts and environment variables as per your requirement.
- Logs can be managed by your logging mechanism or Docker logging options.
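The `docker-compose.yml` block in the reply above was left empty. A minimal hedged sketch that matches the Dockerfile and port shown; the service name and env-file path are assumptions, not part of the original answer:

```yaml
services:
  swarms-api:
    build: .                           # build from the Dockerfile above (assumption)
    ports:
      - "5474:5474"                    # port exposed by the Dockerfile
    env_file:
      - /var/run/swarms/secrets/env    # secrets file written by the install scripts
```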
@@ -0,0 +1,30 @@
#!/bin/python3
# rewrite this to also cancel-spot-instance-requests

import boto3

# Create an EC2 client
ec2_client = boto3.client('ec2')

# Retrieve instance IDs
response = ec2_client.describe_instances()


for reservation in response['Reservations']:
    for instance in reservation['Instances']:
        print(instance)

instance_ids = [instance['InstanceId']
                for reservation in response['Reservations']
                for instance in reservation['Instances']]

# Terminate instances
for instance_id in instance_ids:
    print(f"Terminating instance: {instance_id}")
    ec2_client.terminate_instances(InstanceIds=[instance_id])

# Check the status of the terminated instances
terminated_instances = ec2_client.describe_instances(InstanceIds=instance_ids)
for reservation in terminated_instances['Reservations']:
    for instance in reservation['Instances']:
        print(f"Instance ID: {instance['InstanceId']}, State: {instance['State']['Name']}")
@@ -0,0 +1,7 @@
#!/bin/bash
instance_ids=$(aws ec2 describe-instances --query "Reservations[*].Instances[*].InstanceId" --output text)

for instance_id in $instance_ids; do
    echo "terminate instance: $instance_id"
    aws ec2 terminate-instances --instance-ids "$instance_id"
done
@@ -0,0 +1,34 @@
#!/bin/python3

import boto3

# Create an EC2 client
ec2_client = boto3.client('ec2')

# Retrieve instance IDs and Spot Instance Request IDs
response = ec2_client.describe_instances()
instance_ids = []
spot_request_ids = []

for reservation in response['Reservations']:
    for instance in reservation['Instances']:
        print(instance)
        instance_ids.append(instance['InstanceId'])
        if 'SpotInstanceRequestId' in instance:
            spot_request_ids.append(instance['SpotInstanceRequestId'])

# Terminate instances
for instance_id in instance_ids:
    print(f"Terminating instance: {instance_id}")
    ec2_client.terminate_instances(InstanceIds=[instance_id])

# Cancel Spot Instance Requests
for spot_request_id in spot_request_ids:
    print(f"Cancelling Spot Instance Request: {spot_request_id}")
    ec2_client.cancel_spot_instance_requests(SpotInstanceRequestIds=[spot_request_id])

# Check the status of the terminated instances
terminated_instances = ec2_client.describe_instances(InstanceIds=instance_ids)
for reservation in terminated_instances['Reservations']:
    for instance in reservation['Instances']:
        print(f"Instance ID: {instance['InstanceId']}, State: {instance['State']['Name']}")
@@ -0,0 +1,2 @@
For the Terraform scripts, see this git repo for more information:
https://github.com/jmikedupont2/swarms-terraform
@@ -0,0 +1,18 @@

#/mnt/data1/swarms/var/run/uvicorn/env/bin/uvicorn
# --no-access-log \

#python -m pdb
#/mnt/data1/swarms/var/swarms/agent_workspace/.venv/bin/uvicorn \

. /mnt/data1/swarms/var/swarms/agent_workspace/.venv/bin/activate
/mnt/data1/swarms/var/swarms/agent_workspace/.venv/bin/python3 ~mdupont/2024/05/swarms/api/uvicorn_runner.py \
    --proxy-headers \
    --port=54748 \
    --forwarded-allow-ips='*' \
    --workers=1 \
    --log-level=debug \
    --uds /mnt/data1/swarms/run/uvicorn/uvicorn-swarms-api.sock \
    main:app

# _.asgi:application
@@ -0,0 +1,10 @@

#/mnt/data1/swarms/var/run/uvicorn/env/bin/uvicorn
# --no-access-log \

#python -m pdb
#/mnt/data1/swarms/var/swarms/agent_workspace/.venv/bin/uvicorn \

. /mnt/data1/swarms/var/swarms/agent_workspace/.venv/bin/activate
pip install hunter
/mnt/data1/swarms/var/swarms/agent_workspace/.venv/bin/python3 ~mdupont/2024/05/swarms/api/uvicorn_runner.py
@ -0,0 +1,66 @@
# Probe the /v1/docs endpoint of every running EC2 instance.
import time
import requests
import boto3
#from dateutil import tz


def parse_command_id(send_command_output):
    return send_command_output['Command']['CommandId']


def main():
    ec2_client = boto3.client('ec2')
    ssm_client = boto3.client('ssm')

    # Get the list of instance IDs and their states
    instances_response = ec2_client.describe_instances()

    for reservation in instances_response['Reservations']:
        for instance in reservation['Instances']:
            state = instance['State']["Name"]
            instance_id = instance['InstanceId']
            if state == 'running':
                ip = instance["PublicIpAddress"]
                instance_type = instance["InstanceType"]
                BASE_URL = f"http://{ip}:80/v1"
                target = f"{BASE_URL}/docs"
                print(f"Starting command for instance: {instance_id} {target} {instance_type}")
                try:
                    response = requests.get(target, timeout=8)
                    print(f"got response: {instance_id} {target} {instance_type} {response}")
                except Exception as exp:
                    print(f"got error: {instance_id} {target} {instance_type} {exp}")


# Sample entry from describe_instances(), kept for reference:
# {'AmiLaunchIndex': 0, 'ImageId': 'ami-0e2c8caa4b6378d8c',
#'InstanceId': 'i-0d41e4263f40babec',
#'InstanceType': 't3.small',
#'KeyName': 'mdupont-deployer-key', 'LaunchTime': datetime.datetime(2024, 12, 14, 16, 1, 50, tzinfo=tzutc()),
# 'Monitoring': {'State': 'disabled'},
# 'Placement': {'AvailabilityZone': 'us-east-1a', 'GroupName': '', 'Tenancy': 'default'}, 'PrivateDnsName': 'ip-10-0-4-18.ec2.internal', 'PrivateIpAddress': '10.0.4.18', 'ProductCodes': [],
#'PublicDnsName': 'ec2-3-228-14-220.compute-1.amazonaws.com',
#'PublicIpAddress': '3.228.14.220',
# 'State': {'Code': 16, 'Name': 'running'}, 'StateTransitionReason': '', 'SubnetId': 'subnet-057c90cfe7b2e5646', 'VpcId': 'vpc-04f28c9347af48b55', 'Architecture': 'x86_64',
# 'BlockDeviceMappings': [{'DeviceName': '/dev/sda1',
# 'Ebs': {'AttachTime': datetime.datetime(2024, 12, 14, 16, 1, 50, tzinfo=tzutc()), 'DeleteOnTermination': True, 'Status': 'attached', 'VolumeId': 'vol-0257131dd2883489b'}}], 'ClientToken': 'b5864f17-9e56-2d84-fc59-811abf8e6257', 'EbsOptimized': False, 'EnaSupport': True, 'Hypervisor': 'xen', 'IamInstanceProfile':
# {'Arn': 'arn:aws:iam::767503528736:instance-profile/swarms-20241213150629570500000003', 'Id': 'AIPA3FMWGOMQKC4UE2UFO'}, 'NetworkInterfaces': [
# {'Association':
# {'IpOwnerId': 'amazon', 'PublicDnsName': 'ec2-3-228-14-220.compute-1.amazonaws.com', 'PublicIp': '3.228.14.220'}, 'Attachment':
# {'AttachTime': datetime.datetime(2024, 12, 14, 16, 1, 50, tzinfo=tzutc()), 'AttachmentId': 'eni-attach-009b54c039077324e', 'DeleteOnTermination': True, 'DeviceIndex': 0, 'Status': 'attached', 'NetworkCardIndex': 0}, 'Description': '', 'Groups': [
# {'GroupName': 'swarms-20241214133959057000000001', 'GroupId': 'sg-03c9752b62d0bcfe4'}], 'Ipv6Addresses': [], 'MacAddress': '02:c9:0b:47:cb:df', 'NetworkInterfaceId': 'eni-08661c8b4777c65c7', 'OwnerId': '767503528736', 'PrivateDnsName': 'ip-10-0-4-18.ec2.internal', 'PrivateIpAddress': '10.0.4.18', 'PrivateIpAddresses': [
# {'Association':
# {'IpOwnerId': 'amazon', 'PublicDnsName': 'ec2-3-228-14-220.compute-1.amazonaws.com', 'PublicIp': '3.228.14.220'}, 'Primary': True, 'PrivateDnsName': 'ip-10-0-4-18.ec2.internal', 'PrivateIpAddress': '10.0.4.18'}], 'SourceDestCheck': True, 'Status': 'in-use', 'SubnetId': 'subnet-057c90cfe7b2e5646', 'VpcId': 'vpc-04f28c9347af48b55', 'InterfaceType': 'interface'}], 'RootDeviceName': '/dev/sda1', 'RootDeviceType': 'ebs', 'SecurityGroups': [
# {'GroupName': 'swarms-20241214133959057000000001', 'GroupId': 'sg-03c9752b62d0bcfe4'}], 'SourceDestCheck': True, 'Tags': [
# {'Key': 'Name', 'Value': 'swarms-size-t3.small'},
# {'Key': 'aws:ec2launchtemplate:id', 'Value': 'lt-0e618a900bd331cfe'},
# {'Key': 'aws:autoscaling:groupName', 'Value': 'swarms-size-t3.small-2024121416014474500000002f'},
# {'Key': 'aws:ec2launchtemplate:version', 'Value': '1'}], 'VirtualizationType': 'hvm', 'CpuOptions':
# {'CoreCount': 1, 'ThreadsPerCore': 2}, 'CapacityReservationSpecification':
# {'CapacityReservationPreference': 'open'}, 'HibernationOptions':
# {'Configured': False}, 'MetadataOptions':
# {'State': 'applied', 'HttpTokens': 'required', 'HttpPutResponseHopLimit': 2, 'HttpEndpoint': 'enabled', 'HttpProtocolIpv6': 'disabled', 'InstanceMetadataTags': 'disabled'}, 'EnclaveOptions':
# {'Enabled': False}, 'BootMode': 'uefi-preferred', 'PlatformDetails': 'Linux/UNIX', 'UsageOperation': 'RunInstances', 'UsageOperationUpdateTime': datetime.datetime(2024, 12, 14, 16, 1, 50, tzinfo=tzutc()), 'PrivateDnsNameOptions':
# {'HostnameType': 'ip-name', 'EnableResourceNameDnsARecord': False, 'EnableResourceNameDnsAAAARecord': False}, 'MaintenanceOptions':
# {'AutoRecovery': 'default'}, 'CurrentInstanceBootMode': 'uefi'}


if __name__ == "__main__":
    main()
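If only running instances matter, the state filtering can be pushed down to EC2 itself; a sketch of the narrowed call (same boto3 client as above, using the standard describe_instances filter):

import boto3

ec2_client = boto3.client('ec2')
# Ask EC2 for running instances only, instead of filtering client-side.
running = ec2_client.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)
for reservation in running["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance.get("PublicIpAddress"))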
@ -0,0 +1,2 @@
# mike's tools
apt install -y emacs-nox #tmux (this tries to install postfix)
@ -0,0 +1,58 @@
#!/mnt/data1/swarms/var/swarms/agent_workspace/.venv/bin/python
# -*- coding: utf-8 -*-
import re
import sys
import pdb
import logging

# from uvicorn.main import main
# if __name__ == '__main__':
#     sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])

# try:
#     print("main")
#     pdb.set_trace()
#     ret = main()
#     print(ret)
# except Exception as e:
#     print(e)

#// sys.exit(main())

import uvicorn
from uvicorn.config import LOGGING_CONFIG
import hunter

# Trace every non-stdlib call and print it.
hunter.trace(stdlib=False,
             action=hunter.CallPrinter)


def main():
    # root_path = ''
    # if len(sys.argv) >= 2:
    #     root_path = sys.argv[1]

    ##
    # %(name)s : uvicorn, uvicorn.error, ... . Not insightful at all.
    LOGGING_CONFIG["formatters"]["access"]["fmt"] = '%(asctime)s %(levelprefix)s %(client_addr)s - "%(request_line)s" %(status_code)s'
    LOGGING_CONFIG["formatters"]["default"]["fmt"] = "%(asctime)s %(levelprefix)s %(message)s"

    date_fmt = "%Y-%m-%d %H:%M:%S"
    LOGGING_CONFIG["formatters"]["default"]["datefmt"] = date_fmt
    LOGGING_CONFIG["formatters"]["access"]["datefmt"] = date_fmt

    ##
    # Apply the same format to every handler on the already-registered loggers.
    formatter = logging.Formatter(
        LOGGING_CONFIG["formatters"]["default"]["fmt"], datefmt=date_fmt
    )
    for logger_name in logging.root.manager.loggerDict.keys():
        print(logger_name)
        override_logger = logging.getLogger(logger_name)
        for handler in override_logger.handlers:
            print(handler)
            handler.setFormatter(formatter)

    uvicorn.run(
        "main:app",
        host="127.0.0.1",
        port=7230,
        log_level="trace",
        proxy_headers=True,
        forwarded_allow_ips='*',
        workers=1,
        uds="/mnt/data1/swarms/run/uvicorn/uvicorn-swarms-api.sock")
        # root_path=root_path


main()
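Tracing every non-stdlib call with hunter is very chatty under load; the trace can be narrowed to the application module instead, for example (a sketch, assuming the FastAPI app lives in the main module as above):

import hunter

# Only print calls made inside the "main" module, still skipping the stdlib.
hunter.trace(stdlib=False, module="main", action=hunter.CallPrinter)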
@ -0,0 +1,36 @@
services:
  swarms:
    image: swarm
    build:
      context: .
      dockerfile: Dockerfile

    network_mode: host
    ipc: host

    # environment:
    #   - OPENAI_API_KEY=sk-1234
    #   - OPENAI_API_BASE=http://100.96.149.57:7091
    #   - OPENAI_API_BASE=http://localhost:5000/v1
    # command: python3 example.py

    # restart: always
    # deploy:
    #   resources:
    #     reservations:
    #       devices:
    #         - driver: nvidia
    #           count: all
    #           capabilities: [gpu]

    user: swarms
    volumes:
      - ./api:/opt/swarms/api
      - ./swarms:/opt/swarms/swarms
      - ./logs:/var/log # Mounting volume for logs
    # environment:
    #   - ENV_VAR_1=value1 # Add necessary environment variables
    #   - ENV_VAR_2=value2
    # restart: always
    # ports:
    #   - "5474:5474"
@ -0,0 +1 @@
# Count how often each word appears across the tree, most common last.
grep -h -P -o "([a-zA-Z]+)" -r * | sort | uniq -c | sort -n > names.txt
@ -0,0 +1,739 @@
|
|||||||
|
absl-py==2.1.0
|
||||||
|
accelerate==1.0.1
|
||||||
|
-e git+https://github.com/gventuri/pandas-ai/@5c84fd37065c7de806701e5e7b99df298e93b4f6#egg=ai_ticket&subdirectory=../../../../../../../time/2023/09/24/ai-ticket
|
||||||
|
aiofiles==23.2.1
|
||||||
|
aiohttp==3.9.5
|
||||||
|
aiohttp-cors==0.7.0
|
||||||
|
aiosignal==1.3.1
|
||||||
|
alabaster==0.7.12
|
||||||
|
albucore==0.0.20
|
||||||
|
albumentations==1.4.21
|
||||||
|
alectryon==1.4.0
|
||||||
|
aliyun-python-sdk-core==2.15.1
|
||||||
|
aliyun-python-sdk-kms==2.16.3
|
||||||
|
annotated-types==0.7.0
|
||||||
|
ansible==6.7.0
|
||||||
|
ansible-base==2.10.8
|
||||||
|
ansible-core==2.13.13
|
||||||
|
ansible-vault==2.1.0
|
||||||
|
antlr4-python3-runtime==4.9.3
|
||||||
|
anyio==4.3.0
|
||||||
|
apache-libcloud==3.2.0
|
||||||
|
apispec==6.8.0
|
||||||
|
APScheduler==3.10.4
|
||||||
|
apturl==0.5.2
|
||||||
|
argcomplete==1.8.1
|
||||||
|
argo-workflows==3.6.1
|
||||||
|
asciidoc==10.2.1
|
||||||
|
asciinema==2.1.0
|
||||||
|
asgiref==3.8.1
|
||||||
|
astroid==2.9.3
|
||||||
|
async-timeout==4.0.3
|
||||||
|
asyncio==3.4.3
|
||||||
|
attr==0.3.2
|
||||||
|
attrdict==2.0.1
|
||||||
|
attrs==23.2.0
|
||||||
|
autoflake==2.3.1
|
||||||
|
autogluon==1.1.1
|
||||||
|
autogluon.common==1.1.1
|
||||||
|
autogluon.core==1.1.1
|
||||||
|
autogluon.features==1.1.1
|
||||||
|
autogluon.multimodal==1.1.1
|
||||||
|
autogluon.tabular==1.1.1
|
||||||
|
autogluon.timeseries==1.1.1
|
||||||
|
Automat==24.8.1
|
||||||
|
autopep8==1.6.0
|
||||||
|
aws-cdk-lib==2.116.0
|
||||||
|
aws-cdk.asset-awscli-v1==2.2.213
|
||||||
|
aws-cdk.asset-kubectl-v20==2.1.3
|
||||||
|
aws-cdk.asset-node-proxy-agent-v6==2.1.0
|
||||||
|
aws-sam-translator==1.89.0
|
||||||
|
aws-shell==0.2.2
|
||||||
|
awscli==1.32.85
|
||||||
|
Babel==2.8.0
|
||||||
|
backoff==2.2.1
|
||||||
|
base58==2.1.1
|
||||||
|
bcrypt==4.1.2
|
||||||
|
beautifulsoup4==4.12.3
|
||||||
|
begins==0.9
|
||||||
|
bidict==0.23.1
|
||||||
|
binaryornot==0.4.3
|
||||||
|
bitsandbytes==0.43.3
|
||||||
|
bittensor==7.2.0
|
||||||
|
-e git+https://github.com/opentensor/bittensor-subnet-template@7622775e0a267a564959c8690108f9152e123522#egg=bittensor_subnet_template&subdirectory=../../../../../../../time/2024/06/07/bittensor-subnet-template
|
||||||
|
black==23.7.0
|
||||||
|
blake3==1.0.0
|
||||||
|
blessed==1.20.0
|
||||||
|
blessings==1.7
|
||||||
|
blinker==1.7.0
|
||||||
|
blis==0.7.11
|
||||||
|
boto3==1.34.85
|
||||||
|
botocore==1.34.85
|
||||||
|
build==1.2.1
|
||||||
|
CacheControl==0.14.0
|
||||||
|
cachetools==5.3.3
|
||||||
|
catalogue==2.0.10
|
||||||
|
catboost==1.2.5
|
||||||
|
catfish==4.16.3
|
||||||
|
cattrs==23.2.3
|
||||||
|
-e git+https://github.com/Agent-Artificial/cellium-client@ee4df8906a43c2e408b1ad3cf27f84816a51a58d#egg=cellium&subdirectory=../../../../../../../../../../home/mdupont/2024/05/31/cellium-client
|
||||||
|
certifi==2024.2.2
|
||||||
|
cffi==1.15.0
|
||||||
|
cfgv==3.4.0
|
||||||
|
cfn-lint==0.87.7
|
||||||
|
chardet==4.0.0
|
||||||
|
charset-normalizer==2.1.1
|
||||||
|
chroma-hnswlib==0.7.5
|
||||||
|
-e git+https://github.com/chroma-core/chroma@28b37392594dd7ba60e35c53f098d7f88a9d3988#egg=chromadb&subdirectory=../../../../../../../time/2024/07/21/chroma
|
||||||
|
cleo==2.1.0
|
||||||
|
cliapp==1.20180812.1
|
||||||
|
click==8.1.7
|
||||||
|
clip-anytorch==2.6.0
|
||||||
|
cloudpathlib==0.18.1
|
||||||
|
cloudpickle==3.0.0
|
||||||
|
clusterops==0.1.2
|
||||||
|
cmdtest==0.32+git
|
||||||
|
colorama==0.4.6
|
||||||
|
coloredlogs==15.0.1
|
||||||
|
colorful==0.5.6
|
||||||
|
comm==0.2.2
|
||||||
|
command-not-found==0.3
|
||||||
|
commonmark==0.9.1
|
||||||
|
communex==0.1.27.3
|
||||||
|
compel==2.0.2
|
||||||
|
confection==0.1.5
|
||||||
|
ConfigArgParse==1.7
|
||||||
|
configobj==5.0.8
|
||||||
|
constantly==23.10.4
|
||||||
|
constructs==10.4.2
|
||||||
|
contourpy==1.2.1
|
||||||
|
controlnet_aux==0.0.7
|
||||||
|
cookiecutter==1.7.3
|
||||||
|
cpufeature==0.2.1
|
||||||
|
crashtest==0.4.1
|
||||||
|
crcmod==1.7
|
||||||
|
cryptography==42.0.5
|
||||||
|
cssselect==1.2.0
|
||||||
|
cupshelpers==1.0
|
||||||
|
cycler==0.12.1
|
||||||
|
cymem==2.0.8
|
||||||
|
Cython==0.29.28
|
||||||
|
cytoolz==0.12.3
|
||||||
|
daiquiri==3.2.5.1
|
||||||
|
dashscope==1.20.13
|
||||||
|
dataclasses-json==0.6.6
|
||||||
|
datasets==2.17.1
|
||||||
|
dbus-python==1.2.18
|
||||||
|
ddt==1.6.0
|
||||||
|
debugpy==1.8.2
|
||||||
|
defer==1.0.6
|
||||||
|
delegator.py==0.1.1
|
||||||
|
Deprecated==1.2.14
|
||||||
|
devscripts===2.22.1ubuntu1
|
||||||
|
dictdiffer==0.9.0
|
||||||
|
diffusers==0.31.0
|
||||||
|
Dijkstar==2.6.0
|
||||||
|
dill==0.3.8
|
||||||
|
dirtyjson==1.0.8
|
||||||
|
distlib==0.3.8
|
||||||
|
distro==1.7.0
|
||||||
|
distro-info===1.1build1
|
||||||
|
dnspython==2.1.0
|
||||||
|
doc-master==0.0.2
|
||||||
|
docker==7.1.0
|
||||||
|
docker-compose==1.29.2
|
||||||
|
docker-pycreds==0.4.0
|
||||||
|
docopt==0.6.2
|
||||||
|
docstring_parser==0.16
|
||||||
|
docutils==0.16
|
||||||
|
dominate==2.9.1
|
||||||
|
dulwich==0.21.7
|
||||||
|
dynamicprompts==0.31.0
|
||||||
|
ecdsa==0.19.0
|
||||||
|
einops==0.8.0
|
||||||
|
email_validator==2.1.1
|
||||||
|
-e git+https://github.com/meta-introspector/https-lablab.ai-event-audiocraft-24-hours-hackathon/@ef86774c7e61855044ca0c97dbcb988d18570984#egg=emojintrospector
|
||||||
|
eth-hash==0.7.0
|
||||||
|
eth-keys==0.5.1
|
||||||
|
eth-typing==4.2.3
|
||||||
|
eth-utils==2.2.2
|
||||||
|
eval_type_backport==0.2.0
|
||||||
|
evaluate==0.4.0
|
||||||
|
exceptiongroup==1.2.0
|
||||||
|
extruct==0.17.0
|
||||||
|
facexlib==0.3.0
|
||||||
|
fastai==2.7.15
|
||||||
|
fastapi==0.111.0
|
||||||
|
fastapi-cli==0.0.4
|
||||||
|
fastapi-events==0.11.1
|
||||||
|
fastapi-sso==0.10.0
|
||||||
|
fastcore==1.5.55
|
||||||
|
fastdownload==0.0.7
|
||||||
|
fastjsonschema==2.20.0
|
||||||
|
fastprogress==1.0.3
|
||||||
|
ffmpy==0.4.0
|
||||||
|
filelock==3.14.0
|
||||||
|
filterpy==1.4.5
|
||||||
|
flake8==4.0.1
|
||||||
|
Flask==3.0.3
|
||||||
|
Flask-Cors==4.0.1
|
||||||
|
flask-sock==0.7.0
|
||||||
|
flatbuffers==24.3.25
|
||||||
|
fonttools==4.55.0
|
||||||
|
frozenlist==1.4.1
|
||||||
|
fsspec==2023.10.0
|
||||||
|
ftfy==6.3.1
|
||||||
|
future==1.0.0
|
||||||
|
fuzzywuzzy==0.18.0
|
||||||
|
gdown==5.2.0
|
||||||
|
gevent==24.2.1
|
||||||
|
gguf==0.10.0
|
||||||
|
git-remote-codecommit==1.17
|
||||||
|
gitdb==4.0.11
|
||||||
|
github==1.2.7
|
||||||
|
github-action-utils==1.1.0
|
||||||
|
github3.py==4.0.1
|
||||||
|
GitPython==3.1.42
|
||||||
|
gluonts==0.15.1
|
||||||
|
google-api-core==2.19.1
|
||||||
|
google-auth==2.29.0
|
||||||
|
googleapis-common-protos==1.63.2
|
||||||
|
gpg===1.16.0-unknown
|
||||||
|
gpustat==0.6.0
|
||||||
|
GPUtil==1.4.0
|
||||||
|
gql==3.5.0
|
||||||
|
gradio==4.44.0
|
||||||
|
gradio_client==1.3.0
|
||||||
|
graphql-core==3.2.3
|
||||||
|
graphviz==0.20.3
|
||||||
|
greenlet==3.0.3
|
||||||
|
grpcio==1.62.1
|
||||||
|
grpcio-tools==1.62.1
|
||||||
|
gunicorn==22.0.0
|
||||||
|
h11==0.14.0
|
||||||
|
-e git+https://github.com/Agent-Artificial/hivemind@941933de3378f1cd8a5b4fa053a3eb33253ab8ed#egg=hivemind&subdirectory=../../../vendor/hivemind
|
||||||
|
html5lib==1.1
|
||||||
|
html5lib-modern==1.2
|
||||||
|
html_text==0.6.2
|
||||||
|
httpcore==1.0.4
|
||||||
|
httplib2==0.20.2
|
||||||
|
httptools==0.6.1
|
||||||
|
httpx==0.27.0
|
||||||
|
huggingface-hub==0.26.1
|
||||||
|
humanfriendly==10.0
|
||||||
|
hyperlink==21.0.0
|
||||||
|
hyperopt==0.2.7
|
||||||
|
identify==2.6.3
|
||||||
|
idna==3.4
|
||||||
|
ijson==3.3.0
|
||||||
|
imageio==2.34.2
|
||||||
|
imagesize==1.3.0
|
||||||
|
img2pdf==0.4.2
|
||||||
|
importlib_metadata==7.1.0
|
||||||
|
importlib_resources==6.4.0
|
||||||
|
incremental==24.7.2
|
||||||
|
iniconfig==2.0.0
|
||||||
|
installer==0.7.0
|
||||||
|
intervaltree==3.1.0
|
||||||
|
invisible-watermark==0.2.0
|
||||||
|
-e git+https://github.com/invoke-ai/InvokeAI@ebd73a2ac22ed4f06271b3c4850740cf84ab136a#egg=InvokeAI&subdirectory=../../../../../../../time/2024/11/30/InvokeAI
|
||||||
|
ipykernel==6.29.5
|
||||||
|
ipympl==0.9.4
|
||||||
|
ipywidgets==8.1.5
|
||||||
|
isodate==0.6.1
|
||||||
|
isort==5.13.2
|
||||||
|
itemadapter==0.9.0
|
||||||
|
itemloaders==1.3.2
|
||||||
|
itsdangerous==2.1.2
|
||||||
|
jaraco.classes==3.4.0
|
||||||
|
jax==0.4.31
|
||||||
|
jaxlib==0.4.31
|
||||||
|
jeepney==0.7.1
|
||||||
|
Jinja2==3.1.3
|
||||||
|
jinja2-time==0.2.0
|
||||||
|
jmespath==0.10.0
|
||||||
|
joblib==1.3.2
|
||||||
|
jschema-to-python==1.2.3
|
||||||
|
jsii==1.105.0
|
||||||
|
json-schema-generator==0.3
|
||||||
|
json5==0.9.28
|
||||||
|
jsonformatter==0.3.2
|
||||||
|
jsonlines==4.0.0
|
||||||
|
jsonpatch==1.33
|
||||||
|
jsonpickle==3.2.1
|
||||||
|
jsonpointer==2.4
|
||||||
|
jsonschema==4.23.0
|
||||||
|
jsonschema-specifications==2023.12.1
|
||||||
|
jstyleson==0.0.2
|
||||||
|
junit-xml==1.9
|
||||||
|
jupyter==1.0.0
|
||||||
|
jupyter-console==6.6.3
|
||||||
|
jupyter_core==5.7.2
|
||||||
|
jupyterlab_widgets==3.0.13
|
||||||
|
kaggle==1.6.17
|
||||||
|
keylimiter==0.1.5
|
||||||
|
keyring==24.3.1
|
||||||
|
kiwisolver==1.4.7
|
||||||
|
kubernetes==30.1.0
|
||||||
|
langchain==0.1.13
|
||||||
|
langchain-community==0.0.29
|
||||||
|
langchain-core==0.1.52
|
||||||
|
langchain-experimental==0.0.55
|
||||||
|
langchain-text-splitters==0.0.2
|
||||||
|
langcodes==3.4.0
|
||||||
|
langsmith==0.1.67
|
||||||
|
language-selector==0.1
|
||||||
|
language_data==1.2.0
|
||||||
|
lark-parser==0.12.0
|
||||||
|
launchpadlib==1.10.16
|
||||||
|
lazr.restfulclient==0.14.4
|
||||||
|
lazr.uri==1.0.6
|
||||||
|
lazy-object-proxy==0.0.0
|
||||||
|
lazy_loader==0.4
|
||||||
|
libcst==1.4.0
|
||||||
|
lightdm-gtk-greeter-settings==1.2.2
|
||||||
|
lightgbm==4.3.0
|
||||||
|
lightning==2.3.3
|
||||||
|
lightning-utilities==0.11.6
|
||||||
|
lion-pytorch==0.2.2
|
||||||
|
-e git+https://github.com/Agent-Artificial/litellm@3b2f04c8cb1a42fe5db8bcbf62d2e41a3a72f52a#egg=litellm&subdirectory=../../../vendor/litellm
|
||||||
|
livereload==2.6.3
|
||||||
|
llvmlite==0.43.0
|
||||||
|
lockfile==0.12.2
|
||||||
|
logilab-common==1.8.2
|
||||||
|
loguru==0.7.2
|
||||||
|
lsprotocol==2023.0.1
|
||||||
|
lxml==4.8.0
|
||||||
|
lxml_html_clean==0.3.0
|
||||||
|
Mako==1.1.3
|
||||||
|
marisa-trie==1.2.0
|
||||||
|
Markdown==3.3.6
|
||||||
|
markdown-it-py==3.0.0
|
||||||
|
MarkupSafe==2.1.5
|
||||||
|
marshmallow==3.21.2
|
||||||
|
matplotlib==3.9.3
|
||||||
|
mccabe==0.6.1
|
||||||
|
mdurl==0.1.2
|
||||||
|
mediapipe==0.10.14
|
||||||
|
menulibre==2.2.2
|
||||||
|
mercurial==6.1.1
|
||||||
|
mf2py==2.0.1
|
||||||
|
mkdocs==1.1.2
|
||||||
|
ml-dtypes==0.4.0
|
||||||
|
mlforecast==0.10.0
|
||||||
|
mmh3==4.1.0
|
||||||
|
mock==5.1.0
|
||||||
|
model-index==0.1.11
|
||||||
|
modelscope_studio==0.5.0
|
||||||
|
monotonic==1.6
|
||||||
|
more-itertools==8.10.0
|
||||||
|
morphys==1.0
|
||||||
|
mpmath==1.3.0
|
||||||
|
msgpack==1.0.8
|
||||||
|
msgpack-numpy-opentensor==0.5.0
|
||||||
|
mugshot==0.4.3
|
||||||
|
multiaddr==0.0.9
|
||||||
|
multidict==6.0.5
|
||||||
|
multiprocess==0.70.16
|
||||||
|
munch==2.5.0
|
||||||
|
murmurhash==1.0.10
|
||||||
|
mypy-extensions==1.0.0
|
||||||
|
mypy-protobuf==3.6.0
|
||||||
|
netaddr==0.8.0
|
||||||
|
netifaces==0.11.0
|
||||||
|
networkx==3.2.1
|
||||||
|
ninja==1.11.1.1
|
||||||
|
nlpaug==1.1.11
|
||||||
|
nltk==3.8.1
|
||||||
|
nodeenv==1.9.1
|
||||||
|
nptyping==2.4.1
|
||||||
|
npyscreen==4.10.5
|
||||||
|
ntlm-auth==1.4.0
|
||||||
|
numba==0.60.0
|
||||||
|
numpy==1.26.4
|
||||||
|
nvidia-cublas-cu12==12.1.3.1
|
||||||
|
nvidia-cuda-cupti-cu12==12.1.105
|
||||||
|
nvidia-cuda-nvrtc-cu12==12.1.105
|
||||||
|
nvidia-cuda-runtime-cu12==12.1.105
|
||||||
|
nvidia-cudnn-cu12==8.9.2.26
|
||||||
|
nvidia-cufft-cu12==11.0.2.54
|
||||||
|
nvidia-curand-cu12==10.3.2.106
|
||||||
|
nvidia-cusolver-cu12==11.4.5.107
|
||||||
|
nvidia-cusparse-cu12==12.1.0.106
|
||||||
|
nvidia-ml-py==12.560.30
|
||||||
|
nvidia-ml-py3==7.352.0
|
||||||
|
nvidia-nccl-cu12==2.20.5
|
||||||
|
nvidia-nvjitlink-cu12==12.3.101
|
||||||
|
nvidia-nvtx-cu12==12.1.105
|
||||||
|
oauthlib==3.2.2
|
||||||
|
ocrmypdf==13.4.0+dfsg
|
||||||
|
olefile==0.46
|
||||||
|
omegaconf==2.2.3
|
||||||
|
onboard==1.4.1
|
||||||
|
onnx==1.16.1
|
||||||
|
onnxruntime==1.19.2
|
||||||
|
openai==1.30.5
|
||||||
|
-e git+https://github.com/peterdemin/openai-cli@3af6c0eb6272ca4c3b79aca3220252dba324e20c#egg=openai_cli&subdirectory=../../../../../../../time/2024/05/31/openai-cli
|
||||||
|
opencensus==0.11.4
|
||||||
|
opencensus-context==0.1.3
|
||||||
|
opencv-contrib-python==4.10.0.84
|
||||||
|
opencv-python==4.9.0.80
|
||||||
|
opencv-python-headless==4.9.0.80
|
||||||
|
opendatalab==0.0.10
|
||||||
|
openmim==0.3.9
|
||||||
|
openshift==0.11.0
|
||||||
|
opentelemetry-api==1.25.0
|
||||||
|
opentelemetry-exporter-otlp-proto-common==1.25.0
|
||||||
|
opentelemetry-exporter-otlp-proto-grpc==1.25.0
|
||||||
|
opentelemetry-instrumentation==0.46b0
|
||||||
|
opentelemetry-instrumentation-asgi==0.46b0
|
||||||
|
opentelemetry-instrumentation-fastapi==0.46b0
|
||||||
|
opentelemetry-proto==1.25.0
|
||||||
|
opentelemetry-sdk==1.25.0
|
||||||
|
opentelemetry-semantic-conventions==0.46b0
|
||||||
|
opentelemetry-util-http==0.46b0
|
||||||
|
openxlab==0.1.1
|
||||||
|
opt-einsum==3.3.0
|
||||||
|
optimum==1.18.1
|
||||||
|
ordered-set==4.1.0
|
||||||
|
orjson==3.10.3
|
||||||
|
oss2==2.17.0
|
||||||
|
overrides==7.7.0
|
||||||
|
packaging==23.2
|
||||||
|
pandas==2.2.2
|
||||||
|
paramiko==3.4.0
|
||||||
|
parsel==1.9.1
|
||||||
|
password-strength==0.0.3.post2
|
||||||
|
pathspec==0.12.1
|
||||||
|
pathtools==0.1.2
|
||||||
|
patsy==0.5.6
|
||||||
|
pbr==6.0.0
|
||||||
|
pdf2image==1.17.0
|
||||||
|
pdfminer.six===-VERSION-
|
||||||
|
peft==0.4.0
|
||||||
|
pendulum==3.0.0
|
||||||
|
pep8==1.7.1
|
||||||
|
-e git+https://github.com/tbarbette/perf-class@19f8299fb8a2cff33189e77c4547acf3a20f2a8b#egg=perf_class&subdirectory=../../../../../../../time/2024/07/05/perf-class
|
||||||
|
petals @ git+https://github.com/bigscience-workshop/petals@d2fcbbc72e02b88cc34f2da8b3ae7de2873204a9
|
||||||
|
pexpect==4.8.0
|
||||||
|
picklescan==0.0.18
|
||||||
|
pikepdf==5.0.1+dfsg
|
||||||
|
pillow==10.3.0
|
||||||
|
pipdeptree==2.21.0
|
||||||
|
pipenv==11.9.0
|
||||||
|
pipx==1.0.0
|
||||||
|
pkginfo==1.11.1
|
||||||
|
platformdirs==4.2.2
|
||||||
|
playwright==1.46.0
|
||||||
|
plotly==5.23.0
|
||||||
|
pluggy==1.5.0
|
||||||
|
ply==3.11
|
||||||
|
poetry==1.8.4
|
||||||
|
poetry-core==1.9.1
|
||||||
|
poetry-plugin-export==1.8.0
|
||||||
|
posthog==3.5.0
|
||||||
|
pre_commit==4.0.1
|
||||||
|
prefetch-generator==1.0.3
|
||||||
|
preshed==3.0.9
|
||||||
|
prettytable==3.12.0
|
||||||
|
prometheus_client==0.21.1
|
||||||
|
prompt_toolkit==3.0.47
|
||||||
|
Protego==0.3.1
|
||||||
|
proto-plus==1.24.0
|
||||||
|
protobuf==4.25.3
|
||||||
|
psutil==5.9.0
|
||||||
|
ptyprocess==0.7.0
|
||||||
|
publication==0.0.3
|
||||||
|
PuLP==2.9.0
|
||||||
|
py-bip39-bindings==0.1.11
|
||||||
|
py-cid==0.3.0
|
||||||
|
py-ed25519-zebra-bindings==1.0.1
|
||||||
|
py-multibase==1.0.3
|
||||||
|
py-multicodec==0.2.1
|
||||||
|
py-multihash==0.2.3
|
||||||
|
py-spy==0.3.14
|
||||||
|
py-sr25519-bindings==0.2.0
|
||||||
|
py4j==0.10.9.7
|
||||||
|
pyarrow==15.0.0
|
||||||
|
pyarrow-hotfix==0.6
|
||||||
|
pyasn1==0.6.0
|
||||||
|
pyasn1_modules==0.4.0
|
||||||
|
pycairo==1.20.1
|
||||||
|
pycodestyle==2.8.0
|
||||||
|
pycparser==2.21
|
||||||
|
pycryptodome==3.20.0
|
||||||
|
pycryptodomex==3.11.0
|
||||||
|
pycups==2.0.1
|
||||||
|
pycurl==7.44.1
|
||||||
|
pydantic==2.8.2
|
||||||
|
pydantic-settings==2.2.1
|
||||||
|
pydantic_core==2.20.1
|
||||||
|
PyDispatcher==2.0.7
|
||||||
|
pydot==3.0.1
|
||||||
|
pydub==0.25.1
|
||||||
|
pyee==11.1.0
|
||||||
|
pyelftools==0.31
|
||||||
|
pyflakes==3.2.0
|
||||||
|
pygame==2.1.2
|
||||||
|
PyGithub==1.59.1
|
||||||
|
pygls==1.3.1
|
||||||
|
Pygments==2.17.2
|
||||||
|
PyGObject==3.42.1
|
||||||
|
pyinotify==0.9.6
|
||||||
|
PyJWT==2.8.0
|
||||||
|
pykerberos==1.1.14
|
||||||
|
pylint==2.12.2
|
||||||
|
pymacaroons==0.13.0
|
||||||
|
Pympler==1.1
|
||||||
|
-e git+https://github.com/ivilata/pymultihash@215298fa2faa55027384d1f22519229d0918cfb0#egg=pymultihash&subdirectory=../../../../../../../time/2024/04/17/pymultihash
|
||||||
|
PyNaCl==1.5.0
|
||||||
|
pynvml==12.0.0
|
||||||
|
pyOpenSSL==24.2.1
|
||||||
|
pyparsing==3.1.1
|
||||||
|
PyPatchMatch==1.0.1
|
||||||
|
pypdf==5.1.0
|
||||||
|
pyperclip==1.9.0
|
||||||
|
PyPika==0.48.9
|
||||||
|
pyproject_hooks==1.1.0
|
||||||
|
pyRdfa3==3.6.4
|
||||||
|
pyre==1.12.5
|
||||||
|
pyre-check==0.9.21
|
||||||
|
pyre-extensions==0.0.30
|
||||||
|
pyreadline3==3.5.4
|
||||||
|
pyroute2==0.post0
|
||||||
|
pyroute2.core==0.post0
|
||||||
|
pyroute2.ethtool==0.post0
|
||||||
|
pyroute2.ipdb==0.post0
|
||||||
|
pyroute2.ipset==0.post0
|
||||||
|
pyroute2.ndb==0.post0
|
||||||
|
pyroute2.nftables==0.post0
|
||||||
|
pyroute2.nslink==0.post0
|
||||||
|
pyroute2.protocols==0.post0
|
||||||
|
PySimpleSOAP==1.16.2
|
||||||
|
PySocks==1.7.1
|
||||||
|
pytesseract==0.3.10
|
||||||
|
pytest==8.2.2
|
||||||
|
pytest-asyncio==0.23.7
|
||||||
|
python-apt==2.4.0
|
||||||
|
python-baseconv==1.2.2
|
||||||
|
python-dateutil==2.8.2
|
||||||
|
python-debian===0.1.43ubuntu1
|
||||||
|
python-debianbts==3.2.0
|
||||||
|
python-docx==1.1.2
|
||||||
|
python-dotenv==1.0.1
|
||||||
|
python-engineio==4.10.1
|
||||||
|
python-json-logger==2.0.7
|
||||||
|
python-Levenshtein==0.12.2
|
||||||
|
python-magic==0.4.24
|
||||||
|
python-multipart==0.0.9
|
||||||
|
python-slugify==4.0.0
|
||||||
|
python-socketio==5.11.1
|
||||||
|
python-statemachine==2.1.2
|
||||||
|
python-string-utils==1.0.0
|
||||||
|
pytorch-lightning==2.1.3
|
||||||
|
pytorch-metric-learning==2.3.0
|
||||||
|
pytz==2023.4
|
||||||
|
PyWavelets==1.6.0
|
||||||
|
pywinrm==0.3.0
|
||||||
|
pyxattr==0.7.2
|
||||||
|
pyxdg==0.27
|
||||||
|
PyYAML==6.0.1
|
||||||
|
pyzmq==26.0.3
|
||||||
|
qtconsole==5.5.2
|
||||||
|
QtPy==2.4.1
|
||||||
|
queuelib==1.7.0
|
||||||
|
-e git+https://github.com/QwenLM/Qwen-Agent.git@3db6738f5603e6215b4c39db59d390e694b7087f#egg=qwen_agent&subdirectory=../../../../../../../time/2024/11/25/Qwen-Agent
|
||||||
|
rapidfuzz==3.9.3
|
||||||
|
ratelimit==2.2.1
|
||||||
|
ray==2.10.0
|
||||||
|
-e git+https://github.com/RDFLib/rdflib@0b69f4f5f49aa2ea1caf23bbee20c7166625a4bd#egg=rdflib&subdirectory=../../../../../../../time/2024/05/26/rdflib
|
||||||
|
Recoll==1.31.6
|
||||||
|
recollchm==0.8.4.1+git
|
||||||
|
recommonmark==0.6.0
|
||||||
|
redis==5.0.6
|
||||||
|
referencing==0.35.1
|
||||||
|
regex==2023.12.25
|
||||||
|
reportbug===11.4.1ubuntu1
|
||||||
|
reportlab==3.6.8
|
||||||
|
requests==2.32.3
|
||||||
|
requests-file==2.1.0
|
||||||
|
requests-kerberos==0.12.0
|
||||||
|
requests-ntlm==1.1.0
|
||||||
|
requests-oauthlib==2.0.0
|
||||||
|
requests-toolbelt==1.0.0
|
||||||
|
resolvelib==0.8.1
|
||||||
|
responses==0.18.0
|
||||||
|
retry==0.9.2
|
||||||
|
rfc5424-logging-handler==1.4.3
|
||||||
|
rich==13.4.2
|
||||||
|
roku==4.1.0
|
||||||
|
-e git+https://github.com/ncmiller/roku-cli.git@6990df804840fbc69d892a47bc655b15ed425e28#egg=rokucli&subdirectory=../../../../../../../time/2024/10/21/roku-cli
|
||||||
|
roman==3.3
|
||||||
|
rpds-py==0.18.1
|
||||||
|
rq==1.16.2
|
||||||
|
rsa==4.7.2
|
||||||
|
ruamel.yaml==0.18.6
|
||||||
|
ruamel.yaml.clib==0.2.12
|
||||||
|
ruff==0.4.4
|
||||||
|
-e git+https://github.com/run-house/runhouse@96f0daf81b3f7c8116dcc3c2350e6abd05c917bd#egg=runhouse&subdirectory=../../../../../../../time/2024/12/06/runhouse
|
||||||
|
s3transfer==0.10.1
|
||||||
|
safetensors==0.4.3
|
||||||
|
sarif-om==1.0.4
|
||||||
|
scalecodec==1.2.7
|
||||||
|
scikit-image==0.20.0
|
||||||
|
scikit-learn==1.4.0
|
||||||
|
scipy==1.12.0
|
||||||
|
scour==0.38.2
|
||||||
|
Scrapy==2.11.2
|
||||||
|
screen-resolution-extra==0.0.0
|
||||||
|
SecretStorage==3.3.1
|
||||||
|
selinux==3.3
|
||||||
|
semantic-version==2.10.0
|
||||||
|
semver==3.0.2
|
||||||
|
sentencepiece==0.2.0
|
||||||
|
sentry-sdk==1.40.6
|
||||||
|
seqeval==1.2.2
|
||||||
|
service-identity==24.1.0
|
||||||
|
setproctitle==1.3.3
|
||||||
|
sgt-launcher==0.2.7
|
||||||
|
shellingham==1.5.4
|
||||||
|
shtab==1.6.5
|
||||||
|
simple-websocket==1.0.0
|
||||||
|
simplejson==3.19.2
|
||||||
|
simsimd==6.2.1
|
||||||
|
six==1.16.0
|
||||||
|
skypilot @ file:///mnt/data1/nix/time/2024/07/11/skypilot
|
||||||
|
smart-open==7.0.4
|
||||||
|
smmap==5.0.1
|
||||||
|
sniffio==1.3.1
|
||||||
|
snowballstemmer==2.2.0
|
||||||
|
sortedcontainers==2.4.0
|
||||||
|
sounddevice==0.5.1
|
||||||
|
soupsieve==2.3.1
|
||||||
|
spacy==3.7.5
|
||||||
|
spacy-legacy==3.0.12
|
||||||
|
spacy-loggers==1.0.5
|
||||||
|
spandrel==0.3.4
|
||||||
|
speedtest-cli==2.1.3
|
||||||
|
Sphinx==4.3.2
|
||||||
|
sphinx-rtd-theme==1.0.0
|
||||||
|
SQLAlchemy==2.0.30
|
||||||
|
srsly==2.4.8
|
||||||
|
ssh-import-id==5.11
|
||||||
|
starlette==0.37.2
|
||||||
|
-e git+https://github.com/maguowei/starred@f1ae04d5ee11952ad2ede1b8e9b679f347126cea#egg=starred&subdirectory=../../../../../../../time/2024/07/18/starred
|
||||||
|
statsforecast==1.4.0
|
||||||
|
statsmodels==0.14.2
|
||||||
|
strace-parser @ file:///mnt/data1/nix/time/2024/09/15/strace-parser
|
||||||
|
stringzilla==3.10.10
|
||||||
|
substrate-interface==1.7.5
|
||||||
|
-e git+https://github.com/jmikedupont2/swarm-models.git@11db002d774a86a50b3c6cc303ee707f12274576#egg=swarm_models&subdirectory=../../../../../../../time/2024/12/05/swarm-models
|
||||||
|
-e git+https://github.com/meta-introspector/swarms.git@82a2d8954b9b4668801bdce59a23df4b0d16df1f#egg=swarms&subdirectory=../../../../../../../time/2024/05/31/swarms
|
||||||
|
sympy==1.13.3
|
||||||
|
systemd-python==234
|
||||||
|
tabulate==0.9.0
|
||||||
|
tenacity==8.2.3
|
||||||
|
tensor-parallel==1.0.23
|
||||||
|
tensorboard==2.17.0
|
||||||
|
tensorboard-data-server==0.7.2
|
||||||
|
tensorboardX==2.6.2.2
|
||||||
|
termcolor==2.4.0
|
||||||
|
test_tube==0.7.5
|
||||||
|
-e git+https://github.com/Josephrp/testcontainers-python@298e0e7a260c21f81fa6e7bcf40613a094b8ef2b#egg=testcontainers&subdirectory=../../../../../../../time/2024/08/04/testcontainers-python
|
||||||
|
TestSlide==2.7.1
|
||||||
|
text-unidecode==1.3
|
||||||
|
thespian==3.10.7
|
||||||
|
thinc==8.2.5
|
||||||
|
thoth-analyzer==0.1.8
|
||||||
|
thoth-common==0.36.6
|
||||||
|
thoth-license-solver==0.1.5
|
||||||
|
thoth-python==0.16.11
|
||||||
|
thoth-solver @ file:///mnt/data1/nix/time/2024/06/01/solver
|
||||||
|
threadpoolctl==3.5.0
|
||||||
|
tifffile==2024.7.24
|
||||||
|
tiktoken==0.7.0
|
||||||
|
time-machine==2.16.0
|
||||||
|
timm==0.6.13
|
||||||
|
tldextract==5.1.2
|
||||||
|
tokenize-rt==5.2.0
|
||||||
|
tokenizers==0.20.3
|
||||||
|
toml==0.10.2
|
||||||
|
tomli==2.0.1
|
||||||
|
tomlkit==0.12.0
|
||||||
|
toolz==0.12.1
|
||||||
|
torch==2.3.1
|
||||||
|
-e git+https://github.com/pyg-team/pytorch_geometric.git@8bb44edf7c7e687aca44daca0e6cc5eb6ae076b0#egg=torch_geometric&subdirectory=../../../../../../../time/2023/06/03/pytorch_geometric
|
||||||
|
torchaudio==2.3.1
|
||||||
|
torchmetrics==1.2.1
|
||||||
|
torchsde==0.2.6
|
||||||
|
torchvision==0.18.1
|
||||||
|
tornado==6.1
|
||||||
|
tqdm==4.65.2
|
||||||
|
trampoline==0.1.2
|
||||||
|
transformers==4.46.3
|
||||||
|
-e git+https://github.com/elektito/trick@1946e731a3c247d9973c093695b7ea6162f1052a#egg=trick_scheme&subdirectory=../../../../../../../time/2024/07/18/trick
|
||||||
|
triton==2.3.1
|
||||||
|
trl==0.7.11
|
||||||
|
trove-classifiers==2024.5.22
|
||||||
|
ttystatus==0.38
|
||||||
|
Twisted==24.7.0
|
||||||
|
typeguard==2.13.3
|
||||||
|
typer==0.12.3
|
||||||
|
types-chardet==5.0.4.6
|
||||||
|
types-protobuf==5.26.0.20240422
|
||||||
|
types-pytz==2024.1.0.20240417
|
||||||
|
types-toml==0.10.8.20240310
|
||||||
|
typing-inspect==0.9.0
|
||||||
|
typing_extensions==4.11.0
|
||||||
|
tyro==0.7.3
|
||||||
|
tzdata==2024.1
|
||||||
|
tzlocal==5.2
|
||||||
|
ubuntu-advantage-tools==27.12
|
||||||
|
ubuntu-drivers-common==0.0.0
|
||||||
|
ufw==0.36.1
|
||||||
|
ujson==5.10.0
|
||||||
|
unattended-upgrades==0.1
|
||||||
|
Unidecode==1.3.3
|
||||||
|
unidiff==0.5.5
|
||||||
|
-e git+https://github.com/freckletonj/uniteai@653b2d01d5899f261af502a0b3367b4489d67821#egg=uniteai&subdirectory=../../../../../../../time/2024/06/27/uniteai
|
||||||
|
uritemplate==4.1.1
|
||||||
|
urllib3==2.2.3
|
||||||
|
userpath==1.8.0
|
||||||
|
utilsforecast==0.0.10
|
||||||
|
uvicorn==0.28.0
|
||||||
|
uvloop==0.19.0
|
||||||
|
varint==1.0.2
|
||||||
|
virtualenv==20.28.0
|
||||||
|
virtualenv-clone==0.3.0
|
||||||
|
vmdb2==0.24
|
||||||
|
w3lib==2.2.1
|
||||||
|
wadllib==1.3.6
|
||||||
|
wajig==4.0.3
|
||||||
|
wandb==0.15.3
|
||||||
|
wasabi==1.1.3
|
||||||
|
watchfiles==0.22.0
|
||||||
|
wcwidth==0.2.13
|
||||||
|
weasel==0.4.1
|
||||||
|
webencodings==0.5.1
|
||||||
|
websocket==0.2.1
|
||||||
|
websocket-client @ git+https://github.com/websocket-client/websocket-client.git@77337ef76f1f38b14742ab28309f9ca51b8fb011
|
||||||
|
websockets==12.0
|
||||||
|
Werkzeug==3.0.2
|
||||||
|
widgetsnbextension==4.0.13
|
||||||
|
window_ops==0.0.15
|
||||||
|
wrapt==1.13.3
|
||||||
|
wsproto==1.2.0
|
||||||
|
xcffib==0.11.1
|
||||||
|
xdg==5
|
||||||
|
xdot==1.2
|
||||||
|
xgboost==2.0.3
|
||||||
|
xkit==0.0.0
|
||||||
|
xmltodict==0.13.0
|
||||||
|
xxhash==3.4.1
|
||||||
|
-e git+https://github.com/mschuett/yaml-shellcheck.git@08537c9c42734041d9da07a143437d2565fa6f83#egg=yaml_shellcheck&subdirectory=../../../../../../../time/2024/10/21/yaml-shellcheck
|
||||||
|
yarl==1.9.4
|
||||||
|
youtube-dl==2021.12.17
|
||||||
|
yq==3.4.3
|
||||||
|
zipp==1.0.0
|
||||||
|
zope.event==5.0
|
||||||
|
zope.interface==6.4.post2
|
@ -0,0 +1,2 @@
# Run the API app factory under uvicorn with unbuffered output.
/usr/bin/unbuffer /var/swarms/agent_workspace/.venv/bin/uvicorn --factory --proxy-headers --app-dir /opt/swarms/api main:create_app
@ -0,0 +1,10 @@
# Report installed packages by on-disk size, largest first.
pip list \
    | tail -n +3 \
    | awk '{print $1}' \
    | xargs pip show \
    | grep -E 'Location:|Name:' \
    | cut -d ' ' -f 2 \
    | paste -d ' ' - - \
    | awk '{print $2 "/" tolower($1)}' \
    | xargs du -sh 2> /dev/null \
    | sort -hr
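Roughly the same report can be produced from inside Python with importlib.metadata instead of shelling out; a sketch (assumes Python 3.8+, and the sizes are approximate because only files recorded in each package's metadata are summed):

import os
from importlib.metadata import distributions

sizes = {}
for dist in distributions():
    total = 0
    for f in dist.files or []:
        try:
            total += os.path.getsize(dist.locate_file(f))
        except OSError:
            pass  # skip files that no longer exist on disk
    sizes[dist.metadata["Name"]] = total

# Print the twenty largest installed distributions.
for name, size in sorted(sizes.items(), key=lambda kv: kv[1], reverse=True)[:20]:
    print(f"{size / 1024 / 1024:8.1f} MiB  {name}")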
@ -0,0 +1,41 @@
import os
import subprocess

from swarms.utils.loguru_logger import initialize_logger
from swarms.telemetry.check_update import check_for_update

logger = initialize_logger(log_folder="auto_upgrade_swarms")


def auto_update():
    """Auto-update swarms (currently a no-op; the previous logic is kept below for reference)."""
    pass
    # try:
    #     # Check if auto-update is disabled
    #     auto_update_enabled = os.getenv(
    #         "SWARMS_AUTOUPDATE_ON", "false"
    #     ).lower()
    #     if auto_update_enabled == "false":
    #         logger.info(
    #             "Auto-update is disabled via SWARMS_AUTOUPDATE_ON"
    #         )
    #         return
    #
    #     outcome = check_for_update()
    #     if outcome is True:
    #         logger.info(
    #             "There is a new version of swarms available! Downloading..."
    #         )
    #         try:
    #             subprocess.run(
    #                 ["pip", "install", "-U", "swarms"], check=True
    #             )
    #         except subprocess.CalledProcessError:
    #             logger.info("Attempting to install with pip3...")
    #             subprocess.run(
    #                 ["pip3", "install", "-U", "swarms"], check=True
    #             )
    #     else:
    #         logger.info("swarms is up to date!")
    # except Exception as e:
    #     logger.error(e)
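If the opt-in path is ever re-enabled, a minimal working version of the logic kept in the comments above might look like this (a sketch; it assumes SWARMS_AUTOUPDATE_ON and check_for_update behave as those comments describe):

import os
import subprocess

from swarms.utils.loguru_logger import initialize_logger
from swarms.telemetry.check_update import check_for_update

logger = initialize_logger(log_folder="auto_upgrade_swarms")


def auto_update_optin():
    # Only act when the operator has explicitly opted in.
    if os.getenv("SWARMS_AUTOUPDATE_ON", "false").lower() != "true":
        logger.info("Auto-update is disabled via SWARMS_AUTOUPDATE_ON")
        return
    try:
        if check_for_update():
            logger.info("A new version of swarms is available, upgrading...")
            subprocess.run(["pip", "install", "-U", "swarms"], check=True)
        else:
            logger.info("swarms is up to date!")
    except Exception as e:
        logger.error(e)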