commit
9550e55243
@ -0,0 +1,52 @@
OPENAI_API_KEY="your_openai_api_key_here"
GOOGLE_API_KEY=""
ANTHROPIC_API_KEY=""
AI21_API_KEY="your_api_key_here"
COHERE_API_KEY="your_api_key_here"
ALEPHALPHA_API_KEY="your_api_key_here"
HUGGINGFACEHUB_API_KEY="your_api_key_here"
STABILITY_API_KEY="your_api_key_here"

WOLFRAM_ALPHA_APPID="your_wolfram_alpha_appid_here"
ZAPIER_NLA_API_KEY="your_zapier_nla_api_key_here"

EVAL_PORT=8000
MODEL_NAME="gpt-4"
CELERY_BROKER_URL="redis://localhost:6379"

SERVER="http://localhost:8000"
USE_GPU=True
PLAYGROUND_DIR="playground"

LOG_LEVEL="INFO"
BOT_NAME="Orca"

WINEDB_HOST="your_winedb_host_here"
WINEDB_PASSWORD="your_winedb_password_here"
BING_SEARCH_URL="your_bing_search_url_here"
BING_SUBSCRIPTION_KEY="your_bing_subscription_key_here"
SERPAPI_API_KEY="your_serpapi_api_key_here"
IFTTTKey="your_iftttkey_here"

BRAVE_API_KEY="your_brave_api_key_here"
SPOONACULAR_KEY="your_spoonacular_key_here"
HF_API_KEY="your_huggingface_api_key_here"

REDIS_HOST=
REDIS_PORT=

# dbs
PINECONE_API_KEY=""
BING_COOKIE=""
PSG_CONNECTION_STRING=""
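Files like the one above are usually loaded with a library such as python-dotenv, but the format is simple enough to sketch by hand. The `parse_env` helper below is hypothetical (not part of the repo): it reads `KEY=VALUE` lines, strips surrounding quotes, and skips blanks and comments.

```python
# Hypothetical helper, not part of the repo: a minimal .env parser.
# Real projects would typically use python-dotenv instead.
def parse_env(text: str) -> dict:
    env = {}
    for line in text.splitlines():
        line = line.strip()
        # Skip blank lines, comments, and lines without an '=' separator.
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        # Strip whitespace and any surrounding single or double quotes.
        env[key.strip()] = value.strip().strip('"').strip("'")
    return env

sample = 'OPENAI_API_KEY="your_openai_api_key_here"\n# dbs\nEVAL_PORT=8000\n'
config = parse_env(sample)
print(config["EVAL_PORT"])  # -> 8000
```

Note that every value comes back as a string; callers are responsible for converting ports and flags like `USE_GPU` to their real types.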
@ -0,0 +1,6 @@
[flake8]
extend-ignore = E501, W292, W291, W293
@ -0,0 +1,30 @@
---
# These are supported funding model platforms

github: [kyegomez]
# patreon: # Replace with a single Patreon username
# open_collective: # Replace with a single Open Collective username
# ko_fi: # Replace with a single Ko-fi username
# tidelift: # Replace with a single Tidelift platform-name/package-name
# community_bridge: # Replace with a single Community Bridge project-name
# liberapay: # Replace with a single Liberapay username
# issuehunt: # Replace with a single IssueHunt username
# otechie: # Replace with a single Otechie username
# lfx_crowdfunding: # Replace with a single LFX Crowdfunding project-name
# custom: # Nothing
@ -0,0 +1,27 @@
---
name: Bug report
about: Create a report to help us improve
title: "[BUG] "
labels: bug
assignees: kyegomez

---

**Describe the bug**
A clear and concise description of what the bug is.

**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error

**Expected behavior**
A clear and concise description of what you expected to happen.

**Screenshots**
If applicable, add screenshots to help explain your problem.

**Additional context**
Add any other context about the problem here.
@ -0,0 +1,20 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: ''
assignees: 'kyegomez'

---

**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

**Describe the solution you'd like**
A clear and concise description of what you want to happen.

**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.

**Additional context**
Add any other context or screenshots about the feature request here.
@ -0,0 +1,29 @@
Thank you for contributing to Swarms!

Replace this comment with:
- Description: a description of the change,
- Issue: the issue # it fixes (if applicable),
- Dependencies: any dependencies required for this change,
- Tag maintainer: for a quicker response, tag the relevant maintainer (see below),
- Twitter handle: we announce bigger features on Twitter. If your PR gets announced and you'd like a mention, we'll gladly shout you out!

Please make sure your PR is passing linting and testing before submitting. Run `make format`, `make lint` and `make test` to check this locally.

See contribution guidelines for more information on how to write/run tests, lint, etc:
https://github.com/kyegomez/swarms/blob/master/CONTRIBUTING.md

If you're adding a new integration, please include:
1. a test for the integration, preferably unit tests that do not rely on network access,
2. an example notebook showing its use.

Maintainer responsibilities:
- General / Misc / if you don't know who to tag: kye@apac.ai
- DataLoaders / VectorStores / Retrievers: kye@apac.ai
- swarms.models: kye@apac.ai
- swarms.memory: kye@apac.ai
- swarms.structures: kye@apac.ai

If no one reviews your PR within a few days, feel free to email Kye at kye@apac.ai

See contribution guidelines for more information on how to write/run tests, lint, etc: https://github.com/kyegomez/swarms
@ -0,0 +1,21 @@
---
# https://docs.github.com/en/code-security/supply-chain-security/keeping-your-dependencies-updated-automatically/configuration-options-for-dependency-updates

version: 2
updates:
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"

  - package-ecosystem: "pip"
    directory: "/"
    schedule:
      interval: "weekly"
@ -0,0 +1,27 @@
---
# this is a config file for the github action labeler

# Add 'label1' to any changes within 'example' folder or any subfolders
example_change:
  - example/**

# Add 'label2' to any file changes within 'example2' folder
example2_change: example2/*

# Add label3 to any change to .txt files within the entire repository.
# Quotation marks are required for the leading asterisk
text_files:
  - '**/*.txt'
@ -0,0 +1,56 @@
name: release

on:
  pull_request:
    types:
      - closed
    branches:
      - master
    paths:
      - 'pyproject.toml'

env:
  POETRY_VERSION: "1.4.2"

jobs:
  if_release:
    if: |
      github.event.pull_request.merged == true
      && contains(github.event.pull_request.labels.*.name, 'release')
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install poetry
        run: pipx install poetry==$POETRY_VERSION
      - name: Set up Python 3.10
        uses: actions/setup-python@v4
        with:
          python-version: "3.10"
          cache: "poetry"
      - name: Build project for distribution
        run: poetry build
      - name: Check Version
        id: check-version
        run: |
          echo version=$(poetry version --short) >> $GITHUB_OUTPUT
      - name: Create Release
        uses: ncipollo/release-action@v1
        with:
          artifacts: "dist/*"
          token: ${{ secrets.GITHUB_TOKEN }}
          draft: false
          generateReleaseNotes: true
          tag: v${{ steps.check-version.outputs.version }}
          commit: master
      - name: Publish to PyPI
        env:
          POETRY_PYPI_TOKEN_PYPI: ${{ secrets.PYPI_API_TOKEN }}
        run: |
          poetry publish
@ -0,0 +1,61 @@
# This workflow uses actions that are not certified by GitHub.
# They are provided by a third-party and are governed by
# separate terms of service, privacy policy, and support
# documentation.

# This workflow checks out code, performs a Codacy security scan
# and integrates the results with the
# GitHub Advanced Security code scanning feature. For more information on
# the Codacy security scan action usage and parameters, see
# https://github.com/codacy/codacy-analysis-cli-action.
# For more information on Codacy Analysis CLI in general, see
# https://github.com/codacy/codacy-analysis-cli.

name: Codacy Security Scan

on:
  push:
    branches: [ "master" ]
  pull_request:
    # The branches below must be a subset of the branches above
    branches: [ "master" ]
  schedule:
    - cron: '18 23 * * 4'

permissions:
  contents: read

jobs:
  codacy-security-scan:
    permissions:
      contents: read  # for actions/checkout to fetch code
      security-events: write  # for github/codeql-action/upload-sarif to upload SARIF results
      actions: read  # only required for a private repository by github/codeql-action/upload-sarif to get the Action run status
    name: Codacy Security Scan
    runs-on: ubuntu-latest
    steps:
      # Checkout the repository to the GitHub Actions runner
      - name: Checkout code
        uses: actions/checkout@v4

      # Execute Codacy Analysis CLI and generate a SARIF output with the security issues identified during the analysis
      - name: Run Codacy Analysis CLI
        uses: codacy/codacy-analysis-cli-action@5cc54a75f9ad88159bb54046196d920e40e367a5
        with:
          # Check https://github.com/codacy/codacy-analysis-cli#project-token to get your project token from your Codacy repository
          # You can also omit the token and run the tools that support default configurations
          project-token: ${{ secrets.CODACY_PROJECT_TOKEN }}
          verbose: true
          output: results.sarif
          format: sarif
          # Adjust severity of non-security issues
          gh-code-scanning-compat: true
          # Force 0 exit code to allow SARIF file generation
          # This will handover control about PR rejection to the GitHub side
          max-allowed-issues: 2147483647

      # Upload the SARIF file generated in the previous step
      - name: Upload SARIF results file
        uses: github/codeql-action/upload-sarif@v2
        with:
          sarif_file: results.sarif
@ -0,0 +1,30 @@
name: Linting and Formatting

on:
  push:
    branches:
      - master

jobs:
  lint_and_format:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: 3.x

      - name: Install dependencies
        run: pip install -r requirements.txt autopep8

      - name: Format Python files
        run: find swarms -name "*.py" -type f -exec autopep8 --in-place --aggressive --aggressive {} +

      - name: Push changes
        uses: ad-m/github-push-action@master
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
@ -0,0 +1,42 @@
name: Continuous Integration

on:
  push:
    branches:
      - master

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: 3.x

      - name: Install dependencies
        run: pip install -r requirements.txt

      - name: Run unit tests
        run: pytest tests/unit

      - name: Run integration tests
        run: pytest tests/integration

      - name: Run code coverage
        run: pytest --cov=swarms tests/

      - name: Run linters
        run: pylint swarms

      - name: Build documentation
        run: make docs

      - name: Validate documentation
        run: sphinx-build -b linkcheck docs build/docs

      - name: Run performance tests
        run: find ./tests -name '*.py' -exec pytest {} \;
@ -0,0 +1,37 @@
---
name: Docker Image CI

on:  # yamllint disable-line rule:truthy
  push:
    branches: ["master"]
  pull_request:
    branches: ["master"]

jobs:
  build:
    runs-on: ubuntu-latest
    name: Build Docker image
    steps:
      - uses: actions/checkout@v4
      - name: Build the Docker image
        run: docker build . --file Dockerfile --tag my-image-name:$(date +%s)
@ -0,0 +1,98 @@
name: Docker

# This workflow uses actions that are not certified by GitHub.
# They are provided by a third-party and are governed by
# separate terms of service, privacy policy, and support
# documentation.

on:
  schedule:
    - cron: '31 19 * * *'
  push:
    branches: [ "master" ]
    # Publish semver tags as releases.
    tags: [ 'v*.*.*' ]
  pull_request:
    branches: [ "master" ]

env:
  # Use docker.io for Docker Hub if empty
  REGISTRY: ghcr.io
  # github.repository as <account>/<repo>
  IMAGE_NAME: ${{ github.repository }}


jobs:
  build:

    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
      # This is used to complete the identity challenge
      # with sigstore/fulcio when running outside of PRs.
      id-token: write

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      # Install the cosign tool except on PR
      # https://github.com/sigstore/cosign-installer
      - name: Install cosign
        if: github.event_name != 'pull_request'
        uses: sigstore/cosign-installer@1fc5bd396d372bee37d608f955b336615edf79c8  # v3.2.0
        with:
          cosign-release: 'v2.1.1'

      # Set up BuildKit Docker container builder to be able to build
      # multi-platform images and export cache
      # https://github.com/docker/setup-buildx-action
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@f95db51fddba0c2d1ec667646a06c2ce06100226  # v3.0.0

      # Login against a Docker registry except on PR
      # https://github.com/docker/login-action
      - name: Log into registry ${{ env.REGISTRY }}
        if: github.event_name != 'pull_request'
        uses: docker/login-action@343f7c4344506bcbf9b4de18042ae17996df046d  # v3.0.0
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      # Extract metadata (tags, labels) for Docker
      # https://github.com/docker/metadata-action
      - name: Extract Docker metadata
        id: meta
        uses: docker/metadata-action@96383f45573cb7f253c731d3b3ab81c87ef81934  # v5.0.0
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}

      # Build and push Docker image with Buildx (don't push on PR)
      # https://github.com/docker/build-push-action
      - name: Build and push Docker image
        id: build-and-push
        uses: docker/build-push-action@4a13e500e55cf31b7a5d59a38ab2040ab0f42f56  # v5.1.0
        with:
          context: .
          push: ${{ github.event_name != 'pull_request' }}
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

      # Sign the resulting Docker image digest except on PRs.
      # This will only write to the public Rekor transparency log when the Docker
      # repository is public to avoid leaking data. If you would like to publish
      # transparency data even for private images, pass --force to cosign below.
      # https://github.com/sigstore/cosign
      - name: Sign the published Docker image
        if: ${{ github.event_name != 'pull_request' }}
        env:
          # https://docs.github.com/en/actions/security-guides/security-hardening-for-github-actions#using-an-intermediate-environment-variable
          TAGS: ${{ steps.meta.outputs.tags }}
          DIGEST: ${{ steps.build-and-push.outputs.digest }}
        # This step uses the identity token to provision an ephemeral certificate
        # against the sigstore community Fulcio instance.
        run: echo "${TAGS}" | xargs -I {} cosign sign --yes {}@${DIGEST}
@ -0,0 +1,20 @@
name: Docs Workflow

on:
  push:
    branches:
      - master
      - main
      - develop
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v4
        with:
          python-version: 3.x
      - run: pip install mkdocs-material
      - run: pip install mkdocs-glightbox
      - run: pip install "mkdocstrings[python]"
      - run: mkdocs gh-deploy --force
@ -0,0 +1,28 @@
name: Documentation Tests

on:
  push:
    branches:
      - master

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: 3.x

      - name: Install dependencies
        run: pip install -r requirements.txt

      - name: Build documentation
        run: make docs

      - name: Validate documentation
        run: sphinx-build -b linkcheck docs build/docs
@ -0,0 +1,66 @@
# This workflow uses actions that are not certified by GitHub.
# They are provided by a third-party and are governed by
# separate terms of service, privacy policy, and support
# documentation.

# This workflow lets you generate SLSA provenance file for your project.
# The generation satisfies level 3 for the provenance requirements - see https://slsa.dev/spec/v0.1/requirements
# The project is an initiative of the OpenSSF (openssf.org) and is developed at
# https://github.com/slsa-framework/slsa-github-generator.
# The provenance file can be verified using https://github.com/slsa-framework/slsa-verifier.
# For more information about SLSA and how it improves the supply-chain, visit slsa.dev.

name: SLSA generic generator
on:
  workflow_dispatch:
  release:
    types: [created]

jobs:
  build:
    runs-on: ubuntu-latest
    outputs:
      digests: ${{ steps.hash.outputs.hashes }}

    steps:
      - uses: actions/checkout@v3

      # ========================================================
      #
      # Step 1: Build your artifacts.
      #
      # ========================================================
      - name: Build artifacts
        run: |
          # These are some amazing artifacts.
          echo "artifact1" > artifact1
          echo "artifact2" > artifact2

      # ========================================================
      #
      # Step 2: Add a step to generate the provenance subjects
      # as shown below. Update the sha256 sum arguments
      # to include all binaries that you generate
      # provenance for.
      #
      # ========================================================
      - name: Generate subject for provenance
        id: hash
        run: |
          set -euo pipefail

          # List the artifacts the provenance will refer to.
          files=$(ls artifact*)
          # Generate the subjects (base64 encoded).
          echo "hashes=$(sha256sum $files | base64 -w0)" >> "${GITHUB_OUTPUT}"

  provenance:
    needs: [build]
    permissions:
      actions: read  # To read the workflow path.
      id-token: write  # To sign the provenance.
      contents: write  # To add assets to a release.
    uses: slsa-framework/slsa-github-generator/.github/workflows/generator_generic_slsa3.yml@v1.4.0
    with:
      base64-subjects: "${{ needs.build.outputs.digests }}"
      upload-assets: true  # Optional: Upload to a new release
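The provenance subjects above are just the `sha256sum` listing of the built artifacts, base64-encoded. As a sketch (not part of the workflow), the same encoding can be reproduced in Python with the stdlib, using in-memory stand-ins for the two files the workflow writes with `echo`:

```python
import base64
import hashlib

# In-memory stand-ins for the files the workflow creates with `echo "..." > file`.
artifacts = {"artifact1": b"artifact1\n", "artifact2": b"artifact2\n"}

# Format one "<hex digest>  <name>" line per file, as sha256sum does.
lines = "".join(
    f"{hashlib.sha256(data).hexdigest()}  {name}\n"
    for name, data in artifacts.items()
)

# Base64-encode the whole listing, matching `sha256sum $files | base64 -w0`.
subjects = base64.b64encode(lines.encode()).decode()
```

The `provenance` job then decodes this string to learn which artifact digests to attest.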
@ -0,0 +1,22 @@
# This workflow will triage pull requests and apply a label based on the
# paths that are modified in the pull request.
#
# To use this workflow, you will need to set up a .github/labeler.yml
# file with configuration. For more information, see:
# https://github.com/actions/labeler

name: Labeler
on: [pull_request_target]

jobs:
  label:

    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write

    steps:
      - uses: actions/labeler@v4
        with:
          repo-token: "${{ secrets.GITHUB_TOKEN }}"
@ -0,0 +1,49 @@
---
# This is a basic workflow to help you get started with Actions

name: Lint

on: [push, pull_request]  # yamllint disable-line rule:truthy

jobs:
  yaml-lint:
    runs-on: ubuntu-latest
    steps:
      - name: Check out source repository
        uses: actions/checkout@v4
      - name: yaml Lint
        uses: ibiqlik/action-yamllint@v3
  flake8-lint:
    runs-on: ubuntu-latest
    name: flake8 Lint
    steps:
      - name: Check out source repository
        uses: actions/checkout@v4
      - name: Set up Python environment
        uses: actions/setup-python@v4
        with:
          python-version: "3.11"
      - name: flake8 Lint
        uses: py-actions/flake8@v2
  ruff-lint:
    runs-on: ubuntu-latest
    name: ruff Lint
    steps:
      - uses: actions/checkout@v4
      - uses: chartboost/ruff-action@v1
@ -0,0 +1,25 @@
name: Linting

on:
  push:
    branches:
      - master

jobs:
  lint:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: 3.x

      - name: Install dependencies
        run: pip install -r requirements.txt

      - name: Run linters
        run: pylint swarms
@ -0,0 +1,27 @@
name: Makefile CI

on:
  push:
    branches: [ "master" ]
  pull_request:
    branches: [ "master" ]

jobs:
  build:

    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3

      - name: configure
        run: ./configure

      - name: Install dependencies
        run: make

      - name: Run check
        run: make check

      - name: Run distcheck
        run: make distcheck
@ -0,0 +1,42 @@
---
name: Pull Request Checks

on:
  pull_request:
    branches:
      - master

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: 3.x

      - name: Install dependencies
        run: |
          pip install -r requirements.txt
          pip install pytest

      - name: Run tests and checks
        run: |
          pytest
          pylint swarms
@ -0,0 +1,18 @@
name: readthedocs/actions
on:
  pull_request_target:
    types:
      - opened
    paths:
      - "docs/**"

permissions:
  pull-requests: write

jobs:
  pull-request-links:
    runs-on: ubuntu-latest
    steps:
      - uses: readthedocs/actions/preview@v1
        with:
          project-slug: swarms
@ -0,0 +1,23 @@
name: Pylint

on: [push]

jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.8", "3.9", "3.10"]
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install pylint
      - name: Analysing the code with pylint
        run: |
          pylint $(git ls-files '*.py')
@ -0,0 +1,39 @@
# This workflow will install Python dependencies, run tests and lint with a single version of Python
# For more information see: https://docs.github.com/en/actions/automating-builds-and-tests/building-and-testing-python

name: Python application

on:
  push:
    branches: [ "master" ]
  pull_request:
    branches: [ "master" ]

permissions:
  contents: read

jobs:
  build:

    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4
      - name: Set up Python 3.10
        uses: actions/setup-python@v4
        with:
          python-version: "3.10"
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install flake8 pytest
          if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
      - name: Lint with flake8
        run: |
          # stop the build if there are Python syntax errors or undefined names
          flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
          # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
          flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
      - name: Test with pytest
        run: |
          pytest
@ -0,0 +1,51 @@
# This workflow will install Python dependencies, run tests and lint with a variety of Python versions
# For more information see: https://docs.github.com/en/actions/automating-builds-and-tests/building-and-testing-python

name: Python package

on:
  push:
    branches: [ "master" ]
  pull_request:
    branches: [ "master" ]

jobs:
  build:

    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        python-version: ["3.7", "3.9", "3.10", "3.11"]

    steps:
      - uses: actions/checkout@v4
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          python -m pip install --upgrade swarms
          python -m pip install flake8 pytest
          if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
      - name: Lint with flake8
        run: |
          # stop the build if there are Python syntax errors or undefined names
          flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
          # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
          flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
      - name: Test with pytest
        run: |
          pytest
@ -0,0 +1,58 @@
---
name: Upload Python Package

on:  # yamllint disable-line rule:truthy
  release:
    types: [published]

permissions:
  contents: read

jobs:
  deploy:

    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.x'
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install build
      - name: Build package
        run: python -m build
      - name: Publish package
        uses: pypa/gh-action-pypi-publish@b7f401de30cb6434a1e19f805ff006643653240e
        with:
          user: __token__
          password: ${{ secrets.PYPI_API_TOKEN }}
@ -0,0 +1,23 @@
name: Quality

on:
  push:
    branches: [ "master" ]
  pull_request:
    branches: [ "master" ]

jobs:
  lint:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
    steps:
      - name: Checkout actions
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Init environment
        uses: ./.github/actions/init-environment
      - name: Run linter
        run: |
          pylint `git diff --name-only --diff-filter=d origin/master HEAD | grep -E '\.py$' | tr '\n' ' '`
@ -0,0 +1,8 @@
name: Ruff
on: [ push, pull_request ]
jobs:
  ruff:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: chartboost/ruff-action@v1
@ -0,0 +1,23 @@
name: Python application test

on: [push]

jobs:
  build:

    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4
      - name: Set up Python 3.8
        uses: actions/setup-python@v4
        with:
          python-version: "3.8"
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install pytest
          if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
      - name: Run tests with pytest
        run: |
          find tests/ -name "*.py" | xargs pytest
@ -0,0 +1,27 @@
# This workflow warns and then closes issues and PRs that have had no activity for a specified amount of time.
#
# You can adjust the behavior by modifying this file.
# For more information, see:
# https://github.com/actions/stale
name: Mark stale issues and pull requests

on:
  schedule:
    - cron: '26 12 * * *'

jobs:
  stale:

    runs-on: ubuntu-latest
    permissions:
      issues: write
      pull-requests: write

    steps:
      - uses: actions/stale@v8
        with:
          repo-token: ${{ secrets.GITHUB_TOKEN }}
          stale-issue-message: 'Stale issue message'
          stale-pr-message: 'Stale pull request message'
          stale-issue-label: 'no-issue-activity'
          stale-pr-label: 'no-pr-activity'
@ -0,0 +1,117 @@
---
name: test

on:
  push:
    branches: [master]
  pull_request:
  workflow_dispatch:

env:
  POETRY_VERSION: "1.4.2"

jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version:
          - "3.8"
          - "3.9"
          - "3.10"
          - "3.11"
        test_type:
          - "core"
          - "extended"
    name: Python ${{ matrix.python-version }} ${{ matrix.test_type }}
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python ${{ matrix.python-version }}
        uses: "./.github/actions/poetry_setup"
        with:
          python-version: ${{ matrix.python-version }}
          poetry-version: "1.4.2"
          cache-key: ${{ matrix.test_type }}
          install-command: |
            if [ "${{ matrix.test_type }}" == "core" ]; then
              echo "Running core tests, installing dependencies with poetry..."
              poetry install
            else
              echo "Running extended tests, installing dependencies with poetry..."
              poetry install -E extended_testing
            fi
      - name: Run ${{ matrix.test_type }} tests
        run: |
          if [ "${{ matrix.test_type }}" == "core" ]; then
            make test
          else
            make extended_tests
          fi
        shell: bash
@ -0,0 +1,34 @@
name: Unit Tests

on:
  push:
    branches:
      - master

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.x'

      - name: Install dependencies
        run: |
          pip install -r requirements.txt
          pip install pytest

      - name: Run unit tests
        run: pytest
@ -0,0 +1,49 @@
name: build

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:

  build:

    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4

      - name: Setup Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.9'

      - name: Install dependencies
        run: |
          pip install -r requirements.txt
          pip install pytest

      - name: Run Python unit tests
        run: pytest

      - name: Verify that the Docker image for the action builds
        run: docker build . --file Dockerfile

      - name: Verify integration test results
        run: pytest
@ -0,0 +1,19 @@
name: Welcome Workflow

on:
  issues:
    types: [opened]
  pull_request_target:
    types: [opened]

jobs:
  build:
    name: 👋 Welcome
    permissions: write-all
    runs-on: ubuntu-latest
    steps:
      - uses: actions/first-interaction@v1.2.0
        with:
          repo-token: ${{ secrets.GITHUB_TOKEN }}
          issue-message: "Hello there, thank you for opening an issue! 🙏🏻 The team has been notified and will get back to you as soon as possible."
          pr-message: "Hello there, thank you for opening a PR! 🙏🏻 The team has been notified and will get back to you as soon as possible."
@ -0,0 +1,194 @@
__pycache__/
.venv/

.env

image/
audio/
video/
dataframe/

static/generated
swarms/__pycache__
.DS_Store
swarms/agents/.DS_Store

_build
stderr_log.txt

# Byte-compiled / optimized / DLL files
*.py[cod]
*$py.class
.grit
error.txt
errors.txt

# C extensions
*.so
.ruff_cache

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
.pybuilder/
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock

# poetry
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock

# pdm
# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
# in version control.
# https://pdm.fming.dev/#use-with-ide
.pdm.toml

# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# pytype static type analyzer
.pytype/

# Cython debug symbols
cython_debug/

# PyCharm
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/
.vscode/settings.json
@ -0,0 +1,18 @@
repos:
  - repo: https://github.com/ambv/black
    rev: 22.3.0
    hooks:
      - id: black
  - repo: https://github.com/charliermarsh/ruff-pre-commit
    rev: 'v0.0.255'
    hooks:
      - id: ruff
        args: [--unsafe-fixes]
  - repo: https://github.com/nbQA-dev/nbQA
    rev: 1.6.3
    hooks:
      - id: nbqa-black
        additional_dependencies: [ipython==8.12, black]
      - id: nbqa-ruff
        args: ["--ignore=I001"]
        additional_dependencies: [ipython==8.12, ruff]
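With a pre-commit config like the one above in the repository root, contributors would typically enable and run the hooks locally. This is a sketch of the standard `pre-commit` CLI workflow; it assumes the tool is installed via pip and that the config file is named `.pre-commit-config.yaml`:

```shell
# Install the pre-commit tool and register its git hook for this repository
pip install pre-commit
pre-commit install

# Run every configured hook (black, ruff, nbQA) against all files once,
# rather than only against staged changes
pre-commit run --all-files
```

After `pre-commit install`, the hooks also run automatically on every `git commit` and block the commit if a hook fails.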
@ -0,0 +1,17 @@
version: 2

build:
  os: ubuntu-22.04
  tools:
    python: "3.11"

mkdocs:
  configuration: mkdocs.yml

python:
  install:
    - requirements: requirements.txt
@ -0,0 +1,4 @@
rules:
  line-length:
    level: warning
    allow-non-breakable-inline-mappings: true
@ -0,0 +1,128 @@
# Contributor Covenant Code of Conduct

## Our Pledge

We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, religion, or sexual identity
and orientation.

We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.

## Our Standards

Examples of behavior that contributes to a positive environment for our
community include:

* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
  and learning from the experience
* Focusing on what is best not just for us as individuals, but for the
  overall community

Examples of unacceptable behavior include:

* The use of sexualized language or imagery, and sexual attention or
  advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email
  address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
  professional setting

## Enforcement Responsibilities

Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.

Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.

## Scope

This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.

## Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at
kye@apac.ai.
All complaints will be reviewed and investigated promptly and fairly.

All community leaders are obligated to respect the privacy and security of the
reporter of any incident.

## Enforcement Guidelines

Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:

### 1. Correction

**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.

**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.

### 2. Warning

**Community Impact**: A violation through a single incident or series
of actions.

**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or
permanent ban.

### 3. Temporary Ban

**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.

**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.

### 4. Permanent Ban

**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.

**Consequence**: A permanent ban from any sort of public interaction within
the community.

## Attribution

This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.0, available at
https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.

Community Impact Guidelines were inspired by [Mozilla's code of conduct
enforcement ladder](https://github.com/mozilla/diversity).

[homepage]: https://www.contributor-covenant.org

For answers to common questions about this code of conduct, see the FAQ at
https://www.contributor-covenant.org/faq. Translations are available at
https://www.contributor-covenant.org/translations.
@ -0,0 +1,42 @@
# ==================================
# Use an official Python runtime as a parent image
FROM python:3.9-slim

# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

# Set the working directory in the container
WORKDIR /usr/src/swarm_cloud

# Install Python dependencies
# COPY requirements.txt and pyproject.toml if you're using poetry for dependency management
COPY requirements.txt .
RUN pip install --upgrade pip
RUN pip install --no-cache-dir -r requirements.txt

# Install the 'swarms' package, assuming it's available on PyPI
RUN pip install swarms

# Copy the rest of the application
COPY . .

# Add entrypoint script if needed
# COPY ./entrypoint.sh .
# RUN chmod +x /usr/src/swarm_cloud/entrypoint.sh

# Expose port if your application has a web interface
# EXPOSE 5000

# Define environment variable for the swarm to work
# ENV SWARM_API_KEY=your_swarm_api_key_here

# Add Docker CMD or ENTRYPOINT script to run the application
# CMD python your_swarm_startup_script.py
# Or use the entrypoint script if you have one
# ENTRYPOINT ["/usr/src/swarm_cloud/entrypoint.sh"]

# If you're using `CMD` to execute a Python script, make sure it's executable
# RUN chmod +x your_swarm_startup_script.py
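A typical local build and run for this Dockerfile might look like the following. This is a sketch: the image tag `swarm_cloud` and the environment variable are illustrative, and the container will idle unless a `CMD` or `ENTRYPOINT` is uncommented above.

```shell
# Build the image from the repository root, where the Dockerfile lives
docker build -t swarm_cloud .

# Run it, passing any required API keys as environment variables
# (the exact variables depend on which providers your swarm uses)
docker run --rm -e OPENAI_API_KEY="your_openai_api_key_here" swarm_cloud
```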
@ -0,0 +1,201 @@
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!) The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
@ -0,0 +1,324 @@

<div align="center">

Swarms is a modular framework that enables reliable and useful multi-agent collaboration at scale to automate real-world tasks.

[GitHub Issues](https://github.com/kyegomez/swarms/issues) [GitHub Forks](https://github.com/kyegomez/swarms/network) [GitHub Stars](https://github.com/kyegomez/swarms/stargazers) [License](https://github.com/kyegomez/swarms/blob/main/LICENSE) [Star History](https://star-history.com/#kyegomez/swarms) [Dependency Status](https://libraries.io/github/kyegomez/swarms) [Downloads](https://pepy.tech/project/swarms)

[Share on Twitter](https://twitter.com/intent/tweet?text=Check%20out%20this%20amazing%20AI%20project:%20&url=https%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms) [Share on Facebook](https://www.facebook.com/sharer/sharer.php?u=https%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms) [Share on LinkedIn](https://www.linkedin.com/shareArticle?mini=true&url=https%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms&title=&summary=&source=)

[Share on Reddit](https://www.reddit.com/submit?url=https%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms&title=Swarms%20-%20the%20future%20of%20AI) [Share on Hacker News](https://news.ycombinator.com/submitlink?u=https%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms&t=Swarms%20-%20the%20future%20of%20AI) [Share on Pinterest](https://pinterest.com/pin/create/button/?url=https%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms&media=https%3A%2F%2Fexample.com%2Fimage.jpg&description=Swarms%20-%20the%20future%20of%20AI) [Share on WhatsApp](https://api.whatsapp.com/send?text=Check%20out%20Swarms%20-%20the%20future%20of%20AI%20%23swarms%20%23AI%0A%0Ahttps%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms)

</div>

----

## Installation
`pip3 install --upgrade swarms`

---

## Usage

Run example in Colab: <a target="_blank" href="https://colab.research.google.com/github/kyegomez/swarms/blob/master/playground/swarms_example.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>

### `Agent` Example
- A reliable structure that gives LLMs autonomy
- Extremely customizable with stopping conditions, interactivity, dynamic temperature, loop intervals, and much more
- Enterprise and production grade: `Agent` is designed and optimized for automating real-world tasks at scale!

```python
import os

from dotenv import load_dotenv

# Import the OpenAIChat model and the Agent struct
from swarms.models import OpenAIChat
from swarms.structs import Agent

# Load the environment variables
load_dotenv()

# Get the API key from the environment
api_key = os.environ.get("OPENAI_API_KEY")

# Initialize the language model
llm = OpenAIChat(
    temperature=0.5,
    openai_api_key=api_key,
)

# Initialize the workflow
flow = Agent(llm=llm, max_loops=1, dashboard=True)

# Run the workflow on a task
out = flow.run("Generate a 10,000 word blog on health and wellness.")
```

------

### `SequentialWorkflow`
- A sequential swarm of autonomous agents where each agent's output is fed into the next agent
- Save and restore workflow states!
- Integrate `Agent`s with various LLMs and multi-modality models

```python
from swarms.models import OpenAIChat, BioGPT, Anthropic
from swarms.structs import Agent, SequentialWorkflow

# Example usage
api_key = ""  # Your actual API key here

# Initialize the language model
llm = OpenAIChat(
    openai_api_key=api_key,
    temperature=0.5,
    max_tokens=3000,
)

# Initialize BioGPT
biochat = BioGPT()

# Use Anthropic
anthropic = Anthropic()

# Initialize the agent with the language model
agent1 = Agent(llm=llm, max_loops=1, dashboard=False)

# Create another agent for a different task
agent2 = Agent(llm=llm, max_loops=1, dashboard=False)

# Create another agent for a different task
agent3 = Agent(llm=biochat, max_loops=1, dashboard=False)

# agent4 = Agent(llm=anthropic, max_loops="auto")

# Create the workflow
workflow = SequentialWorkflow(max_loops=1)

# Add tasks to the workflow
workflow.add("Generate a 10,000 word blog on health and wellness.", agent1)

# Suppose the next task takes the output of the first task as input
workflow.add("Summarize the generated blog", agent2)

workflow.add("Create a references sheet of materials for the curriculum", agent3)

# Run the workflow
workflow.run()

# Output the results
for task in workflow.tasks:
    print(f"Task: {task.description}, Result: {task.result}")
```
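The save-and-restore bullet above is not demonstrated in the example. As a generic sketch of persisting workflow state between runs (the helper names and JSON layout here are illustrative assumptions, not the actual `SequentialWorkflow` API):

```python
# Generic sketch of saving/restoring workflow state as JSON; the actual
# SequentialWorkflow persistence API may differ (these helpers are
# illustrative, not taken from the swarms docs above).
import json

tasks = [
    {"description": "Generate a 10,000 word blog on health and wellness.", "result": None},
    {"description": "Summarize the generated blog", "result": None},
]

def save_state(path, tasks):
    # Persist task descriptions and any results produced so far.
    with open(path, "w") as f:
        json.dump({"tasks": tasks}, f)

def load_state(path):
    with open(path) as f:
        return json.load(f)["tasks"]

save_state("workflow_state.json", tasks)
restored = load_state("workflow_state.json")
assert restored == tasks
```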

## `Multi Modal Autonomous Agents`
- Run the agent with multiple modalities, useful for various real-world tasks in manufacturing, logistics, and health.

```python
from swarms.structs import Agent
from swarms.models.gpt4_vision_api import GPT4VisionAPI
from swarms.prompts.multi_modal_autonomous_instruction_prompt import (
    MULTI_MODAL_AUTO_AGENT_SYSTEM_PROMPT_1,
)

llm = GPT4VisionAPI()

task = (
    "Analyze this image of an assembly line and identify any issues such as"
    " misaligned parts, defects, or deviations from the standard assembly"
    " process. If there is anything unsafe in the image, explain why it is"
    " unsafe and how it could be improved."
)
img = "assembly_line.jpg"

# Initialize the workflow
flow = Agent(
    llm=llm,
    max_loops="auto",
    sop=MULTI_MODAL_AUTO_AGENT_SYSTEM_PROMPT_1,
    dashboard=True,
)

flow.run(task=task, img=img)
```

---

# Features 🤖
The Swarms framework is designed with a strong emphasis on reliability, performance, and production-grade readiness.
Below are the key features that make Swarms an ideal choice for enterprise-level AI deployments.

## 🚀 Production-Grade Readiness
- **Scalable Architecture**: Built to scale effortlessly with your growing business needs.
- **Enterprise-Level Security**: Incorporates top-notch security features to safeguard your data and operations.
- **Containerization and Microservices**: Easily deployable in containerized environments, supporting microservices architecture.

## ⚙️ Reliability and Robustness
- **Fault Tolerance**: Designed to handle failures gracefully, ensuring uninterrupted operations.
- **Consistent Performance**: Maintains high performance even under heavy loads or complex computational demands.
- **Automated Backup and Recovery**: Features automatic backup and recovery processes, reducing the risk of data loss.

## 💡 Advanced AI Capabilities

The Swarms framework is equipped with a suite of advanced AI capabilities designed to cater to a wide range of applications and scenarios, ensuring versatility and cutting-edge performance.

### Multi-Modal Autonomous Agents
- **Versatile Model Support**: Seamlessly works with various AI models, including NLP, computer vision, and more, for comprehensive multi-modal capabilities.
- **Context-Aware Processing**: Employs context-aware processing techniques to ensure relevant and accurate responses from agents.

### Function Calling Models for API Execution
- **Automated API Interactions**: Function calling models that can autonomously execute API calls, enabling seamless integration with external services and data sources.
- **Dynamic Response Handling**: Capable of processing and adapting to responses from APIs for real-time decision making.

### Varied Architectures of Swarms
- **Flexible Configuration**: Supports multiple swarm architectures, from centralized to decentralized, for diverse application needs.
- **Customizable Agent Roles**: Allows customization of agent roles and behaviors within the swarm to optimize performance and efficiency.

### Generative Models
- **Advanced Generative Capabilities**: Incorporates state-of-the-art generative models to create content, simulate scenarios, or predict outcomes.
- **Creative Problem Solving**: Utilizes generative AI for innovative problem-solving approaches and idea generation.

### Enhanced Decision-Making
- **AI-Powered Decision Algorithms**: Employs advanced algorithms for swift and effective decision-making in complex scenarios.
- **Risk Assessment and Management**: Capable of assessing risks and managing uncertain situations with AI-driven insights.

### Real-Time Adaptation and Learning
- **Continuous Learning**: Agents can continuously learn and adapt from new data, improving their performance and accuracy over time.
- **Environment Adaptability**: Designed to adapt to different operational environments, enhancing robustness and reliability.

## 🔄 Efficient Workflow Automation
- **Streamlined Task Management**: Simplifies complex tasks with automated workflows, reducing manual intervention.
- **Customizable Workflows**: Offers customizable workflow options to fit specific business needs and requirements.
- **Real-Time Analytics and Reporting**: Provides real-time insights into agent performance and system health.

## 🌐 Wide-Ranging Integration
- **API-First Design**: Easily integrates with existing systems and third-party applications via robust APIs.
- **Cloud Compatibility**: Fully compatible with major cloud platforms for flexible deployment options.
- **Continuous Integration/Continuous Deployment (CI/CD)**: Supports CI/CD practices for seamless updates and deployment.

## 📊 Performance Optimization
- **Resource Management**: Efficiently manages computational resources for optimal performance.
- **Load Balancing**: Automatically balances workloads to maintain system stability and responsiveness.
- **Performance Monitoring Tools**: Includes comprehensive monitoring tools for tracking and optimizing performance.

## 🛡️ Security and Compliance
- **Data Encryption**: Implements end-to-end encryption for data at rest and in transit.
- **Compliance Standards Adherence**: Adheres to major compliance standards, ensuring legal and ethical usage.
- **Regular Security Updates**: Regular updates to address emerging security threats and vulnerabilities.

## 💬 Community and Support
- **Extensive Documentation**: Detailed documentation for easy implementation and troubleshooting.
- **Active Developer Community**: A vibrant community for sharing ideas, solutions, and best practices.
- **Professional Support**: Access to professional support for enterprise-level assistance and guidance.

The Swarms framework is not just a tool but a robust, scalable, and secure partner in your AI journey, ready to tackle the challenges of modern AI applications in a business environment.

## Documentation
- For documentation, visit [swarms.apac.ai](https://swarms.apac.ai)

## 🫶 Contributions:

Swarms is an open-source project, and contributions are welcome. If you want to contribute, you can create new features, fix bugs, or improve the infrastructure. Please refer to [CONTRIBUTING.md](https://github.com/kyegomez/swarms/blob/master/CONTRIBUTING.md) and our [contributing board](https://github.com/users/kyegomez/projects/1) for more information on how to contribute.

<a href="https://github.com/kyegomez/swarms/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=kyegomez/swarms" />
</a>

## Community
- [Join the Swarms community on Discord!](https://discord.gg/AJazBmhKnr)
- Join our Swarms Community Gathering every Thursday at 1pm NYC time to unlock the potential of autonomous agents in automating your daily tasks. [Sign up here](https://lu.ma/5p2jnc2v)

## Discovery Call
Book a discovery call with the Swarms team to learn how to optimize and scale your swarm! [Click here to book a time that works for you!](https://calendly.com/swarm-corp/30min?month=2023-11)

# License
MIT
@ -0,0 +1,32 @@

# Security Policy

## Supported Versions

| Version | Supported |
| --- | --- |
| 2.0.5 | :white_check_mark: |
| 2.0.4 | :white_check_mark: |
| 2.0.3 | :white_check_mark: |
| 2.0.2 | :white_check_mark: |
| 2.0.1 | :white_check_mark: |
| 2.0.0 | :white_check_mark: |

## Reporting a Vulnerability

If you discover a security vulnerability in any of the above versions, please report it immediately to our security team by sending an email to kye@apac.ai. We take security vulnerabilities seriously and appreciate your efforts in disclosing them responsibly.

Please provide detailed information on the vulnerability, including steps to reproduce, potential impact, and any known mitigations. Our security team will acknowledge receipt of your report within 24 hours and will provide regular updates on the progress of the investigation.

Once the vulnerability has been thoroughly assessed, we will take the necessary steps to address it. This may include releasing a security patch, issuing a security advisory, or implementing other appropriate mitigations.

We aim to respond to all vulnerability reports in a timely manner and work towards resolving them as quickly as possible. We thank you for your contribution to the security of our software.

Please note that any vulnerability reports that are not related to the specified versions or do not provide sufficient information may be declined.
@ -0,0 +1,23 @@

#!/bin/bash

# Navigate to the directory containing the 'swarms' folder
# cd /path/to/your/code/directory

# Run autopep8 with max aggressiveness (-aaa) and in-place modification (-i)
# on all Python files (*.py) under the 'swarms' directory.
autopep8 --in-place --aggressive --aggressive --recursive --experimental --list-fixes swarms/

# Run black with default settings, since black does not have an aggressiveness level.
# Black will format all Python files it finds in the 'swarms' directory.
black --experimental-string-processing swarms/

# Run ruff on the 'swarms' directory.
# Add any additional flags if needed according to your version of ruff.
# ruff check --fix swarms/

# Run YAPF
yapf --recursive --in-place --verbose --style=google --parallel swarms
@ -0,0 +1,42 @@

## **Applications of Swarms: Revolutionizing Customer Support**

---

**Introduction**:
In today's fast-paced digital world, responsive and efficient customer support is a linchpin for business success. The introduction of AI-driven swarms in the customer support domain can transform the way businesses interact with and assist their customers. By leveraging the combined power of multiple AI agents working in concert, businesses can achieve unprecedented levels of efficiency, customer satisfaction, and operational cost savings.

---

### **The Benefits of Using Swarms for Customer Support:**

1. **24/7 Availability**: Swarms never sleep. Customers receive instantaneous support at any hour, ensuring constant satisfaction and loyalty.

2. **Infinite Scalability**: Whether it's ten inquiries or ten thousand, swarms can handle fluctuating volumes with ease, eliminating the need for vast human teams and minimizing response times.

3. **Adaptive Intelligence**: Swarms learn collectively, meaning that a solution found for one customer can be instantly applied to benefit all. This leads to constantly improving support experiences, evolving with every interaction.

---

### **Features - Reinventing Customer Support**:

- **AI Inbox Monitor**: Continuously scans email inboxes, identifying and categorizing support requests for swift responses.

- **Intelligent Debugging**: Proactively helps customers by diagnosing and troubleshooting underlying issues.

- **Automated Refunds & Coupons**: Seamless integration with payment systems like Stripe allows for instant issuance of refunds or coupons if a problem remains unresolved.

- **Full System Integration**: Holistically connects with CRM, email systems, and payment portals, ensuring a cohesive and unified support experience.

- **Conversational Excellence**: With advanced LLMs (Large Language Models), the swarm agents can engage in natural, human-like conversations, enhancing customer comfort and trust.

- **Rule-based Operation**: By working with rule engines, swarms ensure that all actions adhere to company guidelines, ensuring consistent, error-free support.

- **Turing Test Ready**: Crafted to meet and exceed the Turing Test standards, ensuring that every customer interaction feels genuine and personal.

---

**Conclusion**:
Swarms are not just another technological advancement; they represent the future of customer support. Their ability to provide round-the-clock, scalable, and continuously improving support can redefine customer experience standards. By adopting swarms, businesses can stay ahead of the curve, ensuring unparalleled customer loyalty and satisfaction.

**Experience the future of customer support. Dive into the swarm revolution.**
@ -0,0 +1,103 @@

## Usage Documentation: Discord Bot with Advanced Features

---

### Overview:

This code provides a structure for a Discord bot with advanced features such as voice channel interactions, image generation, and text-based interactions using OpenAI models.

---

### Setup:

1. Ensure that the necessary libraries are installed:
```bash
pip install discord.py python-dotenv dalle3 invoke openai
```

2. Create a `.env` file in the same directory as your bot script and add the following:
```
DISCORD_TOKEN=your_discord_bot_token
STORAGE_SERVICE=your_storage_service_endpoint
SAVE_DIRECTORY=path_to_save_generated_images
```

---

### Bot Class and its Methods:

#### `__init__(self, agent, llm, command_prefix="!")`:

Initializes the bot with the given agent, language model (`llm`), and a command prefix (default is `!`).

#### `add_command(self, name, func)`:

Allows you to dynamically add new commands to the bot. The `name` is the command's name and `func` is the function to execute when the command is called.
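
As a rough sketch of the `add_command` pattern (the `Bot` class and `ping` command below are illustrative stand-ins, not the actual `apps.discord` implementation, which wires commands into discord.py handlers):

```python
# Illustrative stand-in showing the dynamic command-registration pattern
# documented above; the real Bot class dispatches via discord.py events.
class Bot:
    def __init__(self, command_prefix="!"):
        self.command_prefix = command_prefix
        self.commands = {}  # command name -> callable

    def add_command(self, name, func):
        # Register a new command dynamically.
        self.commands[name] = func

    def handle(self, message):
        # Dispatch a raw message like "!ping" to its handler.
        if not message.startswith(self.command_prefix):
            return None
        name, _, arg = message[len(self.command_prefix):].partition(" ")
        if name in self.commands:
            return self.commands[name](arg)
        return None


bot = Bot()
bot.add_command("ping", lambda arg: "pong")
print(bot.handle("!ping"))  # -> pong
```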

#### `run(self)`:

Starts the bot using the `DISCORD_TOKEN` from the `.env` file.

---

### Commands:

1. **!greet**: Greets the user.

2. **!help_me**: Provides a list of commands and their descriptions.

3. **!join**: Joins the voice channel the user is in.

4. **!leave**: Leaves the voice channel the bot is currently in.

5. **!listen**: Starts listening to voice in the current voice channel and records the audio.

6. **!generate_image [prompt]**: Generates images based on the provided prompt using the DALL-E 3 model.

7. **!send_text [text] [use_agent=True]**: Sends the provided text to the worker (either the agent or the LLM) and returns the response.

---

### Usage:

Initialize the `llm` (large language model) with your OpenAI API key:

```python
from swarms.models import OpenAIChat

llm = OpenAIChat(
    openai_api_key="Your_OpenAI_API_Key",
    temperature=0.5,
)
```

Initialize the bot with the `llm`:

```python
from apps.discord import Bot

bot = Bot(llm=llm)
```

Send a task to the bot:

```python
task = "What were the winning Boston Marathon times for the past 5 years (ending in 2022)? Generate a table of the year, name, country of origin, and times."
bot.send_text(task)
```

Start the bot:

```python
bot.run()
```

---

### Additional Notes:

- The bot makes use of the `dalle3` library for image generation. Ensure you have the model and necessary setup for it.

- For the storage service, you might want to integrate with a cloud service like Google Cloud Storage or AWS S3 to store and retrieve generated images. The given code assumes a method `.upload()` for the storage service to upload files.

- Ensure that you've granted the bot the necessary permissions on Discord, especially if you want to use voice channel features.

- Handle API keys and tokens securely. Avoid hardcoding them directly into your code. Use environment variables or secure secret management tools.
@ -0,0 +1,7 @@

.md-typeset__table {
  min-width: 100%;
}

.md-typeset table:not([class]) {
  display: table;
}
@ -0,0 +1,123 @@

# Contributing

Thank you for your interest in contributing to Swarms! We welcome contributions from the community to help improve usability and readability. By contributing, you can be a part of creating a dynamic and interactive AI system.

To get started, please follow the guidelines below.

## Optimization Priorities

To continuously improve Swarms, we prioritize the following design objectives:

1. **Usability**: Increase the ease of use and user-friendliness of the swarm system to facilitate adoption and interaction with basic input.

2. **Reliability**: Improve the swarm's ability to obtain the desired output even with basic and un-detailed input.

3. **Speed**: Reduce the time it takes for the swarm to accomplish tasks by improving the communication layer, critiquing, and self-alignment with meta prompting.

4. **Scalability**: Ensure that the system is asynchronous, concurrent, and self-healing to support scalability.

Our goal is to continuously improve Swarms by following this roadmap while also being adaptable to new needs and opportunities as they arise.

## Join the Swarms Community

Join the Swarms community on Discord to connect with other contributors, coordinate work, and receive support.

- [Join the Swarms Discord Server](https://discord.gg/qUtxnK2NMf)

## Report an Issue
The easiest way to contribute to our docs is through our public [issue tracker](https://github.com/kyegomez/swarms-docs/issues). Feel free to submit bugs, request features or changes, or contribute to the project directly.

## Pull Requests

Swarms docs are built using [MkDocs](https://squidfunk.github.io/mkdocs-material/getting-started/).

To directly contribute to Swarms documentation, first fork the [swarms-docs](https://github.com/kyegomez/swarms-docs) repository to your GitHub account. Then clone your repository to your local machine.

From inside the directory run:

```pip install -r requirements.txt```

To run `swarms-docs` locally run:

```mkdocs serve```

You should see something similar to the following:

```
INFO - Building documentation...
INFO - Cleaning site directory
INFO - Documentation built in 0.19 seconds
INFO - [09:28:33] Watching paths for changes: 'docs', 'mkdocs.yml'
INFO - [09:28:33] Serving on http://127.0.0.1:8000/
INFO - [09:28:37] Browser connected: http://127.0.0.1:8000/
```

Follow the typical PR process to contribute changes.

* Create a feature branch.
* Commit changes.
* Submit a PR.

---

## Taking on Tasks

We have a growing list of tasks and issues that you can contribute to. To get started, follow these steps:

1. Visit the [Swarms GitHub repository](https://github.com/kyegomez/swarms) and browse through the existing issues.

2. Find an issue that interests you and make a comment stating that you would like to work on it. Include a brief description of how you plan to solve the problem and any questions you may have.

3. Once a project coordinator assigns the issue to you, you can start working on it.

If you come across an issue that is unclear but still interests you, please post in the Discord server mentioned above. Someone from the community will be able to help clarify the issue in more detail.

We also welcome contributions to documentation, such as updating markdown files, adding docstrings, creating system architecture diagrams, and other related tasks.

## Submitting Your Work

To contribute your changes to Swarms, please follow these steps:

1. Fork the Swarms repository to your GitHub account. You can do this by clicking on the "Fork" button on the repository page.

2. Clone the forked repository to your local machine using the `git clone` command.

3. Before making any changes, make sure to sync your forked repository with the original repository to keep it up to date. You can do this by following the instructions [here](https://docs.github.com/en/github/collaborating-with-pull-requests/syncing-a-fork).

4. Create a new branch for your changes. This branch should have a descriptive name that reflects the task or issue you are working on.

5. Make your changes in the branch, focusing on a small, focused change that only affects a few files.

6. Run any necessary formatting or linting tools to ensure that your changes adhere to the project's coding standards.

7. Once your changes are ready, commit them to your branch with descriptive commit messages.

8. Push the branch to your forked repository.

9. Create a pull request (PR) from your branch to the main Swarms repository. Provide a clear and concise description of your changes in the PR.

10. Request a review from the project maintainers. They will review your changes, provide feedback, and suggest any necessary improvements.

11. Make any required updates or address any feedback provided during the review process.

12. Once your changes have been reviewed and approved, they will be merged into the main branch of the Swarms repository.

13. Congratulations! You have successfully contributed to Swarms.
|
||||
|
||||
Please note that during the review process, you may be asked to make changes or address certain issues. It is important to engage in open and constructive communication with the project maintainers to ensure the quality of your contributions.
|
||||
|
||||
## Developer Setup
|
||||
|
||||
If you are interested in setting up the Swarms development environment, please follow the instructions provided in the [developer setup guide](docs/developer-setup.md). This guide provides an overview of the different tools and technologies used in the project.
|
||||
|
||||
## Join the Agora Community
|
||||
|
||||
Swarms is brought to you by Agora, the open-source AI research organization. Join the Agora community to connect with other researchers and developers working on AI projects.
|
||||
|
||||
- [Join the Agora Discord Server](https://discord.gg/qUtxnK2NMf)
|
||||
|
||||
Thank you for your contributions and for being a part of the Swarms and Agora community! Together, we can advance Humanity through the power of AI.
|
@ -0,0 +1,358 @@
# Architecture

## **1. Introduction**

In today's rapidly evolving digital world, harnessing the collaborative power of multiple computational agents is more crucial than ever. Swarms represents a bold stride in this direction: a scalable and dynamic framework designed to enable swarms of agents to function in harmony and tackle complex tasks. This document serves as a comprehensive guide, elucidating the underlying architecture and strategies pivotal to realizing the Swarms vision.

---

## **2. The Vision**

At its heart, the Swarms framework seeks to emulate the collaborative efficiency witnessed in natural systems, like ant colonies or bird flocks. These entities, though individually simple, achieve remarkable outcomes through collaboration. Similarly, Swarms will unleash the collective potential of numerous agents operating cohesively.

---

## **3. Architecture Overview**

### **3.1 Agent Level**

The base level that serves as the building block for all further complexity.

#### Mechanics:

* **Model**: At its core, each agent harnesses a powerful model like OpenAI's GPT.
* **Vectorstore**: A memory structure allowing agents to store and retrieve information.
* **Tools**: Utilities and functionalities that aid in the agent's task execution.

#### Interaction:

Agents interact with the external world through their model and tools. The vectorstore aids in retaining knowledge and facilitating inter-agent communication.
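The three mechanics above can be sketched as a single class. This is a minimal illustration under stated assumptions (a callable model, a plain list standing in for the vectorstore, a dict of tool callables), not the actual Swarms API:

```python
class Agent:
    """Minimal sketch of the agent level: a model plus a vectorstore and tools."""

    def __init__(self, model, tools=None):
        self.model = model        # assumed: callable mapping prompt -> completion
        self.tools = tools or {}  # name -> callable utility
        self.memory = []          # stand-in vectorstore: (text, vector) pairs

    def remember(self, text, vector):
        self.memory.append((text, vector))

    def use_tool(self, name, *args):
        return self.tools[name](*args)

    def run(self, prompt):
        return self.model(prompt)


# Usage with a stubbed model and one tool:
agent = Agent(model=lambda p: f"echo: {p}", tools={"upper": str.upper})
agent.remember("greeting", [0.1, 0.2])
result = agent.run("hello")
```

A real agent would swap the lambda for a model client and the list for an actual vector database, but the shape of the composition stays the same.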
### **3.2 Worker Infrastructure Level**

Building on the agent foundation, enhancing capability and readiness for swarm integration.

#### Mechanics:

* **Human Input Integration**: Enables agents to accept and understand human-provided instructions.
* **Unique Identifiers**: Assigns each agent a unique ID to facilitate tracking and communication.
* **Asynchronous Tools**: Bolsters agents' capability to multitask and interact in real time.

#### Interaction:

Each worker is an enhanced agent, capable of operating independently or in sync with its peers, allowing for dynamic, scalable operations.

### **3.3 Swarm Level**

Multiple worker nodes orchestrated into a synchronized, collaborative entity.

#### Mechanics:

* **Orchestrator**: The maestro, responsible for directing the swarm, task allocation, and communication.
* **Scalable Communication Layer**: Facilitates interactions among nodes and between nodes and the orchestrator.
* **Task Assignment & Completion Protocols**: Structured procedures ensuring tasks are efficiently distributed and concluded.

#### Interaction:

Nodes collaborate under the orchestrator's guidance, ensuring tasks are partitioned appropriately, executed, and their results consolidated.

### **3.4 Hivemind Level**

Envisioned as a "swarm of swarms", an upper echelon of collaboration.

#### Mechanics:

* **Hivemind Orchestrator**: Oversees multiple swarm orchestrators, ensuring harmony on a grand scale.
* **Inter-Swarm Communication Protocols**: Dictates how swarms interact, exchange information, and co-execute tasks.

#### Interaction:

Multiple swarms, each a formidable force, combine their prowess under the Hivemind. This level tackles monumental tasks by dividing them among swarms.

---
## **4. Building the Framework: A Task Checklist**

### **4.1 Foundations: Agent Level**

* Define and standardize agent properties.
* Integrate the desired model (e.g., OpenAI's GPT) with the agent.
* Implement vectorstore mechanisms: storage, retrieval, and communication protocols.
* Incorporate essential tools and utilities.
* Conduct preliminary testing: ensure agents can execute basic tasks and utilize the vectorstore.

### **4.2 Enhancements: Worker Infrastructure Level**

* Interface agents with human input mechanisms.
* Assign and manage unique identifiers for each worker.
* Integrate asynchronous capabilities: ensure real-time response and multitasking.
* Test worker nodes for both solitary and collaborative tasks.

### **4.3 Cohesion: Swarm Level**

* Design and develop the orchestrator: ensure it can manage multiple worker nodes.
* Establish a scalable and efficient communication layer.
* Implement task distribution and retrieval protocols.
* Test swarms for efficiency, scalability, and robustness.

### **4.4 Apex Collaboration: Hivemind Level**

* Build the Hivemind Orchestrator: ensure it can oversee multiple swarms.
* Define inter-swarm communication, prioritization, and task-sharing protocols.
* Develop mechanisms to balance loads and optimize resource utilization across swarms.
* Thoroughly test the Hivemind level for macro-task execution.

---

## **5. Integration and Communication Mechanisms**

### **5.1 Vectorstore as the Universal Communication Layer**

Serving as the memory and communication backbone, the vectorstore must:

* Facilitate rapid storage and retrieval of high-dimensional vectors.
* Enable similarity-based lookups, crucial for recognizing patterns or finding similar outputs.
* Scale seamlessly as the agent count grows.
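The similarity-based lookup in the second bullet can be sketched with plain cosine similarity over an in-memory store. This is illustrative only; a production vectorstore would use an indexed database rather than a linear scan:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def most_similar(store, query):
    """store: list of (key, vector) pairs; returns the key closest to query."""
    return max(store, key=lambda item: cosine(item[1], query))[0]


# Toy 2-dimensional "embeddings":
store = [("cats", [1.0, 0.0]), ("dogs", [0.9, 0.1]), ("stocks", [0.0, 1.0])]
match = most_similar(store, [1.0, 0.05])
```

Real embeddings have hundreds of dimensions, but the lookup semantics are identical.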
### **5.2 Orchestrator-Driven Communication**

* Orchestrators, at both the swarm and hivemind levels, should employ adaptive algorithms to optimally distribute tasks.
* Ensure real-time monitoring of task execution and worker node health.
* Integrate feedback loops: allow for dynamic task reassignment in case of node failures or inefficiencies.

---

## **6. Conclusion & Forward Path**

The Swarms framework, once realized, will usher in a new era of computational efficiency and collaboration. While the roadmap ahead is intricate, with diligent planning, development, and testing, Swarms will redefine the boundaries of collaborative computing.

--------

# Overview

### 1. Model

**Overview:**
The foundational level, where a trained model (e.g., an OpenAI GPT model) is initialized. It is the base on which further abstraction levels build. It provides the core capabilities to perform tasks, answer queries, and so on.

**Diagram:**
```
[ Model (OpenAI) ]
```

### 2. Agent Level

**Overview:**
At the agent level, the raw model is coupled with tools and a vector store, allowing it to be more than just a model. The agent can now remember, use tools, and become a more versatile entity ready for integration into larger systems.

**Diagram:**
```
+-----------------+
|      Agent      |
| +-------------+ |
| |    Model    | |
| +-------------+ |
| +-------------+ |
| | VectorStore | |
| +-------------+ |
| +-------------+ |
| |    Tools    | |
| +-------------+ |
+-----------------+
```
### 3. Worker Infrastructure Level

**Overview:**
The worker infrastructure is a step above individual agents. Here, an agent is paired with additional utilities like human input and other tools, making it a more advanced, responsive unit capable of complex tasks.

**Diagram:**
```
+---------------+
|  WorkerNode   |
| +-----------+ |
| |   Agent   | |
| | +-------+ | |
| | | Model | | |
| | +-------+ | |
| | +-------+ | |
| | | Tools | | |
| | +-------+ | |
| +-----------+ |
|               |
| +-----------+ |
| |Human Input| |
| +-----------+ |
|               |
| +-------+     |
| | Tools |     |
| +-------+     |
+---------------+
```

### 4. Swarm Level

**Overview:**
At the swarm level, the orchestrator is central. It is responsible for assigning tasks to worker nodes, monitoring their completion, and handling the communication layer (for example, through a vector store or another universal communication mechanism) between worker nodes.

**Diagram:**
```
                    +------------+
                    |Orchestrator|
                    +------------+
                          |
            +---------------------------+
            |                           |
            | Swarm-level Communication |
            | Layer (e.g., Vector Store)|
            +---------------------------+
               /          |          \
 +---------------+ +---------------+ +---------------+
 | WorkerNode 1  | | WorkerNode 2  | | WorkerNode n  |
 +---------------+ +---------------+ +---------------+
 | Task Assigned | | Task Completed| | Communication |
 +---------------+ +---------------+ +---------------+
```
### 5. Hivemind Level

**Overview:**
The Hivemind level is a multi-swarm setup, with an upper-layer orchestrator managing multiple swarm-level orchestrators. The Hivemind orchestrator is responsible for broader tasks like assigning macro-tasks to swarms, handling inter-swarm communications, and ensuring the overall system is functioning smoothly.

**Diagram:**
```
                          +--------------+
                          |   Hivemind   |
                          | Orchestrator |
                          +--------------+
                        /        |        \
            +------------+ +------------+ +------------+
            |Orchestrator| |Orchestrator| |Orchestrator|
            +------------+ +------------+ +------------+
                  |              |              |
          +-------------+ +-------------+ +-------------+
          | Swarm-level | | Swarm-level | | Swarm-level |
          |Communication| |Communication| |Communication|
          |    Layer    | |    Layer    | |    Layer    |
          +-------------+ +-------------+ +-------------+
             /       \       /       \       /       \
      +-------+ +-------+ +-------+ +-------+ +-------+
      |Worker | |Worker | |Worker | |Worker | |Worker |
      | Node  | | Node  | | Node  | | Node  | | Node  |
      +-------+ +-------+ +-------+ +-------+ +-------+
```

This setup allows the Hivemind level to operate at a grander scale, with the capability to manage hundreds or even thousands of worker nodes across multiple swarms efficiently.

-------
# **Swarms Framework Development Strategy Checklist**

## **Introduction**

The development of the Swarms framework requires a systematic and granular approach to ensure that each component is robust and that the overall framework is efficient and scalable. This checklist will serve as a guide to building Swarms from the ground up, breaking down tasks into small, manageable pieces.

---

## **1. Agent Level Development**

### **1.1 Model Integration**
- [ ] Research the most suitable models (e.g., OpenAI's GPT).
- [ ] Design an API for the agent to call the model.
- [ ] Implement error handling for when model calls fail.
- [ ] Test the model with sample data for accuracy and speed.
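Error handling for model calls (the third item above) often takes the form of a retry with exponential backoff. The sketch below uses a stand-in `call_model` callable rather than any real provider SDK:

```python
import time

def call_with_retries(call_model, prompt, retries=3, base_delay=0.01):
    """Retry a flaky model call with exponential backoff; re-raise after the last attempt."""
    for attempt in range(retries):
        try:
            return call_model(prompt)
        except Exception:
            if attempt == retries - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)


# Stub model that fails twice, then succeeds:
attempts = {"n": 0}

def flaky_model(prompt):
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("transient failure")
    return f"ok: {prompt}"

answer = call_with_retries(flaky_model, "hi")
```

In practice the retry policy would also distinguish retryable errors (rate limits, timeouts) from permanent ones (bad requests).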
### **1.2 Vectorstore Implementation**
- [ ] Design the schema for the vector storage system.
- [ ] Implement storage methods to add, delete, and update vectors.
- [ ] Develop retrieval methods optimized for speed.
- [ ] Create protocols for vector-based communication between agents.
- [ ] Conduct stress tests to ascertain storage and retrieval speed.

### **1.3 Tools & Utilities Integration**
- [ ] List the essential tools required for agent functionality.
- [ ] Develop or integrate APIs for each tool.
- [ ] Implement error handling and logging for tool interactions.
- [ ] Validate tool integration with unit tests.

---

## **2. Worker Infrastructure Level Development**

### **2.1 Human Input Integration**
- [ ] Design a UI/UX for human interaction with worker nodes.
- [ ] Create APIs for input collection.
- [ ] Implement input validation and error handling.
- [ ] Test human input methods for clarity and ease of use.

### **2.2 Unique Identifier System**
- [ ] Research optimal formats for unique ID generation.
- [ ] Develop methods for generating and assigning IDs to agents.
- [ ] Implement a tracking system to manage and monitor agents via IDs.
- [ ] Validate the uniqueness and reliability of the ID system.
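A minimal version of the identifier system above can be built on UUID4, which gives collision-resistant IDs without coordination. The registry class here is an assumed illustration, not Swarms code:

```python
import uuid

class AgentRegistry:
    """Assign each agent a UUID4 and track it by ID."""

    def __init__(self):
        self.agents = {}  # id -> agent object (strings here for brevity)

    def register(self, agent):
        agent_id = str(uuid.uuid4())
        self.agents[agent_id] = agent
        return agent_id

    def lookup(self, agent_id):
        return self.agents[agent_id]


registry = AgentRegistry()
id_a = registry.register("worker-a")
id_b = registry.register("worker-b")
```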
### **2.3 Asynchronous Operation Tools**
- [ ] Incorporate libraries/frameworks to enable asynchrony.
- [ ] Ensure tasks within an agent can run in parallel without conflict.
- [ ] Test asynchronous operations for efficiency improvements.
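In Python, the standard route to the asynchrony described above is `asyncio`. A brief sketch of two agent tasks running concurrently (the task names and delays are invented for illustration):

```python
import asyncio

async def run_task(name, delay):
    # Simulate non-blocking work inside one agent.
    await asyncio.sleep(delay)
    return f"{name} done"

async def run_all():
    # gather() runs the coroutines concurrently and preserves argument order.
    return await asyncio.gather(
        run_task("fetch", 0.02),
        run_task("summarize", 0.01),
    )

results = asyncio.run(run_all())
```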
---

## **3. Swarm Level Development**

### **3.1 Orchestrator Design & Development**
- [ ] Draft a blueprint of orchestrator functionalities.
- [ ] Implement methods for task distribution among worker nodes.
- [ ] Develop communication protocols for the orchestrator to monitor workers.
- [ ] Create feedback systems to detect and address worker node failures.
- [ ] Test the orchestrator with a mock swarm to ensure efficient task allocation.
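Two of the items above — task distribution and handling worker failure — can be sketched together. This is a deliberately simple round-robin scheme with requeueing, assumed for illustration only:

```python
class Orchestrator:
    """Round-robin task distribution with requeueing when a worker fails."""

    def __init__(self, workers):
        self.workers = list(workers)  # worker names; real workers would be objects
        self._next = 0
        self.assignments = {}         # worker -> list of tasks

    def dispatch(self, task):
        worker = self.workers[self._next % len(self.workers)]
        self._next += 1
        self.assignments.setdefault(worker, []).append(task)
        return worker

    def mark_failed(self, worker):
        # Remove a dead worker and requeue its tasks to the survivors.
        orphaned = self.assignments.pop(worker, [])
        self.workers.remove(worker)
        return [self.dispatch(t) for t in orphaned]


orc = Orchestrator(["w1", "w2"])
orc.dispatch("t1")                 # goes to "w1"
orc.dispatch("t2")                 # goes to "w2"
reassigned = orc.mark_failed("w1") # "t1" moves to "w2"
```

An adaptive orchestrator would weight dispatch by worker load and latency instead of strict rotation.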
### **3.2 Communication Layer Development**
- [ ] Select a suitable communication protocol/framework (e.g., gRPC, WebSockets).
- [ ] Design the architecture for scalable, low-latency communication.
- [ ] Implement methods for sending, receiving, and broadcasting messages.
- [ ] Test the communication layer for reliability, speed, and error handling.
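While the real layer would sit on gRPC or WebSockets as noted above, the send/receive/broadcast semantics can be shown with an in-memory stand-in:

```python
class MessageBus:
    """In-memory stand-in for the swarm communication layer."""

    def __init__(self):
        self.inboxes = {}  # node name -> list of messages

    def join(self, node):
        self.inboxes[node] = []

    def send(self, node, message):
        self.inboxes[node].append(message)

    def broadcast(self, message):
        for inbox in self.inboxes.values():
            inbox.append(message)

    def receive(self, node):
        return self.inboxes[node]


bus = MessageBus()
bus.join("worker-1")
bus.join("worker-2")
bus.send("worker-1", "task: summarize")
bus.broadcast("shutdown at 17:00")
```

Swapping the dict of inboxes for network sockets changes the transport, not the interface.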
### **3.3 Task Management Protocols**
- [ ] Develop a system to queue, prioritize, and allocate tasks.
- [ ] Implement methods for real-time task status tracking.
- [ ] Create a feedback loop for completed tasks.
- [ ] Test task distribution, execution, and feedback systems for efficiency.
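Queueing with prioritization plus status tracking (the first two items) fits naturally on a heap. A sketch, with status values chosen arbitrarily for illustration:

```python
import heapq

class TaskQueue:
    """Priority queue of tasks with per-task status tracking."""

    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker so equal priorities pop in FIFO order
        self.status = {}

    def add(self, task, priority):
        heapq.heappush(self._heap, (priority, self._counter, task))
        self._counter += 1
        self.status[task] = "queued"

    def next_task(self):
        _, _, task = heapq.heappop(self._heap)
        self.status[task] = "running"
        return task

    def complete(self, task):
        self.status[task] = "done"


q = TaskQueue()
q.add("index docs", priority=2)
q.add("answer user", priority=1)  # lower number = more urgent
task = q.next_task()              # pops "answer user" first
q.complete(task)
```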
---

## **4. Hivemind Level Development**

### **4.1 Hivemind Orchestrator Development**
- [ ] Extend swarm orchestrator functionalities to manage multiple swarms.
- [ ] Create inter-swarm communication protocols.
- [ ] Implement load-balancing mechanisms to distribute tasks across swarms.
- [ ] Validate hivemind orchestrator functionalities with multi-swarm setups.
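The load-balancing item above can be as simple as routing each macro-task to the least-loaded swarm. The swarm names and load counts here are invented for illustration:

```python
def pick_swarm(loads):
    """Pick the least-loaded swarm; loads maps swarm name -> open task count."""
    return min(loads, key=loads.get)

def assign(loads, task, assignments):
    swarm = pick_swarm(loads)
    loads[swarm] += 1
    assignments.setdefault(swarm, []).append(task)
    return swarm


loads = {"swarm-a": 3, "swarm-b": 1, "swarm-c": 2}
assignments = {}
first = assign(loads, "crawl site", assignments)    # swarm-b has the lowest load
second = assign(loads, "build index", assignments)  # b and c tie at 2; min() keeps b
```

Production balancers would also weight by swarm capacity and task size rather than raw counts.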
### **4.2 Inter-Swarm Communication Protocols**
- [ ] Design methods for swarms to exchange data.
- [ ] Implement data reconciliation methods for swarms working on shared tasks.
- [ ] Test inter-swarm communication for efficiency and data integrity.

---

## **5. Scalability & Performance Testing**

- [ ] Simulate heavy loads to test the limits of the framework.
- [ ] Identify and address bottlenecks in both communication and computation.
- [ ] Conduct speed tests under different conditions.
- [ ] Test the system's responsiveness under various levels of stress.

---

## **6. Documentation & User Guide**

- [ ] Develop detailed documentation covering architecture, setup, and usage.
- [ ] Create user guides with step-by-step instructions.
- [ ] Incorporate visual aids, diagrams, and flowcharts for clarity.
- [ ] Update documentation regularly with new features and improvements.

---

## **7. Continuous Integration & Deployment**

- [ ] Set up CI/CD pipelines for automated testing and deployment.
- [ ] Ensure automatic rollback in case of deployment failures.
- [ ] Integrate code quality and security checks into the pipeline.
- [ ] Document deployment strategies and best practices.

---

## **Conclusion**

The Swarms framework represents a monumental leap in agent-based computation. This checklist provides a thorough roadmap for the framework's development, ensuring that every facet is addressed in depth. Through diligent adherence to this guide, the Swarms vision can be realized as a powerful, scalable, and robust system ready to tackle the challenges of tomorrow.
@ -0,0 +1,86 @@
# Bounty Program

Our bounty program is an exciting opportunity for contributors to help us build the future of Swarms. By participating, you can earn rewards while contributing to a project that aims to revolutionize digital activity.

Here's how it works:

1. **Check out our Roadmap**: We've shared our roadmap detailing our short- and long-term goals. These are the areas where we're seeking contributions.

2. **Pick a Task**: Choose a task from the roadmap that aligns with your skills and interests. If you're unsure, you can reach out to our team for guidance.

3. **Get to Work**: Once you've chosen a task, start working on it. Remember, quality is key. We're looking for contributions that truly make a difference.

4. **Submit your Contribution**: Once your work is complete, submit it for review. We'll evaluate your contribution based on its quality, relevance, and the value it brings to Swarms.

5. **Earn Rewards**: If your contribution is approved, you'll earn a bounty. The amount of the bounty depends on the complexity of the task, the quality of your work, and the value it brings to Swarms.

## The Three Phases of Our Bounty Program

### Phase 1: Building the Foundation
In the first phase, our focus is on building the basic infrastructure of Swarms. This includes developing key components like the Swarms class, integrating essential tools, and establishing task completion and evaluation logic. We'll also start developing our testing and evaluation framework during this phase. If you're interested in foundational work and have a knack for building robust, scalable systems, this phase is for you.

### Phase 2: Enhancing the System
In the second phase, we'll focus on enhancing Swarms by integrating more advanced features, improving the system's efficiency, and refining our testing and evaluation framework. This phase involves more complex tasks, so if you enjoy tackling challenging problems and contributing to the development of innovative features, this is the phase for you.

### Phase 3: Towards Super-Intelligence
The third phase of our bounty program is the most exciting: this is where we aim to achieve super-intelligence. In this phase, we'll be working on improving the swarm's capabilities, expanding its skills, and fine-tuning the system based on real-world testing and feedback. If you're excited about the future of AI and want to contribute to a project that could potentially transform the digital world, this is the phase for you.

Remember, our roadmap is a guide, and we encourage you to bring your own ideas and creativity to the table. We believe that every contribution, no matter how small, can make a difference. So join us on this exciting journey and help us create the future of Swarms.

**To participate in our bounty program, visit the [Swarms Bounty Program Page](https://swarms.ai/bounty).** Let's build the future together!

## Bounties for Roadmap Items

To accelerate the development of Swarms and to encourage more contributors to join our journey towards automating every digital activity in existence, we are announcing a Bounty Program for specific roadmap items. Each bounty will be rewarded based on the complexity and importance of the task. Below are the items available for bounty:

1. **Multi-Agent Debate Integration**: $2000
2. **Meta Prompting Integration**: $1500
3. **Swarms Class**: $1500
4. **Integration of Additional Tools**: $1000
5. **Task Completion and Evaluation Logic**: $2000
6. **Ocean Integration**: $2500
7. **Improved Communication**: $2000
8. **Testing and Evaluation**: $1500
9. **Worker Swarm Class**: $2000
10. **Documentation**: $500

For each bounty task, there will be a strict evaluation process to ensure the quality of the contribution. This process includes a thorough review of the code and extensive testing to ensure it meets our standards.

# 3-Phase Testing Framework

To ensure the quality and efficiency of the Swarm, we will introduce a 3-phase testing framework, which will also serve as our evaluation criteria for each of the bounty tasks.

## Phase 1: Unit Testing
In this phase, individual modules will be tested to ensure that they work correctly in isolation. Unit tests will be designed for all functions and methods, with an emphasis on edge cases.
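As an illustration of Phase 1, a unit test for a hypothetical task-parsing helper might look like the following. Both the helper and its tests are assumptions, written with plain `assert` so they also run unchanged under pytest:

```python
def parse_task(line):
    """Hypothetical helper: split 'priority:description' into an (int, str) pair."""
    priority, _, description = line.partition(":")
    return int(priority), description.strip()

def test_parse_task_basic():
    assert parse_task("1: write summary") == (1, "write summary")

def test_parse_task_edge_case_empty_description():
    # Edge case: an empty description should be returned, not raise.
    assert parse_task("2:") == (2, "")


test_parse_task_basic()
test_parse_task_edge_case_empty_description()
```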
## Phase 2: Integration Testing
After passing unit tests, we will test the integration of different modules to ensure they work correctly together. This phase will also test the interoperability of the Swarm with external systems and libraries.

## Phase 3: Benchmarking & Stress Testing
In the final phase, we will perform benchmarking and stress tests. We'll push the limits of the Swarm under extreme conditions to ensure it performs well in real-world scenarios. This phase will measure the performance, speed, and scalability of the Swarm under high-load conditions.

By following this 3-phase testing framework, we aim to develop a reliable, high-performing, and scalable Swarm that can automate all digital activities.

# Reverse Engineering to Reach Phase 3

To reach the Phase 3 level, we need to reverse engineer the tasks we need to complete. Here's an example of what this might look like:

1. **Set Clear Expectations**: Define what success looks like for each task. Be clear about the outputs and outcomes we expect. This will guide our testing and development efforts.

2. **Develop Testing Scenarios**: Create a comprehensive list of testing scenarios that cover both common and edge cases. This will help us ensure that our Swarm can handle a wide range of situations.

3. **Write Test Cases**: For each scenario, write detailed test cases that outline the exact steps to be followed, the inputs to be used, and the expected outputs.

4. **Execute the Tests**: Run the test cases on our Swarm, making note of any issues or bugs that arise.

5. **Iterate and Improve**: Based on the results of our tests, iterate and improve our Swarm. This may involve fixing bugs, optimizing code, or redesigning parts of our system.

6. **Repeat**: Repeat this process until our Swarm meets our expectations and passes all test cases.

By following these steps, we will systematically build, test, and improve our Swarm until it reaches the Phase 3 level. This methodical approach will help us ensure that we create a reliable, high-performing, and scalable Swarm that can truly automate all digital activities.

Let's shape the future of digital automation together!
@ -0,0 +1,122 @@
# **Swarms Framework Development Strategy Checklist**

## **Introduction**

The development of the Swarms framework requires a systematic and granular approach to ensure that each component is robust and that the overall framework is efficient and scalable. This checklist will serve as a guide to building Swarms from the ground up, breaking down tasks into small, manageable pieces.

---

## **1. Agent Level Development**

### **1.1 Model Integration**
- [ ] Research the most suitable models (e.g., OpenAI's GPT).
- [ ] Design an API for the agent to call the model.
- [ ] Implement error handling for when model calls fail.
- [ ] Test the model with sample data for accuracy and speed.

### **1.2 Vectorstore Implementation**
- [ ] Design the schema for the vector storage system.
- [ ] Implement storage methods to add, delete, and update vectors.
- [ ] Develop retrieval methods optimized for speed.
- [ ] Create protocols for vector-based communication between agents.
- [ ] Conduct stress tests to ascertain storage and retrieval speed.

### **1.3 Tools & Utilities Integration**
- [ ] List the essential tools required for agent functionality.
- [ ] Develop or integrate APIs for each tool.
- [ ] Implement error handling and logging for tool interactions.
- [ ] Validate tool integration with unit tests.

---

## **2. Worker Infrastructure Level Development**

### **2.1 Human Input Integration**
- [ ] Design a UI/UX for human interaction with worker nodes.
- [ ] Create APIs for input collection.
- [ ] Implement input validation and error handling.
- [ ] Test human input methods for clarity and ease of use.

### **2.2 Unique Identifier System**
- [ ] Research optimal formats for unique ID generation.
- [ ] Develop methods for generating and assigning IDs to agents.
- [ ] Implement a tracking system to manage and monitor agents via IDs.
- [ ] Validate the uniqueness and reliability of the ID system.

### **2.3 Asynchronous Operation Tools**
- [ ] Incorporate libraries/frameworks to enable asynchrony.
- [ ] Ensure tasks within an agent can run in parallel without conflict.
- [ ] Test asynchronous operations for efficiency improvements.

---

## **3. Swarm Level Development**

### **3.1 Orchestrator Design & Development**
- [ ] Draft a blueprint of orchestrator functionalities.
- [ ] Implement methods for task distribution among worker nodes.
- [ ] Develop communication protocols for the orchestrator to monitor workers.
- [ ] Create feedback systems to detect and address worker node failures.
- [ ] Test the orchestrator with a mock swarm to ensure efficient task allocation.

### **3.2 Communication Layer Development**
- [ ] Select a suitable communication protocol/framework (e.g., gRPC, WebSockets).
- [ ] Design the architecture for scalable, low-latency communication.
- [ ] Implement methods for sending, receiving, and broadcasting messages.
- [ ] Test the communication layer for reliability, speed, and error handling.

### **3.3 Task Management Protocols**
- [ ] Develop a system to queue, prioritize, and allocate tasks.
- [ ] Implement methods for real-time task status tracking.
- [ ] Create a feedback loop for completed tasks.
- [ ] Test task distribution, execution, and feedback systems for efficiency.

---

## **4. Hivemind Level Development**

### **4.1 Hivemind Orchestrator Development**
- [ ] Extend swarm orchestrator functionalities to manage multiple swarms.
- [ ] Create inter-swarm communication protocols.
- [ ] Implement load-balancing mechanisms to distribute tasks across swarms.
- [ ] Validate hivemind orchestrator functionalities with multi-swarm setups.

### **4.2 Inter-Swarm Communication Protocols**
- [ ] Design methods for swarms to exchange data.
- [ ] Implement data reconciliation methods for swarms working on shared tasks.
- [ ] Test inter-swarm communication for efficiency and data integrity.

---

## **5. Scalability & Performance Testing**

- [ ] Simulate heavy loads to test the limits of the framework.
- [ ] Identify and address bottlenecks in both communication and computation.
- [ ] Conduct speed tests under different conditions.
- [ ] Test the system's responsiveness under various levels of stress.

---

## **6. Documentation & User Guide**

- [ ] Develop detailed documentation covering architecture, setup, and usage.
- [ ] Create user guides with step-by-step instructions.
- [ ] Incorporate visual aids, diagrams, and flowcharts for clarity.
- [ ] Update documentation regularly with new features and improvements.

---

## **7. Continuous Integration & Deployment**

- [ ] Set up CI/CD pipelines for automated testing and deployment.
- [ ] Ensure automatic rollback in case of deployment failures.
- [ ] Integrate code quality and security checks into the pipeline.
- [ ] Document deployment strategies and best practices.

---

## **Conclusion**

The Swarms framework represents a monumental leap in agent-based computation. This checklist provides a thorough roadmap for the framework's development, ensuring that every facet is addressed in depth. Through diligent adherence to this guide, the Swarms vision can be realized as a powerful, scalable, and robust system ready to tackle the challenges of tomorrow.
@ -0,0 +1,100 @@
|
||||
# Cost Structure of Deploying Autonomous Agents

## Table of Contents

1. Introduction
2. Our Time: Generating System Prompts and Custom Tools
3. Consultancy Fees
4. Model Inference Infrastructure
5. Deployment and Continual Maintenance
6. Output Metrics: Blog Generation Rates

---

## 1. Introduction

Autonomous agents are revolutionizing various industries, from self-driving cars to chatbots and customer service solutions. The prospect of automation and improved efficiency makes these agents attractive investments. However, like any other technological solution, deploying autonomous agents involves several cost elements that organizations need to consider carefully. This guide provides an outline of the costs associated with deploying autonomous agents.

---

## 2. Our Time: Generating System Prompts and Custom Tools

### Description

The deployment of autonomous agents often requires a substantial investment of time to develop system prompts and custom tools tailored to specific operational needs.

### Costs

| Task                     | Time Required (Hours) | Cost per Hour ($) | Total Cost ($) |
| ------------------------ | --------------------- | ----------------- | -------------- |
| System Prompts Design    | 50                    | 100               | 5,000          |
| Custom Tools Development | 100                   | 100               | 10,000         |
| **Total**                | **150**               |                   | **15,000**     |

---

## 3. Consultancy Fees

### Description

Consultation is often necessary for navigating the complexities of autonomous agents. This includes system assessment, customization, and other essential services.

### Costs

| Service              | Fees ($)   |
| -------------------- | ---------- |
| Initial Assessment   | 5,000      |
| System Customization | 7,000      |
| Training             | 3,000      |
| **Total**            | **15,000** |

---

## 4. Model Inference Infrastructure

### Description

The hardware and software needed for the agents' functionality, known as the model inference infrastructure, form a significant part of the costs.

### Costs

| Component         | Cost ($)   |
| ----------------- | ---------- |
| Hardware          | 10,000     |
| Software Licenses | 2,000      |
| Cloud Services    | 3,000      |
| **Total**         | **15,000** |

---

## 5. Deployment and Continual Maintenance

### Description

Once everything is in place, deploying the autonomous agents and maintaining them on an ongoing basis are the next major cost factors.

### Costs

| Task                | Monthly Cost ($) | Annual Cost ($) |
| ------------------- | ---------------- | --------------- |
| Deployment          | 5,000            | 60,000          |
| Ongoing Maintenance | 1,000            | 12,000          |
| **Total**           | **6,000**        | **72,000**      |

---

## 6. Output Metrics: Blog Generation Rates

### Description

To give a sense of what an investment in autonomous agents can yield, the following figures show the number of blog posts that can be generated as an example of output.

### Blog Generation Rates

| Timeframe | Number of Blogs |
| --------- | --------------- |
| Per Day   | 20              |
| Per Week  | 140             |
| Per Month | 600             |
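Putting these tables together gives a rough first-year total cost of ownership and cost per blog post. A back-of-the-envelope sketch using the figures above (illustrative only):

```python
# One-time costs from sections 2-4 (in $).
one_time = {
    "system_prompts_and_tools": 15_000,   # Section 2
    "consultancy": 15_000,                # Section 3
    "inference_infrastructure": 15_000,   # Section 4
}

# Recurring monthly cost from section 5: deployment + ongoing maintenance (in $).
recurring_monthly = 5_000 + 1_000

blogs_per_month = 600  # Section 6

# First-year total: all one-time costs plus 12 months of recurring costs.
first_year_total = sum(one_time.values()) + 12 * recurring_monthly

# Rough first-year cost per generated blog post.
cost_per_blog = first_year_total / (12 * blogs_per_month)
```

At the rates above this works out to $117,000 for the first year, or $16.25 per blog post; in subsequent years, with the one-time costs amortized, the per-post figure drops further.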
# Demo Ideas

* We could also try to create an AI influencer run by a swarm: let it create a whole identity and generate images, memes, and other content for Twitter, Reddit, etc.

* We should have either a more general version of this, a swarm, or both -- something connecting all the calendars, events, and initiatives of all the AI communities: LangChain, LAION, EleutherAI, LessWrong, Gato, Rob Miles, ChatGPT hackers, etc.

* A swarm of AI influencers to spread marketing.

* A delegation system to better organize teams: start with a team of passionate humans and let them self-report their skills/strengths so the agent has a concept of whom to delegate to, then feed the agent a huge task list (like the bullet list a few messages above) that it breaks down into actionable steps and "prompts" specific team members to complete tasks. It could even suggest breakout teams of a few people with complementary skills to tackle more complex tasks. There could also be a live board that updates each time a team member completes something, to encourage momentum and keep track of progress.
# Design Philosophy Document for Swarms

## Usable

### Objective

Our goal is to ensure that Swarms is intuitive and easy to use for all users, regardless of their level of technical expertise. This includes the developers who implement Swarms in their applications as well as the end users who interact with the implemented systems.

### Tactics

- Clear and Comprehensive Documentation: We will provide well-written and easily accessible documentation that guides users through using and understanding Swarms.
- User-Friendly APIs: We will design clean and self-explanatory APIs that developers can understand quickly.
- Prompt and Effective Support: We will ensure that support is readily available to assist users when they encounter problems or need help with Swarms.

## Reliable

### Objective

Swarms should be dependable and trustworthy. Users should be able to count on Swarms to perform consistently, without error or failure.

### Tactics

- Robust Error Handling: We will focus on error prevention, detection, and recovery to minimize failures in Swarms.
- Comprehensive Testing: We will apply various testing methodologies, such as unit testing, integration testing, and stress testing, to validate the reliability of our software.
- Continuous Integration/Continuous Delivery (CI/CD): We will use CI/CD pipelines to ensure that all changes are tested and validated before they're merged into the main branch.

## Fast

### Objective

Swarms should offer high performance and rapid response times. The system should be able to handle requests and tasks swiftly.

### Tactics

- Efficient Algorithms: We will focus on optimizing our algorithms and data structures to ensure they run as quickly as possible.
- Caching: Where appropriate, we will use caching techniques to speed up response times.
- Profiling and Performance Monitoring: We will regularly analyze the performance of Swarms to identify bottlenecks and opportunities for improvement.

## Scalable

### Objective

Swarms should be able to grow in capacity and complexity without compromising performance or reliability. It should be able to handle increased workloads gracefully.

### Tactics

- Modular Architecture: We will design Swarms using a modular architecture that allows for easy scaling and modification.
- Load Balancing: We will distribute tasks evenly across available resources to prevent overload and maximize throughput.
- Horizontal and Vertical Scaling: We will design Swarms to be capable of both horizontal scaling (adding more machines) and vertical scaling (adding more power to an existing machine).

## Philosophy

Swarms is designed with a philosophy of simplicity and reliability. We believe that software should be a tool that empowers users, not a hurdle that they need to overcome. Therefore, our focus is on usability, reliability, speed, and scalability. We want our users to find Swarms intuitive and dependable, fast and adaptable to their needs. This philosophy guides all of our design and development decisions.

# Swarm Architecture Design Document

## Overview

The goal of the swarm architecture is to provide a flexible and scalable system for building swarm intelligence models that can solve complex problems. This document details the proposed design for a plug-and-play system that makes it easy to create custom swarms and provides pre-configured swarms with multi-modal agents.

## Design Principles

- **Modularity**: The system will be built in a modular fashion, allowing various components to be easily swapped or upgraded.
- **Interoperability**: Different swarm classes and components should be able to work together seamlessly.
- **Scalability**: The design should support the growth of the system by adding more components or swarms.
- **Ease of Use**: Users should be able to easily create their own swarms or use pre-configured ones with minimal configuration.

## Design Components

### AbstractSwarm

The AbstractSwarm is an abstract base class that defines the basic structure of a swarm and the methods that need to be implemented. Any new swarm should inherit from this class and implement the required methods.
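A minimal sketch of what such a base class could look like follows; the method names (`initialize_components`, `run_swarms`) are illustrative assumptions, not the framework's actual interface:

```python
from abc import ABC, abstractmethod


class AbstractSwarm(ABC):
    """Illustrative base class: every swarm inherits from this and
    implements the abstract methods (names here are assumptions)."""

    def __init__(self, openai_api_key: str):
        self.openai_api_key = openai_api_key

    @abstractmethod
    def initialize_components(self):
        """Set up tools, worker nodes, and the boss node."""

    @abstractmethod
    def run_swarms(self, objective: str):
        """Execute the swarm against the given objective."""


class EchoSwarm(AbstractSwarm):
    """Trivial complete subclass, used only to show the contract."""

    def initialize_components(self):
        self.ready = True

    def run_swarms(self, objective: str):
        return f"completed: {objective}"
```

Because the methods are marked `@abstractmethod`, instantiating an incomplete subclass raises `TypeError`, surfacing interface violations at construction time rather than mid-run.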
### Swarm Classes

Various swarm classes can be implemented by inheriting from the AbstractSwarm class. Each swarm class should implement the required methods for initializing the components, worker nodes, and boss node, and for running the swarm.

Pre-configured swarm classes with multi-modal agents can be provided for ease of use. These classes come with a default configuration of tools and agents that can be used out of the box.

### Tools and Agents

Tools and agents are the components that provide the actual functionality to the swarms. They can be language models, AI assistants, vector stores, or any other components that help in problem solving.

To make the system plug-and-play, a standard interface should be defined for these components. Any new tool or agent should implement this interface so that it can be easily plugged into the system.
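One way to express such a standard interface in Python is a structural `Protocol`, so that any object with the right shape plugs in without inheriting from anything. A sketch (the `Tool` name, its members, and the `register` helper are illustrative assumptions, not the framework's actual interface):

```python
from typing import Protocol, runtime_checkable


@runtime_checkable
class Tool(Protocol):
    """Illustrative plug-and-play interface: any component exposing
    `name` and `run` can be registered with a swarm unchanged."""

    name: str

    def run(self, task: str) -> str:
        ...


class EchoTool:
    """Trivial stand-in component satisfying the interface."""

    name = "echo"

    def run(self, task: str) -> str:
        return f"echo: {task}"


def register(tools: list, tool: Tool) -> None:
    # Structural check: no inheritance from Tool is required.
    if not isinstance(tool, Tool):
        raise TypeError(f"{tool!r} does not satisfy the Tool interface")
    tools.append(tool)
```

Since the check is structural, third-party components qualify as tools simply by exposing `name` and `run`; no subclassing or registration decorator is required.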
## Usage

Users can either use pre-configured swarms or create their own custom swarms.

To use a pre-configured swarm, they can simply instantiate the corresponding swarm class and call the run method with the required objective.

To create a custom swarm, they need to:

1. Define a new swarm class inheriting from AbstractSwarm.
2. Implement the required methods for the new swarm class.
3. Instantiate the swarm class and call the run method.

### Example
```python
# Using a pre-configured swarm
swarm = PreConfiguredSwarm(openai_api_key)
swarm.run_swarms(objective)


# Creating a custom swarm
class CustomSwarm(AbstractSwarm):
    def __init__(self, openai_api_key):
        # Initialize components, worker nodes, and the boss node here
        ...

    def run_swarms(self, objective):
        # Implement the swarm's task-execution logic here
        ...


swarm = CustomSwarm(openai_api_key)
swarm.run_swarms(objective)
```
## Conclusion

This swarm architecture design provides a scalable and flexible system for building swarm intelligence models. The plug-and-play design allows users to easily use pre-configured swarms or create their own custom swarms.

# Swarming Architectures

Below are ten swarm architectures with their base requirements:

1. **Hierarchical Swarm**: This architecture is characterized by a boss/worker relationship. The boss node makes high-level decisions and delegates tasks to the worker nodes, which perform the tasks and report back to the boss node.
    - Requirements: A boss node (can be a large language model), worker nodes (can be smaller language models), and a task queue for task management.

2. **Homogeneous Swarm**: In this architecture, all nodes in the swarm are identical and contribute equally to problem solving. Each node has the same capabilities.
    - Requirements: Homogeneous nodes (can be language models of the same size) and a communication protocol for nodes to share information.

3. **Heterogeneous Swarm**: This architecture contains different types of nodes, each with specific capabilities. This diversity can lead to more robust problem solving.
    - Requirements: Different types of nodes (can be different types and sizes of language models), a communication protocol, and a mechanism to delegate tasks based on node capabilities.

4. **Competitive Swarm**: In this architecture, nodes compete with each other to find the best solution. The system may use a selection process to choose the best solutions.
    - Requirements: Nodes (can be language models), a scoring mechanism to evaluate node performance, and a selection mechanism.

5. **Cooperative Swarm**: In this architecture, nodes work together and share information to find solutions. The focus is on cooperation rather than competition.
    - Requirements: Nodes (can be language models), a communication protocol, and a consensus mechanism to agree on solutions.

6. **Grid-based Swarm**: This architecture positions agents on a grid, where they can only interact with their neighbors. This is useful for simulations, especially in fields like ecology or epidemiology.
    - Requirements: Agents (can be language models), a grid structure, and a neighborhood definition (i.e., how to identify neighboring agents).

7. **Particle Swarm Optimization (PSO) Swarm**: In this architecture, each agent represents a potential solution to an optimization problem. Agents move through the solution space based on their own and their neighbors' past performance. PSO is especially useful for continuous numerical optimization problems.
    - Requirements: Agents (each representing a solution), a definition of the solution space, an evaluation function to rate the solutions, and a mechanism to adjust agent positions based on performance.

8. **Ant Colony Optimization (ACO) Swarm**: Inspired by ant behavior, this architecture has agents leave a pheromone trail that other agents follow, reinforcing the best paths. It is useful for problems like the traveling salesperson problem.
    - Requirements: Agents (can be language models), a representation of the problem space, and a pheromone-updating mechanism.

9. **Genetic Algorithm (GA) Swarm**: In this architecture, agents represent potential solutions to a problem. They can 'breed' to create new solutions and can undergo 'mutations'. GA swarms are good for search and optimization problems.
    - Requirements: Agents (each representing a potential solution), a fitness function to evaluate solutions, a crossover mechanism to breed solutions, and a mutation mechanism.

10. **Stigmergy-based Swarm**: In this architecture, agents communicate indirectly by modifying the environment, and other agents react to those modifications. It is a decentralized method of coordinating tasks.
    - Requirements: Agents (can be language models), an environment that agents can modify, and a mechanism for agents to perceive environment changes.

These architectures all have unique features and requirements, but they share the need for agents (often implemented as language models) and a mechanism for agents to communicate or interact, whether directly through messages, indirectly through the environment, or implicitly through a shared solution space. Some also require specific data structures, like a grid or problem space, and specific algorithms, for example to evaluate solutions or update agent positions.
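As a concrete example of one of these patterns, the PSO architecture can be sketched in plain Python. This is a self-contained toy illustration, not part of the Swarms API; the hyperparameters are conventional textbook defaults:

```python
import random


def pso_minimize(f, dim=2, n_particles=20, iters=100, seed=0):
    """Minimal particle swarm optimization over the box [-5, 5]^dim.

    Illustrative hyperparameters: inertia w, cognitive weight c1,
    and social weight c2 are conventional defaults.
    """
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5

    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]          # each particle's best-seen position
    pbest_val = [f(p) for p in pos]
    gbest = min(pbest, key=f)[:]         # swarm-wide best position

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:       # update personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < f(gbest):       # update global best
                    gbest = pos[i][:]
    return gbest, f(gbest)


# Sphere function: global minimum of 0 at the origin.
best, best_val = pso_minimize(lambda p: sum(x * x for x in p))
```

In an agent setting, `f` would score an agent's candidate answer and each "particle" would be one agent's parameterization; the update rule is the "mechanism to adjust agent positions based on performance" named in the requirements.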
# Swarms Monetization Strategy

This strategy covers a variety of business models, potential revenue streams, cashflow structures, and customer identification methods. Let's explore these further.

## Business Models

1. **Platform as a Service (PaaS):** Provide the Swarms AI platform on a subscription basis, charged monthly or annually. This could be tiered based on usage and access to premium features.

2. **API Usage-based Pricing:** Charge customers based on their usage of the Swarms API. The more requests made, the higher the fee.

3. **Managed Services:** Offer complete end-to-end solutions where we manage the entire AI infrastructure for clients. This could be on a contract basis with a recurring fee.

4. **Training and Certification:** Provide Swarms AI training and certification programs for interested developers and businesses. These could be monetized as separate courses or subscription-based access.

5. **Partnerships:** Collaborate with large enterprises and offer them dedicated Swarms AI services. These could be performance-based contracts, ensuring a mutually beneficial relationship.

6. **Data as a Service (DaaS):** Leverage the data generated by Swarms for insights and analytics, providing valuable business intelligence to clients.

## Potential Revenue Streams

1. **Subscription Fees:** The main revenue stream from providing the Swarms platform as a service.

2. **Usage Fees:** Additional revenue from usage fees for businesses with high demand for the Swarms API.

3. **Contract Fees:** From offering managed services and bespoke solutions to businesses.

4. **Training Fees:** Revenue from providing training and certification programs to developers and businesses.

5. **Partnership Contracts:** Large-scale projects with enterprises, involving dedicated Swarms AI services, could provide substantial income.

6. **Data Insights:** Revenue from selling valuable business intelligence derived from Swarms' aggregated and anonymized data.

## Potential Customers

1. **Businesses Across Sectors:** Any business seeking to leverage AI for automation, efficiency, and data insights could be a potential customer, including sectors like finance, eCommerce, logistics, healthcare, and more.

2. **Developers:** Both freelancers and those working in organizations could use Swarms to enhance their projects and services.

3. **Enterprises:** Large enterprises looking to automate and optimize their operations could greatly benefit from Swarms.

4. **Educational Institutions:** Universities and research institutions could leverage Swarms for research and teaching purposes.

## Roadmap

1. **Landing Page Creation:** Develop a dedicated product page on apac.ai for Swarms.

2. **Hosted Swarms API:** Launch a cloud-based Swarms API service. It should be highly reliable, with robust documentation to attract daily users.

3. **Consumer and Enterprise Subscription Service:** Launch a comprehensive subscription service on The Domain. This would provide users with access to a wide array of APIs and data streams.

4. **Dedicated Capacity Deals:** Partner with large enterprises to offer them dedicated Swarms AI solutions for automating their operations.

5. **Enterprise Partnerships:** Develop partnerships with large enterprises for extensive contract-based projects.

6. **Integration with Collaboration Platforms:** Develop Swarms bots for platforms like Discord and Slack, charging users a subscription fee for access.

7. **Personal Data Instances:** Offer users dedicated instances of all their data that the swarm can query as needed.

8. **Browser Extension:** Develop a browser extension that integrates with the Swarms platform, offering users a more seamless experience.

Remember, customer satisfaction and a value-centric approach are at the core of any successful monetization strategy. It's essential to continuously iterate and improve the product based on customer feedback and evolving market needs.

---
# Other ideas

1. **Platform as a Service (PaaS):** Create a cloud-based platform that allows users to build, run, and manage applications without the complexity of maintaining the infrastructure. You could charge users a subscription fee for access to the platform and provide different pricing tiers based on usage levels. This could be an attractive solution for businesses that do not have the capacity to build or maintain their own swarm intelligence solutions.

2. **Professional Services:** Offer consultancy and implementation services to businesses looking to utilize the Swarms technology. This could include assisting with integration into existing systems, offering custom development services, or helping customers build specific solutions using the framework.

3. **Education and Training:** Create a certification program for developers or companies looking to become proficient with the Swarms framework. This could be sold as standalone courses or bundled with other services.

4. **Managed Services:** Some companies may prefer to outsource the management of their Swarms-based systems. A managed services solution could take care of all the technical aspects, from hosting the solution to ensuring it runs smoothly, allowing the customer to focus on their core business.

5. **Data Analysis and Insights:** Swarm intelligence can generate valuable data and insights. By anonymizing and aggregating this data, you could provide industry reports, trend analyses, and other valuable insights to businesses.

As for the type of platform, Swarms can be offered as a cloud-based solution given its scalability and flexibility. This would also allow you to apply a SaaS/PaaS monetization model, which provides recurring revenue.

Potential customers could range from small to large enterprises in sectors such as logistics, eCommerce, finance, and technology that are interested in leveraging artificial intelligence and machine learning for complex problem solving, optimization, and decision-making.

**Product Brief Monetization Strategy:**

Product Name: Swarms.AI Platform

Product Description: A cloud-based AI and ML platform harnessing the power of swarm intelligence.

1. **Platform as a Service (PaaS):** Offer tiered subscription plans (Basic, Premium, Enterprise) to accommodate different usage levels and business sizes.

2. **Professional Services:** Offer consultancy and custom development services to tailor the Swarms solution to the specific needs of the business.

3. **Education and Training:** Launch an online Swarms.AI Academy with courses and certifications for developers and businesses.

4. **Managed Services:** Provide a premium, fully managed service offering that includes hosting, maintenance, and 24/7 support.

5. **Data Analysis and Insights:** Offer industry reports and customized insights generated from aggregated and anonymized swarm data.

Potential Customers: Enterprises in sectors such as logistics, eCommerce, finance, and technology. This can be sold globally, provided there's an internet connection.

Marketing Channels: Online marketing (SEO, content marketing, social media), partnerships with tech companies, and direct sales to enterprises.

This strategy is designed to provide multiple revenue streams while ensuring the Swarms.AI platform is accessible and useful to a range of potential customers.

1. **AI Solution as a Service:** By offering the Swarms framework as a service, businesses can access and utilize the power of multiple LLM agents without the need to maintain the infrastructure themselves. Subscriptions can be tiered based on usage and additional features.

2. **Integration and Custom Development:** Offer integration services to businesses wanting to incorporate the Swarms framework into their existing systems. You could also provide custom development for businesses with specific needs not met by the standard framework.

3. **Training and Certification:** Develop an educational platform offering courses, webinars, and certifications on using the Swarms framework. This can serve both developers seeking to broaden their skills and businesses aiming to train their in-house teams.

4. **Managed Swarms Solutions:** For businesses that prefer to outsource their AI needs, provide a complete solution that includes the development, maintenance, and continuous improvement of swarms-based applications.

5. **Data Analytics Services:** Leveraging the aggregated insights from the AI swarms, you could offer data analytics services. Businesses can use these insights to make informed decisions and predictions.

**Type of Platform:**

A cloud-based platform or Software as a Service (SaaS) would be a suitable model. It offers accessibility, scalability, and ease of updates.

**Target Customers:**

The technology can benefit businesses across sectors like eCommerce, technology, logistics, finance, healthcare, and education, among others.

**Product Brief Monetization Strategy:**

Product Name: Swarms.AI

1. **AI Solution as a Service:** Offer tiered subscriptions (Standard, Premium, and Enterprise), each with varying levels of usage and features.

2. **Integration and Custom Development:** Offer custom development and integration services, priced based on the scope and complexity of the project.

3. **Training and Certification:** Launch the Swarms.AI Academy with courses and certifications, available for a fee.

4. **Managed Swarms Solutions:** Offer fully managed solutions tailored to business needs, priced based on scope and service-level agreements.

5. **Data Analytics Services:** Provide insightful reports and data analyses, which can be purchased on a one-off basis or through a subscription.

By offering a variety of services and payment models, Swarms.AI can cater to a diverse range of business needs, from small start-ups to large enterprises. Marketing channels would include digital marketing, partnerships with technology companies, presence at tech events, and direct sales to targeted industries.
# Roadmap

* Create a landing page for Swarms at apac.ai/product/swarms.

* Create a hosted Swarms API that anybody can use without needing large GPU infrastructure, with usage-based pricing. Prerequisites for success: Swarms has to be extremely reliable, and we need world-class documentation and many daily users. How do we get many daily users? We provide a seamless and fluid experience. How do we create a seamless and fluid experience? We write good code that is modular, provides feedback to the user in times of distress, and ultimately accomplishes the user's tasks.

* Hosted consumer and enterprise subscription as a service on The Domain, where users can interact with thousands of APIs and ingest thousands of different data streams.

* Hosted dedicated-capacity deals with large enterprises on automating many operations with Swarms, for a monthly subscription of $300,000+.

* Partnerships with enterprises: massive contracts with performance-based fees.

* A Discord bot and/or Slack bot with users' personal data, charging a subscription, plus a browser extension.

* Each user gets a dedicated ocean instance of all their data so the swarm can query it as needed.

---
# Swarms Monetization Strategy: A Revolutionary AI-powered Future
|
||||
|
||||
Swarms is a powerful AI platform leveraging the transformative potential of Swarm Intelligence. Our ambition is to monetize this groundbreaking technology in ways that generate significant cashflow while providing extraordinary value to our customers.
|
||||
|
||||
Here we outline our strategic monetization pathways and provide a roadmap that plots our course to future success.
|
||||
|
||||
---
|
||||
|
||||
## I. Business Models
|
||||
|
||||
1. **Platform as a Service (PaaS):** We provide the Swarms platform as a service, billed on a monthly or annual basis. Subscriptions can range from $50 for basic access, to $500+ for premium features and extensive usage.
|
||||
|
||||
2. **API Usage-based Pricing:** Customers are billed according to their use of the Swarms API. Starting at $0.01 per request, this creates a cashflow model that rewards extensive platform usage.
|
||||
|
||||
3. **Managed Services:** We offer end-to-end solutions, managing clients' entire AI infrastructure. Contract fees start from $100,000 per month, offering both a sustainable cashflow and considerable savings for our clients.
|
||||
|
||||
4. **Training and Certification:** A Swarms AI training and certification program is available for developers and businesses. Course costs can range from $200 to $2,000, depending on course complexity and duration.
|
||||
|
||||
5. **Partnerships:** We forge collaborations with large enterprises, offering dedicated Swarm AI services. These performance-based contracts start from $1,000,000, creating a potentially lucrative cashflow stream.
|
||||
|
||||
6. **Data as a Service (DaaS):** Swarms generated data are mined for insights and analytics, with business intelligence reports offered from $500 each.
|
||||
|
||||
---
|
||||
|
||||
## II. Potential Revenue Streams
|
||||
|
||||
1. **Subscription Fees:** From $50 to $500+ per month for platform access.
|
||||
|
||||
2. **Usage Fees:** From $0.01 per API request, generating income from high platform usage.
|
||||
|
||||
3. **Contract Fees:** Starting from $100,000 per month for managed services.
|
||||
|
||||
4. **Training Fees:** From $200 to $2,000 for individual courses or subscription access.
|
||||
|
||||
5. **Partnership Contracts:** Contracts starting from $100,000, offering major income potential.
|
||||
|
||||
6. **Data Insights:** Business intelligence reports starting from $500.
|
||||
|
||||
---
|
||||
|
||||
## III. Potential Customers
|
||||
|
||||
1. **Businesses Across Sectors:** Our offerings cater to businesses across finance, eCommerce, logistics, healthcare, and more.
|
||||
|
||||
2. **Developers:** Both freelancers and organization-based developers can leverage Swarms for their projects.
|
||||
|
||||
3. **Enterprises:** Swarms offers large enterprises solutions for optimizing operations.
|
||||
|
||||
4. **Educational Institutions:** Universities and research institutions can use Swarms for research and teaching.
|
||||
|
||||
---
|
||||
|
||||
## IV. Roadmap

1. **Landing Page Creation:** Develop a dedicated Swarms product page on apac.ai.

2. **Hosted Swarms API:** Launch a reliable, well-documented cloud-based Swarms API service.

3. **Consumer and Enterprise Subscription Service:** Launch an extensive subscription service on The Domain, providing wide-ranging access to APIs and data streams.

4. **Dedicated Capacity Deals:** Offer large enterprises dedicated Swarm AI solutions, starting at a $300,000 monthly subscription.

5. **Enterprise Partnerships:** Develop performance-based contracts with large enterprises.

6. **Integration with Collaboration Platforms:** Develop Swarms bots for platforms like Discord and Slack, charging a subscription fee for access.

7. **Personal Data Instances:** Offer users dedicated data instances that the Swarm can query as needed.

8. **Browser Extension:** Develop a browser extension that integrates with the Swarms platform for a seamless user experience.

---
Our North Star remains customer satisfaction and value provision. As we embark on this journey, we continuously refine our product based on customer feedback and evolving market needs, ensuring we lead in the age of AI-driven solutions.

## **Platform Distribution Strategy for Swarms**

*Note: This strategy aims to diversify the presence of 'Swarms' across various platforms and mediums while focusing on monetization and value creation for its users.*

---
### **1. Framework:**

#### **Objective:**
To offer Swarms as an integrated solution within popular frameworks to ensure that developers and businesses can seamlessly incorporate its functionalities.

#### **Strategy:**

* **Language/Framework Integration:**
    * Target popular frameworks like Django, Flask for Python, Express.js for Node, etc.
    * Create SDKs or plugins for easy integration.

* **Monetization:**
    * Freemium Model: Offer basic integration for free, and charge for additional features or advanced integrations.
    * Licensing: Allow businesses to purchase licenses for enterprise-level integrations.

* **Promotion:**
    * Engage in partnerships with popular online coding platforms like Udemy, Coursera, etc., offering courses and tutorials on integrating Swarms.
    * Host webinars and write technical blogs to promote the integration benefits.

---
### **2. Paid API:**

#### **Objective:**
To provide a scalable solution for developers and businesses that want direct access to Swarms' functionalities without integrating the entire framework.

#### **Strategy:**

* **API Endpoints:**
    * Offer various endpoints catering to different functionalities.
    * Maintain robust documentation to ensure ease of use.

* **Monetization:**
    * Usage-based Pricing: Charge based on the number of API calls.
    * Subscription Tiers: Provide tiered packages based on usage limits and advanced features.

* **Promotion:**
    * List on API marketplaces like RapidAPI.
    * Engage in SEO to make the API documentation discoverable.

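
As a concrete illustration of the usage-based pricing model above, a billing function with graduated tiers might look like the sketch below. The tier boundaries and per-call rates are purely hypothetical placeholders, not actual Swarms pricing:

```python
def monthly_bill(api_calls: int) -> float:
    """Bill a month of API usage with graduated per-call tiers.

    Tiers are (upper_bound, rate) pairs; calls falling within a tier are
    billed at that tier's rate. All numbers are illustrative placeholders.
    """
    tiers = [
        (100_000, 0.01),     # first 100k calls at $0.01 each
        (1_000_000, 0.008),  # next 900k calls at $0.008 each
        (None, 0.005),       # everything beyond at $0.005 each
    ]
    bill = 0.0
    lower_bound = 0
    for upper_bound, rate in tiers:
        if upper_bound is None or api_calls <= upper_bound:
            bill += (api_calls - lower_bound) * rate
            break
        bill += (upper_bound - lower_bound) * rate
        lower_bound = upper_bound
    return round(bill, 2)
```

A subscription tier can then simply be modeled as a prepaid call allowance layered on top of this schedule.
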
---

### **3. Domain Hosted:**

#### **Objective:**
To provide a centralized web platform where users can directly access and engage with Swarms' offerings.

#### **Strategy:**

* **User-Friendly Interface:**
    * Ensure a seamless user experience with intuitive design.
    * Incorporate features like real-time chat support, tutorials, and an FAQ section.

* **Monetization:**
    * Subscription Model: Offer monthly/annual subscriptions for premium features.
    * Affiliate Marketing: Partner with related tech products/services and earn through referrals.

* **Promotion:**
    * Invest in PPC advertising on platforms like Google Ads.
    * Engage in content marketing, targeting keywords related to Swarms' offerings.

---

### **4. Build Your Own (No-Code Platform):**

#### **Objective:**
To cater to the non-developer audience, allowing them to leverage Swarms' features without any coding expertise.

#### **Strategy:**

* **Drag-and-Drop Interface:**
    * Offer customizable templates.
    * Ensure integration with popular platforms and apps.

* **Monetization:**
    * Freemium Model: Offer basic features for free, and charge for advanced functionalities.
    * Marketplace for Plugins: Allow third-party developers to sell their plugins/extensions on the platform.

* **Promotion:**
    * Partner with no-code communities and influencers.
    * Offer promotions and discounts to early adopters.

---

### **5. Marketplace for the No-Code Platform:**

#### **Objective:**
To create an ecosystem where third-party developers can contribute and users can enhance their Swarms experience.

#### **Strategy:**

* **Open API for Development:**
    * Offer robust documentation and developer support.
    * Ensure a strict quality check for marketplace additions.

* **Monetization:**
    * Revenue Sharing: Take a percentage cut from third-party sales.
    * Featured Listings: Charge developers for premium listings.

* **Promotion:**
    * Host hackathons and competitions to boost developer engagement.
    * Promote top plugins/extensions through email marketing and on the main platform.

---

### **Future Outlook & Expansion:**

* **Hosted Dedicated Capacity:** Hosted dedicated capacity deals for enterprises starting at $399,999.
* **Decentralized Free Peer-to-Peer Endpoint Hosted on The Grid:** An endpoint hosted by the people, for the people.
* **Browser Extension:** Athena browser extension for deep browser automation, monetized through subscription and usage fees.
* **Mobile Application:** Develop a mobile app version for Swarms to tap into the vast mobile user base.
* **Global Expansion:** Localize the platform for non-English speaking regions to tap into global markets.
* **Continuous Learning:** Regularly collect user feedback and iterate on the product features.

---

### **50 Creative Distribution Platforms for Swarms**

1. **E-commerce Integrations:** Platforms like Shopify, WooCommerce, where Swarms can add value to sellers.
2. **Web Browser Extensions:** Chrome, Firefox, and Edge extensions that bring Swarms features directly to users.
3. **Podcasting Platforms:** Swarms-themed content on platforms like Spotify, Apple Podcasts to reach aural learners.
4. **Virtual Reality (VR) Platforms:** Integration with VR experiences on Oculus or Viveport.
5. **Gaming Platforms:** Tools or plugins for game developers on Steam, Epic Games.
6. **Decentralized Platforms:** Using blockchain, create decentralized app (DApp) versions of Swarms.
7. **Chat Applications:** Integrate with popular messaging platforms like WhatsApp, Telegram, Slack.
8. **AI Assistants:** Integration with Siri, Alexa, Google Assistant to provide Swarms functionalities via voice commands.
9. **Freelancing Websites:** Offer tools or services for freelancers on platforms like Upwork, Fiverr.
10. **Online Forums:** Platforms like Reddit, Quora, where users can discuss or access Swarms.
11. **Educational Platforms:** Sites like Khan Academy, Udacity where Swarms can enhance learning experiences.
12. **Digital Art Platforms:** Integrate with platforms like DeviantArt, Behance.
13. **Open-source Repositories:** Hosting Swarms on GitHub, GitLab, Bitbucket with open-source plugins.
14. **Augmented Reality (AR) Apps:** Create AR experiences powered by Swarms.
15. **Smart Home Devices:** Integrate Swarms' functionalities into smart home devices.
16. **Newsletters:** Platforms like Substack, where Swarms insights can be shared.
17. **Interactive Kiosks:** In malls, airports, and other public places.
18. **IoT Devices:** Incorporate Swarms in devices like smart fridges, smartwatches.
19. **Collaboration Tools:** Platforms like Trello, Notion, offering Swarms-enhanced productivity.
20. **Dating Apps:** An AI-enhanced matching algorithm powered by Swarms.
21. **Music Platforms:** Integrate with Spotify, SoundCloud for music-related AI functionalities.
22. **Recipe Websites:** Platforms like AllRecipes, Tasty with AI-recommended recipes.
23. **Travel & Hospitality:** Integrate with platforms like Airbnb, Tripadvisor for AI-based recommendations.
24. **Language Learning Apps:** Duolingo, Rosetta Stone integrations.
25. **Virtual Events Platforms:** Websites like Hopin, Zoom where Swarms can enhance the virtual event experience.
26. **Social Media Management:** Tools like Buffer, Hootsuite with AI insights by Swarms.
27. **Fitness Apps:** Platforms like MyFitnessPal, Strava with AI fitness insights.
28. **Mental Health Apps:** Integration into apps like Calm, Headspace for AI-driven wellness.
29. **E-books Platforms:** Amazon Kindle, Audible with AI-enhanced reading experiences.
30. **Sports Analysis Tools:** Websites like ESPN, Sky Sports where Swarms can provide insights.
31. **Financial Tools:** Integration into platforms like Mint, Robinhood for AI-driven financial advice.
32. **Public Libraries:** Digital platforms of public libraries for enhanced reading experiences.
33. **3D Printing Platforms:** Websites like Thingiverse, Shapeways with AI customization.
34. **Meme Platforms:** Websites like Memedroid, 9GAG where Swarms can suggest memes.
35. **Astronomy Apps:** Platforms like Star Walk, NASA's Eyes with AI-driven space insights.
36. **Weather Apps:** Integration into Weather.com, AccuWeather for predictive analysis.
37. **Sustainability Platforms:** Websites like Ecosia, GoodGuide with AI-driven eco-tips.
38. **Fashion Apps:** Platforms like ASOS, Zara with AI-based style recommendations.
39. **Pet Care Apps:** Integration into PetSmart, Chewy for AI-driven pet care tips.
40. **Real Estate Platforms:** Websites like Zillow, Realtor with AI-enhanced property insights.
41. **DIY Platforms:** Websites like Instructables, DIY.org with AI project suggestions.
42. **Genealogy Platforms:** Ancestry, MyHeritage with AI-driven family tree insights.
43. **Car Rental & Sale Platforms:** Integration into AutoTrader, Turo for AI-driven vehicle suggestions.
44. **Wedding Planning Websites:** Platforms like Zola, The Knot with AI-driven planning.
45. **Craft Platforms:** Websites like Etsy, Craftsy with AI-driven craft suggestions.
46. **Gift Recommendation Platforms:** AI-driven gift suggestions for websites like Gifts.com.
47. **Study & Revision Platforms:** Websites like Chegg, Quizlet with AI-driven study guides.
48. **Local Business Directories:** Yelp, Yellow Pages with AI-enhanced reviews.
49. **Networking Platforms:** LinkedIn, Meetup with AI-driven connection suggestions.
50. **Lifestyle Magazines' Digital Platforms:** Websites like Vogue, GQ with AI-curated fashion and lifestyle insights.

---

*Endnote: Leveraging these diverse platforms ensures that Swarms becomes an integral part of multiple ecosystems, enhancing its visibility and user engagement.*

This page summarizes questions we were asked on [Discord](https://discord.gg/gnWRz88eym), Hacker News, and Reddit. Feel free to post a question to [Discord](https://discord.gg/gnWRz88eym), open a discussion on our [GitHub page](https://github.com/kyegomez), or reach us directly: [kye@apac.ai](mailto:kye@apac.ai).

## 1. How is Swarms different from LangChain?

Swarms is an open-source alternative to LangChain and differs in its approach to creating LLM pipelines and DAGs. In addition to agents, it uses more general-purpose DAGs and pipelines. A close proxy might be *Airflow for LLMs*. Swarms still implements chain-of-thought logic for prompt tasks that use "tools", but it also supports any type of input/output (images, audio, etc.).

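
To make the pipeline/DAG idea concrete, here is a toy sketch of a tiny DAG executor where each named step consumes its parents' outputs. All class and function names are invented for illustration; this is not the actual Swarms API, and in practice each step would wrap an LLM, image, or audio call rather than a plain Python function:

```python
from collections import deque

class ToyPipeline:
    """A minimal DAG of named steps, in the 'Airflow for LLMs' spirit."""

    def __init__(self):
        self.steps = {}  # name -> (callable, list of parent names)

    def add(self, name, fn, parents=()):
        self.steps[name] = (fn, list(parents))
        return self

    def run(self, **inputs):
        # Count only parents that are themselves steps; other parents are
        # expected to arrive as keyword inputs.
        indegree = {n: sum(p in self.steps for p in ps)
                    for n, (_, ps) in self.steps.items()}
        ready = deque(n for n, d in indegree.items() if d == 0)
        results = dict(inputs)
        while ready:
            name = ready.popleft()
            fn, parents = self.steps[name]
            results[name] = fn(*(results[p] for p in parents))
            for child, (_, ps) in self.steps.items():
                if name in ps:
                    indegree[child] -= 1
                    if indegree[child] == 0:
                        ready.append(child)
        return results

# Example: a two-step "draft then review" pipeline over plain strings.
pipe = (ToyPipeline()
        .add("draft", lambda topic: f"a draft about {topic}", parents=["topic"])
        .add("review", lambda draft: draft.upper(), parents=["draft"]))
print(pipe.run(topic="swarms")["review"])  # prints "A DRAFT ABOUT SWARMS"
```

The same executor shape extends naturally to fan-out/fan-in graphs, which is what distinguishes a DAG from a purely linear chain.
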
# The Swarms Flywheel

1. **Building a Supportive Community:** Initiate by establishing an engaging and inclusive open-source community for both developers and sales freelancers around Swarms. Regular online meetups, webinars, tutorials, and sales training can make them feel welcome and encourage contributions and sales efforts.

2. **Increased Contributions and Sales Efforts:** The more engaged the community, the more developers will contribute to Swarms and the more effort sales freelancers will put into selling Swarms.

3. **Improvement in Quality and Market Reach:** More developer contributions mean better quality, reliability, and feature offerings from Swarms. Simultaneously, increased sales efforts from freelancers boost Swarms' market penetration and visibility.

4. **Rise in User Base:** As Swarms becomes more robust and better known, the user base grows, driving more revenue.

5. **Greater Financial Incentives:** Increased revenue can be redirected to offer more significant financial incentives to both developers and salespeople. Developers can be incentivized based on their contribution to Swarms, and salespeople can be rewarded with higher commissions.

6. **Attract More Developers and Salespeople:** These financial incentives, coupled with the recognition and experience from participating in a successful project, attract more developers and salespeople to the community.

7. **Wider Adoption of Swarms:** An ever-improving product, a growing user base, and an increasing number of passionate salespeople accelerate the adoption of Swarms.

8. **Return to Step 1:** As the community, user base, and sales network continue to grow, the cycle repeats, each time speeding up the flywheel.

```markdown
+---------------------+
|    Building a       |
|    Supportive       | <--+
|    Community        |    |
+--------+------------+    |
         |                 |
         v                 |
+--------+------------+    |
|    Increased        |    |
|    Contributions &  |    |
|    Sales Efforts    |    |
+--------+------------+    |
         |                 |
         v                 |
+--------+------------+    |
|    Improvement in   |    |
|    Quality & Market |    |
|    Reach            |    |
+--------+------------+    |
         |                 |
         v                 |
+--------+------------+    |
|    Rise in User     |    |
|    Base             |    |
+--------+------------+    |
         |                 |
         v                 |
+--------+------------+    |
|    Greater Financial|    |
|    Incentives       |    |
+--------+------------+    |
         |                 |
         v                 |
+--------+------------+    |
|    Attract More     |    |
|    Developers &     |    |
|    Salespeople      |    |
+--------+------------+    |
         |                 |
         v                 |
+--------+------------+    |
|    Wider Adoption of|    |
|    Swarms           |----+
+---------------------+
```
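
The compounding nature of the loop can also be sketched numerically. Every coefficient below is an invented placeholder; the point is only that each pass through the cycle feeds the next:

```python
def simulate_flywheel(cycles: int, community: float = 100.0) -> list:
    """Toy model of the flywheel: community -> contributions -> quality ->
    users -> revenue -> incentives -> back to community.

    All coefficients are made-up placeholders for illustration.
    """
    history = [community]
    for _ in range(cycles):
        contributions = 0.3 * community        # members who actively contribute
        quality = 1.0 + 0.002 * contributions  # contributions lift quality
        users = community * quality            # adoption tracks quality
        revenue = 0.1 * users                  # revenue per active user
        incentives = 0.5 * revenue             # share reinvested in people
        community += 0.2 * incentives          # incentives attract new members
        history.append(community)
    return history
```

Each cycle's gain is larger than the last, which is exactly the flywheel effect the diagram describes.
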

# Potential Risks and Mitigations:

1. **Insufficient Contributions or Quality of Work**: Open-source efforts rely on individuals being willing and able to spend time contributing. If not enough people participate, or the work they produce is of poor quality, product development could stall.
    * **Mitigation**: Create a robust community with clear guidelines, support, and resources. Provide incentives for quality contributions, such as a reputation system, swag, or financial rewards. Conduct thorough code reviews to ensure the quality of contributions.

2. **Lack of Sales Results**: Commission-based salespeople will only continue to sell the product if they're successful. If they aren't making enough sales, they may lose motivation and cease their efforts.
    * **Mitigation**: Provide adequate sales training and resources. Ensure the product-market fit is strong, and adjust messaging or sales tactics as necessary. Consider implementing a minimum commission or base pay to reduce risk for salespeople.

3. **Poor User Experience or User Adoption**: If users don't find the product useful or easy to use, they won't adopt it, and the user base won't grow. This could also discourage salespeople and contributors.
    * **Mitigation**: Prioritize user experience in the product development process. Regularly gather and incorporate user feedback. Ensure robust user support is in place.

4. **Inadequate Financial Incentives**: If the financial rewards don't justify the time and effort contributors and salespeople are putting in, they will likely disengage.
    * **Mitigation**: Regularly review and adjust financial incentives as needed. Ensure that the method for calculating and distributing rewards is transparent and fair.

5. **Security and Compliance Risks**: As the user base grows and the software becomes more complex, the risk of security issues increases. Moreover, as contributors from various regions join, compliance with various international laws could become an issue.
    * **Mitigation**: Establish strong security practices from the start. Regularly conduct security audits. Seek legal counsel to understand and adhere to international laws and regulations.

## Activation Plan for the Flywheel:

1. **Community Building**: Begin by fostering a supportive community around Swarms. Encourage early adopters to contribute and provide feedback. Create comprehensive documentation, community guidelines, and a forum for discussion and support.

2. **Sales and Development Training**: Provide resources and training for salespeople and developers. Make sure they understand the product, its value, and how to effectively contribute or sell.

3. **Increase Contributions and Sales Efforts**: Encourage increased participation by highlighting successful contributions and sales, rewarding top contributors and salespeople, and regularly communicating about the project's progress and impact.

4. **Iterate and Improve**: Continually gather and implement feedback to improve Swarms and its market reach. The better the product and its alignment with the market, the more the user base will grow.

5. **Expand User Base**: As the product improves and sales efforts continue, the user base should grow. Ensure you have the infrastructure to support this growth and maintain a positive user experience.

6. **Increase Financial Incentives**: As the user base and product grow, so too should the financial incentives. Make sure rewards continue to be competitive and attractive.

7. **Attract More Contributors and Salespeople**: As the financial incentives and success of the product increase, this should attract more contributors and salespeople, further feeding the flywheel.

Throughout this process, it's important to regularly reassess and adjust your strategy as necessary. Stay flexible and responsive to changes in the market, user feedback, and the evolving needs of the community.

## Purpose

Artificial Intelligence has grown at an exponential rate over the past decade. Yet we are far from fully harnessing its potential. Today's AI models operate in isolation, each working separately in its own corner. But life doesn't work like that. The world doesn't work like that. Success isn't built in silos; it's built in teams.

Imagine a world where AI models work in unison, where they can collaborate, interact, and pool their collective intelligence to achieve more than any single model could. This is the future we envision. But today, we lack a framework for AI to collaborate effectively, to form a true swarm of intelligent agents.

This is a difficult problem, one that has eluded a solution. It requires sophisticated systems that allow individual models to not just communicate but also understand each other, pool knowledge and resources, and create collective intelligence. This is the next frontier of AI.

But here at Swarms, we have a secret sauce. It's not just a technology or a breakthrough invention. It's a way of thinking: the philosophy of rapid iteration. With each cycle, we make massive progress. We experiment, we learn, and we grow. We have developed a pioneering framework that can enable AI models to work together as a swarm, combining their strengths to create richer, more powerful outputs.

We are uniquely positioned to take on this challenge with 1,500+ devoted researchers in Agora. We have assembled a team of world-class experts, experienced and driven, united by a shared vision. Our commitment to breaking barriers, pushing boundaries, and our belief in the power of collective intelligence make us the best team to usher in this future and fundamentally advance our species, Humanity.

---
# Research Lists

A compilation of projects, papers, and blogs on autonomous agents.

## Table of Contents

- [Projects](#projects)
- [Articles](#articles)
- [Talks](#talks)

## Projects

### Developer tools
- [2023/08/10] [ModelScope-Agent](https://github.com/modelscope/modelscope-agent) - An agent framework connecting models in ModelScope with the world
- [2023/05/25] [Gorilla](https://github.com/ShishirPatil/gorilla) - An API store for LLMs
- [2023/03/31] [BMTools](https://github.com/OpenBMB/BMTools) - Tool learning for big models, open-source solutions of ChatGPT-Plugins
- [2023/03/09] [LMQL](https://github.com/eth-sri/lmql) - A query language for programming (large) language models.
- [2022/10/25] [LangChain](https://github.com/hwchase17/langchain) - ⚡ Building applications with LLMs through composability ⚡

### Applications
- [2023/07/08] [ShortGPT](https://github.com/RayVentura/ShortGPT) - 🚀🎬 An experimental AI framework for automated short/video content creation. Enables creators to rapidly produce, manage, and deliver content using AI and automation.
- [2023/07/05] [gpt-researcher](https://github.com/assafelovic/gpt-researcher) - GPT-based autonomous agent that does online comprehensive research on any given topic
- [2023/07/04] [DemoGPT](https://github.com/melih-unsal/DemoGPT) - 🧩 DemoGPT enables you to create quick demos by just using prompts. [[demo]](demogpt.io)
- [2023/06/30] [MetaGPT](https://github.com/geekan/MetaGPT) - 🌟 The multi-agent framework: given a one-line requirement, return PRD, design, tasks, repo
- [2023/06/11] [gpt-engineer](https://github.com/AntonOsika/gpt-engineer) - Specify what you want it to build, the AI asks for clarification, and then builds it.
- [2023/05/16] [SuperAGI](https://github.com/TransformerOptimus/SuperAGI) - A dev-first open-source autonomous AI agent framework, enabling developers to build, manage & run useful autonomous agents quickly and reliably.
- [2023/05/13] [Developer](https://github.com/smol-ai/developer) - Human-centric & coherent whole-program synthesis, aka your own personal junior developer
- [2023/04/07] [AgentGPT](https://github.com/reworkd/AgentGPT) - 🤖 Assemble, configure, and deploy autonomous AI agents in your browser. [[demo]](agentgpt.reworkd.ai)
- [2023/04/03] [BabyAGI](https://github.com/yoheinakajima/babyagi) - An example of an AI-powered task management system
- [2023/03/30] [AutoGPT](https://github.com/Significant-Gravitas/Auto-GPT) - An experimental open-source attempt to make GPT-4 fully autonomous.

### Benchmarks
- [2023/08/07] [AgentBench](https://github.com/THUDM/AgentBench) - A comprehensive benchmark to evaluate LLMs as agents. [paper](https://arxiv.org/abs/2308.03688)
- [2023/06/18] [Auto-GPT-Benchmarks](https://github.com/Significant-Gravitas/Auto-GPT-Benchmarks) - A repo built for the purpose of benchmarking the performance of agents, regardless of how they are set up and how they work.
- [2023/05/28] [ToolBench](https://github.com/OpenBMB/ToolBench) - An open platform for training, serving, and evaluating large language models for tool learning.

## Articles

### Research Papers
- [2023/08/11] [BOLAA: Benchmarking and Orchestrating LLM-Augmented Autonomous Agents](https://arxiv.org/pdf/2308.05960v1.pdf), Zhiwei Liu, et al.
- [2023/07/31] [ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs](https://arxiv.org/abs/2307.16789), Yujia Qin, et al.
- [2023/07/16] [Communicative Agents for Software Development](https://arxiv.org/abs/2307.07924), Chen Qian, et al.
- [2023/06/09] [Mind2Web: Towards a Generalist Agent for the Web](https://arxiv.org/pdf/2306.06070.pdf), Xiang Deng, et al. [[code]](https://github.com/OSU-NLP-Group/Mind2Web) [[demo]](https://osu-nlp-group.github.io/Mind2Web/)
- [2023/06/05] [Orca: Progressive Learning from Complex Explanation Traces of GPT-4](https://arxiv.org/pdf/2306.02707.pdf), Subhabrata Mukherjee, et al.
- [2023/05/25] [Voyager: An Open-Ended Embodied Agent with Large Language Models](https://arxiv.org/pdf/2305.16291.pdf), Guanzhi Wang, et al. [[code]](https://github.com/MineDojo/Voyager) [[website]](https://voyager.minedojo.org/)
- [2023/05/23] [ReWOO: Decoupling Reasoning from Observations for Efficient Augmented Language Models](https://arxiv.org/pdf/2305.18323.pdf), Binfeng Xu, et al. [[code]](https://github.com/billxbf/ReWOO)
- [2023/05/19] [FrugalGPT: How to Use Large Language Models While Reducing Cost and Improving Performance](https://arxiv.org/abs/2305.05176), Lingjiao Chen, et al.
- [2023/05/17] [Tree of Thoughts: Deliberate Problem Solving with Large Language Models](https://arxiv.org/abs/2305.10601), Shunyu Yao, et al. [[code]](https://github.com/kyegomez/tree-of-thoughts) [[code-orig]](https://github.com/ysymyth/tree-of-thought-llm)
- [2023/05/12] [MEGABYTE: Predicting Million-byte Sequences with Multiscale Transformers](https://arxiv.org/abs/2305.07185), Lili Yu, et al.
- [2023/05/06] [Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models](https://arxiv.org/abs/2305.04091), Lei Wang, et al.
- [2023/05/01] [Learning to Reason and Memorize with Self-Notes](https://arxiv.org/abs/2305.00833), Jack Lanchantin, et al.
- [2023/04/24] [WizardLM: Empowering Large Language Models to Follow Complex Instructions](https://arxiv.org/abs/2304.12244), Can Xu, et al.
- [2023/04/22] [LLM+P: Empowering Large Language Models with Optimal Planning Proficiency](https://arxiv.org/abs/2304.11477), Bo Liu, et al.
- [2023/04/07] [Generative Agents: Interactive Simulacra of Human Behavior](https://arxiv.org/abs/2304.03442), Joon Sung Park, et al. [[code]](https://github.com/mkturkcan/generative-agents)
- [2023/03/30] [Self-Refine: Iterative Refinement with Self-Feedback](https://arxiv.org/abs/2303.17651), Aman Madaan, et al. [[code]](https://github.com/madaan/self-refine)
- [2023/03/30] [HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in HuggingFace](https://arxiv.org/pdf/2303.17580.pdf), Yongliang Shen, et al. [[code]](https://github.com/microsoft/JARVIS) [[demo]](https://huggingface.co/spaces/microsoft/HuggingGPT)
- [2023/03/20] [Reflexion: Language Agents with Verbal Reinforcement Learning](https://arxiv.org/pdf/2303.11366.pdf), Noah Shinn, et al. [[code]](https://github.com/noahshinn024/reflexion)
- [2023/03/04] [Towards A Unified Agent with Foundation Models](https://openreview.net/pdf?id=JK_B1tB6p-), Norman Di Palo, et al.
- [2023/02/23] [Not what you've signed up for: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection](https://arxiv.org/abs/2302.12173), Sahar Abdelnabi, et al.
- [2023/02/09] [Toolformer: Language Models Can Teach Themselves to Use Tools](https://arxiv.org/pdf/2302.04761.pdf), Timo Schick, et al. [[code]](https://github.com/lucidrains/toolformer-pytorch)
- [2022/12/12] [LMQL: Prompting Is Programming: A Query Language for Large Language Models](https://arxiv.org/abs/2212.06094), Luca Beurer-Kellner, et al.
- [2022/10/06] [ReAct: Synergizing Reasoning and Acting in Language Models](https://arxiv.org/pdf/2210.03629.pdf), Shunyu Yao, et al. [[code]](https://github.com/ysymyth/ReAct)
- [2022/07/20] [Inner Monologue: Embodied Reasoning through Planning with Language Models](https://arxiv.org/pdf/2207.05608.pdf), Wenlong Huang, et al. [[demo]](https://innermonologue.github.io/)
- [2022/04/04] [Do As I Can, Not As I Say: Grounding Language in Robotic Affordances](), Michael Ahn, et al. [[demo]](https://say-can.github.io/)
- [2021/12/17] [WebGPT: Browser-assisted question-answering with human feedback](https://arxiv.org/pdf/2112.09332.pdf), Reiichiro Nakano, et al.
- [2021/06/17] [LoRA: Low-Rank Adaptation of Large Language Models](https://arxiv.org/abs/2106.09685), Edward J. Hu, et al.

### Blog Articles

- [2023/08/14] [A Roadmap of AI Agents (Chinese)](https://zhuanlan.zhihu.com/p/649916692) By Haojie Pan
- [2023/06/23] [LLM Powered Autonomous Agents](https://lilianweng.github.io/posts/2023-06-23-agent/) By Lilian Weng
- [2023/06/11] [A Critical Look at AI-Generated Software](https://spectrum.ieee.org/ai-software) By Jaideep Vaidya and Hafiz Asif
- [2023/04/29] [Auto-GPT: Unleashing the Power of Autonomous AI Agents](https://www.leewayhertz.com/autogpt/) By Akash Takyar
- [2023/04/20] [Conscious Machines: Experiments, Theory, and Implementations (Chinese)](https://pattern.swarma.org/article/230) By Jiang Zhang
- [2023/04/18] [Autonomous Agents & Agent Simulations](https://blog.langchain.dev/agents-round/) By LangChain
- [2023/04/16] [4 Autonomous AI Agents you need to know](https://towardsdatascience.com/4-autonomous-ai-agents-you-need-to-know-d612a643fa92) By Sophia Yang
- [2023/03/31] [ChatGPT that learns to use tools (Chinese)](https://zhuanlan.zhihu.com/p/618448188) By Haojie Pan

### Talks
- [2023/06/05] [Two Paths to Intelligence](https://www.youtube.com/watch?v=rGgGOccMEiY&t=1497s) by Geoffrey Hinton
- [2023/05/24] [State of GPT](https://www.youtube.com/watch?v=bZQun8Y4L2A) by Andrej Karpathy | OpenAI

## The Plan

### Phase 1: Building the Foundation
In the first phase, our focus is on building the basic infrastructure of Swarms. This includes developing key components like the Swarms class, integrating essential tools, and establishing task completion and evaluation logic. We'll also start developing our testing and evaluation framework during this phase. If you're interested in foundational work and have a knack for building robust, scalable systems, this phase is for you.

### Phase 2: Optimizing the System
In the second phase, we'll focus on optimizing Swarms by integrating more advanced features, improving the system's efficiency, and refining our testing and evaluation framework. This phase involves more complex tasks, so if you enjoy tackling challenging problems and contributing to the development of innovative features, this is the phase for you.

### Phase 3: Towards Super-Intelligence
The third phase of our bounty program is the most exciting - this is where we aim to achieve super-intelligence. In this phase, we'll be working on improving the swarm's capabilities, expanding its skills, and fine-tuning the system based on real-world testing and feedback. If you're excited about the future of AI and want to contribute to a project that could potentially transform the digital world, this is the phase for you.

Remember, our roadmap is a guide, and we encourage you to bring your own ideas and creativity to the table. We believe that every contribution, no matter how small, can make a difference. So join us on this exciting journey and help us create the future of Swarms.

@ -0,0 +1,70 @@
## BingChat User Guide

Welcome to the BingChat user guide! This document provides a step-by-step tutorial on how to leverage the BingChat class, an interface to Bing Chat built on the EdgeGPT library.

### Table of Contents

1. [Installation & Prerequisites](#installation)
2. [Setting Up BingChat](#setup)
3. [Interacting with BingChat](#interacting)
4. [Generating Images](#images)
5. [Managing Cookies](#cookies)

### Installation & Prerequisites <a name="installation"></a>

Before initializing the BingChat model, ensure you have the necessary dependencies installed:

```shell
pip install EdgeGPT
```

Additionally, you must have a `cookies.json` file, which is necessary for authenticating with EdgeGPT.
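The exact cookie fields EdgeGPT expects are not covered here, but you can at least sanity-check that the exported file parses as JSON before handing it to BingChat. A minimal sketch (the helper name and default path are illustrative):

```python
import json
from pathlib import Path


def load_cookies(path="cookies.json"):
    """Parse the exported cookies file, failing early with a clear error."""
    cookie_file = Path(path)
    if not cookie_file.exists():
        raise FileNotFoundError(f"{path} not found - export it from your browser first")
    return json.loads(cookie_file.read_text())
```

If this raises, fix the cookie export before debugging BingChat itself.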
### Setting Up BingChat <a name="setup"></a>

To start, import the BingChat class:

```python
from bing_chat import BingChat
```

Initialize BingChat with the path to your `cookies.json`:

```python
chat = BingChat(cookies_path="./path/to/cookies.json")
```

### Interacting with BingChat <a name="interacting"></a>

You can obtain text responses from the underlying model by simply calling the instantiated object:

```python
response = chat("Hello, my name is ChatGPT")
print(response)
```

You can also specify the conversation style:

```python
from bing_chat import ConversationStyle

response = chat("Tell me a joke", style=ConversationStyle.creative)
print(response)
```

### Generating Images <a name="images"></a>

BingChat allows you to generate images based on text prompts:

```python
image_path = chat.create_img("Sunset over mountains", auth_cookie="YOUR_AUTH_COOKIE")
print(f"Image saved at: {image_path}")
```

Ensure you provide the required `auth_cookie` for image generation.

### Managing Cookies <a name="cookies"></a>

You can set a directory path for managing cookies using the `set_cookie_dir_path` method:

```python
BingChat.set_cookie_dir_path("./path/to/cookies_directory")
```
@ -0,0 +1,3 @@
This section of the documentation is dedicated to examples highlighting Swarms functionality.

We try to keep all examples up to date, but if you think there is a bug, please [submit a pull request](https://github.com/kyegomez/swarms-docs/tree/main/docs/examples). We are also more than happy to include new examples :)
@ -0,0 +1,118 @@
## ChatGPT User Guide with Abstraction

Welcome to the ChatGPT user guide! This document will walk you through the Reverse Engineered ChatGPT API, its usage, and how to leverage the abstraction in `revgpt.py` for seamless integration.

### Table of Contents

1. [Installation](#installation)
2. [Initial Setup and Configuration](#initial-setup)
3. [Using the Abstract Class from `revgpt.py`](#using-abstract-class)
4. [V1 Standard ChatGPT](#v1-standard-chatgpt)
5. [V3 Official Chat API](#v3-official-chat-api)
6. [Credits & Disclaimers](#credits-disclaimers)

### Installation <a name="installation"></a>

To kickstart your journey with ChatGPT, first install the ChatGPT package:

```shell
python -m pip install --upgrade revChatGPT
```

**Supported Python Versions:**
- Minimum: Python 3.9
- Recommended: Python 3.11+

### Initial Setup and Configuration <a name="initial-setup"></a>

1. **Account Setup:** Register on [OpenAI's ChatGPT](https://chat.openai.com/).
2. **Authentication:** Obtain your access token from OpenAI's platform.
3. **Environment Variables:** Configure your environment with the necessary variables. An example of these variables can be found at the bottom of the guide.

### Using the Abstract Class from `revgpt.py` <a name="using-abstract-class"></a>

The abstraction provided in `revgpt.py` is designed to simplify your interactions with ChatGPT.

1. **Import the Necessary Modules:**

```python
import os
from dotenv import load_dotenv
from revgpt import AbstractChatGPT
```

2. **Load Environment Variables:**

```python
load_dotenv()
```

3. **Initialize the ChatGPT Abstract Class:**

```python
chat = AbstractChatGPT(api_key=os.getenv("ACCESS_TOKEN"), **config)
```

4. **Start Interacting with ChatGPT:**

```python
response = chat.ask("Hello, ChatGPT!")
print(response)
```

With the abstract class, you can seamlessly switch between different versions or models of ChatGPT without changing much of your code.
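To illustrate that switching claim, here is a toy sketch: a registry maps a version string to a model class, so changing versions is a one-line change. The stub classes below stand in for the real `RevChatGPTModelv1`/`RevChatGPTModelv4` models from this guide and are not the actual API:

```python
# Stand-ins for the real model classes; they only mimic the `run` interface.
class V1Stub:
    def run(self, task):
        return f"v1 response to: {task}"


class V4Stub:
    def run(self, task):
        return f"v4 response to: {task}"


# Registry mapping a version string to a model class
MODELS = {"v1": V1Stub, "v4": V4Stub}


def make_model(version: str):
    """Pick a model implementation by name; the calling code never changes."""
    return MODELS[version]()


model = make_model("v1")  # swap to "v4" without touching the rest of the code
print(model.run("Hello!"))
```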
### V1 Standard ChatGPT <a name="v1-standard-chatgpt"></a>

If you wish to use V1 specifically:

1. Import the model:

```python
from swarms.models.revgptV1 import RevChatGPTModelv1
```

2. Initialize:

```python
model = RevChatGPTModelv1(access_token=os.getenv("ACCESS_TOKEN"), **config)
```

3. Interact:

```python
response = model.run("What's the weather like?")
print(response)
```

### V3 Official Chat API <a name="v3-official-chat-api"></a>

For users looking to integrate the official V3 API:

1. Import the model:

```python
from swarms.models.revgptV4 import RevChatGPTModelv4
```

2. Initialize:

```python
model = RevChatGPTModelv4(access_token=os.getenv("OPENAI_API_KEY"), **config)
```

3. Interact:

```python
response = model.run("Tell me a fun fact!")
print(response)
```

### Credits & Disclaimers <a name="credits-disclaimers"></a>

- This project is not an official OpenAI product and is not affiliated with OpenAI. Use at your own discretion.
- Many thanks to all the contributors who have made this project possible.
- Special acknowledgment to [virtualharby](https://www.youtube.com/@virtualharby) for the motivating music!

---

By following this guide, you should now have a clear understanding of how to use the Reverse Engineered ChatGPT API and its abstraction. Happy coding!
@ -0,0 +1,344 @@
# Tutorial: Understanding and Utilizing Worker Examples

## Table of Contents
1. Introduction
2. Code Overview
   - Import Statements
   - Initializing API Key and Language Model
   - Creating Swarm Tools
   - Appending Tools to a List
   - Initializing a Worker Node
3. Understanding the `hf_agent` Tool
4. Understanding the `omni_agent` Tool
5. Understanding the `compile` Tool
6. Running a Swarm
7. Interactive Examples
   - Example 1: Initializing API Key and Language Model
   - Example 2: Using the `hf_agent` Tool
   - Example 3: Using the `omni_agent` Tool
   - Example 4: Using the `compile` Tool
8. Conclusion

## 1. Introduction
The provided code showcases a system built around a worker node that utilizes various AI models and tools to perform tasks. This tutorial breaks the code down step by step, explaining its components, how they work together, and how to utilize its modularity for various tasks.

## 2. Code Overview

### Import Statements
The code begins with import statements, bringing in necessary modules and classes. Key imports include the `OpenAIChat` class, which represents a language model, and several custom agents and tools from the `swarms` package.

```python
import os
import interpreter  # from the open-interpreter package
from swarms.agents.hf_agents import HFAgent
from swarms.agents.omni_modal_agent import OmniModalAgent
from swarms.models import OpenAIChat
from swarms.tools.autogpt import tool
from swarms.workers import Worker
```
### Initializing API Key and Language Model
Here, an API key is initialized, and a language model (`OpenAIChat`) is created. This model is capable of generating human-like text based on the provided input.

```python
# Initialize API Key
api_key = "YOUR_OPENAI_API_KEY"

# Initialize the language model
llm = OpenAIChat(
    openai_api_key=api_key,
    temperature=0.5,
)
```

### Creating Swarm Tools
The code defines three tools: `hf_agent`, `omni_agent`, and `compile`. These tools encapsulate specific functionalities and can be invoked to perform tasks.

### Appending Tools to a List
All defined tools are appended to a list called `tools`. This list is later used when initializing a worker node, allowing the node to access and utilize these tools.

```python
# Append tools to a list
tools = [
    hf_agent,
    omni_agent,
    compile,
]
```

### Initializing a Worker Node
A worker node is initialized using the `Worker` class. The worker node is equipped with the language model, a name, an API key, and the list of tools. It's set up to perform tasks without human intervention.

```python
# Initialize a single Worker node with previously defined tools in addition to its predefined tools
node = Worker(
    llm=llm,
    ai_name="Optimus Prime",
    openai_api_key=api_key,
    ai_role="Worker in a swarm",
    external_tools=tools,
    human_in_the_loop=False,
    temperature=0.5,
)
```
## 3. Understanding the `hf_agent` Tool
The `hf_agent` tool utilizes an OpenAI model (`text-davinci-003`) to perform tasks. It takes a task as input and returns a response. This tool is suitable for multi-modal tasks like generating images, videos, speech, etc. The tool's primary rule is that it should not be used for simple tasks like generating summaries.

```python
@tool
def hf_agent(task: str = None):
    # Create an HFAgent instance with the specified model and API key
    agent = HFAgent(model="text-davinci-003", api_key=api_key)
    # Run the agent with the provided task and optional text input
    # (the text is Spanish for "This is a very nice API!")
    response = agent.run(task, text="¡Este es un API muy agradable!")
    return response
```

## 4. Understanding the `omni_agent` Tool
The `omni_agent` tool is more versatile and leverages the `llm` (language model) to interact with Hugging Face models for various tasks. It's intended for multi-modal tasks such as document-question-answering, image-captioning, summarization, and more. This tool, too, should not be used for simple tasks.

```python
@tool
def omni_agent(task: str = None):
    # Create an OmniModalAgent instance with the provided language model
    agent = OmniModalAgent(llm)
    # Run the agent with the provided task
    response = agent.run(task)
    return response
```

## 5. Understanding the `compile` Tool
The `compile` tool allows the execution of code locally, supporting various programming languages like Python, JavaScript, and Shell. It provides a natural-language interface to your computer's capabilities. Users can chat with this tool in a terminal-like interface to perform tasks such as creating and editing files, controlling a browser, and more.

```python
@tool
def compile(task: str):
    # Environment variable values must be strings, and they must be
    # set before the interpreter runs for them to take effect
    os.environ["INTERPRETER_CLI_AUTO_RUN"] = "True"
    os.environ["INTERPRETER_CLI_FAST_MODE"] = "True"
    os.environ["INTERPRETER_CLI_DEBUG"] = "True"

    # Use the interpreter module to chat with the local interpreter
    task = interpreter.chat(task, return_messages=True)
    interpreter.chat()
    interpreter.reset(task)
```
## 6. Running a Swarm
After defining tools and initializing the worker node, a specific task is provided as input to the worker node. The node then runs the task, and the response is printed to the console.

```python
# Specify the task
task = "What were the winning Boston Marathon times for the past 5 years (ending in 2022)? Generate a table of the year, name, country of origin, and times."

# Run the node on the task
response = node.run(task)

# Print the response
print(response)
```
## Full Code
- The full code example of stacked swarms

```python
import os

import interpreter  # from the open-interpreter package

from swarms.agents.hf_agents import HFAgent
from swarms.agents.omni_modal_agent import OmniModalAgent
from swarms.models import OpenAIChat
from swarms.tools.autogpt import tool
from swarms.workers import Worker

# Initialize API Key
api_key = ""


# Initialize the language model.
# This model can be swapped out for Anthropic, Hugging Face models like Mistral, etc.
llm = OpenAIChat(
    openai_api_key=api_key,
    temperature=0.5,
)


# Wrap a function with the tool decorator to make it a tool,
# then add a docstring for tool documentation
@tool
def hf_agent(task: str = None):
    """
    A tool that uses an OpenAI model to respond to a task by searching for a model on Hugging Face.
    It first downloads the model, then uses it.

    Rules: Don't call this tool for simple tasks like generating a summary;
    only call it for multi-modal tasks like generating images, videos, speech, etc.
    """
    agent = HFAgent(model="text-davinci-003", api_key=api_key)
    # The text below is Spanish for "This is a very nice API!"
    response = agent.run(task, text="¡Este es un API muy agradable!")
    return response


# Wrap a function with the tool decorator to make it a tool
@tool
def omni_agent(task: str = None):
    """
    A tool that uses an OpenAI model to utilize and call Hugging Face models and guide them to perform a task.

    Rules: Don't call this tool for simple tasks like generating a summary;
    only call it for multi-modal tasks like generating images, videos, or speech.
    The following tasks are what this tool should be used for:

    Tasks omni agent is good for:
    --------------
    document-question-answering
    image-captioning
    image-question-answering
    image-segmentation
    speech-to-text
    summarization
    text-classification
    text-question-answering
    translation
    huggingface-tools/text-to-image
    huggingface-tools/text-to-video
    text-to-speech
    huggingface-tools/text-download
    huggingface-tools/image-transformation
    """
    agent = OmniModalAgent(llm)
    response = agent.run(task)
    return response


# Code Interpreter
@tool
def compile(task: str):
    """
    Open Interpreter lets LLMs run code (Python, Javascript, Shell, and more) locally.
    You can chat with Open Interpreter through a ChatGPT-like interface in your terminal
    by running $ interpreter after installing.

    This provides a natural-language interface to your computer's general-purpose capabilities:

    Create and edit photos, videos, PDFs, etc.
    Control a Chrome browser to perform research
    Plot, clean, and analyze large datasets
    ...etc.
    ⚠️ Note: You'll be asked to approve code before it's run.

    Rules: Only use when asked to generate code or an application of some kind
    """
    # Environment variable values must be strings, and they must be
    # set before the interpreter runs for them to take effect
    os.environ["INTERPRETER_CLI_AUTO_RUN"] = "True"
    os.environ["INTERPRETER_CLI_FAST_MODE"] = "True"
    os.environ["INTERPRETER_CLI_DEBUG"] = "True"

    task = interpreter.chat(task, return_messages=True)
    interpreter.chat()
    interpreter.reset(task)


# Append tools to a list
tools = [hf_agent, omni_agent, compile]


# Initialize a single Worker node with previously defined tools in addition to its
# predefined tools
node = Worker(
    llm=llm,
    ai_name="Optimus Prime",
    openai_api_key=api_key,
    ai_role="Worker in a swarm",
    external_tools=tools,
    human_in_the_loop=False,
    temperature=0.5,
)

# Specify task
task = "What were the winning Boston Marathon times for the past 5 years (ending in 2022)? Generate a table of the year, name, country of origin, and times."

# Run the node on the task
response = node.run(task)

# Print the response
print(response)
```
## 8. Conclusion
In this extensive tutorial, we explored a sophisticated system designed to harness the power of AI models and tools for a wide range of tasks. We peeled back the layers of code, dissected its components, and gained an understanding of how these elements come together to create a versatile, modular, and powerful swarm-based AI system.

## What We've Learned

Throughout this tutorial, we've covered the following key aspects:

### Code Structure and Components
We dissected the code into its fundamental building blocks:
- **Import Statements:** We imported necessary modules and libraries, setting the stage for our system's functionality.
- **Initializing API Key and Language Model:** We learned how to set up the essential API key and initialize the language model, a core component for text generation and understanding.
- **Creating Swarm Tools:** We explored how to define tools, encapsulating specific functionalities that our system can leverage.
- **Appending Tools to a List:** We aggregated our tools into a list, making them readily available for use.
- **Initializing a Worker Node:** We created a worker node equipped with tools, a name, and configuration settings.

### Tools and Their Functions
We dove deep into the purpose and functionality of three crucial tools:
- **`hf_agent`:** We understood how this tool employs an OpenAI model for multi-modal tasks, and its use cases beyond simple summarization.
- **`omni_agent`:** We explored the versatility of this tool, guiding Hugging Face models to perform a wide range of multi-modal tasks.
- **`compile`:** We saw how this tool allows the execution of code in multiple languages, providing a natural-language interface for various computational tasks.

### Interactive Examples
We brought the code to life through interactive examples, showcasing how to initialize the language model, generate text, perform document-question-answering, and execute code, all with practical, real-world scenarios.

## A Recap: The Worker Node's Role

At the heart of this system lies the Worker Node, a versatile entity capable of wielding the power of AI models and tools to accomplish tasks. The Worker Node's role is pivotal in the following ways:

1. **Task Execution:** It is responsible for executing tasks, harnessing the capabilities of the defined tools to generate responses or perform actions.

2. **Modularity:** The Worker Node benefits from the modularity of the system. It can easily access and utilize a variety of tools, allowing it to adapt to diverse tasks and requirements.

3. **Human in the Loop:** While the example here is configured to operate without human intervention, the Worker Node can be customized to incorporate human input or approval when needed.

4. **Integration:** It can be extended to integrate with other AI models, APIs, or services, expanding its functionality and versatility.

## The Road Ahead: Future Features and Enhancements

As we conclude this tutorial, let's peek into the future of this system. While the current implementation is already powerful, there is always room for growth and improvement. Here are some potential future features and enhancements to consider:

### 1. Enhanced Natural Language Understanding
- **Semantic Understanding:** Improve the system's ability to understand context and nuances in natural language, enabling more accurate responses.

### 2. Multimodal Capabilities
- **Extended Multimodal Support:** Expand the `omni_agent` tool to support additional types of multimodal tasks, such as video generation or audio processing.

### 3. Customization and Integration
- **User-Defined Tools:** Allow users to define their own custom tools, opening up endless possibilities for tailoring the system to specific needs.

### 4. Collaborative Swarms
- **Swarm Collaboration:** Enable multiple Worker Nodes to collaborate on complex tasks, creating a distributed, intelligent swarm system.

### 5. User-Friendly Interfaces
- **Graphical User Interface (GUI):** Develop a user-friendly GUI for easier interaction and task management, appealing to a wider audience.

### 6. Continuous Learning
- **Active Learning:** Implement mechanisms for the system to learn and adapt over time, improving its performance with each task.

### 7. Security and Privacy
- **Enhanced Security:** Implement robust security measures to safeguard sensitive data and interactions within the system.

### 8. Community and Collaboration
- **Open-Source Community:** Foster an open-source community around the system, encouraging contributions and innovation from developers worldwide.

### 9. Integration with Emerging Technologies
- **Integration with Emerging AI Models:** Keep the system up to date by seamlessly integrating with new and powerful AI models as they emerge in the industry.

## In Conclusion

In this tutorial, we've journeyed through a complex AI system, unraveling its inner workings and understanding its potential. We've witnessed how code can transform into a powerful tool capable of handling a vast array of tasks, from generating creative stories to executing code snippets.

As we conclude, we stand at the threshold of an exciting future for AI and technology. This system, with its modular design and potential for continuous improvement, embodies the spirit of innovation and adaptability. Whether you're a developer, a researcher, or an enthusiast, the possibilities are boundless, and the journey is just beginning.

Embrace this knowledge, explore the system, and embark on your own quest to shape the future of AI. With each line of code, you have the power to transform ideas into reality and unlock new horizons of innovation. The future is yours to create, and the tools are at your fingertips.
@ -0,0 +1,187 @@
# Swarm Alpha: Data Cruncher
**Overview**: Processes large datasets.
**Strengths**: Efficient data handling.
**Weaknesses**: Requires structured data.

**Pseudo Code**:
```
FOR each data_entry IN dataset:
    result = PROCESS(data_entry)
    STORE(result)
END FOR
RETURN aggregated_results
```
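Rendered as runnable Python, the same loop might look like this; `process` is a stand-in for whatever per-entry work a real swarm would delegate to an agent:

```python
def process(data_entry):
    # Placeholder transform; a real swarm would hand this to an agent
    return data_entry * 2


def crunch(dataset):
    results = []
    for data_entry in dataset:               # FOR each data_entry IN dataset
        results.append(process(data_entry))  # result = PROCESS(...); STORE(result)
    return results                           # RETURN aggregated_results


print(crunch([1, 2, 3]))  # → [2, 4, 6]
```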

# Swarm Beta: Artistic Ally
**Overview**: Generates art pieces.
**Strengths**: Creativity.
**Weaknesses**: Somewhat unpredictable.

**Pseudo Code**:
```
INITIATE canvas_parameters
SELECT art_style
DRAW(canvas_parameters, art_style)
RETURN finished_artwork
```

# Swarm Gamma: Sound Sculptor
**Overview**: Crafts audio sequences.
**Strengths**: Diverse audio outputs.
**Weaknesses**: Complexity in refining outputs.

**Pseudo Code**:
```
DEFINE sound_parameters
SELECT audio_style
GENERATE_AUDIO(sound_parameters, audio_style)
RETURN audio_sequence
```

# Swarm Delta: Web Weaver
**Overview**: Constructs web designs.
**Strengths**: Modern design sensibility.
**Weaknesses**: Limited to web interfaces.

**Pseudo Code**:
```
SELECT template
APPLY user_preferences(template)
DESIGN_web(template, user_preferences)
RETURN web_design
```

# Swarm Epsilon: Code Compiler
**Overview**: Writes and compiles code snippets.
**Strengths**: Quick code generation.
**Weaknesses**: Limited to certain programming languages.

**Pseudo Code**:
```
DEFINE coding_task
WRITE_CODE(coding_task)
COMPILE(code)
RETURN executable
```

# Swarm Zeta: Security Shield
**Overview**: Detects system vulnerabilities.
**Strengths**: High threat detection rate.
**Weaknesses**: Potential false positives.

**Pseudo Code**:
```
MONITOR system_activity
IF suspicious_activity_detected:
    ANALYZE threat_level
    INITIATE mitigation_protocol
END IF
RETURN system_status
```

# Swarm Eta: Researcher Relay
**Overview**: Gathers and synthesizes research data.
**Strengths**: Access to vast databases.
**Weaknesses**: Depth of research can vary.

**Pseudo Code**:
```
DEFINE research_topic
SEARCH research_sources(research_topic)
SYNTHESIZE findings
RETURN research_summary
```

---

# Swarm Theta: Sentiment Scanner
**Overview**: Analyzes text for sentiment and emotional tone.
**Strengths**: Accurate sentiment detection.
**Weaknesses**: Contextual nuances might be missed.

**Pseudo Code**:
```
INPUT text_data
ANALYZE text_data FOR emotional_tone
DETERMINE sentiment_value
RETURN sentiment_value
```
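A toy lexicon-based version of this scanner can be written in a few lines; the word lists are invented for illustration and far from production quality:

```python
POSITIVE = {"good", "great", "love", "excellent"}
NEGATIVE = {"bad", "terrible", "hate", "awful"}


def sentiment(text_data):
    words = text_data.lower().split()
    # DETERMINE sentiment_value from the emotional tone of the words
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"


print(sentiment("I love this great tool"))  # → positive
```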

# Swarm Iota: Image Interpreter
**Overview**: Processes and categorizes images.
**Strengths**: High image recognition accuracy.
**Weaknesses**: Can struggle with abstract visuals.

**Pseudo Code**:
```
LOAD image_data
PROCESS image_data FOR features
CATEGORIZE image_based_on_features
RETURN image_category
```

# Swarm Kappa: Language Learner
**Overview**: Translates and interprets multiple languages.
**Strengths**: Supports multiple languages.
**Weaknesses**: Nuances in dialects might pose challenges.

**Pseudo Code**:
```
RECEIVE input_text, target_language
TRANSLATE input_text TO target_language
RETURN translated_text
```

# Swarm Lambda: Trend Tracker
**Overview**: Monitors and predicts trends based on data.
**Strengths**: Proactive trend identification.
**Weaknesses**: Requires continuous data stream.

**Pseudo Code**:
```
COLLECT data_over_time
ANALYZE data_trends
PREDICT upcoming_trends
RETURN trend_forecast
```

# Swarm Mu: Financial Forecaster
**Overview**: Analyzes financial data to predict market movements.
**Strengths**: In-depth financial analytics.
**Weaknesses**: Market volatility can affect predictions.

**Pseudo Code**:
```
GATHER financial_data
COMPUTE statistical_analysis
FORECAST market_movements
RETURN financial_projections
```

# Swarm Nu: Network Navigator
**Overview**: Optimizes and manages network traffic.
**Strengths**: Efficient traffic management.
**Weaknesses**: Depends on network infrastructure.

**Pseudo Code**:
```
MONITOR network_traffic
IDENTIFY congestion_points
OPTIMIZE traffic_flow
RETURN network_status
```

# Swarm Xi: Content Curator
**Overview**: Gathers and presents content based on user preferences.
**Strengths**: Personalized content delivery.
**Weaknesses**: Limited by available content sources.

**Pseudo Code**:
```
DEFINE user_preferences
SEARCH content_sources
FILTER content_matching_preferences
DISPLAY curated_content
```
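As a runnable sketch of the same flow, with a made-up in-memory content source standing in for real external sources:

```python
# Made-up content source; a real curator would query external sources
CONTENT_SOURCES = [
    {"title": "Intro to Swarm Intelligence", "topic": "ai"},
    {"title": "Sourdough Basics", "topic": "baking"},
    {"title": "Multi-Agent Systems 101", "topic": "ai"},
]


def curate(user_preferences):
    # FILTER content_matching_preferences
    return [c["title"] for c in CONTENT_SOURCES if c["topic"] in user_preferences]


print(curate({"ai"}))  # → ['Intro to Swarm Intelligence', 'Multi-Agent Systems 101']
```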
@ -0,0 +1,50 @@
# Swarms Multi-Agent Permissions System (SMAPS)

## Description
SMAPS is a robust permissions management system designed to integrate seamlessly with Swarm's multi-agent AI framework. Drawing inspiration from Amazon's IAM, SMAPS ensures secure, granular control over agent actions while allowing for collaborative human-in-the-loop interventions.

## Technical Specification

### 1. Components

- **User Management**: Handle user registrations, roles, and profiles.
- **Agent Management**: Register, monitor, and manage AI agents.
- **Permissions Engine**: Define and enforce permissions based on roles.
- **Multiplayer Interface**: Allows multiple human users to intervene, guide, or collaborate on tasks being executed by AI agents.

### 2. Features

- **Role-Based Access Control (RBAC)**:
  - Users can be assigned predefined roles (e.g., Admin, Agent Supervisor, Collaborator).
  - Each role has specific permissions associated with it, defining what actions can be performed on AI agents or tasks.

- **Dynamic Permissions**:
  - Create custom roles with specific permissions.
  - Permissions granularity: from broad (e.g., view all tasks) to specific (e.g., modify parameters of a particular agent).

- **Multiplayer Collaboration**:
  - Multiple users can join a task in real-time.
  - Collaborators can provide real-time feedback or guidance to AI agents.
  - A voting system for decision-making when human intervention is required.

- **Agent Supervision**:
  - Monitor agent actions in real-time.
  - Intervene, if necessary, to guide agent actions based on permissions.

- **Audit Trail**:
  - All actions, whether performed by humans or AI agents, are logged.
  - Review historical actions, decisions, and interventions for accountability and improvement.
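A minimal sketch of the RBAC check at the heart of such a permissions engine: the role names follow the examples above, but the permission strings and function names are invented for illustration and are not the SMAPS API:

```python
# Role → permission table; permission names are illustrative only
ROLE_PERMISSIONS = {
    "admin": {"view_tasks", "modify_agent", "assign_roles"},
    "agent_supervisor": {"view_tasks", "modify_agent"},
    "collaborator": {"view_tasks"},
}


def is_allowed(role: str, permission: str) -> bool:
    """Return True if the role grants the requested permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())


print(is_allowed("collaborator", "modify_agent"))  # → False
```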
|
||||
|
||||
### 3. Security
|
||||
|
||||
- **Authentication**: Secure login mechanisms with multi-factor authentication options.
|
||||
- **Authorization**: Ensure users and agents can only perform actions they are permitted to.
|
||||
- **Data Encryption**: All data, whether at rest or in transit, is encrypted using industry-standard protocols.
|
||||
|
||||
### 4. Integration
|
||||
|
||||
- **APIs**: Expose APIs for integrating SMAPS with other systems or for extending its capabilities.
|
||||
- **SDK**: Provide software development kits for popular programming languages to facilitate integration and extension.
|
||||
|
||||
## Documentation Description
|
||||
Swarms Multi-Agent Permissions System (SMAPS) offers a sophisticated permissions management mechanism tailored for multi-agent AI frameworks. It combines the robustness of Amazon IAM-like permissions with a unique "multiplayer" feature, allowing multiple humans to collaboratively guide AI agents in real-time. This ensures not only that tasks are executed efficiently but also that they uphold the highest standards of accuracy and ethics. With SMAPS, businesses can harness the power of swarms with confidence, knowing that they have full control and transparency over their AI operations.
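The RBAC model described above reduces to a role-to-permission lookup. The sketch below is illustrative only: the role and action names are assumptions, not part of any published SMAPS API.

```python
# Minimal RBAC permission check (illustrative names only; SMAPS does not
# prescribe this exact API).
ROLE_PERMISSIONS = {
    "admin": {"view_tasks", "modify_agent", "manage_users"},
    "agent_supervisor": {"view_tasks", "modify_agent"},
    "collaborator": {"view_tasks"},
}

def is_allowed(role: str, action: str) -> bool:
    """Return True if the given role grants the requested action."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

Dynamic permissions would extend this by letting administrators add new entries to the role table at runtime rather than hard-coding them.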
# AgentArchive Documentation

## Swarms Multi-Agent Framework

**AgentArchive is an advanced feature crafted to archive, bookmark, and harness the transcripts of agent runs. It promotes the storing and leveraging of successful agent interactions, offering a powerful means for users to derive "recipes" for future agents. Furthermore, with its public archive feature, users can contribute to and benefit from the collective wisdom of the community.**

---

## Overview:

AgentArchive empowers users to:
1. Preserve complete transcripts of agent instances.
2. Bookmark and annotate significant runs.
3. Categorize runs using various tags.
4. Transform successful runs into actionable "recipes".
5. Publish and access a shared knowledge base via a public archive.

---

## Features:

### 1. Archiving:

- **Save Transcripts**: Retain the full narrative of an agent's interaction and choices.
- **Searchable Database**: Dive into archives using specific keywords, timestamps, or tags.

### 2. Bookmarking:

- **Highlight Essential Runs**: Designate specific agent runs for future reference.
- **Annotations**: Attach notes or remarks to bookmarked runs for clearer understanding.

### 3. Tagging:

Organize and classify agent runs via:
- **Prompt**: The originating instruction that triggered the agent run.
- **Tasks**: Distinct tasks or operations executed by the agent.
- **Model**: The specific AI model or iteration used during the interaction.
- **Temperature (Temp)**: The set randomness or innovation level for the agent.

### 4. Recipe Generation:

- **Standardization**: Convert successful run transcripts into replicable "recipes".
- **Guidance**: Offer subsequent agents a structured approach, rooted in prior successes.
- **Evolution**: Periodically refine recipes based on newer, enhanced runs.

### 5. Public Archive & Sharing:

- **Publish Successful Runs**: Users can choose to share their successful agent runs.
- **Collaborative Knowledge Base**: Access a shared repository of successful agent interactions from the community.
- **Ratings & Reviews**: Users can rate and review shared runs, highlighting particularly effective "recipes."
- **Privacy & Redaction**: Ensure that any sensitive information is automatically redacted before publishing.

---

## Benefits:

1. **Efficiency**: Revisit past agent activities to inform and guide future decisions.
2. **Consistency**: Guarantee a uniform approach to recurring challenges, leading to predictable and trustworthy outcomes.
3. **Collaborative Learning**: Tap into a reservoir of shared experiences, fostering community-driven learning and growth.
4. **Transparency**: By sharing successful runs, users can build trust and contribute to the broader community's success.

---

## Usage:

1. **Access AgentArchive**: Navigate to the dedicated section within the Swarms Multi-Agent Framework dashboard.
2. **Search, Filter & Organize**: Utilize the search bar and tagging system for precise retrieval.
3. **Bookmark, Annotate & Share**: Pin important runs, add notes, and consider sharing with the broader community.
4. **Engage with Public Archive**: Explore, rate, and apply shared knowledge to enhance agent performance.

---

With AgentArchive, users not only benefit from their past interactions but can also leverage the collective expertise of the Swarms community, ensuring continuous improvement and shared success.
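The tagging scheme above (prompt, tasks, model, temperature) can be sketched as a simple record type. The class and field names here are hypothetical illustrations, not an official AgentArchive API.

```python
from dataclasses import dataclass, field

# Hypothetical record for an archived agent run; fields mirror the tagging
# scheme in this document but are not part of the real framework.
@dataclass
class ArchivedRun:
    prompt: str
    tasks: list
    model: str
    temperature: float
    tags: set = field(default_factory=set)
    bookmarked: bool = False
    notes: str = ""

def matches(run: ArchivedRun, tag: str) -> bool:
    """Simple tag filter of the kind a searchable archive would apply."""
    return tag in run.tags
```

A real archive would back this with a searchable database and add redaction before any run is published.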
# Swarms Multi-Agent Framework Documentation

## Table of Contents
- Agent Failure Protocol
- Swarm Failure Protocol

---

## Agent Failure Protocol

### 1. Overview
Agent failures may arise from bugs, unexpected inputs, or external system changes. This protocol aims to diagnose, address, and prevent such failures.

### 2. Root Cause Analysis
- **Data Collection**: Record the task, inputs, and environmental variables present during the failure.
- **Diagnostic Tests**: Run the agent in a controlled environment replicating the failure scenario.
- **Error Logging**: Analyze error logs to identify patterns or anomalies.

### 3. Solution Brainstorming
- **Code Review**: Examine the code sections linked to the failure for bugs or inefficiencies.
- **External Dependencies**: Check if external systems or data sources have changed.
- **Algorithmic Analysis**: Evaluate if the agent's algorithms were overwhelmed or faced an unhandled scenario.

### 4. Risk Analysis & Solution Ranking
- Assess the potential risks associated with each solution.
- Rank solutions based on:
  - Implementation complexity
  - Potential negative side effects
  - Resource requirements
- Assign a success probability score (0.0 to 1.0) based on the above factors.

### 5. Solution Implementation
- Implement the top 3 solutions sequentially, starting with the highest success probability.
- If all three solutions fail, trigger the "Human-in-the-Loop" protocol.

---

## Swarm Failure Protocol

### 1. Overview
Swarm failures are more complex, often resulting from inter-agent conflicts, systemic bugs, or large-scale environmental changes. This protocol delves deep into such failures to ensure the swarm operates optimally.

### 2. Root Cause Analysis
- **Inter-Agent Analysis**: Examine if agents were in conflict or if there was a breakdown in collaboration.
- **System Health Checks**: Ensure all system components supporting the swarm are operational.
- **Environment Analysis**: Investigate if external factors or systems impacted the swarm's operation.

### 3. Solution Brainstorming
- **Collaboration Protocols**: Review and refine how agents collaborate.
- **Resource Allocation**: Check if the swarm had adequate computational and memory resources.
- **Feedback Loops**: Ensure agents are effectively learning from each other.

### 4. Risk Analysis & Solution Ranking
- Assess the potential systemic risks posed by each solution.
- Rank solutions considering:
  - Scalability implications
  - Impact on individual agents
  - Overall swarm performance potential
- Assign a success probability score (0.0 to 1.0) based on the above considerations.

### 5. Solution Implementation
- Implement the top 3 solutions sequentially, prioritizing the one with the highest success probability.
- If all three solutions are unsuccessful, invoke the "Human-in-the-Loop" protocol for expert intervention.

---

By following these protocols, the Swarms Multi-Agent Framework can systematically address and prevent failures, ensuring a high degree of reliability and efficiency.
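Both protocols share the same ranking-and-retry loop in steps 4 and 5: score candidate solutions, attempt the top three in order, and fall back to human intervention. A minimal sketch, with function and data shapes assumed for illustration:

```python
# Rank candidate solutions by success probability and attempt the top three
# sequentially, falling back to the "Human-in-the-Loop" protocol.
def resolve(solutions, attempt):
    """solutions: list of (name, success_probability) pairs.
    attempt: callable that tries a solution and returns True on success."""
    ranked = sorted(solutions, key=lambda s: s[1], reverse=True)
    for name, _score in ranked[:3]:
        if attempt(name):
            return name
    return "human-in-the-loop"
```

In practice the probability scores would come from the risk analysis in step 4 rather than being supplied directly.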
# Human-in-the-Loop Task Handling Protocol

## Overview

The Swarms Multi-Agent Framework recognizes the invaluable contributions humans can make, especially in complex scenarios where nuanced judgment is required. The "Human-in-the-Loop Task Handling Protocol" ensures that when agents encounter challenges they cannot handle autonomously, the most capable human collaborator is engaged to provide guidance, based on their skills and expertise.

## Protocol Steps

### 1. Task Initiation & Analysis

- When a task is initiated, agents first analyze the task's requirements.
- The system maintains an understanding of each task's complexity, requirements, and potential challenges.

### 2. Automated Resolution Attempt

- Agents first attempt to resolve the task autonomously using their algorithms and data.
- If the task can be completed without issues, it progresses normally.

### 3. Challenge Detection

- If agents encounter challenges or uncertainties they cannot resolve, the "Human-in-the-Loop" protocol is triggered.

### 4. Human Collaborator Identification

- The system maintains a dynamic profile of each human collaborator, cataloging their skills, expertise, and past performance on related tasks.
- Using this profile data, the system identifies the most capable human collaborator to assist with the current challenge.

### 5. Real-time Collaboration

- The identified human collaborator is notified and provided with all the relevant information about the task and the challenge.
- Collaborators can provide guidance, make decisions, or even take over specific portions of the task.

### 6. Task Completion & Feedback Loop

- Once the challenge is resolved, agents continue with the task until completion.
- Feedback from human collaborators is used to update agent algorithms, ensuring continuous learning and improvement.

## Best Practices

1. **Maintain Up-to-date Human Profiles**: Ensure that the skillsets, expertise, and performance metrics of human collaborators are updated regularly.
2. **Limit Interruptions**: Implement mechanisms to limit the frequency of human interventions, ensuring collaborators are not overwhelmed with requests.
3. **Provide Context**: When seeking human intervention, provide collaborators with comprehensive context to ensure they can make informed decisions.
4. **Continuous Training**: Regularly update and train agents based on feedback from human collaborators.
5. **Measure & Optimize**: Monitor the efficiency of the "Human-in-the-Loop" protocol, aiming to reduce the frequency of interventions while maximizing the value of each intervention.
6. **Skill Enhancement**: Encourage human collaborators to continuously enhance their skills, ensuring that the collective expertise of the group grows over time.

## Conclusion

The integration of human expertise with AI capabilities is a cornerstone of the Swarms Multi-Agent Framework. This "Human-in-the-Loop Task Handling Protocol" ensures that tasks are executed efficiently, leveraging the best of both human judgment and AI automation. Through collaborative synergy, we can tackle challenges more effectively and drive innovation.
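Step 4 of the protocol, matching a challenge to the most capable collaborator, can be sketched as a skill-overlap score. The profile structure here is an assumption for illustration, not part of the framework.

```python
# Score each human profile by how many of the task's required skills it
# covers, and pick the best match (or nobody, if no skills overlap).
def best_collaborator(profiles, required_skills):
    """profiles: {name: set_of_skills}. Returns the name with the largest
    skill overlap, or None if no collaborator matches at all."""
    scored = {
        name: len(skills & required_skills) for name, skills in profiles.items()
    }
    name, score = max(scored.items(), key=lambda kv: kv[1])
    return name if score > 0 else None
```

A production system would also weight past performance on related tasks and current availability, per the "dynamic profile" described above.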
# Secure Communication Protocols

## Overview

The Swarms Multi-Agent Framework prioritizes the security and integrity of data, especially personal and sensitive information. Our Secure Communication Protocols ensure that all communications between agents are encrypted, authenticated, and resistant to tampering or unauthorized access.

## Features

### 1. End-to-End Encryption

- All inter-agent communications are encrypted using state-of-the-art cryptographic algorithms.
- This ensures that data remains confidential and can only be read by the intended recipient agent.

### 2. Authentication

- Before initiating communication, agents authenticate each other using digital certificates.
- This prevents impersonation attacks and ensures that agents are communicating with legitimate counterparts.

### 3. Forward Secrecy

- Key exchange mechanisms employ forward secrecy, meaning that even if a malicious actor gains access to an encryption key, they cannot decrypt past communications.

### 4. Data Integrity

- Cryptographic hashes ensure that the data has not been altered in transit.
- Any discrepancies in data integrity result in the communication being rejected.

### 5. Zero-Knowledge Protocols

- When handling especially sensitive data, agents use zero-knowledge proofs to validate information without revealing the actual data.

### 6. Periodic Key Rotation

- To mitigate the risk of long-term key exposure, encryption keys are periodically rotated.
- Old keys are securely discarded, ensuring that even if they are compromised, they cannot be used to decrypt communications.

## Best Practices for Handling Personal and Sensitive Information

1. **Data Minimization**: Agents should only request and process the minimum amount of personal data necessary for the task.
2. **Anonymization**: Whenever possible, agents should anonymize personal data, stripping away identifying details.
3. **Data Retention Policies**: Personal data should be retained only for the period necessary to complete the task, after which it should be securely deleted.
4. **Access Controls**: Ensure that only authorized agents have access to personal and sensitive information. Implement strict access control mechanisms.
5. **Regular Audits**: Conduct regular security audits to ensure compliance with privacy regulations and to detect any potential vulnerabilities.
6. **Training**: All agents should be regularly updated and trained on the latest security protocols and best practices for handling sensitive data.

## Conclusion

Secure communication is paramount in the Swarms Multi-Agent Framework, especially when dealing with personal and sensitive information. Adhering to these protocols and best practices ensures the safety, privacy, and trust of all stakeholders involved.
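The data-integrity check described above can be sketched with Python's standard `hmac` module: the sender attaches a SHA-256 digest and the receiver rejects any message whose digest does not match. Key distribution is deliberately simplified here; this is a shape sketch, not the framework's actual wire protocol.

```python
import hashlib
import hmac

def sign(key: bytes, message: bytes) -> str:
    """Compute an HMAC-SHA256 digest the sender attaches to the message."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(key: bytes, message: bytes, digest: str) -> bool:
    """Recompute the digest and compare; reject on any mismatch.
    compare_digest avoids timing side channels in the comparison."""
    return hmac.compare_digest(sign(key, message), digest)
```

In a full implementation the shared key would be negotiated with a forward-secret key exchange and rotated periodically, as the features above require.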
# Promptimizer Documentation

## Swarms Multi-Agent Framework

**The Promptimizer Tool stands as a cornerstone innovation within the Swarms Multi-Agent Framework, meticulously engineered to refine and supercharge prompts across diverse categories. Capitalizing on extensive libraries of best-practice prompting techniques, this tool ensures your prompts are razor-sharp, tailored, and primed for optimal outcomes.**

---

## Overview:

The Promptimizer Tool is crafted to:
1. Rigorously analyze and elevate the quality of provided prompts.
2. Furnish best-in-class recommendations rooted in proven prompting strategies.
3. Serve a spectrum of categories, from technical operations to expansive creative ventures.

---

## Core Features:

### 1. Deep Prompt Analysis:

- **Clarity Matrix**: A proprietary algorithm assessing prompt clarity, removing ambiguities and sharpening focus.
- **Efficiency Gauge**: Evaluates the prompt's structure to ensure swift and precise desired results.

### 2. Adaptive Recommendations:

- **Technique Engine**: Suggests techniques aligned with the gold standard for the chosen category.
- **Exemplar Database**: Offers an extensive array of high-quality prompt examples for comparison and inspiration.

### 3. Versatile Category Framework:

- **Tech Suite**: Optimizes prompts for technical tasks, ensuring actionable clarity.
- **Narrative Craft**: Hones prompts to elicit vivid and coherent stories.
- **Visual Visionary**: Shapes prompts for precise and dynamic visual generation.
- **Sonic Sculptor**: Orchestrates prompts for audio creation, tuning into desired tones and moods.

### 4. Machine Learning Integration:

- **Feedback Dynamo**: Harnesses user feedback, continually refining the tool's recommendation capabilities.
- **Live Library Updates**: Periodic syncing with the latest in prompting techniques, ensuring the tool remains at the cutting edge.

### 5. Collaboration & Sharing:

- **TeamSync**: Allows teams to collaborate on prompt optimization in real time.
- **ShareSpace**: Share and access a community-driven repository of optimized prompts, fostering collective growth.

---

## Benefits:

1. **Precision Engineering**: Harness the power of refined prompts, ensuring desired outcomes are achieved with surgical precision.
2. **Learning Hub**: Immerse in a tool that not only refines but educates, enhancing the user's prompting acumen.
3. **Versatile Mastery**: Navigate seamlessly across categories, ensuring top-tier prompt quality regardless of the domain.
4. **Community-driven Excellence**: Dive into a world of shared knowledge, elevating the collective expertise of the Swarms community.

---

## Usage Workflow:

1. **Launch the Promptimizer**: Access the tool directly from the Swarms Multi-Agent Framework dashboard.
2. **Prompt Entry**: Input the initial prompt for refinement.
3. **Category Selection**: Pinpoint the desired category for specialized optimization.
4. **Receive & Review**: Engage with the tool's recommendations, comparing original and optimized prompts.
5. **Collaborate, Implement & Share**: Work in tandem with team members, deploy the refined prompt, and consider contributing to the community repository.

---

By integrating the Promptimizer Tool into their workflow, Swarms users stand poised to redefine the boundaries of what's possible, turning each prompt into a beacon of excellence and efficiency.
# Shorthand Communication System

## Swarms Multi-Agent Framework

**The Enhanced Shorthand Communication System is designed to streamline agent-agent communication within the Swarms Multi-Agent Framework. This system employs concise alphanumeric notations to relay task-specific details to agents efficiently.**

---

## Format:

The shorthand format is structured as `[AgentType]-[TaskLayer].[TaskNumber]-[Priority]-[Status]`.

---

## Components:

### 1. Agent Type:
- Denotes the specific agent role, such as:
  * `C`: Code agent
  * `D`: Data processing agent
  * `M`: Monitoring agent
  * `N`: Network agent
  * `R`: Resource management agent
  * `I`: Interface agent
  * `S`: Security agent

### 2. Task Layer & Number:
- Represents the task's category.
  * Example: `1.8` signifies task layer 1, task number 8.

### 3. Priority:
- Indicates task urgency.
  * `H`: High
  * `M`: Medium
  * `L`: Low

### 4. Status:
- Gives a snapshot of the task's progress.
  * `I`: Initialized
  * `P`: In-progress
  * `C`: Completed
  * `F`: Failed
  * `W`: Waiting

---

## Extended Features:

### 1. Error Codes (for failures):
- `E01`: Resource issues
- `E02`: Data inconsistency
- `E03`: Dependency malfunction
- ... and more as needed.

### 2. Collaboration Flag:
- `+`: Denotes required collaboration.

---

## Example Codes:

- `C-1.8-H-I`: A high-priority coding task that's initializing.
- `D-2.3-M-P`: A medium-priority data task currently in-progress.
- `M-3.5-L-P+`: A low-priority monitoring task in progress needing collaboration.

---

By leveraging the Enhanced Shorthand Communication System, the Swarms Multi-Agent Framework can ensure swift interactions, concise communications, and effective task management.
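The format above is regular enough to parse mechanically. A minimal sketch, assuming the agent-type, priority, and status alphabets listed in this document (the function name and return shape are illustrative, not part of the framework):

```python
import re

# [AgentType]-[TaskLayer].[TaskNumber]-[Priority]-[Status] plus an
# optional trailing "+" collaboration flag.
PATTERN = re.compile(
    r"^(?P<agent>[CDMNRIS])-(?P<layer>\d+)\.(?P<number>\d+)"
    r"-(?P<priority>[HML])-(?P<status>[IPCFW])(?P<collab>\+?)$"
)

def parse(code: str) -> dict:
    """Split a shorthand code into its named components."""
    m = PATTERN.match(code)
    if not m:
        raise ValueError(f"invalid shorthand code: {code}")
    d = m.groupdict()
    d["collab"] = d["collab"] == "+"  # normalize the flag to a boolean
    return d
```

Error codes such as `E01` could be carried as an additional optional segment once their placement in the format is standardized.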
# Contributing to Swarms

Thank you for your interest in contributing to Swarms! We welcome contributions from the community to help improve usability and readability. By contributing, you can be a part of creating a dynamic and interactive AI system.

To get started, please follow the guidelines below.

## Join the Swarms Community

Join the Swarms community on Discord to connect with other contributors, coordinate work, and receive support.

- [Join the Swarms Discord Server](https://discord.gg/qUtxnK2NMf)

## Taking on Tasks

We have a growing list of tasks and issues that you can contribute to. To get started, follow these steps:

1. Visit the [Swarms GitHub repository](https://github.com/kyegomez/swarms) and browse through the existing issues.

2. Find an issue that interests you and make a comment stating that you would like to work on it. Include a brief description of how you plan to solve the problem and any questions you may have.

3. Once a project coordinator assigns the issue to you, you can start working on it.

If you come across an issue that is unclear but still interests you, please post in the Discord server mentioned above. Someone from the community will be able to help clarify the issue in more detail.

We also welcome contributions to documentation, such as updating markdown files, adding docstrings, creating system architecture diagrams, and other related tasks.

## Submitting Your Work

To contribute your changes to Swarms, please follow these steps:

1. Fork the Swarms repository to your GitHub account. You can do this by clicking on the "Fork" button on the repository page.

2. Clone the forked repository to your local machine using the `git clone` command.

3. Before making any changes, sync your forked repository with the original repository to keep it up to date. You can do this by following the instructions [here](https://docs.github.com/en/github/collaborating-with-pull-requests/syncing-a-fork).

4. Create a new branch for your changes. The branch should have a descriptive name that reflects the task or issue you are working on.

5. Make your changes in the branch, keeping each change small and focused so that it only affects a few files.

6. Run any necessary formatting or linting tools to ensure that your changes adhere to the project's coding standards.

7. Once your changes are ready, commit them to your branch with descriptive commit messages.

8. Push the branch to your forked repository.

9. Create a pull request (PR) from your branch to the main Swarms repository. Provide a clear and concise description of your changes in the PR.

10. Request a review from the project maintainers. They will review your changes, provide feedback, and suggest any necessary improvements.

11. Make any required updates or address any feedback provided during the review process.

12. Once your changes have been reviewed and approved, they will be merged into the main branch of the Swarms repository.

13. Congratulations! You have successfully contributed to Swarms.

Please note that during the review process, you may be asked to make changes or address certain issues. It is important to engage in open and constructive communication with the project maintainers to ensure the quality of your contributions.

## Developer Setup

If you are interested in setting up the Swarms development environment, please follow the instructions provided in the [developer setup guide](docs/developer-setup.md). This guide provides an overview of the different tools and technologies used in the project.

## Optimization Priorities

To continuously improve Swarms, we prioritize the following design objectives:

1. **Usability**: Increase the ease of use and user-friendliness of the swarm system to facilitate adoption and interaction with basic input.

2. **Reliability**: Improve the swarm's ability to obtain the desired output even with basic and under-specified input.

3. **Speed**: Reduce the time it takes for the swarm to accomplish tasks by improving the communication layer, critiquing, and self-alignment with meta prompting.

4. **Scalability**: Ensure that the system is asynchronous, concurrent, and self-healing to support scalability.

Our goal is to continuously improve Swarms by following this roadmap while also being adaptable to new needs and opportunities as they arise.

## Join the Agora Community

Swarms is brought to you by Agora, the open-source AI research organization. Join the Agora community to connect with other researchers and developers working on AI projects.

- [Join the Agora Discord Server](https://discord.gg/qUtxnK2NMf)

Thank you for your contributions and for being a part of the Swarms and Agora community! Together, we can advance Humanity through the power of AI.
# Swarms Documentation

## ClassName

Swarms

## Purpose

The Swarms module provides a powerful framework for creating and managing swarms of autonomous agents to accomplish complex tasks. It consists of the `WorkerNode` and `BossNode` classes, along with the `LLM` utility class, which allow you to easily set up and run a swarm of agents to tackle any objective. The module is highly configurable and extensible, providing flexibility to accommodate various use cases.

## Usage example

```python
from swarms import Swarms

api_key = "your_openai_api_key"

# Initialize Swarms with your API key
swarm = Swarms(api_key=api_key)

# Define an objective
objective = "Please make a web GUI for using HTTP API server..."

# Run Swarms
result = swarm.run(objective)

print(result)
```

## Constructor

```python
def __init__(self, openai_api_key)
```

- `openai_api_key` (required): The API key for OpenAI's models.

## Methods

### run(objective)

Runs the swarm with the given objective by initializing the worker and boss nodes.

- `objective` (required): The objective or task to be accomplished by the swarm.

Returns the result of the swarm execution.

## WorkerNode

The `WorkerNode` class represents an autonomous agent instance that functions as a worker to accomplish complex tasks. It has the ability to search the internet, process and generate images, text, audio, and more.

### Constructor

```python
def __init__(self, llm, tools, vectorstore)
```

- `llm` (required): The language model used by the worker node.
- `tools` (required): A list of tools available to the worker node.
- `vectorstore` (required): The vector store used by the worker node.

### Methods

- `create_agent(ai_name, ai_role, human_in_the_loop, search_kwargs)`: Creates an agent within the worker node.
- `add_tool(tool)`: Adds a tool to the worker node.
- `run(prompt)`: Runs the worker node to complete a task specified by the prompt.

### Example Usage

```python
from swarms import worker_node

# Your OpenAI API key
api_key = "your_openai_api_key"

# Initialize a WorkerNode with your API key
node = worker_node(api_key)

# Define an objective
objective = "Please make a web GUI for using HTTP API server..."

# Run the task
task = node.run(objective)

print(task)
```

## BossNode

The `BossNode` class represents an agent responsible for creating and managing tasks for the worker agent(s). It interacts with the worker node(s) to delegate tasks and monitor their progress.

### Constructor

```python
def __init__(self, llm, vectorstore, agent_executor, max_iterations)
```

- `llm` (required): The language model used by the boss node.
- `vectorstore` (required): The vector store used by the boss node.
- `agent_executor` (required): The agent executor used to execute tasks.
- `max_iterations` (required): The maximum number of iterations for task execution.

### Methods

- `create_task(objective)`: Creates a task with the given objective.
- `execute_task(task)`: Executes the given task by interacting with the worker agent(s).
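The delegation pattern these two methods implement can be sketched with self-contained stand-in classes. This mirrors the `create_task`/`execute_task` flow described above but is not the real swarms API; a real boss would call a `WorkerNode` and evaluate result quality.

```python
# Stand-in worker that "completes" any prompt.
class Worker:
    def run(self, prompt):
        return f"result for: {prompt}"

# Stand-in boss: creates a task, hands it to the worker, and gives up
# after max_iterations attempts.
class Boss:
    def __init__(self, worker, max_iterations=5):
        self.worker = worker
        self.max_iterations = max_iterations

    def create_task(self, objective):
        return {"objective": objective, "attempts": 0}

    def execute_task(self, task):
        while task["attempts"] < self.max_iterations:
            task["attempts"] += 1
            result = self.worker.run(task["objective"])
            if result:  # a real boss would judge result quality here
                return result
        return None
```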

## LLM

The `LLM` class is a utility class that provides an interface to different language models (LLMs) such as OpenAI's ChatGPT and Hugging Face models. It is used to initialize the language model for the worker and boss nodes.

### Constructor

```python
def __init__(self, openai_api_key=None, hf_repo_id=None, hf_api_token=None, model_kwargs=None)
```

- `openai_api_key` (optional): The API key for OpenAI's models.
- `hf_repo_id` (optional): The repository ID for the Hugging Face model.
- `hf_api_token` (optional): The API token for the Hugging Face model.
- `model_kwargs` (optional): Additional keyword arguments to pass to the language model.

### Methods

- `run(prompt)`: Runs the language model with the given prompt and returns the generated response.
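The backend selection this constructor implies can be sketched as follows. The stand-in callables replace real OpenAI and Hugging Face clients, so treat this as a shape sketch of the dispatch logic, not the actual class.

```python
# Pick a backend based on which credentials were supplied, mirroring the
# optional openai_api_key / hf_repo_id / hf_api_token parameters above.
class SimpleLLM:
    def __init__(self, openai_api_key=None, hf_repo_id=None, hf_api_token=None):
        if openai_api_key:
            self.backend = lambda prompt: f"openai: {prompt}"
        elif hf_repo_id and hf_api_token:
            self.backend = lambda prompt: f"hf({hf_repo_id}): {prompt}"
        else:
            raise ValueError("provide OpenAI or Hugging Face credentials")

    def run(self, prompt: str) -> str:
        return self.backend(prompt)
```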
|
||||
|
||||
## Configuration
|
||||
|
||||
The Swarms module can be configured by modifying the following parameters:
|
||||
|
||||
### WorkerNode
|
||||
|
||||
- `llm_class`: The language model class to use for the worker node (default: `ChatOpenAI`).
|
||||
- `temperature`: The temperature parameter for the language model (default: `0.5`).
|
||||
|
||||
### BossNode
|
||||
|
||||
- `llm_class`: The language model class to use for the boss node (default: `OpenAI`).
|
||||
- `max_iterations`: The maximum number of iterations for task execution (default: `5`).
|
||||
|
||||
### LLM
|
||||
|
||||
- `openai_api_key`: The API key for OpenAI's models.
|
||||
- `hf_repo_id`: The repository ID for the Hugging Face model.
|
||||
- `hf_api_token`: The API token for the Hugging Face model.
|
||||
- `model_kwargs`: Additional keyword arguments to pass to the language model.
|
||||
|
||||
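As a quick illustration of how these defaults combine with user overrides, here is a hedged sketch; the dictionaries and helper below are hypothetical, since the real classes accept these values directly as constructor arguments.

```python
# Hypothetical defaults mirroring the tables above; the real WorkerNode and
# BossNode take these as constructor arguments rather than dicts.
WORKER_DEFAULTS = {"llm_class": "ChatOpenAI", "temperature": 0.5}
BOSS_DEFAULTS = {"llm_class": "OpenAI", "max_iterations": 5}


def with_overrides(defaults, **overrides):
    """Return a copy of the defaults with any user overrides applied."""
    config = dict(defaults)
    config.update(overrides)
    return config


print(with_overrides(WORKER_DEFAULTS, temperature=0.2))
# {'llm_class': 'ChatOpenAI', 'temperature': 0.2}
```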
## Tool Configuration

The Swarms module supports various tools that can be added to the worker node for performing specific tasks. The following tools are available:

- `DuckDuckGoSearchRun`: A tool for performing web searches.
- `WriteFileTool`: A tool for writing files.
- `ReadFileTool`: A tool for reading files.
- `process_csv`: A tool for processing CSV files.
- `WebpageQATool`: A tool for performing question answering using web pages.

Additional tools can be added by extending the functionality of the `Tool` class.
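For example, a custom tool might look like the following. This is a hedged sketch: the `Tool` base class shown here is a minimal stand-in, and the real swarms `Tool` interface may expect different constructor arguments or method names.

```python
# Minimal stand-in for the Tool base class; the real interface may differ.
class Tool:
    def __init__(self, name, description):
        self.name = name
        self.description = description

    def run(self, query):
        raise NotImplementedError


class WordCountTool(Tool):
    """A toy custom tool that counts the words in its input."""

    def run(self, query):
        return len(query.split())


tool = WordCountTool("word_count", "Counts the words in a string.")
print(tool.run("swarms of autonomous agents"))  # 4
```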
## Advanced Usage

For more advanced usage, you can customize the tools and parameters according to your specific requirements. The Swarms module provides flexibility and extensibility to accommodate various use cases.

For example, you can add your own custom tools by extending the `Tool` class and adding them to the worker node. You can also modify the prompt templates used by the boss node to customize the interaction between the boss and worker agents.

Please refer to the source code and documentation of the Swarms module for more details and examples.

## Conclusion

The Swarms module provides a powerful framework for creating and managing swarms of autonomous agents to accomplish complex tasks. With the `WorkerNode` and `BossNode` classes, along with the `LLM` utility class, you can easily set up and run a swarm of agents to tackle any objective. The module is highly configurable and extensible, allowing you to tailor it to your specific needs.
## LLM

### Purpose

The `LLM` class provides an interface to different language models (LLMs) such as OpenAI's ChatGPT and Hugging Face models. It allows you to initialize and run a language model with a given prompt and obtain the generated response.

### Systems Understanding

The `LLM` class takes an OpenAI API key or a Hugging Face repository ID and API token as input. It uses these credentials to initialize the language model, either from OpenAI's models or from a specific Hugging Face repository. The language model can then be run with a prompt, and the generated response is returned.
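The credential-based selection described above can be sketched as follows. This is assumed logic for illustration, not the actual swarms implementation.

```python
# Assumed backend-selection logic: prefer OpenAI when an API key is given,
# otherwise fall back to Hugging Face when both repo ID and token are present.
def pick_backend(openai_api_key=None, hf_repo_id=None, hf_api_token=None):
    if openai_api_key:
        return "openai"
    if hf_repo_id and hf_api_token:
        return "huggingface"
    raise ValueError(
        "Provide either an OpenAI API key or a Hugging Face repo ID and token."
    )


print(pick_backend(openai_api_key="your_openai_key"))  # openai
print(pick_backend(hf_repo_id="google/flan-t5-xl",
                   hf_api_token="your_hf_api_token"))  # huggingface
```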
### Usage Example

```python
from swarms import LLM

# Create an instance of LLM with OpenAI API key
llm_instance = LLM(openai_api_key="your_openai_key")

# Run the language model with a prompt
result = llm_instance.run("Who won the FIFA World Cup in 1998?")
print(result)

# Create an instance of LLM with Hugging Face repository ID and API token
llm_instance = LLM(hf_repo_id="google/flan-t5-xl", hf_api_token="your_hf_api_token")

# Run the language model with a prompt
result = llm_instance.run("Who won the FIFA World Cup in 1998?")
print(result)
```

### Constructor

```python
def __init__(self, openai_api_key: Optional[str] = None,
             hf_repo_id: Optional[str] = None,
             hf_api_token: Optional[str] = None,
             model_kwargs: Optional[dict] = None)
```

- `openai_api_key` (optional): The API key for OpenAI's models.
- `hf_repo_id` (optional): The repository ID for the Hugging Face model.
- `hf_api_token` (optional): The API token for the Hugging Face model.
- `model_kwargs` (optional): Additional keyword arguments to pass to the language model.

### Methods

- `run(prompt: str) -> str`: Runs the language model with the given prompt and returns the generated response.

### Args

- `prompt` (str): The prompt to be passed to the language model.

### Returns

- `result` (str): The generated response from the language model.

## Conclusion

The `LLM` class provides a convenient way to initialize and run different language models using either OpenAI's API or Hugging Face models. By providing the necessary credentials and a prompt, you can obtain the generated response from the language model.
# `GooglePalm` class:

### Example 1: Using Dictionaries as Messages

```python
from google_palm import GooglePalm

# Initialize the GooglePalm instance
gp = GooglePalm(
    client=your_client,
    model_name="models/chat-bison-001",
    temperature=0.7,
    top_p=0.9,
    top_k=10,
    n=5
)

# Create some messages
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who won the world series in 2020?"},
]

# Generate a response
response = gp.generate(messages)

# Print the generated response
print(response)
```

### Example 2: Using BaseMessage and Its Subclasses as Messages

```python
from google_palm import GooglePalm
from langchain.schema.messages import SystemMessage, HumanMessage

# Initialize the GooglePalm instance
gp = GooglePalm(
    client=your_client,
    model_name="models/chat-bison-001",
    temperature=0.7,
    top_p=0.9,
    top_k=10,
    n=5
)

# Create some messages
messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="Who won the world series in 2020?"),
]

# Generate a response
response = gp.generate(messages)

# Print the generated response
print(response)
```

### Example 3: Using GooglePalm with an Asynchronous Function

```python
import asyncio
from google_palm import GooglePalm
from langchain.schema.messages import SystemMessage, HumanMessage

# Initialize the GooglePalm instance
gp = GooglePalm(
    client=your_client,
    model_name="models/chat-bison-001",
    temperature=0.7,
    top_p=0.9,
    top_k=10,
    n=5
)

# Create some messages
messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="Who won the world series in 2020?"),
]

# Define an asynchronous function
async def generate_response():
    response = await gp._agenerate(messages)
    print(response)

# Run the asynchronous function
asyncio.run(generate_response())
```

Remember to replace `your_client` with an actual instance of your client. Also, ensure the `model_name` is the correct name of the model that you want to use.

The `temperature`, `top_p`, `top_k`, and `n` parameters control the randomness and diversity of the generated responses. You can adjust these parameters based on your application's requirements.
## `CodeInterpreter`:

```python
tool = CodeInterpreter("Code Interpreter", "A tool to interpret code and generate useful outputs.")
tool.run("Plot the bitcoin chart of 2023 YTD")

# Or with file inputs
tool.run("Analyze this dataset and plot something interesting about it.", ["examples/assets/iris.csv"])
```

To use the asynchronous version, simply replace `run` with `arun` and ensure your calling code is in an async context:

```python
import asyncio

tool = CodeInterpreter("Code Interpreter", "A tool to interpret code and generate useful outputs.")
asyncio.run(tool.arun("Plot the bitcoin chart of 2023 YTD"))

# Or with file inputs
asyncio.run(tool.arun("Analyze this dataset and plot something interesting about it.", ["examples/assets/iris.csv"]))
```

The `CodeInterpreter` class is a flexible tool that uses the `CodeInterpreterSession` from the `codeinterpreterapi` package to run the code interpretation and return the result. It provides both synchronous and asynchronous methods for convenience, and ensures that exceptions are handled gracefully.
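The `run`/`arun` pairing follows a common sync-over-async pattern, which can be sketched like this. The `EchoTool` class is an illustrative toy, not the actual `CodeInterpreter` internals.

```python
import asyncio

# Toy illustration of the run/arun pairing: the synchronous method simply
# drives the asynchronous one to completion.
class EchoTool:
    async def arun(self, prompt: str) -> str:
        # The real arun would open a CodeInterpreterSession and run the code.
        return f"ran: {prompt}"

    def run(self, prompt: str) -> str:
        # Synchronous convenience wrapper around the async path.
        return asyncio.run(self.arun(prompt))


print(EchoTool().run("hello"))  # ran: hello
```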
## LLMs in Swarms Documentation

Welcome to the documentation for the LLM section of the swarms package, designed to facilitate seamless integration with various AI language models and APIs. This package empowers developers, end-users, and system administrators to interact with AI models from different providers, such as OpenAI, Hugging Face, Google PaLM, and Anthropic.

### Table of Contents

1. [OpenAI](#openai)
2. [HuggingFace](#huggingface)
3. [Google PaLM](#google-palm)
4. [Anthropic](#anthropic)
### 1. OpenAI (swarms.models.OpenAI)

The OpenAI class provides an interface to interact with OpenAI's language models. It allows both synchronous and asynchronous interactions.

**Constructor:**
```python
OpenAI(api_key: str, system: str = None, console: bool = True, model: str = None, params: dict = None, save_messages: bool = True)
```

**Attributes:**
- `api_key` (str): Your OpenAI API key.
- `system` (str, optional): A system message to be used in conversations.
- `console` (bool, default=True): Display console logs.
- `model` (str, optional): Name of the language model to use.
- `params` (dict, optional): Additional parameters for model interactions.
- `save_messages` (bool, default=True): Save conversation messages.

**Methods:**
- `generate(message: str, **kwargs) -> str`: Generate a response using the OpenAI model.
- `generate_async(message: str, **kwargs) -> str`: Generate a response asynchronously.
- `ask_multiple(ids: List[str], question_template: str) -> List[str]`: Query multiple IDs simultaneously.
- `stream_multiple(ids: List[str], question_template: str) -> List[str]`: Stream multiple responses.

**Usage Example:**
```python
from swarms import OpenAI
import asyncio

chat = OpenAI(api_key="YOUR_OPENAI_API_KEY")

response = chat.generate("Hello, how can I assist you?")
print(response)

ids = ["id1", "id2", "id3"]
async_responses = asyncio.run(chat.ask_multiple(ids, "How is {id}?"))
print(async_responses)
```
### 2. HuggingFace (swarms.models.HuggingFaceLLM)

The HuggingFaceLLM class allows interaction with language models from Hugging Face.

**Constructor:**
```python
HuggingFaceLLM(model_id: str, device: str = None, max_length: int = 20, quantize: bool = False, quantization_config: dict = None)
```

**Attributes:**
- `model_id` (str): ID or name of the Hugging Face model.
- `device` (str, optional): Device to run the model on (e.g., 'cuda', 'cpu').
- `max_length` (int, default=20): Maximum length of generated text.
- `quantize` (bool, default=False): Apply model quantization.
- `quantization_config` (dict, optional): Configuration for quantization.

**Methods:**
- `generate(prompt_text: str, max_length: int = None) -> str`: Generate text based on a prompt.

**Usage Example:**
```python
from swarms import HuggingFaceLLM

model_id = "gpt2"
hugging_face_model = HuggingFaceLLM(model_id=model_id)

prompt = "Once upon a time"
generated_text = hugging_face_model.generate(prompt)
print(generated_text)
```
### 3. Google PaLM (swarms.models.GooglePalm)

The GooglePalm class provides an interface for Google's PaLM Chat API.

**Constructor:**
```python
GooglePalm(model_name: str = "models/chat-bison-001", google_api_key: str = None, temperature: float = None, top_p: float = None, top_k: int = None, n: int = 1)
```

**Attributes:**
- `model_name` (str): Name of the Google PaLM model.
- `google_api_key` (str, optional): Google API key.
- `temperature` (float, optional): Temperature for text generation.
- `top_p` (float, optional): Top-p sampling value.
- `top_k` (int, optional): Top-k sampling value.
- `n` (int, default=1): Number of candidate completions.

**Methods:**
- `generate(messages: List[Dict[str, Any]], stop: List[str] = None, **kwargs) -> Dict[str, Any]`: Generate text based on a list of messages.
- `__call__(messages: List[Dict[str, Any]], stop: List[str] = None, **kwargs) -> Dict[str, Any]`: Generate text using the call syntax.

**Usage Example:**
```python
from swarms import GooglePalm

google_palm = GooglePalm()
messages = [
    {"role": "system", "content": "You are a helpful assistant"},
    {"role": "user", "content": "Tell me a joke"},
]

response = google_palm.generate(messages)
print(response["choices"][0]["text"])
```
### 4. Anthropic (swarms.models.Anthropic)

The Anthropic class enables interaction with Anthropic's large language models.

**Constructor:**
```python
Anthropic(model: str = "claude-2", max_tokens_to_sample: int = 256, temperature: float = None, top_k: int = None, top_p: float = None, streaming: bool = False, default_request_timeout: int = None)
```

**Attributes:**
- `model` (str): Name of the Anthropic model.
- `max_tokens_to_sample` (int, default=256): Maximum tokens to sample.
- `temperature` (float, optional): Temperature for text generation.
- `top_k` (int, optional): Top-k sampling value.
- `top_p` (float, optional): Top-p sampling value.
- `streaming` (bool, default=False): Enable streaming mode.
- `default_request_timeout` (int, optional): Default request timeout.

**Methods:**
- `generate(prompt: str, stop: List[str] = None) -> str`: Generate text based on a prompt.

**Usage Example:**
```python
from swarms import Anthropic

anthropic = Anthropic()
prompt = "Once upon a time"
generated_text = anthropic.generate(prompt)
print(generated_text)
```

This concludes the documentation for the "swarms" package, providing you with tools to seamlessly integrate with various language models and APIs. Happy coding!