Merge branch 'master' of https://github.com/kyegomez/swarms into memory

# Conflicts:
#	pyproject.toml
pull/175/head · commit c2a15ee3bb · author: Sashin

@ -0,0 +1,66 @@
# This workflow uses actions that are not certified by GitHub.
# They are provided by a third-party and are governed by
# separate terms of service, privacy policy, and support
# documentation.
# This workflow lets you generate SLSA provenance file for your project.
# The generation satisfies level 3 for the provenance requirements - see https://slsa.dev/spec/v0.1/requirements
# The project is an initiative of the OpenSSF (openssf.org) and is developed at
# https://github.com/slsa-framework/slsa-github-generator.
# The provenance file can be verified using https://github.com/slsa-framework/slsa-verifier.
# For more information about SLSA and how it improves the supply-chain, visit slsa.dev.
name: SLSA generic generator
on:
workflow_dispatch:
release:
types: [created]
jobs:
build:
runs-on: ubuntu-latest
outputs:
digests: ${{ steps.hash.outputs.digests }}
steps:
- uses: actions/checkout@v3
# ========================================================
#
# Step 1: Build your artifacts.
#
# ========================================================
- name: Build artifacts
run: |
# These are some amazing artifacts.
echo "artifact1" > artifact1
echo "artifact2" > artifact2
# ========================================================
#
# Step 2: Add a step to generate the provenance subjects
# as shown below. Update the sha256 sum arguments
# to include all binaries that you generate
# provenance for.
#
# ========================================================
- name: Generate subject for provenance
id: hash
run: |
set -euo pipefail
# List the artifacts the provenance will refer to.
files=$(ls artifact*)
# Generate the subjects (base64 encoded).
echo "hashes=$(sha256sum $files | base64 -w0)" >> "${GITHUB_OUTPUT}"
provenance:
needs: [build]
permissions:
actions: read # To read the workflow path.
id-token: write # To sign the provenance.
contents: write # To add assets to a release.
uses: slsa-framework/slsa-github-generator/.github/workflows/generator_generic_slsa3.yml@v1.4.0
with:
base64-subjects: "${{ needs.build.outputs.digests }}"
upload-assets: true # Optional: Upload to a new release

@ -0,0 +1,27 @@
name: Makefile CI
on:
push:
branches: [ "master" ]
pull_request:
branches: [ "master" ]
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: configure
run: ./configure
- name: Install dependencies
run: make
- name: Run check
run: make check
- name: Run distcheck
run: make distcheck

@ -0,0 +1,46 @@
# This workflow uses actions that are not certified by GitHub.
# They are provided by a third-party and are governed by
# separate terms of service, privacy policy, and support
# documentation.
# This workflow integrates Pyre with GitHub's
# Code Scanning feature.
#
# Pyre is a performant type checker for Python compliant with
# PEP 484. Pyre can analyze codebases with millions of lines
# of code incrementally providing instantaneous feedback
# to developers as they write code.
#
# See https://pyre-check.org
name: Pyre
on:
workflow_dispatch:
push:
branches: [ "master" ]
pull_request:
branches: [ "master" ]
permissions:
contents: read
jobs:
pyre:
permissions:
actions: read
contents: read
security-events: write
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
submodules: true
- name: Run Pyre
uses: facebook/pyre-action@60697a7858f7cc8470d8cc494a3cf2ad6b06560d
with:
# To customize these inputs:
# See https://github.com/facebook/pyre-action#inputs
repo-directory: './'
requirements-path: 'requirements.txt'

@ -7,7 +7,7 @@ repos:
rev: 'v0.0.255'
hooks:
- id: ruff
args: [--fix]
args: [--fix, --unsafe-fixes]
- repo: https://github.com/nbQA-dev/nbQA
rev: 1.6.3
hooks:

@ -20,6 +20,12 @@ Swarms is designed to provide modular building blocks to build scalable swarms o
Before you contribute a new feature, consider submitting an Issue to discuss the feature so the community can weigh in and assist.
### Requirements:
- A new class or function module, documented with docstrings and including error handling (see the sketch after this list)
- Tests written with pytest, placed under the `tests/` folder mirroring the module's folder
- Documentation in the `docs/swarms/module_name` folder, added to `mkdocs.yml`
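For illustration only, here is a minimal sketch of what a contribution meeting these requirements might look like (the `EchoAgent` class, module path, and test file below are hypothetical and not part of `swarms`):
```python
import pytest


# swarms/structs/echo_agent.py (hypothetical module path, for illustration only)
class EchoAgent:
    """A minimal agent that echoes the task it receives.

    Args:
        prefix (str): String prepended to every response.
    """

    def __init__(self, prefix: str = "echo: "):
        self.prefix = prefix

    def run(self, task: str) -> str:
        """Return the task prefixed with ``self.prefix``.

        Raises:
            TypeError: If ``task`` is not a string.
        """
        if not isinstance(task, str):
            raise TypeError("task must be a string")
        return self.prefix + task


# tests/structs/test_echo_agent.py (hypothetical mirrored test module)
def test_echo_agent_runs():
    assert EchoAgent(prefix=">> ").run("hello") == ">> hello"


def test_echo_agent_rejects_non_string():
    with pytest.raises(TypeError):
        EchoAgent().run(42)
```
Run the tests locally with pytest before opening the PR.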
## How to Contribute Changes
First, fork this repository to your own GitHub account. Click "fork" in the top corner of the `swarms` repository to get started:
@ -43,11 +49,30 @@ git push -u origin main
```
## 🎨 Code quality
- Follow this Python code quality guide or your PR will most likely be overlooked: [CLICK HERE](https://google.github.io/styleguide/pyguide.html)
### Pre-commit tool
This project utilizes the [pre-commit](https://pre-commit.com/) tool to maintain code quality and consistency. Before submitting a pull request or making any commits, it is important to run the pre-commit tool to ensure that your changes meet the project's guidelines.
- Install pre-commit (https://pre-commit.com/)
```bash
pip install pre-commit
```
- Check that it's installed
```bash
pre-commit --version
```
Now when you make a git commit, the black code formatter and ruff linter will run.
Furthermore, we have integrated a pre-commit GitHub Action into our workflow. This means that with every pull request opened, the pre-commit checks will be automatically enforced, streamlining the code review process and ensuring that all contributions adhere to our quality standards.
To run the pre-commit tool, follow these steps:
@ -60,6 +85,7 @@ To run the pre-commit tool, follow these steps:
4. You can also install pre-commit as a git hook by executing `pre-commit install`. Every time you run `git commit`, pre-commit will run automatically for you.
### Docstrings
All new functions and classes in `swarms` should include docstrings. This is a prerequisite for any new functions and classes to be added to the library.
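As a rough illustration, a Google-style docstring of the kind used elsewhere in the codebase might look like this (the `count_tokens` helper below is hypothetical, not an existing `swarms` function):
```python
def count_tokens(text: str, chunk_size: int = 4) -> int:
    """Estimate the number of tokens in a piece of text.

    A hypothetical helper used only to illustrate docstring style.

    Args:
        text (str): The input text to measure.
        chunk_size (int): Approximate characters per token. Defaults to 4.

    Returns:
        int: The estimated token count.

    Raises:
        ValueError: If ``chunk_size`` is not positive.

    Examples:
        >>> count_tokens("hello world")
        2
    """
    if chunk_size <= 0:
        raise ValueError("chunk_size must be positive")
    return max(1, len(text) // chunk_size)
```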

@ -1,21 +0,0 @@
Developers
Install pre-commit (https://pre-commit.com/)
```bash
pip install pre-commit
```
Check that it's installed
```bash
pre-commit --version
```
This repository already has a pre-commit configuration. To install the hooks, run:
```bash
pre-commit install
```
Now when you make a git commit, the black code formatter and ruff linter will run.

@ -22,11 +22,10 @@ Swarms is a modular framework that enables reliable and useful multi-agent colla
---
## Usage
### Example in Colab:
<a target="_blank" href="https://colab.research.google.com/github/kyegomez/swarms/blob/master/playground/swarms_example.ipynb">
Run example in Colab: <a target="_blank" href="https://colab.research.google.com/github/kyegomez/swarms/blob/master/playground/swarms_example.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a> Run example in Colab, using your OpenAI API key.
</a>
### `Flow` Example
- Reliable structure that provides LLMs autonomy
@ -42,10 +41,8 @@ api_key = ""
# Initialize the language model; this model can be swapped out for Anthropic or Hugging Face models like Mistral, etc.
llm = OpenAIChat(
# model_name="gpt-4"
openai_api_key=api_key,
temperature=0.5,
# max_tokens=100,
)
## Initialize the workflow
@ -53,24 +50,10 @@ flow = Flow(
llm=llm,
max_loops=2,
dashboard=True,
# stopping_condition=None, # You can define a stopping condition as needed.
# loop_interval=1,
# retry_attempts=3,
# retry_interval=1,
# interactive=False, # Set to 'True' for interactive mode.
# dynamic_temperature=False, # Set to 'True' for dynamic temperature handling.
)
# out = flow.load_state("flow_state.json")
# temp = flow.dynamic_temperature()
# filter = flow.add_response_filter("Trump")
out = flow.run("Generate a 10,000 word blog on health and wellness.")
# out = flow.validate_response(out)
# out = flow.analyze_feedback(out)
# out = flow.print_history_and_memory()
# # out = flow.save_state("flow_state.json")
# print(out)
```

@ -1,37 +1,14 @@
from swarms.models import OpenAIChat
from swarms.structs import Flow
# Initialize the language model, this model can be swapped out with Anthropic, ETC, Huggingface Models like Mistral, ETC
# Initialize the language model
llm = OpenAIChat(
# model_name="gpt-4"
# openai_api_key=api_key,
temperature=0.5,
# max_tokens=100,
)
## Initialize the workflow
flow = Flow(
llm=llm,
max_loops=2,
dashboard=True,
# tools=[search_api]
# stopping_condition=None, # You can define a stopping condition as needed.
# loop_interval=1,
# retry_attempts=3,
# retry_interval=1,
# interactive=False, # Set to 'True' for interactive mode.
# dynamic_temperature=False, # Set to 'True' for dynamic temperature handling.
)
flow = Flow(llm=llm, max_loops=1, dashboard=True)
# out = flow.load_state("flow_state.json")
# temp = flow.dynamic_temperature()
# filter = flow.add_response_filter("Trump")
out = flow.run(
"Generate a 10,000 word blog on mental clarity and the benefits of meditation."
)
# out = flow.validate_response(out)
# out = flow.analyze_feedback(out)
# out = flow.print_history_and_memory()
# # out = flow.save_state("flow_state.json")
# print(out)
# Run the workflow on a task
out = flow.run("Generate a 10,000 word blog on health and wellness.")

@ -0,0 +1,31 @@
import os
from dotenv import load_dotenv
from swarms.models import OpenAIChat
from swarms.structs import Flow
from swarms.swarms.multi_agent_collab import MultiAgentCollaboration
load_dotenv()
api_key = os.environ.get("OPENAI_API_KEY")
# Initialize the language model
llm = OpenAIChat(
temperature=0.5,
openai_api_key=api_key,
)
## Initialize the workflow
flow = Flow(llm=llm, max_loops=1, dashboard=True)
flow2 = Flow(llm=llm, max_loops=1, dashboard=True)
flow3 = Flow(llm=llm, max_loops=1, dashboard=True)
swarm = MultiAgentCollaboration(
agents=[flow, flow2, flow3],
max_iters=4,
)
swarm.run("Generate a 10,000 word blog on health and wellness.")

@ -9,6 +9,9 @@ text = node.run_text("What is your name? Generate a picture of yourself")
img = node.run_img("/image1", "What is this image about?")
chat = node.chat(
"What is your name? Generate a picture of yourself. What is this image about?",
(
"What is your name? Generate a picture of yourself. What is this image"
" about?"
),
streaming=True,
)

@ -10,13 +10,19 @@ config = {
"plugin_ids": [os.getenv("REVGPT_PLUGIN_IDS")],
"disable_history": os.getenv("REVGPT_DISABLE_HISTORY") == "True",
"PUID": os.getenv("REVGPT_PUID"),
"unverified_plugin_domains": [os.getenv("REVGPT_UNVERIFIED_PLUGIN_DOMAINS")],
"unverified_plugin_domains": [
os.getenv("REVGPT_UNVERIFIED_PLUGIN_DOMAINS")
],
}
llm = RevChatGPTModel(access_token=os.getenv("ACCESS_TOKEN"), **config)
worker = Worker(ai_name="Optimus Prime", llm=llm)
task = "What were the winning boston marathon times for the past 5 years (ending in 2022)? Generate a table of the year, name, country of origin, and times."
task = (
"What were the winning boston marathon times for the past 5 years (ending"
" in 2022)? Generate a table of the year, name, country of origin, and"
" times."
)
response = worker.run(task)
print(response)

@ -103,7 +103,8 @@ class AccountantSwarms:
# Provide decision making support to the accountant
decision_making_support_agent_output = decision_making_support_agent.run(
f"{self.decision_making_support_agent_instructions}: {summary_agent_output}"
f"{self.decision_making_support_agent_instructions}:"
f" {summary_agent_output}"
)
return decision_making_support_agent_output
@ -113,5 +114,7 @@ swarm = AccountantSwarms(
pdf_path="tesla.pdf",
fraud_detection_instructions="Detect fraud in the document",
summary_agent_instructions="Generate an actionable summary of the document",
decision_making_support_agent_instructions="Provide decision making support to the business owner:",
decision_making_support_agent_instructions=(
"Provide decision making support to the business owner:"
),
)

@ -48,6 +48,7 @@ paper_implementor_agent = Flow(
paper = pdf_to_text(PDF_PATH)
algorithmic_psuedocode_agent = paper_summarizer_agent.run(
f"Focus on creating the algorithmic pseudocode for the novel method in this paper: {paper}"
"Focus on creating the algorithmic pseudocode for the novel method in this"
f" paper: {paper}"
)
pytorch_code = paper_implementor_agent.run(algorithmic_psuedocode_agent)

@ -0,0 +1,86 @@
import re
from swarms.models.openai_models import OpenAIChat
class AutoTemp:
"""
AutoTemp is a tool for automatically selecting the best temperature setting for a given task.
It generates responses at different temperatures, evaluates them, and ranks them based on quality.
"""
def __init__(
self,
api_key,
default_temp=0.0,
alt_temps=None,
auto_select=True,
max_workers=6,
):
self.api_key = api_key
self.default_temp = default_temp
self.alt_temps = (
alt_temps if alt_temps else [0.4, 0.6, 0.8, 1.0, 1.2, 1.4]
)
self.auto_select = auto_select
self.max_workers = max_workers
self.llm = OpenAIChat(
openai_api_key=self.api_key, temperature=self.default_temp
)
def evaluate_output(self, output, temperature):
print(f"Evaluating output at temperature {temperature}...")
eval_prompt = f"""
Evaluate the following output which was generated at a temperature setting of {temperature}. Provide a precise score from 0.0 to 100.0, considering the following criteria:
- Relevance: How well does the output address the prompt or task at hand?
- Clarity: Is the output easy to understand and free of ambiguity?
- Utility: How useful is the output for its intended purpose?
- Pride: If the user had to submit this output to the world for their career, would they be proud?
- Delight: Is the output likely to delight or positively surprise the user?
Be sure to comprehensively evaluate the output, it is very important for my career. Please answer with just the score with one decimal place accuracy, such as 42.0 or 96.9. Be extremely critical.
Output to evaluate:
---
{output}
---
"""
score_text = self.llm(eval_prompt, temperature=0.5)
score_match = re.search(r"\b\d+(\.\d)?\b", score_text)
return round(float(score_match.group()), 1) if score_match else 0.0
def run(self, prompt, temperature_string):
print("Starting generation process...")
temperature_list = [
float(temp.strip())
for temp in temperature_string.split(",")
if temp.strip()
]
outputs = {}
scores = {}
for temp in temperature_list:
print(f"Generating at temperature {temp}...")
output_text = self.llm(prompt, temperature=temp)
if output_text:
outputs[temp] = output_text
scores[temp] = self.evaluate_output(output_text, temp)
print("Generation process complete.")
if not scores:
return "No valid outputs generated.", None
sorted_scores = sorted(
scores.items(), key=lambda item: item[1], reverse=True
)
best_temp, best_score = sorted_scores[0]
best_output = outputs[best_temp]
return (
f"Best AutoTemp Output (Temp {best_temp} | Score:"
f" {best_score}):\n{best_output}"
if self.auto_select
else "\n".join(
f"Temp {temp} | Score: {score}:\n{outputs[temp]}"
for temp, score in sorted_scores
)
)

@ -0,0 +1,22 @@
from swarms.models import OpenAIChat
from swarms.models.autotemp import AutoTemp
# Your OpenAI API key
api_key = ""
autotemp_agent = AutoTemp(
api_key=api_key,
alt_temps=[0.4, 0.6, 0.8, 1.0, 1.2],
auto_select=False,
# model_version="gpt-3.5-turbo" # Specify the model version if needed
)
# Define the task and temperature string
task = "Generate a short story about a lost civilization."
temperature_string = "0.4,0.6,0.8,1.0,1.2,"
# Run the AutoTempAgent
result = autotemp_agent.run(task, temperature_string)
# Print the result
print(result)

@ -0,0 +1,128 @@
import os
from termcolor import colored
from swarms.models import OpenAIChat
from swarms.models.autotemp import AutoTemp
from swarms.structs import SequentialWorkflow
class BlogGen:
def __init__(
self,
api_key,
blog_topic,
temperature_range: str = "0.4,0.6,0.8,1.0,1.2",
): # Add blog_topic as an argument
self.openai_chat = OpenAIChat(openai_api_key=api_key, temperature=0.8)
self.auto_temp = AutoTemp(api_key)
self.temperature_range = temperature_range
self.workflow = SequentialWorkflow(max_loops=5)
# Formatting the topic selection prompt with the user's topic
self.TOPIC_SELECTION_SYSTEM_PROMPT = f"""
Given the topic '{blog_topic}', generate an engaging and versatile blog topic. This topic should cover areas related to '{blog_topic}' and might include aspects such as current events, lifestyle, technology, health, and culture related to '{blog_topic}'. Identify trending subjects within this realm. The topic must be unique, thought-provoking, and have the potential to draw in readers interested in '{blog_topic}'.
"""
self.DRAFT_WRITER_SYSTEM_PROMPT = """
Create an engaging and comprehensive blog article of at least 1,000 words on '{{CHOSEN_TOPIC}}'. The content should be original, informative, and reflective of a human-like style, with a clear structure including headings and sub-headings. Incorporate a blend of narrative, factual data, expert insights, and anecdotes to enrich the article. Focus on SEO optimization by using relevant keywords, ensuring readability, and including meta descriptions and title tags. The article should provide value, appeal to both knowledgeable and general readers, and maintain a balance between depth and accessibility. Aim to make the article engaging and suitable for online audiences.
"""
self.REVIEW_AGENT_SYSTEM_PROMPT = """
Critically review the drafted blog article on '{{ARTICLE_TOPIC}}' to refine it to high-quality content suitable for online publication. Ensure the article is coherent, factually accurate, engaging, and optimized for search engines (SEO). Check for the effective use of keywords, readability, internal and external links, and the inclusion of meta descriptions and title tags. Edit the content to enhance clarity and impact while maintaining the author's voice. The goal is to polish the article into a professional, error-free piece that resonates with the target audience, adheres to publication standards, and is optimized for both search engines and social media sharing.
"""
self.DISTRIBUTION_AGENT_SYSTEM_PROMPT = """
Develop an autonomous distribution strategy for the blog article on '{{ARTICLE_TOPIC}}'. Utilize an API to post the article on a popular blog platform (e.g., WordPress, Blogger, Medium) commonly used by our target audience. Ensure the post includes all SEO elements like meta descriptions, title tags, and properly formatted content. Craft unique, engaging social media posts tailored to different platforms to promote the blog article. Schedule these posts to optimize reach and engagement, using data-driven insights. Monitor the performance of the distribution efforts, adjusting strategies based on engagement metrics and audience feedback. Aim to maximize the article's visibility, attract a diverse audience, and foster engagement across digital channels.
"""
def run_workflow(self):
try:
# Topic generation using OpenAIChat
topic_result = self.openai_chat.generate(
[self.TOPIC_SELECTION_SYSTEM_PROMPT]
)
topic_output = topic_result.generations[0][0].text
print(
colored(
(
"\nTopic Selection Task"
f" Output:\n----------------------------\n{topic_output}\n"
),
"white",
)
)
chosen_topic = topic_output.split("\n")[0]
print(colored("Selected topic: " + chosen_topic, "yellow"))
# Initial draft generation with AutoTemp
initial_draft_prompt = self.DRAFT_WRITER_SYSTEM_PROMPT.replace(
"{{CHOSEN_TOPIC}}", chosen_topic
)
auto_temp_output = self.auto_temp.run(
initial_draft_prompt, self.temperature_range
)
initial_draft_output = auto_temp_output # Assuming AutoTemp.run returns the best output directly
print(
colored(
(
"\nInitial Draft"
f" Output:\n----------------------------\n{initial_draft_output}\n"
),
"white",
)
)
# Review process using OpenAIChat
review_prompt = self.REVIEW_AGENT_SYSTEM_PROMPT.replace(
"{{ARTICLE_TOPIC}}", chosen_topic
)
review_result = self.openai_chat.generate([review_prompt])
review_output = review_result.generations[0][0].text
print(
colored(
(
"\nReview"
f" Output:\n----------------------------\n{review_output}\n"
),
"white",
)
)
# Distribution preparation using OpenAIChat
distribution_prompt = self.DISTRIBUTION_AGENT_SYSTEM_PROMPT.replace(
"{{ARTICLE_TOPIC}}", chosen_topic
)
distribution_result = self.openai_chat.generate(
[distribution_prompt]
)
distribution_output = distribution_result.generations[0][0].text
print(
colored(
(
"\nDistribution"
f" Output:\n----------------------------\n{distribution_output}\n"
),
"white",
)
)
# Final compilation of the blog
final_blog_content = f"{initial_draft_output}\n\n{review_output}\n\n{distribution_output}"
print(
colored(
(
"\nFinal Blog"
f" Content:\n----------------------------\n{final_blog_content}\n"
),
"green",
)
)
except Exception as e:
print(colored(f"An error occurred: {str(e)}", "red"))
if __name__ == "__main__":
api_key = os.environ["OPENAI_API_KEY"]
blog_generator = BlogGen(api_key)
blog_generator.run_workflow()

@ -0,0 +1,23 @@
import os
from swarms.swarms.blog_gen import BlogGen
def main():
api_key = os.getenv("OPENAI_API_KEY")
if not api_key:
raise ValueError("OPENAI_API_KEY environment variable not set.")
blog_topic = input("Enter the topic for the blog generation: ")
blog_generator = BlogGen(api_key, blog_topic)
blog_generator.TOPIC_SELECTION_SYSTEM_PROMPT = (
blog_generator.TOPIC_SELECTION_SYSTEM_PROMPT.replace(
"{{BLOG_TOPIC}}", blog_topic
)
)
blog_generator.run_workflow()
if __name__ == "__main__":
main()

@ -4,7 +4,10 @@ from swarms.models import Idefics
# Multi Modality Auto Agent
llm = Idefics(max_length=2000)
task = "User: What is in this image? https://upload.wikimedia.org/wikipedia/commons/8/86/Id%C3%A9fix.JPG"
task = (
"User: What is in this image?"
" https://upload.wikimedia.org/wikipedia/commons/8/86/Id%C3%A9fix.JPG"
)
## Initialize the workflow
flow = Flow(

Binary file not shown (new image, 193 KiB).

@ -0,0 +1,129 @@
import os
import base64
import requests
from dotenv import load_dotenv
from swarms.models import Anthropic, OpenAIChat
from swarms.structs import Flow
# Load environment variables
load_dotenv()
openai_api_key = os.getenv("OPENAI_API_KEY")
# Define prompts for various tasks
MEAL_PLAN_PROMPT = (
"Based on the following user preferences: dietary restrictions as"
" vegetarian, preferred cuisines as Italian and Indian, a total caloric"
" intake of around 2000 calories per day, and an exclusion of legumes,"
" create a detailed weekly meal plan. Include a variety of meals for"
" breakfast, lunch, dinner, and optional snacks."
)
IMAGE_ANALYSIS_PROMPT = (
"Identify the items in this fridge, including their quantities and"
" condition."
)
# Function to encode image to base64
def encode_image(image_path):
with open(image_path, "rb") as image_file:
return base64.b64encode(image_file.read()).decode("utf-8")
# Initialize Language Model (LLM)
llm = OpenAIChat(
openai_api_key=openai_api_key,
max_tokens=3000,
)
# Function to handle vision tasks
def create_vision_agent(image_path):
base64_image = encode_image(image_path)
headers = {
"Content-Type": "application/json",
"Authorization": f"Bearer {openai_api_key}",
}
payload = {
"model": "gpt-4-vision-preview",
"messages": [
{
"role": "user",
"content": [
{"type": "text", "text": IMAGE_ANALYSIS_PROMPT},
{
"type": "image_url",
"image_url": {
"url": f"data:image/jpeg;base64,{base64_image}"
},
},
],
}
],
"max_tokens": 300,
}
response = requests.post(
"https://api.openai.com/v1/chat/completions",
headers=headers,
json=payload,
)
return response.json()
# Function to generate an integrated shopping list considering meal plan and fridge contents
def generate_integrated_shopping_list(
meal_plan_output, image_analysis, user_preferences
):
# Prepare the prompt for the LLM
fridge_contents = image_analysis["choices"][0]["message"]["content"]
prompt = (
f"Based on this meal plan: {meal_plan_output}, and the following items"
f" in the fridge: {fridge_contents}, considering dietary preferences as"
" vegetarian with a preference for Italian and Indian cuisines,"
" generate a comprehensive shopping list that includes only the items"
" needed."
)
# Send the prompt to the LLM and return the response
response = llm(prompt)
return response # assuming the response is a string
# Define agent for meal planning
meal_plan_agent = Flow(
llm=llm,
sop=MEAL_PLAN_PROMPT,
max_loops=1,
autosave=True,
saved_state_path="meal_plan_agent.json",
)
# User preferences for meal planning
user_preferences = {
"dietary_restrictions": "vegetarian",
"preferred_cuisines": ["Italian", "Indian"],
"caloric_intake": 2000,
"other notes": "Doesn't eat legumes",
}
# Generate Meal Plan
meal_plan_output = meal_plan_agent.run(
f"Generate a meal plan: {user_preferences}"
)
# Vision Agent - Analyze an Image
image_analysis_output = create_vision_agent("full_fridge.jpg")
# Generate Integrated Shopping List
integrated_shopping_list = generate_integrated_shopping_list(
meal_plan_output, image_analysis_output, user_preferences
)
# Print and save the outputs
print("Meal Plan:", meal_plan_output)
print("Integrated Shopping List:", integrated_shopping_list)
with open("nutrition_output.txt", "w") as file:
file.write("Meal Plan:\n" + meal_plan_output + "\n\n")
file.write("Integrated Shopping List:\n" + integrated_shopping_list + "\n")
print("Outputs have been saved to nutrition_output.txt")

@ -39,9 +39,9 @@ def get_review_prompt(article):
def social_media_prompt(article: str, goal: str = "Clicks and engagement"):
prompt = SOCIAL_MEDIA_SYSTEM_PROMPT_AGENT.replace("{{ARTICLE}}", article).replace(
"{{GOAL}}", goal
)
prompt = SOCIAL_MEDIA_SYSTEM_PROMPT_AGENT.replace(
"{{ARTICLE}}", article
).replace("{{GOAL}}", goal)
return prompt
@ -50,7 +50,8 @@ topic_selection_task = (
"Generate 10 topics on gaining mental clarity using ancient practices"
)
topics = llm(
f"Your System Instructions: {TOPIC_GENERATOR}, Your current task: {topic_selection_task}"
f"Your System Instructions: {TOPIC_GENERATOR}, Your current task:"
f" {topic_selection_task}"
)
dashboard = print(
@ -109,7 +110,9 @@ reviewed_draft = print(
# Agent that publishes on social media
distribution_agent = llm(social_media_prompt(draft_blog, goal="Clicks and engagement"))
distribution_agent = llm(
social_media_prompt(draft_blog, goal="Clicks and engagement")
)
distribution_agent_out = print(
colored(
f"""

@ -1,6 +1,8 @@
from swarms.models.bioclip import BioClip
clip = BioClip("hf-hub:microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224")
clip = BioClip(
"hf-hub:microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224"
)
labels = [
"adenocarcinoma histopathology",

@ -2,11 +2,17 @@ from swarms.models import idefics
model = idefics()
user_input = "User: What is in this image? https://upload.wikimedia.org/wikipedia/commons/8/86/Id%C3%A9fix.JPG"
user_input = (
"User: What is in this image?"
" https://upload.wikimedia.org/wikipedia/commons/8/86/Id%C3%A9fix.JPG"
)
response = model.chat(user_input)
print(response)
user_input = "User: And who is that? https://static.wikia.nocookie.net/asterix/images/2/25/R22b.gif/revision/latest?cb=20110815073052"
user_input = (
"User: And who is that?"
" https://static.wikia.nocookie.net/asterix/images/2/25/R22b.gif/revision/latest?cb=20110815073052"
)
response = model.chat(user_input)
print(response)

@ -28,7 +28,9 @@ llama_caller.add_func(
)
# Call the function
result = llama_caller.call_function("get_weather", location="Paris", format="Celsius")
result = llama_caller.call_function(
"get_weather", location="Paris", format="Celsius"
)
print(result)
# Stream a user prompt

@ -3,5 +3,6 @@ from swarms.models.vilt import Vilt
model = Vilt()
output = model(
"What is this image", "http://images.cocodataset.org/val2017/000000039769.jpg"
"What is this image",
"http://images.cocodataset.org/val2017/000000039769.jpg",
)

@ -30,7 +30,9 @@ async def async_load_playwright(url: str) -> str:
text = soup.get_text()
lines = (line.strip() for line in text.splitlines())
chunks = (phrase.strip() for line in lines for phrase in line.split(" "))
chunks = (
phrase.strip() for line in lines for phrase in line.split(" ")
)
results = "\n".join(chunk for chunk in chunks if chunk)
except Exception as e:
results = f"Error: {e}"
@ -58,5 +60,6 @@ flow = Flow(
)
out = flow.run(
"Generate a 10,000 word blog on mental clarity and the benefits of meditation."
"Generate a 10,000 word blog on mental clarity and the benefits of"
" meditation."
)

@ -36,7 +36,9 @@ class DialogueAgent:
message = self.model(
[
self.system_message,
HumanMessage(content="\n".join(self.message_history + [self.prefix])),
HumanMessage(
content="\n".join(self.message_history + [self.prefix])
),
]
)
return message.content
@ -124,19 +126,19 @@ game_description = f"""Here is the topic for the presidential debate: {topic}.
The presidential candidates are: {', '.join(character_names)}."""
player_descriptor_system_message = SystemMessage(
content="You can add detail to the description of each presidential candidate."
content=(
"You can add detail to the description of each presidential candidate."
)
)
def generate_character_description(character_name):
character_specifier_prompt = [
player_descriptor_system_message,
HumanMessage(
content=f"""{game_description}
HumanMessage(content=f"""{game_description}
Please reply with a creative description of the presidential candidate, {character_name}, in {word_limit} words or less, that emphasizes their personalities.
Speak directly to {character_name}.
Do not add anything else."""
),
Do not add anything else."""),
]
character_description = ChatOpenAI(temperature=1.0)(
character_specifier_prompt
@ -155,9 +157,7 @@ Your goal is to be as creative as possible and make the voters think you are the
def generate_character_system_message(character_name, character_header):
return SystemMessage(
content=(
f"""{character_header}
return SystemMessage(content=f"""{character_header}
You will speak in the style of {character_name}, and exaggerate their personality.
You will come up with creative ideas related to {topic}.
Do not say the same things over and over again.
@ -169,13 +169,12 @@ Speak only from the perspective of {character_name}.
Stop speaking the moment you finish speaking from your perspective.
Never forget to keep your response to {word_limit} words!
Do not add anything else.
"""
)
)
""")
character_descriptions = [
generate_character_description(character_name) for character_name in character_names
generate_character_description(character_name)
for character_name in character_names
]
character_headers = [
generate_character_header(character_name, character_description)
@ -185,7 +184,9 @@ character_headers = [
]
character_system_messages = [
generate_character_system_message(character_name, character_headers)
for character_name, character_headers in zip(character_names, character_headers)
for character_name, character_headers in zip(
character_names, character_headers
)
]
for (
@ -207,7 +208,10 @@ for (
class BidOutputParser(RegexParser):
def get_format_instructions(self) -> str:
return "Your response should be an integer delimited by angled brackets, like this: <int>."
return (
"Your response should be an integer delimited by angled brackets,"
" like this: <int>."
)
bid_parser = BidOutputParser(
@ -248,8 +252,7 @@ for character_name, bidding_template in zip(
topic_specifier_prompt = [
SystemMessage(content="You can make a task more specific."),
HumanMessage(
content=f"""{game_description}
HumanMessage(content=f"""{game_description}
You are the debate moderator.
Please make the debate topic more specific.
@ -257,8 +260,7 @@ topic_specifier_prompt = [
Be creative and imaginative.
Please reply with the specified topic in {word_limit} words or less.
Speak directly to the presidential candidates: {*character_names,}.
Do not add anything else."""
),
Do not add anything else."""),
]
specified_topic = ChatOpenAI(temperature=1.0)(topic_specifier_prompt).content
@ -321,7 +323,9 @@ for character_name, character_system_message, bidding_template in zip(
max_iters = 10
n = 0
simulator = DialogueSimulator(agents=characters, selection_function=select_next_speaker)
simulator = DialogueSimulator(
agents=characters, selection_function=select_next_speaker
)
simulator.reset()
simulator.inject("Debate Moderator", specified_topic)
print(f"(Debate Moderator): {specified_topic}")

@ -36,7 +36,11 @@ agents = [worker1, worker2, worker3]
debate = MultiAgentDebate(agents, select_speaker)
# Run task
task = "What were the winning boston marathon times for the past 5 years (ending in 2022)? Generate a table of the year, name, country of origin, and times."
task = (
"What were the winning boston marathon times for the past 5 years (ending"
" in 2022)? Generate a table of the year, name, country of origin, and"
" times."
)
results = debate.run(task, max_iters=4)
# Print results

@ -10,4 +10,6 @@ node = Worker(
orchestrator = Orchestrator(node, agent_list=[node] * 10, task_queue=[])
# Agent 7 sends a message to Agent 9
orchestrator.chat(sender_id=7, receiver_id=9, message="Can you help me with this task?")
orchestrator.chat(
sender_id=7, receiver_id=9, message="Can you help me with this task?"
)

@ -10,4 +10,6 @@ node = Worker(
orchestrator = Orchestrator(node, agent_list=[node] * 10, task_queue=[])
# Agent 7 sends a message to Agent 9
orchestrator.chat(sender_id=7, receiver_id=9, message="Can you help me with this task?")
orchestrator.chat(
sender_id=7, receiver_id=9, message="Can you help me with this task?"
)

@ -7,7 +7,10 @@ api_key = ""
swarm = HierarchicalSwarm(api_key)
# Define an objective
objective = "Find 20 potential customers for a HierarchicalSwarm based AI Agent automation infrastructure"
objective = (
"Find 20 potential customers for a HierarchicalSwarm based AI Agent"
" automation infrastructure"
)
# Run HierarchicalSwarm
swarm.run(objective)

@ -4,7 +4,7 @@ build-backend = "poetry.core.masonry.api"
[tool.poetry]
name = "swarms"
version = "2.3.9"
version = "2.4.0"
description = "Swarms - Pytorch"
license = "MIT"
authors = ["Kye Gomez <kye@apac.ai>"]
@ -23,16 +23,14 @@ classifiers = [
[tool.poetry.dependencies]
python = "^3.8.1"
sentence_transformers = "*"
torch = "2.1.1"
transformers = "*"
qdrant_client = "*"
openai = "0.28.0"
langchain = "*"
asyncio = "*"
nest_asyncio = "*"
einops = "*"
google-generativeai = "*"
torch = "*"
langchain-experimental = "*"
playwright = "*"
duckduckgo-search = "*"
@ -79,11 +77,18 @@ mypy-protobuf = "^3.0.0"
[tool.autopep8]
max_line_length = 120
max_line_length = 80
ignore = "E501,W6" # or ["E501", "W6"]
in-place = true
recursive = true
aggressive = 3
[tool.ruff]
line-length = 120
line-length = 80
[tool.black]
line-length = 80
target-version = ['py38']
preview = true

@ -1,18 +1,13 @@
# faiss-gpu
torch==2.1.1
transformers
revChatGPT
pandas
langchain
nest_asyncio
pegasusx
google-generativeai
EdgeGPT
langchain-experimental
playwright
wget==3.2
simpleaichat
httpx
torch
open_clip_torch
ggl
beautifulsoup4
@ -26,9 +21,7 @@ soundfile
huggingface-hub
google-generativeai
sentencepiece
duckduckgo-search
PyPDF2
agent-protocol
accelerate
chromadb
tiktoken
@ -56,16 +49,13 @@ openai
opencv-python
prettytable
safetensors
streamlit
test-tube
timm
torchmetrics
transformers
webdataset
marshmallow
yapf
autopep8
dalle3
cohere
torchvision
rich
@ -74,5 +64,4 @@ rich
mkdocs
mkdocs-material
mkdocs-glightbox
pre-commit
pre-commit

@ -18,7 +18,12 @@ from swarms.agents.message import Message
class Step:
def __init__(
self, task: str, id: int, dep: List[int], args: Dict[str, str], tool: BaseTool
self,
task: str,
id: int,
dep: List[int],
args: Dict[str, str],
tool: BaseTool,
):
self.task = task
self.id = id

@ -37,7 +37,7 @@ class BaseVectorStore(ABC):
self,
artifacts: dict[str, list[TextArtifact]],
meta: Optional[dict] = None,
**kwargs
**kwargs,
) -> None:
execute_futures_dict(
{
@ -54,7 +54,7 @@ class BaseVectorStore(ABC):
artifact: TextArtifact,
namespace: Optional[str] = None,
meta: Optional[dict] = None,
**kwargs
**kwargs,
) -> str:
if not meta:
meta = {}
@ -67,7 +67,11 @@ class BaseVectorStore(ABC):
vector = artifact.generate_embedding(self.embedding_driver)
return self.upsert_vector(
vector, vector_id=artifact.id, namespace=namespace, meta=meta, **kwargs
vector,
vector_id=artifact.id,
namespace=namespace,
meta=meta,
**kwargs,
)
def upsert_text(
@ -76,14 +80,14 @@ class BaseVectorStore(ABC):
vector_id: Optional[str] = None,
namespace: Optional[str] = None,
meta: Optional[dict] = None,
**kwargs
**kwargs,
) -> str:
return self.upsert_vector(
self.embedding_driver.embed_string(string),
vector_id=vector_id,
namespace=namespace,
meta=meta if meta else {},
**kwargs
**kwargs,
)
@abstractmethod
@ -93,12 +97,14 @@ class BaseVectorStore(ABC):
vector_id: Optional[str] = None,
namespace: Optional[str] = None,
meta: Optional[dict] = None,
**kwargs
**kwargs,
) -> str:
...
@abstractmethod
def load_entry(self, vector_id: str, namespace: Optional[str] = None) -> Entry:
def load_entry(
self, vector_id: str, namespace: Optional[str] = None
) -> Entry:
...
@abstractmethod
@ -112,6 +118,6 @@ class BaseVectorStore(ABC):
count: Optional[int] = None,
namespace: Optional[str] = None,
include_vectors: bool = False,
**kwargs
**kwargs,
) -> list[QueryResult]:
...

@ -111,7 +111,9 @@ class Chroma(VectorStore):
chroma_db_impl="duckdb+parquet",
)
else:
_client_settings = chromadb.config.Settings(is_persistent=True)
_client_settings = chromadb.config.Settings(
is_persistent=True
)
_client_settings.persist_directory = persist_directory
else:
_client_settings = chromadb.config.Settings()
@ -124,9 +126,11 @@ class Chroma(VectorStore):
self._embedding_function = embedding_function
self._collection = self._client.get_or_create_collection(
name=collection_name,
embedding_function=self._embedding_function.embed_documents
if self._embedding_function is not None
else None,
embedding_function=(
self._embedding_function.embed_documents
if self._embedding_function is not None
else None
),
metadata=collection_metadata,
)
self.override_relevance_score_fn = relevance_score_fn
@ -203,7 +207,9 @@ class Chroma(VectorStore):
metadatas = [metadatas[idx] for idx in non_empty_ids]
texts_with_metadatas = [texts[idx] for idx in non_empty_ids]
embeddings_with_metadatas = (
[embeddings[idx] for idx in non_empty_ids] if embeddings else None
[embeddings[idx] for idx in non_empty_ids]
if embeddings
else None
)
ids_with_metadata = [ids[idx] for idx in non_empty_ids]
try:
@ -216,7 +222,8 @@ class Chroma(VectorStore):
except ValueError as e:
if "Expected metadata value to be" in str(e):
msg = (
"Try filtering complex metadata from the document using "
"Try filtering complex metadata from the document"
" using "
"langchain.vectorstores.utils.filter_complex_metadata."
)
raise ValueError(e.args[0] + "\n\n" + msg)
@ -258,7 +265,9 @@ class Chroma(VectorStore):
Returns:
List[Document]: List of documents most similar to the query text.
"""
docs_and_scores = self.similarity_search_with_score(query, k, filter=filter)
docs_and_scores = self.similarity_search_with_score(
query, k, filter=filter
)
return [doc for doc, _ in docs_and_scores]
def similarity_search_by_vector(
@ -428,7 +437,9 @@ class Chroma(VectorStore):
candidates = _results_to_docs(results)
selected_results = [r for i, r in enumerate(candidates) if i in mmr_selected]
selected_results = [
r for i, r in enumerate(candidates) if i in mmr_selected
]
return selected_results
def max_marginal_relevance_search(
@ -460,7 +471,8 @@ class Chroma(VectorStore):
"""
if self._embedding_function is None:
raise ValueError(
"For MMR search, you must specify an embedding function oncreation."
"For MMR search, you must specify an embedding function"
" oncreation."
)
embedding = self._embedding_function.embed_query(query)
@ -543,7 +555,9 @@ class Chroma(VectorStore):
"""
return self.update_documents([document_id], [document])
def update_documents(self, ids: List[str], documents: List[Document]) -> None:
def update_documents(
self, ids: List[str], documents: List[Document]
) -> None:
"""Update a document in the collection.
Args:
@ -554,7 +568,8 @@ class Chroma(VectorStore):
metadata = [document.metadata for document in documents]
if self._embedding_function is None:
raise ValueError(
"For update, you must specify an embedding function on creation."
"For update, you must specify an embedding function on"
" creation."
)
embeddings = self._embedding_function.embed_documents(text)
@ -645,7 +660,9 @@ class Chroma(VectorStore):
ids=batch[0],
)
else:
chroma_collection.add_texts(texts=texts, metadatas=metadatas, ids=ids)
chroma_collection.add_texts(
texts=texts, metadatas=metadatas, ids=ids
)
return chroma_collection
@classmethod

@ -18,8 +18,8 @@ def cosine_similarity(X: Matrix, Y: Matrix) -> np.ndarray:
Y = np.array(Y)
if X.shape[1] != Y.shape[1]:
raise ValueError(
f"Number of columns in X and Y must be the same. X has shape {X.shape} "
f"and Y has shape {Y.shape}."
"Number of columns in X and Y must be the same. X has shape"
f" {X.shape} and Y has shape {Y.shape}."
)
try:
import simsimd as simd
@ -32,8 +32,9 @@ def cosine_similarity(X: Matrix, Y: Matrix) -> np.ndarray:
return Z
except ImportError:
logger.info(
"Unable to import simsimd, defaulting to NumPy implementation. If you want "
"to use simsimd please install with `pip install simsimd`."
"Unable to import simsimd, defaulting to NumPy implementation. If"
" you want to use simsimd please install with `pip install"
" simsimd`."
)
X_norm = np.linalg.norm(X, axis=1)
Y_norm = np.linalg.norm(Y, axis=1)

@ -151,7 +151,9 @@ class InMemoryTaskDB(TaskDB):
) -> Artifact:
artifact_id = str(uuid.uuid4())
artifact = Artifact(
artifact_id=artifact_id, file_name=file_name, relative_path=relative_path
artifact_id=artifact_id,
file_name=file_name,
relative_path=relative_path,
)
task = await self.get_task(task_id)
task.artifacts.append(artifact)

@ -91,7 +91,9 @@ class OceanDB:
try:
return collection.add(documents=[document], ids=[id])
except Exception as e:
logging.error(f"Failed to append document to the collection. Error {e}")
logging.error(
f"Failed to append document to the collection. Error {e}"
)
raise
def add_documents(self, collection, documents: List[str], ids: List[str]):
@ -137,7 +139,9 @@ class OceanDB:
the results of the query
"""
try:
results = collection.query(query_texts=query_texts, n_results=n_results)
results = collection.query(
query_texts=query_texts, n_results=n_results
)
return results
except Exception as e:
logging.error(f"Failed to query the collection. Error {e}")

@ -89,11 +89,15 @@ class PgVectorVectorStore(BaseVectorStore):
engine: Optional[Engine] = field(default=None, kw_only=True)
table_name: str = field(kw_only=True)
_model: any = field(
default=Factory(lambda self: self.default_vector_model(), takes_self=True)
default=Factory(
lambda self: self.default_vector_model(), takes_self=True
)
)
@connection_string.validator
def validate_connection_string(self, _, connection_string: Optional[str]) -> None:
def validate_connection_string(
self, _, connection_string: Optional[str]
) -> None:
# If an engine is provided, the connection string is not used.
if self.engine is not None:
return
@ -104,7 +108,8 @@ class PgVectorVectorStore(BaseVectorStore):
if not connection_string.startswith("postgresql://"):
raise ValueError(
"The connection string must describe a Postgres database connection"
"The connection string must describe a Postgres database"
" connection"
)
@engine.validator
@ -148,7 +153,7 @@ class PgVectorVectorStore(BaseVectorStore):
vector_id: Optional[str] = None,
namespace: Optional[str] = None,
meta: Optional[dict] = None,
**kwargs
**kwargs,
) -> str:
"""Inserts or updates a vector in the collection."""
with Session(self.engine) as session:
@ -208,7 +213,7 @@ class PgVectorVectorStore(BaseVectorStore):
namespace: Optional[str] = None,
include_vectors: bool = False,
distance_metric: str = "cosine_distance",
**kwargs
**kwargs,
) -> list[BaseVectorStore.QueryResult]:
"""Performs a search on the collection to find vectors similar to the provided input vector,
optionally filtering to only those that match the provided namespace.

@ -108,7 +108,7 @@ class PineconeVectorStoreStore(BaseVector):
vector_id: Optional[str] = None,
namespace: Optional[str] = None,
meta: Optional[dict] = None,
**kwargs
**kwargs,
) -> str:
"""Upsert vector"""
vector_id = vector_id if vector_id else str_to_hash(str(vector))
@ -123,7 +123,9 @@ class PineconeVectorStoreStore(BaseVector):
self, vector_id: str, namespace: Optional[str] = None
) -> Optional[BaseVector.Entry]:
"""Load entry"""
result = self.index.fetch(ids=[vector_id], namespace=namespace).to_dict()
result = self.index.fetch(
ids=[vector_id], namespace=namespace
).to_dict()
vectors = list(result["vectors"].values())
if len(vectors) > 0:
@ -138,7 +140,9 @@ class PineconeVectorStoreStore(BaseVector):
else:
return None
def load_entries(self, namespace: Optional[str] = None) -> list[BaseVector.Entry]:
def load_entries(
self, namespace: Optional[str] = None
) -> list[BaseVector.Entry]:
"""Load entries"""
# This is a hacky way to query up to 10,000 values from Pinecone. Waiting on an official API for fetching
# all values from a namespace:
@ -169,7 +173,7 @@ class PineconeVectorStoreStore(BaseVector):
include_vectors: bool = False,
# PineconeVectorStoreStorageDriver-specific params:
include_metadata=True,
**kwargs
**kwargs,
) -> list[BaseVector.QueryResult]:
"""Query vectors"""
vector = self.embedding_driver.embed_string(query)
@ -196,6 +200,9 @@ class PineconeVectorStoreStore(BaseVector):
def create_index(self, name: str, **kwargs) -> None:
"""Create index"""
params = {"name": name, "dimension": self.embedding_driver.dimensions} | kwargs
params = {
"name": name,
"dimension": self.embedding_driver.dimensions,
} | kwargs
pinecone.create_index(**params)

@ -50,7 +50,9 @@ class StepInput(BaseModel):
class StepOutput(BaseModel):
__root__: Any = Field(
...,
description="Output that the task step has produced. Any value is allowed.",
description=(
"Output that the task step has produced. Any value is allowed."
),
example='{\n"tokens": 7894,\n"estimated_cost": "0,24$"\n}',
)
@ -112,8 +114,9 @@ class Step(StepRequestBody):
None,
description="Output of the task step.",
example=(
"I am going to use the write_to_file command and write Washington to a file"
" called output.txt <write_to_file('output.txt', 'Washington')"
"I am going to use the write_to_file command and write Washington"
" to a file called output.txt <write_to_file('output.txt',"
" 'Washington')"
),
)
additional_output: Optional[StepOutput] = None

@ -57,7 +57,7 @@ def maximal_marginal_relevance(
def filter_complex_metadata(
documents: List[Document],
*,
allowed_types: Tuple[Type, ...] = (str, bool, int, float)
allowed_types: Tuple[Type, ...] = (str, bool, int, float),
) -> List[Document]:
"""Filter out metadata types that are not supported for a vector store."""
updated_documents = []

@ -7,7 +7,11 @@ sys.stderr = log_file
from swarms.models.anthropic import Anthropic # noqa: E402
from swarms.models.petals import Petals # noqa: E402
from swarms.models.mistral import Mistral # noqa: E402
from swarms.models.openai_models import OpenAI, AzureOpenAI, OpenAIChat # noqa: E402
from swarms.models.openai_models import (
OpenAI,
AzureOpenAI,
OpenAIChat,
) # noqa: E402
from swarms.models.zephyr import Zephyr # noqa: E402
from swarms.models.biogpt import BioGPT # noqa: E402
from swarms.models.huggingface import HuggingfaceLLM # noqa: E402

@ -50,7 +50,9 @@ def xor_args(*arg_groups: Tuple[str, ...]) -> Callable:
]
invalid_groups = [i for i, count in enumerate(counts) if count != 1]
if invalid_groups:
invalid_group_names = [", ".join(arg_groups[i]) for i in invalid_groups]
invalid_group_names = [
", ".join(arg_groups[i]) for i in invalid_groups
]
raise ValueError(
"Exactly one argument in each of the following"
" groups must be defined:"
@ -106,7 +108,10 @@ def mock_now(dt_value): # type: ignore
def guard_import(
module_name: str, *, pip_name: Optional[str] = None, package: Optional[str] = None
module_name: str,
*,
pip_name: Optional[str] = None,
package: Optional[str] = None,
) -> Any:
"""Dynamically imports a module and raises a helpful exception if the module is not
installed."""
@ -180,18 +185,18 @@ def build_extra_kwargs(
if field_name in extra_kwargs:
raise ValueError(f"Found {field_name} supplied twice.")
if field_name not in all_required_field_names:
warnings.warn(
f"""WARNING! {field_name} is not default parameter.
warnings.warn(f"""WARNING! {field_name} is not default parameter.
{field_name} was transferred to model_kwargs.
Please confirm that {field_name} is what you intended."""
)
Please confirm that {field_name} is what you intended.""")
extra_kwargs[field_name] = values.pop(field_name)
invalid_model_kwargs = all_required_field_names.intersection(extra_kwargs.keys())
invalid_model_kwargs = all_required_field_names.intersection(
extra_kwargs.keys()
)
if invalid_model_kwargs:
raise ValueError(
f"Parameters {invalid_model_kwargs} should be specified explicitly. "
"Instead they were passed in as part of `model_kwargs` parameter."
f"Parameters {invalid_model_kwargs} should be specified explicitly."
" Instead they were passed in as part of `model_kwargs` parameter."
)
return extra_kwargs
@ -250,7 +255,9 @@ class _AnthropicCommon(BaseLanguageModel):
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that api key and python package exists in environment."""
values["anthropic_api_key"] = convert_to_secret_str(
get_from_dict_or_env(values, "anthropic_api_key", "ANTHROPIC_API_KEY")
get_from_dict_or_env(
values, "anthropic_api_key", "ANTHROPIC_API_KEY"
)
)
# Get custom api url from environment.
values["anthropic_api_url"] = get_from_dict_or_env(
@ -305,7 +312,9 @@ class _AnthropicCommon(BaseLanguageModel):
"""Get the identifying parameters."""
return {**{}, **self._default_params}
def _get_anthropic_stop(self, stop: Optional[List[str]] = None) -> List[str]:
def _get_anthropic_stop(
self, stop: Optional[List[str]] = None
) -> List[str]:
if not self.HUMAN_PROMPT or not self.AI_PROMPT:
raise NameError("Please ensure the anthropic package is loaded")
@ -354,8 +363,8 @@ class Anthropic(LLM, _AnthropicCommon):
def raise_warning(cls, values: Dict) -> Dict:
"""Raise warning that this class is deprecated."""
warnings.warn(
"This Anthropic LLM is deprecated. "
"Please use `from langchain.chat_models import ChatAnthropic` instead"
"This Anthropic LLM is deprecated. Please use `from"
" langchain.chat_models import ChatAnthropic` instead"
)
return values
@ -372,12 +381,16 @@ class Anthropic(LLM, _AnthropicCommon):
return prompt # Already wrapped.
# Guard against common errors in specifying wrong number of newlines.
corrected_prompt, n_subs = re.subn(r"^\n*Human:", self.HUMAN_PROMPT, prompt)
corrected_prompt, n_subs = re.subn(
r"^\n*Human:", self.HUMAN_PROMPT, prompt
)
if n_subs == 1:
return corrected_prompt
# As a last resort, wrap the prompt ourselves to emulate instruct-style.
return f"{self.HUMAN_PROMPT} {prompt}{self.AI_PROMPT} Sure, here you go:\n"
return (
f"{self.HUMAN_PROMPT} {prompt}{self.AI_PROMPT} Sure, here you go:\n"
)
def _call(
self,
@ -476,7 +489,10 @@ class Anthropic(LLM, _AnthropicCommon):
params = {**self._default_params, **kwargs}
for token in self.client.completions.create(
prompt=self._wrap_prompt(prompt), stop_sequences=stop, stream=True, **params
prompt=self._wrap_prompt(prompt),
stop_sequences=stop,
stream=True,
**params,
):
chunk = GenerationChunk(text=token.completion)
yield chunk

@ -1,101 +0,0 @@
import re
from concurrent.futures import ThreadPoolExecutor, as_completed
from swarms.models.openai_models import OpenAIChat
class AutoTempAgent:
"""
AutoTemp is a tool for automatically selecting the best temperature setting for a given task.
Flow:
1. Generate outputs at a range of temperature settings.
2. Evaluate each output using the default temperature setting.
3. Select the best output based on the evaluation score.
4. Return the best output.
Args:
temperature (float, optional): The default temperature setting to use. Defaults to 0.5.
api_key (str, optional): Your OpenAI API key. Defaults to None.
alt_temps ([type], optional): A list of alternative temperature settings to try. Defaults to None.
auto_select (bool, optional): If True, the best temperature setting will be automatically selected. Defaults to True.
max_workers (int, optional): The maximum number of workers to use when generating outputs. Defaults to 6.
Returns:
[type]: [description]
Examples:
>>> from swarms.demos.autotemp import AutoTemp
>>> autotemp = AutoTemp()
>>> autotemp.run("Generate a 10,000 word blog on mental clarity and the benefits of meditation.", "0.4,0.6,0.8,1.0,1.2,1.4")
Best AutoTemp Output (Temp 0.4 | Score: 100.0):
Generate a 10,000 word blog on mental clarity and the benefits of meditation.
"""
def __init__(
self,
temperature: float = 0.5,
api_key: str = None,
alt_temps=None,
auto_select=True,
max_workers=6,
):
self.alt_temps = alt_temps if alt_temps else [0.4, 0.6, 0.8, 1.0, 1.2, 1.4]
self.auto_select = auto_select
self.max_workers = max_workers
self.temperature = temperature
self.alt_temps = alt_temps
self.llm = OpenAIChat(
openai_api_key=api_key,
temperature=temperature,
)
def evaluate_output(self, output: str):
"""Evaluate the output using the default temperature setting."""
eval_prompt = f"""
Evaluate the following output which was generated at a temperature setting of {self.temperature}.
Provide a precise score from 0.0 to 100.0, considering the criteria of relevance, clarity, utility, pride, and delight.
Output to evaluate:
---
{output}
---
"""
score_text = self.llm(prompt=eval_prompt)
score_match = re.search(r"\b\d+(\.\d)?\b", score_text)
return round(float(score_match.group()), 1) if score_match else 0.0
def run(self, task: str, temperature_string):
"""Run the AutoTemp agent."""
temperature_list = [
float(temp.strip()) for temp in temperature_string.split(",")
]
outputs = {}
scores = {}
with ThreadPoolExecutor(max_workers=self.max_workers) as executor:
future_to_temp = {
executor.submit(self.llm.generate, task, temp): temp
for temp in temperature_list
}
for future in as_completed(future_to_temp):
temp = future_to_temp[future]
output_text = future.result()
outputs[temp] = output_text
scores[temp] = self.evaluate_output(output_text, temp)
if not scores:
return "No valid outputs generated.", None
sorted_scores = sorted(scores.items(), key=lambda item: item[1], reverse=True)
best_temp, best_score = sorted_scores[0]
best_output = outputs[best_temp]
return (
f"Best AutoTemp Output (Temp {best_temp} | Score: {best_score}):\n{best_output}"
if self.auto_select
else "\n".join(
f"Temp {temp} | Score: {score}:\n{outputs[temp]}"
for temp, score in sorted_scores
)
)

@ -98,7 +98,9 @@ class BioClip:
) = open_clip.create_model_and_transforms(model_path)
self.tokenizer = open_clip.get_tokenizer(model_path)
self.device = (
torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
torch.device("cuda")
if torch.cuda.is_available()
else torch.device("cpu")
)
self.model.to(self.device)
self.model.eval()
@ -110,13 +112,17 @@ class BioClip:
template: str = "this is a photo of ",
context_length: int = 256,
):
image = torch.stack([self.preprocess_val(Image.open(img_path))]).to(self.device)
image = torch.stack([self.preprocess_val(Image.open(img_path))]).to(
self.device
)
texts = self.tokenizer(
[template + l for l in labels], context_length=context_length
).to(self.device)
with torch.no_grad():
image_features, text_features, logit_scale = self.model(image, texts)
image_features, text_features, logit_scale = self.model(
image, texts
)
logits = (
(logit_scale * image_features @ text_features.t())
.detach()
@ -142,7 +148,9 @@ class BioClip:
title = (
metadata["filename"]
+ "\n"
+ "\n".join([f"{k}: {v*100:.1f}" for k, v in metadata["top_probs"].items()])
+ "\n".join(
[f"{k}: {v*100:.1f}" for k, v in metadata["top_probs"].items()]
)
)
ax.set_title(title, fontsize=14)
plt.tight_layout()

@ -154,7 +154,7 @@ class BioGPT:
min_length=self.min_length,
max_length=self.max_length,
num_beams=num_beams,
early_stopping=early_stopping
early_stopping=early_stopping,
)
return self.tokenizer.decode(beam_output[0], skip_special_tokens=True)

@ -96,7 +96,9 @@ class BaseCohere(Serializable):
values, "cohere_api_key", "COHERE_API_KEY"
)
client_name = values["user_agent"]
values["client"] = cohere.Client(cohere_api_key, client_name=client_name)
values["client"] = cohere.Client(
cohere_api_key, client_name=client_name
)
values["async_client"] = cohere.AsyncClient(
cohere_api_key, client_name=client_name
)
@ -172,17 +174,23 @@ class Cohere(LLM, BaseCohere):
"""Return type of llm."""
return "cohere"
def _invocation_params(self, stop: Optional[List[str]], **kwargs: Any) -> dict:
def _invocation_params(
self, stop: Optional[List[str]], **kwargs: Any
) -> dict:
params = self._default_params
if self.stop is not None and stop is not None:
raise ValueError("`stop` found in both the input and default params.")
raise ValueError(
"`stop` found in both the input and default params."
)
elif self.stop is not None:
params["stop_sequences"] = self.stop
else:
params["stop_sequences"] = stop
return {**params, **kwargs}
def _process_response(self, response: Any, stop: Optional[List[str]]) -> str:
def _process_response(
self, response: Any, stop: Optional[List[str]]
) -> str:
text = response.generations[0].text
# If stop tokens are provided, Cohere's endpoint returns them.
# In order to make this consistent with other endpoints, we strip them.

@ -169,8 +169,8 @@ class Dalle3:
print(
colored(
(
f"Error running Dalle3: {error} try optimizing your api key and"
" or try again"
f"Error running Dalle3: {error} try optimizing your api"
" key and or try again"
),
"red",
)
@ -234,8 +234,8 @@ class Dalle3:
print(
colored(
(
f"Error running Dalle3: {error} try optimizing your api key and"
" or try again"
f"Error running Dalle3: {error} try optimizing your api"
" key and or try again"
),
"red",
)
@ -248,8 +248,7 @@ class Dalle3:
"""Print the Dalle3 dashboard"""
print(
colored(
(
f"""Dalle3 Dashboard:
f"""Dalle3 Dashboard:
--------------------
Model: {self.model}
@ -265,13 +264,14 @@ class Dalle3:
--------------------
"""
),
""",
"green",
)
)
def process_batch_concurrently(self, tasks: List[str], max_workers: int = 5):
def process_batch_concurrently(
self, tasks: List[str], max_workers: int = 5
):
"""
Process a batch of tasks concurrently
@ -293,8 +293,12 @@ class Dalle3:
['https://cdn.openai.com/dall-e/encoded/feats/feats_01J9J5ZKJZJY9.png',
"""
with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as executor:
future_to_task = {executor.submit(self, task): task for task in tasks}
with concurrent.futures.ThreadPoolExecutor(
max_workers=max_workers
) as executor:
future_to_task = {
executor.submit(self, task): task for task in tasks
}
results = []
for future in concurrent.futures.as_completed(future_to_task):
task = future_to_task[future]
@ -307,14 +311,20 @@ class Dalle3:
print(
colored(
(
f"Error running Dalle3: {error} try optimizing your api key and"
" or try again"
f"Error running Dalle3: {error} try optimizing"
" your api key and or try again"
),
"red",
)
)
print(colored(f"Error running Dalle3: {error.http_status}", "red"))
print(colored(f"Error running Dalle3: {error.error}", "red"))
print(
colored(
f"Error running Dalle3: {error.http_status}", "red"
)
)
print(
colored(f"Error running Dalle3: {error.error}", "red")
)
raise error
def _generate_uuid(self):

@ -28,7 +28,10 @@ def async_retry(max_retries=3, exceptions=(Exception,), delay=1):
retries -= 1
if retries <= 0:
raise
print(f"Retry after exception: {e}, Attempts remaining: {retries}")
print(
f"Retry after exception: {e}, Attempts remaining:"
f" {retries}"
)
await asyncio.sleep(delay)
return wrapper
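# Illustrative use of the async_retry decorator above (the decorated coroutine
# and its exception type are assumptions, not part of the source):
#
#     @async_retry(max_retries=3, exceptions=(ConnectionError,), delay=1)
#     async def flaky_call():
#         ...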
@ -62,7 +65,9 @@ class DistilWhisperModel:
def __init__(self, model_id="distil-whisper/distil-large-v2"):
self.device = "cuda:0" if torch.cuda.is_available() else "cpu"
self.torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
self.torch_dtype = (
torch.float16 if torch.cuda.is_available() else torch.float32
)
self.model_id = model_id
self.model = AutoModelForSpeechSeq2Seq.from_pretrained(
model_id,
@ -119,7 +124,9 @@ class DistilWhisperModel:
try:
with torch.no_grad():
# Load the whole audio file, but process and transcribe it in chunks
audio_input = self.processor.audio_file_to_array(audio_file_path)
audio_input = self.processor.audio_file_to_array(
audio_file_path
)
sample_rate = audio_input.sampling_rate
len(audio_input.array) / sample_rate
chunks = [
@ -139,7 +146,9 @@ class DistilWhisperModel:
return_tensors="pt",
padding=True,
)
processed_inputs = processed_inputs.input_values.to(self.device)
processed_inputs = processed_inputs.input_values.to(
self.device
)
# Generate transcription for the chunk
logits = self.model.generate(processed_inputs)
@ -157,4 +166,6 @@ class DistilWhisperModel:
time.sleep(chunk_duration)
except Exception as e:
print(colored(f"An error occurred during transcription: {e}", "red"))
print(
colored(f"An error occurred during transcription: {e}", "red")
)

@ -79,7 +79,9 @@ class ElevenLabsText2SpeechTool(BaseTool):
f.write(speech)
return f.name
except Exception as e:
raise RuntimeError(f"Error while running ElevenLabsText2SpeechTool: {e}")
raise RuntimeError(
f"Error while running ElevenLabsText2SpeechTool: {e}"
)
def play(self, speech_file: str) -> None:
"""Play the text as speech."""
@ -93,7 +95,9 @@ class ElevenLabsText2SpeechTool(BaseTool):
"""Stream the text as speech as it is generated.
Play the text in your speakers."""
elevenlabs = _import_elevenlabs()
speech_stream = elevenlabs.generate(text=query, model=self.model, stream=True)
speech_stream = elevenlabs.generate(
text=query, model=self.model, stream=True
)
elevenlabs.stream(speech_stream)
def save(self, speech_file: str, path: str) -> None:

@ -10,7 +10,9 @@ from pydantic import BaseModel, StrictFloat, StrictInt, validator
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Load the classes for image classification
with open(os.path.join(os.path.dirname(__file__), "fast_vit_classes.json")) as f:
with open(
os.path.join(os.path.dirname(__file__), "fast_vit_classes.json")
) as f:
FASTVIT_IMAGENET_1K_CLASSES = json.load(f)
@ -20,7 +22,9 @@ class ClassificationResult(BaseModel):
@validator("class_id", "confidence", pre=True, each_item=True)
def check_list_contents(cls, v):
assert isinstance(v, int) or isinstance(v, float), "must be integer or float"
assert isinstance(v, int) or isinstance(
v, float
), "must be integer or float"
return v
@ -50,7 +54,9 @@ class FastViT:
"hf_hub:timm/fastvit_s12.apple_in1k", pretrained=True
).to(DEVICE)
data_config = timm.data.resolve_model_data_config(self.model)
self.transforms = timm.data.create_transform(**data_config, is_training=False)
self.transforms = timm.data.create_transform(
**data_config, is_training=False
)
self.model.eval()
def __call__(

@ -46,7 +46,9 @@ class Fuyu:
self.tokenizer = AutoTokenizer.from_pretrained(pretrained_path)
self.image_processor = FuyuImageProcessor()
self.processor = FuyuProcessor(
image_processor=self.image_processor, tokenizer=self.tokenizer, **kwargs
image_processor=self.image_processor,
tokenizer=self.tokenizer,
**kwargs,
)
self.model = FuyuForCausalLM.from_pretrained(
pretrained_path,
@ -69,8 +71,12 @@ class Fuyu:
for k, v in model_inputs.items():
model_inputs[k] = v.to(self.device_map)
output = self.model.generate(**model_inputs, max_new_tokens=self.max_new_tokens)
text = self.processor.batch_decode(output[:, -7:], skip_special_tokens=True)
output = self.model.generate(
**model_inputs, max_new_tokens=self.max_new_tokens
)
text = self.processor.batch_decode(
output[:, -7:], skip_special_tokens=True
)
return print(str(text))
def get_img_from_web(self, img_url: str):

@ -190,12 +190,15 @@ class GPT4Vision:
"""Process a batch of tasks and images"""
with concurrent.futures.ThreadPoolExecutor() as executor:
futures = [
executor.submit(self.run, task, img) for task, img in tasks_images
executor.submit(self.run, task, img)
for task, img in tasks_images
]
results = [future.result() for future in futures]
return results
async def run_batch_async(self, tasks_images: List[Tuple[str, str]]) -> List[str]:
async def run_batch_async(
self, tasks_images: List[Tuple[str, str]]
) -> List[str]:
"""Process a batch of tasks and images asynchronously"""
loop = asyncio.get_event_loop()
futures = [

@ -133,7 +133,9 @@ class HuggingfaceLLM:
):
self.logger = logging.getLogger(__name__)
self.device = (
device if device else ("cuda" if torch.cuda.is_available() else "cpu")
device
if device
else ("cuda" if torch.cuda.is_available() else "cpu")
)
self.model_id = model_id
self.max_length = max_length
@ -178,7 +180,11 @@ class HuggingfaceLLM:
except Exception as e:
# self.logger.error(f"Failed to load the model or the tokenizer: {e}")
# raise
print(colored(f"Failed to load the model and or the tokenizer: {e}", "red"))
print(
colored(
f"Failed to load the model and or the tokenizer: {e}", "red"
)
)
def print_error(self, error: str):
"""Print error"""
@ -207,12 +213,16 @@ class HuggingfaceLLM:
if self.distributed:
self.model = DDP(self.model)
except Exception as error:
self.logger.error(f"Failed to load the model or the tokenizer: {error}")
self.logger.error(
f"Failed to load the model or the tokenizer: {error}"
)
raise
def concurrent_run(self, tasks: List[str], max_workers: int = 5):
"""Concurrently generate text for a list of prompts."""
with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as executor:
with concurrent.futures.ThreadPoolExecutor(
max_workers=max_workers
) as executor:
results = list(executor.map(self.run, tasks))
return results
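# Minimal standalone sketch of the same fan-out pattern, independent of
# HuggingfaceLLM; assumes `import concurrent.futures` (already used above),
# and fn/tasks are placeholders:
def _concurrent_map_sketch(fn, tasks, max_workers: int = 5):
    """Illustrative only: run fn over tasks in a thread pool, results in order."""
    with concurrent.futures.ThreadPoolExecutor(
        max_workers=max_workers
    ) as executor:
        return list(executor.map(fn, tasks))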
@ -220,7 +230,8 @@ class HuggingfaceLLM:
"""Process a batch of tasks and images"""
with concurrent.futures.ThreadPoolExecutor() as executor:
futures = [
executor.submit(self.run, task, img) for task, img in tasks_images
executor.submit(self.run, task, img)
for task, img in tasks_images
]
results = [future.result() for future in futures]
return results
@ -243,7 +254,9 @@ class HuggingfaceLLM:
self.print_dashboard(task)
try:
inputs = self.tokenizer.encode(task, return_tensors="pt").to(self.device)
inputs = self.tokenizer.encode(task, return_tensors="pt").to(
self.device
)
# self.log.start()
@ -279,8 +292,8 @@ class HuggingfaceLLM:
print(
colored(
(
f"HuggingfaceLLM could not generate text because of error: {e},"
" try optimizing your arguments"
"HuggingfaceLLM could not generate text because of"
f" error: {e}, try optimizing your arguments"
),
"red",
)
@ -305,7 +318,9 @@ class HuggingfaceLLM:
self.print_dashboard(task)
try:
inputs = self.tokenizer.encode(task, return_tensors="pt").to(self.device)
inputs = self.tokenizer.encode(task, return_tensors="pt").to(
self.device
)
# self.log.start()

@ -66,7 +66,9 @@ class Idefics:
max_length=100,
):
self.device = (
device if device else ("cuda" if torch.cuda.is_available() else "cpu")
device
if device
else ("cuda" if torch.cuda.is_available() else "cpu")
)
self.model = IdeficsForVisionText2Text.from_pretrained(
checkpoint,

@ -54,7 +54,9 @@ class JinaEmbeddings:
):
self.logger = logging.getLogger(__name__)
self.device = (
device if device else ("cuda" if torch.cuda.is_available() else "cpu")
device
if device
else ("cuda" if torch.cuda.is_available() else "cpu")
)
self.model_id = model_id
self.max_length = max_length
@ -83,7 +85,9 @@ class JinaEmbeddings:
try:
self.model = AutoModelForCausalLM.from_pretrained(
self.model_id, quantization_config=bnb_config, trust_remote_code=True
self.model_id,
quantization_config=bnb_config,
trust_remote_code=True,
)
self.model # .to(self.device)
@ -112,7 +116,9 @@ class JinaEmbeddings:
if self.distributed:
self.model = DDP(self.model)
except Exception as error:
self.logger.error(f"Failed to load the model or the tokenizer: {error}")
self.logger.error(
f"Failed to load the model or the tokenizer: {error}"
)
raise
def run(self, task: str):

@ -70,11 +70,13 @@ class Kosmos2(BaseModel):
prompt = "<grounding>An image of"
inputs = self.processor(text=prompt, images=image, return_tensors="pt")
outputs = self.model.generate(**inputs, use_cache=True, max_new_tokens=64)
outputs = self.model.generate(
**inputs, use_cache=True, max_new_tokens=64
)
generated_text = self.processor.batch_decode(outputs, skip_special_tokens=True)[
0
]
generated_text = self.processor.batch_decode(
outputs, skip_special_tokens=True
)[0]
# The actual processing of generated_text to entities would go here
# For the purpose of this example, assume a mock function 'extract_entities' exists:
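# A hypothetical stub for that mock, returning (entity_name, normalised bbox)
# tuples in the shape the post-processing below expects; a real implementation
# would parse the grounding tags emitted by the model:
def extract_entities(generated_text: str):
    return [("an image", (0.0, 0.0, 1.0, 1.0))]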
@ -99,7 +101,9 @@ class Kosmos2(BaseModel):
if not entities:
return Detections.empty()
class_ids = [0] * len(entities) # Replace with actual class ID extraction logic
class_ids = [0] * len(
entities
) # Replace with actual class ID extraction logic
xyxys = [
(
e[1][0] * image.width,
@ -111,7 +115,9 @@ class Kosmos2(BaseModel):
]
confidences = [1.0] * len(entities) # Placeholder confidence
return Detections(xyxy=xyxys, class_id=class_ids, confidence=confidences)
return Detections(
xyxy=xyxys, class_id=class_ids, confidence=confidences
)
# Usage:

@ -145,12 +145,12 @@ class Kosmos:
elif isinstance(image, torch.Tensor):
# pdb.set_trace()
image_tensor = image.cpu()
reverse_norm_mean = torch.tensor([0.48145466, 0.4578275, 0.40821073])[
:, None, None
]
reverse_norm_std = torch.tensor([0.26862954, 0.26130258, 0.27577711])[
:, None, None
]
reverse_norm_mean = torch.tensor(
[0.48145466, 0.4578275, 0.40821073]
)[:, None, None]
reverse_norm_std = torch.tensor(
[0.26862954, 0.26130258, 0.27577711]
)[:, None, None]
image_tensor = image_tensor * reverse_norm_std + reverse_norm_mean
pil_img = T.ToPILImage()(image_tensor)
image_h = pil_img.height
@ -188,7 +188,11 @@ class Kosmos:
# random color
color = tuple(np.random.randint(0, 255, size=3).tolist())
new_image = cv2.rectangle(
new_image, (orig_x1, orig_y1), (orig_x2, orig_y2), color, box_line
new_image,
(orig_x1, orig_y1),
(orig_x2, orig_y2),
color,
box_line,
)
l_o, r_o = (
@ -211,7 +215,10 @@ class Kosmos:
# add text background
(text_width, text_height), _ = cv2.getTextSize(
f" {entity_name}", cv2.FONT_HERSHEY_COMPLEX, text_size, text_line
f" {entity_name}",
cv2.FONT_HERSHEY_COMPLEX,
text_size,
text_line,
)
text_bg_x1, text_bg_y1, text_bg_x2, text_bg_y2 = (
x1,
@ -222,7 +229,8 @@ class Kosmos:
for prev_bbox in previous_bboxes:
while is_overlapping(
(text_bg_x1, text_bg_y1, text_bg_x2, text_bg_y2), prev_bbox
(text_bg_x1, text_bg_y1, text_bg_x2, text_bg_y2),
prev_bbox,
):
text_bg_y1 += (
text_height + text_offset_original + 2 * text_spaces
@ -230,14 +238,18 @@ class Kosmos:
text_bg_y2 += (
text_height + text_offset_original + 2 * text_spaces
)
y1 += text_height + text_offset_original + 2 * text_spaces
y1 += (
text_height + text_offset_original + 2 * text_spaces
)
if text_bg_y2 >= image_h:
text_bg_y1 = max(
0,
image_h
- (
text_height + text_offset_original + 2 * text_spaces
text_height
+ text_offset_original
+ 2 * text_spaces
),
)
text_bg_y2 = image_h
@ -270,7 +282,9 @@ class Kosmos:
cv2.LINE_AA,
)
# previous_locations.append((x1, y1))
previous_bboxes.append((text_bg_x1, text_bg_y1, text_bg_x2, text_bg_y2))
previous_bboxes.append(
(text_bg_x1, text_bg_y1, text_bg_x2, text_bg_y2)
)
pil_image = Image.fromarray(new_image[:, :, [2, 1, 0]])
if save_path:

@ -121,7 +121,11 @@ class LlamaFunctionCaller:
)
def add_func(
self, name: str, function: Callable, description: str, arguments: List[Dict]
self,
name: str,
function: Callable,
description: str,
arguments: List[Dict],
):
"""
Adds a new function to the LlamaFunctionCaller.
@ -172,12 +176,17 @@ class LlamaFunctionCaller:
if self.streaming:
out = self.model.generate(
**inputs, streamer=streamer, max_new_tokens=self.max_tokens, **kwargs
**inputs,
streamer=streamer,
max_new_tokens=self.max_tokens,
**kwargs,
)
return out
else:
out = self.model.generate(**inputs, max_length=self.max_tokens, **kwargs)
out = self.model.generate(
**inputs, max_length=self.max_tokens, **kwargs
)
# return self.tokenizer.decode(out[0], skip_special_tokens=True)
return out

@ -49,7 +49,9 @@ class Mistral:
# Check if the specified device is available
if not torch.cuda.is_available() and device == "cuda":
raise ValueError("CUDA is not available. Please choose a different device.")
raise ValueError(
"CUDA is not available. Please choose a different device."
)
# Load the model and tokenizer
self.model = None
@ -70,7 +72,9 @@ class Mistral:
"""Run the model on a given task."""
try:
model_inputs = self.tokenizer([task], return_tensors="pt").to(self.device)
model_inputs = self.tokenizer([task], return_tensors="pt").to(
self.device
)
generated_ids = self.model.generate(
**model_inputs,
max_length=self.max_length,
@ -87,7 +91,9 @@ class Mistral:
"""Run the model on a given task."""
try:
model_inputs = self.tokenizer([task], return_tensors="pt").to(self.device)
model_inputs = self.tokenizer([task], return_tensors="pt").to(
self.device
)
generated_ids = self.model.generate(
**model_inputs,
max_length=self.max_length,

@ -29,7 +29,9 @@ class MPT7B:
"""
def __init__(self, model_name: str, tokenizer_name: str, max_tokens: int = 100):
def __init__(
self, model_name: str, tokenizer_name: str, max_tokens: int = 100
):
# Loading model and tokenizer details
self.model_name = model_name
self.tokenizer_name = tokenizer_name
@ -118,7 +120,10 @@ class MPT7B:
"""
with torch.autocast("cuda", dtype=torch.bfloat16):
return self.pipe(
prompt, max_new_tokens=self.max_tokens, do_sample=True, use_cache=True
prompt,
max_new_tokens=self.max_tokens,
do_sample=True,
use_cache=True,
)[0]["generated_text"]
async def generate_async(self, prompt: str) -> str:

@ -41,8 +41,12 @@ class Nougat:
self.min_length = min_length
self.max_new_tokens = max_new_tokens
self.processor = NougatProcessor.from_pretrained(self.model_name_or_path)
self.model = VisionEncoderDecoderModel.from_pretrained(self.model_name_or_path)
self.processor = NougatProcessor.from_pretrained(
self.model_name_or_path
)
self.model = VisionEncoderDecoderModel.from_pretrained(
self.model_name_or_path
)
self.device = "cuda" if torch.cuda.is_available() else "cpu"
self.model.to(self.device)
@ -63,8 +67,12 @@ class Nougat:
max_new_tokens=self.max_new_tokens,
)
sequence = self.processor.batch_decode(outputs, skip_special_tokens=True)[0]
sequence = self.processor.post_process_generation(sequence, fix_markdown=False)
sequence = self.processor.batch_decode(
outputs, skip_special_tokens=True
)[0]
sequence = self.processor.post_process_generation(
sequence, fix_markdown=False
)
out = print(sequence)
return out

@ -43,7 +43,9 @@ def get_pydantic_field_names(cls: Any) -> Set[str]:
logger = logging.getLogger(__name__)
def _create_retry_decorator(embeddings: OpenAIEmbeddings) -> Callable[[Any], Any]:
def _create_retry_decorator(
embeddings: OpenAIEmbeddings,
) -> Callable[[Any], Any]:
import llm
min_seconds = 4
@ -118,7 +120,9 @@ def embed_with_retry(embeddings: OpenAIEmbeddings, **kwargs: Any) -> Any:
return _embed_with_retry(**kwargs)
async def async_embed_with_retry(embeddings: OpenAIEmbeddings, **kwargs: Any) -> Any:
async def async_embed_with_retry(
embeddings: OpenAIEmbeddings, **kwargs: Any
) -> Any:
"""Use tenacity to retry the embedding call."""
@_async_retry_decorator(embeddings)
@ -172,7 +176,9 @@ class OpenAIEmbeddings(BaseModel, Embeddings):
client: Any #: :meta private:
model: str = "text-embedding-ada-002"
deployment: str = model # to support Azure OpenAI Service custom deployment names
deployment: str = (
model # to support Azure OpenAI Service custom deployment names
)
openai_api_version: Optional[str] = None
# to support Azure OpenAI Service custom endpoints
openai_api_base: Optional[str] = None
@ -229,11 +235,14 @@ class OpenAIEmbeddings(BaseModel, Embeddings):
)
extra[field_name] = values.pop(field_name)
invalid_model_kwargs = all_required_field_names.intersection(extra.keys())
invalid_model_kwargs = all_required_field_names.intersection(
extra.keys()
)
if invalid_model_kwargs:
raise ValueError(
f"Parameters {invalid_model_kwargs} should be specified explicitly. "
"Instead they were passed in as part of `model_kwargs` parameter."
f"Parameters {invalid_model_kwargs} should be specified"
" explicitly. Instead they were passed in as part of"
" `model_kwargs` parameter."
)
values["model_kwargs"] = extra
@ -333,7 +342,9 @@ class OpenAIEmbeddings(BaseModel, Embeddings):
try:
encoding = tiktoken.encoding_for_model(model_name)
except KeyError:
logger.warning("Warning: model not found. Using cl100k_base encoding.")
logger.warning(
"Warning: model not found. Using cl100k_base encoding."
)
model = "cl100k_base"
encoding = tiktoken.get_encoding(model)
for i, text in enumerate(texts):
@ -384,11 +395,11 @@ class OpenAIEmbeddings(BaseModel, Embeddings):
self,
input="",
**self._invocation_params,
)[
"data"
][0]["embedding"]
)["data"][0]["embedding"]
else:
average = np.average(_result, axis=0, weights=num_tokens_in_batch[i])
average = np.average(
_result, axis=0, weights=num_tokens_in_batch[i]
)
embeddings[i] = (average / np.linalg.norm(average)).tolist()
return embeddings
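# Standalone sketch of the token-weighted combination step above; assumes
# `import numpy as np` (already used in this module) and placeholder names,
# so treat it as illustrative only:
def _combine_chunk_embeddings(chunk_embeddings, chunk_token_counts):
    """Token-weighted average of per-chunk embeddings, L2-normalised."""
    average = np.average(
        chunk_embeddings, axis=0, weights=chunk_token_counts
    )
    return (average / np.linalg.norm(average)).tolist()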
@ -414,7 +425,9 @@ class OpenAIEmbeddings(BaseModel, Embeddings):
try:
encoding = tiktoken.encoding_for_model(model_name)
except KeyError:
logger.warning("Warning: model not found. Using cl100k_base encoding.")
logger.warning(
"Warning: model not found. Using cl100k_base encoding."
)
model = "cl100k_base"
encoding = tiktoken.get_encoding(model)
for i, text in enumerate(texts):
@ -458,7 +471,9 @@ class OpenAIEmbeddings(BaseModel, Embeddings):
)
)["data"][0]["embedding"]
else:
average = np.average(_result, axis=0, weights=num_tokens_in_batch[i])
average = np.average(
_result, axis=0, weights=num_tokens_in_batch[i]
)
embeddings[i] = (average / np.linalg.norm(average)).tolist()
return embeddings
@ -495,7 +510,9 @@ class OpenAIEmbeddings(BaseModel, Embeddings):
"""
# NOTE: to keep things simple, we assume the list may contain texts longer
# than the maximum context and use length-safe embedding function.
return await self._aget_len_safe_embeddings(texts, engine=self.deployment)
return await self._aget_len_safe_embeddings(
texts, engine=self.deployment
)
def embed_query(self, text: str) -> List[float]:
"""Call out to OpenAI's embedding endpoint for embedding query text.

@ -146,7 +146,8 @@ class OpenAIFunctionCaller:
self.messages.append({"role": role, "content": content})
@retry(
wait=wait_random_exponential(multiplier=1, max=40), stop=stop_after_attempt(3)
wait=wait_random_exponential(multiplier=1, max=40),
stop=stop_after_attempt(3),
)
def chat_completion_request(
self,
@ -194,17 +195,22 @@ class OpenAIFunctionCaller:
elif message["role"] == "user":
print(
colored(
f"user: {message['content']}\n", role_to_color[message["role"]]
f"user: {message['content']}\n",
role_to_color[message["role"]],
)
)
elif message["role"] == "assistant" and message.get("function_call"):
elif message["role"] == "assistant" and message.get(
"function_call"
):
print(
colored(
f"assistant: {message['function_call']}\n",
role_to_color[message["role"]],
)
)
elif message["role"] == "assistant" and not message.get("function_call"):
elif message["role"] == "assistant" and not message.get(
"function_call"
):
print(
colored(
f"assistant: {message['content']}\n",

@ -62,19 +62,25 @@ def _stream_response_to_generation_chunk(
return GenerationChunk(
text=stream_response["choices"][0]["text"],
generation_info=dict(
finish_reason=stream_response["choices"][0].get("finish_reason", None),
finish_reason=stream_response["choices"][0].get(
"finish_reason", None
),
logprobs=stream_response["choices"][0].get("logprobs", None),
),
)
def _update_response(response: Dict[str, Any], stream_response: Dict[str, Any]) -> None:
def _update_response(
response: Dict[str, Any], stream_response: Dict[str, Any]
) -> None:
"""Update response from the stream response."""
response["choices"][0]["text"] += stream_response["choices"][0]["text"]
response["choices"][0]["finish_reason"] = stream_response["choices"][0].get(
"finish_reason", None
)
response["choices"][0]["logprobs"] = stream_response["choices"][0]["logprobs"]
response["choices"][0]["logprobs"] = stream_response["choices"][0][
"logprobs"
]
def _streaming_response_template() -> Dict[str, Any]:
@ -315,9 +321,11 @@ class BaseOpenAI(BaseLLM):
chunk.text,
chunk=chunk,
verbose=self.verbose,
logprobs=chunk.generation_info["logprobs"]
if chunk.generation_info
else None,
logprobs=(
chunk.generation_info["logprobs"]
if chunk.generation_info
else None
),
)
async def _astream(
@ -339,9 +347,11 @@ class BaseOpenAI(BaseLLM):
chunk.text,
chunk=chunk,
verbose=self.verbose,
logprobs=chunk.generation_info["logprobs"]
if chunk.generation_info
else None,
logprobs=(
chunk.generation_info["logprobs"]
if chunk.generation_info
else None
),
)
def _generate(
@ -377,10 +387,14 @@ class BaseOpenAI(BaseLLM):
for _prompts in sub_prompts:
if self.streaming:
if len(_prompts) > 1:
raise ValueError("Cannot stream results with multiple prompts.")
raise ValueError(
"Cannot stream results with multiple prompts."
)
generation: Optional[GenerationChunk] = None
for chunk in self._stream(_prompts[0], stop, run_manager, **kwargs):
for chunk in self._stream(
_prompts[0], stop, run_manager, **kwargs
):
if generation is None:
generation = chunk
else:
@ -389,12 +403,16 @@ class BaseOpenAI(BaseLLM):
choices.append(
{
"text": generation.text,
"finish_reason": generation.generation_info.get("finish_reason")
if generation.generation_info
else None,
"logprobs": generation.generation_info.get("logprobs")
if generation.generation_info
else None,
"finish_reason": (
generation.generation_info.get("finish_reason")
if generation.generation_info
else None
),
"logprobs": (
generation.generation_info.get("logprobs")
if generation.generation_info
else None
),
}
)
else:
@ -424,7 +442,9 @@ class BaseOpenAI(BaseLLM):
for _prompts in sub_prompts:
if self.streaming:
if len(_prompts) > 1:
raise ValueError("Cannot stream results with multiple prompts.")
raise ValueError(
"Cannot stream results with multiple prompts."
)
generation: Optional[GenerationChunk] = None
async for chunk in self._astream(
@ -438,12 +458,16 @@ class BaseOpenAI(BaseLLM):
choices.append(
{
"text": generation.text,
"finish_reason": generation.generation_info.get("finish_reason")
if generation.generation_info
else None,
"logprobs": generation.generation_info.get("logprobs")
if generation.generation_info
else None,
"finish_reason": (
generation.generation_info.get("finish_reason")
if generation.generation_info
else None
),
"logprobs": (
generation.generation_info.get("logprobs")
if generation.generation_info
else None
),
}
)
else:
@ -463,7 +487,9 @@ class BaseOpenAI(BaseLLM):
"""Get the sub prompts for llm call."""
if stop is not None:
if "stop" in params:
raise ValueError("`stop` found in both the input and default params.")
raise ValueError(
"`stop` found in both the input and default params."
)
params["stop"] = stop
if params["max_tokens"] == -1:
if len(prompts) != 1:
@ -541,7 +567,9 @@ class BaseOpenAI(BaseLLM):
try:
enc = tiktoken.encoding_for_model(model_name)
except KeyError:
logger.warning("Warning: model not found. Using cl100k_base encoding.")
logger.warning(
"Warning: model not found. Using cl100k_base encoding."
)
model = "cl100k_base"
enc = tiktoken.get_encoding(model)
@ -602,8 +630,9 @@ class BaseOpenAI(BaseLLM):
if context_size is None:
raise ValueError(
f"Unknown model: {modelname}. Please provide a valid OpenAI model name."
"Known models are: " + ", ".join(model_token_mapping.keys())
f"Unknown model: {modelname}. Please provide a valid OpenAI"
" model name. Known models are: "
+ ", ".join(model_token_mapping.keys())
)
return context_size
@ -753,7 +782,9 @@ class OpenAIChat(BaseLLM):
@root_validator(pre=True)
def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:
"""Build extra kwargs from additional params that were passed in."""
all_required_field_names = {field.alias for field in cls.__fields__.values()}
all_required_field_names = {
field.alias for field in cls.__fields__.values()
}
extra = values.get("model_kwargs", {})
for field_name in list(values):
@ -820,13 +851,21 @@ class OpenAIChat(BaseLLM):
) -> Tuple:
if len(prompts) > 1:
raise ValueError(
f"OpenAIChat currently only supports single prompt, got {prompts}"
"OpenAIChat currently only supports single prompt, got"
f" {prompts}"
)
messages = self.prefix_messages + [{"role": "user", "content": prompts[0]}]
params: Dict[str, Any] = {**{"model": self.model_name}, **self._default_params}
messages = self.prefix_messages + [
{"role": "user", "content": prompts[0]}
]
params: Dict[str, Any] = {
**{"model": self.model_name},
**self._default_params,
}
if stop is not None:
if "stop" in params:
raise ValueError("`stop` found in both the input and default params.")
raise ValueError(
"`stop` found in both the input and default params."
)
params["stop"] = stop
if params.get("max_tokens") == -1:
# for ChatGPT api, omitting max_tokens is equivalent to having no limit
@ -897,7 +936,11 @@ class OpenAIChat(BaseLLM):
}
return LLMResult(
generations=[
[Generation(text=full_response["choices"][0]["message"]["content"])]
[
Generation(
text=full_response["choices"][0]["message"]["content"]
)
]
],
llm_output=llm_output,
)
@ -911,7 +954,9 @@ class OpenAIChat(BaseLLM):
) -> LLMResult:
if self.streaming:
generation: Optional[GenerationChunk] = None
async for chunk in self._astream(prompts[0], stop, run_manager, **kwargs):
async for chunk in self._astream(
prompts[0], stop, run_manager, **kwargs
):
if generation is None:
generation = chunk
else:
@ -930,7 +975,11 @@ class OpenAIChat(BaseLLM):
}
return LLMResult(
generations=[
[Generation(text=full_response["choices"][0]["message"]["content"])]
[
Generation(
text=full_response["choices"][0]["message"]["content"]
)
]
],
llm_output=llm_output,
)

@ -37,10 +37,16 @@ def _create_retry_decorator() -> Callable[[Any], Any]:
return retry(
reraise=True,
stop=stop_after_attempt(max_retries),
wait=wait_exponential(multiplier=multiplier, min=min_seconds, max=max_seconds),
wait=wait_exponential(
multiplier=multiplier, min=min_seconds, max=max_seconds
),
retry=(
retry_if_exception_type(google.api_core.exceptions.ResourceExhausted)
| retry_if_exception_type(google.api_core.exceptions.ServiceUnavailable)
retry_if_exception_type(
google.api_core.exceptions.ResourceExhausted
)
| retry_if_exception_type(
google.api_core.exceptions.ServiceUnavailable
)
| retry_if_exception_type(google.api_core.exceptions.GoogleAPIError)
),
before_sleep=before_sleep_log(logger, logging.WARNING),
@ -64,7 +70,9 @@ def _strip_erroneous_leading_spaces(text: str) -> str:
The PaLM API will sometimes erroneously return a single leading space in all
lines > 1. This function strips that space.
"""
has_leading_space = all(not line or line[0] == " " for line in text.split("\n")[1:])
has_leading_space = all(
not line or line[0] == " " for line in text.split("\n")[1:]
)
if has_leading_space:
return text.replace("\n ", "\n")
else:
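# Illustrative behaviour of this helper (example input assumed):
#   "first\n second\n third"  ->  "first\nsecond\nthird"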
@ -112,7 +120,10 @@ class GooglePalm(BaseLLM, BaseModel):
values["client"] = genai
if values["temperature"] is not None and not 0 <= values["temperature"] <= 1:
if (
values["temperature"] is not None
and not 0 <= values["temperature"] <= 1
):
raise ValueError("temperature must be in the range [0.0, 1.0]")
if values["top_p"] is not None and not 0 <= values["top_p"] <= 1:
@ -121,7 +132,10 @@ class GooglePalm(BaseLLM, BaseModel):
if values["top_k"] is not None and values["top_k"] <= 0:
raise ValueError("top_k must be positive")
if values["max_output_tokens"] is not None and values["max_output_tokens"] <= 0:
if (
values["max_output_tokens"] is not None
and values["max_output_tokens"] <= 0
):
raise ValueError("max_output_tokens must be greater than zero")
return values

@ -16,4 +16,6 @@ def get_ada_embeddings(text: str, model: str = "text-embedding-ada-002"):
text = text.replace("\n", " ")
return client.embeddings.create(input=[text], model=model)["data"][0]["embedding"]
return client.embeddings.create(input=[text], model=model)["data"][0][
"embedding"
]

@ -90,7 +90,9 @@ class SpeechT5:
self.processor = SpeechT5Processor.from_pretrained(self.model_name)
self.model = SpeechT5ForTextToSpeech.from_pretrained(self.model_name)
self.vocoder = SpeechT5HifiGan.from_pretrained(self.vocoder_name)
self.embeddings_dataset = load_dataset(self.dataset_name, split="validation")
self.embeddings_dataset = load_dataset(
self.dataset_name, split="validation"
)
def __call__(self, text: str, speaker_id: float = 7306):
"""Call the model on some text and return the speech."""
@ -121,7 +123,9 @@ class SpeechT5:
def set_embeddings_dataset(self, dataset_name):
"""Set the embeddings dataset to a new dataset."""
self.dataset_name = dataset_name
self.embeddings_dataset = load_dataset(self.dataset_name, split="validation")
self.embeddings_dataset = load_dataset(
self.dataset_name, split="validation"
)
# Feature 1: Get sampling rate
def get_sampling_rate(self):

@ -141,8 +141,8 @@ class SSD1B:
print(
colored(
(
f"Error running SSD1B: {error} try optimizing your api key and"
" or try again"
f"Error running SSD1B: {error} try optimizing your api"
" key and or try again"
),
"red",
)
@ -167,8 +167,7 @@ class SSD1B:
"""Print the SSD1B dashboard"""
print(
colored(
(
f"""SSD1B Dashboard:
f"""SSD1B Dashboard:
--------------------
Model: {self.model}
@ -184,13 +183,14 @@ class SSD1B:
--------------------
"""
),
""",
"green",
)
)
def process_batch_concurrently(self, tasks: List[str], max_workers: int = 5):
def process_batch_concurrently(
self, tasks: List[str], max_workers: int = 5
):
"""
Process a batch of tasks concurrently
@ -211,8 +211,12 @@ class SSD1B:
>>> print(results)
"""
with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as executor:
future_to_task = {executor.submit(self, task): task for task in tasks}
with concurrent.futures.ThreadPoolExecutor(
max_workers=max_workers
) as executor:
future_to_task = {
executor.submit(self, task): task for task in tasks
}
results = []
for future in concurrent.futures.as_completed(future_to_task):
task = future_to_task[future]
@ -225,13 +229,17 @@ class SSD1B:
print(
colored(
(
f"Error running SSD1B: {error} try optimizing your api key and"
" or try again"
f"Error running SSD1B: {error} try optimizing"
" your api key and or try again"
),
"red",
)
)
print(colored(f"Error running SSD1B: {error.http_status}", "red"))
print(
colored(
f"Error running SSD1B: {error.http_status}", "red"
)
)
print(colored(f"Error running SSD1B: {error.error}", "red"))
raise error

@ -66,7 +66,9 @@ class WhisperX:
compute_type = "float16"
# 1. Transcribe with original Whisper (batched) 🗣️
model = whisperx.load_model("large-v2", device, compute_type=compute_type)
model = whisperx.load_model(
"large-v2", device, compute_type=compute_type
)
audio = whisperx.load_audio(audio_file)
result = model.transcribe(audio, batch_size=batch_size)

@ -45,7 +45,9 @@ class WizardLLMStoryTeller:
):
self.logger = logging.getLogger(__name__)
self.device = (
device if device else ("cuda" if torch.cuda.is_available() else "cpu")
device
if device
else ("cuda" if torch.cuda.is_available() else "cpu")
)
self.model_id = model_id
self.max_length = max_length
@ -101,7 +103,9 @@ class WizardLLMStoryTeller:
if self.distributed:
self.model = DDP(self.model)
except Exception as error:
self.logger.error(f"Failed to load the model or the tokenizer: {error}")
self.logger.error(
f"Failed to load the model or the tokenizer: {error}"
)
raise
def run(self, prompt_text: str):

@ -45,7 +45,9 @@ class YarnMistral128:
):
self.logger = logging.getLogger(__name__)
self.device = (
device if device else ("cuda" if torch.cuda.is_available() else "cpu")
device
if device
else ("cuda" if torch.cuda.is_available() else "cpu")
)
self.model_id = model_id
self.max_length = max_length
@ -106,7 +108,9 @@ class YarnMistral128:
if self.distributed:
self.model = DDP(self.model)
except Exception as error:
self.logger.error(f"Failed to load the model or the tokenizer: {error}")
self.logger.error(
f"Failed to load the model or the tokenizer: {error}"
)
raise
def run(self, prompt_text: str):

@ -15,7 +15,9 @@ class PromptGenerator:
"thoughts": {
"text": "thought",
"reasoning": "reasoning",
"plan": "- short bulleted\n- list that conveys\n- long-term plan",
"plan": (
"- short bulleted\n- list that conveys\n- long-term plan"
),
"criticism": "constructive self-criticism",
"speak": "thoughts summary to say to user",
},
@ -66,13 +68,11 @@ class PromptGenerator:
"""
formatted_response_format = json.dumps(self.response_format, indent=4)
prompt_string = (
f"Constraints:\n{''.join(self.constraints)}\n\n"
f"Commands:\n{''.join(self.commands)}\n\n"
f"Resources:\n{''.join(self.resources)}\n\n"
f"Performance Evaluation:\n{''.join(self.performance_evaluation)}\n\n"
"You should only respond in JSON format as described below "
f"\nResponse Format: \n{formatted_response_format} "
"\nEnsure the response can be parsed by Python json.loads"
f"Constraints:\n{''.join(self.constraints)}\n\nCommands:\n{''.join(self.commands)}\n\nResources:\n{''.join(self.resources)}\n\nPerformance"
f" Evaluation:\n{''.join(self.performance_evaluation)}\n\nYou"
" should only respond in JSON format as described below \nResponse"
f" Format: \n{formatted_response_format} \nEnsure the response can"
" be parsed by Python json.loads"
)
return prompt_string

@ -5,26 +5,26 @@ def generate_agent_role_prompt(agent):
"""
prompts = {
"Finance Agent": (
"You are a seasoned finance analyst AI assistant. Your primary goal is to"
" compose comprehensive, astute, impartial, and methodically arranged"
" financial reports based on provided data and trends."
"You are a seasoned finance analyst AI assistant. Your primary goal"
" is to compose comprehensive, astute, impartial, and methodically"
" arranged financial reports based on provided data and trends."
),
"Travel Agent": (
"You are a world-travelled AI tour guide assistant. Your main purpose is to"
" draft engaging, insightful, unbiased, and well-structured travel reports"
" on given locations, including history, attractions, and cultural"
" insights."
"You are a world-travelled AI tour guide assistant. Your main"
" purpose is to draft engaging, insightful, unbiased, and"
" well-structured travel reports on given locations, including"
" history, attractions, and cultural insights."
),
"Academic Research Agent": (
"You are an AI academic research assistant. Your primary responsibility is"
" to create thorough, academically rigorous, unbiased, and systematically"
" organized reports on a given research topic, following the standards of"
" scholarly work."
"You are an AI academic research assistant. Your primary"
" responsibility is to create thorough, academically rigorous,"
" unbiased, and systematically organized reports on a given"
" research topic, following the standards of scholarly work."
),
"Default Agent": (
"You are an AI critical thinker research assistant. Your sole purpose is to"
" write well written, critically acclaimed, objective and structured"
" reports on given text."
"You are an AI critical thinker research assistant. Your sole"
" purpose is to write well written, critically acclaimed, objective"
" and structured reports on given text."
),
}
@ -39,12 +39,12 @@ def generate_report_prompt(question, research_summary):
"""
return (
f'"""{research_summary}""" Using the above information, answer the following'
f' question or topic: "{question}" in a detailed report -- The report should'
" focus on the answer to the question, should be well structured, informative,"
" in depth, with facts and numbers if available, a minimum of 1,200 words and"
" with markdown syntax and apa format. Write all source urls at the end of the"
" report in apa format"
f'"""{research_summary}""" Using the above information, answer the'
f' following question or topic: "{question}" in a detailed report --'
" The report should focus on the answer to the question, should be"
" well structured, informative, in depth, with facts and numbers if"
" available, a minimum of 1,200 words and with markdown syntax and apa"
" format. Write all source urls at the end of the report in apa format"
)
@ -55,9 +55,10 @@ def generate_search_queries_prompt(question):
"""
return (
"Write 4 google search queries to search online that form an objective opinion"
f' from the following: "{question}"You must respond with a list of strings in'
' the following format: ["query 1", "query 2", "query 3", "query 4"]'
"Write 4 google search queries to search online that form an objective"
f' opinion from the following: "{question}" You must respond with a list'
' of strings in the following format: ["query 1", "query 2", "query'
' 3", "query 4"]'
)
@ -73,14 +74,15 @@ def generate_resource_report_prompt(question, research_summary):
"""
return (
f'"""{research_summary}""" Based on the above information, generate a'
" bibliography recommendation report for the following question or topic:"
f' "{question}". The report should provide a detailed analysis of each'
" recommended resource, explaining how each source can contribute to finding"
" answers to the research question. Focus on the relevance, reliability, and"
" significance of each source. Ensure that the report is well-structured,"
" informative, in-depth, and follows Markdown syntax. Include relevant facts,"
" figures, and numbers whenever available. The report should have a minimum"
" length of 1,200 words."
" bibliography recommendation report for the following question or"
f' topic: "{question}". The report should provide a detailed analysis'
" of each recommended resource, explaining how each source can"
" contribute to finding answers to the research question. Focus on the"
" relevance, reliability, and significance of each source. Ensure that"
" the report is well-structured, informative, in-depth, and follows"
" Markdown syntax. Include relevant facts, figures, and numbers"
" whenever available. The report should have a minimum length of 1,200"
" words."
)
@ -92,13 +94,14 @@ def generate_outline_report_prompt(question, research_summary):
"""
return (
f'"""{research_summary}""" Using the above information, generate an outline for'
" a research report in Markdown syntax for the following question or topic:"
f' "{question}". The outline should provide a well-structured framework for the'
" research report, including the main sections, subsections, and key points to"
" be covered. The research report should be detailed, informative, in-depth,"
" and a minimum of 1,200 words. Use appropriate Markdown syntax to format the"
" outline and ensure readability."
f'"""{research_summary}""" Using the above information, generate an'
" outline for a research report in Markdown syntax for the following"
f' question or topic: "{question}". The outline should provide a'
" well-structured framework for the research report, including the"
" main sections, subsections, and key points to be covered. The"
" research report should be detailed, informative, in-depth, and a"
" minimum of 1,200 words. Use appropriate Markdown syntax to format"
" the outline and ensure readability."
)
@ -110,11 +113,12 @@ def generate_concepts_prompt(question, research_summary):
"""
return (
f'"""{research_summary}""" Using the above information, generate a list of 5'
" main concepts to learn for a research report on the following question or"
f' topic: "{question}". The outline should provide a well-structured'
" frameworkYou must respond with a list of strings in the following format:"
' ["concepts 1", "concepts 2", "concepts 3", "concepts 4, concepts 5"]'
f'"""{research_summary}""" Using the above information, generate a list'
" of 5 main concepts to learn for a research report on the following"
f' question or topic: "{question}". The outline should provide a'
" well-structured framework. You must respond with a list of strings in"
' the following format: ["concepts 1", "concepts 2", "concepts 3",'
' "concepts 4, concepts 5"]'
)
@ -128,10 +132,10 @@ def generate_lesson_prompt(concept):
"""
prompt = (
f"generate a comprehensive lesson about {concept} in Markdown syntax. This"
f" should include the definitionof {concept}, its historical background and"
" development, its applications or uses in differentfields, and notable events"
f" or facts related to {concept}."
f"generate a comprehensive lesson about {concept} in Markdown syntax."
f" This should include the definitionof {concept}, its historical"
" background and development, its applications or uses in"
f" differentfields, and notable events or facts related to {concept}."
)
return prompt

@ -12,7 +12,9 @@ if TYPE_CHECKING:
def get_buffer_string(
messages: Sequence[BaseMessage], human_prefix: str = "Human", ai_prefix: str = "AI"
messages: Sequence[BaseMessage],
human_prefix: str = "Human",
ai_prefix: str = "AI",
) -> str:
"""Convert sequence of Messages to strings and concatenate them into one string.

@ -105,7 +105,9 @@ class ChatMessage(Message):
def get_buffer_string(
messages: Sequence[Message], human_prefix: str = "Human", ai_prefix: str = "AI"
messages: Sequence[Message],
human_prefix: str = "Human",
ai_prefix: str = "AI",
) -> str:
string_messages = []
for m in messages:

@ -1,6 +1,6 @@
ERROR_PROMPT = (
"An error has occurred for the following text: \n{promptedQuery} Please explain"
" this error.\n {e}"
"An error has occurred for the following text: \n{promptedQuery} Please"
" explain this error.\n {e}"
)
IMAGE_PROMPT = """

@ -1,16 +1,17 @@
PY_SIMPLE_COMPLETION_INSTRUCTION = "# Write the body of this function only."
PY_REFLEXION_COMPLETION_INSTRUCTION = (
"You are a Python writing assistant. You will be given your past function"
" implementation, a series of unit tests, and a hint to change the implementation"
" appropriately. Write your full implementation (restate the function"
" signature).\n\n-----"
" implementation, a series of unit tests, and a hint to change the"
" implementation appropriately. Write your full implementation (restate the"
" function signature).\n\n-----"
)
PY_SELF_REFLECTION_COMPLETION_INSTRUCTION = (
"You are a Python writing assistant. You will be given a function implementation"
" and a series of unit tests. Your goal is to write a few sentences to explain why"
" your implementation is wrong as indicated by the tests. You will need this as a"
" hint when you try again later. Only provide the few sentence description in your"
" answer, not the implementation.\n\n-----"
"You are a Python writing assistant. You will be given a function"
" implementation and a series of unit tests. Your goal is to write a few"
" sentences to explain why your implementation is wrong as indicated by the"
" tests. You will need this as a hint when you try again later. Only"
" provide the few sentence description in your answer, not the"
" implementation.\n\n-----"
)
USE_PYTHON_CODEBLOCK_INSTRUCTION = (
"Use a Python code block to write your response. For"
@ -18,25 +19,26 @@ USE_PYTHON_CODEBLOCK_INSTRUCTION = (
)
PY_SIMPLE_CHAT_INSTRUCTION = (
"You are an AI that only responds with python code, NOT ENGLISH. You will be given"
" a function signature and its docstring by the user. Write your full"
" implementation (restate the function signature)."
"You are an AI that only responds with python code, NOT ENGLISH. You will"
" be given a function signature and its docstring by the user. Write your"
" full implementation (restate the function signature)."
)
PY_SIMPLE_CHAT_INSTRUCTION_V2 = (
"You are an AI that only responds with only python code. You will be given a"
" function signature and its docstring by the user. Write your full implementation"
" (restate the function signature)."
"You are an AI that only responds with only python code. You will be given"
" a function signature and its docstring by the user. Write your full"
" implementation (restate the function signature)."
)
PY_REFLEXION_CHAT_INSTRUCTION = (
"You are an AI Python assistant. You will be given your past function"
" implementation, a series of unit tests, and a hint to change the implementation"
" appropriately. Write your full implementation (restate the function signature)."
" implementation, a series of unit tests, and a hint to change the"
" implementation appropriately. Write your full implementation (restate the"
" function signature)."
)
PY_REFLEXION_CHAT_INSTRUCTION_V2 = (
"You are an AI Python assistant. You will be given your previous implementation of"
" a function, a series of unit tests results, and your self-reflection on your"
" previous implementation. Write your full implementation (restate the function"
" signature)."
"You are an AI Python assistant. You will be given your previous"
" implementation of a function, a series of unit tests results, and your"
" self-reflection on your previous implementation. Write your full"
" implementation (restate the function signature)."
)
PY_REFLEXION_FEW_SHOT_ADD = '''Example 1:
[previous impl]:
@ -172,18 +174,19 @@ END EXAMPLES
'''
PY_SELF_REFLECTION_CHAT_INSTRUCTION = (
"You are a Python programming assistant. You will be given a function"
" implementation and a series of unit tests. Your goal is to write a few sentences"
" to explain why your implementation is wrong as indicated by the tests. You will"
" need this as a hint when you try again later. Only provide the few sentence"
" description in your answer, not the implementation."
" implementation and a series of unit tests. Your goal is to write a few"
" sentences to explain why your implementation is wrong as indicated by the"
" tests. You will need this as a hint when you try again later. Only"
" provide the few sentence description in your answer, not the"
" implementation."
)
PY_SELF_REFLECTION_CHAT_INSTRUCTION_V2 = (
"You are a Python programming assistant. You will be given a function"
" implementation and a series of unit test results. Your goal is to write a few"
" sentences to explain why your implementation is wrong as indicated by the tests."
" You will need this as guidance when you try again later. Only provide the few"
" sentence description in your answer, not the implementation. You will be given a"
" few examples by the user."
" implementation and a series of unit test results. Your goal is to write a"
" few sentences to explain why your implementation is wrong as indicated by"
" the tests. You will need this as guidance when you try again later. Only"
" provide the few sentence description in your answer, not the"
" implementation. You will be given a few examples by the user."
)
PY_SELF_REFLECTION_FEW_SHOT = """Example 1:
[function impl]:

@ -1,23 +1,26 @@
conversation_stages = {
"1": (
"Introduction: Start the conversation by introducing yourself and your company."
" Be polite and respectful while keeping the tone of the conversation"
" professional. Your greeting should be welcoming. Always clarify in your"
" greeting the reason why you are contacting the prospect."
"Introduction: Start the conversation by introducing yourself and your"
" company. Be polite and respectful while keeping the tone of the"
" conversation professional. Your greeting should be welcoming. Always"
" clarify in your greeting the reason why you are contacting the"
" prospect."
),
"2": (
"Qualification: Qualify the prospect by confirming if they are the right person"
" to talk to regarding your product/service. Ensure that they have the"
" authority to make purchasing decisions."
"Qualification: Qualify the prospect by confirming if they are the"
" right person to talk to regarding your product/service. Ensure that"
" they have the authority to make purchasing decisions."
),
"3": (
"Value proposition: Briefly explain how your product/service can benefit the"
" prospect. Focus on the unique selling points and value proposition of your"
" product/service that sets it apart from competitors."
"Value proposition: Briefly explain how your product/service can"
" benefit the prospect. Focus on the unique selling points and value"
" proposition of your product/service that sets it apart from"
" competitors."
),
"4": (
"Needs analysis: Ask open-ended questions to uncover the prospect's needs and"
" pain points. Listen carefully to their responses and take notes."
"Needs analysis: Ask open-ended questions to uncover the prospect's"
" needs and pain points. Listen carefully to their responses and take"
" notes."
),
"5": (
"Solution presentation: Based on the prospect's needs, present your"
@ -29,9 +32,9 @@ conversation_stages = {
" testimonials to support your claims."
),
"7": (
"Close: Ask for the sale by proposing a next step. This could be a demo, a"
" trial or a meeting with decision-makers. Ensure to summarize what has been"
" discussed and reiterate the benefits."
"Close: Ask for the sale by proposing a next step. This could be a"
" demo, a trial or a meeting with decision-makers. Ensure to summarize"
" what has been discussed and reiterate the benefits."
),
}

@ -46,24 +46,27 @@ Conversation history:
conversation_stages = {
"1": (
"Introduction: Start the conversation by introducing yourself and your company."
" Be polite and respectful while keeping the tone of the conversation"
" professional. Your greeting should be welcoming. Always clarify in your"
" greeting the reason why you are contacting the prospect."
"Introduction: Start the conversation by introducing yourself and your"
" company. Be polite and respectful while keeping the tone of the"
" conversation professional. Your greeting should be welcoming. Always"
" clarify in your greeting the reason why you are contacting the"
" prospect."
),
"2": (
"Qualification: Qualify the prospect by confirming if they are the right person"
" to talk to regarding your product/service. Ensure that they have the"
" authority to make purchasing decisions."
"Qualification: Qualify the prospect by confirming if they are the"
" right person to talk to regarding your product/service. Ensure that"
" they have the authority to make purchasing decisions."
),
"3": (
"Value proposition: Briefly explain how your product/service can benefit the"
" prospect. Focus on the unique selling points and value proposition of your"
" product/service that sets it apart from competitors."
"Value proposition: Briefly explain how your product/service can"
" benefit the prospect. Focus on the unique selling points and value"
" proposition of your product/service that sets it apart from"
" competitors."
),
"4": (
"Needs analysis: Ask open-ended questions to uncover the prospect's needs and"
" pain points. Listen carefully to their responses and take notes."
"Needs analysis: Ask open-ended questions to uncover the prospect's"
" needs and pain points. Listen carefully to their responses and take"
" notes."
),
"5": (
"Solution presentation: Based on the prospect's needs, present your"
@ -75,8 +78,8 @@ conversation_stages = {
" testimonials to support your claims."
),
"7": (
"Close: Ask for the sale by proposing a next step. This could be a demo, a"
" trial or a meeting with decision-makers. Ensure to summarize what has been"
" discussed and reiterate the benefits."
"Close: Ask for the sale by proposing a next step. This could be a"
" demo, a trial or a meeting with decision-makers. Ensure to summarize"
" what has been discussed and reiterate the benefits."
),
}

@ -7,7 +7,11 @@ from typing import Callable, Dict, List
from termcolor import colored
from swarms.structs.flow import Flow
from swarms.utils.decorators import error_decorator, log_decorator, timing_decorator
from swarms.utils.decorators import (
error_decorator,
log_decorator,
timing_decorator,
)
class AutoScaler:
@ -69,7 +73,9 @@ class AutoScaler:
try:
self.tasks_queue.put(task)
except Exception as error:
print(f"Error adding task to queue: {error} try again with a new task")
print(
f"Error adding task to queue: {error} try again with a new task"
)
@log_decorator
@error_decorator
@ -108,10 +114,15 @@ class AutoScaler:
if pending_tasks / len(self.agents_pool) > self.busy_threshold:
self.scale_up()
elif active_agents / len(self.agents_pool) < self.idle_threshold:
elif (
active_agents / len(self.agents_pool) < self.idle_threshold
):
self.scale_down()
except Exception as error:
print(f"Error monitoring and scaling: {error} try again with a new task")
print(
f"Error monitoring and scaling: {error} try again with a new"
" task"
)
@log_decorator
@error_decorator
@ -125,7 +136,9 @@ class AutoScaler:
while True:
task = self.task_queue.get()
if task:
available_agent = next((agent for agent in self.agents_pool))
available_agent = next(
(agent for agent in self.agents_pool)
)
if available_agent:
available_agent.run(task)
except Exception as error:

@ -11,14 +11,9 @@ from termcolor import colored
from swarms.utils.code_interpreter import SubprocessCodeInterpreter
from swarms.utils.parse_code import extract_code_in_backticks_in_string
from swarms.tools.tool import BaseTool
# Prompts
DYNAMIC_STOP_PROMPT = """
When you have finished the task from the Human, output a special token: <DONE>
This will enable you to leave the autonomous loop.
"""
# Constants
# System prompt
FLOW_SYSTEM_PROMPT = f"""
You are an autonomous agent granted autonomy in an autonomous loop structure.
Your role is to engage in multi-step conversations with yourself or the user,
@ -30,6 +25,17 @@ to aid in these complex tasks. Your responses should be coherent, contextually r
"""
# Prompts
DYNAMIC_STOP_PROMPT = """
Now, when you are 99% sure you have completed the task, you may follow the instructions below to escape the autonomous loop.
When you have finished the task from the Human, output a special token: <DONE>
This will enable you to leave the autonomous loop.
"""
# Make it able to handle multi input tools
DYNAMICAL_TOOL_USAGE = """
You have access to the following tools:
@ -46,6 +52,11 @@ commands: {
"tool1": "inputs",
"tool1": "inputs"
}
"tool3: "tool_name",
"params": {
"tool1": "inputs",
"tool1": "inputs"
}
}
}
@ -53,6 +64,29 @@ commands: {
{tools}
"""
SCENARIOS = """
commands: {
"tools": {
tool1: "tool_name",
"params": {
"tool1": "inputs",
"tool1": "inputs"
}
"tool2: "tool_name",
"params": {
"tool1": "inputs",
"tool1": "inputs"
}
"tool3: "tool_name",
"params": {
"tool1": "inputs",
"tool1": "inputs"
}
}
}
"""
def autonomous_agent_prompt(
tools_prompt: str = DYNAMICAL_TOOL_USAGE,
@ -101,9 +135,9 @@ def parse_done_token(response: str) -> bool:
class Flow:
"""
Flow is a chain like structure from langchain that provides the autonomy to language models
to generate sequential responses.
Flow is the structure that provides autonomy to any LLM in a reliable and effective fashion.
It is designed to work with any LLM and provides the following features:
Features:
* Interactive: the AI generates a response, then waits for user input
* Message history and performance history are fed into the context and truncated if too long
@ -191,7 +225,7 @@ class Flow:
def __init__(
self,
llm: Any,
# template: str,
template: Optional[str] = None,
max_loops=5,
stopping_condition: Optional[Callable[[str], bool]] = None,
loop_interval: int = 1,
@ -205,7 +239,7 @@ class Flow:
agent_name: str = " Autonomous Agent XYZ1B",
agent_description: str = None,
system_prompt: str = FLOW_SYSTEM_PROMPT,
# tools: List[Any] = None,
tools: List[BaseTool] = None,
dynamic_temperature: bool = False,
sop: str = None,
saved_state_path: Optional[str] = "flow_state.json",
@ -217,6 +251,7 @@ class Flow:
**kwargs: Any,
):
self.llm = llm
self.template = template
self.max_loops = max_loops
self.stopping_condition = stopping_condition
self.loop_interval = loop_interval
@ -238,7 +273,7 @@ class Flow:
# The max_loops will be set dynamically if the dynamic_loop
if self.dynamic_loops:
self.max_loops = "auto"
# self.tools = tools or []
self.tools = tools or []
self.system_prompt = system_prompt
self.agent_name = agent_name
self.agent_description = agent_description
@ -302,68 +337,82 @@ class Flow:
# # Parse the text for tool usage
# pass
# def get_tool_description(self):
# """Get the tool description"""
# tool_descriptions = []
# for tool in self.tools:
# description = f"{tool.name}: {tool.description}"
# tool_descriptions.append(description)
# return "\n".join(tool_descriptions)
# def find_tool_by_name(self, name: str):
# """Find a tool by name"""
# for tool in self.tools:
# if tool.name == name:
# return tool
# return None
# def construct_dynamic_prompt(self):
# """Construct the dynamic prompt"""
# tools_description = self.get_tool_description()
# return DYNAMICAL_TOOL_USAGE.format(tools=tools_description)
# def extract_tool_commands(self, text: str):
# """
# Extract the tool commands from the text
# Example:
# ```json
# {
# "tool": "tool_name",
# "params": {
# "tool1": "inputs",
# "param2": "value2"
# }
# }
# ```
def get_tool_description(self):
"""Get the tool description"""
if self.tools:
try:
tool_descriptions = []
for tool in self.tools:
description = f"{tool.name}: {tool.description}"
tool_descriptions.append(description)
return "\n".join(tool_descriptions)
except Exception as error:
print(
f"Error getting tool description: {error} try adding a"
" description to the tool or removing the tool"
)
else:
return "No tools available"
# """
# # Regex to find JSON like strings
# pattern = r"```json(.+?)```"
# matches = re.findall(pattern, text, re.DOTALL)
# json_commands = []
# for match in matches:
# try:
# json_commands = json.loads(match)
# json_commands.append(json_commands)
# except Exception as error:
# print(f"Error parsing JSON command: {error}")
# def parse_and_execute_tools(self, response):
# """Parse and execute the tools"""
# json_commands = self.extract_tool_commands(response)
# for command in json_commands:
# tool_name = command.get("tool")
# params = command.get("parmas", {})
# self.execute_tool(tool_name, params)
# def execute_tools(self, tool_name, params):
# """Execute the tool with the provided params"""
# tool = self.tool_find_by_name(tool_name)
# if tool:
# # Execute the tool with the provided parameters
# tool_result = tool.run(**params)
# print(tool_result)
def find_tool_by_name(self, name: str):
"""Find a tool by name"""
for tool in self.tools:
if tool.name == name:
return tool
return None
def construct_dynamic_prompt(self):
"""Construct the dynamic prompt"""
tools_description = self.get_tool_description()
tool_prompt = self.tools_prompt_prep(tools_description, SCENARIOS)
return tool_prompt
# return DYNAMICAL_TOOL_USAGE.format(tools=tools_description)
def extract_tool_commands(self, text: str):
"""
Extract the tool commands from the text
Example:
```json
{
"tool": "tool_name",
"params": {
"tool1": "inputs",
"param2": "value2"
}
}
```
"""
# Regex to find JSON like strings
pattern = r"```json(.+?)```"
matches = re.findall(pattern, text, re.DOTALL)
json_commands = []
for match in matches:
try:
json_command = json.loads(match)
json_commands.append(json_command)
except Exception as error:
print(f"Error parsing JSON command: {error}")
return json_commands
def parse_and_execute_tools(self, response: str):
"""Parse and execute the tools"""
json_commands = self.extract_tool_commands(response)
for command in json_commands:
tool_name = command.get("tool")
params = command.get("parmas", {})
self.execute_tool(tool_name, params)
def execute_tools(self, tool_name, params):
"""Execute the tool with the provided params"""
tool = self.find_tool_by_name(tool_name)
if tool:
# Execute the tool with the provided parameters
tool_result = tool.run(**params)
print(tool_result)
def truncate_history(self):
"""
@ -431,8 +480,12 @@ class Flow:
print(colored("Initializing Autonomous Agent...", "yellow"))
# print(colored("Loading modules...", "yellow"))
# print(colored("Modules loaded successfully.", "green"))
print(colored("Autonomous Agent Activated.", "cyan", attrs=["bold"]))
print(colored("All systems operational. Executing task...", "green"))
print(
colored("Autonomous Agent Activated.", "cyan", attrs=["bold"])
)
print(
colored("All systems operational. Executing task...", "green")
)
except Exception as error:
print(
colored(
@ -475,16 +528,18 @@ class Flow:
self.print_dashboard(task)
loop_count = 0
# for i in range(self.max_loops):
while self.max_loops == "auto" or loop_count < self.max_loops:
loop_count += 1
print(colored(f"\nLoop {loop_count} of {self.max_loops}", "blue"))
print(
colored(f"\nLoop {loop_count} of {self.max_loops}", "blue")
)
print("\n")
# Check to see if stopping token is in the output to stop the loop
if self.stopping_token:
if self._check_stopping_condition(response) or parse_done_token(
if self._check_stopping_condition(
response
):
) or parse_done_token(response):
break
# Adjust temperature, comment if no work
@ -502,111 +557,22 @@ class Flow:
**kwargs,
)
# If code interpreter is enabled then run the code
if self.code_interpreter:
self.run_code(response)
# If there are any tools then parse and execute them
# if self.tools:
# self.parse_and_execute_tools(response)
if self.interactive:
print(f"AI: {response}")
history.append(f"AI: {response}")
response = input("You: ")
history.append(f"Human: {response}")
else:
print(f"AI: {response}")
history.append(f"AI: {response}")
# print(response)
break
except Exception as e:
logging.error(f"Error generating response: {e}")
attempt += 1
time.sleep(self.retry_interval)
history.append(response)
time.sleep(self.loop_interval)
self.memory.append(history)
if self.autosave:
save_path = self.saved_state_path or "flow_state.json"
print(colored(f"Autosaving flow state to {save_path}", "green"))
self.save_state(save_path)
if self.return_history:
return response, history
return response
except Exception as error:
print(f"Error running flow: {error}")
raise
def __call__(self, task: str, **kwargs):
"""
Run the autonomous agent loop
Args:
task (str): The initial task to run
Flow:
1. Generate a response
2. Check stopping condition
3. If stopping condition is met, stop
4. If stopping condition is not met, generate a response
5. Repeat until stopping condition is met or max_loops is reached
"""
try:
# dynamic_prompt = self.construct_dynamic_prompt()
# combined_prompt = f"{dynamic_prompt}\n{task}"
# Activate Autonomous agent message
self.activate_autonomous_agent()
response = task # or combined_prompt
history = [f"{self.user_name}: {task}"]
# If dashboard = True then print the dashboard
if self.dashboard:
self.print_dashboard(task)
loop_count = 0
# for i in range(self.max_loops):
while self.max_loops == "auto" or loop_count < self.max_loops:
loop_count += 1
print(colored(f"\nLoop {loop_count} of {self.max_loops}", "blue"))
print("\n")
if self.stopping_token:
if self._check_stopping_condition(response) or parse_done_token(
response
):
break
# Adjust temperature, comment if no work
if self.dynamic_temperature:
self.dynamic_temperature()
# Preparing the prompt
task = self.agent_history_prompt(FLOW_SYSTEM_PROMPT, response)
attempt = 0
while attempt < self.retry_attempts:
try:
response = self.llm(
task,
**kwargs,
)
if self.code_interpreter:
self.run_code(response)
# If there are any tools then parse and execute them
# if self.tools:
# self.parse_and_execute_tools(response)
if self.tools:
self.parse_and_execute_tools(response)
# If interactive mode is enabled then print the response and get user input
if self.interactive:
print(f"AI: {response}")
history.append(f"AI: {response}")
response = input("You: ")
history.append(f"Human: {response}")
# If interactive mode is not enabled then print the response
else:
print(f"AI: {response}")
history.append(f"AI: {response}")
@ -616,15 +582,20 @@ class Flow:
logging.error(f"Error generating response: {e}")
attempt += 1
time.sleep(self.retry_interval)
# Add the response to the history
history.append(response)
time.sleep(self.loop_interval)
# Add the history to the memory
self.memory.append(history)
# If autosave is enabled then save the state
if self.autosave:
save_path = self.saved_state_path or "flow_state.json"
print(colored(f"Autosaving flow state to {save_path}", "green"))
self.save_state(save_path)
# If return history is enabled then return the response and history
if self.return_history:
return response, history
@ -665,7 +636,9 @@ class Flow:
print(colored(f"\nLoop {loop_count} of {self.max_loops}", "blue"))
print("\n")
if self._check_stopping_condition(response) or parse_done_token(response):
if self._check_stopping_condition(response) or parse_done_token(
response
):
break
# Adjust temperature, comment if no work
@ -985,7 +958,8 @@ class Flow:
if hasattr(self.llm, name):
value = getattr(self.llm, name)
if isinstance(
value, (str, int, float, bool, list, dict, tuple, type(None))
value,
(str, int, float, bool, list, dict, tuple, type(None)),
):
llm_params[name] = value
else:
@ -1046,7 +1020,9 @@ class Flow:
print(f"Flow state loaded from {file_path}")
def retry_on_failure(self, function, retries: int = 3, retry_delay: int = 1):
def retry_on_failure(
self, function, retries: int = 3, retry_delay: int = 1
):
"""Retry wrapper for LLM calls."""
attempt = 0
while attempt < retries:
@ -1105,7 +1081,7 @@ class Flow:
run_code = self.code_executor.run(parsed_code)
return run_code
def tool_prompt_prep(self, api_docs: str = None, required_api: str = None):
def tools_prompt_prep(self, docs: str = None, scenarios: str = None):
"""
Prepare the tool prompt
"""
@ -1152,19 +1128,14 @@ class Flow:
response.
Deliver your response in this format:
- Scenario 1: <Scenario1>
- Scenario 2: <Scenario2>
- Scenario 3: <Scenario3>
{scenarios}
# APIs
{api_docs}
{docs}
# Response
Required API: {required_api}
Scenarios with >=5 API calls:
- Scenario 1: <Scenario1>
"""
def self_healing(self, **kwargs):

@ -0,0 +1,97 @@
from swarms.models import OpenAIChat
from swarms.structs.flow import Flow
import concurrent.futures
from typing import Callable, List, Dict, Any, Sequence, Optional
class Task:
def __init__(
self,
id: str,
task: str,
flows: Sequence[Flow],
dependencies: Optional[List[str]] = None,
):
self.id = id
self.task = task
self.flows = flows
self.dependencies = dependencies or []
self.results = []
def execute(self, parent_results: Dict[str, Any]):
args = [parent_results[dep] for dep in self.dependencies]
for flow in self.flows:
result = flow.run(self.task, *args)
self.results.append(result)
args = [
result
] # The output of one flow becomes the input to the next
class Workflow:
def __init__(self):
self.tasks: Dict[str, Task] = {}
self.executor = concurrent.futures.ThreadPoolExecutor()
def add_task(self, task: Task):
self.tasks[task.id] = task
def run(self):
completed_tasks = set()
while len(completed_tasks) < len(self.tasks):
futures = []
for task in self.tasks.values():
if task.id not in completed_tasks and all(
dep in completed_tasks for dep in task.dependencies
):
future = self.executor.submit(
task.execute,
{
dep: self.tasks[dep].results
for dep in task.dependencies
},
)
futures.append((future, task.id))
for future, task_id in futures:
future.result() # Wait for task completion
completed_tasks.add(task_id)
def get_results(self):
return {task_id: task.results for task_id, task in self.tasks.items()}
# create flows
llm = OpenAIChat(openai_api_key="sk-")
flow1 = Flow(llm, max_loops=1)
flow2 = Flow(llm, max_loops=1)
flow3 = Flow(llm, max_loops=1)
flow4 = Flow(llm, max_loops=1)
# Create tasks with their respective Flows and task strings
task1 = Task("task1", "Generate a summary on Quantum field theory", [flow1])
task2 = Task(
"task2",
"Elaborate on the summary of topic X",
[flow2, flow3],
dependencies=["task1"],
)
task3 = Task(
"task3", "Generate conclusions for topic X", [flow4], dependencies=["task1"]
)
# Create a workflow and add tasks
workflow = Workflow()
workflow.add_task(task1)
workflow.add_task(task2)
workflow.add_task(task3)
# Run the workflow
workflow.run()
# Get results
results = workflow.get_results()
print(results)
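For reference, get_results maps each task id to the list of outputs produced by that task's flows, so the printed structure is roughly the following (values are illustrative placeholders, not real model output):

# {
#     "task1": ["<summary of quantum field theory from flow1>"],
#     "task2": ["<elaboration from flow2>", "<elaboration from flow3>"],
#     "task3": ["<conclusions from flow4>"],
# }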

@ -113,7 +113,9 @@ class SequentialWorkflow:
restore_state_filepath: Optional[str] = None
dashboard: bool = False
def add(self, task: str, flow: Union[Callable, Flow], *args, **kwargs) -> None:
def add(
self, task: str, flow: Union[Callable, Flow], *args, **kwargs
) -> None:
"""
Add a task to the workflow.
@ -147,6 +149,7 @@ class SequentialWorkflow:
return {task.description: task.result for task in self.tasks}
def remove_task(self, task_description: str) -> None:
"""Remove tasks from sequential workflow"""
self.tasks = [
task for task in self.tasks if task.description != task_description
]
@ -182,7 +185,9 @@ class SequentialWorkflow:
raise ValueError(f"Task {task_description} not found in workflow.")
def save_workflow_state(
self, filepath: Optional[str] = "sequential_workflow_state.json", **kwargs
self,
filepath: Optional[str] = "sequential_workflow_state.json",
**kwargs,
) -> None:
"""
Saves the workflow state to a json file.
@ -260,10 +265,6 @@ class SequentialWorkflow:
--------------------------------
Metadata:
kwargs: {kwargs}
""",
"cyan",
attrs=["bold", "underline"],
@ -352,8 +353,9 @@ class SequentialWorkflow:
# Ensure that 'task' is provided in the kwargs
if "task" not in task.kwargs:
raise ValueError(
"The 'task' argument is required for the Flow flow"
f" execution in '{task.description}'"
"The 'task' argument is required for the"
" Flow flow execution in"
f" '{task.description}'"
)
# Separate the 'task' argument from other kwargs
flow_task_arg = task.kwargs.pop("task")
@ -377,7 +379,9 @@ class SequentialWorkflow:
# Autosave the workflow state
if self.autosave:
self.save_workflow_state("sequential_workflow_state.json")
self.save_workflow_state(
"sequential_workflow_state.json"
)
except Exception as e:
print(
colored(
@ -408,8 +412,8 @@ class SequentialWorkflow:
# Ensure that 'task' is provided in the kwargs
if "task" not in task.kwargs:
raise ValueError(
"The 'task' argument is required for the Flow flow"
f" execution in '{task.description}'"
"The 'task' argument is required for the Flow"
f" flow execution in '{task.description}'"
)
# Separate the 'task' argument from other kwargs
flow_task_arg = task.kwargs.pop("task")
@ -433,4 +437,6 @@ class SequentialWorkflow:
# Autosave the workflow state
if self.autosave:
self.save_workflow_state("sequential_workflow_state.json")
self.save_workflow_state(
"sequential_workflow_state.json"
)

@ -103,7 +103,9 @@ class AutoBlogGenSwarm:
review_agent = self.print_beautifully("Review Agent", review_agent)
# Agent that publishes on social media
distribution_agent = self.llm(self.social_media_prompt(article=review_agent))
distribution_agent = self.llm(
self.social_media_prompt(article=review_agent)
)
distribution_agent = self.print_beautifully(
"Distribution Agent", distribution_agent
)
@ -115,7 +117,11 @@ class AutoBlogGenSwarm:
for i in range(self.iterations):
self.step()
except Exception as error:
print(colored(f"Error while running AutoBlogGenSwarm {error}", "red"))
print(
colored(
f"Error while running AutoBlogGenSwarm {error}", "red"
)
)
if attempt == self.retry_attempts - 1:
raise

@ -117,7 +117,9 @@ class AbstractSwarm(ABC):
pass
@abstractmethod
def broadcast(self, message: str, sender: Optional["AbstractWorker"] = None):
def broadcast(
self, message: str, sender: Optional["AbstractWorker"] = None
):
"""Broadcast a message to all workers"""
pass

@ -23,7 +23,9 @@ class DialogueSimulator:
>>> model.run("test")
"""
def __init__(self, agents: List[Callable], max_iters: int = 10, name: str = None):
def __init__(
self, agents: List[Callable], max_iters: int = 10, name: str = None
):
self.agents = agents
self.max_iters = max_iters
self.name = name
@ -45,7 +47,8 @@ class DialogueSimulator:
for receiver in self.agents:
message_history = (
f"Speaker Name: {speaker.name} and message: {speaker_message}"
f"Speaker Name: {speaker.name} and message:"
f" {speaker_message}"
)
receiver.run(message_history)
@ -56,7 +59,9 @@ class DialogueSimulator:
print(f"Error running dialogue simulator: {error}")
def __repr__(self):
return f"DialogueSimulator({self.agents}, {self.max_iters}, {self.name})"
return (
f"DialogueSimulator({self.agents}, {self.max_iters}, {self.name})"
)
def save_state(self):
"""Save the state of the dialogue simulator"""

@ -64,7 +64,8 @@ class GodMode:
table.append([f"LLM {i+1}", response])
print(
colored(
tabulate(table, headers=["LLM", "Response"], tablefmt="pretty"), "cyan"
tabulate(table, headers=["LLM", "Response"], tablefmt="pretty"),
"cyan",
)
)
@ -83,7 +84,8 @@ class GodMode:
table.append([f"LLM {i+1}", response])
print(
colored(
tabulate(table, headers=["LLM", "Response"], tablefmt="pretty"), "cyan"
tabulate(table, headers=["LLM", "Response"], tablefmt="pretty"),
"cyan",
)
)
@ -115,11 +117,13 @@ class GodMode:
print(f"{i + 1}. {task}")
print("\nLast Responses:")
table = [
[f"LLM {i+1}", response] for i, response in enumerate(self.last_responses)
[f"LLM {i+1}", response]
for i, response in enumerate(self.last_responses)
]
print(
colored(
tabulate(table, headers=["LLM", "Response"], tablefmt="pretty"), "cyan"
tabulate(table, headers=["LLM", "Response"], tablefmt="pretty"),
"cyan",
)
)
@ -137,7 +141,8 @@ class GodMode:
"""Asynchronous run the task string"""
loop = asyncio.get_event_loop()
futures = [
loop.run_in_executor(None, lambda llm: llm(task), llm) for llm in self.llms
loop.run_in_executor(None, lambda llm: llm(task), llm)
for llm in self.llms
]
for response in await asyncio.gather(*futures):
print(response)
@ -145,13 +150,18 @@ class GodMode:
def concurrent_run(self, task: str) -> List[str]:
"""Synchronously run the task on all llms and collect responses"""
with ThreadPoolExecutor() as executor:
future_to_llm = {executor.submit(llm, task): llm for llm in self.llms}
future_to_llm = {
executor.submit(llm, task): llm for llm in self.llms
}
responses = []
for future in as_completed(future_to_llm):
try:
responses.append(future.result())
except Exception as error:
print(f"{future_to_llm[future]} generated an exception: {error}")
print(
f"{future_to_llm[future]} generated an exception:"
f" {error}"
)
self.last_responses = responses
self.task_history.append(task)
return responses

@ -47,7 +47,9 @@ class GroupChat:
def next_agent(self, agent: Flow) -> Flow:
"""Return the next agent in the list."""
return self.agents[(self.agent_names.index(agent.name) + 1) % len(self.agents)]
return self.agents[
(self.agent_names.index(agent.name) + 1) % len(self.agents)
]
def select_speaker_msg(self):
"""Return the message for selecting the next speaker."""
@ -78,9 +80,9 @@ class GroupChat:
{
"role": "system",
"content": (
"Read the above conversation. Then select the next most"
f" suitable role from {self.agent_names} to play. Only"
" return the role."
"Read the above conversation. Then select the next"
f" most suitable role from {self.agent_names} to"
" play. Only return the role."
),
}
]
@ -126,7 +128,9 @@ class GroupChatManager:
self.selector = selector
def __call__(self, task: str):
self.groupchat.messages.append({"role": self.selector.name, "content": task})
self.groupchat.messages.append(
{"role": self.selector.name, "content": task}
)
for i in range(self.groupchat.max_round):
speaker = self.groupchat.select_speaker(
last_speaker=self.selector, selector=self.selector

@ -13,8 +13,8 @@ from swarms.utils.logger import logger
class BidOutputParser(RegexParser):
def get_format_instructions(self) -> str:
return (
"Your response should be an integrater delimited by angled brackets like"
" this: <int>"
"Your response should be an integrater delimited by angled brackets"
" like this: <int>"
)
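The format instruction above asks each agent to answer with an integer in angled brackets; the module handles parsing through the BidOutputParser (a RegexParser subclass) instantiated as bid_parser below, but a minimal hand-rolled sketch of the same parsing step (helper name is illustrative) looks like this:

import re

def parse_bid(response: str) -> int:
    # Pull the integer out of a reply such as "My bid is <7>".
    match = re.search(r"<(\d+)>", response)
    if match is None:
        raise ValueError(f"No bid found in response: {response!r}")
    return int(match.group(1))

print(parse_bid("My bid is <7>"))  # 7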
@ -23,22 +23,6 @@ bid_parser = BidOutputParser(
)
def select_next_speaker_director(step: int, agents, director) -> int:
# if the step if even => director
# => director selects next speaker
if step % 2 == 1:
idx = 0
else:
idx = director.select_next_speaker() + 1
return idx
# Define a selection function
def select_speaker_round_table(step: int, agents) -> int:
# This function selects the speaker in a round-robin fashion
return step % len(agents)
# main
class MultiAgentCollaboration:
"""
@ -49,6 +33,15 @@ class MultiAgentCollaboration:
selection_function (callable): The function that selects the next speaker.
Defaults to select_next_speaker.
max_iters (int): The maximum number of iterations. Defaults to 10.
autosave (bool): Whether to autosave the state of all agents. Defaults to True.
saved_file_path_name (str): The path to the saved file. Defaults to
"multi_agent_collab.json".
stopping_token (str): The token that stops the collaboration. Defaults to
"<DONE>".
results (list): The results of the collaboration. Defaults to [].
logger (logging.Logger): The logger. Defaults to logger.
logging (bool): Whether to log the collaboration. Defaults to True.
Methods:
reset: Resets the state of all agents.
@ -62,18 +55,40 @@ class MultiAgentCollaboration:
Usage:
>>> from swarms.models import MultiAgentCollaboration
>>> from swarms.models import Flow
>>> from swarms.models import OpenAIChat
>>> from swarms.models import Anthropic
>>> from swarms.structs import Flow
>>> from swarms.swarms.multi_agent_collab import MultiAgentCollaboration
>>>
>>> # Initialize the language model
>>> llm = OpenAIChat(
>>> temperature=0.5,
>>> )
>>>
>>>
>>> ## Initialize the workflow
>>> flow = Flow(llm=llm, max_loops=1, dashboard=True)
>>>
>>> # Run the workflow on a task
>>> out = flow.run("Generate a 10,000 word blog on health and wellness.")
>>>
>>> # Initialize the multi-agent collaboration
>>> swarm = MultiAgentCollaboration(
>>> agents=[flow],
>>> max_iters=4,
>>> )
>>>
>>> # Run the multi-agent collaboration
>>> swarm.run()
>>>
>>> # Format the results of the multi-agent collaboration
>>> swarm.format_results(swarm.results)
"""
def __init__(
self,
agents: List[Flow],
selection_function: callable = select_next_speaker_director,
selection_function: callable = None,
max_iters: int = 10,
autosave: bool = True,
saved_file_path_name: str = "multi_agent_collab.json",
@ -165,7 +180,7 @@ class MultiAgentCollaboration:
),
retry_error_callback=lambda retry_state: 0,
)
def run(self):
def run_director(self, task: str):
"""Runs the multi-agent collaboration."""
n = 0
self.reset()
@ -179,10 +194,85 @@ class MultiAgentCollaboration:
print("\n")
n += 1
def select_next_speaker_roundtable(
self, step: int, agents: List[Flow]
) -> int:
"""Selects the next speaker."""
return step % len(agents)
def select_next_speaker_director(
step: int, agents: List[Flow], director
) -> int:
# if the step is even
# => the director selects the next speaker
if step % 2 == 1:
idx = 0
else:
idx = director.select_next_speaker() + 1
return idx
# def run(self, task: str):
# """Runs the multi-agent collaboration."""
# for step in range(self.max_iters):
# speaker_idx = self.select_next_speaker_roundtable(step, self.agents)
# speaker = self.agents[speaker_idx]
# result = speaker.run(task)
# self.results.append({"agent": speaker, "response": result})
# if self.autosave:
# self.save_state()
# if result == self.stopping_token:
# break
# return self.results
# def run(self, task: str):
# for _ in range(self.max_iters):
# for step, agent, in enumerate(self.agents):
# result = agent.run(task)
# self.results.append({"agent": agent, "response": result})
# if self.autosave:
# self.save_state()
# if result == self.stopping_token:
# break
# return self.results
# def run(self, task: str):
# conversation = task
# for _ in range(self.max_iters):
# for agent in self.agents:
# result = agent.run(conversation)
# self.results.append({"agent": agent, "response": result})
# conversation = result
# if self.autosave:
# self.save()
# if result == self.stopping_token:
# break
# return self.results
def run(self, task: str):
conversation = task
for _ in range(self.max_iters):
for agent in self.agents:
result = agent.run(conversation)
self.results.append({"agent": agent, "response": result})
conversation += result
if self.autosave:
self.save_state()
if result == self.stopping_token:
break
return self.results
def format_results(self, results):
"""Formats the results of the run method"""
formatted_results = "\n".join(
[f"{result['agent']} responded: {result['response']}" for result in results]
[
f"{result['agent']} responded: {result['response']}"
for result in results
]
)
return formatted_results
@ -208,7 +298,12 @@ class MultiAgentCollaboration:
return state
def __repr__(self):
return f"MultiAgentCollaboration(agents={self.agents}, selection_function={self.select_next_speaker}, max_iters={self.max_iters}, autosave={self.autosave}, saved_file_path_name={self.saved_file_path_name})"
return (
f"MultiAgentCollaboration(agents={self.agents},"
f" selection_function={self.select_next_speaker},"
f" max_iters={self.max_iters}, autosave={self.autosave},"
f" saved_file_path_name={self.saved_file_path_name})"
)
def performance(self):
"""Tracks and reports the performance of each agent"""

@ -111,7 +111,9 @@ class Orchestrator:
self.chroma_client = chromadb.Client()
self.collection = self.chroma_client.create_collection(name=collection_name)
self.collection = self.chroma_client.create_collection(
name=collection_name
)
self.current_tasks = {}
@ -148,13 +150,14 @@ class Orchestrator:
)
logging.info(
f"Task {id(str)} has been processed by agent {id(agent)} with"
f"Task {id(str)} has been processed by agent"
f" {id(agent)} with"
)
except Exception as error:
logging.error(
f"Failed to process task {id(task)} by agent {id(agent)}. Error:"
f" {error}"
f"Failed to process task {id(task)} by agent {id(agent)}."
f" Error: {error}"
)
finally:
with self.condition:
@ -175,7 +178,9 @@ class Orchestrator:
try:
# Query the vector database for documents created by the agents
results = self.collection.query(query_texts=[str(agent_id)], n_results=10)
results = self.collection.query(
query_texts=[str(agent_id)], n_results=10
)
return results
except Exception as e:
@ -212,7 +217,9 @@ class Orchestrator:
self.collection.add(documents=[result], ids=[str(id(result))])
except Exception as e:
logging.error(f"Failed to append the agent output to database. Error: {e}")
logging.error(
f"Failed to append the agent output to database. Error: {e}"
)
raise
def run(self, objective: str):
@ -226,7 +233,9 @@ class Orchestrator:
results = [
self.assign_task(agent_id, task)
for agent_id, task in zip(range(len(self.agents)), self.task_queue)
for agent_id, task in zip(
range(len(self.agents)), self.task_queue
)
]
for result in results:

@ -6,7 +6,9 @@ from typing import Optional
import pandas as pd
import torch
from langchain.agents import tool
from langchain.agents.agent_toolkits.pandas.base import create_pandas_dataframe_agent
from langchain.agents.agent_toolkits.pandas.base import (
create_pandas_dataframe_agent,
)
from langchain.chains.qa_with_sources.loading import (
BaseCombineDocumentsChain,
)
@ -38,7 +40,10 @@ def pushd(new_dir):
@tool
def process_csv(
llm, csv_file_path: str, instructions: str, output_path: Optional[str] = None
llm,
csv_file_path: str,
instructions: str,
output_path: Optional[str] = None,
) -> str:
"""Process a CSV by with pandas in a limited REPL.\
Only use this after writing data to disk as a csv file.\
@ -49,7 +54,9 @@ def process_csv(
df = pd.read_csv(csv_file_path)
except Exception as e:
return f"Error: {e}"
agent = create_pandas_dataframe_agent(llm, df, max_iterations=30, verbose=False)
agent = create_pandas_dataframe_agent(
llm, df, max_iterations=30, verbose=False
)
if output_path is not None:
instructions += f" Save output to disk at {output_path}"
try:
@ -79,7 +86,9 @@ async def async_load_playwright(url: str) -> str:
text = soup.get_text()
lines = (line.strip() for line in text.splitlines())
chunks = (phrase.strip() for line in lines for phrase in line.split(" "))
chunks = (
phrase.strip() for line in lines for phrase in line.split(" ")
)
results = "\n".join(chunk for chunk in chunks if chunk)
except Exception as e:
results = f"Error: {e}"
@ -110,7 +119,8 @@ def _get_text_splitter():
class WebpageQATool(BaseTool):
name = "query_webpage"
description = (
"Browse a webpage and retrieve the information relevant to the question."
"Browse a webpage and retrieve the information relevant to the"
" question."
)
text_splitter: RecursiveCharacterTextSplitter = Field(
default_factory=_get_text_splitter
@ -176,7 +186,9 @@ def VQAinference(self, inputs):
image_path, question = inputs.split(",")
raw_image = Image.open(image_path).convert("RGB")
inputs = processor(raw_image, question, return_tensors="pt").to(device, torch_dtype)
inputs = processor(raw_image, question, return_tensors="pt").to(
device, torch_dtype
)
out = model.generate(**inputs)
answer = processor.decode(out[0], skip_special_tokens=True)

@ -28,7 +28,9 @@ class MaskFormer:
def __init__(self, device):
print("Initializing MaskFormer to %s" % device)
self.device = device
self.processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
self.processor = CLIPSegProcessor.from_pretrained(
"CIDAS/clipseg-rd64-refined"
)
self.model = CLIPSegForImageSegmentation.from_pretrained(
"CIDAS/clipseg-rd64-refined"
).to(device)
@ -76,23 +78,26 @@ class ImageEditing:
@tool(
name="Remove Something From The Photo",
description=(
"useful when you want to remove and object or something from the photo "
"from its description or location. "
"The input to this tool should be a comma separated string of two, "
"representing the image_path and the object need to be removed. "
"useful when you want to remove and object or something from the"
" photo from its description or location. The input to this tool"
" should be a comma separated string of two, representing the"
" image_path and the object need to be removed. "
),
)
def inference_remove(self, inputs):
image_path, to_be_removed_txt = inputs.split(",")
return self.inference_replace(f"{image_path},{to_be_removed_txt},background")
return self.inference_replace(
f"{image_path},{to_be_removed_txt},background"
)
@tool(
name="Replace Something From The Photo",
description=(
"useful when you want to replace an object from the object description or"
" location with another object from its description. The input to this tool"
" should be a comma separated string of three, representing the image_path,"
" the object to be replaced, the object to be replaced with "
"useful when you want to replace an object from the object"
" description or location with another object from its description."
" The input to this tool should be a comma separated string of"
" three, representing the image_path, the object to be replaced,"
" the object to be replaced with "
),
)
def inference_replace(self, inputs):
@ -137,10 +142,10 @@ class InstructPix2Pix:
@tool(
name="Instruct Image Using Text",
description=(
"useful when you want to the style of the image to be like the text. "
"like: make it look like a painting. or make it like a robot. "
"The input to this tool should be a comma separated string of two, "
"representing the image_path and the text. "
"useful when you want to the style of the image to be like the"
" text. like: make it look like a painting. or make it like a"
" robot. The input to this tool should be a comma separated string"
" of two, representing the image_path and the text. "
),
)
def inference(self, inputs):
@ -149,14 +154,17 @@ class InstructPix2Pix:
image_path, text = inputs.split(",")[0], ",".join(inputs.split(",")[1:])
original_image = Image.open(image_path)
image = self.pipe(
text, image=original_image, num_inference_steps=40, image_guidance_scale=1.2
text,
image=original_image,
num_inference_steps=40,
image_guidance_scale=1.2,
).images[0]
updated_image_path = get_new_image_name(image_path, func_name="pix2pix")
image.save(updated_image_path)
logger.debug(
f"\nProcessed InstructPix2Pix, Input Image: {image_path}, Instruct Text:"
f" {text}, Output Image: {updated_image_path}"
f"\nProcessed InstructPix2Pix, Input Image: {image_path}, Instruct"
f" Text: {text}, Output Image: {updated_image_path}"
)
return updated_image_path
@ -173,17 +181,18 @@ class Text2Image:
self.pipe.to(device)
self.a_prompt = "best quality, extremely detailed"
self.n_prompt = (
"longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, "
"fewer digits, cropped, worst quality, low quality"
"longbody, lowres, bad anatomy, bad hands, missing fingers, extra"
" digit, fewer digits, cropped, worst quality, low quality"
)
@tool(
name="Generate Image From User Input Text",
description=(
"useful when you want to generate an image from a user input text and save"
" it to a file. like: generate an image of an object or something, or"
" generate an image that includes some objects. The input to this tool"
" should be a string, representing the text used to generate image. "
"useful when you want to generate an image from a user input text"
" and save it to a file. like: generate an image of an object or"
" something, or generate an image that includes some objects. The"
" input to this tool should be a string, representing the text used"
" to generate image. "
),
)
def inference(self, text):
@ -205,7 +214,9 @@ class VisualQuestionAnswering:
print("Initializing VisualQuestionAnswering to %s" % device)
self.torch_dtype = torch.float16 if "cuda" in device else torch.float32
self.device = device
self.processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-base")
self.processor = BlipProcessor.from_pretrained(
"Salesforce/blip-vqa-base"
)
self.model = BlipForQuestionAnswering.from_pretrained(
"Salesforce/blip-vqa-base", torch_dtype=self.torch_dtype
).to(self.device)
@ -213,10 +224,11 @@ class VisualQuestionAnswering:
@tool(
name="Answer Question About The Image",
description=(
"useful when you need an answer for a question based on an image. like:"
" what is the background color of the last image, how many cats in this"
" figure, what is in this figure. The input to this tool should be a comma"
" separated string of two, representing the image_path and the question"
"useful when you need an answer for a question based on an image."
" like: what is the background color of the last image, how many"
" cats in this figure, what is in this figure. The input to this"
" tool should be a comma separated string of two, representing the"
" image_path and the question"
),
)
def inference(self, inputs):
@ -229,8 +241,8 @@ class VisualQuestionAnswering:
answer = self.processor.decode(out[0], skip_special_tokens=True)
logger.debug(
f"\nProcessed VisualQuestionAnswering, Input Image: {image_path}, Input"
f" Question: {question}, Output Answer: {answer}"
f"\nProcessed VisualQuestionAnswering, Input Image: {image_path},"
f" Input Question: {question}, Output Answer: {answer}"
)
return answer
@ -245,7 +257,8 @@ class ImageCaptioning(BaseHandler):
"Salesforce/blip-image-captioning-base"
)
self.model = BlipForConditionalGeneration.from_pretrained(
"Salesforce/blip-image-captioning-base", torch_dtype=self.torch_dtype
"Salesforce/blip-image-captioning-base",
torch_dtype=self.torch_dtype,
).to(self.device)
def handle(self, filename: str):
@ -264,8 +277,8 @@ class ImageCaptioning(BaseHandler):
out = self.model.generate(**inputs)
description = self.processor.decode(out[0], skip_special_tokens=True)
print(
f"\nProcessed ImageCaptioning, Input Image: {filename}, Output Text:"
f" {description}"
f"\nProcessed ImageCaptioning, Input Image: {filename}, Output"
f" Text: {description}"
)
return IMAGE_PROMPT.format(filename=filename, description=description)

@ -7,7 +7,17 @@ import warnings
from abc import abstractmethod
from functools import partial
from inspect import signature
from typing import Any, Awaitable, Callable, Dict, List, Optional, Tuple, Type, Union
from typing import (
Any,
Awaitable,
Callable,
Dict,
List,
Optional,
Tuple,
Type,
Union,
)
from langchain.callbacks.base import BaseCallbackManager
from langchain.callbacks.manager import (
@ -27,7 +37,11 @@ from pydantic import (
root_validator,
validate_arguments,
)
from langchain.schema.runnable import Runnable, RunnableConfig, RunnableSerializable
from langchain.schema.runnable import (
Runnable,
RunnableConfig,
RunnableSerializable,
)
class SchemaAnnotationError(TypeError):
@ -52,7 +66,11 @@ def _get_filtered_args(
"""Get the arguments from a function's signature."""
schema = inferred_model.schema()["properties"]
valid_keys = signature(func).parameters
return {k: schema[k] for k in valid_keys if k not in ("run_manager", "callbacks")}
return {
k: schema[k]
for k in valid_keys
if k not in ("run_manager", "callbacks")
}
class _SchemaConfig:
@ -120,12 +138,11 @@ class ChildTool(BaseTool):
..."""
name = cls.__name__
raise SchemaAnnotationError(
f"Tool definition for {name} must include valid type annotations"
" for argument 'args_schema' to behave as expected.\n"
"Expected annotation of 'Type[BaseModel]'"
f" but got '{args_schema_type}'.\n"
"Expected class looks like:\n"
f"{typehint_mandate}"
f"Tool definition for {name} must include valid type"
" annotations for argument 'args_schema' to behave as"
" expected.\nExpected annotation of 'Type[BaseModel]' but"
f" got '{args_schema_type}'.\nExpected class looks"
f" like:\n{typehint_mandate}"
)
name: str
@ -147,7 +164,9 @@ class ChildTool(BaseTool):
callbacks: Callbacks = Field(default=None, exclude=True)
"""Callbacks to be called during tool execution."""
callback_manager: Optional[BaseCallbackManager] = Field(default=None, exclude=True)
callback_manager: Optional[BaseCallbackManager] = Field(
default=None, exclude=True
)
"""Deprecated. Please use callbacks instead."""
tags: Optional[List[str]] = None
"""Optional list of tags associated with the tool. Defaults to None
@ -244,7 +263,9 @@ class ChildTool(BaseTool):
else:
if input_args is not None:
result = input_args.parse_obj(tool_input)
return {k: v for k, v in result.dict().items() if k in tool_input}
return {
k: v for k, v in result.dict().items() if k in tool_input
}
return tool_input
@root_validator()
@ -286,7 +307,9 @@ class ChildTool(BaseTool):
*args,
)
def _to_args_and_kwargs(self, tool_input: Union[str, Dict]) -> Tuple[Tuple, Dict]:
def _to_args_and_kwargs(
self, tool_input: Union[str, Dict]
) -> Tuple[Tuple, Dict]:
# For backwards compatibility, if run_input is a string,
# pass as a positional argument.
if isinstance(tool_input, str):
@ -353,8 +376,9 @@ class ChildTool(BaseTool):
observation = self.handle_tool_error(e)
else:
raise ValueError(
"Got unexpected type of `handle_tool_error`. Expected bool, str "
f"or callable. Received: {self.handle_tool_error}"
"Got unexpected type of `handle_tool_error`. Expected"
" bool, str or callable. Received:"
f" {self.handle_tool_error}"
)
run_manager.on_tool_end(
str(observation), color="red", name=self.name, **kwargs
@ -409,7 +433,9 @@ class ChildTool(BaseTool):
# We then call the tool on the tool input to get an observation
tool_args, tool_kwargs = self._to_args_and_kwargs(parsed_input)
observation = (
await self._arun(*tool_args, run_manager=run_manager, **tool_kwargs)
await self._arun(
*tool_args, run_manager=run_manager, **tool_kwargs
)
if new_arg_supported
else await self._arun(*tool_args, **tool_kwargs)
)
@ -428,8 +454,9 @@ class ChildTool(BaseTool):
observation = self.handle_tool_error(e)
else:
raise ValueError(
"Got unexpected type of `handle_tool_error`. Expected bool, str "
f"or callable. Received: {self.handle_tool_error}"
"Got unexpected type of `handle_tool_error`. Expected"
" bool, str or callable. Received:"
f" {self.handle_tool_error}"
)
await run_manager.on_tool_end(
str(observation), color="red", name=self.name, **kwargs
@ -484,14 +511,17 @@ class Tool(BaseTool):
# assume it takes a single string input.
return {"tool_input": {"type": "string"}}
def _to_args_and_kwargs(self, tool_input: Union[str, Dict]) -> Tuple[Tuple, Dict]:
def _to_args_and_kwargs(
self, tool_input: Union[str, Dict]
) -> Tuple[Tuple, Dict]:
"""Convert tool input to pydantic model."""
args, kwargs = super()._to_args_and_kwargs(tool_input)
# For backwards compatibility. The tool must be run with a single input
all_args = list(args) + list(kwargs.values())
if len(all_args) != 1:
raise ToolException(
f"Too many arguments to single-input tool {self.name}. Args: {all_args}"
f"Too many arguments to single-input tool {self.name}. Args:"
f" {all_args}"
)
return tuple(all_args), {}
@ -503,7 +533,9 @@ class Tool(BaseTool):
) -> Any:
"""Use the tool."""
if self.func:
new_argument_supported = signature(self.func).parameters.get("callbacks")
new_argument_supported = signature(self.func).parameters.get(
"callbacks"
)
return (
self.func(
*args,
@ -537,12 +569,18 @@ class Tool(BaseTool):
)
else:
return await asyncio.get_running_loop().run_in_executor(
None, partial(self._run, run_manager=run_manager, **kwargs), *args
None,
partial(self._run, run_manager=run_manager, **kwargs),
*args,
)
# TODO: this is for backwards compatibility, remove in future
def __init__(
self, name: str, func: Optional[Callable], description: str, **kwargs: Any
self,
name: str,
func: Optional[Callable],
description: str,
**kwargs: Any,
) -> None:
"""Initialize tool."""
super(Tool, self).__init__(
@ -617,7 +655,9 @@ class StructuredTool(BaseTool):
) -> Any:
"""Use the tool."""
if self.func:
new_argument_supported = signature(self.func).parameters.get("callbacks")
new_argument_supported = signature(self.func).parameters.get(
"callbacks"
)
return (
self.func(
*args,
@ -714,7 +754,9 @@ class StructuredTool(BaseTool):
description = f"{name}{sig} - {description.strip()}"
_args_schema = args_schema
if _args_schema is None and infer_schema:
_args_schema = create_schema_from_function(f"{name}Schema", source_function)
_args_schema = create_schema_from_function(
f"{name}Schema", source_function
)
return cls(
name=name,
func=func,
@ -772,7 +814,9 @@ def tool(
async def ainvoke_wrapper(
callbacks: Optional[Callbacks] = None, **kwargs: Any
) -> Any:
return await runnable.ainvoke(kwargs, {"callbacks": callbacks})
return await runnable.ainvoke(
kwargs, {"callbacks": callbacks}
)
def invoke_wrapper(
callbacks: Optional[Callbacks] = None, **kwargs: Any
@ -821,7 +865,11 @@ def tool(
return _make_tool
if len(args) == 2 and isinstance(args[0], str) and isinstance(args[1], Runnable):
if (
len(args) == 2
and isinstance(args[0], str)
and isinstance(args[1], Runnable)
):
return _make_with_name(args[0])(args[1])
elif len(args) == 1 and isinstance(args[0], str):
# if the argument is a string, then we use the string as the tool name

@ -1,4 +1,4 @@
from swarms.utils.display_markdown import display_markdown_message
from swarms.utils.markdown_message import display_markdown_message
from swarms.utils.futures import execute_futures_dict
from swarms.utils.code_interpreter import SubprocessCodeInterpreter
from swarms.utils.parse_code import extract_code_in_backticks_in_string

@ -144,7 +144,9 @@ class Singleton(abc.ABCMeta, type):
def __call__(cls, *args, **kwargs):
"""Call method for the singleton metaclass."""
if cls not in cls._instances:
cls._instances[cls] = super(Singleton, cls).__call__(*args, **kwargs)
cls._instances[cls] = super(Singleton, cls).__call__(
*args, **kwargs
)
return cls._instances[cls]
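A quick usage sketch of the Singleton metaclass above; the Config class is hypothetical and only shows that repeated instantiation returns the same cached object:

class Config(metaclass=Singleton):
    def __init__(self, value: int = 0):
        self.value = value

a = Config(value=1)
b = Config(value=2)
print(a is b)    # True: the second call returns the instance cached by the metaclass
print(b.value)   # 1: __init__ is not run again on the cached instance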

@ -116,14 +116,20 @@ class SubprocessCodeInterpreter(BaseCodeInterpreter):
# Most of the time it doesn't matter, but we should figure out why it happens frequently with:
# applescript
yield {"output": traceback.format_exc()}
yield {"output": f"Retrying... ({retry_count}/{max_retries})"}
yield {
"output": f"Retrying... ({retry_count}/{max_retries})"
}
yield {"output": "Restarting process."}
self.start_process()
retry_count += 1
if retry_count > max_retries:
yield {"output": "Maximum retries reached. Could not execute code."}
yield {
"output": (
"Maximum retries reached. Could not execute code."
)
}
return
while True:
@ -132,7 +138,9 @@ class SubprocessCodeInterpreter(BaseCodeInterpreter):
else:
time.sleep(0.1)
try:
output = self.output_queue.get(timeout=0.3) # Waits for 0.3 seconds
output = self.output_queue.get(
timeout=0.3
) # Waits for 0.3 seconds
yield output
except queue.Empty:
if self.done.is_set():
