# swarms
```bash
pip install swarms
```
swarms is the most pythonic way of writing cognitive systems. Leveraging pydantic models as output schemas, combined with langchain in the backend, allows for seamless integration of LLMs into your apps. It utilizes OpenAI Functions or LlamaCpp grammars (json-schema mode) for efficient structured output. Under the hood it compiles the swarms syntax into langchain runnables, so you can easily invoke, stream, or batch-process your pipelines.
Open in GitHub Codespaces
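The invoke / stream / batch part of that claim maps onto the standard langchain runnable interface. Below is a minimal sketch of that underlying pattern in plain LCEL; it is not the swarms API itself, and the prompt, model, and inputs are placeholder assumptions:

```python
# Minimal LCEL sketch of the runnable pattern described above.
# This is plain langchain, not the swarms syntax; prompt and model are examples.
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("Summarize in one line: {text}")
chain = prompt | ChatOpenAI() | StrOutputParser()  # a composed Runnable

result = chain.invoke({"text": "LLM pipelines as runnables."})      # single call
for chunk in chain.stream({"text": "Streaming token by token."}):   # streaming
    print(chunk, end="")
results = chain.batch([{"text": "first"}, {"text": "second"}])      # batch processing
```

Compiling pipelines down to runnables is what lets the same pipeline be invoked, streamed, or batched without being rewritten. The full swarms agent example follows below.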
```python
from pydantic import BaseModel, Field

from swarms import Agent, Anthropic


# Define the schema for the person's information
class Schema(BaseModel):
    name: str = Field(..., title="Name of the person")
    age: int = Field(..., title="Age of the person")
    is_student: bool = Field(..., title="Whether the person is a student")
    courses: list[str] = Field(
        ..., title="List of courses the person is taking"
    )


# Instantiate the schema with example data
tool_schema = Schema(
    name="Tool Name",
    age=1,
    is_student=True,
    courses=["Course1", "Course2"],
)

# Define the task to generate a person's information
task = "Generate a person's information based on the following schema:"

# Initialize the agent
agent = Agent(
    agent_name="Person Information Generator",
    system_prompt=(
        "Generate a person's information based on the following schema:"
    ),
    # Pass the schema instance to the agent -- this is the key difference
    tool_schema=tool_schema,
    llm=Anthropic(),
    max_loops=3,
    autosave=True,
    dashboard=False,
    streaming_on=True,
    verbose=True,
    interactive=True,
    # Set the output type to the tool schema, which is a BaseModel
    output_type=tool_schema,  # or dict, or str
    metadata_output_type="json",
    # List of schemas that the agent can handle
    list_tool_schemas=[tool_schema],
    function_calling_format_type="OpenAI",
    function_calling_type="json",  # or soon yaml
)

# Run the agent to generate the person's information
generated_data = agent.run(task)

# Print the generated data
print(f"Generated data: {generated_data}")
```
## Features
- 🐍 pythonic
- 🔀 easy swap between openai or local models
- 🔄 dynamic output types (pydantic models, or primitives)
- 👁️ vision llm support
- 🧠 langchain_core as backend
- 📝 jinja templating for prompts
- 🏗️ reliable structured output
- 🔁 auto retry parsing
- 🔧 langsmith support
- 🔄 sync, async, streaming, parallel, fallbacks
- 📦 gguf download from huggingface
- ✅ type hints for all functions and mypy support
- 🗣️ chat router component
- 🧩 composable with langchain LCEL
- 🛠️ easy error handling
- 🚦 enums and literal support (see the sketch after this list)
- 📐 custom parsing types
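As a concrete illustration of the enum and literal support, here is a minimal sketch of such an output schema in plain pydantic. The field names and values are illustrative placeholders; wiring the schema into an agent follows the same `tool_schema` / `output_type` pattern shown in the example above.

```python
# Sketch of an output schema using an Enum and a Literal field (plain pydantic).
# Field names and values are illustrative placeholders.
from enum import Enum
from typing import Literal

from pydantic import BaseModel, Field


class Mood(str, Enum):
    HAPPY = "happy"
    NEUTRAL = "neutral"
    SAD = "sad"


class ReviewSummary(BaseModel):
    sentiment: Literal["positive", "negative"] = Field(
        ..., title="Overall sentiment of the review"
    )
    mood: Mood = Field(..., title="Mood expressed by the reviewer")
```

Constraining fields to enums or literals narrows the JSON schema the model has to satisfy, which helps keep the structured output reliable.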
## Documentation
Check out the docs here 👈
We also highly recommend trying out and running the examples in the ./examples folder.
## Contribution
You want to contribute? Thanks, that's great! For more information, check out the Contributing Guide. Please run the dev setup to get started:
```bash
git clone https://github.com/kyegomez/swarms.git && cd swarms

./dev_setup.sh
```
## About
⛓️ build cognitive systems, pythonic