---
title: "Configure"
description: "Configure your 01 instance"
---
A core part of the 01 server is the interpreter, which is an instance of Open Interpreter.
Open Interpreter is highly configurable; customizing it only requires updating or creating a profile.
Properties such as `model`, `context_window`, and many more can be updated here.
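Inside a profile, these properties are plain attribute assignments on the `interpreter` object. The snippet below is only an illustration; the `context_window` attribute name and the values shown are assumptions to check against your installed version:
```python
# Illustrative only: placeholder values, attribute names as assumed from this page
interpreter.model = "claude-3-5-sonnet-20240620"  # which hosted LLM to use
interpreter.context_window = 100000               # assumed attribute name and placeholder size
```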
To open the directory of all profiles, run:
```bash
# View profiles
poetry run 01 --profiles
```
To apply a profile to your 01 instance, use the `--profile` flag followed by the name of the profile:
```bash
# Use profile
poetry run 01 --profile <profile_name>
```
### Standard Profiles
- `default.py` is the profile used when no profile is specified. The default TTS service is ElevenLabs.
- `fast.py` uses Cartesia for TTS and Cerebras-hosted Llama 3.1 8B, which are among the fastest providers available.
- `local.py` requires additional setup to be used with LiveKit. By default it uses faster-whisper for STT, `ollama/codestral` for the LLM, and Piper for TTS.
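For example, to start 01 with the fast profile (whether the `.py` extension is needed may depend on how your version resolves profile names):
```bash
# Launch 01 using the bundled fast profile
poetry run 01 --profile fast
```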
### Custom Profiles
If you want to make your own profile, create a new file in the `profiles` directory.
The easiest way is to duplicate an existing profile and then update values as needed. Be sure to save the profile with a unique name.
Remember to add `interpreter.tts = ` to set the text-to-speech provider; a minimal sketch follows.
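As a rough sketch, a custom profile (saved here under the hypothetical name `my_profile.py`) might only override a handful of values, reusing the attribute names from the full example profile below:
```python
from interpreter import Interpreter

interpreter = Interpreter()

# Override only the values you want to change; see the example profile below for all options.
interpreter.tts = "elevenlabs"                      # text-to-speech provider
interpreter.model = "claude-3-5-sonnet-20240620"    # LLM to use
interpreter.provider = "anthropic"
interpreter.instructions = "Be very concise in your responses."
```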
To use a custom profile, run:
```bash
# Use custom profile
poetry run 01 --profile <profile_name>
```
### Example Profile
````python
from interpreter import Interpreter
interpreter = Interpreter()
# This is an Open Interpreter compatible profile.
# Visit https://01.openinterpreter.com/profile for all options.
# 01 supports OpenAI, ElevenLabs, Cartesia, and Coqui (Local) TTS providers
# {OpenAI: "openai", ElevenLabs: "elevenlabs", Cartesia: "cartesia", Coqui: "coqui"}
interpreter.tts = "elevenlabs"
interpreter.stt = "deepgram"
# Set the identity and personality of your 01
interpreter.system_message = """
You are the 01, a screenless executive assistant that can complete any task.
When you execute code, it will be executed on the user's machine. The user has given you full and complete permission to execute any code necessary to complete the task.
Run any code to achieve the goal, and if at first you don't succeed, try again and again.
You can install new packages.
Be concise. Your messages are being read aloud to the user. DO NOT MAKE PLANS. RUN CODE QUICKLY.
Try to spread complex tasks over multiple code blocks. Don't try to do complex tasks in one go.
Manually summarize text."""
# Add additional instructions for the 01
interpreter.instructions = "Be very concise in your responses."
# Connect your 01 to a language model
interpreter.model = "claude-3-5-sonnet-20240620"
interpreter.provider = "anthropic"
interpreter.max_tokens = 4096
interpreter.temperature = 0
interpreter.api_key = "<your_anthropic_api_key_here>"
# Extra settings
interpreter.tools = ["interpreter", "editor"] # Enabled tool modules
interpreter.auto_run = True # Whether to auto-run tools without confirmation
interpreter.tool_calling = True # Whether to allow tool/function calling
interpreter.allowed_paths = [] # List of allowed paths
interpreter.allowed_commands = [] # List of allowed commands
````
### Hosted LLMs
The default LLM for 01 is Claude 3.5 Sonnet, set in the default profile at `software/source/server/profiles/default.py`.
The fast profile at `software/source/server/profiles/fast.py` uses Llama 3.1 8B served by Cerebras.
```python
# Set your profile with a hosted LLM
interpreter.model = "claude-3-5-sonnet-20240620"
interpreter.provider = "anthropic"
```
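The fast profile follows the same pattern. The model and provider strings below are assumptions based on the description above, so check `fast.py` for the values actually used:
```python
# Sketch of a Cerebras-hosted Llama 3.1 8B configuration (identifiers are assumptions; see fast.py)
interpreter.model = "llama3.1-8b"
interpreter.provider = "cerebras"
```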
### Local LLMs
You can use local models to power 01.
Using the local profile launches the Local Explorer, where you can select your inference provider and model. The default options include Llamafile, Jan, Ollama, and LM Studio.
```python
# Set your profile with a local LLM
interpreter.model = "ollama/codestral"
# You can also use the Local Explorer to interactively select your model
interpreter.local_setup()
```
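If you point the profile at an Ollama model as above, the model may need to be pulled locally first (assuming Ollama is installed):
```bash
# Download the codestral model for Ollama
ollama pull codestral
```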
### Hosted TTS
01 supports OpenAI, ElevenLabs, and Cartesia for hosted TTS.
```python
# Set your profile with a hosted TTS service
interpreter.tts = "elevenlabs"
```
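The fast profile uses Cartesia instead; assuming the provider string follows the same lowercase naming as the other TTS options:
```python
# Use Cartesia for TTS (as in the fast profile)
interpreter.tts = "cartesia"
```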
### Local TTS and STT with LiveKit
We recommend having Docker installed for the easiest setup. Local TTS and STT rely on the [openedai-speech](https://github.com/matatonic/openedai-speech?tab=readme-ov-file) and [faster-whisper-server](https://github.com/fedirz/faster-whisper-server) repositories, respectively.
#### Local TTS
1. Clone the [openedai-speech](https://github.com/matatonic/openedai-speech?tab=readme-ov-file) repository
2. Follow the Docker Image instructions for your system. By default, run `docker compose -f docker-compose.min.yml up --publish 9001:8000` in the root of the cloned repository.
3. Set your profile to use the local TTS service:
```python
interpreter.tts = "local"
```
#### Local STT
1. Clone the [faster-whisper-server](https://github.com/fedirz/faster-whisper-server) repository
2. Follow the Docker Compose Quick Start instructions for your system.
3. Run `docker run --publish 9002:8000 --volume ~/.cache/huggingface:/root/.cache/huggingface --env WHISPER__MODEL=Systran/faster-whisper-small --detach fedirz/faster-whisper-server:latest-cpu` to publish the server on host port 9002 instead of the default 8000, keeping it separate from the TTS service.
4. Set your profile to use the local STT service:
```python
interpreter.stt = "local"
```