---
title: "Setup"
description: "Get your 01 server up and running"
---

## Run Server

```bash
poetry run 01 --server
```

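With no extra flags, the server uses the defaults listed under CLI Flags below, so the command above is equivalent to:

```bash
# the documented defaults spelled out: bind all interfaces on port 10001
poetry run 01 --server --server-host 0.0.0.0 --server-port 10001
```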
## Configure

A core part of the 01 server is the interpreter, which is an instance of Open Interpreter. Open Interpreter is highly configurable and only requires updating a single file.

```bash
# Edit i.py
software/source/server/i.py
```

Properties such as `model`, `context_window`, and many more can be updated here.
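
As a rough sketch only (the exact contents of `i.py` vary by version; the attribute names below follow Open Interpreter's settings interface and are illustrative, not the shipped file):

```python
# illustrative excerpt of software/source/server/i.py — not the shipped file
from interpreter import interpreter

interpreter.llm.model = "gpt-4"        # which model the server uses
interpreter.llm.context_window = 2048  # tokens of context the model accepts
interpreter.llm.max_tokens = 4096      # maximum tokens to generate per response
```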
### LLM service provider

If you wish to use a local model, you can use the `--llm-service` flag:

```bash
# use llamafile
poetry run 01 --server --llm-service llamafile
```

For more information about LLM service providers, check out the page on <a href="/services/language-model">Language Models</a>.
### Voice Interface
Both speech-to-text and text-to-speech can be configured in 01OS.
You can pass the CLI flags `--tts-service` and/or `--stt-service` with the desired service provider to swap out different services.

These service providers can be found in `/services/stt` and `/services/tts`.

For more information, please read about <a href="/services/speech-to-text">speech-to-text</a> and <a href="/services/text-to-speech">text-to-speech</a>.
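
For example, to state the speech services explicitly (`openai` is the default for both):

```bash
# explicitly select the speech-to-text and text-to-speech providers
poetry run 01 --server --stt-service openai --tts-service openai
```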
## CLI Flags

- `--server`
  Run the server.

- `--server-host TEXT`
  Specify the host address the server will bind to.
  Default: `0.0.0.0`.

- `--server-port INTEGER`
  Specify the port the server will listen on.
  Default: `10001`.

- `--tunnel-service TEXT`
  Specify the tunnel service.
  Default: `ngrok`.

- `--expose`
  Expose the server to the internet.

- `--server-url TEXT`
  Specify the server URL that the client should expect. Defaults to the server host and port.
  Default: `None`.

- `--llm-service TEXT`
  Specify the LLM service.
  Default: `litellm`.

- `--model TEXT`
  Specify the model.
  Default: `gpt-4`.

- `--llm-supports-vision`
  Specify that the LLM service supports vision.

- `--llm-supports-functions`
  Specify that the LLM service supports function calling.

- `--context-window INTEGER`
  Specify the context window size.
  Default: `2048`.

- `--max-tokens INTEGER`
  Specify the maximum number of tokens.
  Default: `4096`.

- `--temperature FLOAT`
  Specify the sampling temperature for generation.
  Default: `0.8`.

- `--tts-service TEXT`
  Specify the text-to-speech service.
  Default: `openai`.

- `--stt-service TEXT`
  Specify the speech-to-text service.
  Default: `openai`.

- `--local`
  Use the recommended local services for the LLM, STT, and TTS.

- `--install-completion [bash|zsh|fish|powershell|pwsh]`
  Install completion for the specified shell.
  Default: `None`.

- `--show-completion [bash|zsh|fish|powershell|pwsh]`
  Show completion for the specified shell, to copy it or customize the installation.
  Default: `None`.

- `--help`
  Show this message and exit.
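
As a sketch, several of these flags can be combined in one invocation:

```bash
# illustrative combined invocation: custom port, exposed via the ngrok tunnel,
# using a local llamafile model with a smaller generation budget
poetry run 01 --server --server-port 8000 --expose --llm-service llamafile --max-tokens 1024
```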