diff --git a/docs/client/setup.mdx b/docs/client/setup.mdx
index 729faf6..526fc3c 100644
--- a/docs/client/setup.mdx
+++ b/docs/client/setup.mdx
@@ -36,3 +36,12 @@ poetry run 01
 ```bash
 poetry run 01 --client
 ```
+
+### Flags
+
+- `--client`
+  Run client.
+
+- `--client-type TEXT`
+  Specify the client type.
+  Default: `auto`.
diff --git a/docs/server/setup.mdx b/docs/server/setup.mdx
index a5bdb20..bb03b64 100644
--- a/docs/server/setup.mdx
+++ b/docs/server/setup.mdx
@@ -3,9 +3,86 @@ title: "Setup"
 description: "Get your 01 server up and running"
 ---
 
-Setup (just run start.py --server , explain the flags (revealed via start.py --help))
-
-- Interpreter
-  - Open Interpreter (explains i.py, how you configure your interpreter, cover the basic settings of OI (that file is literally just modifying an interpreter from OI)
-  - Language Model (LLM setup via interpreter.model in i.py or from the command line via start.py --server --llm-service llamafile)
-  - Voice Interface (explains that you can run --tts-service and --stt-service to swap out for different services, which are in /Services/Speech-to-text and /Services/Text-to-text)
+
+## Run Server
+
+```bash
+poetry run 01 --server
+```
+
+## Flags
+
+- `--server`
+  Run server.
+
+- `--server-host TEXT`
+  Specify the server host where the server will deploy.
+  Default: `0.0.0.0`.
+
+- `--server-port INTEGER`
+  Specify the server port where the server will deploy.
+  Default: `8000`.
+
+- `--tunnel-service TEXT`
+  Specify the tunnel service.
+  Default: `ngrok`.
+
+- `--expose`
+  Expose server to internet.
+
+- `--server-url TEXT`
+  Specify the server URL that the client should expect.
+  Defaults to server-host and server-port.
+  Default: `None`.
+
+- `--llm-service TEXT`
+  Specify the LLM service.
+  Default: `litellm`.
+
+- `--model TEXT`
+  Specify the model.
+  Default: `gpt-4`.
+
+- `--llm-supports-vision`
+  Specify if the LLM service supports vision.
+
+- `--llm-supports-functions`
+  Specify if the LLM service supports functions.
+
+- `--context-window INTEGER`
+  Specify the context window size.
+  Default: `2048`.
+
+- `--max-tokens INTEGER`
+  Specify the maximum number of tokens.
+  Default: `4096`.
+
+- `--temperature FLOAT`
+  Specify the temperature for generation.
+  Default: `0.8`.
+
+- `--tts-service TEXT`
+  Specify the TTS service.
+  Default: `openai`.
+
+- `--stt-service TEXT`
+  Specify the STT service.
+  Default: `openai`.
+
+- `--local`
+  Use recommended local services for LLM, STT, and TTS.
+
+- `--install-completion [bash|zsh|fish|powershell|pwsh]`
+  Install completion for the specified shell.
+  Default: `None`.
+
+- `--show-completion [bash|zsh|fish|powershell|pwsh]`
+  Show completion for the specified shell, to copy it or customize the installation.
+  Default: `None`.
+
+- `--help`
+  Show this message and exit.
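
The flags documented in this patch compose on one command line; a few illustrative invocations, using only flags the patch documents (the port value `9000` below is an example, not a default):

```shell
# Run the server on the documented defaults (host 0.0.0.0, port 8000)
poetry run 01 --server

# Run on a custom port and expose it through the default ngrok tunnel
poetry run 01 --server --server-port 9000 --expose

# Use the recommended local services for LLM, STT, and TTS
poetry run 01 --server --local
```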