diff --git a/docs/bodies/01-light.mdx b/docs/bodies/01-light.mdx
index d669537..ed27f1c 100644
--- a/docs/bodies/01-light.mdx
+++ b/docs/bodies/01-light.mdx
@@ -4,3 +4,5 @@ description: "Build your 01 Light"
 ---
 
 01 Light (one pager that points to the STL, wiring diagrams, and points to the ESP32 client setup page^)
+
+For CAD files, wiring diagrams, and images, please visit the [01 Light hardware repository](https://github.com/OpenInterpreter/01/tree/main/hardware/light).
diff --git a/docs/client/setup.mdx b/docs/client/setup.mdx
index 8e10b5e..526fc3c 100644
--- a/docs/client/setup.mdx
+++ b/docs/client/setup.mdx
@@ -5,5 +5,43 @@ description: "Get your 01 client up and running"
 
 (lets you pick from a grid of available clients)
 
-- ESP32 (instructions for flashing it)
-- Desktop (basically says "just run start.py with no args, that will run the server with a client, or start.py --client to just run the client")
+## ESP32 Playback
+
+To set up audio recording and playback on the ESP32 (M5 Atom), do the following:
+
+1. Open the Arduino IDE and open the `client/client.ino` file.
+2. Go to Tools -> Board -> Boards Manager, search for "esp32", then install the boards by Arduino and Espressif.
+3. Go to Tools -> Manage Libraries, then install the following:
+
+- M5Atom by M5Stack [Reference](https://www.arduino.cc/reference/en/libraries/m5atom/)
+- WebSockets by Markus Sattler [Reference](https://www.arduino.cc/reference/en/libraries/websockets/)
+
+4. To flash the `.ino` to the board, connect the board to a USB port, select the port from the dropdown in the IDE, then select the M5Atom board (or M5Stack-ATOM, if that is what you have). Click Upload to flash the board.
+5. The board needs to connect to WiFi. After flashing, connect to the ESP32's "captive" WiFi network, which will collect your WiFi details. Once it connects, it will ask you to enter the 01OS server address in the format "domain.com:port" or "ip:port". Once it is able to connect, you can use the device.
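+
+If you prefer the command line to the Arduino IDE, the steps above can be sketched with `arduino-cli`. This is an untested sketch, not part of the official instructions: the board FQBN and serial port below are assumptions, so adjust them for your system (check `arduino-cli board list`).
+
+```bash
+# install ESP32 board support (steps 1-2) and the libraries from step 3
+arduino-cli core update-index --additional-urls https://espressif.github.io/arduino-esp32/package_esp32_index.json
+arduino-cli core install esp32:esp32 --additional-urls https://espressif.github.io/arduino-esp32/package_esp32_index.json
+arduino-cli lib install "M5Atom"
+arduino-cli lib install "WebSockets"
+
+# compile and flash the sketch (FQBN and port are assumptions)
+arduino-cli compile --fqbn esp32:esp32:m5stack-atom client/
+arduino-cli upload -p /dev/ttyUSB0 --fqbn esp32:esp32:m5stack-atom client/
+```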
+
+## Desktop
+
+### Server with a client
+
+```bash
+# install dependencies
+poetry install
+
+# run start.py with no args
+poetry run 01
+```
+
+### Client only
+
+```bash
+poetry run 01 --client
+```
+
+### Flags
+
+- `--client`
+  Run the client.
+
+- `--client-type TEXT`
+  Specify the client type.
+  Default: `auto`.
diff --git a/docs/getting-started/introduction.mdx b/docs/getting-started/introduction.mdx
index ed1827c..24b6e04 100644
--- a/docs/getting-started/introduction.mdx
+++ b/docs/getting-started/introduction.mdx
@@ -17,6 +17,18 @@ We intend to become the “Linux” of this new space— open, modular, and free
 
 ## Quick Start
 
+### Install dependencies
+
+```bash
+# macOS
+brew install portaudio ffmpeg cmake
+
+# Ubuntu
+sudo apt-get install portaudio19-dev ffmpeg cmake
+```
+
+### Install and run the 01 CLI
+
 ```bash
 # Clone the repo, cd into the 01OS directory
 git clone https://github.com/OpenInterpreter/01.git
@@ -27,4 +39,4 @@ poetry install
 poetry run 01
 ```
 
-_Disclaimer:_ The current version of 01OS is a developer preview.
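+
+Before running the CLI, you can sanity-check that the dependencies from the install step are on your `PATH` (a hedged convenience check, not part of the official setup):
+
+```bash
+# each command should print a version banner; "command not found"
+# means the matching install step above did not complete
+ffmpeg -version
+cmake --version
+poetry --version
+```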
+_Disclaimer:_ The current version of 01OS is a developer preview.
diff --git a/docs/server/setup.mdx b/docs/server/setup.mdx
index a5bdb20..bb03b64 100644
--- a/docs/server/setup.mdx
+++ b/docs/server/setup.mdx
@@ -3,9 +3,86 @@ title: "Setup"
 description: "Get your 01 server up and running"
 ---
 
-Setup (just run start.py --server , explain the flags (revealed via start.py --help))
-
-- Interpreter
-  - Open Interpreter (explains i.py, how you configure your interpreter, cover the basic settings of OI (that file is literally just modifying an interpreter from OI)
-  - Language Model (LLM setup via interpreter.model in i.py or from the command line via start.py --server --llm-service llamafile)
-  - Voice Interface (explains that you can run --tts-service and --stt-service to swap out for different services, which are in /Services/Speech-to-text and /Services/Text-to-text)
+
+## Run Server
+
+```bash
+poetry run 01 --server
+```
+
+## Flags
+
+- `--server`
+  Run the server.
+
+- `--server-host TEXT`
+  Specify the host address the server will bind to.
+  Default: `0.0.0.0`.
+
+- `--server-port INTEGER`
+  Specify the port the server will listen on.
+  Default: `8000`.
+
+- `--tunnel-service TEXT`
+  Specify the tunnel service.
+  Default: `ngrok`.
+
+- `--expose`
+  Expose the server to the internet.
+
+- `--server-url TEXT`
+  Specify the server URL that the client should expect.
+  Defaults to the value built from `--server-host` and `--server-port`.
+  Default: `None`.
+
+- `--llm-service TEXT`
+  Specify the LLM service.
+  Default: `litellm`.
+
+- `--model TEXT`
+  Specify the model.
+  Default: `gpt-4`.
+
+- `--llm-supports-vision`
+  Specify that the LLM service supports vision.
+
+- `--llm-supports-functions`
+  Specify that the LLM service supports function calling.
+
+- `--context-window INTEGER`
+  Specify the context window size.
+  Default: `2048`.
+
+- `--max-tokens INTEGER`
+  Specify the maximum number of tokens.
+  Default: `4096`.
+
+- `--temperature FLOAT`
+  Specify the temperature for generation.
+  Default: `0.8`.
+
+- `--tts-service TEXT`
+  Specify the TTS service.
+  Default: `openai`.
+
+- `--stt-service TEXT`
+  Specify the STT service.
+  Default: `openai`.
+
+- `--local`
+  Use recommended local services for the LLM, STT, and TTS.
+
+- `--install-completion [bash|zsh|fish|powershell|pwsh]`
+  Install completion for the specified shell.
+  Default: `None`.
+
+- `--show-completion [bash|zsh|fish|powershell|pwsh]`
+  Show completion for the specified shell, to copy it or customize the installation.
+  Default: `None`.
+
+- `--help`
+  Show this message and exit.
diff --git a/docs/services/language-model.mdx b/docs/services/language-model.mdx
index 1d199f9..7d2f1d8 100644
--- a/docs/services/language-model.mdx
+++ b/docs/services/language-model.mdx
@@ -3,7 +3,25 @@ title: "Language Model"
 description: "The LLM that powers your 01"
 ---
 
-- Llamafile (Local)
-- Llamaedge (Local)
-- Hosted Models (explains that we use litellm, you can pass in many different model flags to this)
-- Add more (placeholder, we will add instructions soon)
+## Llamafile
+
+Llamafile lets you run a model locally from a single self-contained executable.
+
+## Llamaedge
+
+LlamaEdge is another option for running models locally, built on a WebAssembly runtime.
+
+## Hosted Models
+
+01OS leverages liteLLM, which supports [many hosted models](https://docs.litellm.ai/docs/providers/).
+
+To select your provider:
+
+```bash
+# Set the LLM service
+poetry run 01 --llm-service openai
+```
+
+## Other Models
+
+More instructions coming soon!
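+
+Putting this together with the server flags, a few example invocations (hedged: the service and model names are the documented defaults; substitute your own):
+
+```bash
+# hosted model routed through liteLLM (the default LLM service)
+poetry run 01 --server --llm-service litellm --model gpt-4
+
+# local model via llamafile
+poetry run 01 --server --llm-service llamafile
+
+# or use the recommended local services for LLM, STT, and TTS
+poetry run 01 --server --local
+```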