Merge pull request #288 from OpenInterpreter/update-documentation

Update documentation
pull/266/merge
killian 6 months ago committed by GitHub
commit f6ec3dfed0
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194

@ -5,15 +5,16 @@
<br>
<br>
<strong>The open-source language model computer.</strong><br>
<br><a href="https://openinterpreter.com/01">Preorder the Light</a> | <a href="https://changes.openinterpreter.com">Get Updates</a> | <a href="https://01.openinterpreter.com/">Documentation</a><br>
<br><a href="https://changes.openinterpreter.com">Get Updates</a> | <a href="https://01.openinterpreter.com/">Documentation</a><br>
</p>
<div align="center">
| [中文版](docs/README_CN.md) | [日本語](docs/README_JA.md) | [English](README.md) |
</div>
</div>
<br>
@ -65,7 +66,6 @@ poetry run 01 # Runs the 01 Light simulator (hold your spacebar, speak, release)
- The **01 Light** is an ESP32-based voice interface. Build instructions are [here](https://github.com/OpenInterpreter/01/tree/main/hardware/light). A list of what to buy [here](https://github.com/OpenInterpreter/01/blob/main/hardware/light/BOM.md).
- It works in tandem with the **01 Server** ([setup guide below](https://github.com/OpenInterpreter/01/blob/main/README.md#01-server)) running on your home computer.
- **Mac OSX** and **Ubuntu** are supported by running `poetry run 01` (**Windows** is supported experimentally). This uses your spacebar to simulate the 01 Light.
- (coming soon) The **01 Heavy** is a standalone device that runs everything locally.
**We need your help supporting & building more hardware.** The 01 should be able to run on any device with input (microphone, keyboard, etc.), output (speakers, screens, motors, etc.), and an internet connection (or sufficient compute to run everything locally). [Contribution Guide →](https://github.com/OpenInterpreter/01/blob/main/CONTRIBUTING.md)
@ -98,7 +98,7 @@ https://github.com/OpenInterpreter/01/assets/63927363/8621b075-e052-46ba-8d2e-d6
Dynamic System Messages enable you to execute code inside the LLM's system message, moments before it appears to the AI.
```python
# Edit the following settings in i.py
# Edit the following settings in Profiles
interpreter.system_message = r" The time is {{time.time()}}. " # Anything in double brackets will be executed as Python
interpreter.chat("What time is it?") # It will know, without making a tool/API call
```
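The double-bracket substitution above can be pictured as a small templating pass over the system message. The following is an illustrative sketch only, not Open Interpreter's actual implementation; `render_system_message` is a hypothetical helper:

```python
import re
import time

def render_system_message(template: str) -> str:
    # Replace each {{expression}} with the result of evaluating it as Python
    return re.sub(
        r"\{\{(.+?)\}\}",
        lambda match: str(eval(match.group(1))),
        template,
    )

message = render_system_message(r" The time is {{time.time()}}. ")
print(message)  # e.g. " The time is 1718000000.0. "
```

Because the substitution runs moments before the message reaches the model, the rendered text always reflects the current state of the machine.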

@ -5,7 +5,7 @@
- [ ] Test it end-to-end.
- [ ] Make sure it works with computer.skills.search (it should already work)
- [ ] Create computer.skills.teach()
- [ ] Displays a tkinter message asking users to complete the task via text (eventually voice) in the most generalizable way possible. OI should use computer.mouse and computer.keyboard to fulfill each step, then save the generalized instruction as a skill. Clicking the mouse cancels teach mode. When OI invokes this skill in the future, it will just list those steps (it needs to figure out how to flexibly accomplish each step).
- [ ] Computer: "What do you want to name this skill?"
- [ ] User: Enters name in textbox
- [ ] Computer: "Whats the First Step"
@ -73,7 +73,6 @@
- [ ] Connect the Jetson or Pi to a battery.
- [ ] Make a rudimentary case for the Seeed Sense + speaker. Optional.
- [ ] Make a rudimentary case for the Jetson or Pi. Optional.
- [ ] Determine recommended minimal hardware for the light & heavy.
**Release Day**
@ -81,6 +80,7 @@
- [ ] Create form to get pre-release feedback from 200 interested people (who responded to Killian's tweet)
**DONE**
- [ ] Get Local TTS working on Mac [Shiven]
- [ ] Get Local STT working on Mac [Zohaib + Shiven]
- [ ] Debug level logging/printing [Tom]

@ -53,7 +53,6 @@ poetry run 01 # Runs the 01 Light simulator (hold your spacebar, speak, release)
- The **01 Light** is an ESP32-based voice interface. [Build instructions are here.](https://github.com/OpenInterpreter/01/tree/main/hardware/light) It works in tandem with the **01 Server** ([setup guide below](https://github.com/OpenInterpreter/01/blob/main/README.md#01-server)) running on your home computer.
- **Mac OSX** and **Ubuntu** are supported by running `poetry run 01`. This uses your spacebar to simulate the 01 Light.
- (coming soon) The **01 Heavy** is a standalone device that runs everything locally.
**We need your help supporting and building more hardware.** The 01 should be able to run on any device with input (microphone, keyboard, etc.), output (speakers, screens, motors, etc.), and an internet connection (or sufficient compute to run everything locally). [Contribution Guide →](https://github.com/OpenInterpreter/01/blob/main/CONTRIBUTING.md)
@ -86,7 +85,7 @@ https://github.com/OpenInterpreter/01/assets/63927363/8621b075-e052-46ba-8d2e-d6
Dynamic System Messages enable you to execute code inside the LLM's system message, moments before it appears to the AI.
```python
# Edit the following settings in i.py
# Edit the following settings in Profiles
interpreter.system_message = r" The time is {{time.time()}}. " # Anything in double brackets will be executed as Python
interpreter.chat("What time is it?") # It will know, without making a tool/API call
```
@ -115,7 +114,7 @@ poetry run 01 --local
## Customization
To customize the behavior of the system, edit the [system message, model, skills library path](https://docs.openinterpreter.com/settings/all-settings), etc. in `i.py`. This file sets up an interpreter, and is powered by Open Interpreter.
To customize the behavior of the system, edit the [system message, model, skills library path](https://docs.openinterpreter.com/settings/all-settings), etc. in Profiles. This file sets up an interpreter, and is powered by Open Interpreter.
## Ubuntu Dependencies

@ -57,7 +57,6 @@ poetry run 01 # Runs the 01 Light simulator (hold your spacebar, speak, release)
- The **01 Light** is an ESP32-based voice interface. Build instructions are [here](https://github.com/OpenInterpreter/01/tree/main/hardware/light). A list of what to buy can be found [here](https://github.com/OpenInterpreter/01/blob/main/hardware/light/BOM.md).
- It works in tandem with the **01 Server** ([setup guide below](https://github.com/OpenInterpreter/01/blob/main/README.md#01-server)) running on your computer.
- **Mac OSX** and **Ubuntu** are supported by running `poetry run 01` (**Windows** is supported experimentally). This uses your spacebar to simulate the 01 Light.
- (coming soon) The **01 Heavy** is a standalone device that runs everything locally.
**We need your help supporting and building more hardware.** The 01 should be able to run on any device with input (microphone, keyboard, etc.), output (speakers, screens, motors, etc.), and an internet connection (or sufficient compute to run everything locally). [Contribution Guide →](https://github.com/OpenInterpreter/01/blob/main/CONTRIBUTING.md)
@ -65,7 +64,7 @@ poetry run 01 # Runs the 01 Light simulator (hold your spacebar, speak, release)
# How does it work?
The 01 exposes a *speech-to-speech* websocket at `localhost:10001`.
The 01 exposes a _speech-to-speech_ websocket at `localhost:10001`.
If you stream raw audio bytes to `/` in the [LMC streaming format](https://docs.openinterpreter.com/guides/streaming-response), you will receive its response in the same format.
@ -81,7 +80,7 @@ The 01 wraps this in a voice interface:
## LMC Messages
To communicate with the different components of the system, we introduce the [LMC messages format](https://docs.openinterpreter.com/protocols/lmc-messages), an extension of OpenAI's message format that includes a new "*computer*" role:
To communicate with the different components of the system, we introduce the [LMC messages format](https://docs.openinterpreter.com/protocols/lmc-messages), an extension of OpenAI's message format that includes a new "_computer_" role:
https://github.com/OpenInterpreter/01/assets/63927363/8621b075-e052-46ba-8d2e-d64b9f2a5da9
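To make the format concrete, here is a sketch of what LMC messages might look like in Python. The exact `type` and `format` field values are assumptions based on the linked LMC messages docs, not a definitive schema:

```python
# Hypothetical LMC-style messages (field names assumed from the LMC docs)
user_message = {
    "role": "user",
    "type": "message",
    "content": "What operating system are we on?",
}

computer_message = {
    "role": "computer",  # the new role that LMC adds to OpenAI's format
    "type": "console",
    "format": "output",
    "content": "Linux",
}

conversation = [user_message, computer_message]
```

Keeping the OpenAI shape means existing chat tooling still works, while the `computer` role lets the system report execution results back into the conversation.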
@ -90,7 +89,7 @@ https://github.com/OpenInterpreter/01/assets/63927363/8621b075-e052-46ba-8d2e-d6
Dynamic System Messages allow you to execute code inside the LLM's system message, just before it appears to the AI.
```python
# Edit the following settings in i.py
# Edit the following settings in Profiles
interpreter.system_message = r" The time is {{time.time()}}. " # Anything in double brackets will be executed as Python
interpreter.chat("What time is it?") # The interpreter will know the answer, without calling a tool or an API
```
@ -119,7 +118,7 @@ If you wish to run speech-to-text locally using Whisper, you
## Customization
To customize the behavior of the system, edit the [`system message`, `model`, `skills library path`,](https://docs.openinterpreter.com/settings/all-settings) etc. in `i.py`. This file sets up an interpreter powered by Open Interpreter.
To customize the behavior of the system, edit the [`system message`, `model`, `skills library path`,](https://docs.openinterpreter.com/settings/all-settings) etc. in Profiles. This file sets up an interpreter powered by Open Interpreter.
## Ubuntu Dependencies

@ -12,7 +12,7 @@
![OI-O1-BannerDemo-2](https://www.openinterpreter.com/OI-O1-BannerDemo-3.jpg)
We'll support your build. [Apply for 1-on-1 support.](https://0ggfznkwh4j.typeform.com/to/kkStE8WF)
We'll support your build. [Apply for 1 on 1 support.](https://0ggfznkwh4j.typeform.com/to/kkStE8WF)
<br>
@ -56,7 +56,6 @@ poetry run 01 # Runs the 01 Light simulator (hold your spacebar, speak, release)
- The **01 Light** is an ESP32-based voice interface. Build instructions are [here](https://github.com/OpenInterpreter/01/tree/main/hardware/light). A list of what to buy is [here](https://github.com/OpenInterpreter/01/blob/main/hardware/light/BOM.md).
- It works in tandem with the **01 Server** ([setup guide below](https://github.com/OpenInterpreter/01/blob/main/README.md#01-server)) running on your home computer.
- **Mac OSX** and **Ubuntu** are supported by running `poetry run 01` (**Windows** is supported experimentally). This uses your spacebar to simulate the 01 Light.
- (coming soon) The **01 Heavy** is a standalone device that runs everything locally.
**We need your help supporting and building more hardware.** The 01 should be able to run on any device with input (microphone, keyboard, etc.), output (speakers, screens, motors, etc.), and an internet connection (or sufficient compute to run everything locally). [Contribution Guide →](https://github.com/OpenInterpreter/01/blob/main/CONTRIBUTING.md)
@ -89,7 +88,7 @@ https://github.com/OpenInterpreter/01/assets/63927363/8621b075-e052-46ba-8d2e-d6
Dynamic System Messages allow you to execute code inside the LLM's system message, a moment before it is shown to the AI.
```python
# Edit the following settings in i.py
# Edit the following settings in Profiles
interpreter.system_message = r" The time is {{time.time()}}. " # Anything in double brackets will be executed as Python
interpreter.chat("What time is it?") # It will know, without making a tool/API call
```
@ -118,7 +117,7 @@ If you want to run local speech-to-text using Whisper, Rust
## Customization
To customize the behavior of the system, edit the [system message, model, skills library path](https://docs.openinterpreter.com/settings/all-settings), etc. in `i.py`. This file sets up an interpreter, and is powered by Open Interpreter.
To customize the behavior of the system, edit the [system message, model, skills library path](https://docs.openinterpreter.com/settings/all-settings), etc. in Profiles. This file sets up an interpreter, and is powered by Open Interpreter.
## Ubuntu Dependencies

@ -1,6 +0,0 @@
---
title: "01 Heavy"
description: "Build your 01 Heavy"
---
runs fully locally + coming soon

@ -1,12 +0,0 @@
---
title: "01 Light"
description: "Build your 01 Light"
---
## ESP32 client
Instructions to set up your ESP32 client can be found <a href="/client/setup">here</a>
## Supplementary files
For CAD files, wiring diagram, and images, please visit the [01 Light hardware repository](https://github.com/OpenInterpreter/01/tree/main/hardware/light).

@ -1,42 +0,0 @@
---
title: "Setup"
description: "Get your 01 client up and running"
---
## ESP32 Playback
To set up audio recording + playback on the ESP32 (M5 Atom), do the following:
1. Open Arduino IDE, and open the client/client.ino file
2. Go to Tools -> Board -> Boards Manager, search "esp32", then install the boards by Arduino and Espressif
3. Go to Tools -> Manage Libraries, then install the following:
- M5Atom by M5Stack [Reference](https://www.arduino.cc/reference/en/libraries/m5atom/)
- WebSockets by Markus Sattler [Reference](https://www.arduino.cc/reference/en/libraries/websockets/)
4. The board needs to connect to Wi-Fi. Once you flash the board, connect to the ESP32's "captive" Wi-Fi network, which will collect your Wi-Fi details. Once it connects, it will ask you to enter the 01OS server address in the format "domain.com:port" or "ip:port". Once it's able to connect, you can use the device.
5. To flash the .ino to the board, connect the board to the USB port, select the port from the dropdown on the IDE, then select the M5Atom board (or M5Stack-ATOM if you have that). Click on upload to flash the board.
## Desktop
### Server with a client
```bash
# run 01 with no args
poetry run 01
```
### Client only
```bash
poetry run 01 --client
```
### Flags
- `--client`
Run client.
- `--client-type TEXT`
Specify the client type.
Default: `auto`.

@ -0,0 +1,44 @@
---
title: "Getting Started"
description: "Preparing your machine"
---
## Prerequisites
There are a few packages that need to be installed in order to run 01OS on your computer
```bash
# Install poetry
curl -sSL https://install.python-poetry.org | python3 -
```
### MacOS
```bash
brew install portaudio ffmpeg cmake
```
### Ubuntu
<Note>Wayland is not supported; only Ubuntu 20.04 and below</Note>
```bash
sudo apt-get install portaudio19-dev ffmpeg cmake
```
### Windows
- [Git for Windows](https://git-scm.com/download/win).
- [virtualenv](https://virtualenv.pypa.io/en/latest/installation.html) or [MiniConda](https://docs.anaconda.com/free/miniconda/miniconda-install/) to manage virtual environments.
- [Chocolatey](https://chocolatey.org/install#individual) to install the required packages.
- [Microsoft C++ Build Tools](https://visualstudio.microsoft.com/visual-cpp-build-tools):
- Choose [**Download Build Tools**](https://visualstudio.microsoft.com/visual-cpp-build-tools/).
- Run the downloaded file **vs_BuildTools.exe**.
- In the installer, select **Workloads** > **Desktop & Mobile** > **Desktop Development with C++**.
With these installed, you can run the following commands in a **PowerShell terminal as an administrator**:
```powershell
# Install the required packages
choco install -y ffmpeg
```

@ -1,12 +1,12 @@
---
title: Introduction
description: 'The open-source language model computer.'
description: "The open-source language model computer."
---
<img
src="https://www.openinterpreter.com/OI-O1-BannerDemo-3.jpg"
alt="thumbnail"
style={{ transform: 'translateY(-1.25rem)' }}
style={{ transform: "translateY(-1.25rem)" }}
/>
The 01 project is an open-source ecosystem for artificially intelligent devices.
@ -15,30 +15,5 @@ By combining code-interpreting language models ("interpreters") with speech reco
We intend to become the “Linux” of this new space— open, modular, and free for personal or commercial use.
## Quick Start
### Install dependencies
```bash
# MacOS
brew install portaudio ffmpeg cmake
# Ubuntu
sudo apt-get install portaudio19-dev ffmpeg cmake
```
For windows, please refer to the [setup guide](/getting-started/setup#windows).
### Install and run the 01 CLI
```bash
# Clone the repo and navigate into the software directory
git clone https://github.com/OpenInterpreter/01.git
cd 01/software
# Install dependencies and run 01
poetry install
poetry run 01
```
_Disclaimer:_ The current version of 01OS is a developer preview

@ -1,93 +0,0 @@
---
title: 'Setup'
description: 'Get your 01 up and running'
---
## Captive portal
To connect your 01, you will use the captive portal.
1. Turn on your computer or laptop and connect to the '01 light' Wi-Fi network.
2. Enter your Wi-Fi/hotspot name and password in the captive portal page.
3. Enter the server URL generated on your computer and hit 'Connect'.
Now you're connected and ready to go!
# Local 01OS
## Prerequisites
There are a few packages that need to be installed in order to run 01OS on your computer
```bash
# MacOS
brew install portaudio ffmpeg cmake
# Ubuntu (wayland not supported, only ubuntu 20.04 and below)
sudo apt-get install portaudio19-dev ffmpeg cmake
# Install poetry
curl -sSL https://install.python-poetry.org | python3 -
```
#### Windows
On Windows you will need to install the following:
- [Git for Windows](https://git-scm.com/download/win).
- [virtualenv](https://virtualenv.pypa.io/en/latest/installation.html) or [MiniConda](https://docs.anaconda.com/free/miniconda/miniconda-install/) to manage virtual environments.
- [Chocolatey](https://chocolatey.org/install#individual) to install the required packages.
- [Microsoft C++ Build Tools](https://visualstudio.microsoft.com/visual-cpp-build-tools):
- Choose [**Download Build Tools**](https://visualstudio.microsoft.com/visual-cpp-build-tools/).
- Run the downloaded file **vs_BuildTools.exe**.
- In the installer, select **Workloads** > **Desktop & Mobile** > **Desktop Development with C++**.
With these installed, you can run the following commands in a **PowerShell terminal as an administrator**:
```powershell
# Install the required packages
choco install -y ffmpeg
```
## Install 01
To install the 01 CLI
```bash
# Clone the repo and navigate into the 01OS directory
git clone https://github.com/OpenInterpreter/01.git
```
## Run the 01
In order to run 01 on your computer, use [Poetry](https://python-poetry.org/docs/#installing-with-the-official-installer).
Navigate to the project's software directory:
```bash
cd software
```
Install your project along with its dependencies in a virtual environment managed by Poetry.
```bash
poetry install
```
Run your local version of 01 with:
```bash
poetry run 01
```
## Swap out service providers
You have the ability to set your <a href="/services/language-model">LLM</a>, <a href="/services/speech-to-text">STT</a>, and <a href="/services/text-to-speech">TTS</a> service providers
## Server setup
You are able to run just the <a href="/server/setup">server</a>.
## Client setup
You are able to run just the <a href="/client/setup">client</a>.

@ -0,0 +1,52 @@
---
title: "01 Light"
description: "Use your 01 Light"
---
# Materials
The Bill of Materials for the 01 Light can be found [here](https://github.com/OpenInterpreter/01/blob/main/hardware/light/BOM.md)
# Chip (ESP32)
To set up the ESP32 to work with the 01 and enable audio recording + playback on the ESP32 (M5 Atom), follow this guide to install the firmware:
1. Download the [Arduino IDE](https://www.arduino.cc/en/software)
2. Get the firmware by copying the contents of [client.ino](https://github.com/OpenInterpreter/01/blob/main/software/source/clients/esp32/src/client/client.ino) from the 01 repository.
3. Open Arduino IDE and paste the client.ino contents
4. Go to Tools -> Board -> Boards Manager, search "esp32", then install the boards by Arduino and Espressif
5. Go to Tools -> Manage Libraries, then install the following:
- M5Atom by M5Stack [Reference](https://www.arduino.cc/reference/en/libraries/m5atom/)
- WebSockets by Markus Sattler [Reference](https://www.arduino.cc/reference/en/libraries/websockets/)
6. The board needs to connect to Wi-Fi. Once you flash the board, connect to the ESP32's "captive" Wi-Fi network, which will collect your Wi-Fi details. Once it connects, it will ask you to enter the 01OS server address in the format "domain.com:port" or "ip:port". Once it's able to connect, you can use the device.
7. To flash the .ino to the board, connect the board to the USB port, select the port from the dropdown on the IDE, then select the M5Atom board (or M5Stack-ATOM if you have that). Click on upload to flash the board.
Check out [this video from Thomas](https://www.youtube.com/watch?v=Y76zed8nEE8) for flashing the ESP32 and connecting the 01.
# Case
The case for the 01 can be 3D printed at home. A resin printer is recommended for improved quality.
Check out [this video from James at CAD9 Design](https://www.youtube.com/watch?v=BjoO0Kt-IWM) for a deep dive on his design.
The STL files can be found [here](https://github.com/OpenInterpreter/01/tree/main/hardware/light/bodies)
# Assembly
Check out [this video from James at CAD9 Design](https://www.youtube.com/watch?v=37a5bgvoZy8) on how to assemble your 01
# Connect
### Captive portal
To connect your 01, you will use the captive portal.
1. Turn on your computer or laptop and connect to the '01 light' Wi-Fi network.
2. Enter your Wi-Fi/hotspot name and password in the captive portal page.
3. Enter the server URL generated on your computer and hit 'Connect'.
Now you're connected and ready to go!

@ -0,0 +1,16 @@
---
title: "Custom Hardware"
description: "Control 01 from your own device"
---
You can build your own custom hardware that uses the 01 server.
To use 01 with your custom hardware, run the server:
```bash
poetry run 01 --server
```
You may need to set additional parameters via [flags](/software/flags) depending on your setup.
To transmit audio commands to 01, send LMC audio chunks to the websocket defined by your server.
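One spoken utterance could, for example, be framed as a start flag, raw audio byte chunks, and an end flag. The field names below are assumptions based on the LMC streaming docs, and `lmc_audio_stream` is a hypothetical helper; each yielded item would be passed to your websocket client's send method:

```python
import json

def lmc_audio_stream(raw_audio: bytes, chunk_size: int = 4096):
    """Yield the items to send over the websocket for one utterance:
    a JSON 'start' message, raw audio byte chunks, then a JSON 'end' message."""
    header = {"role": "user", "type": "audio", "format": "bytes.raw"}
    yield json.dumps({**header, "start": True})
    for i in range(0, len(raw_audio), chunk_size):
        yield raw_audio[i : i + chunk_size]
    yield json.dumps({**header, "end": True})

# Collect the framed messages for a short burst of silence
items = list(lmc_audio_stream(b"\x00" * 10_000))
```

The server's responses arrive on the same websocket in the same LMC format, so a receive loop can parse each incoming frame symmetrically.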

@ -0,0 +1,12 @@
---
title: "Desktop"
description: "Control 01 from your computer"
---
<Info> Make sure that you have navigated to the `software` directory. </Info>
To run 01 with your computer's microphone and speaker, run:
```bash
poetry run 01
```

@ -0,0 +1,73 @@
---
title: "iOS & Android"
description: "Control 01 from your mobile phone"
---
Using your phone is a great way to control 01. There are multiple options available.
## [React Native app](https://github.com/OpenInterpreter/01/tree/main/software/source/clients/mobile) (iOS & Android)
Work in progress, we will continue to improve this application.
If you want to run it on your device, you will need to install [Expo Go](https://expo.dev/go) on your mobile device.
### Setup Instructions
- [Install 01 software](/software/installation) on your machine
- Run the Expo server:
```shell
cd software/source/clients/mobile/react-native
npm install # install dependencies
npx expo start # start local expo development server
```
This will produce a QR code that you can scan with Expo Go on your mobile device.
Open **Expo Go** on your mobile device and select _Scan QR code_ to scan the QR code produced by the `npx expo start` command.
- Run 01:
```shell
cd software # cd into `software`
poetry run 01 --mobile # exposes QR code for 01 Light server
```
### Using the App
In the 01 mobile app, select _Scan Code_ to scan the QR code produced by the `poetry run 01 --mobile` command
Press and hold the button to speak, release to make the request. To rescan the QR code, swipe left on the screen to go back.
## [Native iOS app](https://github.com/OpenInterpreter/01/tree/main/software/source/clients/ios) by [eladekkal](https://github.com/eladdekel).
A community contribution ❤️
To run it on your device, you can either install the app directly through the current TestFlight [here](https://testflight.apple.com/join/v8SyuzMT), or build from the source code files in Xcode on your Mac.
### Instructions
- [Install 01 software](/software/installation) on your machine
- In Xcode, open the 'zerooone-app' project file in the project folder, change the Signing Team and Bundle Identifier, and build.
### Using the App
To use the app there are four features:
1. The speak "Button"
Made to emulate the button on the hardware models of 01, the big, yellow circle in the middle of the screen is what you hold when you want to speak to the model, and let go when you're finished speaking.
2. The settings button
Tapping the settings button will allow you to input your websocket address so that the app can properly connect to your computer.
3. The reconnect button
The arrow will be RED when the websocket connection is not live, and GREEN when it is. If you're making some changes you can easily reconnect by simply tapping the arrow button (or you can just start holding the speak button, too!).
4. The terminal button
The terminal button allows you to see all response text coming in from the server side of the 01. You can toggle it by tapping on the button, and each toggle clears the on-device cache of text.

@ -34,27 +34,32 @@
"navigation": [
{
"group": "Getting Started",
"pages": ["getting-started/introduction", "getting-started/setup"]
},
{
"group": "Server",
"pages": ["server/setup"]
"pages": [
"getting-started/introduction",
"getting-started/getting-started"
]
},
{
"group": "Services",
"group": "Software Setup",
"pages": [
"services/language-model",
"services/speech-to-text",
"services/text-to-speech"
"software/installation",
"software/run",
"software/configure",
"software/flags"
]
},
{
"group": "Client",
"pages": ["client/setup"]
"group": "Hardware Setup",
"pages": [
"hardware/01-light",
"hardware/custom_hardware",
"hardware/desktop",
"hardware/mobile"
]
},
{
"group": "Bodies",
"pages": ["bodies/01-light", "bodies/01-heavy"]
"group": "Troubleshooting",
"pages": ["troubleshooting/faq"]
},
{
"group": "Legal",
@ -66,7 +71,7 @@
},
"footerSocials": {
"twitter": "https://x.com/OpenInterpreter",
"github": "https://github.com/KillianLucas/01",
"discord": "https://discord.gg/E2XTbkj4JF"
"github": "https://github.com/OpenInterpreter/01",
"discord": "https://discord.com/invite/Hvz9Axh84z"
}
}

@ -1,116 +0,0 @@
---
title: "Setup"
description: "Get your 01 server up and running"
---
## Run Server
```bash
poetry run 01 --server
```
## Configure
A core part of the 01 server is the interpreter, which is an instance of Open Interpreter.
Open Interpreter is highly configurable and only requires updating a single file.
```bash
# Edit i.py
software/source/server/i.py
```
Properties such as `model`, `context_window`, and many more can be updated here.
### LLM service provider
If you wish to use a local model, you can use the `--llm-service` flag:
```bash
# use llamafile
poetry run 01 --server --llm-service llamafile
```
For more information about LLM service providers, check out the page on <a href="/services/language-model">Language Models</a>.
### Voice Interface
Both speech-to-text and text-to-speech can be configured in 01OS.
You are able to pass CLI flags `--tts-service` and/or `--stt-service` with the desired service provider to swap out different services
These different service providers can be found in `/services/stt` and `/services/tts`
For more information, please read about <a href="/services/speech-to-text">speech-to-text</a> and <a href="/services/text-to-speech">text-to-speech</a>
## CLI Flags
- `--server`
Run server.
- `--server-host TEXT`
Specify the server host where the server will deploy.
Default: `0.0.0.0`.
- `--server-port INTEGER`
Specify the server port where the server will deploy.
Default: `10001`.
- `--tunnel-service TEXT`
Specify the tunnel service.
Default: `ngrok`.
- `--expose`
Expose server to internet.
- `--server-url TEXT`
Specify the server URL that the client should expect.
Defaults to server-host and server-port.
Default: `None`.
- `--llm-service TEXT`
Specify the LLM service.
Default: `litellm`.
- `--model TEXT`
Specify the model.
Default: `gpt-4`.
- `--llm-supports-vision`
Specify if the LLM service supports vision.
- `--llm-supports-functions`
Specify if the LLM service supports functions.
- `--context-window INTEGER`
Specify the context window size.
Default: `2048`.
- `--max-tokens INTEGER`
Specify the maximum number of tokens.
Default: `4096`.
- `--temperature FLOAT`
Specify the temperature for generation.
Default: `0.8`.
- `--tts-service TEXT`
Specify the TTS service.
Default: `openai`.
- `--stt-service TEXT`
Specify the STT service.
Default: `openai`.
- `--local`
Use recommended local services for LLM, STT, and TTS.
- `--install-completion [bash|zsh|fish|powershell|pwsh]`
Install completion for the specified shell.
Default: `None`.
- `--show-completion [bash|zsh|fish|powershell|pwsh]`
Show completion for the specified shell, to copy it or customize the installation.
Default: `None`.
- `--help`
Show this message and exit.

@ -1,38 +0,0 @@
---
title: "Language Model"
description: "The LLM that powers your 01"
---
## llamafile
llamafile lets you distribute and run LLMs with a single file. Read more about llamafile [here](https://github.com/Mozilla-Ocho/llamafile)
```bash
# Set the LLM service to llamafile
poetry run 01 --llm-service llamafile
```
## LlamaEdge
LlamaEdge makes it easy for you to run LLM inference apps and create OpenAI-compatible API services for the Llama2 series of LLMs locally.
Read more about LlamaEdge [here](https://github.com/LlamaEdge/LlamaEdge)
```bash
# Set the LLM service to Llamaedge
poetry run 01 --llm-service llamaedge
```
## Hosted Models
01OS leverages liteLLM, which supports [many hosted models](https://docs.litellm.ai/docs/providers/).
To select your provider:
```bash
# Set the LLM service
poetry run 01 --llm-service openai
```
## Other Models
More instructions coming soon!

@ -1,24 +0,0 @@
---
title: "Speech To Text"
description: "Converts your voice into text"
---
## Whisper (Local)
This option installs whisper-rust to allow all speech-to-text to be done locally on device.
```bash
# Set a local STT service
01 --stt-service local-whisper
```
## Whisper (Hosted)
```bash
# Set STT service
01 --stt-service openai
```
## Other Models
More instructions coming soon!

@ -1,24 +0,0 @@
---
title: "Text To Speech"
description: "The service to speak the text"
---
## Piper (Local)
This option installs Piper to allow all text-to-speech to be done locally on device.
```bash
# Set a local TTS service
01 --tts-service piper
```
## OpenAI (Hosted)
```bash
# Set TTS service
01 --tts-service openai
```
## Other Models
More instructions coming soon!

@ -0,0 +1,80 @@
---
title: "Configure"
description: "Configure your 01 instance"
---
A core part of the 01 server is the interpreter, which is an instance of Open Interpreter.
Open Interpreter is highly configurable and only requires updating or creating a profile.
Properties such as `model`, `context_window`, and many more can be updated here.
To open the directory of all profiles, run:
```bash
# View profiles
poetry run 01 --profiles
```
To apply a profile to your 01 instance, use the `--profile` flag followed by the name of the profile
```bash
# Use profile
poetry run 01 --profile <profile_name>
```
### Standard Profiles
`default.py` is the default profile that is used when no profile is specified. The default TTS is OpenAI.
`fast.py` uses ElevenLabs and Groq, which are the fastest providers.
`local.py` uses Coqui TTS and runs the `--local` explorer from Open Interpreter.
### Custom Profiles
If you want to make your own file, you can do so by creating a new file in the `profiles` directory.
The easiest way is to duplicate an existing profile and then update values as needed. Be sure to save the profile with a unique name.
```bash
# Use custom profile
poetry run 01 --profile <profile_name>
```
### Hosted LLMs
The default LLM for 01 is GPT-4-Turbo. You can find this in the default profile in `software/source/server/profiles/default.py`.
The fast profile uses Llama3-8b served by Groq. You can find this in the fast profile in `software/source/server/profiles/fast.py`.
```python
# Set your profile with a hosted LLM
interpreter.llm.model = "gpt-4o"
```
### Local LLMs
You can use local models to power 01.
Using the local profile launches the Local Explorer where you can select your inference provider and model. The default options include Llamafile, Jan, Ollama, and LM Studio.
```python
# Set your profile with a local LLM
interpreter.local_setup()
```
### Hosted TTS
01 supports OpenAI and ElevenLabs for hosted TTS
```python
# Set your profile with a hosted TTS service
interpreter.tts = "elevenlabs"
```
### Local TTS
For local TTS, Coqui is used.
```python
# Set your profile with a local TTS service
interpreter.tts = "coqui"
```
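Since a profile is just a Python file, profile selection can be pictured as locating and executing that file before the interpreter starts. This is a minimal sketch under assumptions, not the 01 server's actual loader; `load_profile` is a hypothetical helper:

```python
import types
from pathlib import Path

def load_profile(profiles_dir: Path, name: str) -> types.ModuleType:
    """Execute profiles_dir/<name>.py and return it as a module-like object."""
    source = (profiles_dir / f"{name}.py").read_text()
    module = types.ModuleType(name)
    exec(source, module.__dict__)  # profiles are plain Python, so executing applies them
    return module
```

Under this sketch, `poetry run 01 --profile fast` would amount to something like `load_profile(profiles_dir, "fast")` followed by starting the server with the resulting settings.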

@ -0,0 +1,42 @@
---
title: "Flags"
description: "Customize the behaviour of your 01 from the CLI"
---
## CLI Flags
- `--server`
Run server.
- `--server-host TEXT`
Specify the server host where the server will deploy.
Default: `0.0.0.0`.
- `--server-port INTEGER`
Specify the server port where the server will deploy.
Default: `10001`.
- `--tunnel-service TEXT`
Specify the tunnel service.
Default: `ngrok`.
- `--expose`
Expose server to internet.
- `--client`
Run the client.
- `--server-url TEXT`
Specify the server URL that the client should connect to.
Defaults to server-host and server-port.
Default: `None`.
- `--client-type TEXT`
Specify the client type.
Default: `auto`.
- `--qr`
Display a QR code to scan to connect to the server.
- `--help`
Show this message and exit.
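For example, the flags above can be combined. This hypothetical invocation (assuming you are in the `software` directory) runs the server on a custom port and exposes it to the internet via the default tunnel service:

```bash
# Run the server on port 8000, expose it via ngrok, and show a QR code to connect
poetry run 01 --server --server-port 8000 --expose --qr
```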

---
title: "Install"
description: "Get your 01 up and running"
---
## Install 01
To install the 01 software:
```bash
# Clone the repo and navigate into the newly created directory
git clone https://github.com/OpenInterpreter/01.git
cd 01
```
## Run the 01
In order to run 01 on your computer, use [Poetry](https://python-poetry.org/docs/#installing-with-the-official-installer).
Navigate to the project's software directory:
```bash
cd software
```
Install the project and its dependencies in a virtual environment managed by Poetry:
```bash
poetry install
```
Now you should be ready to [run your 01](/software/run).

---
title: "Run"
description: "Run your 01"
---
<Info> Make sure that you have navigated to the `software` directory. </Info>
To run 01 with your computer's microphone and speaker, run:
```bash
poetry run 01
```
To use 01 with your <a href="/hardware/01-light">01 Light</a>, run the server:
```bash
poetry run 01 --server
```
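To point a client at a server running elsewhere, a sketch using the flags from the CLI reference (the address below is a placeholder; substitute your server's actual host and port):

```bash
# Run only the client, telling it where the server lives
poetry run 01 --client --server-url 192.168.1.50:10001
```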

---
title: "FAQ"
description: "Frequently Asked Questions"
---
<Accordion title="How do I have code run on the client-side?">
We are working on supporting this, but we only support server-side code
execution right now.
</Accordion>
<Accordion title="How do I build a profile?">
We recommend running `--profiles`, duplicating a profile, then experimenting
with the settings in the profile file (like `system_message`).
</Accordion>
<Accordion title="Where does the server run?">
The server runs on your home computer, or whichever device you want to
control.
</Accordion>
<Accordion title="Can an 01 device connect to the desktop app, or do general customers/consumers need to set it up in their terminal?">
We are working on supporting external devices to the desktop app, but for
now the 01 will need to connect to the Python server.
</Accordion>
<Accordion title="Can I turn certain tools on/off?">
  We are working on building this feature, but it isn't available yet.
</Accordion>
<Accordion title="Alternatives to ngrok?">
We support `--tunnel-service bore` and `--tunnel-service localtunnel` in
addition to `--tunnel-service ngrok`. [link to tunnel service docs]
</Accordion>
<Accordion title="This uses a great deal of API credits. What options do I have for using local models? Can they run on the client device?">
  If you use `--profile local`, you won't need to use an LLM via an API. The
  01 server is responsible for running the LLM, but you can run the server and
  client on the same device (simply run `poetry run 01` to test this).
</Accordion>
<Accordion title="Which model is best?">
  We have found `gpt-4-turbo` to be the best, but we expect Claude 3.5 Sonnet
  to be comparable or better.
</Accordion>
<Accordion title="Do I need to pay for a monthly subscription?">
If you use `--profile local`, you don't need to. For hosted language models,
you may need to pay a monthly subscription.
</Accordion>
<Accordion title="Does the computer the 01 connects to need to always be on and running? If it's in sleep mode, will it wake up when I call on it?">
The computer does need to be running, and will not wake up if a request is
sent while it's sleeping.
</Accordion>
<Accordion title="Which Model does 01 use?">
The 01 defaults to `gpt-4-turbo`.
</Accordion>
<Accordion title="Do you support a Standalone Device/Hosted Server?">
We are exploring a few options about how to best provide a stand-alone device
connected to a virtual computer in the cloud, provided by Open Interpreter.
There will be an announcement once we have figured out the right way to do it.
But the idea is that it functions with the same capabilities as the demo, just
controlling a computer in the cloud, not the one on your desk at home.
</Accordion>
<Accordion title="How Do I Get Involved?">
  We are figuring out the best way to activate the community to build the next
  phase. For now, you can read over the
  [repository](https://github.com/OpenInterpreter/01) and join the
  [Discord](https://discord.gg/Hvz9Axh84z) to find and discuss ways to start
  contributing to the open-source 01 Project!
</Accordion>
<Accordion title="Is there a Mobile App?">
  The official app is being developed. You can find instructions for setting it
  up and contributing to its development
  [here](https://github.com/OpenInterpreter/01/tree/main/software/source/clients/mobile).
  Please also join the [Discord](https://discord.gg/Hvz9Axh84z) to find and
  discuss ways to start contributing to the open-source 01 Project!
</Accordion>