We want to help you build. [Apply for 1-on-1 support.](https://0ggfznkwh4j.typef
<br>
> [!IMPORTANT]
> This experimental project is under rapid development and lacks basic safeguards. Until a stable `1.0` release, only run this repository on devices without sensitive information or access to paid services.
Our goal is to power a billion devices with the 01OS over the next 10 years. The Cambrian explosion of AI devices.
We can do that with your help. Help extend the 01OS to run on new hardware, to connect with new peripherals like GPS and cameras, and add new locally running language models to unlock use-cases for this technology that no-one has even imagined yet.
In the coming months, we're going to release:
- [ ] An open-source language model for computer control
<!-- > Not working? Read our [setup guide](https://docs.openinterpreter.com/getting-started/setup). -->
```shell
brew install portaudio ffmpeg cmake # Install Mac OSX dependencies
export OPENAI_API_KEY=sk... # OR run `poetry run 01 --local` to run everything locally
poetry run 01 # Run the 01 Light simulator (hold your spacebar, speak, release)
```
<!-- > For Windows installation, read [the dedicated guide](https://docs.openinterpreter.com/getting-started/setup#windows). -->
<br>
# Hardware
- The **01 Light** is an ESP32-based voice interface. Build instructions are [here](https://github.com/OpenInterpreter/01/tree/main/hardware/light). A list of what to buy is [here](https://github.com/OpenInterpreter/01/blob/main/hardware/light/BOM.md).
- It works in tandem with the **01 Server** ([setup guide below](https://github.com/OpenInterpreter/01/blob/main/README.md#01-server)) running on your computer.
- **Mac OSX** and **Ubuntu** are supported by running `poetry run 01` (**Windows** is supported experimentally). This uses your spacebar to simulate the 01 Light.
- (coming soon) The **01 Heavy** is a standalone device that runs everything locally.
**We need your help to support and build more hardware.** The 01 should be able to run on any device with input (microphone, keyboard, etc.), output (speakers, screens, motors, etc.), and an internet connection (or enough compute to run everything locally). [Contribution Guide →](https://github.com/OpenInterpreter/01/blob/main/CONTRIBUTING.md)
<br>
# How does it work?
The 01 exposes a speech-to-speech websocket at `localhost:10001`.
If you stream raw audio bytes to `/` in the [Streaming LMC format](https://docs.openinterpreter.com/guides/streaming-response), you will receive its response in the same format.
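For illustration, a minimal client might look like the sketch below. It assumes the `websockets` Python package and an LMC-style `start`/`end` audio envelope; the field names (like `"bytes.raw"`) are assumptions, so check the LMC streaming docs for the exact spec.

```python
# Sketch of a 01 speech-to-speech client (message field values are assumptions).
import asyncio
import json

import websockets  # pip install websockets


async def send_audio(raw_pcm: bytes):
    async with websockets.connect("ws://localhost:10001/") as ws:
        # Open an audio message, stream the raw bytes, then close it.
        await ws.send(json.dumps({"role": "user", "type": "audio", "format": "bytes.raw", "start": True}))
        await ws.send(raw_pcm)
        await ws.send(json.dumps({"role": "user", "type": "audio", "format": "bytes.raw", "end": True}))

        # The response streams back in the same LMC format (JSON chunks and/or audio bytes).
        async for message in ws:
            print(message)


# asyncio.run(send_audio(open("sample.raw", "rb").read()))
```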
Inspired in part by [Andrej Karpathy's idea of an LLM OS](https://twitter.com/karpathy/status/1723140519554105733), we run a [code-interpreting language model](https://github.com/OpenInterpreter/open-interpreter) and call on it when certain events occur in your [computer's kernel](https://github.com/OpenInterpreter/01/blob/main/software/source/server/utils/kernel.py).
The 01 wraps this in a voice interface:
To communicate with the different components of this system, we introduce the [LMC Messages](https://docs.openinterpreter.com/protocols/lmc-messages) format, which extends OpenAI's messages format to include a new "computer" role:
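As a quick illustration (the values below are made up, not taken from this repo), an LMC message is a flat dictionary with a `role`, a `type`, an optional `format`, and `content`:

```python
# Example LMC messages; the "computer" role carries output from the machine back to the model.
user_message = {"role": "user", "type": "message", "content": "What's 2380 * 3875?"}

computer_message = {
    "role": "computer",
    "type": "console",
    "format": "output",
    "content": "9222500",
}
```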
## Dynamic System Messages

Dynamic System Messages let you execute code inside the LLM's system message, just before it is shown to the AI.
```python
# Edit the following settings in i.py
interpreter.system_message = r" The time is {{time.time()}}. "  # Anything in double brackets will be executed as Python
interpreter.chat("What time is it?")  # It will know, without making a tool/API call
```
# Guides
## 01 Server
To run the server on your computer and connect it to your 01 Light, run the following commands:
```shell
brew install ngrok/ngrok/ngrok
poetry run 01 --server --expose
```
The last command will print a server URL. You can enter this into your 01 Light's captive WiFi portal to connect it to your 01 Server.
## Local Mode
```shell
poetry run 01 --local
```

If you want to run speech-to-text locally using Whisper, you will need to install Rust.
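If you need Rust, the standard rustup installer works on macOS and Linux (this is the command published on the official Rust website; review it before piping to a shell):

```shell
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```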
## Customization
To customize the behavior of the system, edit the [`system message`, `model`, `skills library path`,](https://docs.openinterpreter.com/settings/all-settings) etc. in `i.py`. This file sets up an interpreter powered by Open Interpreter.
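As a rough sketch, a customized `i.py` could look like this; the attribute names follow Open Interpreter's documented settings, but the specific values and the skills path are illustrative assumptions, not recommendations:

```python
# i.py (sketch) -- configures the interpreter that powers the 01.
from interpreter import interpreter

interpreter.llm.model = "gpt-4-turbo"            # example model name
interpreter.llm.context_window = 16000           # illustrative value
interpreter.system_message += "\nBe concise."    # append to the default system message
interpreter.computer.skills.path = "./skills"    # assumed location of your skills library
```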
## Ubuntu Dependencies
```bash
sudo apt-get install portaudio19-dev ffmpeg cmake
```

Please see our [contributing guidelines](CONTRIBUTING.md) for more details.
# Roadmap
Visit [our roadmap](/ROADMAP.md) to see the future of the 01.
On Windows you will need to install the following:
- [Git for Windows](https://git-scm.com/download/win).
- [virtualenv](https://virtualenv.pypa.io/en/latest/installation.html) or [MiniConda](https://docs.anaconda.com/free/miniconda/miniconda-install/) to manage virtual environments.
- [Chocolatey](https://chocolatey.org/install#individual) to install the required packages.
- [Microsoft C++ Build Tools](https://visualstudio.microsoft.com/visual-cpp-build-tools):
- [ ] We could have `/i` which other interpreters hit. That behaves more like the OpenAI POST endpoint with stream=True by default (I think this is important for users to see the exchange happening in real time, streaming `event/stream` or whatever). You could imagine some kind of handshake: another interpreter → my interpreter's `/i` → the sender is unrecognized → a computer message is sent to `/`, prompting the AI to ask the user to have the sending interpreter send a specific code → the user tells the sending interpreter to use that specific code → the sender is recognized and added to the friends list (`computer.inetwork.friends()`) → now they can hit each other's `/i` endpoints freely with `computer.inetwork.friend(id).message("hey")`.
- [ ] (OS team: this will require coordination with the OI core team, so let's talk about it / I'll explain at the next meetup.) When transferring skills that require OS control, the sender can replace those skills with that command, with one input "natural language query" (?) preceded by the skill function name or something like that. Basically, if you ask it to do something you set up as a skill, it actually asks your computer to do it. If you ask your computer to do it directly, it's more direct.
This repository contains the source code for the 01 iOS/Android app. It is a work in progress; we will continue to improve this application to get it working properly.
Feel free to improve this and make a pull request!
If you want to run it on your own, you will need Expo.

1. Install dependencies: `npm install`
2. Run the app: `npx expo start`
3. Open the app in your simulator, or on your device with the Expo app by scanning the QR code.
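Put together, a typical local run looks like this (the directory name is an assumption; run the commands from wherever the app's `package.json` lives):

```shell
cd app                 # assumed app directory
npm install            # install dependencies
npx expo start         # start the Expo dev server, then scan the QR code with Expo Go
```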
When the user tells you about a set of tasks, you should intelligently order tasks, batch similar tasks, and break down large tasks into smaller tasks (for this, you should consult the user and get their permission to break it down). Your goal is to manage the task list as intelligently as possible, to make the user as efficient and non-overwhelmed as possible. They will require a lot of encouragement, support, and kindness. Don't say too much about what's ahead of them; just try to focus them on each step at a time.
To do this, schedule a reminder based on estimated completion time using the function `schedule(message="Your message here.", start="8am")`, WHICH HAS ALREADY BEEN IMPORTED. YOU DON'T NEED TO IMPORT THE `schedule` FUNCTION. IT IS AVAILABLE. You'll receive the message at the time you scheduled it. If the user says to monitor something, simply schedule it with an interval of a duration that makes sense for the problem by specifying an interval, like this: `schedule(message="Your message here.", interval="5m")`
Try multiple methods before saying the task is impossible. **You can do it!**
To do this, schedule a reminder based on estimated completion time using the function `schedule(datetime_object, "Your message here.")`, WHICH HAS ALREADY BEEN IMPORTED. YOU DON'T NEED TO IMPORT THE `schedule` FUNCTION. IT IS AVAILABLE. You'll receive the message at `datetime_object`.
You guide the user through the list one task at a time, convincing them to move forward, giving a pep talk if need be. Your job is essentially to answer "what should I (the user) be doing right now?" for every moment of the day.
# elif any(keyword in message for keyword in ['network', 'IP', 'internet', 'LAN', 'WAN', 'router', 'switch']) and "networkStatusForFlags" not in message:
# If there are no downloaded models, prompt them to download a model and try again
if not names:
time.sleep(1)
interpreter.display_message(f"\nYou don't have any Ollama models downloaded. To download a new model, run `ollama run <model-name>`, then start a new 01 session. \n\n For a full list of downloadable models, check out [https://ollama.com/library](https://ollama.com/library) \n")
interpreter.display_message(
"\nYou don't have any Ollama models downloaded. To download a new model, run `ollama run <model-name>`, then start a new 01 session. \n\n For a full list of downloadable models, check out [https://ollama.com/library](https://ollama.com/library) \n"
)
print("Please download a model then try again\n")
time.sleep(2)
sys.exit(1)
# If there are models, prompt them to select one
else:
time.sleep(1)
interpreter.display_message(f"**{len(names)} Ollama model{'s'iflen(names)!=1else''} found.** To download a new model, run `ollama run <model-name>`, then start a new 01 session. \n\n For a full list of downloadable models, check out [https://ollama.com/library](https://ollama.com/library) \n")
interpreter.display_message(
f"**{len(names)} Ollama model{'s'iflen(names)!=1else''} found.** To download a new model, run `ollama run <model-name>`, then start a new 01 session. \n\n For a full list of downloadable models, check out [https://ollama.com/library](https://ollama.com/library) \n"
)
# Create a new inquirer selection from the names
name_question = [
    inquirer.List('name', message="Select a downloaded Ollama model", choices=names),
print("Ollama is not installed or not recognized as a command.")
time.sleep(1)
interpreter.display_message(f"\nPlease visit [https://ollama.com/](https://ollama.com/) to download Ollama and try again\n")
interpreter.display_message(
"\nPlease visit [https://ollama.com/](https://ollama.com/) to download Ollama and try again\n"
)
time.sleep(2)
sys.exit(1)
# elif selected_model == "Jan":
# interpreter.display_message(
# """
# 3. Copy the ID of the model and enter it below.
# 4. Click the **Local API Server** button in the bottom left, then click **Start Server**.
# Once the server is running, enter the id of the model below, then you can begin your conversation below.
# """
# interpreter.llm.max_tokens = 1000
# interpreter.llm.context_window = 3000
# time.sleep(1)
# # Prompt the user to enter the name of the model running on Jan
# model_name_question = [
# inquirer.Text('jan_model_name', message="Enter the id of the model you have running on Jan"),
# interpreter.llm.model = ""
# interpreter.display_message(f"\nUsing Jan model: `{jan_model_name}` \n")
# time.sleep(1)
# Set the system message to a minimal version for all local models.
# Set offline for all local models
interpreter.offline = True
interpreter.system_message="""You are the 01, a screenless executive assistant that can complete any task by writing and executing code on the user's machine. Just write a markdown code block! The user has given you full and complete permission.
interpreter.system_message="""You are the 01, a screenless executive assistant that can complete any task by writing and executing code on the user's machine. Just write a markdown code block! The user has given you full and complete permission.