pull/1/head
killian 12 months ago
parent 94faeb6779
commit 8f8a11f422

@@ -0,0 +1 @@
The app is responsible for accepting user input, sending it to "/" as an [LMC message](https://docs.openinterpreter.com/protocols/lmc-messages), then receiving [streaming LMC messages](https://docs.openinterpreter.com/guides/streaming-response) and somehow displaying them to the user.
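For illustration, a minimal client sketch of that loop, assuming the server from the last file below is reachable at http://localhost:8000 and frames each streamed LMC message as an SSE `data:` line (the example message content is made up):

import json
import requests

# Send one LMC message to "/" and print the streamed LMC chunks as they arrive
message = {"role": "user", "type": "message", "content": "What's my first task?"}

with requests.post(
    "http://localhost:8000/", json={"message": message}, stream=True
) as response:
    for line in response.iter_lines():
        if line.startswith(b"data: "):
            # Each SSE event carries one streamed LMC message
            print(json.loads(line[len(b"data: "):]))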

@ -0,0 +1,5 @@
<!--
This is the fullscreen UI of the 01.
-->

@@ -0,0 +1,38 @@
"""
Responsible for setting up the language model, downloading it if necessary.
Ideally should pick the best LLM for the hardware.
"""
import os
import subprocess

### LLM SETUP

# Define the path to the models directory
models_dir = "01/core/models/"

# Check and create the models directory if it doesn't exist
if not os.path.exists(models_dir):
    os.makedirs(models_dir)

# Define the path to a llamafile
llamafile_path = os.path.join(models_dir, "phi-2.Q4_K_M.llamafile")

# Check if the llamafile exists, if not download it
if not os.path.exists(llamafile_path):
    subprocess.run(
        [
            "wget",
            "-O",
            llamafile_path,
            "https://huggingface.co/jartine/phi-2-llamafile/resolve/main/phi-2.Q4_K_M.llamafile",
        ],
        check=True,
    )

# Make the llamafile executable
subprocess.run(["chmod", "+x", llamafile_path], check=True)

# Run the llamafile in the background
subprocess.Popen([llamafile_path])
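Since `Popen` returns immediately, a caller that depends on the model may want to block until the llamafile's server is answering. A minimal sketch, assuming the llamafile exposes llama.cpp's OpenAI-compatible API on its default port 8080 (the same port the interpreter configuration below points at):

import time
import urllib.request

# Poll the (assumed) OpenAI-compatible endpoint until the server responds,
# giving up after roughly a minute
for _ in range(60):
    try:
        urllib.request.urlopen("http://localhost:8080/v1/models", timeout=1)
        break
    except OSError:
        time.sleep(1)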

@@ -0,0 +1,63 @@
"""
Responsible for configuring an interpreter, then using server.py to serve it at "/".
"""
from .server import serve
from interpreter import interpreter
### SYSTEM MESSAGE
# The system message is where most of the 01's behavior is configured.
# You can put code into the system message {{ in brackets like this }} which will be rendered just before the interpreter starts writing a message.
system_message = """
You are an executive assistant AI that helps the user manage their tasks. You can run Python code.
Store the user's tasks in a Python list called `tasks`.
---
The user's current task is: {{ tasks[0] if tasks else "No current tasks." }}
{{
if len(tasks) > 1:
    print("The next task is: ", tasks[1])
}}
---
When the user completes the current task, you should remove it from the list and read the next item by running `tasks = tasks[1:]\ntasks[0]`. Then, tell the user what the next task is.
When the user tells you about a set of tasks, you should intelligently order tasks, batch similar tasks, and break down large tasks into smaller tasks (for this, you should consult the user and get their permission to break it down). Your goal is to manage the task list as intelligently as possible, so the user stays efficient and is never overwhelmed. They will require a lot of encouragement, support, and kindness. Don't say too much about what's ahead of them; just focus them on each step, one at a time.
After starting a task, you should check in with the user around the estimated completion time to see if the task is completed.
To do this, schedule a reminder based on estimated completion time using `computer.clock.schedule(datetime_object, "Your message here.")`. You'll receive the message at `datetime_object`.
You guide the user through the list one task at a time, convincing them to move forward, giving a pep talk if need be. Your job is essentially to answer "what should I (the user) be doing right now?" for every moment of the day.
""".strip()
interpreter.system_message = system_message
### LLM SETTINGS
interpreter.llm.model = "local"
interpreter.llm.temperature = 0
interpreter.llm.api_base = "http://localhost:8080/v1" # Llamafile default
interpreter.llm.max_tokens = 1000
interpreter.llm.context_window = 3000
### MISC SETTINGS
interpreter.offline = True
interpreter.id = 206 # Used to identify itself to other interpreters. This should be changed programmatically so it's unique.
### START SERVER
serve(interpreter)
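The `{{ ... }}` rendering happens inside Open Interpreter, but as a rough illustration of the idea, here is a hypothetical sketch of how such blocks could be expanded just before each message. The `render_system_message` helper and its semantics (expressions evaluated, statements executed with their stdout captured) are assumptions, not the library's actual implementation:

import contextlib
import io
import re
import textwrap

def render_system_message(template, scope):
    # Replace each {{ ... }} block with the value of the expression,
    # or with whatever the statements print
    def run(match):
        code = textwrap.dedent(match.group(1)).strip()
        buffer = io.StringIO()
        with contextlib.redirect_stdout(buffer):
            try:
                result = eval(code, scope)
                if result is not None:
                    print(result)
            except SyntaxError:
                exec(code, scope)
        return buffer.getvalue().strip()

    return re.sub(r"\{\{(.*?)\}\}", run, template, flags=re.DOTALL)

print(render_system_message(system_message, {"tasks": ["Email Sarah", "Write the report"]}))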

@@ -0,0 +1,5 @@
{
    "role": "computer",
    "type": "message",
    "content": "Your 10:00am alarm has gone off."
}
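A message like this is what `computer.clock.schedule` from the system message above would deliver. A hypothetical sketch of that mechanism; the `deliver` callback and the use of a timer thread are assumptions:

import threading
from datetime import datetime

def schedule(datetime_object, content, deliver):
    # Fire a timer at datetime_object, then hand the interpreter an LMC
    # "computer" message like the one above via the deliver callback
    delay = max(0, (datetime_object - datetime.now()).total_seconds())
    message = {"role": "computer", "type": "message", "content": content}
    threading.Timer(delay, deliver, args=[message]).start()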

@@ -0,0 +1,24 @@
"""
Responsible for taking an interpreter, then serving it at "/" as a POST SSE endpoint, accepting and streaming LMC Messages.
https://docs.openinterpreter.com/protocols/lmc-messages
"""
from typing import AsyncGenerator
import json
import uvicorn
from fastapi import FastAPI, Request
from fastapi.responses import StreamingResponse

def serve(interpreter):
    app = FastAPI()

    @app.post("/")
    async def i_endpoint(request: Request) -> StreamingResponse:
        data = await request.json()

        async def event_stream() -> AsyncGenerator[str, None]:
            for response in interpreter.chat(message=data["message"], stream=True):
                # Frame each streamed LMC message as a server-sent event
                yield f"data: {json.dumps(response)}\n\n"

        return StreamingResponse(event_stream(), media_type="text/event-stream")

    uvicorn.run(app, host="0.0.0.0", port=8000)

@@ -0,0 +1,5 @@
# Display app/index.html on the second monitor in full-screen mode
# Set up the language model
# Set up and serve the interpreter at "/"
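A sketch of what this file might grow into; the module names and the Chromium invocation (kiosk mode, positioned on an assumed second monitor starting at x=1920) are guesses, not settled choices:

import subprocess

# Launch the fullscreen UI on the second monitor in kiosk mode
subprocess.Popen(
    ["chromium-browser", "--kiosk", "--window-position=1920,0", "/01/app/index.html"]
)

# Importing these modules runs their setup code: the first downloads and
# launches the llamafile, the second configures the interpreter and serves it
import core.llm
import core.core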

@@ -0,0 +1,11 @@
This folder contains everything we would change about Ubuntu. A folder here represents a folder added to `root`.
---
# Plan
1. We modify the bootloader to show a circle.
2. We modify Linux so that the primary display is a virtual display, and the display the user sees is the secondary display.
3. We make a fullscreen app auto-start on the secondary display: Chromium in kiosk mode, loading /01/app/index.html.
4. We also make 01/core/main.py run on start-up. This is the interpreter.
5. We put monoliths around the system, which put information into /01/core/queue.
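As one illustration of step 5, a hypothetical monolith could drop LMC messages into the queue as JSON files; the one-file-per-message layout is an assumption:

import json
import time
from pathlib import Path

queue_dir = Path("/01/core/queue")
queue_dir.mkdir(parents=True, exist_ok=True)

# Write one LMC "computer" message per file for the interpreter to drain
message = {"role": "computer", "type": "message", "content": "Your 10:00am alarm has gone off."}
(queue_dir / f"{time.time_ns()}.json").write_text(json.dumps(message))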