Re-lint after rebase

pull/223/head
Davy Peter Braun 9 months ago
parent 403a29f0d6
commit 79ee710064

@@ -1,8 +1,8 @@
# Roadmap
Our goal is to power a billion devices with the 01OS over the next 10 years. The Cambrian explosion of AI devices.
We can do that with your help. Help extend the 01OS to run on new hardware, to connect with new peripherals like GPS and cameras, and to add new locally running language models to unlock use cases for this technology that no one has even imagined yet.
In the coming months, we're going to release:
@@ -10,4 +10,3 @@ In the coming months, we're going to release:
- [ ] An open-source language model for computer control
- [ ] A React Native app for your phone
- [ ] A hand-held device that runs fully offline.

@@ -36,7 +36,7 @@
- [ ] Sends to describe API
- [ ] Prints and returns description
- [ ] Llamafile for phi-2 + moondream
- [ ] Test on Raspberry Pi + Jetson (+ Android mini phone?)
**OS**
@@ -66,7 +66,7 @@
**Hardware**
- [ ] (Hardware and software) Get the 01OS working on the **Jetson** or Pi. Pick one to move forward with.
- [ ] Connect the Seeed Sense (ESP32 with Wifi, Bluetooth and a mic) to a small DAC + amplifier + speaker.
- [ ] Connect the Seeed Sense to a battery.
- [ ] Configure the ESP32 to be a wireless mic + speaker for the Jetson or Pi.

@@ -34,9 +34,9 @@ poetry run 01 --client
### Flags
- `--client`
Run client.
- `--client-type TEXT`
Specify the client type.
Default: `auto`.

@@ -44,73 +44,73 @@ For more information, please read about <a href="/services/speech-to-text">speec
## CLI Flags
- `--server`
Run server.
- `--server-host TEXT`
Specify the server host where the server will deploy.
Default: `0.0.0.0`.
- `--server-port INTEGER`
Specify the server port where the server will deploy.
Default: `10001`.
- `--tunnel-service TEXT`
Specify the tunnel service.
Default: `ngrok`.
- `--expose`
Expose server to internet.
- `--server-url TEXT`
Specify the server URL that the client should expect.
Defaults to server-host and server-port.
Default: `None`.
- `--llm-service TEXT`
Specify the LLM service.
Default: `litellm`.
- `--model TEXT`
Specify the model.
Default: `gpt-4`.
- `--llm-supports-vision`
Specify if the LLM service supports vision.
- `--llm-supports-functions`
Specify if the LLM service supports functions.
- `--context-window INTEGER`
Specify the context window size.
Default: `2048`.
- `--max-tokens INTEGER`
Specify the maximum number of tokens.
Default: `4096`.
- `--temperature FLOAT`
Specify the temperature for generation.
Default: `0.8`.
- `--tts-service TEXT`
Specify the TTS service.
Default: `openai`.
- `--stt-service TEXT`
Specify the STT service.
Default: `openai`.
- `--local`
Use recommended local services for LLM, STT, and TTS.
- `--install-completion [bash|zsh|fish|powershell|pwsh]`
Install completion for the specified shell.
Default: `None`.
- `--show-completion [bash|zsh|fish|powershell|pwsh]`
Show completion for the specified shell, to copy it or customize the installation.
Default: `None`.
- `--help`
Show this message and exit.
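Several of these flags are typically combined in one invocation. The sketch below is hypothetical (it assumes the package is installed and run via poetry, as in the `poetry run 01 --client` example above) and simply composes a server command from the documented defaults:

```shell
# Compose a server command from the documented defaults:
# host 0.0.0.0, port 10001, tunneled via the default service (ngrok).
HOST="0.0.0.0"
PORT="10001"
CMD="poetry run 01 --server --server-host $HOST --server-port $PORT --expose"
echo "$CMD"
```

Dropping `--expose` keeps the server reachable only on the local network.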

@@ -29,4 +29,4 @@
.body {
font-weight: normal;
}

@@ -22,13 +22,13 @@ Please install first [PlatformIO](http://platformio.org/) open source ecosystem
```bash
cd software/source/clients/esp32/src/client/
```
And build and upload the firmware with a simple command:
```bash
pio run --target upload
```
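After flashing, it can help to watch the board's serial output. A minimal sketch using PlatformIO's device monitor, assuming the firmware's baud rate matches the `monitor_speed = 115200` set in `platformio.ini`:

```shell
# Attach a serial monitor at the baud rate the firmware uses.
BAUD=115200
# Print the command to run (the monitor itself requires a connected board).
echo "pio device monitor --baud $BAUD"
```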
## Wifi

@@ -19,4 +19,4 @@
![](mac-share-internet-v2.png)
d. Now the Jetson should have connectivity!

@@ -1,4 +1,3 @@
_archive
__pycache__
.idea

@@ -54,4 +54,4 @@ target-version = ['py311']
[tool.isort]
profile = "black"
multi_line_output = 3
include_trailing_comma = true

@@ -19,11 +19,10 @@ Please install first [PlatformIO](http://platformio.org/) open source ecosystem
```bash
cd client/
```
And build and upload the firmware with a simple command:
```bash
pio run --target upload
```

@@ -78,11 +78,11 @@ const char post_connected_html[] PROGMEM = R"=====(
<head>
<title>01OS Setup</title>
<style>
* {
box-sizing: border-box;
}
body {
background-color: #fff;
margin: 0;
@@ -122,15 +122,15 @@ const char post_connected_html[] PROGMEM = R"=====(
input[type="submit"]:hover {
background-color: #333;
}
#error_message {
color: red;
font-weight: bold;
text-align: center;
width: 100%;
margin-top: 20px;
max-width: 300px;
}
</style>
@@ -144,7 +144,7 @@ const char post_connected_html[] PROGMEM = R"=====(
<input type="text" id="server_address" name="server_address"><br><br>
</div>
<input type="submit" value="Connect"/>
<p id="error_message"></p>
@@ -270,7 +270,7 @@ bool connectTo01OS(String server_address)
portStr = server_address.substring(colonIndex + 1);
} else {
domain = server_address;
portStr = "";
}
WiFiClient c;
@@ -281,7 +281,7 @@ bool connectTo01OS(String server_address)
port = portStr.toInt();
}
HttpClient http(c, domain.c_str(), port);
Serial.println("Connecting to 01OS at " + domain + ":" + port + "/ping");
if (domain.indexOf("ngrok") != -1) {
@@ -363,7 +363,7 @@ bool connectTo01OS(String server_address)
Serial.print("Connection failed: ");
Serial.println(err);
}
return connectionSuccess;
}
@@ -436,7 +436,7 @@ void setUpWebserver(AsyncWebServer &server, const IPAddress &localIP)
{
String ssid;
String password;
// Check if SSID parameter exists and assign it
if(request->hasParam("ssid", true)) {
ssid = request->getParam("ssid", true)->value();
@@ -446,7 +446,7 @@ void setUpWebserver(AsyncWebServer &server, const IPAddress &localIP)
Serial.println("OTHER SSID SELECTED: " + ssid);
}
}
// Check if Password parameter exists and assign it
if(request->hasParam("password", true)) {
password = request->getParam("password", true)->value();
@@ -458,7 +458,7 @@ void setUpWebserver(AsyncWebServer &server, const IPAddress &localIP)
if(request->hasParam("password", true) && request->hasParam("ssid", true)) {
connectToWifi(ssid, password);
}
// Redirect user or send a response back
if (WiFi.status() == WL_CONNECTED) {
@@ -466,7 +466,7 @@ void setUpWebserver(AsyncWebServer &server, const IPAddress &localIP)
AsyncWebServerResponse *response = request->beginResponse(200, "text/html", htmlContent);
response->addHeader("Cache-Control", "public,max-age=31536000"); // save this file to cache for 1 year (unless you refresh)
request->send(response);
Serial.println("Served Post connection HTML Page");
} else {
request->send(200, "text/plain", "Failed to connect to " + ssid);
} });
@@ -474,7 +474,7 @@ void setUpWebserver(AsyncWebServer &server, const IPAddress &localIP)
server.on("/submit_01os", HTTP_POST, [](AsyncWebServerRequest *request)
{
String server_address;
// Check if the server_address parameter exists and assign it
if(request->hasParam("server_address", true)) {
server_address = request->getParam("server_address", true)->value();
@@ -490,7 +490,7 @@ void setUpWebserver(AsyncWebServer &server, const IPAddress &localIP)
{
AsyncWebServerResponse *response = request->beginResponse(200, "text/html", successHtml);
response->addHeader("Cache-Control", "no-cache, no-store, must-revalidate"); // Prevent caching of this page
request->send(response);
Serial.println(" ");
Serial.println("Connected to 01 websocket!");
Serial.println(" ");
@@ -502,7 +502,7 @@ void setUpWebserver(AsyncWebServer &server, const IPAddress &localIP)
String htmlContent = String(post_connected_html); // Load your HTML template
// Inject the error message
htmlContent.replace("<p id=\"error_message\"></p>", "<p id=\"error_message\" style=\"color: red;\">Error connecting, please try again.</p>");
AsyncWebServerResponse *response = request->beginResponse(200, "text/html", htmlContent);
response->addHeader("Cache-Control", "no-cache, no-store, must-revalidate"); // Prevent caching of this page
request->send(response);
@@ -622,7 +622,7 @@ void InitI2SSpeakerOrMic(int mode)
#if ESP_IDF_VERSION > ESP_IDF_VERSION_VAL(4, 1, 0)
.communication_format =
I2S_COMM_FORMAT_STAND_I2S, // Set the format of the communication.
#else
.communication_format = I2S_COMM_FORMAT_I2S,
#endif
.intr_alloc_flags = ESP_INTR_FLAG_LEVEL1,
@@ -779,17 +779,17 @@ void setup() {
Serial.begin(115200); // Initialize serial communication at 115200 baud rate.
// Attempt to reconnect to WiFi using stored credentials.
// Check if WiFi is connected but the server URL isn't stored
Serial.setTxBufferSize(1024); // Set the transmit buffer size for the Serial object.
WiFi.mode(WIFI_AP_STA); // Set WiFi mode to both AP and STA.
// delay(100); // Short delay to ensure mode change takes effect
// WiFi.softAPConfig(localIP, gatewayIP, subnetMask);
// WiFi.softAP(ssid, password);
startSoftAccessPoint(ssid, password, localIP, gatewayIP);
setUpDNSServer(dnsServer, localIP);
setUpWebserver(server, localIP);
tryReconnectWiFi();
// Print a welcome message to the Serial port.
@@ -823,7 +823,7 @@ void loop()
if ((millis() - last_dns_ms) > DNS_INTERVAL) {
last_dns_ms = millis(); // Seems to help with stability; if the loop is doing other work, this may not be needed.
dnsServer.processNextRequest(); // Call this at least every 10 ms (longer intervals may work, but are untested for stability).
}
// Check WiFi connection status
if (WiFi.status() == WL_CONNECTED && !hasSetupWebsocket)
@@ -865,4 +865,4 @@ void loop()
M5.update();
webSocket.loop();
}
}

@@ -10,7 +10,7 @@ platform = espressif32
framework = arduino
monitor_speed = 115200
upload_speed = 1500000
monitor_filters =
esp32_exception_decoder
time
build_flags =
@@ -23,7 +23,7 @@ board = esp32dev
[env:m5echo]
extends = esp32common
lib_deps =
m5stack/M5Atom @ ^0.1.2
links2004/WebSockets @ ^2.4.1
;esphome/ESPAsyncWebServer-esphome @ ^3.1.0

@@ -2,9 +2,11 @@ from ..base_device import Device
device = Device()
def main(server_url):
device.server_url = server_url
device.start()
if __name__ == "__main__":
main()

@@ -2,9 +2,11 @@ from ..base_device import Device
device = Device()
def main(server_url):
device.server_url = server_url
device.start()
if __name__ == "__main__":
main()

@@ -2,8 +2,10 @@ from ..base_device import Device
device = Device()
def main():
device.start()
if __name__ == "__main__":
main()

@@ -2,9 +2,11 @@ from ..base_device import Device
device = Device()
def main(server_url):
device.server_url = server_url
device.start()
if __name__ == "__main__":
main()

@@ -1,4 +1,5 @@
from dotenv import load_dotenv
load_dotenv()  # take environment variables from .env.
import os
@@ -8,7 +9,7 @@ from pathlib import Path
### LLM SETUP
# Define the path to a llamafile
llamafile_path = Path(__file__).parent / "model.llamafile"
# Check if the new llamafile exists; if not, download it
if not os.path.exists(llamafile_path):
@@ -25,4 +26,4 @@ if not os.path.exists(llamafile_path):
subprocess.run(["chmod", "+x", llamafile_path], check=True)
# Run the new llamafile
subprocess.run([str(llamafile_path)], check=True)

@@ -1,6 +1,5 @@
class Llm:
def __init__(self, config):
# Litellm is used by OI by default, so we just modify OI
interpreter = config["interpreter"]
@@ -10,6 +9,3 @@ class Llm:
setattr(interpreter, key.replace("-", "_"), value)
self.llm = interpreter.llm.completions

@@ -3,29 +3,54 @@ import subprocess
import requests
import json
class Llm:
def __init__(self, config):
self.install(config["service_directory"])
def install(self, service_directory):
LLM_FOLDER_PATH = service_directory
self.llm_directory = os.path.join(LLM_FOLDER_PATH, "llm")
if not os.path.isdir(self.llm_directory):  # Check if the LLM directory exists
os.makedirs(LLM_FOLDER_PATH, exist_ok=True)
# Install WasmEdge
subprocess.run(
[
"curl",
"-sSf",
"https://raw.githubusercontent.com/WasmEdge/WasmEdge/master/utils/install.sh",
"|",
"bash",
"-s",
"--",
"--plugin",
"wasi_nn-ggml",
]
)
# Download the Qwen1.5-0.5B-Chat model GGUF file
MODEL_URL = "https://huggingface.co/second-state/Qwen1.5-0.5B-Chat-GGUF/resolve/main/Qwen1.5-0.5B-Chat-Q5_K_M.gguf"
subprocess.run(["curl", "-LO", MODEL_URL], cwd=self.llm_directory)
# Download the llama-api-server.wasm app
APP_URL = "https://github.com/LlamaEdge/LlamaEdge/releases/latest/download/llama-api-server.wasm"
subprocess.run(["curl", "-LO", APP_URL], cwd=self.llm_directory)
# Run the API server
subprocess.run(
[
"wasmedge",
"--dir",
".:.",
"--nn-preload",
"default:GGML:AUTO:Qwen1.5-0.5B-Chat-Q5_K_M.gguf",
"llama-api-server.wasm",
"-p",
"llama-2-chat",
],
cwd=self.llm_directory,
)
print("LLM setup completed.")
else:
@@ -33,17 +58,11 @@ class Llm:
def llm(self, messages):
url = "http://localhost:8080/v1/chat/completions"
headers = {"accept": "application/json", "Content-Type": "application/json"}
data = {"messages": messages, "model": "llama-2-chat"}
with requests.post(
url, headers=headers, data=json.dumps(data), stream=True
) as response:
for line in response.iter_lines():
if line:
yield json.loads(line)

@@ -7,4 +7,4 @@ target/
**/*.rs.bk
# MSVC Windows builds of rustc generate these, which store debugging information
*.pdb

@@ -11,4 +11,4 @@ clap = { version = "4.4.18", features = ["derive"] }
cpal = "0.15.2"
hound = "3.5.1"
whisper-rs = "0.10.0"
whisper-rs-sys = "0.8.0"

@@ -10,7 +10,7 @@ struct Args {
/// This is the model for Whisper STT
#[arg(short, long, value_parser, required = true)]
model_path: PathBuf,
/// This is the wav audio file that will be converted from speech to text
#[arg(short, long, value_parser, required = true)]
file_path: Option<PathBuf>,
@@ -31,4 +31,4 @@ fn main() {
Ok(transcription) => print!("{}", transcription),
Err(e) => panic!("Error: {}", e),
}
}

@@ -61,4 +61,4 @@ pub fn transcribe(model_path: &PathBuf, file_path: &PathBuf) -> Result<String, S
}
Ok(transcription)
}

@@ -6,7 +6,6 @@ class Stt:
return stt(audio_file_path)
from datetime import datetime
import os
import contextlib
@@ -19,6 +18,7 @@ from openai import OpenAI
client = OpenAI()
def convert_mime_type_to_format(mime_type: str) -> str:
if mime_type == "audio/x-wav" or mime_type == "audio/wav":
return "wav"
@@ -29,30 +29,37 @@ def convert_mime_type_to_format(mime_type: str) -> str:
return mime_type
@contextlib.contextmanager
def export_audio_to_wav_ffmpeg(audio: bytearray, mime_type: str) -> str:
temp_dir = tempfile.gettempdir()
# Create a temporary file with the appropriate extension
input_ext = convert_mime_type_to_format(mime_type)
input_path = os.path.join(
temp_dir, f"input_{datetime.now().strftime('%Y%m%d%H%M%S%f')}.{input_ext}"
)
with open(input_path, "wb") as f:
f.write(audio)
# Check if the input file exists
assert os.path.exists(input_path), f"Input file does not exist: {input_path}"
# Export to wav
output_path = os.path.join(
temp_dir, f"output_{datetime.now().strftime('%Y%m%d%H%M%S%f')}.wav"
)
if mime_type == "audio/raw":
ffmpeg.input(
input_path,
f="s16le",
ar="16000",
ac=1,
).output(output_path, loglevel="panic").run()
else:
ffmpeg.input(input_path).output(
output_path, acodec="pcm_s16le", ac=1, ar="16k", loglevel="panic"
).run()
try:
yield output_path
@@ -60,39 +67,49 @@ def export_audio_to_wav_ffmpeg(audio: bytearray, mime_type: str) -> str:
os.remove(input_path)
os.remove(output_path)
def run_command(command):
result = subprocess.run(
command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True
)
return result.stdout, result.stderr
def get_transcription_file(wav_file_path: str):
local_path = os.path.join(os.path.dirname(__file__), "local_service")
whisper_rust_path = os.path.join(
os.path.dirname(__file__), "whisper-rust", "target", "release"
)
model_name = os.getenv("WHISPER_MODEL_NAME", "ggml-tiny.en.bin")
output, error = run_command(
[
os.path.join(whisper_rust_path, "whisper-rust"),
"--model-path",
os.path.join(local_path, model_name),
"--file-path",
wav_file_path,
]
)
return output return output
def get_transcription_bytes(audio_bytes: bytearray, mime_type):
with export_audio_to_wav_ffmpeg(audio_bytes, mime_type) as wav_file_path:
return get_transcription_file(wav_file_path)
def stt_bytes(audio_bytes: bytearray, mime_type="audio/wav"):
with export_audio_to_wav_ffmpeg(audio_bytes, mime_type) as wav_file_path:
return stt_wav(wav_file_path)
def stt_wav(wav_file_path: str):
audio_file = open(wav_file_path, "rb")
try:
transcript = client.audio.transcriptions.create(
model="whisper-1", file=audio_file, response_format="text"
)
except openai.BadRequestError as e:
print(f"openai.BadRequestError: {e}")
@@ -100,10 +117,13 @@ def stt_wav(wav_file_path: str):
return transcript
def stt(input_data, mime_type="audio/wav"):
if isinstance(input_data, str):
return stt_wav(input_data)
elif isinstance(input_data, bytearray):
return stt_bytes(input_data, mime_type)
else:
raise ValueError(
"Input data should be either a path to a wav file (str) or audio bytes (bytearray)"
)

@@ -13,26 +13,40 @@ class Tts:
self.install(config["service_directory"])
def tts(self, text):
with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as temp_file:
output_file = temp_file.name
piper_dir = self.piper_directory
subprocess.run(
[
os.path.join(piper_dir, "piper"),
"--model",
os.path.join(
piper_dir,
os.getenv("PIPER_VOICE_NAME", "en_US-lessac-medium.onnx"),
),
"--output_file",
output_file,
],
input=text,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
text=True,
)
# TODO: hack to format audio correctly for device
outfile = tempfile.gettempdir() + "/" + "raw.dat"
ffmpeg.input(temp_file.name).output(
outfile, f="s16le", ar="16000", ac="1", loglevel="panic"
).run()
return outfile
def install(self, service_directory):
PIPER_FOLDER_PATH = service_directory
self.piper_directory = os.path.join(PIPER_FOLDER_PATH, "piper")
if not os.path.isdir(
self.piper_directory
): # Check if the Piper directory exists
os.makedirs(PIPER_FOLDER_PATH, exist_ok=True)
# Determine OS and architecture
@@ -60,52 +74,92 @@ class Tts:
asset_url = f"{PIPER_URL}{PIPER_ASSETNAME}"
if OS == "windows":
asset_url = asset_url.replace(".tar.gz", ".zip")
# Download and extract Piper
urllib.request.urlretrieve(
asset_url, os.path.join(PIPER_FOLDER_PATH, PIPER_ASSETNAME)
)
# Extract the downloaded file # Extract the downloaded file
if OS == "windows": if OS == "windows":
import zipfile import zipfile
with zipfile.ZipFile(os.path.join(PIPER_FOLDER_PATH, PIPER_ASSETNAME), 'r') as zip_ref:
with zipfile.ZipFile(
os.path.join(PIPER_FOLDER_PATH, PIPER_ASSETNAME), "r"
) as zip_ref:
zip_ref.extractall(path=PIPER_FOLDER_PATH) zip_ref.extractall(path=PIPER_FOLDER_PATH)
else: else:
with tarfile.open(os.path.join(PIPER_FOLDER_PATH, PIPER_ASSETNAME), 'r:gz') as tar: with tarfile.open(
os.path.join(PIPER_FOLDER_PATH, PIPER_ASSETNAME), "r:gz"
) as tar:
tar.extractall(path=PIPER_FOLDER_PATH) tar.extractall(path=PIPER_FOLDER_PATH)
PIPER_VOICE_URL = os.getenv('PIPER_VOICE_URL', PIPER_VOICE_URL = os.getenv(
'https://huggingface.co/rhasspy/piper-voices/resolve/main/en/en_US/lessac/medium/') "PIPER_VOICE_URL",
PIPER_VOICE_NAME = os.getenv('PIPER_VOICE_NAME', 'en_US-lessac-medium.onnx') "https://huggingface.co/rhasspy/piper-voices/resolve/main/en/en_US/lessac/medium/",
)
PIPER_VOICE_NAME = os.getenv("PIPER_VOICE_NAME", "en_US-lessac-medium.onnx")
# Download voice model and its json file # Download voice model and its json file
urllib.request.urlretrieve(f"{PIPER_VOICE_URL}{PIPER_VOICE_NAME}", urllib.request.urlretrieve(
os.path.join(self.piper_directory, PIPER_VOICE_NAME)) f"{PIPER_VOICE_URL}{PIPER_VOICE_NAME}",
urllib.request.urlretrieve(f"{PIPER_VOICE_URL}{PIPER_VOICE_NAME}.json", os.path.join(self.piper_directory, PIPER_VOICE_NAME),
os.path.join(self.piper_directory, f"{PIPER_VOICE_NAME}.json")) )
urllib.request.urlretrieve(
f"{PIPER_VOICE_URL}{PIPER_VOICE_NAME}.json",
os.path.join(self.piper_directory, f"{PIPER_VOICE_NAME}.json"),
)
# Additional setup for macOS # Additional setup for macOS
if OS == "macos": if OS == "macos":
if ARCH == "x64": if ARCH == "x64":
subprocess.run(['softwareupdate', '--install-rosetta', '--agree-to-license']) subprocess.run(
["softwareupdate", "--install-rosetta", "--agree-to-license"]
)
PIPER_PHONEMIZE_ASSETNAME = f"piper-phonemize_{OS}_{ARCH}.tar.gz" PIPER_PHONEMIZE_ASSETNAME = f"piper-phonemize_{OS}_{ARCH}.tar.gz"
PIPER_PHONEMIZE_URL = "https://github.com/rhasspy/piper-phonemize/releases/latest/download/" PIPER_PHONEMIZE_URL = "https://github.com/rhasspy/piper-phonemize/releases/latest/download/"
urllib.request.urlretrieve(f"{PIPER_PHONEMIZE_URL}{PIPER_PHONEMIZE_ASSETNAME}", urllib.request.urlretrieve(
os.path.join(self.piper_directory, PIPER_PHONEMIZE_ASSETNAME)) f"{PIPER_PHONEMIZE_URL}{PIPER_PHONEMIZE_ASSETNAME}",
os.path.join(self.piper_directory, PIPER_PHONEMIZE_ASSETNAME),
with tarfile.open(os.path.join(self.piper_directory, PIPER_PHONEMIZE_ASSETNAME), 'r:gz') as tar: )
with tarfile.open(
os.path.join(self.piper_directory, PIPER_PHONEMIZE_ASSETNAME),
"r:gz",
) as tar:
tar.extractall(path=self.piper_directory) tar.extractall(path=self.piper_directory)
PIPER_DIR = self.piper_directory PIPER_DIR = self.piper_directory
subprocess.run(['install_name_tool', '-change', '@rpath/libespeak-ng.1.dylib', subprocess.run(
f"{PIPER_DIR}/piper-phonemize/lib/libespeak-ng.1.dylib", f"{PIPER_DIR}/piper"]) [
subprocess.run(['install_name_tool', '-change', '@rpath/libonnxruntime.1.14.1.dylib', "install_name_tool",
f"{PIPER_DIR}/piper-phonemize/lib/libonnxruntime.1.14.1.dylib", f"{PIPER_DIR}/piper"]) "-change",
subprocess.run(['install_name_tool', '-change', '@rpath/libpiper_phonemize.1.dylib', "@rpath/libespeak-ng.1.dylib",
f"{PIPER_DIR}/piper-phonemize/lib/libpiper_phonemize.1.dylib", f"{PIPER_DIR}/piper"]) f"{PIPER_DIR}/piper-phonemize/lib/libespeak-ng.1.dylib",
f"{PIPER_DIR}/piper",
]
)
subprocess.run(
[
"install_name_tool",
"-change",
"@rpath/libonnxruntime.1.14.1.dylib",
f"{PIPER_DIR}/piper-phonemize/lib/libonnxruntime.1.14.1.dylib",
f"{PIPER_DIR}/piper",
]
)
subprocess.run(
[
"install_name_tool",
"-change",
"@rpath/libpiper_phonemize.1.dylib",
f"{PIPER_DIR}/piper-phonemize/lib/libpiper_phonemize.1.dylib",
f"{PIPER_DIR}/piper",
]
)
print("Piper setup completed.") print("Piper setup completed.")
else: else:
print("Piper already set up. Skipping download.") print("Piper already set up. Skipping download.")

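The `tts` method above shells out to the Piper binary with a model path and an output file. A minimal sketch of assembling that argv list, mirroring `Tts.tts()` (the helper name and the paths are illustrative; nothing is executed here):

```python
import os


def build_piper_command(piper_dir, output_file, voice=None):
    """Assemble the argv list that Tts.tts() passes to subprocess.run()."""
    voice = voice or os.getenv("PIPER_VOICE_NAME", "en_US-lessac-medium.onnx")
    return [
        os.path.join(piper_dir, "piper"),
        "--model",
        os.path.join(piper_dir, voice),
        "--output_file",
        output_file,
    ]


cmd = build_piper_command("/opt/piper", "/tmp/out.wav", voice="en_US-lessac-medium.onnx")
print(cmd)
```

The text to synthesize is then fed on stdin (`input=text`), which is why no text argument appears in the command itself.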
@@ -36,7 +36,7 @@ Store the user's tasks in a Python list called `tasks`.

The user's current task is: {{ tasks[0] if tasks else "No current tasks." }}

{{
if len(tasks) > 1:
    print("The next task is: ", tasks[1])
}}

@@ -91,7 +91,7 @@ Store the user's tasks in a Python list called `tasks`.

The user's current task is: {{ tasks[0] if tasks else "No current tasks." }}

{{
if len(tasks) > 1:
    print("The next task is: ", tasks[1])
}}

@@ -184,7 +184,7 @@ except:
finally:
    sys.stdout = original_stdout
    sys.stderr = original_stderr
}}

# SKILLS

@@ -96,7 +96,7 @@ except:
finally:
    sys.stdout = original_stdout
    sys.stderr = original_stderr
}}

# SKILLS LIBRARY

@@ -131,4 +131,6 @@ print(output)

Remember: You can run Python code outside a function only to run a Python function; all other code must go in a Python function if you first write a Python function. ALL imports must go inside the function.
""".strip().replace(
    "OI_SKILLS_DIR", os.path.abspath(os.path.join(os.path.dirname(__file__), "skills"))
)
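The `{{ ... }}` blocks in the system message above are executed as Python when the prompt is rendered: `tasks[0]` is the current task and `tasks[1]`, when present, is the next one. A plain-Python sketch of that rendering logic (the function name is ours, not part of the codebase):

```python
def describe_tasks(tasks):
    """Mirror the prompt template: report the current task and, if any, the next one."""
    if not tasks:
        return "No current tasks."
    lines = [f"The user's current task is: {tasks[0]}"]
    if len(tasks) > 1:
        lines.append(f"The next task is: {tasks[1]}")
    return "\n".join(lines)


print(describe_tasks([]))
print(describe_tasks(["buy milk", "call Bob"]))
```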
@@ -1,12 +1,14 @@
import subprocess
import re
import shutil
import pyqrcode
import time

from ..utils.print_markdown import print_markdown


def create_tunnel(
    tunnel_method="ngrok", server_host="localhost", server_port=10001, qr=False
):
    print_markdown("Exposing server to the internet...")

    server_url = ""
    if tunnel_method == "bore":

@@ -35,9 +37,11 @@ def create_tunnel(tunnel_method='ngrok', server_host='localhost', server_port=10
            if not line:
                break
            if "listening at bore.pub:" in line:
                remote_port = re.search("bore.pub:([0-9]*)", line).group(1)
                server_url = f"bore.pub:{remote_port}"
                print_markdown(
                    f"Your server is being hosted at the following URL: bore.pub:{remote_port}"
                )
                break

    elif tunnel_method == "localtunnel":

@@ -69,9 +73,11 @@ def create_tunnel(tunnel_method='ngrok', server_host='localhost', server_port=10
            match = url_pattern.search(line)
            if match:
                found_url = True
                remote_url = match.group(0).replace("your url is: ", "")
                server_url = remote_url
                print(
                    f"\nYour server is being hosted at the following URL: {remote_url}"
                )
                break  # Exit the loop once the URL is found

        if not found_url:

@@ -93,7 +99,11 @@ def create_tunnel(tunnel_method='ngrok', server_host='localhost', server_port=10
        # If ngrok is installed, start it on the specified port
        # process = subprocess.Popen(f'ngrok http {server_port} --log=stdout', shell=True, stdout=subprocess.PIPE)
        process = subprocess.Popen(
            f"ngrok http {server_port} --scheme http,https --domain=marten-advanced-dragon.ngrok-free.app --log=stdout",
            shell=True,
            stdout=subprocess.PIPE,
        )

        # Initially, no URL is found
        found_url = False

@@ -110,15 +120,18 @@ def create_tunnel(tunnel_method='ngrok', server_host='localhost', server_port=10
                found_url = True
                remote_url = match.group(0)
                server_url = remote_url
                print(
                    f"\nYour server is being hosted at the following URL: {remote_url}"
                )
                break  # Exit the loop once the URL is found

        if not found_url:
            print(
                "Failed to extract the ngrok tunnel URL. Please check ngrok's output for details."
            )

    if server_url and qr:
        text = pyqrcode.create(remote_url)
        print(text.terminal(quiet_zone=1))

    return server_url
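Each tunnel backend above is the same pattern: scan the subprocess's log output line by line and pull the public URL out with a regex. The bore case can be isolated into a self-contained sketch (the escaped dot in the pattern is our tightening of the original's `bore.pub:([0-9]*)`):

```python
import re


def parse_bore_url(line):
    """Extract the public address from a bore log line, as create_tunnel() does."""
    match = re.search(r"bore\.pub:([0-9]*)", line)
    return f"bore.pub:{match.group(1)}" if match else None


print(parse_bore_url("listening at bore.pub:34567"))  # bore.pub:34567
```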
@@ -5,6 +5,7 @@ import tempfile
import ffmpeg
import subprocess


def convert_mime_type_to_format(mime_type: str) -> str:
    if mime_type == "audio/x-wav" or mime_type == "audio/wav":
        return "wav"

@@ -15,39 +16,49 @@ def convert_mime_type_to_format(mime_type: str) -> str:
    return mime_type


@contextlib.contextmanager
def export_audio_to_wav_ffmpeg(audio: bytearray, mime_type: str) -> str:
    temp_dir = tempfile.gettempdir()

    # Create a temporary file with the appropriate extension
    input_ext = convert_mime_type_to_format(mime_type)
    input_path = os.path.join(
        temp_dir, f"input_{datetime.now().strftime('%Y%m%d%H%M%S%f')}.{input_ext}"
    )
    with open(input_path, "wb") as f:
        f.write(audio)

    # Check if the input file exists
    assert os.path.exists(input_path), f"Input file does not exist: {input_path}"

    # Export to wav
    output_path = os.path.join(
        temp_dir, f"output_{datetime.now().strftime('%Y%m%d%H%M%S%f')}.wav"
    )
    print(mime_type, input_path, output_path)
    if mime_type == "audio/raw":
        ffmpeg.input(
            input_path,
            f="s16le",
            ar="16000",
            ac=1,
        ).output(output_path, loglevel="panic").run()
    else:
        ffmpeg.input(input_path).output(
            output_path, acodec="pcm_s16le", ac=1, ar="16k", loglevel="panic"
        ).run()

    try:
        yield output_path
    finally:
        os.remove(input_path)


def run_command(command):
    result = subprocess.run(
        command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True
    )
    return result.stdout, result.stderr
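The `run_command` helper closing this file is the standard capture-both-streams pattern. A self-contained copy, exercised with a harmless command (assumes a POSIX `echo` on the PATH):

```python
import subprocess


def run_command(command):
    # Capture stdout and stderr as decoded text rather than bytes.
    result = subprocess.run(
        command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True
    )
    return result.stdout, result.stderr


out, err = run_command(["echo", "hello"])
print(out.strip())  # hello
```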
@@ -1,4 +1,5 @@
from dotenv import load_dotenv

load_dotenv()  # take environment variables from .env.

import asyncio

@@ -7,42 +8,49 @@ import platform
from .logs import setup_logging
from .logs import logger

setup_logging()


def get_kernel_messages():
    """
    Is this the way to do this?
    """
    current_platform = platform.system()

    if current_platform == "Darwin":
        process = subprocess.Popen(
            ["syslog"], stdout=subprocess.PIPE, stderr=subprocess.DEVNULL
        )
        output, _ = process.communicate()
        return output.decode("utf-8")
    elif current_platform == "Linux":
        with open("/var/log/dmesg", "r") as file:
            return file.read()
    else:
        logger.info("Unsupported platform.")


def custom_filter(message):
    # Check for {TO_INTERPRETER{ message here }TO_INTERPRETER} pattern
    if "{TO_INTERPRETER{" in message and "}TO_INTERPRETER}" in message:
        start = message.find("{TO_INTERPRETER{") + len("{TO_INTERPRETER{")
        end = message.find("}TO_INTERPRETER}", start)
        return message[start:end]
    # Check for USB mention
    # elif 'USB' in message:
    #     return message
    # # Check for network related keywords
    # elif any(keyword in message for keyword in ['network', 'IP', 'internet', 'LAN', 'WAN', 'router', 'switch']) and "networkStatusForFlags" not in message:
    #     return message
    else:
        return None


last_messages = ""


def check_filtered_kernel():
    messages = get_kernel_messages()
    if messages is None:

@@ -51,12 +59,12 @@ def check_filtered_kernel():
    global last_messages
    messages.replace(last_messages, "")
    messages = messages.split("\n")

    filtered_messages = []
    for message in messages:
        if custom_filter(message):
            filtered_messages.append(message)

    return "\n".join(filtered_messages)

@@ -66,11 +74,25 @@ async def put_kernel_messages_into_queue(queue):
        if text:
            if isinstance(queue, asyncio.Queue):
                await queue.put({"role": "computer", "type": "console", "start": True})
                await queue.put(
                    {
                        "role": "computer",
                        "type": "console",
                        "format": "output",
                        "content": text,
                    }
                )
                await queue.put({"role": "computer", "type": "console", "end": True})
            else:
                queue.put({"role": "computer", "type": "console", "start": True})
                queue.put(
                    {
                        "role": "computer",
                        "type": "console",
                        "format": "output",
                        "content": text,
                    }
                )
                queue.put({"role": "computer", "type": "console", "end": True})

        await asyncio.sleep(5)
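The `custom_filter` logic above pulls a payload out of kernel log noise by locating the `{TO_INTERPRETER{ ... }TO_INTERPRETER}` markers with `find`. The extraction step in isolation, as a self-contained sketch:

```python
def extract_to_interpreter(message):
    """Pull the payload out of a {TO_INTERPRETER{ ... }TO_INTERPRETER} marker,
    mirroring the first branch of custom_filter()."""
    start_tag, end_tag = "{TO_INTERPRETER{", "}TO_INTERPRETER}"
    if start_tag in message and end_tag in message:
        start = message.find(start_tag) + len(start_tag)
        end = message.find(end_tag, start)
        return message[start:end]
    return None


print(extract_to_interpreter("kernel noise {TO_INTERPRETER{payload}TO_INTERPRETER} more noise"))
```

Note that the payload is returned verbatim, whitespace included; callers that need a clean string must strip it themselves.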
@@ -1,4 +1,5 @@
from dotenv import load_dotenv

load_dotenv()  # take environment variables from .env.

import os

@@ -9,9 +10,7 @@ root_logger: logging.Logger = logging.getLogger()


def _basic_config() -> None:
    logging.basicConfig(format="%(message)s")


def setup_logging() -> None:
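The `format="%(message)s"` configuration above emits bare message text with no level or timestamp prefix. A sketch of the same formatter attached to an explicit handler so the output can be inspected (the logger name is illustrative):

```python
import io
import logging

# Route a logger through a StringIO buffer using the bare-message format.
buffer = io.StringIO()
handler = logging.StreamHandler(buffer)
handler.setFormatter(logging.Formatter("%(message)s"))
logger = logging.getLogger("format_sketch")
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.propagate = False

logger.info("hello")
print(buffer.getvalue().strip())  # hello
```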
@@ -1,12 +1,11 @@
class Accumulator:
    def __init__(self):
        self.template = {"role": None, "type": None, "format": None, "content": None}
        self.message = self.template

    def accumulate(self, chunk):
        # print(str(chunk)[:100])
        if type(chunk) == dict:
            if "format" in chunk and chunk["format"] == "active_line":
                # We don't do anything with these
                return None

@@ -17,15 +16,20 @@ class Accumulator:
                return None

            if "content" in chunk:
                if any(
                    self.message[key] != chunk[key]
                    for key in self.message
                    if key != "content"
                ):
                    self.message = chunk
                    if "content" not in self.message:
                        self.message["content"] = chunk["content"]
                else:
                    if type(chunk["content"]) == dict:
                        # dict concatenation cannot happen, so we see if chunk is a dict
                        self.message["content"]["content"] += chunk["content"][
                            "content"
                        ]
                    else:
                        self.message["content"] += chunk["content"]
                return None

@@ -41,5 +45,3 @@ class Accumulator:
            self.message["content"] = b""
        self.message["content"] += chunk
        return None
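The core idea of `Accumulator` is: when a chunk's metadata (everything except `content`) matches the message being built, append its content; otherwise start a fresh message. A condensed, self-contained sketch of that merge rule (it omits the `active_line`, nested-dict, and bytes branches of the real class):

```python
class ChunkAccumulator:
    """Condensed sketch of Accumulator: merge streaming chunks that share
    role/type/format into one growing message."""

    def __init__(self):
        self.message = {"role": None, "type": None, "format": None, "content": None}

    def accumulate(self, chunk):
        # A chunk with different metadata starts a new message.
        if any(self.message[k] != chunk[k] for k in self.message if k != "content"):
            self.message = dict(chunk)
        else:
            self.message["content"] += chunk["content"]


acc = ChunkAccumulator()
acc.accumulate({"role": "assistant", "type": "message", "format": None, "content": "Hel"})
acc.accumulate({"role": "assistant", "type": "message", "format": None, "content": "lo"})
print(acc.message["content"])  # Hello
```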
@@ -1,9 +1,10 @@
from rich.console import Console
from rich.markdown import Markdown


def print_markdown(markdown_text):
    console = Console()
    md = Markdown(markdown_text)
    print("")
    console.print(md)
    print("")
@@ -15,35 +15,64 @@ app = typer.Typer()


@app.command()
def run(
    server: bool = typer.Option(False, "--server", help="Run server"),
    server_host: str = typer.Option(
        "0.0.0.0",
        "--server-host",
        help="Specify the server host where the server will deploy",
    ),
    server_port: int = typer.Option(
        10001,
        "--server-port",
        help="Specify the server port where the server will deploy",
    ),
    tunnel_service: str = typer.Option(
        "ngrok", "--tunnel-service", help="Specify the tunnel service"
    ),
    expose: bool = typer.Option(False, "--expose", help="Expose server to internet"),
    client: bool = typer.Option(False, "--client", help="Run client"),
    server_url: str = typer.Option(
        None,
        "--server-url",
        help="Specify the server URL that the client should expect. Defaults to server-host and server-port",
    ),
    client_type: str = typer.Option(
        "auto", "--client-type", help="Specify the client type"
    ),
    llm_service: str = typer.Option(
        "litellm", "--llm-service", help="Specify the LLM service"
    ),
    model: str = typer.Option("gpt-4", "--model", help="Specify the model"),
    llm_supports_vision: bool = typer.Option(
        False,
        "--llm-supports-vision",
        help="Specify if the LLM service supports vision",
    ),
    llm_supports_functions: bool = typer.Option(
        False,
        "--llm-supports-functions",
        help="Specify if the LLM service supports functions",
    ),
    context_window: int = typer.Option(
        2048, "--context-window", help="Specify the context window size"
    ),
    max_tokens: int = typer.Option(
        4096, "--max-tokens", help="Specify the maximum number of tokens"
    ),
    temperature: float = typer.Option(
        0.8, "--temperature", help="Specify the temperature for generation"
    ),
    tts_service: str = typer.Option(
        "openai", "--tts-service", help="Specify the TTS service"
    ),
    stt_service: str = typer.Option(
        "openai", "--stt-service", help="Specify the STT service"
    ),
    local: bool = typer.Option(
        False, "--local", help="Use recommended local services for LLM, STT, and TTS"
    ),
    qr: bool = typer.Option(False, "--qr", help="Print the QR code for the server URL"),
):
    _run(
        server=server,
        server_host=server_host,

@@ -63,7 +92,7 @@ def run(
        tts_service=tts_service,
        stt_service=stt_service,
        local=local,
        qr=qr,
    )

@@ -86,7 +115,7 @@ def _run(
    tts_service: str = "openai",
    stt_service: str = "openai",
    local: bool = False,
    qr: bool = False,
):
    if local:
        tts_service = "piper"

@@ -130,7 +159,9 @@ def _run(
    server_thread.start()

    if expose:
        tunnel_thread = threading.Thread(
            target=create_tunnel, args=[tunnel_service, server_host, server_port, qr]
        )
        tunnel_thread.start()

    if client:
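The tail of the `_run` hunk shows `--local` overriding the TTS backend to `piper`; the corresponding STT and LLM overrides fall outside this diff, so this sketch covers only the TTS case that is visible above:

```python
def resolve_tts_service(tts_service, local=False):
    """Mirror the visible part of _run(): --local pins the TTS backend to piper."""
    return "piper" if local else tts_service


print(resolve_tts_service("openai"))              # openai
print(resolve_tts_service("openai", local=True))  # piper
```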