---
title: "Language Model"
description: "The LLM that powers your 01"
---
## Hosted Models
The default LLM for 01 is GPT-4-Turbo. You can find this setting in the default profile at `software/source/server/profiles/default.py`.
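
Profiles are plain Python files that configure the `interpreter` object (the TTS examples below use the same pattern). A minimal sketch of the model setting, assuming Open Interpreter's `interpreter.llm` API; check your copy of `default.py` for the exact line:

```python
# Sketch: select a hosted model in your profile
# (attribute path assumed from Open Interpreter's configuration API)
interpreter.llm.model = "gpt-4-turbo"
```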
The fast profile uses Llama3-8b served by Groq. You can find it at `software/source/server/profiles/fast.py`.
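
By analogy, the fast profile points the same setting at Groq. A sketch assuming LiteLLM-style provider/model naming; the exact string in `fast.py` may differ:

```python
# Sketch: use Groq-hosted Llama3-8b
# (model string is an assumption; verify against fast.py)
interpreter.llm.model = "groq/llama3-8b-8192"
```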
## Local Models
You can use local models to power 01.

Using the local profile launches the Local Explorer, where you can select your inference provider and model. The default options include Llamafile, Jan, Ollama, and LM Studio.
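
If you would rather skip the Local Explorer and pin a local provider in your profile, the configuration looks roughly like the sketch below. The Ollama model name and API base are illustrative assumptions, not the contents of any shipped profile:

```python
# Sketch: point the interpreter at a model served locally by Ollama
# (model name and port are illustrative; adjust to what you are running)
interpreter.llm.model = "ollama/llama3"
interpreter.llm.api_base = "http://localhost:11434"
```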
---
title: "Text To Speech"
description: "The voice of 01"
---
## Local TTS
For local TTS, 01 uses Coqui.
```python
# Set your profile with a local TTS service
interpreter.tts = "coqui"
```
## Hosted TTS
01 supports OpenAI and ElevenLabs for hosted TTS.
```python
# Set your profile with a hosted TTS service
interpreter.tts = "elevenlabs"
```
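
To use OpenAI instead, the same setting presumably takes an `"openai"` identifier; that value is an assumption by analogy with the example above, so check your profile for the exact string:

```python
# Set your profile with OpenAI's hosted TTS
# ("openai" is assumed by analogy with the ElevenLabs example)
interpreter.tts = "openai"
```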
## Other Models
More instructions coming soon!