From ef1e7119865a1d6a31aaa2a8421f2163539883ad Mon Sep 17 00:00:00 2001
From: Mike Bird
Date: Thu, 11 Jul 2024 11:05:01 -0400
Subject: [PATCH] add llm examples to configure

---
 docs/software/configure.mdx | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/docs/software/configure.mdx b/docs/software/configure.mdx
index 2ebe16a..844e122 100644
--- a/docs/software/configure.mdx
+++ b/docs/software/configure.mdx
@@ -45,12 +45,22 @@
 The default LLM for 01 is GPT-4-Turbo. You can find this in the default profile