Our goal is to power a billion devices with the 01OS over the next 10 years. The Cambrian explosion of AI devices.
We can do that with your help. Help extend the 01OS to run on new hardware, to connect with new peripherals like GPS and cameras, and to add new locally running language models to unlock use cases for this technology that no one has even imagined yet.
In the coming months, we're going to release:
- [ ] An open-source language model for computer control
When the user tells you about a set of tasks, you should intelligently order tasks, batch similar tasks, and break down large tasks into smaller tasks (for this, you should consult the user and get their permission to break it down). Your goal is to manage the task list as intelligently as possible, to make the user as efficient and non-overwhelmed as possible. They will require a lot of encouragement, support, and kindness. Don't say too much about what's ahead of them — just try to focus them on each step at a time.
To do this, schedule a reminder based on estimated completion time using the function `schedule(message="Your message here.", start="8am")`, WHICH HAS ALREADY BEEN IMPORTED. YOU DON'T NEED TO IMPORT THE `schedule` FUNCTION. IT IS AVAILABLE. You'll receive the message at the time you scheduled it. If the user says to monitor something, simply schedule it with an interval of a duration that makes sense for the problem by specifying an interval, like this: `schedule(message="Your message here.", interval="5m")`
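For illustration, a `schedule` helper like the one the prompt describes could be sketched with `threading.Timer`. This is a hypothetical sketch, not the implementation 01 ships: it parses only duration strings such as `"10s"` or `"5m"` (not clock times like `"8am"`), and the `deliver` callback stands in for however the runtime pushes messages back to the model.

```python
import threading


def parse_duration(text):
    """Convert a short duration string like '5m' or '30s' to seconds."""
    units = {"s": 1, "m": 60, "h": 3600}
    return int(text[:-1]) * units[text[-1]]


def schedule(message, start=None, interval=None, deliver=print):
    """Deliver `message` once after `start`, or repeatedly every `interval`.

    Hypothetical sketch: only duration strings are accepted, and
    `deliver` is a stand-in for the real message channel.
    """
    delay = parse_duration(interval if interval is not None else start)

    def fire():
        deliver(message)
        if interval is not None:
            # Re-arm the timer so the message repeats on the interval.
            nxt = threading.Timer(delay, fire)
            nxt.daemon = True
            nxt.start()

    timer = threading.Timer(delay, fire)
    timer.daemon = True  # don't keep the process alive just for reminders
    timer.start()
    return timer
```

Both call shapes from the prompt then work: `schedule(message="Stand up!", start="30s")` fires once, while `schedule(message="Check the build.", interval="5m")` keeps firing until the process exits.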
Try multiple methods before saying the task is impossible. **You can do it!**
# elif any(keyword in message for keyword in ['network', 'IP', 'internet', 'LAN', 'WAN', 'router', 'switch']) and "networkStatusForFlags" not in message:
# If there are no downloaded models, prompt them to download a model and try again
if not names:
    time.sleep(1)
    interpreter.display_message(
        "\nYou don't have any Ollama models downloaded. To download a new model, run `ollama run <model-name>`, then start a new 01 session. \n\n For a full list of downloadable models, check out [https://ollama.com/library](https://ollama.com/library) \n"
    )
    print("Please download a model then try again\n")
    time.sleep(2)
    sys.exit(1)
# If there are models, prompt them to select one
else:
    time.sleep(1)
    interpreter.display_message(
        f"**{len(names)} Ollama model{'s' if len(names) != 1 else ''} found.** To download a new model, run `ollama run <model-name>`, then start a new 01 session. \n\n For a full list of downloadable models, check out [https://ollama.com/library](https://ollama.com/library) \n"
    )
    # Create a new inquirer selection from the names
    name_question = [
        inquirer.List('name', message="Select a downloaded Ollama model", choices=names),
    ]
print("Ollama is not installed or not recognized as a command.")
time.sleep(1)
interpreter.display_message(
    "\nPlease visit [https://ollama.com/](https://ollama.com/) to download Ollama and try again\n"
)
time.sleep(2)
sys.exit(1)
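The model-detection logic above can be sketched end to end. This is a minimal, hypothetical version, assuming `ollama list` prints a header row followed by one row per model with the name in the first column; the parsing is split into its own function so it can be exercised without Ollama installed.

```python
import subprocess


def parse_model_names(listing):
    """Extract model names from `ollama list` output.

    Assumes a header row (NAME  ID  SIZE  MODIFIED) followed by one
    row per model, with the model name in the first column.
    """
    lines = listing.strip().splitlines()
    return [line.split()[0] for line in lines[1:] if line.strip()]


def list_ollama_models():
    """Return downloaded Ollama model names, or None if Ollama is unavailable."""
    try:
        result = subprocess.run(
            ["ollama", "list"], capture_output=True, text=True, check=True
        )
    except (FileNotFoundError, subprocess.CalledProcessError):
        return None  # not installed, not on PATH, or the command failed
    return parse_model_names(result.stdout)
```

A `None` return maps onto the "Ollama is not installed" branch above, an empty list onto the "no models downloaded" branch, and a non-empty list feeds the inquirer selection.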
# elif selected_model == "Jan":
# interpreter.display_message(
# """
# 3. Click the **Local API Server** button in the bottom left, then click **Start Server**.
# Once the server is running, enter the id of the model below, then you can begin your conversation below.
# """
# interpreter.llm.max_tokens = 1000
# interpreter.llm.context_window = 3000
# time.sleep(1)
# Prompt the user to enter the name of the model running on Jan
# model_name_question = [
# inquirer.Text('jan_model_name', message="Enter the id of the model you have running on Jan"),
# interpreter.llm.model = ""
# interpreter.display_message(f"\nUsing Jan model: `{jan_model_name}` \n")
# time.sleep(1)
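Once a Jan model id is collected, wiring it into the interpreter amounts to pointing the LLM client at Jan's OpenAI-compatible local server. A configuration sketch, assuming the Local API Server is running on Jan's default port and that `jan_model_name` holds the id entered at the prompt above:

```python
# Hypothetical configuration sketch (assumes Jan's Local API Server is
# running on its default port and `jan_model_name` was entered above):
interpreter.llm.api_base = "http://localhost:1337/v1"
interpreter.llm.api_key = "dummy"  # the local server does not validate keys
interpreter.llm.model = jan_model_name
```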
# Set the system message to a minimal version for all local models.
# Set offline for all local models
interpreter.offline = True
interpreter.system_message = """You are the 01, a screenless executive assistant that can complete any task by writing and executing code on the user's machine. Just write a markdown code block! The user has given you full and complete permission.