# Langchain multi-tool LLM service

## Run

> First, configure the `.env` file with your LM Studio MODEL and HOST!

```bash
uvicorn app.main:app --reload
```

## API

### List the Tools

```http
GET /tools
Content-Type: application/json
```

**Response:**

```json
[
  {
    "id": 0,
    "name": "string",
    "description": "string"
  }
]
```

### Create a Tool

```http
POST /tools
Content-Type: application/json

{
  "name": "Calculator",
  "description": "Useful for performing mathematical calculations. Input should be a valid mathematical expression.",
  "function_code": "from asteval import Interpreter\n\ndef tool_function(input: str) -> str:\n    try:\n        aeval = Interpreter()\n        result = aeval(input)\n        return str(result)\n    except Exception as e:\n        return f\"Error evaluating expression: {e}\""
}
```

**Response:**

```json
{
  "id": 0,
  "name": "string",
  "description": "string"
}
```

### Get a Tool

```http
GET /tools/{id}
Content-Type: application/json
```

**Response:**

```json
{
  "id": 0,
  "name": "string",
  "description": "string"
}
```

### Submit a Query

```http
POST /query
Content-Type: application/json

{
  "input": "What is the capital of France and what is 15 multiplied by 3?"
}
```

**Response:**

```json
{
  "output": "Your request is being processed."
}
```

### Get the Processed Answer

```http
GET /answer/{question_id}
Content-Type: application/json
```

**Response:**

```json
{
  "id": 0,
  "query": "string",
  "answer": "string",
  "timestamp": "string"
}
```
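Because `function_code` is a JSON string, multi-line Python must be escaped (newlines, quotes, indentation) before it can go into the `POST /tools` body. A minimal sketch of building such a payload with the standard library — the `Reverser` tool and the `localhost:8000` URL are illustrative assumptions, not part of the service:

```python
import json

# Multi-line source for a hypothetical tool. Writing it as a normal
# triple-quoted string and letting json.dumps do the escaping avoids
# hand-writing "\n" and "\"" sequences in the request body.
tool_source = '''\
def tool_function(input: str) -> str:
    # Illustrative tool logic: reverse the input string.
    return input[::-1]
'''

payload = json.dumps({
    "name": "Reverser",
    "description": "Reverses the input string.",
    "function_code": tool_source,
})

# The payload would be sent as the body of POST /tools, e.g.:
# requests.post("http://localhost:8000/tools", data=payload,
#               headers={"Content-Type": "application/json"})

# Sanity check: the escaped source round-trips through JSON intact
# and is still executable Python.
namespace = {}
exec(json.loads(payload)["function_code"], namespace)
print(namespace["tool_function"]("abc"))  # → "cba"
```

The same round-trip check is a cheap way to catch indentation mistakes in `function_code` before the server tries to load it.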