## 🐍 With pip

`libre-chat` is tested on Linux and macOS, and should work on Windows under WSL.
**Production deployment**

When deploying in production, it is recommended to use Docker, or Gunicorn directly, to handle many requests. The CLI is mainly intended for local testing and for building vectorstores; see the sketch below for a Gunicorn setup.
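A minimal Gunicorn sketch, assuming you expose the `app` object in a `main.py` (built as in the Python example further down, minus the `uvicorn.run` call); the module path, worker count, and port here are illustrative:

```bash
# Serve the FastAPI app with 4 uvicorn workers (adjust to your hardware)
gunicorn main:app -w 4 -k uvicorn.workers.UvicornWorker -b 0.0.0.0:8000
```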
### 📦 Install

Install from PyPI with `pipx` or `pip`:
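```bash
pip install libre-chat
```

Or, for an isolated install:

```bash
pipx install libre-chat
```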
**Installing on Windows**

We recommend using WSL or Docker. Otherwise, you can install with an extra dependency:
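A sketch of what that looks like; the extra's name here is a guess, so check the project's `pyproject.toml` for the actual extra:

```bash
# "windows" is a hypothetical extra name for illustration
pip install "libre-chat[windows]"
```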
Note there are some issues with the `UnstructuredEmailLoader` on Windows. It uses `unstructured`, which uses `python-magic`, which fails due to a `ctypes` import.
### ⌨️ Use as a command-line interface

You can easily start a new chat web service, including UI and API, from your terminal. If no arguments are provided, it will try to parse a `chat.yml` file in the current directory, or fall back to the default configuration:
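```bash
libre-chat start
```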
Provide a specific config file:
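The path below is illustrative; point it at your own YAML config:

```bash
libre-chat start path/to/chat.yml
```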
Re-build the vectorstore:
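A sketch with illustrative paths; run `libre-chat build --help` to confirm the exact flags:

```bash
libre-chat build --vector vectorstore/db_faiss --documents documents
```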
Get a full rundown of the available options with the usual:
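```bash
libre-chat --help
```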
### 🐍 Use in Python scripts

Alternatively, you can use this package in Python scripts:
```python
import logging

import uvicorn
from libre_chat import ChatConf, ChatEndpoint, Llm

logging.basicConfig(level=logging.getLevelName("INFO"))

# Configure the LLM: path to a local GGUF model file;
# vector_path=None means plain chat, without documents QA
conf = ChatConf(
    model_path="./models/mixtral-8x7b-instruct-v0.1.Q2_K.gguf",
    vector_path=None,
)
llm = Llm(conf=conf)
print(llm.query("What is the capital of the Netherlands?"))

# Create and deploy a FastAPI app based on your LLM
app = ChatEndpoint(llm=llm, conf=conf)
uvicorn.run(app)
```
Check out the Code reference for more details on the available classes.