This project provides:
- BookStack page export to PDF
- Vector store indexing (Chroma)
- A RAG chatbot API (OpenAI-compatible endpoints)
- An Open WebUI interface connected to the RAG API
Requirements:
- Python 3.11+
- Docker + Docker Compose
- OpenAI API key with active billing/quota
- BookStack API token (for export only)
Create `.env` from `.env.example` and fill in the values:

```
OPENAI_API_KEY=sk-proj-your-api-key
MODEL_LLM=your_model
MODEL_EMBEDDING=your_embedding_model
BOOKSTACK_URL=your_url
BOOKSTACK_TOKEN_ID=your_bookstack_token_id
BOOKSTACK_TOKEN_SECRET=your_bookstack_token_secret
CHATBOT_API_BIND_IP=IP
CHATBOT_API_HOST_PORT=PORT
CHATBOT_API_INTERNAL_PORT=PORT
ANYTHINGLLM_BIND_IP=IP
ANYTHINGLLM_HOST_PORT=PORT
```

Create a virtual environment and install dependencies:

```
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```

Export BookStack pages to PDF:

```
python export_pages.py
```

PDF files are written to `exports/`.
Build the vector store:

```
python reload_vector_store.py
```

This indexes documents from `exports/` into `vector-store/`.
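Before embedding, documents are typically split into overlapping chunks. A minimal sketch of fixed-size chunking with overlap (the sizes below are illustrative defaults, not the project's actual settings):

```python
def chunk_text(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping chunks for embedding.

    Overlap preserves context across chunk boundaries so that a
    sentence cut in half is still fully contained in one chunk.
    """
    if size <= overlap:
        raise ValueError("size must be greater than overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks
```

The actual chunking strategy in `reload_vector_store.py` may differ (e.g. sentence-aware splitting); this only shows the general idea.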
Start the stack:

```
docker compose up --build
```

Access:
- AnythingLLM: http://localhost:${ANYTHINGLLM_HOST_PORT} (default: 3001)
- Chatbot API health: http://localhost:${CHATBOT_API_HOST_PORT}/health (default: 8000)
Open WebUI is configured to call the local chatbot backend through OpenAI-compatible routes:
- GET /v1/models
- POST /v1/chat/completions
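Because the routes are OpenAI-compatible, any OpenAI-style client can talk to the backend. A minimal smoke-test sketch using only the standard library; the base URL and model name are assumptions to substitute from your `.env`:

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000"  # http://localhost:${CHATBOT_API_HOST_PORT}


def build_chat_request(question: str, model: str = "your_model") -> dict:
    """Build an OpenAI-compatible chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": question}],
    }


def ask(question: str) -> str:
    """POST a question to the RAG backend and return the answer text."""
    req = urllib.request.Request(
        f"{BASE_URL}/v1/chat/completions",
        data=json.dumps(build_chat_request(question)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Standard OpenAI response shape: first choice, message content.
    return body["choices"][0]["message"]["content"]
```

The same payload shape is what Open WebUI sends under the hood.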
Files:
- docker-compose.anythingllm.secure.yml
- deploy/Caddyfile
This stack puts Caddy in front of AnythingLLM:
- HTTPS with automatic TLS certificates
- HTTP Basic Auth at proxy level
- AnythingLLM not exposed directly (internal only)
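A minimal `deploy/Caddyfile` along these lines would cover the three points above; the domain, user name, and upstream address are placeholders to adapt to your setup:

```
your-domain.example {
    # Using a real domain name here makes Caddy obtain TLS certificates automatically.
    basic_auth {
        # Generate the bcrypt hash with: caddy hash-password
        admin <bcrypt-hash>
    }
    # Forward to the AnythingLLM container on the internal Docker network only.
    reverse_proxy anythingllm:3001
}
```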
Steps:
- Create the env file
- Start the stack:

```
docker compose up
```
Security notes:
- Keep AnythingLLM built-in auth enabled (multi-user recommended for internet exposure).
- Disable public signup inside AnythingLLM unless explicitly needed.
- Keep `OPENAI_API_KEY` only in the server environment, never in frontend code.
Create a cron job to automatically update the vector store weekly:

```
CRON_TZ=Europe/Paris
0 7 * * 1 chatbot/reload_job.sh >> chatbot/reindex.log 2>&1
```
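The schedule `0 7 * * 1` with `CRON_TZ=Europe/Paris` means every Monday at 07:00 Paris time. As a sanity check, the next run can be computed with the standard library (this helper is illustrative, not part of the project):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo


def next_run(now: datetime) -> datetime:
    """Next Monday 07:00 Europe/Paris, matching '0 7 * * 1' with CRON_TZ."""
    paris = now.astimezone(ZoneInfo("Europe/Paris"))
    candidate = paris.replace(hour=7, minute=0, second=0, microsecond=0)
    days_ahead = (0 - candidate.weekday()) % 7  # Monday is weekday 0
    candidate += timedelta(days=days_ahead)
    if candidate <= paris:  # already past this week's slot
        candidate += timedelta(days=7)
    return candidate
```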
To use the project virtual environment in Jupyter notebooks:

```
pip install jupyter ipykernel
python -m ipykernel install --user --name chatbot-venv --display-name "Python (chatbot-venv)"
jupyter notebook
```