# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview
This is a Proxmox LXC provisioning system that deploys GDPR-compliant customer stacks on a Proxmox cluster. Each customer gets an isolated Debian 12 LXC containing Docker, PostgreSQL+pgvector, n8n, and PostgREST (Supabase-compatible API).
## Running the Installer

```bash
# Basic usage (on Proxmox host)
bash install.sh --storage local-zfs --bridge vmbr0 --ip dhcp --vlan 90

# With all options
bash install.sh \
  --storage local-zfs \
  --bridge vmbr0 \
  --ip dhcp \
  --vlan 90 \
  --base-domain userman.de \
  --n8n-owner-email admin@userman.de \
  --workflow-file RAGKI-BotPGVector.json \
  --debug

# Debug mode shows logs on stderr; normal mode outputs only JSON on stdout
DEBUG=1 bash install.sh ...

# Delete NGINX proxy for a customer
bash delete_nginx_proxy.sh --ctid <ctid> [--dry-run] [--debug]
```
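Because normal mode prints exactly one compact JSON blob on stdout, wrapper scripts can consume the installer's output directly. A minimal sketch, assuming the blob carries `hostname` and `ctid` fields (an assumption based on the credentials-file naming; the inline JSON stands in for a real run):

```bash
# Stand-in for: json=$(bash install.sh --storage local-zfs --bridge vmbr0 --ip dhcp)
json='{"hostname":"sb-1758000000","ctid":758000000}'

# Extract fields from the compact JSON blob (python3 used to avoid a jq dependency)
hostname=$(printf '%s' "$json" | python3 -c 'import json,sys; print(json.load(sys.stdin)["hostname"])')
ctid=$(printf '%s' "$json" | python3 -c 'import json,sys; print(json.load(sys.stdin)["ctid"])')

echo "CT ${ctid} is reachable as ${hostname}"
```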
## Architecture

### Script Flow (install.sh)

`install.sh` sources `libsupabase.sh` and executes 11 sequential steps:

- **Preflight** – validates Proxmox tools (`pct`, `pvesm`, `pveam`, `pvesh`), storage, and bridge, and downloads the Debian 12 template
- **CTID generation** – CTID = unix_timestamp - 1,000,000,000; hostname = `sb-<unix_timestamp>`
- **LXC creation** – unprivileged by default, features: `nesting=1,keyctl=1,fuse=1`
- **CT provisioning** – installs Docker + Docker Compose Plugin inside the CT via `pct exec`
- **Stack deploy** – generates secrets (passwords, JWT, encryption keys), writes `.env` + `docker-compose.yml` + SQL init scripts into the CT, runs `docker compose up`
- **n8n owner setup** – tries the CLI command first, falls back to the REST API `/rest/owner/setup`
- **RAG workflow setup** (`n8n_setup_rag_workflow` in `libsupabase.sh`) – logs into the n8n API, creates PostgreSQL + Ollama credentials, processes the workflow JSON (replaces credential IDs via Python), imports and activates the workflow
- **Workflow auto-reload** – copies `templates/reload-workflow.sh` + `templates/n8n-workflow-reload.service` into the CT; the systemd service re-imports the workflow on every LXC restart
- **NGINX proxy** – calls `setup_nginx_proxy.sh` to configure the OPNsense reverse proxy (upstream server → upstream → location → HTTP server → reconfigure)
- **JSON output** – compact JSON on the original stdout (fd 3); credentials also saved to `credentials/<hostname>.json`
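The CTID/hostname derivation from the CTID-generation step can be sketched as follows (illustrative shell, not the script's exact code):

```bash
# Timestamp-derived IDs: every install gets a unique CTID/hostname pair.
ts=$(date +%s)                 # current unix timestamp, e.g. 1758000000
ctid=$((ts - 1000000000))      # CTID = unix_timestamp - 1,000,000,000
hostname="sb-${ts}"            # hostname = sb-<unix_timestamp>
echo "CTID=${ctid} hostname=${hostname}"
```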
### Key Files

| File | Purpose |
|---|---|
| `install.sh` | Main orchestrator – argument parsing, LXC lifecycle, stack deployment |
| `libsupabase.sh` | Shared library – Proxmox helpers, password/JWT generators, n8n REST API functions |
| `setup_nginx_proxy.sh` | Creates OPNsense NGINX components (upstream server → upstream → location → HTTP server); auto-detects wildcard cert for userman.de; supports `--list-certificates` and `--test-connection` |
| `delete_nginx_proxy.sh` | Removes OPNsense NGINX components (HTTP server, location, upstream) by CTID |
| `templates/docker-compose.yml` | Reference template (the actual compose file is written inline by `install.sh`) |
| `templates/reload-workflow.sh` | Deployed into the CT; re-imports the n8n workflow on restart using saved credentials from `.env` |
| `templates/n8n-workflow-reload.service` | Systemd unit deployed into the CT |
| `sql/init_pgvector.sql` | Initializes the pgvector extension, `documents` table, `match_documents` function, PostgREST roles |
| `RAGKI-BotPGVector.json` | Default n8n workflow template (RAG chatbot + PDF upload form) |
| `credentials/<hostname>.json` | Per-customer credentials (generated at install time) |
| `logs/<hostname>.log` | Per-install log (temp name `install_<pid>.log`, renamed once the hostname is known) |
### Output / Logging Mode

- **Normal mode** (`DEBUG=0`): all logs go to the log file only; stdout (fd 3) is reserved for the final JSON blob
- **Debug mode** (`DEBUG=1`): logs on stderr via `tee`; JSON pretty-printed to stdout

The `die()` function outputs `{"error": "..."}` as JSON when not in debug mode.
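A minimal sketch of how such a `die()` helper can behave (the use of fd 3 and the exact message wording are assumptions beyond what is stated above; the real helper would exit rather than return):

```bash
DEBUG=0

die() {
  local msg="$1"
  if [ "${DEBUG:-0}" = "1" ]; then
    echo "FATAL: ${msg}" >&2            # debug mode: human-readable on stderr
  else
    printf '{"error": "%s"}\n' "$msg"   # normal mode: machine-readable JSON
  fi
  # the real helper would exit 1 here
}

die "storage not found"
```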
## Docker Stack Inside Each LXC

Located at `/opt/customer-stack/` inside the CT:

- **postgres** (`pgvector/pgvector:pg16`) – initialized from the `sql/` directory
- **postgrest** (`postgrest/postgrest:latest`) – REST API on port 3000 (configurable), JWT-secured
- **n8n** (`n8nio/n8n:latest`) – port 5678, backed by postgres, connected to Ollama at `192.168.45.3:11434`
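A schematic compose fragment for those three services (illustrative only; the real file is written inline by `install.sh` with secrets from `.env`, and variable names such as `POSTGRES_PASSWORD`/`JWT_SECRET` and the `authenticator` role are assumptions, not the project's actual values):

```yaml
services:
  postgres:
    image: pgvector/pgvector:pg16
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}   # generated at install time
    volumes:
      - ./sql:/docker-entrypoint-initdb.d:ro    # init scripts run on first start

  postgrest:
    image: postgrest/postgrest:latest
    environment:
      PGRST_DB_URI: postgres://authenticator:${POSTGRES_PASSWORD}@postgres:5432/postgres
      PGRST_JWT_SECRET: ${JWT_SECRET}           # JWT-secured REST API
    ports:
      - "3000:3000"

  n8n:
    image: n8nio/n8n:latest
    environment:
      DB_TYPE: postgresdb                       # n8n state lives in postgres
      DB_POSTGRESDB_HOST: postgres
    ports:
      - "5678:5678"
```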
## Infrastructure Assumptions

- Ollama server: `192.168.45.3:11434` (hardcoded)
- Docker registry mirror: `http://192.168.45.2:5000`
- APT proxy (optional): `http://192.168.45.2:3142`
- OPNsense NGINX at `192.168.45.1:4444` (for proxy setup/deletion)
- Default VLAN: 90
- Default domain: `userman.de` → FQDN: `sb-<unix_ts>.userman.de`
## n8n Workflow JSON Processing

When importing a workflow, a Python script replaces credential references:

- `postgres` credentials → newly created "PostgreSQL (local)" credential ID
- `ollamaApi` credentials → newly created "Ollama (local)" credential ID
- Fields removed before import: `id`, `versionId`, `meta`, `tags`, `active`, `pinData`
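That transformation can be sketched as shell driving an inline Python snippet (the heredoc pattern, the stand-in workflow JSON, and the `PG_CRED_ID`/`OLLAMA_CRED_ID` names are assumptions; only the field list and credential types come from the list above):

```bash
# Sketch: strip fields n8n rejects on import and remap credential IDs.
# PG_CRED_ID / OLLAMA_CRED_ID stand in for IDs returned by the n8n
# credentials API; the inline workflow JSON is a minimal stand-in, too.
patched=$(PG_CRED_ID=pg123 OLLAMA_CRED_ID=ol456 python3 - <<'PYEOF'
import json, os, sys

wf = json.loads("""
{"id": "old", "versionId": "v1", "meta": {}, "tags": [], "active": true,
 "pinData": {}, "name": "demo",
 "nodes": [{"credentials": {"postgres": {"id": "x"}, "ollamaApi": {"id": "y"}}}]}
""")

for field in ("id", "versionId", "meta", "tags", "active", "pinData"):
    wf.pop(field, None)                    # n8n assigns these itself on import

remap = {"postgres": os.environ["PG_CRED_ID"],
         "ollamaApi": os.environ["OLLAMA_CRED_ID"]}
for node in wf["nodes"]:
    for ctype, cred in node.get("credentials", {}).items():
        if ctype in remap:
            cred["id"] = remap[ctype]      # point at the freshly created credentials

json.dump(wf, sys.stdout)
PYEOF
)
echo "$patched"
```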