boti/CLAUDE.md
root e4a70fff83 Add setup_nginx_proxy.sh; update README and CLAUDE.md
Documents OPNsense NGINX proxy setup script (upstream server, upstream,
location, HTTP server creation via OPNsense API with wildcard cert
auto-detection). Marks reverse proxy automation as complete.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-07 19:49:40 +01:00


CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

Project Overview

This is a Proxmox LXC provisioning system that deploys GDPR-compliant customer stacks on a Proxmox cluster. Each customer gets an isolated Debian 12 LXC containing Docker, PostgreSQL+pgvector, n8n, and PostgREST (Supabase-compatible API).

Running the Installer

# Basic usage (on Proxmox host)
bash install.sh --storage local-zfs --bridge vmbr0 --ip dhcp --vlan 90

# With all options
bash install.sh \
  --storage local-zfs \
  --bridge vmbr0 \
  --ip dhcp \
  --vlan 90 \
  --base-domain userman.de \
  --n8n-owner-email admin@userman.de \
  --workflow-file RAGKI-BotPGVector.json \
  --debug

# Debug mode shows logs on stderr; normal mode outputs only JSON on stdout
DEBUG=1 bash install.sh ...

# Delete NGINX proxy for a customer
bash delete_nginx_proxy.sh --ctid <ctid> [--dry-run] [--debug]

Architecture

Script Flow (install.sh)

install.sh sources libsupabase.sh and executes 10 sequential steps:

  1. Preflight: validates Proxmox tools (pct, pvesm, pveam, pvesh), storage, and bridge; downloads the Debian 12 template
  2. CTID generation: CTID = unix_timestamp - 1,000,000,000; hostname = sb-<unix_timestamp>
  3. LXC creation: unprivileged by default, features: nesting=1,keyctl=1,fuse=1
  4. CT provisioning: installs Docker + the Docker Compose plugin inside the CT via pct exec
  5. Stack deploy: generates secrets (passwords, JWT, encryption keys), writes .env + docker-compose.yml + SQL init scripts into the CT, runs docker compose up
  6. n8n owner setup: tries the CLI command first, falls back to the REST API /rest/owner/setup
  7. RAG workflow setup (n8n_setup_rag_workflow in libsupabase.sh): logs into the n8n API, creates PostgreSQL + Ollama credentials, processes the workflow JSON (replaces credential IDs via Python), imports and activates the workflow
  8. Workflow auto-reload: copies templates/reload-workflow.sh + templates/n8n-workflow-reload.service into the CT; the systemd service re-imports the workflow on every LXC restart
  9. NGINX proxy: calls setup_nginx_proxy.sh to configure the OPNsense reverse proxy (upstream server → upstream → location → HTTP server → reconfigure)
  10. JSON output: compact JSON on the original stdout (fd 3); credentials are also saved to credentials/<hostname>.json
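Step 2's scheme is simple enough to sketch (the arithmetic matches the description above; variable names are mine):

```shell
# CTID/hostname derivation as described in step 2 (sketch, not the repo's exact code)
ts=$(date +%s)                # current unix timestamp
ctid=$((ts - 1000000000))     # CTID = unix_timestamp - 1,000,000,000
ct_hostname="sb-${ts}"        # hostname = sb-<unix_timestamp>
echo "CTID=${ctid} hostname=${ct_hostname}"
```

Deriving the CTID from the timestamp keeps it unique per install while staying within Proxmox's valid CTID range.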

Key Files

| File | Purpose |
|------|---------|
| install.sh | Main orchestrator: argument parsing, LXC lifecycle, stack deployment |
| libsupabase.sh | Shared library: Proxmox helpers, password/JWT generators, n8n REST API functions |
| setup_nginx_proxy.sh | Creates OPNsense NGINX components (upstream server → upstream → location → HTTP server); auto-detects the wildcard cert for userman.de; supports --list-certificates and --test-connection |
| delete_nginx_proxy.sh | Removes OPNsense NGINX components (HTTP server, location, upstream) by CTID |
| templates/docker-compose.yml | Reference template (the actual compose file is written inline by install.sh) |
| templates/reload-workflow.sh | Deployed into the CT; re-imports the n8n workflow on restart using saved credentials from .env |
| templates/n8n-workflow-reload.service | Systemd unit deployed into the CT |
| sql/init_pgvector.sql | Initializes the pgvector extension, documents table, match_documents function, and PostgREST roles |
| RAGKI-BotPGVector.json | Default n8n workflow template (RAG chatbot + PDF upload form) |
| credentials/<hostname>.json | Per-customer credentials (generated at install time) |
| logs/<hostname>.log | Per-install log (temporary name install_<pid>.log, renamed once the hostname is known) |

Output / Logging Mode

  • Normal mode (DEBUG=0): All logs go to file only; stdout (fd 3) is reserved for the final JSON blob
  • Debug mode (DEBUG=1): Logs on stderr via tee; JSON pretty-printed to stdout

The die() function outputs {"error": "..."} as JSON when not in debug mode.
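A minimal sketch of that behavior (my reconstruction, not the repo's exact code; install.sh is assumed to have duplicated the original stdout to fd 3 beforehand):

```shell
exec 3>&1   # stand-in for install.sh's fd-3 setup: fd 3 = original stdout

# Sketch of die(): JSON error on fd 3 in normal mode, plain message on
# stderr in debug mode (reconstructed from the description above)
die() {
  if [ "${DEBUG:-0}" = "1" ]; then
    echo "ERROR: $*" >&2
  else
    printf '{"error": "%s"}\n' "$*" >&3
  fi
  exit 1
}
```

Keeping errors as JSON in normal mode means callers that parse stdout never see a non-JSON line, even on failure.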

Docker Stack Inside Each LXC

Located at /opt/customer-stack/ inside the CT:

  • postgres (pgvector/pgvector:pg16): initialized from the sql/ directory
  • postgrest (postgrest/postgrest:latest): REST API on port 3000 (configurable), JWT-secured
  • n8n (n8nio/n8n:latest): port 5678, backed by postgres, connected to Ollama at 192.168.45.3:11434

Infrastructure Assumptions

  • Ollama server: 192.168.45.3:11434 (hardcoded)
  • Docker registry mirror: http://192.168.45.2:5000
  • APT proxy (optional): http://192.168.45.2:3142
  • OPNsense NGINX at 192.168.45.1:4444 (for proxy setup/deletion)
  • Default VLAN: 90
  • Default domain: userman.de → FQDN: sb-<unix_ts>.userman.de

n8n Workflow JSON Processing

When importing a workflow, a Python script replaces credential references:

  • postgres credentials → newly created "PostgreSQL (local)" credential ID
  • ollamaApi credentials → newly created "Ollama (local)" credential ID
  • Fields removed before import: id, versionId, meta, tags, active, pinData
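A sketch of that step, under the assumption that n8n stores per-node credential references under a credentials key (the sample JSON, file names, and variable names are illustrative, not the repo's exact script):

```shell
PG_CRED_ID="pg-new"          # illustrative ID of the new "PostgreSQL (local)" credential
OLLAMA_CRED_ID="ollama-new"  # illustrative ID of the new "Ollama (local)" credential

# Tiny stand-in for the real workflow JSON (structure is an assumption)
cat > workflow.json <<'JSON'
{"id": "1", "versionId": "v1", "meta": {}, "tags": [], "active": true,
 "pinData": {},
 "nodes": [
   {"name": "Vector Store", "credentials": {"postgres": {"id": "old-pg"}}},
   {"name": "Chat Model", "credentials": {"ollamaApi": {"id": "old-ol"}}}]}
JSON

# Rewrite credential IDs and strip read-only fields before import
python3 - workflow.json "$PG_CRED_ID" "$OLLAMA_CRED_ID" > workflow.import.json <<'PY'
import json, sys
path, pg_id, ollama_id = sys.argv[1], sys.argv[2], sys.argv[3]
wf = json.load(open(path))
for node in wf.get("nodes", []):
    creds = node.get("credentials", {})
    if "postgres" in creds:
        creds["postgres"]["id"] = pg_id
    if "ollamaApi" in creds:
        creds["ollamaApi"]["id"] = ollama_id
for field in ("id", "versionId", "meta", "tags", "active", "pinData"):
    wf.pop(field, None)
json.dump(wf, sys.stdout)
PY
```

Stripping the read-only fields lets the import endpoint treat the payload as a fresh workflow rather than an update to an existing one.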