feat: Auto-import n8n RAG workflow with credentials
- Fixed n8n API login: use 'emailOrLdapLoginId' instead of 'email'
- Added n8n_setup_rag_workflow() function to libsupabase.sh
- Creates PostgreSQL and Ollama credentials automatically
- Imports RAG KI-Bot workflow with correct credential references
- Removed tags from workflow JSON (API validation issue)
- Step 10 now fully automated: credentials + workflow import

Tested successfully on container sb-1769173910
@@ -48,6 +48,12 @@ bash delete_nginx_proxy.sh --ctid 768736636

# With debug output
bash delete_nginx_proxy.sh --debug --ctid 768736636

# Dry run (shows what would be deleted, without deleting)
bash delete_nginx_proxy.sh --dry-run --ctid 768736636

# With explicit FQDN
bash delete_nginx_proxy.sh --ctid 768736636 --fqdn sb-1768736636.userman.de
```

### Helper Functions
@@ -160,6 +166,52 @@ export DEBUG="1"
bash setup_nginx_proxy.sh --ctid 768736636 ...
```

## Delete Script Parameters

### Required Parameters

| Parameter | Description | Example |
|-----------|-------------|---------|
| `--ctid <id>` | Container ID (used to find the components) | `768736636` |

### Optional Parameters

| Parameter | Description | Default |
|-----------|-------------|---------|
| `--fqdn <domain>` | FQDN used to find the HTTP server | auto-detect |
| `--opnsense-host <ip>` | OPNsense IP or hostname | `192.168.45.1` |
| `--opnsense-port <port>` | OPNsense WebUI/API port | `4444` |
| `--dry-run` | Shows what would be deleted, without deleting | off |
| `--debug` | Enable debug mode | off |
### Delete Script Output

```json
{
  "success": true,
  "dry_run": false,
  "ctid": "768736636",
  "deleted_count": 4,
  "failed_count": 0,
  "components": {
    "http_server": "deleted",
    "location": "deleted",
    "upstream": "deleted",
    "upstream_server": "deleted"
  },
  "reconfigure": "ok"
}
```
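Callers can pull fields out of this JSON the same way the script itself parses API responses, with a `python3` one-liner. A small sketch using the sample output above:

```shell
result='{"success": true, "dry_run": false, "ctid": "768736636", "deleted_count": 4, "failed_count": 0}'
# extract a single field; mirrors the parsing style used inside the script
deleted=$(echo "$result" | python3 -c "import json,sys; print(json.load(sys.stdin).get('deleted_count', 0))")
echo "deleted=$deleted"   # → deleted=4
```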
### Deletion Order

The script deletes the components in the correct order (outside-in):

1. **HTTP Server** - virtual host
2. **Location** - URL path configuration
3. **Upstream** - load balancer group
4. **Upstream Server** - backend server
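The outside-in order maps one-to-one onto the plugin's delete endpoints; as a sketch (the `<uuid>` placeholder stands for the UUID found in the search step):

```shell
# outside-in deletion order, mirroring steps 1-4 of the script
order=(delHttpServer delLocation delUpstream delUpstreamServer)
for endpoint in "${order[@]}"; do
  echo "POST /api/nginx/settings/${endpoint}/<uuid>"
done
```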
## Troubleshooting

### API Connection Errors

@@ -188,6 +240,8 @@ The API user requires the following permissions in OPNsense:

## Version History

### setup_nginx_proxy.sh

| Version | Changes |
|---------|---------|
| 1.0.8 | Search HTTP server by servername instead of description |

@@ -197,3 +251,10 @@ The API user requires the following permissions in OPNsense:

| 1.0.4 | Correct API format (httpserver instead of http_server) |
| 1.0.3 | Simplified HTTP server configuration |
| 1.0.0 | Initial version |

### delete_nginx_proxy.sh

| Version | Changes |
|---------|---------|
| 1.0.1 | Fix: arithmetic error in counter increment resolved |
| 1.0.0 | Initial version |
TODO.md (193 changed lines)
@@ -1,152 +1,63 @@
# n8n Customer Provisioning System

## Status: ✅ Phase 1 Complete, Phase 2 In Progress

---

# Phase 1: Debug Option Implementation

## Status: ✅ Completed (v2 - with log file)

### Steps:

- [x] **Adapt libsupabase.sh**
  - [x] Add `DEBUG="${DEBUG:-0}"` variable
  - [x] Emit `info()` only when DEBUG=1
  - [x] Emit `warn()` only when DEBUG=1
  - [x] Adapt `die()`: JSON error on fd 3 when DEBUG=0
  - [x] Adapt `setup_traps()` for JSON error output on fd 3

- [x] **Adapt install.sh**
  - [x] Set `DEBUG=0` as default
  - [x] Add `--debug` option to argument parsing
  - [x] Remove `echo off` line
  - [x] Update usage text
  - [x] Create log directory (`logs/`)
  - [x] Redirect all output to the log file
  - [x] Name the log file after the container hostname
  - [x] Include the log file path in the JSON output

### Expected Behavior:

**Without `--debug` (normal mode):**
- All output (apt, docker, etc.) → log file `logs/<hostname>.log`
- Only JSON on stdout
- On errors: JSON with `{"error": "..."}`

**With `--debug`:**
- All output on stderr AND in the log file
- JSON on stdout (also in the log)

### Changes:

**libsupabase.sh:**
- `DEBUG="${DEBUG:-0}"` variable
- `info()` and `warn()` only when `DEBUG=1`
- `die()` and `setup_traps()` emit JSON on fd 3 (if available)

**install.sh:**
- Log directory: `${SCRIPT_DIR}/logs/`
- Temporary log file during installation
- Renamed to `<hostname>.log` once the hostname is generated
- fd 3 reserved for JSON output
- JSON contains the `"log_file"` path
- `--debug` option for console output
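The fd 3 mechanism described above can be modelled in a few lines. This is a simplified sketch, not the actual install.sh code; the log path is a temp file here:

```shell
#!/usr/bin/env bash
set -euo pipefail
LOG=$(mktemp)            # stand-in for logs/<hostname>.log
exec 3>&1                # fd 3 = original stdout, reserved for the JSON result
exec 1>>"$LOG" 2>&1      # everything else goes to the log file
echo "noisy installer output"                           # lands in the log only
printf '{"ok": true, "log_file": "%s"}\n' "$LOG" >&3    # JSON reaches the caller
exec 1>&3                # restore stdout
```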
### The JSON output now contains:

```json
{
  "ctid": ...,
  "hostname": "sb-...",
  ...
  "log_file": "/path/to/logs/sb-....log"
}
```

---
# Phase 2: NGINX Reverse Proxy Setup
# n8n Workflow Import - Implementation Plan

## Status: 🔄 In Progress

### New Script: `setup_nginx_proxy.sh`

---

Automatically configures an NGINX reverse proxy on OPNsense for new n8n instances.

### Usage:

```bash
# With data from the installer output:
bash setup_nginx_proxy.sh \
  --ctid 768736636 \
  --hostname sb-1768736636 \
  --fqdn sb-1768736636.userman.de \
  --backend-ip 192.168.45.135 \
  --backend-port 5678

# With debug output:
bash setup_nginx_proxy.sh --debug \
  --ctid 768736636 \
  --hostname sb-1768736636 \
  --fqdn sb-1768736636.userman.de \
  --backend-ip 192.168.45.135
```

### What the Script Does:

1. Create **Upstream Server** (backend server with IP:port)
2. Create **Upstream** (load balancer group)
3. Create **Location** (URL path configuration with WebSocket support)
4. Create **HTTP Server** (virtual host with HTTPS + ACME/Let's Encrypt)
5. **Reload NGINX** (apply the configuration)

### API Endpoints (OPNsense NGINX Plugin):

- `POST /api/nginx/settings/addUpstreamServer`
- `POST /api/nginx/settings/addUpstream`
- `POST /api/nginx/settings/addLocation`
- `POST /api/nginx/settings/addHttpServer`
- `POST /api/nginx/service/reconfigure`
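A call against one of these endpoints can be sketched with curl, using the API key/secret pair as HTTP basic auth, mirroring the `api_request` helper in the scripts. The payload field names below are an assumption for illustration, not taken from the OPNsense documentation, and the request itself is left commented out:

```shell
OPNSENSE_API_KEY="your-key"        # placeholder credentials
OPNSENSE_API_SECRET="your-secret"
auth="${OPNSENSE_API_KEY}:${OPNSENSE_API_SECRET}"
# hypothetical payload shape for addUpstreamServer
data='{"upstreamserver":{"description":"768736636","server":"192.168.45.135","port":"5678"}}'
# curl -s -k -u "$auth" -H 'Content-Type: application/json' \
#   -d "$data" "https://192.168.45.1:4444/api/nginx/settings/addUpstreamServer"
echo "$data" | python3 -m json.tool >/dev/null && echo "payload is valid JSON"
```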
### JSON Output:

```json
{
  "success": true,
  "ctid": "768736636",
  "fqdn": "sb-1768736636.userman.de",
  "backend": "192.168.45.135:5678",
  "nginx": {
    "upstream_server_uuid": "...",
    "upstream_uuid": "...",
    "location_uuid": "...",
    "http_server_uuid": "..."
  }
}
```

### Still to Test:

- [ ] API connection to OPNsense
- [ ] Create upstream server
- [ ] Create upstream
- [ ] Create location
- [ ] Create HTTP server
- [ ] Apply NGINX configuration
- [ ] SSL certificate (Let's Encrypt/ACME)
## Problem

The n8n workflow is not imported and activated automatically. The previous implementation in Step 10 does not work correctly because:

1. The `pct_exec` output is not returned in a form suitable for JSON parsing
2. Credentials must be created first, and their IDs then referenced in the workflow
3. The workflow must be activated after the import

---
# Phase 3: Integration into n8n Workflow (Planned)
## Solution

### Workflow Extension:
### Phase 1: libsupabase.sh - New n8n API Functions

1. `install.sh` → create LXC + n8n
2. `setup_nginx_proxy.sh` → configure reverse proxy
3. Email to the customer with access credentials
- [x] `n8n_api_login()` - log in and store the cookie
- [x] `n8n_api_create_postgres_credential()` - create PostgreSQL credential
- [x] `n8n_api_create_ollama_credential()` - create Ollama credential
- [x] `n8n_api_import_workflow()` - import the workflow
- [x] `n8n_api_activate_workflow()` - activate the workflow
- [x] `n8n_generate_workflow_json()` - generate the workflow JSON with credential IDs

### n8n Workflow Nodes:
### Phase 2: install.sh - Rework Step 10

```
[Webhook Trigger] → [SSH: install.sh] → [Parse JSON] → [SSH: setup_nginx_proxy.sh] → [Parse JSON] → [Send Email]
```
- [x] Perform the login
- [x] Create the PostgreSQL credential and store its ID
- [x] Create the Ollama credential and store its ID
- [x] Generate the workflow JSON with the correct credential IDs
- [x] Import the workflow
- [x] Activate the workflow

### Phase 3: Testing

- [ ] Create a new container with `bash install.sh --debug`
- [ ] Check that the workflow was imported
- [ ] Check that the workflow is active
- [ ] Check that the credentials are linked correctly

### Phase 4: Git Push

- [ ] Commit the changes
- [ ] Push to the repository

---

## Technical Details

### n8n REST API Endpoints

- `POST /rest/login` - login (sets the session cookie)
- `POST /rest/credentials` - create a credential
- `POST /rest/workflows` - import a workflow
- `PATCH /rest/workflows/{id}` - activate a workflow
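As the commit message notes, `/rest/login` expects `emailOrLdapLoginId` rather than `email`. A login call can be sketched as follows; the instance URL and credentials are placeholders, and the actual request is left commented out since it needs a running n8n:

```shell
N8N_URL="http://localhost:5678"   # placeholder instance URL
# build the login payload with the field name n8n expects
payload=$(printf '{"emailOrLdapLoginId": "%s", "password": "%s"}' "owner@example.com" "secret")
# curl -s -c /tmp/n8n-cookies.txt -X POST "${N8N_URL}/rest/login" \
#   -H 'Content-Type: application/json' -d "$payload"
echo "$payload"
```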
### Credential Types

- `postgres` - PostgreSQL database
- `ollamaApi` - Ollama API

---
delete_nginx_proxy.sh (new executable file, 389 lines)
@@ -0,0 +1,389 @@
#!/usr/bin/env bash
set -Eeuo pipefail

# =============================================================================
# OPNsense NGINX Reverse Proxy Delete Script
# =============================================================================
# This script deletes an NGINX reverse proxy on OPNsense
# for an n8n instance via the OPNsense API.
# =============================================================================

SCRIPT_VERSION="1.0.2"

# Debug mode: 0 = JSON only, 1 = logs on stderr
DEBUG="${DEBUG:-0}"
export DEBUG

# Logging functions
log_ts() { date "+[%F %T]"; }
info() { [[ "$DEBUG" == "1" ]] && echo "$(log_ts) INFO: $*" >&2; return 0; }
warn() { [[ "$DEBUG" == "1" ]] && echo "$(log_ts) WARN: $*" >&2; return 0; }
die() {
  if [[ "$DEBUG" == "1" ]]; then
    echo "$(log_ts) ERROR: $*" >&2
  else
    echo "{\"error\": \"$*\"}"
  fi
  exit 1
}
# =============================================================================
# Default Configuration
# =============================================================================
OPNSENSE_HOST="${OPNSENSE_HOST:-192.168.45.1}"
OPNSENSE_PORT="${OPNSENSE_PORT:-4444}"
OPNSENSE_API_KEY="${OPNSENSE_API_KEY:-cUUs80IDkQelMJVgAVK2oUoDHrQf+cQPwXoPKNd3KDIgiCiEyEfMq38UTXeY5/VO/yWtCC7k9Y9kJ0Pn}"
OPNSENSE_API_SECRET="${OPNSENSE_API_SECRET:-2egxxFYCAUjBDp0OrgbJO3NBZmR4jpDm028jeS8Nq8OtCGu/0lAxt4YXWXbdZjcFVMS0Nrhru1I2R1si}"

# =============================================================================
# Usage
# =============================================================================
usage() {
  cat >&2 <<'EOF'
Usage:
  bash delete_nginx_proxy.sh [options]

Required options:
  --ctid <id>             Container ID (used to find components by description)

Optional:
  --fqdn <domain>         Full domain name (to find HTTP Server by servername)
  --opnsense-host <ip>    OPNsense IP or hostname (default: 192.168.45.1)
  --opnsense-port <port>  OPNsense WebUI/API port (default: 4444)
  --dry-run               Show what would be deleted without actually deleting
  --debug                 Enable debug mode
  --help                  Show this help

Examples:
  # Delete proxy by CTID:
  bash delete_nginx_proxy.sh --ctid 768736636

  # Delete proxy with debug output:
  bash delete_nginx_proxy.sh --debug --ctid 768736636

  # Dry run (show what would be deleted):
  bash delete_nginx_proxy.sh --dry-run --ctid 768736636

  # Delete by CTID and FQDN:
  bash delete_nginx_proxy.sh --ctid 768736636 --fqdn sb-1768736636.userman.de
EOF
}
# =============================================================================
# Default values for arguments
# =============================================================================
CTID=""
FQDN=""
DRY_RUN="0"

# =============================================================================
# Argument parsing
# =============================================================================
while [[ $# -gt 0 ]]; do
  case "$1" in
    --ctid) CTID="${2:-}"; shift 2 ;;
    --fqdn) FQDN="${2:-}"; shift 2 ;;
    --opnsense-host) OPNSENSE_HOST="${2:-}"; shift 2 ;;
    --opnsense-port) OPNSENSE_PORT="${2:-}"; shift 2 ;;
    --dry-run) DRY_RUN="1"; shift 1 ;;
    --debug) DEBUG="1"; export DEBUG; shift 1 ;;
    --help|-h) usage; exit 0 ;;
    *) die "Unknown option: $1 (use --help)" ;;
  esac
done

# =============================================================================
# API Base URL
# =============================================================================
API_BASE="https://${OPNSENSE_HOST}:${OPNSENSE_PORT}/api"
# =============================================================================
# API Helper Functions
# =============================================================================

# Make API request to OPNsense
api_request() {
  local method="$1"
  local endpoint="$2"
  local data="${3:-}"

  local url="${API_BASE}${endpoint}"
  local auth="${OPNSENSE_API_KEY}:${OPNSENSE_API_SECRET}"

  info "API ${method} ${url}"

  local response

  if [[ -n "$data" ]]; then
    response=$(curl -s -k -X "${method}" \
      -u "${auth}" \
      -H "Content-Type: application/json" \
      -d "${data}" \
      "${url}" 2>&1)
  else
    response=$(curl -s -k -X "${method}" \
      -u "${auth}" \
      "${url}" 2>&1)
  fi

  echo "$response"
}

# Search for items by description
search_by_description() {
  local search_endpoint="$1"
  local description="$2"

  local response
  response=$(api_request "GET" "${search_endpoint}")

  info "Search response for ${search_endpoint}: ${response:0:500}..."

  # Extract the first UUID whose description matches (exits after the first hit)
  local uuid
  uuid=$(echo "$response" | python3 -c "
import json, sys
desc = sys.argv[1] if len(sys.argv) > 1 else ''
try:
    data = json.load(sys.stdin)
    rows = data.get('rows', [])
    for row in rows:
        row_desc = row.get('description', '')
        if row_desc == desc:
            print(row.get('uuid', ''))
            sys.exit(0)
except Exception as e:
    print(f'Error: {e}', file=sys.stderr)
" "${description}" 2>/dev/null || true)

  info "Found UUID for description '${description}': ${uuid:-none}"
  echo "$uuid"
}

# Search for HTTP Server by servername
search_http_server_by_servername() {
  local servername="$1"

  local response
  response=$(api_request "GET" "/nginx/settings/searchHttpServer")

  info "HTTP Server search response: ${response:0:500}..."

  # Extract the UUID whose servername matches
  local uuid
  uuid=$(echo "$response" | python3 -c "
import json, sys
sname = sys.argv[1] if len(sys.argv) > 1 else ''
try:
    data = json.load(sys.stdin)
    rows = data.get('rows', [])
    for row in rows:
        row_sname = row.get('servername', '')
        if row_sname == sname:
            print(row.get('uuid', ''))
            sys.exit(0)
except Exception as e:
    print(f'Error: {e}', file=sys.stderr)
" "${servername}" 2>/dev/null || true)

  info "Found HTTP Server UUID for servername '${servername}': ${uuid:-none}"
  echo "$uuid"
}
# =============================================================================
# Delete Functions
# =============================================================================

delete_item() {
  local item_type="$1"
  local uuid="$2"
  local endpoint="$3"

  if [[ -z "$uuid" ]]; then
    info "No ${item_type} found to delete"
    return 0
  fi

  if [[ "$DRY_RUN" == "1" ]]; then
    info "[DRY-RUN] Would delete ${item_type}: ${uuid}"
    echo "dry-run"
    return 0
  fi

  info "Deleting ${item_type}: ${uuid}"
  local response
  response=$(api_request "POST" "${endpoint}/${uuid}")

  local result
  result=$(echo "$response" | python3 -c "import json,sys; print(json.load(sys.stdin).get('result','unknown'))" 2>/dev/null || echo "unknown")

  if [[ "$result" == "deleted" ]]; then
    info "${item_type} deleted successfully"
    echo "deleted"
  else
    warn "Failed to delete ${item_type}: ${response}"
    echo "failed"
  fi
}
# =============================================================================
# Validation
# =============================================================================
[[ -n "$CTID" ]] || die "--ctid is required"

info "Script Version: ${SCRIPT_VERSION}"
info "Configuration:"
info "  CTID:     ${CTID}"
info "  FQDN:     ${FQDN:-auto-detect}"
info "  OPNsense: ${OPNSENSE_HOST}:${OPNSENSE_PORT}"
info "  Dry Run:  ${DRY_RUN}"
# =============================================================================
# Main
# =============================================================================
main() {
  info "Starting NGINX Reverse Proxy deletion for CTID ${CTID}..."

  local description="${CTID}"
  local deleted_count=0
  local failed_count=0

  # Results tracking
  local http_server_result="not_found"
  local location_result="not_found"
  local upstream_result="not_found"
  local upstream_server_result="not_found"

  # Step 1: Find and delete HTTP Server
  info "Step 1: Finding HTTP Server..."
  local http_server_uuid=""

  # Try to find by FQDN first
  if [[ -n "$FQDN" ]]; then
    http_server_uuid=$(search_http_server_by_servername "${FQDN}")
  fi

  # If not found by FQDN, try common patterns
  if [[ -z "$http_server_uuid" ]]; then
    # Try the sb-<ctid>.userman.de pattern
    http_server_uuid=$(search_http_server_by_servername "sb-${CTID}.userman.de")
  fi

  if [[ -z "$http_server_uuid" ]]; then
    # Try the sb-1<ctid>.userman.de pattern (with leading 1)
    http_server_uuid=$(search_http_server_by_servername "sb-1${CTID}.userman.de")
  fi

  if [[ -n "$http_server_uuid" ]]; then
    http_server_result=$(delete_item "HTTP Server" "$http_server_uuid" "/nginx/settings/delHttpServer")
    if [[ "$http_server_result" == "deleted" || "$http_server_result" == "dry-run" ]]; then
      deleted_count=$((deleted_count + 1))
    else
      failed_count=$((failed_count + 1))
    fi
  else
    info "No HTTP Server found for CTID ${CTID}"
  fi
  # Step 2: Find and delete Location
  info "Step 2: Finding Location..."
  local location_uuid
  location_uuid=$(search_by_description "/nginx/settings/searchLocation" "${description}")

  if [[ -n "$location_uuid" ]]; then
    location_result=$(delete_item "Location" "$location_uuid" "/nginx/settings/delLocation")
    if [[ "$location_result" == "deleted" || "$location_result" == "dry-run" ]]; then
      deleted_count=$((deleted_count + 1))
    else
      failed_count=$((failed_count + 1))
    fi
  else
    info "No Location found for CTID ${CTID}"
  fi

  # Step 3: Find and delete Upstream
  info "Step 3: Finding Upstream..."
  local upstream_uuid
  upstream_uuid=$(search_by_description "/nginx/settings/searchUpstream" "${description}")

  if [[ -n "$upstream_uuid" ]]; then
    upstream_result=$(delete_item "Upstream" "$upstream_uuid" "/nginx/settings/delUpstream")
    if [[ "$upstream_result" == "deleted" || "$upstream_result" == "dry-run" ]]; then
      deleted_count=$((deleted_count + 1))
    else
      failed_count=$((failed_count + 1))
    fi
  else
    info "No Upstream found for CTID ${CTID}"
  fi

  # Step 4: Find and delete Upstream Server
  info "Step 4: Finding Upstream Server..."
  local upstream_server_uuid
  upstream_server_uuid=$(search_by_description "/nginx/settings/searchUpstreamServer" "${description}")

  if [[ -n "$upstream_server_uuid" ]]; then
    upstream_server_result=$(delete_item "Upstream Server" "$upstream_server_uuid" "/nginx/settings/delUpstreamServer")
    if [[ "$upstream_server_result" == "deleted" || "$upstream_server_result" == "dry-run" ]]; then
      deleted_count=$((deleted_count + 1))
    else
      failed_count=$((failed_count + 1))
    fi
  else
    info "No Upstream Server found for CTID ${CTID}"
  fi
  # Step 5: Apply configuration (if not dry-run and something was deleted)
  local reconfigure_result="skipped"
  if [[ "$DRY_RUN" != "1" && $deleted_count -gt 0 ]]; then
    info "Step 5: Applying NGINX configuration..."
    local response
    response=$(api_request "POST" "/nginx/service/reconfigure" "{}")

    local status
    status=$(echo "$response" | python3 -c "import json,sys; print(json.load(sys.stdin).get('status',''))" 2>/dev/null || echo "unknown")

    if [[ "$status" == "ok" ]]; then
      info "NGINX configuration applied successfully"
      reconfigure_result="ok"
    else
      warn "NGINX reconfigure status: ${status}"
      reconfigure_result="failed"
    fi
  elif [[ "$DRY_RUN" == "1" ]]; then
    info "[DRY-RUN] Would apply NGINX configuration"
    reconfigure_result="dry-run"
  fi

  # Output result as JSON
  local success="true"
  [[ $failed_count -gt 0 ]] && success="false"

  local result
  result=$(cat <<EOF
{
  "success": ${success},
  "dry_run": $([[ "$DRY_RUN" == "1" ]] && echo "true" || echo "false"),
  "ctid": "${CTID}",
  "deleted_count": ${deleted_count},
  "failed_count": ${failed_count},
  "components": {
    "http_server": "${http_server_result}",
    "location": "${location_result}",
    "upstream": "${upstream_result}",
    "upstream_server": "${upstream_server_result}"
  },
  "reconfigure": "${reconfigure_result}"
}
EOF
)

  if [[ "$DEBUG" == "1" ]]; then
    echo "$result"
  else
    # Compact JSON
    echo "$result" | python3 -c "import json,sys; print(json.dumps(json.load(sys.stdin)))" 2>/dev/null || echo "$result"
  fi
}

main
@@ -1,52 +1,69 @@
#!/bin/bash
# delete_stopped_lxc.sh - deletes all stopped LXC containers on PVE

# Script to delete all stopped LXCs on the local Proxmox node
# Uses pct destroy and only considers the local node
set -e

# Check that the script is run as root
if [ "$(id -u)" -ne 0 ]; then
  echo "Dieses Skript muss als Root ausgeführt werden." >&2
  exit 1
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color

echo -e "${YELLOW}=== Gestoppte LXC Container finden ===${NC}\n"

# Array for stopped containers
declare -a STOPPED_CTS

# Walk all containers and collect the stopped ones
while read -r line; do
  VMID=$(echo "$line" | awk '{print $1}')
  STATUS=$(echo "$line" | awk '{print $2}')
  NAME=$(echo "$line" | awk '{print $3}')

  if [[ "$STATUS" == "stopped" ]]; then
    STOPPED_CTS+=("$VMID:$NAME")
    echo -e "  ${RED}[STOPPED]${NC} CT $VMID - $NAME"
  fi
done < <(pct list | tail -n +2)

# Check that pct is available
if ! command -v pct &> /dev/null; then
  echo "pct ist nicht installiert. Bitte installieren Sie es zuerst." >&2
  exit 1
fi

# Fetch all stopped LXCs on the local node
echo "Suche nach gestoppten LXCs auf diesem Node..."
stopped_lxcs=$(pct list | awk '$2 == "stopped" {print $1}')

if [ -z "$stopped_lxcs" ]; then
  echo "Keine gestoppten LXCs auf diesem Node gefunden."
# Check whether stopped containers were found
if [[ ${#STOPPED_CTS[@]} -eq 0 ]]; then
  echo -e "\n${GREEN}Keine gestoppten Container gefunden.${NC}"
  exit 0
fi

echo "Gefundene gestoppte LXCs auf diesem Node:"
echo "$stopped_lxcs" | while read -r lxc_id; do
  lxc_name=$(pct config $lxc_id | grep '^hostname:' | awk '{print $2}')
  echo "  $lxc_id - $lxc_name"
done
echo -e "\n${YELLOW}Gefunden: ${#STOPPED_CTS[@]} gestoppte Container${NC}\n"

# Ask for confirmation
read -p "Möchten Sie diese LXCs wirklich löschen? (y/n): " confirm
if [[ ! "$confirm" =~ ^[Yy]$ ]]; then
  echo "Löschvorgang abgebrochen."
# Request confirmation
read -p "Möchten Sie ALLE gestoppten Container unwiderruflich löschen? (ja/nein): " CONFIRM

if [[ "$CONFIRM" != "ja" ]]; then
  echo -e "${GREEN}Abgebrochen. Keine Container wurden gelöscht.${NC}"
  exit 0
fi

# Delete LXCs
echo "Lösche gestoppte LXCs..."
for lxc_id in $stopped_lxcs; do
  echo "Lösche LXC $lxc_id..."
  pct destroy $lxc_id
  if [ $? -eq 0 ]; then
    echo "LXC $lxc_id erfolgreich gelöscht."
# Second confirmation
read -p "Sind Sie WIRKLICH sicher? Tippen Sie 'LÖSCHEN' ein: " CONFIRM2

if [[ "$CONFIRM2" != "LÖSCHEN" ]]; then
  echo -e "${GREEN}Abgebrochen. Keine Container wurden gelöscht.${NC}"
  exit 0
fi

echo -e "\n${RED}=== Lösche Container ===${NC}\n"

# Delete containers
for CT in "${STOPPED_CTS[@]}"; do
  VMID="${CT%%:*}"
  NAME="${CT##*:}"

  echo -n "Lösche CT $VMID ($NAME)... "

  if pct destroy "$VMID" --purge 2>/dev/null; then
    echo -e "${GREEN}OK${NC}"
  else
    echo "Fehler beim Löschen von LXC $lxc_id." >&2
    echo -e "${RED}FEHLER${NC}"
  fi
done

echo "Vorgang abgeschlossen."
echo -e "\n${GREEN}=== Fertig ===${NC}"
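The stopped-container detection in the new version reduces to filtering the second column of `pct list`. A sketch with sample data instead of a live node:

```shell
# sample `pct list`-style output; the real script pipes from pct itself
sample='VMID       Status     Name
101        stopped    web01
102        running    db01
103        stopped    cache01'
stopped=$(echo "$sample" | tail -n +2 | awk '$2 == "stopped" {print $1}')
echo "$stopped"
```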
install.sh (224 changed lines)
@@ -64,8 +64,12 @@ Domain / n8n options:
  --debug                  Enable debug mode (show logs on stderr)
  --help                   Show help

PostgREST / Supabase options:
  --postgrest-port <port>  PostgREST port (default: 3000)

Notes:
  - This script creates a Debian 12 LXC and provisions Docker + customer stack (Postgres/pgvector + n8n).
  - This script creates a Debian 12 LXC and provisions Docker + customer stack (Postgres/pgvector + n8n + PostgREST).
  - PostgREST provides a REST API for PostgreSQL, compatible with the Supabase Vector Store node in n8n.
  - At the end it prints a JSON with credentials and URLs.
EOF
}

@@ -89,6 +93,12 @@ UNPRIV="1"
BASE_DOMAIN="userman.de"
N8N_OWNER_EMAIL=""
N8N_OWNER_PASS=""
POSTGREST_PORT="3000"

# Ollama API settings (hardcoded for local setup)
OLLAMA_HOST="192.168.45.3"
OLLAMA_PORT="11434"
OLLAMA_URL="http://${OLLAMA_HOST}:${OLLAMA_PORT}"

# ---------------------------
# Arg parsing
@@ -109,6 +119,7 @@ while [[ $# -gt 0 ]]; do
  --base-domain) BASE_DOMAIN="${2:-}"; shift 2 ;;
  --n8n-owner-email) N8N_OWNER_EMAIL="${2:-}"; shift 2 ;;
  --n8n-owner-pass) N8N_OWNER_PASS="${2:-}"; shift 2 ;;
  --postgrest-port) POSTGREST_PORT="${2:-}"; shift 2 ;;
  --debug) DEBUG="1"; export DEBUG; shift 1 ;;
  --help|-h) usage; exit 0 ;;
  *) die "Unknown option: $1 (use --help)" ;;
@@ -293,6 +304,23 @@ WEBHOOK_URL="https://${FQDN}/"
# But until proxy is in place, false avoids login trouble.
N8N_SECURE_COOKIE="false"

# Generate JWT secret for PostgREST (32 bytes = 256 bit)
JWT_SECRET="$(openssl rand -base64 32 | tr -d '\n')"

# For a proper JWT, we need the header.payload.signature format
# Let's create proper JWTs
JWT_HEADER="$(echo -n '{"alg":"HS256","typ":"JWT"}' | base64 | tr -d '\n' | tr '+/' '-_' | tr -d '=')"
ANON_PAYLOAD="$(echo -n '{"role":"anon","iss":"supabase","iat":1700000000,"exp":2000000000}' | base64 | tr -d '\n' | tr '+/' '-_' | tr -d '=')"
SERVICE_PAYLOAD="$(echo -n '{"role":"service_role","iss":"supabase","iat":1700000000,"exp":2000000000}' | base64 | tr -d '\n' | tr '+/' '-_' | tr -d '=')"

ANON_SIGNATURE="$(echo -n "${JWT_HEADER}.${ANON_PAYLOAD}" | openssl dgst -sha256 -hmac "${JWT_SECRET}" -binary | base64 | tr -d '\n' | tr '+/' '-_' | tr -d '=')"
SERVICE_SIGNATURE="$(echo -n "${JWT_HEADER}.${SERVICE_PAYLOAD}" | openssl dgst -sha256 -hmac "${JWT_SECRET}" -binary | base64 | tr -d '\n' | tr '+/' '-_' | tr -d '=')"

ANON_KEY="${JWT_HEADER}.${ANON_PAYLOAD}.${ANON_SIGNATURE}"
SERVICE_ROLE_KEY="${JWT_HEADER}.${SERVICE_PAYLOAD}.${SERVICE_SIGNATURE}"

info "Generated JWT Secret and API Keys for PostgREST"
# Write .env into CT
pct_push_text "${CTID}" "/opt/customer-stack/.env" "$(cat <<EOF
PG_DB=${PG_DB}
@@ -312,13 +340,95 @@ N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
N8N_DIAGNOSTICS_ENABLED=false
N8N_VERSION_NOTIFICATIONS_ENABLED=false
N8N_TEMPLATES_ENABLED=false

# PostgREST / Supabase API
POSTGREST_PORT=${POSTGREST_PORT}
JWT_SECRET=${JWT_SECRET}
ANON_KEY=${ANON_KEY}
SERVICE_ROLE_KEY=${SERVICE_ROLE_KEY}
EOF
)"
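The generated keys can be sanity-checked by recomputing the HMAC-SHA256 signature over the first two token segments; it must equal the third. This sketch reuses the same base64url pipeline as the hunk above, with a throwaway secret:

```shell
JWT_SECRET="throwaway-test-secret"
b64url() { base64 | tr -d '\n' | tr '+/' '-_' | tr -d '='; }
JWT_HEADER="$(echo -n '{"alg":"HS256","typ":"JWT"}' | b64url)"
PAYLOAD="$(echo -n '{"role":"anon","iss":"supabase"}' | b64url)"
SIG="$(echo -n "${JWT_HEADER}.${PAYLOAD}" | openssl dgst -sha256 -hmac "${JWT_SECRET}" -binary | b64url)"
TOKEN="${JWT_HEADER}.${PAYLOAD}.${SIG}"
# recompute the signature over header.payload and compare with the token's third segment
CHECK="$(echo -n "${TOKEN%.*}" | openssl dgst -sha256 -hmac "${JWT_SECRET}" -binary | b64url)"
[ "$CHECK" = "${TOKEN##*.}" ] && echo "signature ok"   # → signature ok
```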
# init sql for pgvector (optional but nice)
|
||||
# init sql for pgvector + Supabase Vector Store schema
|
||||
pct_push_text "${CTID}" "/opt/customer-stack/sql/init_pgvector.sql" "$(cat <<'SQL'
-- Enable extensions
CREATE EXTENSION IF NOT EXISTS vector;
CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- Create schema for API
CREATE SCHEMA IF NOT EXISTS api;

-- Create documents table for Vector Store (n8n PGVector Store compatible)
CREATE TABLE IF NOT EXISTS public.documents (
    id BIGSERIAL PRIMARY KEY,
    text TEXT,
    metadata JSONB,
    embedding VECTOR(768) -- nomic-embed-text produces 768-dimensional embeddings
);

-- Create index for vector similarity search
CREATE INDEX IF NOT EXISTS documents_embedding_idx ON public.documents
    USING ivfflat (embedding vector_cosine_ops)
    WITH (lists = 100);

-- Create the match_documents function for similarity search (Supabase/LangChain compatible)
CREATE OR REPLACE FUNCTION public.match_documents(
    query_embedding VECTOR(768),
    match_count INT DEFAULT 5,
    filter JSONB DEFAULT '{}'
)
RETURNS TABLE (
    id BIGINT,
    content TEXT,
    metadata JSONB,
    similarity FLOAT
)
LANGUAGE plpgsql
AS $$
BEGIN
    RETURN QUERY
    SELECT
        d.id,
        d.text, -- table stores the chunk in "text"; returned as "content" for LangChain compatibility
        d.metadata,
        1 - (d.embedding <=> query_embedding) AS similarity
    FROM public.documents d
    WHERE (filter = '{}' OR d.metadata @> filter)
    ORDER BY d.embedding <=> query_embedding
    LIMIT match_count;
END;
$$;

-- Grant permissions for PostgREST roles
-- Create roles if they don't exist
DO $$
BEGIN
    IF NOT EXISTS (SELECT FROM pg_roles WHERE rolname = 'anon') THEN
        CREATE ROLE anon NOLOGIN;
    END IF;
    IF NOT EXISTS (SELECT FROM pg_roles WHERE rolname = 'service_role') THEN
        CREATE ROLE service_role NOLOGIN;
    END IF;
    IF NOT EXISTS (SELECT FROM pg_roles WHERE rolname = 'authenticator') THEN
        CREATE ROLE authenticator NOINHERIT LOGIN PASSWORD 'authenticator_password';
    END IF;
END
$$;

-- Grant permissions
GRANT USAGE ON SCHEMA public TO anon, service_role;
GRANT ALL ON ALL TABLES IN SCHEMA public TO anon, service_role;
GRANT ALL ON ALL SEQUENCES IN SCHEMA public TO anon, service_role;
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA public TO anon, service_role;

-- Allow authenticator to switch to these roles
GRANT anon TO authenticator;
GRANT service_role TO authenticator;

-- Set default privileges for future tables
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL ON TABLES TO anon, service_role;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL ON SEQUENCES TO anon, service_role;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT EXECUTE ON FUNCTIONS TO anon, service_role;
SQL
)"
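PostgREST exposes functions like `match_documents` as RPC endpoints under `/rpc/<name>`. A hedged sketch of what the call looks like; the command is only printed here, and the URL, key, and two-element embedding are placeholders (a real query needs a 768-dimensional vector):

```shell
# Build (and just print) the PostgREST RPC call for match_documents.
SUPABASE_URL_EXTERNAL="http://192.168.45.171:3001"   # placeholder for illustration
SERVICE_ROLE_KEY="eyJ...example"                     # placeholder token

BODY='{"query_embedding": [0.1, 0.2], "match_count": 5, "filter": {}}'
echo "curl -sS -X POST '${SUPABASE_URL_EXTERNAL}/rpc/match_documents'" \
     "-H 'Authorization: Bearer ${SERVICE_ROLE_KEY}'" \
     "-H 'Content-Type: application/json'" \
     "-d '${BODY}'"
```

The `filter` argument is matched against the `metadata` column with the `@>` containment operator, so `{"source": "handbook"}` restricts results to chunks tagged that way.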

@@ -344,6 +454,24 @@ services:
    networks:
      - customer-net

  postgrest:
    image: postgrest/postgrest:latest
    container_name: customer-postgrest
    restart: unless-stopped
    depends_on:
      postgres:
        condition: service_healthy
    ports:
      - "${POSTGREST_PORT}:3000"
    environment:
      PGRST_DB_URI: postgres://${PG_USER}:${PG_PASSWORD}@postgres:5432/${PG_DB}
      PGRST_DB_SCHEMA: public
      PGRST_DB_ANON_ROLE: anon
      PGRST_JWT_SECRET: ${JWT_SECRET}
      PGRST_DB_USE_LEGACY_GUCS: "false"
    networks:
      - customer-net

  n8n:
    image: n8nio/n8n:latest
    container_name: n8n
@@ -351,6 +479,8 @@ services:
    depends_on:
      postgres:
        condition: service_healthy
      postgrest:
        condition: service_started
    ports:
      - "${N8N_PORT}:5678"
    environment:
@@ -420,22 +550,104 @@ pct_exec "${CTID}" "cd /opt/customer-stack && docker compose ps"
# We create the owner via CLI inside the container.
pct_exec "${CTID}" "cd /opt/customer-stack && docker exec -u node n8n n8n --help >/dev/null 2>&1 || true"

# Try the modern command first (works in current n8n builds); if it fails, the setup screen remains (but you'll see it in the logs).
pct_exec "${CTID}" "cd /opt/customer-stack && (docker exec -u node n8n n8n user-management:reset --email '${N8N_OWNER_EMAIL}' --password '${N8N_OWNER_PASS}' --firstName 'Admin' --lastName 'Owner' >/dev/null 2>&1 || true)"

# Final info
info "Step 7 OK: Stack deployed"

# ---------------------------
# Step 8: Setup Owner Account via REST API (fallback)
# ---------------------------
info "Step 8: Setting up owner account via REST API..."

# Wait for n8n to be ready
sleep 5

# Try REST API setup (works if user-management:reset didn't work)
pct_exec "${CTID}" "curl -sS -X POST 'http://127.0.0.1:5678/rest/owner/setup' \
  -H 'Content-Type: application/json' \
  -d '{\"email\":\"${N8N_OWNER_EMAIL}\",\"firstName\":\"Admin\",\"lastName\":\"Owner\",\"password\":\"${N8N_OWNER_PASS}\"}' || true"

info "Step 8 OK: Owner account setup attempted"

# ---------------------------
# Step 9: Final URLs and Output
# ---------------------------
info "Step 9: Generating final output..."

# Final URLs
N8N_INTERNAL_URL="http://${CT_IP}:5678/"
N8N_EXTERNAL_URL="https://${FQDN}"
POSTGREST_URL="http://${CT_IP}:${POSTGREST_PORT}"
# Supabase URL format for the n8n credential (PostgREST acts as the Supabase API)
# IMPORTANT: n8n runs inside Docker, so it needs the Docker-internal URL!
SUPABASE_URL="http://postgrest:3000"
SUPABASE_URL_EXTERNAL="http://${CT_IP}:${POSTGREST_PORT}"

# Chat URL (webhook URL for the chat trigger - available after workflow activation)
CHAT_WEBHOOK_URL="https://${FQDN}/webhook/rag-chat-webhook/chat"
CHAT_INTERNAL_URL="http://${CT_IP}:5678/webhook/rag-chat-webhook/chat"

info "Step 9 OK: URLs generated"
info "n8n internal: ${N8N_INTERNAL_URL}"
info "n8n external (planned via OPNsense): ${N8N_EXTERNAL_URL}"
info "PostgREST API: ${POSTGREST_URL}"
info "Supabase Service Role Key: ${SERVICE_ROLE_KEY}"
info "Ollama URL: ${OLLAMA_URL}"
info "Chat Webhook URL (external): ${CHAT_WEBHOOK_URL}"
info "Chat Webhook URL (internal): ${CHAT_INTERNAL_URL}"

# ---------------------------
# Step 10: Setup n8n Credentials + Import Workflow + Activate
# ---------------------------
info "Step 10: Setting up n8n credentials and importing RAG workflow..."

# Use the robust n8n setup function from libsupabase.sh
# Parameters: ctid, email, password, pg_host, pg_port, pg_db, pg_user, pg_pass, ollama_url, ollama_model, embedding_model
if n8n_setup_rag_workflow "${CTID}" "${N8N_OWNER_EMAIL}" "${N8N_OWNER_PASS}" \
  "postgres" "5432" "${PG_DB}" "${PG_USER}" "${PG_PASSWORD}" \
  "${OLLAMA_URL}" "llama3.2:3b" "nomic-embed-text:v1.5"; then
  info "Step 10 OK: n8n RAG workflow setup completed successfully"
else
  warn "Step 10: n8n workflow setup failed - manual setup may be required"
  info "Step 10: You can manually import the workflow via the n8n UI"
fi

# ---------------------------
# Step 11: Setup NGINX Reverse Proxy in OPNsense
# ---------------------------
info "Step 11: Setting up NGINX Reverse Proxy in OPNsense..."

# Check if setup_nginx_proxy.sh exists
if [[ -f "${SCRIPT_DIR}/setup_nginx_proxy.sh" ]]; then
  # Run the proxy setup script
  PROXY_RESULT=$(DEBUG="${DEBUG}" bash "${SCRIPT_DIR}/setup_nginx_proxy.sh" \
    --ctid "${CTID}" \
    --hostname "${CT_HOSTNAME}" \
    --fqdn "${FQDN}" \
    --backend-ip "${CT_IP}" \
    --backend-port "5678" \
    2>&1 || echo '{"success": false, "error": "Proxy setup failed"}')

  # Check whether the proxy setup succeeded
  if echo "$PROXY_RESULT" | grep -q '"success": true'; then
    info "NGINX Reverse Proxy setup successful"
  else
    warn "NGINX Reverse Proxy setup may have failed: ${PROXY_RESULT}"
  fi
else
  warn "setup_nginx_proxy.sh not found, skipping proxy setup"
fi

info "Step 11 OK: Proxy setup completed"

# ---------------------------
# Final JSON Output
# ---------------------------
# Machine-readable JSON output (for downstream automation)
# Compact single-line JSON for easy parsing
# With DEBUG=0: emit JSON on fd 3 (the original stdout)
# With DEBUG=1: emit JSON on the normal stdout (also goes into the log)
JSON_OUTPUT="{\"ctid\":${CTID},\"hostname\":\"${CT_HOSTNAME}\",\"fqdn\":\"${FQDN}\",\"ip\":\"${CT_IP}\",\"vlan\":${VLAN},\"urls\":{\"n8n_internal\":\"${N8N_INTERNAL_URL}\",\"n8n_external\":\"${N8N_EXTERNAL_URL}\",\"postgrest\":\"${POSTGREST_URL}\",\"chat_webhook\":\"${CHAT_WEBHOOK_URL}\",\"chat_internal\":\"${CHAT_INTERNAL_URL}\"},\"postgres\":{\"host\":\"postgres\",\"port\":5432,\"db\":\"${PG_DB}\",\"user\":\"${PG_USER}\",\"password\":\"${PG_PASSWORD}\"},\"supabase\":{\"url\":\"${SUPABASE_URL}\",\"url_external\":\"${SUPABASE_URL_EXTERNAL}\",\"anon_key\":\"${ANON_KEY}\",\"service_role_key\":\"${SERVICE_ROLE_KEY}\",\"jwt_secret\":\"${JWT_SECRET}\"},\"ollama\":{\"url\":\"${OLLAMA_URL}\"},\"n8n\":{\"encryption_key\":\"${N8N_ENCRYPTION_KEY}\",\"owner_email\":\"${N8N_OWNER_EMAIL}\",\"owner_password\":\"${N8N_OWNER_PASS}\",\"secure_cookie\":${N8N_SECURE_COOKIE}},\"log_file\":\"${FINAL_LOG}\"}"

if [[ "$DEBUG" == "1" ]]; then
  # Debug mode: pretty-print the JSON for readability
419
install_flowise.sh
Executable file
@@ -0,0 +1,419 @@
#!/usr/bin/env bash
set -Eeuo pipefail

# =============================================================================
# Flowise LXC Installer
# =============================================================================
# Creates an LXC container with Docker + Flowise + PostgreSQL
# =============================================================================

SCRIPT_VERSION="1.0.0"

# Debug mode: 0 = JSON only, 1 = logs on stderr
DEBUG="${DEBUG:-0}"
export DEBUG

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"

# Log directory
LOG_DIR="${SCRIPT_DIR}/logs"
mkdir -p "${LOG_DIR}"

# Temporary log file (renamed to the container hostname later)
TEMP_LOG="${LOG_DIR}/install_flowise_$$.log"
FINAL_LOG=""

# Cleanup function run on exit
cleanup_log() {
  # Rename if FINAL_LOG is set
  if [[ -n "${FINAL_LOG}" && -f "${TEMP_LOG}" ]]; then
    mv "${TEMP_LOG}" "${FINAL_LOG}"
  fi
}
trap cleanup_log EXIT

# Redirect all output to the log file
# DEBUG=1: also echo to stderr (tee)
# DEBUG=0: file only
if [[ "$DEBUG" == "1" ]]; then
  # Debug mode: output to stderr AND to the file
  exec > >(tee -a "${TEMP_LOG}") 2>&1
else
  # Normal mode: file only; the original stdout stays free for JSON
  exec 3>&1  # duplicate stdout to fd 3, reserved for JSON
  exec > "${TEMP_LOG}" 2>&1
fi

source "${SCRIPT_DIR}/libsupabase.sh"
setup_traps

usage() {
  cat >&2 <<'EOF'
Usage:
  bash install_flowise.sh [options]

Core options:
  --ctid <id>            Force CT ID (optional). If omitted, a customer-safe CTID is generated.
  --cores <n>            (default: 4)
  --memory <mb>          (default: 4096)
  --swap <mb>            (default: 512)
  --disk <gb>            (default: 50)
  --bridge <vmbrX>       (default: vmbr0)
  --storage <storage>    (default: local-zfs)
  --ip <dhcp|CIDR>       (default: dhcp)
  --vlan <id>            VLAN tag for net0 (default: 90; set 0 to disable)
  --privileged           Create privileged CT (default: unprivileged)
  --apt-proxy <url>      Optional: APT proxy (e.g. http://192.168.45.2:3142) for Apt-Cacher NG

Domain / Flowise options:
  --base-domain <domain> (default: userman.de) -> FQDN becomes fw-<unix>.domain
  --flowise-user <user>  (default: admin)
  --flowise-pass <pass>  Optional. If omitted, generated (policy compliant).
  --debug                Enable debug mode (show logs on stderr)
  --help                 Show help

Notes:
  - This script creates a Debian 12 LXC and provisions Docker + Flowise stack (Postgres + Flowise).
  - At the end it prints a JSON with credentials and URLs.
EOF
}

# Defaults
DOCKER_REGISTRY_MIRROR="http://192.168.45.2:5000"
APT_PROXY=""
CTID=""
CORES="4"
MEMORY="4096"
SWAP="512"
DISK="50"
BRIDGE="vmbr0"
STORAGE="local-zfs"
IPCFG="dhcp"
VLAN="90"
UNPRIV="1"

BASE_DOMAIN="userman.de"
FLOWISE_USER="admin"
FLOWISE_PASS=""

# ---------------------------
# Arg parsing
# ---------------------------
while [[ $# -gt 0 ]]; do
  case "$1" in
    --ctid) CTID="${2:-}"; shift 2 ;;
    --apt-proxy) APT_PROXY="${2:-}"; shift 2 ;;
    --cores) CORES="${2:-}"; shift 2 ;;
    --memory) MEMORY="${2:-}"; shift 2 ;;
    --swap) SWAP="${2:-}"; shift 2 ;;
    --disk) DISK="${2:-}"; shift 2 ;;
    --bridge) BRIDGE="${2:-}"; shift 2 ;;
    --storage) STORAGE="${2:-}"; shift 2 ;;
    --ip) IPCFG="${2:-}"; shift 2 ;;
    --vlan) VLAN="${2:-}"; shift 2 ;;
    --privileged) UNPRIV="0"; shift 1 ;;
    --base-domain) BASE_DOMAIN="${2:-}"; shift 2 ;;
    --flowise-user) FLOWISE_USER="${2:-}"; shift 2 ;;
    --flowise-pass) FLOWISE_PASS="${2:-}"; shift 2 ;;
    --debug) DEBUG="1"; export DEBUG; shift 1 ;;
    --help|-h) usage; exit 0 ;;
    *) die "Unknown option: $1 (use --help)" ;;
  esac
done

# ---------------------------
# Validation
# ---------------------------
[[ "$CORES" =~ ^[0-9]+$ ]] || die "--cores must be integer"
[[ "$MEMORY" =~ ^[0-9]+$ ]] || die "--memory must be integer"
[[ "$SWAP" =~ ^[0-9]+$ ]] || die "--swap must be integer"
[[ "$DISK" =~ ^[0-9]+$ ]] || die "--disk must be integer"
[[ "$UNPRIV" == "0" || "$UNPRIV" == "1" ]] || die "internal: UNPRIV invalid"
[[ "$VLAN" =~ ^[0-9]+$ ]] || die "--vlan must be integer (0 disables tagging)"
[[ -n "$BASE_DOMAIN" ]] || die "--base-domain must not be empty"

if [[ "$IPCFG" != "dhcp" ]]; then
  [[ "$IPCFG" =~ ^([0-9]{1,3}\.){3}[0-9]{1,3}/[0-9]{1,2}$ ]] || die "--ip must be dhcp or CIDR (e.g. 192.168.45.171/24)"
fi

if [[ -n "${APT_PROXY}" ]]; then
  [[ "${APT_PROXY}" =~ ^http://[^/]+:[0-9]+$ ]] || die "--apt-proxy must look like http://IP:PORT (example: http://192.168.45.2:3142)"
fi

info "Script Version: ${SCRIPT_VERSION}"
info "Argument parsing OK"

if [[ -n "${APT_PROXY}" ]]; then
  info "APT proxy enabled: ${APT_PROXY}"
else
  info "APT proxy disabled"
fi

# ---------------------------
# Preflight Proxmox
# ---------------------------
need_cmd pct pvesm pveam pvesh grep date awk sed cut tr head

pve_storage_exists "$STORAGE" || die "Storage not found: $STORAGE"
pve_bridge_exists "$BRIDGE" || die "Bridge not found: $BRIDGE"

TEMPLATE="$(pve_template_ensure_debian12 "$STORAGE")"
info "Template OK: ${TEMPLATE}"

# Hostname / FQDN based on unix time (fw- prefix for Flowise)
UNIXTS="$(date +%s)"
CT_HOSTNAME="fw-${UNIXTS}"
FQDN="${CT_HOSTNAME}.${BASE_DOMAIN}"

# Name the log file after the container hostname
FINAL_LOG="${LOG_DIR}/${CT_HOSTNAME}.log"

# CTID selection
if [[ -n "$CTID" ]]; then
  [[ "$CTID" =~ ^[0-9]+$ ]] || die "--ctid must be integer"
  if pve_vmid_exists_cluster "$CTID"; then
    die "Forced CTID=${CTID} already exists in cluster"
  fi
else
  # unix time - 1000000000 (safe until 2038)
  CTID="$(pve_ctid_from_unixtime "$UNIXTS")"
  if pve_vmid_exists_cluster "$CTID"; then
    die "Generated CTID=${CTID} already exists in cluster (unexpected). Try again in 1s."
  fi
fi
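Per the comment above, the generated CTID is the unix timestamp minus 1000000000 (that arithmetic is an assumption here, taken from the comment; `pve_ctid_from_unixtime` lives in libsupabase.sh). A quick illustration:

```shell
# Illustration only: derive a CTID the way the comment describes.
UNIXTS=1769173910                   # example timestamp (cf. tested container sb-1769173910)
CTID=$((UNIXTS - 1000000000))
echo "$CTID"   # → 769173910
```

This keeps CTIDs unique per second and well clear of the low IDs typically used for manually created guests.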

# Flowise credentials defaults
if [[ -z "$FLOWISE_PASS" ]]; then
  FLOWISE_PASS="$(gen_password_policy)"
else
  password_policy_check "$FLOWISE_PASS" || die "--flowise-pass does not meet policy: 8+ chars, 1 number, 1 uppercase"
fi

info "CTID selected: ${CTID}"
info "SCRIPT_DIR=${SCRIPT_DIR}"
info "CT_HOSTNAME=${CT_HOSTNAME}"
info "FQDN=${FQDN}"
info "cores=${CORES} memory=${MEMORY}MB swap=${SWAP}MB disk=${DISK}GB"
info "bridge=${BRIDGE} storage=${STORAGE} ip=${IPCFG} vlan=${VLAN} unprivileged=${UNPRIV}"

# ---------------------------
# Step 1: Create CT
# ---------------------------
NET0="$(pve_build_net0 "$BRIDGE" "$IPCFG" "$VLAN")"
ROOTFS="${STORAGE}:${DISK}"
FEATURES="nesting=1,keyctl=1,fuse=1"

info "Step 1: Create CT"
info "Creating CT ${CTID} (${CT_HOSTNAME}) from ${TEMPLATE}"
pct create "${CTID}" "${TEMPLATE}" \
  --hostname "${CT_HOSTNAME}" \
  --cores "${CORES}" \
  --memory "${MEMORY}" \
  --swap "${SWAP}" \
  --net0 "${NET0}" \
  --rootfs "${ROOTFS}" \
  --unprivileged "${UNPRIV}" \
  --features "${FEATURES}" \
  --start 0 \
  --onboot yes

info "CT created (not started). Next step: start CT + wait for IP"
info "Starting CT ${CTID}"
pct start "${CTID}"

CT_IP="$(pct_wait_for_ip "${CTID}" || true)"
[[ -n "${CT_IP}" ]] || die "Could not determine CT IP after start"

info "Step 1 OK: LXC created + IP determined"
info "CT_HOSTNAME=${CT_HOSTNAME}"
info "CT_IP=${CT_IP}"

# ---------------------------
# Step 2: Provision inside CT (Docker + Locales + Base)
# ---------------------------
info "Step 2: Provisioning inside CT (Docker + locales + base)"

# Optional: APT proxy (Apt-Cacher NG)
if [[ -n "${APT_PROXY}" ]]; then
  pct_exec "${CTID}" "cat > /etc/apt/apt.conf.d/00aptproxy <<'EOF'
Acquire::http::Proxy \"${APT_PROXY}\";
Acquire::https::Proxy \"${APT_PROXY}\";
EOF"
  pct_exec "$CTID" "apt-config dump | grep -i proxy || true"
fi

# Minimal base packages
pct_exec "${CTID}" "export DEBIAN_FRONTEND=noninteractive; apt-get update -y"
pct_exec "${CTID}" "export DEBIAN_FRONTEND=noninteractive; apt-get install -y ca-certificates curl gnupg lsb-release"

# Locales (avoid perl warnings + consistent system)
pct_exec "${CTID}" "export DEBIAN_FRONTEND=noninteractive; apt-get install -y locales"
pct_exec "${CTID}" "sed -i 's/^# *de_DE.UTF-8 UTF-8/de_DE.UTF-8 UTF-8/; s/^# *en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen || true"
pct_exec "${CTID}" "locale-gen >/dev/null || true"
pct_exec "${CTID}" "update-locale LANG=de_DE.UTF-8 LC_ALL=de_DE.UTF-8 || true"

# Docker official repo (Debian 12 / bookworm)
pct_exec "${CTID}" "install -m 0755 -d /etc/apt/keyrings"
pct_exec "${CTID}" "curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg"
pct_exec "${CTID}" "chmod a+r /etc/apt/keyrings/docker.gpg"
pct_exec "${CTID}" "echo \"deb [arch=\$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian \$(. /etc/os-release && echo \$VERSION_CODENAME) stable\" > /etc/apt/sources.list.d/docker.list"
pct_exec "${CTID}" "export DEBIAN_FRONTEND=noninteractive; apt-get update -y"
pct_exec "${CTID}" "export DEBIAN_FRONTEND=noninteractive; apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin"

# Create stack directories
pct_exec "${CTID}" "mkdir -p /opt/flowise-stack/volumes/postgres/data /opt/flowise-stack/volumes/flowise-data /opt/flowise-stack/sql"

info "Step 2 OK: Docker + Compose plugin installed, locales set, base directories created"

# ---------------------------
# Step 3: Finalize stack + secrets + up + checks
# ---------------------------
info "Step 3: Finalize stack + secrets + up + checks"

# Secrets
PG_DB="flowise"
PG_USER="flowise"
PG_PASSWORD="$(gen_password_policy)"
FLOWISE_SECRETKEY="$(gen_hex_64)"

# Flowise configuration
FLOWISE_PORT="3000"
FLOWISE_HOST="${CT_IP}"
FLOWISE_EXTERNAL_URL="https://${FQDN}"

# Write .env into CT
pct_push_text "${CTID}" "/opt/flowise-stack/.env" "$(cat <<EOF
# PostgreSQL
PG_DB=${PG_DB}
PG_USER=${PG_USER}
PG_PASSWORD=${PG_PASSWORD}

# Flowise
FLOWISE_PORT=${FLOWISE_PORT}
FLOWISE_USERNAME=${FLOWISE_USER}
FLOWISE_PASSWORD=${FLOWISE_PASS}
FLOWISE_SECRETKEY_OVERWRITE=${FLOWISE_SECRETKEY}

# Database connection
DATABASE_TYPE=postgres
DATABASE_HOST=postgres
DATABASE_PORT=5432
DATABASE_NAME=${PG_DB}
DATABASE_USER=${PG_USER}
DATABASE_PASSWORD=${PG_PASSWORD}

# General
TZ=Europe/Berlin
EOF
)"

# init sql for pgvector (optional but useful for Flowise vector stores)
pct_push_text "${CTID}" "/opt/flowise-stack/sql/init_pgvector.sql" "$(cat <<'SQL'
CREATE EXTENSION IF NOT EXISTS vector;
CREATE EXTENSION IF NOT EXISTS pg_trgm;
SQL
)"

# docker-compose.yml for Flowise
pct_push_text "${CTID}" "/opt/flowise-stack/docker-compose.yml" "$(cat <<'YML'
services:
  postgres:
    image: pgvector/pgvector:pg16
    container_name: flowise-postgres
    restart: unless-stopped
    environment:
      POSTGRES_DB: ${PG_DB}
      POSTGRES_USER: ${PG_USER}
      POSTGRES_PASSWORD: ${PG_PASSWORD}
    volumes:
      - ./volumes/postgres/data:/var/lib/postgresql/data
      - ./sql:/docker-entrypoint-initdb.d:ro
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${PG_USER} -d ${PG_DB} || exit 1"]
      interval: 10s
      timeout: 5s
      retries: 20
    networks:
      - flowise-net

  flowise:
    image: flowiseai/flowise:latest
    container_name: flowise
    restart: unless-stopped
    depends_on:
      postgres:
        condition: service_healthy
    ports:
      - "${FLOWISE_PORT}:3000"
    environment:
      # --- Authentication ---
      FLOWISE_USERNAME: ${FLOWISE_USERNAME}
      FLOWISE_PASSWORD: ${FLOWISE_PASSWORD}
      FLOWISE_SECRETKEY_OVERWRITE: ${FLOWISE_SECRETKEY_OVERWRITE}

      # --- Database ---
      DATABASE_TYPE: ${DATABASE_TYPE}
      DATABASE_HOST: ${DATABASE_HOST}
      DATABASE_PORT: ${DATABASE_PORT}
      DATABASE_NAME: ${DATABASE_NAME}
      DATABASE_USER: ${DATABASE_USER}
      DATABASE_PASSWORD: ${DATABASE_PASSWORD}

      # --- General ---
      TZ: ${TZ}

      # --- Logging ---
      LOG_LEVEL: info
      DEBUG: "false"

    volumes:
      - ./volumes/flowise-data:/root/.flowise
    networks:
      - flowise-net

networks:
  flowise-net:
    driver: bridge
YML
)"

# Docker registry mirror (if APT proxy is set)
if [[ -n "${APT_PROXY}" ]]; then
  pct_exec "$CTID" "mkdir -p /etc/docker"
  pct_exec "$CTID" "cat > /etc/docker/daemon.json <<EOF
{
  \"registry-mirrors\": [\"${DOCKER_REGISTRY_MIRROR}\"]
}
EOF"
  pct_exec "$CTID" "systemctl restart docker"
  pct_exec "$CTID" "systemctl is-active docker"
  pct_exec "$CTID" "docker info | grep -A2 -i 'Registry Mirrors'"
fi

# Pull + up
pct_exec "${CTID}" "cd /opt/flowise-stack && docker compose pull"
pct_exec "${CTID}" "cd /opt/flowise-stack && docker compose up -d"
pct_exec "${CTID}" "cd /opt/flowise-stack && docker compose ps"

# Wait for Flowise to be ready
info "Waiting for Flowise to be ready..."
sleep 10

# Final info
FLOWISE_INTERNAL_URL="http://${CT_IP}:${FLOWISE_PORT}/"
FLOWISE_EXTERNAL_URL="https://${FQDN}"

info "Step 3 OK: Stack deployed"
info "Flowise internal: ${FLOWISE_INTERNAL_URL}"
info "Flowise external (planned via OPNsense): ${FLOWISE_EXTERNAL_URL}"

# Machine-readable JSON output
JSON_OUTPUT="{\"ctid\":${CTID},\"hostname\":\"${CT_HOSTNAME}\",\"fqdn\":\"${FQDN}\",\"ip\":\"${CT_IP}\",\"vlan\":${VLAN},\"urls\":{\"flowise_internal\":\"${FLOWISE_INTERNAL_URL}\",\"flowise_external\":\"${FLOWISE_EXTERNAL_URL}\"},\"postgres\":{\"host\":\"postgres\",\"port\":5432,\"db\":\"${PG_DB}\",\"user\":\"${PG_USER}\",\"password\":\"${PG_PASSWORD}\"},\"flowise\":{\"username\":\"${FLOWISE_USER}\",\"password\":\"${FLOWISE_PASS}\",\"secret_key\":\"${FLOWISE_SECRETKEY}\"},\"log_file\":\"${FINAL_LOG}\"}"

if [[ "$DEBUG" == "1" ]]; then
  # Debug mode: pretty-print the JSON for readability
  echo "$JSON_OUTPUT" | python3 -m json.tool 2>/dev/null || echo "$JSON_OUTPUT"
else
  # Normal mode: compact JSON to the original stdout (fd 3)
  echo "$JSON_OUTPUT" >&3
fi
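The pretty-print line above degrades gracefully: if python3 is missing or the JSON is malformed, the `||` branch falls back to echoing the raw string. A standalone sketch of the same pattern (the example JSON values are placeholders):

```shell
JSON_OUTPUT='{"ctid": 769173910, "hostname": "fw-1769173910"}'
# Pretty-print if python3 can parse it; otherwise fall back to the raw string.
echo "$JSON_OUTPUT" | python3 -m json.tool 2>/dev/null || echo "$JSON_OUTPUT"
```

Either way the consumer sees valid JSON containing the same keys; only the whitespace differs.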

596
libsupabase.sh
@@ -214,3 +214,599 @@ emit_json() {
# prints to stdout only; keep logs on stderr
  cat
}

# ----- n8n API helpers -----
# These functions interact with the n8n REST API inside a container

# Log in to n8n and save the session cookie
# Usage: n8n_api_login <ctid> <email> <password>
# Returns: 0 on success, 1 on failure
# Side effect: creates /tmp/n8n_cookies.txt in the container
n8n_api_login() {
  local ctid="$1"
  local email="$2"
  local password="$3"
  local api_url="http://127.0.0.1:5678"

  info "n8n API: Logging in as ${email}..."

  # Escape special characters in the password for JSON
  local escaped_password
  escaped_password=$(echo "$password" | sed 's/\\/\\\\/g; s/"/\\"/g')

  # Current n8n builds expect 'emailOrLdapLoginId' instead of 'email' in the login payload
  local response
  response=$(pct exec "$ctid" -- bash -c "curl -sS -X POST '${api_url}/rest/login' \
    -H 'Content-Type: application/json' \
    -c /tmp/n8n_cookies.txt \
    -d '{\"emailOrLdapLoginId\":\"${email}\",\"password\":\"${escaped_password}\"}' 2>&1" || echo "CURL_FAILED")

  if [[ "$response" == *"CURL_FAILED"* ]] || [[ "$response" == *"error"* && "$response" != *"data"* ]]; then
    warn "n8n API login failed: ${response}"
    return 1
  fi

  info "n8n API: Login successful"
  return 0
}
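The two-step sed escape above must replace backslashes before quotes; in the reverse order, the backslash inserted by the quote escape would itself be doubled. A standalone check of the ordering (the sample password is illustrative):

```shell
# Escape a password for embedding in a JSON string: backslashes first, then quotes.
password='pa"ss\word'
escaped=$(echo "$password" | sed 's/\\/\\\\/g; s/"/\\"/g')
echo "$escaped"   # → pa\"ss\\word
```

Note this covers only backslashes and quotes; passwords containing other JSON-significant characters (control characters, for example) would still need a fuller escaper.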

# Create PostgreSQL credential in n8n
# Usage: n8n_api_create_postgres_credential <ctid> <name> <host> <port> <database> <user> <password>
# Returns: credential ID on stdout, or empty on failure
n8n_api_create_postgres_credential() {
  local ctid="$1"
  local name="$2"
  local host="$3"
  local port="$4"
  local database="$5"
  local user="$6"
  local password="$7"
  local api_url="http://127.0.0.1:5678"

  info "n8n API: Creating PostgreSQL credential '${name}'..."

  # Escape special characters in the password for JSON
  local escaped_password
  escaped_password=$(echo "$password" | sed 's/\\/\\\\/g; s/"/\\"/g')

  local response
  response=$(pct exec "$ctid" -- bash -c "curl -sS -X POST '${api_url}/rest/credentials' \
    -H 'Content-Type: application/json' \
    -b /tmp/n8n_cookies.txt \
    -d '{
      \"name\": \"${name}\",
      \"type\": \"postgres\",
      \"data\": {
        \"host\": \"${host}\",
        \"port\": ${port},
        \"database\": \"${database}\",
        \"user\": \"${user}\",
        \"password\": \"${escaped_password}\",
        \"ssl\": \"disable\"
      }
    }' 2>&1" || echo "")

  # Extract the credential ID from the response
  local cred_id
  cred_id=$(echo "$response" | grep -oP '"id"\s*:\s*"\K[^"]+' | head -1 || echo "")

  if [[ -n "$cred_id" ]]; then
    info "n8n API: PostgreSQL credential created with ID: ${cred_id}"
    echo "$cred_id"
    return 0
  else
    warn "n8n API: Failed to create PostgreSQL credential: ${response}"
    echo ""
    return 1
  fi
}
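The `grep -oP` pattern pulls the first `"id"` value out of the API response without needing jq. It matches the first `"id"` occurring anywhere in the JSON, so it assumes the credential's own id comes before any nested ids. A standalone check (GNU grep required for `-P`; the sample response is fabricated for illustration):

```shell
# Extract the first "id" value from a JSON response without jq.
response='{"data":{"id":"cred-abc123","name":"PostgreSQL (local)","type":"postgres"}}'
cred_id=$(echo "$response" | grep -oP '"id"\s*:\s*"\K[^"]+' | head -1)
echo "$cred_id"   # → cred-abc123
```

The `\K` resets the match start, so only the value itself is captured; `head -1` guards against multiple matches.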
|
||||
|
||||
# Create Ollama credential in n8n
|
||||
# Usage: n8n_api_create_ollama_credential <ctid> <name> <base_url>
|
||||
# Returns: Credential ID on stdout, or empty on failure
n8n_api_create_ollama_credential() {
    local ctid="$1"
    local name="$2"
    local base_url="$3"
    local api_url="http://127.0.0.1:5678"

    info "n8n API: Creating Ollama credential '${name}'..."

    local response
    response=$(pct exec "$ctid" -- bash -c "curl -sS -X POST '${api_url}/rest/credentials' \
        -H 'Content-Type: application/json' \
        -b /tmp/n8n_cookies.txt \
        -d '{
            \"name\": \"${name}\",
            \"type\": \"ollamaApi\",
            \"data\": {
                \"baseUrl\": \"${base_url}\"
            }
        }' 2>&1" || echo "")

    # Extract the credential ID from the response
    local cred_id
    cred_id=$(echo "$response" | grep -oP '"id"\s*:\s*"\K[^"]+' | head -1 || echo "")

    if [[ -n "$cred_id" ]]; then
        info "n8n API: Ollama credential created with ID: ${cred_id}"
        echo "$cred_id"
        return 0
    else
        warn "n8n API: Failed to create Ollama credential: ${response}"
        echo ""
        return 1
    fi
}

# Import a workflow into n8n
# Usage: n8n_api_import_workflow <ctid> <workflow_json_file_in_container>
# Returns: Workflow ID on stdout, or empty on failure
n8n_api_import_workflow() {
    local ctid="$1"
    local workflow_file="$2"
    local api_url="http://127.0.0.1:5678"

    info "n8n API: Importing workflow from ${workflow_file}..."

    local response
    response=$(pct exec "$ctid" -- bash -c "curl -sS -X POST '${api_url}/rest/workflows' \
        -H 'Content-Type: application/json' \
        -b /tmp/n8n_cookies.txt \
        -d @${workflow_file} 2>&1" || echo "")

    # Extract the workflow ID from the response
    local workflow_id
    workflow_id=$(echo "$response" | grep -oP '"id"\s*:\s*"\K[^"]+' | head -1 || echo "")

    if [[ -n "$workflow_id" ]]; then
        info "n8n API: Workflow imported with ID: ${workflow_id}"
        echo "$workflow_id"
        return 0
    else
        warn "n8n API: Failed to import workflow: ${response}"
        echo ""
        return 1
    fi
}

# Activate a workflow in n8n
# Usage: n8n_api_activate_workflow <ctid> <workflow_id>
# Returns: 0 on success, 1 on failure
n8n_api_activate_workflow() {
    local ctid="$1"
    local workflow_id="$2"
    local api_url="http://127.0.0.1:5678"

    info "n8n API: Activating workflow ${workflow_id}..."

    local response
    response=$(pct exec "$ctid" -- bash -c "curl -sS -X PATCH '${api_url}/rest/workflows/${workflow_id}' \
        -H 'Content-Type: application/json' \
        -b /tmp/n8n_cookies.txt \
        -d '{\"active\": true}' 2>&1" || echo "")

    if [[ "$response" == *"\"active\":true"* ]] || [[ "$response" == *"\"active\": true"* ]]; then
        info "n8n API: Workflow ${workflow_id} activated successfully"
        return 0
    else
        warn "n8n API: Failed to activate workflow: ${response}"
        return 1
    fi
}
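All of these helpers pull the new object's ID out of the raw JSON response with a GNU `grep -oP` look-behind rather than a JSON parser. A self-contained sketch of that extraction (the sample response below is illustrative, not captured n8n output):

```shell
# Hypothetical response body, shaped like a typical n8n REST reply.
response='{"data":{"id":"AbC123xyz","name":"Ollama (local)","type":"ollamaApi"}}'

# \K discards the matched prefix, so only the ID itself is printed.
cred_id=$(echo "$response" | grep -oP '"id"\s*:\s*"\K[^"]+' | head -1)
echo "$cred_id"
```

This requires GNU grep (`-P` for PCRE); on systems without PCRE support, `jq -r '.data.id'` would be the more robust choice.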

# Generate the RAG workflow JSON with the given credential IDs
# Usage: n8n_generate_rag_workflow_json <postgres_cred_id> [postgres_cred_name] <ollama_cred_id> [ollama_cred_name] [ollama_model] [embedding_model]
# Returns: Workflow JSON on stdout
n8n_generate_rag_workflow_json() {
    local postgres_cred_id="$1"
    local postgres_cred_name="${2:-PostgreSQL (local)}"
    local ollama_cred_id="$3"
    local ollama_cred_name="${4:-Ollama (local)}"
    local ollama_model="${5:-llama3.2:3b}"
    local embedding_model="${6:-nomic-embed-text:v1.5}"

    cat <<WORKFLOW_JSON
{
  "name": "RAG KI-Bot (PGVector)",
  "nodes": [
    {
      "parameters": {
        "public": true,
        "options": {}
      },
      "type": "@n8n/n8n-nodes-langchain.chatTrigger",
      "typeVersion": 1.3,
      "position": [0, 0],
      "id": "chat-trigger-001",
      "name": "When chat message received",
      "webhookId": "rag-chat-webhook",
      "notesInFlow": true,
      "notes": "Chat URL: /webhook/rag-chat-webhook/chat"
    },
    {
      "parameters": {
        "promptType": "define",
        "text": "={{ \$json.chatInput }}\nAntworte ausschliesslich auf Deutsch",
        "options": {}
      },
      "type": "@n8n/n8n-nodes-langchain.agent",
      "typeVersion": 2.2,
      "position": [208, 0],
      "id": "ai-agent-001",
      "name": "AI Agent"
    },
    {
      "parameters": {
        "model": "${ollama_model}",
        "options": {}
      },
      "type": "@n8n/n8n-nodes-langchain.lmChatOllama",
      "typeVersion": 1,
      "position": [64, 208],
      "id": "ollama-chat-001",
      "name": "Ollama Chat Model",
      "credentials": {
        "ollamaApi": {
          "id": "${ollama_cred_id}",
          "name": "${ollama_cred_name}"
        }
      }
    },
    {
      "parameters": {},
      "type": "@n8n/n8n-nodes-langchain.memoryBufferWindow",
      "typeVersion": 1.3,
      "position": [224, 208],
      "id": "memory-001",
      "name": "Simple Memory"
    },
    {
      "parameters": {
        "mode": "retrieve-as-tool",
        "toolName": "knowledge_base",
        "toolDescription": "Verwende dieses Tool für Infos die der Benutzer fragt. Sucht in der Wissensdatenbank nach relevanten Dokumenten.",
        "tableName": "documents",
        "options": {}
      },
      "type": "@n8n/n8n-nodes-langchain.vectorStorePGVector",
      "typeVersion": 1,
      "position": [432, 128],
      "id": "pgvector-retrieve-001",
      "name": "PGVector Store",
      "credentials": {
        "postgres": {
          "id": "${postgres_cred_id}",
          "name": "${postgres_cred_name}"
        }
      }
    },
    {
      "parameters": {
        "model": "${embedding_model}"
      },
      "type": "@n8n/n8n-nodes-langchain.embeddingsOllama",
      "typeVersion": 1,
      "position": [384, 320],
      "id": "embeddings-retrieve-001",
      "name": "Embeddings Ollama",
      "credentials": {
        "ollamaApi": {
          "id": "${ollama_cred_id}",
          "name": "${ollama_cred_name}"
        }
      }
    },
    {
      "parameters": {
        "formTitle": "Dokument hochladen",
        "formDescription": "Laden Sie ein PDF-Dokument hoch, um es in die Wissensdatenbank aufzunehmen.",
        "formFields": {
          "values": [
            {
              "fieldLabel": "Dokument",
              "fieldType": "file",
              "acceptFileTypes": ".pdf"
            }
          ]
        },
        "options": {}
      },
      "type": "n8n-nodes-base.formTrigger",
      "typeVersion": 2.3,
      "position": [768, 0],
      "id": "form-trigger-001",
      "name": "On form submission",
      "webhookId": "rag-upload-form"
    },
    {
      "parameters": {
        "operation": "pdf",
        "binaryPropertyName": "Dokument",
        "options": {}
      },
      "type": "n8n-nodes-base.extractFromFile",
      "typeVersion": 1,
      "position": [976, 0],
      "id": "extract-file-001",
      "name": "Extract from File"
    },
    {
      "parameters": {
        "mode": "insert",
        "tableName": "documents",
        "options": {}
      },
      "type": "@n8n/n8n-nodes-langchain.vectorStorePGVector",
      "typeVersion": 1,
      "position": [1184, 0],
      "id": "pgvector-insert-001",
      "name": "PGVector Store Insert",
      "credentials": {
        "postgres": {
          "id": "${postgres_cred_id}",
          "name": "${postgres_cred_name}"
        }
      }
    },
    {
      "parameters": {
        "model": "${embedding_model}"
      },
      "type": "@n8n/n8n-nodes-langchain.embeddingsOllama",
      "typeVersion": 1,
      "position": [1168, 240],
      "id": "embeddings-insert-001",
      "name": "Embeddings Ollama1",
      "credentials": {
        "ollamaApi": {
          "id": "${ollama_cred_id}",
          "name": "${ollama_cred_name}"
        }
      }
    },
    {
      "parameters": {
        "options": {}
      },
      "type": "@n8n/n8n-nodes-langchain.documentDefaultDataLoader",
      "typeVersion": 1.1,
      "position": [1392, 240],
      "id": "data-loader-001",
      "name": "Default Data Loader"
    }
  ],
  "connections": {
    "When chat message received": {
      "main": [[{"node": "AI Agent", "type": "main", "index": 0}]]
    },
    "Ollama Chat Model": {
      "ai_languageModel": [[{"node": "AI Agent", "type": "ai_languageModel", "index": 0}]]
    },
    "Simple Memory": {
      "ai_memory": [[{"node": "AI Agent", "type": "ai_memory", "index": 0}]]
    },
    "PGVector Store": {
      "ai_tool": [[{"node": "AI Agent", "type": "ai_tool", "index": 0}]]
    },
    "Embeddings Ollama": {
      "ai_embedding": [[{"node": "PGVector Store", "type": "ai_embedding", "index": 0}]]
    },
    "On form submission": {
      "main": [[{"node": "Extract from File", "type": "main", "index": 0}]]
    },
    "Extract from File": {
      "main": [[{"node": "PGVector Store Insert", "type": "main", "index": 0}]]
    },
    "Embeddings Ollama1": {
      "ai_embedding": [[{"node": "PGVector Store Insert", "type": "ai_embedding", "index": 0}]]
    },
    "Default Data Loader": {
      "ai_document": [[{"node": "PGVector Store Insert", "type": "ai_document", "index": 0}]]
    }
  },
  "settings": {
    "executionOrder": "v1"
  }
}
WORKFLOW_JSON
}
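Because the workflow JSON above is produced by plain heredoc interpolation, a malformed model name or credential ID silently yields invalid JSON. A minimal sketch of the same templating pattern with a validation step appended (the field names here are illustrative, not the full workflow):

```shell
ollama_model="llama3.2:3b"
json=$(cat <<EOF
{
  "name": "RAG KI-Bot (PGVector)",
  "model": "${ollama_model}"
}
EOF
)

# python3 -m json.tool exits non-zero on malformed JSON (jq 'empty' works too).
echo "$json" | python3 -m json.tool >/dev/null && echo "valid JSON"
```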

# Clean up the n8n API session
# Usage: n8n_api_cleanup <ctid>
n8n_api_cleanup() {
    local ctid="$1"
    pct exec "$ctid" -- bash -c "rm -f /tmp/n8n_cookies.txt /tmp/rag_workflow.json" 2>/dev/null || true
}

# Full n8n setup: create credentials, import the workflow, activate it.
# All API calls run in a single shell session inside the container so that
# the login cookie is preserved between requests.
# Usage: n8n_setup_rag_workflow <ctid> <email> <password> <pg_host> <pg_port> <pg_db> <pg_user> <pg_pass> <ollama_url> [ollama_model] [embedding_model]
# Returns: 0 on success, 1 on failure
n8n_setup_rag_workflow() {
    local ctid="$1"
    local email="$2"
    local password="$3"
    local pg_host="$4"
    local pg_port="$5"
    local pg_db="$6"
    local pg_user="$7"
    local pg_pass="$8"
    local ollama_url="$9"
    local ollama_model="${10:-llama3.2:3b}"
    local embedding_model="${11:-nomic-embed-text:v1.5}"

    info "n8n Setup: Starting RAG workflow setup..."

    # Wait for n8n to be ready (up to 60 seconds; if the timeout elapses,
    # the login below fails and reports the error)
    info "n8n Setup: Waiting for n8n to be ready..."
    local i
    for i in $(seq 1 30); do
        if pct exec "$ctid" -- bash -c "curl -sS -o /dev/null -w '%{http_code}' http://127.0.0.1:5678/rest/settings 2>/dev/null" | grep -q "200"; then
            info "n8n Setup: n8n is ready"
            break
        fi
        sleep 2
    done

    # Escape special characters in passwords for JSON
    local escaped_password
    escaped_password=$(echo "$password" | sed 's/\\/\\\\/g; s/"/\\"/g')
    local escaped_pg_pass
    escaped_pg_pass=$(echo "$pg_pass" | sed 's/\\/\\\\/g; s/"/\\"/g')

    # Generate the workflow JSON with placeholder credential IDs
    # (the placeholders are replaced inside the container once the real IDs exist)
    info "n8n Setup: Generating workflow JSON..."
    local workflow_json
    workflow_json=$(n8n_generate_rag_workflow_json "POSTGRES_CRED_ID" "PostgreSQL (local)" "OLLAMA_CRED_ID" "Ollama (local)" "$ollama_model" "$embedding_model")

    # Push the workflow JSON to the container
    pct_push_text "$ctid" "/tmp/rag_workflow_template.json" "$workflow_json"

    # Create a setup script that runs all API calls in one session
    info "n8n Setup: Creating setup script..."
    pct_push_text "$ctid" "/tmp/n8n_setup.sh" "$(cat <<SETUP_SCRIPT
#!/bin/bash
set -e

API_URL="http://127.0.0.1:5678"
COOKIE_FILE="/tmp/n8n_cookies.txt"
EMAIL="${email}"
PASSWORD="${escaped_password}"

# Login (the n8n API expects 'emailOrLdapLoginId' instead of 'email')
echo "Logging in..."
LOGIN_RESP=\$(curl -sS -X POST "\${API_URL}/rest/login" \\
  -H "Content-Type: application/json" \\
  -c "\${COOKIE_FILE}" \\
  -d "{\"emailOrLdapLoginId\":\"\${EMAIL}\",\"password\":\"\${PASSWORD}\"}")

if echo "\$LOGIN_RESP" | grep -q '"code":\|"status":"error"'; then
  echo "LOGIN_FAILED: \$LOGIN_RESP"
  exit 1
fi
echo "Login successful"

# Create the PostgreSQL credential
echo "Creating PostgreSQL credential..."
PG_CRED_RESP=\$(curl -sS -X POST "\${API_URL}/rest/credentials" \\
  -H "Content-Type: application/json" \\
  -b "\${COOKIE_FILE}" \\
  -d '{
    "name": "PostgreSQL (local)",
    "type": "postgres",
    "data": {
      "host": "${pg_host}",
      "port": ${pg_port},
      "database": "${pg_db}",
      "user": "${pg_user}",
      "password": "${escaped_pg_pass}",
      "ssl": "disable"
    }
  }')

PG_CRED_ID=\$(echo "\$PG_CRED_RESP" | grep -oP '"id"\s*:\s*"\K[^"]+' | head -1)
if [ -z "\$PG_CRED_ID" ]; then
  echo "POSTGRES_CRED_FAILED: \$PG_CRED_RESP"
  exit 1
fi
echo "PostgreSQL credential created: \$PG_CRED_ID"

# Create the Ollama credential
echo "Creating Ollama credential..."
OLLAMA_CRED_RESP=\$(curl -sS -X POST "\${API_URL}/rest/credentials" \\
  -H "Content-Type: application/json" \\
  -b "\${COOKIE_FILE}" \\
  -d '{
    "name": "Ollama (local)",
    "type": "ollamaApi",
    "data": {
      "baseUrl": "${ollama_url}"
    }
  }')

OLLAMA_CRED_ID=\$(echo "\$OLLAMA_CRED_RESP" | grep -oP '"id"\s*:\s*"\K[^"]+' | head -1)
if [ -z "\$OLLAMA_CRED_ID" ]; then
  echo "OLLAMA_CRED_FAILED: \$OLLAMA_CRED_RESP"
  exit 1
fi
echo "Ollama credential created: \$OLLAMA_CRED_ID"

# Replace the placeholder IDs in the workflow JSON
echo "Preparing workflow JSON..."
sed -e "s/POSTGRES_CRED_ID/\$PG_CRED_ID/g" -e "s/OLLAMA_CRED_ID/\$OLLAMA_CRED_ID/g" /tmp/rag_workflow_template.json > /tmp/rag_workflow.json

# Import the workflow
echo "Importing workflow..."
WORKFLOW_RESP=\$(curl -sS -X POST "\${API_URL}/rest/workflows" \\
  -H "Content-Type: application/json" \\
  -b "\${COOKIE_FILE}" \\
  -d @/tmp/rag_workflow.json)

WORKFLOW_ID=\$(echo "\$WORKFLOW_RESP" | grep -oP '"id"\s*:\s*"\K[^"]+' | head -1)
if [ -z "\$WORKFLOW_ID" ]; then
  echo "WORKFLOW_IMPORT_FAILED: \$WORKFLOW_RESP"
  exit 1
fi
echo "Workflow imported: \$WORKFLOW_ID"

# Activate the workflow
echo "Activating workflow..."
ACTIVATE_RESP=\$(curl -sS -X PATCH "\${API_URL}/rest/workflows/\${WORKFLOW_ID}" \\
  -H "Content-Type: application/json" \\
  -b "\${COOKIE_FILE}" \\
  -d '{"active": true}')

if echo "\$ACTIVATE_RESP" | grep -q '"active":true\|"active": true'; then
  echo "Workflow activated successfully"
else
  echo "WORKFLOW_ACTIVATION_WARNING: \$ACTIVATE_RESP"
fi

# Cleanup
rm -f "\${COOKIE_FILE}" /tmp/rag_workflow_template.json /tmp/rag_workflow.json

# Output results
echo "SUCCESS"
echo "POSTGRES_CRED_ID=\$PG_CRED_ID"
echo "OLLAMA_CRED_ID=\$OLLAMA_CRED_ID"
echo "WORKFLOW_ID=\$WORKFLOW_ID"
SETUP_SCRIPT
)"

    # Make the script executable and run it
    pct exec "$ctid" -- chmod +x /tmp/n8n_setup.sh

    info "n8n Setup: Running setup script in container..."
    local setup_output
    setup_output=$(pct exec "$ctid" -- /tmp/n8n_setup.sh 2>&1 || echo "SCRIPT_FAILED")

    # Log the output
    info "n8n Setup: Script output:"
    echo "$setup_output" | while read -r line; do
        info "  $line"
    done

    # Check for success
    if echo "$setup_output" | grep -q "^SUCCESS$"; then
        # Extract the IDs from the output
        local pg_cred_id ollama_cred_id workflow_id
        pg_cred_id=$(echo "$setup_output" | grep "^POSTGRES_CRED_ID=" | cut -d= -f2)
        ollama_cred_id=$(echo "$setup_output" | grep "^OLLAMA_CRED_ID=" | cut -d= -f2)
        workflow_id=$(echo "$setup_output" | grep "^WORKFLOW_ID=" | cut -d= -f2)

        info "n8n Setup: RAG workflow setup completed successfully"
        info "n8n Setup: Workflow ID: ${workflow_id}"
        info "n8n Setup: PostgreSQL Credential ID: ${pg_cred_id}"
        info "n8n Setup: Ollama Credential ID: ${ollama_cred_id}"

        # Clean up the setup script
        pct exec "$ctid" -- rm -f /tmp/n8n_setup.sh 2>/dev/null || true

        return 0
    else
        warn "n8n Setup: Setup script failed"
        # Cleanup
        pct exec "$ctid" -- rm -f /tmp/n8n_setup.sh /tmp/n8n_cookies.txt /tmp/rag_workflow_template.json /tmp/rag_workflow.json 2>/dev/null || true
        return 1
    fi
}
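The in-container setup script reports its results as `KEY=value` lines on stdout, which `n8n_setup_rag_workflow` then splits with `grep`/`cut`. A self-contained sketch of that parsing (the sample output is fabricated for illustration, in the shape the script emits on success):

```shell
# Fabricated success output from the setup script.
setup_output=$'SUCCESS\nPOSTGRES_CRED_ID=pg01\nOLLAMA_CRED_ID=ol02\nWORKFLOW_ID=wf03'

# Anchor on the key name at line start, then take everything after '='.
workflow_id=$(echo "$setup_output" | grep "^WORKFLOW_ID=" | cut -d= -f2)
pg_cred_id=$(echo "$setup_output" | grep "^POSTGRES_CRED_ID=" | cut -d= -f2)
echo "$workflow_id"
echo "$pg_cred_id"
```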

269  setup_flowise_account.sh  Executable file
@@ -0,0 +1,269 @@
#!/usr/bin/env bash
set -Eeuo pipefail

# =============================================================================
# Flowise Account Setup Script
# =============================================================================
# Creates the administrator account for a new Flowise instance
# via the Flowise API (/api/v1/organization/setup)
# =============================================================================

SCRIPT_VERSION="1.0.1"

# Debug mode: 0 = JSON output only, 1 = logs on stderr
DEBUG="${DEBUG:-0}"
export DEBUG

# Logging functions
log_ts() { date "+[%F %T]"; }
info() { [[ "$DEBUG" == "1" ]] && echo "$(log_ts) INFO: $*" >&2; return 0; }
warn() { [[ "$DEBUG" == "1" ]] && echo "$(log_ts) WARN: $*" >&2; return 0; }
die() {
    if [[ "$DEBUG" == "1" ]]; then
        echo "$(log_ts) ERROR: $*" >&2
    else
        echo "{\"error\": \"$*\"}"
    fi
    exit 1
}

# =============================================================================
# Usage
# =============================================================================
usage() {
    cat >&2 <<'EOF'
Usage:
  bash setup_flowise_account.sh [options]

Required options:
  --url <url>            Flowise base URL (e.g., https://fw-1768829679.userman.de)
  --name <name>          Administrator display name
  --email <email>        Administrator email (used as login)
  --password <password>  Administrator password (8+ chars, upper, lower, digit, special)

Optional:
  --basic-user <user>    Basic Auth username (if Flowise has FLOWISE_USERNAME set)
  --basic-pass <pass>    Basic Auth password (if Flowise has FLOWISE_PASSWORD set)
  --debug                Enable debug mode (show logs on stderr)
  --help                 Show this help

Password requirements:
  - At least 8 characters
  - At least one lowercase letter
  - At least one uppercase letter
  - At least one digit
  - At least one special character

Examples:
  # Set up the account:
  bash setup_flowise_account.sh \
    --url https://fw-1768829679.userman.de \
    --name "Admin User" \
    --email admin@example.com \
    --password "SecurePass1!"

  # With debug output:
  bash setup_flowise_account.sh --debug \
    --url https://fw-1768829679.userman.de \
    --name "Admin User" \
    --email admin@example.com \
    --password "SecurePass1!"
EOF
}

# =============================================================================
# Default values
# =============================================================================
FLOWISE_URL=""
ADMIN_NAME=""
ADMIN_EMAIL=""
ADMIN_PASSWORD=""
BASIC_USER=""
BASIC_PASS=""

# =============================================================================
# Argument parsing
# =============================================================================
while [[ $# -gt 0 ]]; do
    case "$1" in
        --url)        FLOWISE_URL="${2:-}";    shift 2 ;;
        --name)       ADMIN_NAME="${2:-}";     shift 2 ;;
        --email)      ADMIN_EMAIL="${2:-}";    shift 2 ;;
        --password)   ADMIN_PASSWORD="${2:-}"; shift 2 ;;
        --basic-user) BASIC_USER="${2:-}";     shift 2 ;;
        --basic-pass) BASIC_PASS="${2:-}";     shift 2 ;;
        --debug)      DEBUG="1"; export DEBUG; shift 1 ;;
        --help|-h)    usage; exit 0 ;;
        *)            die "Unknown option: $1 (use --help)" ;;
    esac
done

# =============================================================================
# Validation
# =============================================================================
[[ -n "$FLOWISE_URL" ]]    || die "--url is required"
[[ -n "$ADMIN_NAME" ]]     || die "--name is required"
[[ -n "$ADMIN_EMAIL" ]]    || die "--email is required"
[[ -n "$ADMIN_PASSWORD" ]] || die "--password is required"

# Validate email format
[[ "$ADMIN_EMAIL" =~ ^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$ ]] || die "Invalid email format: $ADMIN_EMAIL"

# Validate password policy (Flowise requirements)
validate_password() {
    local p="$1"
    [[ ${#p} -ge 8 ]] || return 1
    [[ "$p" =~ [a-z] ]] || return 1
    [[ "$p" =~ [A-Z] ]] || return 1
    [[ "$p" =~ [0-9] ]] || return 1
    [[ "$p" =~ [^a-zA-Z0-9] ]] || return 1
    return 0
}

validate_password "$ADMIN_PASSWORD" || die "Password does not meet requirements: 8+ chars, lowercase, uppercase, digit, special character"
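The policy enforced by `validate_password` can be exercised in isolation. This sketch repeats the function with two illustrative inputs, one passing and one failing the uppercase rule:

```shell
validate_password() {
    local p="$1"
    [[ ${#p} -ge 8 ]] || return 1           # minimum length
    [[ "$p" =~ [a-z] ]] || return 1         # at least one lowercase letter
    [[ "$p" =~ [A-Z] ]] || return 1         # at least one uppercase letter
    [[ "$p" =~ [0-9] ]] || return 1         # at least one digit
    [[ "$p" =~ [^a-zA-Z0-9] ]] || return 1  # at least one special character
    return 0
}

validate_password "SecurePass1!" && echo "accepted"
validate_password "weakpass" || echo "rejected"
```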

# Remove the trailing slash from the URL
FLOWISE_URL="${FLOWISE_URL%/}"

info "Script Version: ${SCRIPT_VERSION}"
info "Configuration:"
info "  URL:      ${FLOWISE_URL}"
info "  Name:     ${ADMIN_NAME}"
info "  Email:    ${ADMIN_EMAIL}"
info "  Password: ********"
if [[ -n "$BASIC_USER" ]]; then
    info "  Basic Auth: ${BASIC_USER}:********"
fi

# Build curl auth options
CURL_AUTH=""
if [[ -n "$BASIC_USER" && -n "$BASIC_PASS" ]]; then
    CURL_AUTH="-u ${BASIC_USER}:${BASIC_PASS}"
fi

# =============================================================================
# Check that Flowise is reachable
# =============================================================================
info "Checking if Flowise is reachable..."

# Try to reach the organization-setup page
HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" -k ${CURL_AUTH} "${FLOWISE_URL}/organization-setup" 2>/dev/null || echo "000")

if [[ "$HTTP_CODE" == "000" ]]; then
    die "Cannot connect to Flowise at ${FLOWISE_URL}"
elif [[ "$HTTP_CODE" == "404" ]]; then
    warn "Organization setup page not found (404). Account may already exist."
fi

info "Flowise is reachable (HTTP ${HTTP_CODE})"

# =============================================================================
# Create the account via the API
# =============================================================================
info "Creating administrator account..."

# Prepare the JSON payload
# Note: Flowise expects these specific field names
JSON_PAYLOAD=$(cat <<EOF
{
  "name": "${ADMIN_NAME}",
  "email": "${ADMIN_EMAIL}",
  "password": "${ADMIN_PASSWORD}"
}
EOF
)
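One caveat with the heredoc payload above: a name or password containing a double quote or backslash is interpolated verbatim and produces broken JSON. A hedged sketch of a safer construction via `python3` (jq's `--arg` would serve equally well); the values here are illustrative:

```shell
ADMIN_NAME='Ad"min User'   # a value that would break naive interpolation

# json.dumps escapes quotes and backslashes for us.
payload=$(python3 -c '
import json, sys
print(json.dumps({"name": sys.argv[1], "email": sys.argv[2], "password": sys.argv[3]}))
' "$ADMIN_NAME" "admin@example.com" 'SecurePass1!')
echo "$payload"
```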

info "Sending request to ${FLOWISE_URL}/api/v1/organization/setup"

# Make the API request
RESPONSE=$(curl -s -k ${CURL_AUTH} -X POST \
    -H "Content-Type: application/json" \
    -d "${JSON_PAYLOAD}" \
    -w "\n%{http_code}" \
    "${FLOWISE_URL}/api/v1/organization/setup" 2>&1)

# Extract the HTTP code from the last line; the body is everything before it
HTTP_CODE=$(echo "$RESPONSE" | tail -n1)
RESPONSE_BODY=$(echo "$RESPONSE" | sed '$d')

info "HTTP Response Code: ${HTTP_CODE}"
info "Response Body: ${RESPONSE_BODY}"
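The `-w "\n%{http_code}"` trick appends the status code as a final line to the response body, which `tail`/`sed` then split apart. A self-contained sketch with a simulated response (no network call; the body is fabricated):

```shell
# Simulate what curl -w "\n%{http_code}" returns: body, newline, status code.
RESPONSE=$'{"success":true}\n201'

HTTP_CODE=$(echo "$RESPONSE" | tail -n1)      # last line: the status code
RESPONSE_BODY=$(echo "$RESPONSE" | sed '$d')  # everything before it: the body
echo "$HTTP_CODE"
echo "$RESPONSE_BODY"
```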

# =============================================================================
# Handle the response
# =============================================================================
if [[ "$HTTP_CODE" == "200" || "$HTTP_CODE" == "201" ]]; then
    info "Account created successfully!"

    # Output the result as JSON
    if [[ "$DEBUG" == "1" ]]; then
        cat <<EOF
{
  "success": true,
  "url": "${FLOWISE_URL}",
  "email": "${ADMIN_EMAIL}",
  "name": "${ADMIN_NAME}",
  "message": "Account created successfully"
}
EOF
    else
        echo "{\"success\":true,\"url\":\"${FLOWISE_URL}\",\"email\":\"${ADMIN_EMAIL}\",\"name\":\"${ADMIN_NAME}\",\"message\":\"Account created successfully\"}"
    fi

elif [[ "$HTTP_CODE" == "400" ]]; then
    # Check whether the account already exists
    if echo "$RESPONSE_BODY" | grep -qi "already exists\|already setup\|already registered"; then
        warn "Account may already exist"
        if [[ "$DEBUG" == "1" ]]; then
            cat <<EOF
{
  "success": false,
  "url": "${FLOWISE_URL}",
  "email": "${ADMIN_EMAIL}",
  "error": "Account already exists",
  "response": ${RESPONSE_BODY}
}
EOF
        else
            echo "{\"success\":false,\"url\":\"${FLOWISE_URL}\",\"email\":\"${ADMIN_EMAIL}\",\"error\":\"Account already exists\"}"
        fi
        exit 1
    else
        die "Bad request (400): ${RESPONSE_BODY}"
    fi

elif [[ "$HTTP_CODE" == "404" ]]; then
    # Try the alternative endpoint
    info "Trying alternative endpoint /api/v1/signup..."

    RESPONSE=$(curl -s -k ${CURL_AUTH} -X POST \
        -H "Content-Type: application/json" \
        -d "${JSON_PAYLOAD}" \
        -w "\n%{http_code}" \
        "${FLOWISE_URL}/api/v1/signup" 2>&1)

    HTTP_CODE=$(echo "$RESPONSE" | tail -n1)
    RESPONSE_BODY=$(echo "$RESPONSE" | sed '$d')

    if [[ "$HTTP_CODE" == "200" || "$HTTP_CODE" == "201" ]]; then
        info "Account created successfully via /api/v1/signup!"
        if [[ "$DEBUG" == "1" ]]; then
            cat <<EOF
{
  "success": true,
  "url": "${FLOWISE_URL}",
  "email": "${ADMIN_EMAIL}",
  "name": "${ADMIN_NAME}",
  "message": "Account created successfully"
}
EOF
        else
            echo "{\"success\":true,\"url\":\"${FLOWISE_URL}\",\"email\":\"${ADMIN_EMAIL}\",\"name\":\"${ADMIN_NAME}\",\"message\":\"Account created successfully\"}"
        fi
    else
        die "API endpoint not found. Tried /api/v1/organization/setup and /api/v1/signup. Response: ${RESPONSE_BODY}"
    fi

else
    die "Unexpected response (HTTP ${HTTP_CODE}): ${RESPONSE_BODY}"
fi