
LLM Suite Installation Guide (Whisper, Piper, Ollama, ha-llm-bridge, LiteLLM)

This guide describes, step by step, how to install the full LLM suite on a fresh server, including NVIDIA GPU support, Docker, the configuration files, and the Home Assistant integration.


0. System preparation (Ubuntu 22.04/24.04 with an NVIDIA GPU)

Updates and tools

sudo apt update && sudo apt -y upgrade
sudo apt -y install curl ca-certificates gnupg lsb-release git jq

Install the NVIDIA driver

sudo ubuntu-drivers install
sudo reboot

After the reboot, verify that the driver loads:

nvidia-smi

Install Docker

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt update
sudo apt -y install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
sudo usermod -aG docker $USER
newgrp docker
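
Optionally verify that Docker works for your user without sudo (note that newgrp only affects the current shell; log out and back in to make the group change permanent):

docker run --rm hello-world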

NVIDIA Container Toolkit

distribution=$(. /etc/os-release; echo $ID$VERSION_ID) \
  && curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey \
       | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit.gpg \
  && curl -fsSL https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list \
       | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit.gpg] https://#g' \
       | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

sudo apt update
sudo apt -y install nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
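
To confirm that containers can actually reach the GPU, run nvidia-smi inside a CUDA container; the image tag below is just an example, any CUDA base image will do:

docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi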

1. Directory structure & .env file

sudo mkdir -p /docker/appdata/llm-suite/{wyoming-whisper,wyoming-piper,ollama,ha-llm-bridge,litellm}
sudo mkdir -p /docker/appdata/llm-suite/ha-llm-bridge/app
sudo touch /docker/appdata/llm-suite/ha-llm-bridge/requirements.txt
sudo chown -R $USER: /docker/appdata

Create the .env file (replace both values with your own):

cat > /docker/appdata/llm-suite/.env << 'EOF'
HA_WEBHOOK_URL=https://jouw-ha-domein.example/api/webhook/ha-llm-bridge
LITELLM_MASTER_KEY=supergeheimesleutel
EOF
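
Because the .env file holds a secret, it is sensible to restrict its permissions:

chmod 600 /docker/appdata/llm-suite/.env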

2. Docker Compose file

Save this file as /docker/appdata/llm-suite/docker-compose.yml:

<-- full YAML as in the previous message (not repeated here for length) -->
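
For reference, below is a minimal sketch of what such a Compose file could look like. It is an outline under assumptions, not the original file: the image tags, the Whisper model, the Piper voice, the internal bridge port, and the build step for ha-llm-bridge (which presumes a Dockerfile in that directory that installs requirements.txt and starts uvicorn) are placeholders to adapt. Only the host ports match the health checks later in this guide:

services:
  wyoming-whisper:
    image: rhasspy/wyoming-whisper:latest
    command: --model medium-int8 --language nl
    ports:
      - "10300:10300"
    volumes:
      - ./wyoming-whisper:/data        # downloaded STT models
    restart: unless-stopped

  wyoming-piper:
    image: rhasspy/wyoming-piper:latest
    command: --voice nl_NL-mls-medium  # assumed Dutch voice; pick any Piper voice
    ports:
      - "10200:10200"
    volumes:
      - ./wyoming-piper:/data          # downloaded TTS voices
    restart: unless-stopped

  ollama:
    image: ollama/ollama:latest
    container_name: ollama             # matches the docker exec commands below
    ports:
      - "11434:11434"
    volumes:
      - ./ollama:/root/.ollama         # model storage
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    restart: unless-stopped

  ha-llm-bridge:
    build: ./ha-llm-bridge             # assumes a Dockerfile you provide
    env_file: .env
    environment:
      - OLLAMA_MODEL=qwen2:7b-instruct-q4_K_M
    ports:
      - "8082:8082"
    restart: unless-stopped

  litellm:
    image: ghcr.io/berriai/litellm:main-latest
    command: --config /app/config.yaml
    env_file: .env
    ports:
      - "4000:4000"
    volumes:
      - ./litellm/config.yaml:/app/config.yaml:ro
    restart: unless-stopped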

3. ha-llm-bridge files

requirements.txt

fastapi==0.115.5
uvicorn[standard]==0.32.0
httpx==0.27.2
pydantic==2.9.2

main.py

import os
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class EchoIn(BaseModel):
    text: str

@app.get("/healthz")
def health():
    # Liveness probe; used by the health checks in section 6.
    return {"status": "ok"}

@app.post("/echo")
def echo(payload: EchoIn):
    # Round-trip test that also reports which model the bridge is configured for.
    return {"ok": True, "echo": payload.text, "model": os.getenv("OLLAMA_MODEL", "unset")}

system_prompt.txt

Je bent een behulpzame Nederlandstalige assistent. Antwoord kort, feitelijk en bruikbaar voor Home Assistant automations.

(Kept in Dutch on purpose, since the assistant should answer in Dutch. In English: "You are a helpful Dutch-language assistant. Answer briefly, factually, and in a form usable for Home Assistant automations.")

entities.json

[
  "light.woonkamer",
  "switch.keuken",
  "media_player.tv",
  "climate.thermostaat"
]

4. LiteLLM configuration

Create /docker/appdata/llm-suite/litellm/config.yaml:

model_list:
  - model_name: qwen2-local
    litellm_params:
      model: ollama/qwen2:7b-instruct-q4_K_M
      api_base: http://ollama:11434
      timeout: 120

general_settings:
  master_key: os.environ/LITELLM_MASTER_KEY   # LiteLLM's syntax for reading an environment variable
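
To expose more than one local model through the same endpoint, append further entries under model_list; the second model here is only an example:

  - model_name: llama3-local
    litellm_params:
      model: ollama/llama3:8b
      api_base: http://ollama:11434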

5. Startup

cd /docker/appdata/llm-suite
docker compose -f docker-compose.yml up -d

Download the model:

docker exec -it ollama ollama pull qwen2:7b-instruct-q4_K_M
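
Verify that the model is available (the download is several GB, so the pull can take a while):

docker exec -it ollama ollama list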

6. Health Checks

curl -s http://localhost:11434/api/tags | jq
curl -s -H "Authorization: Bearer supergeheimesleutel" http://localhost:4000/v1/models | jq
curl -s http://localhost:8082/healthz
curl -s -X POST http://localhost:8082/echo -H 'Content-Type: application/json' -d '{"text":"ping"}'
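
As a final end-to-end test, send a chat completion through LiteLLM's OpenAI-compatible endpoint (this assumes the model has been pulled and uses the example master key from .env):

curl -s http://localhost:4000/v1/chat/completions \
  -H "Authorization: Bearer supergeheimesleutel" \
  -H "Content-Type: application/json" \
  -d '{"model": "qwen2-local", "messages": [{"role": "user", "content": "Say hello"}]}' | jq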

7. Home Assistant integration

In Home Assistant, add the following integrations (replace dockerhost with this server's IP or hostname):

  • STT: Wyoming protocol (Whisper) → dockerhost:10300
  • TTS: Wyoming protocol (Piper) → dockerhost:10200
  • LLM (OpenAI compatible):
    • Base URL: http://dockerhost:4000/v1
    • API Key: supergeheimesleutel
    • Model: qwen2-local
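
The HA_WEBHOOK_URL in .env points at a webhook on the Home Assistant side. A minimal sketch of the receiving automation (the webhook_id matches the last path segment of the URL; the notification action is only a placeholder):

automation:
  - alias: "ha-llm-bridge webhook"
    trigger:
      - platform: webhook
        webhook_id: ha-llm-bridge
    action:
      - service: persistent_notification.create
        data:
          message: "Bridge payload: {{ trigger.json }}"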

8. Management and upgrades

docker compose ps
docker logs -f ollama
docker compose pull && docker compose up -d
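
After pulling new images, old layers can be cleaned up with:

docker image prune -f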

9. Troubleshooting

  • GPU not visible → check nvidia-smi on the host and the Docker runtime configuration (the nvidia-ctk step above).
  • Model missing → pull the exact tag: docker exec -it ollama ollama pull qwen2:7b-instruct-q4_K_M.
  • Port conflicts → change the host-side port in the Compose file.
  • HA cannot connect → test with curl from the HA host (see the commands below).
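
For the last case, a quick connectivity check from the HA host against the HTTP services (replace dockerhost as before):

curl -s http://dockerhost:11434/api/tags
curl -s http://dockerhost:8082/healthz
curl -s -H "Authorization: Bearer supergeheimesleutel" http://dockerhost:4000/v1/models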

© 2025 Marco Voskuil — GPEU / voskuil.cloud
