homelab 26 March 2026

How my internal services were exposed to the internet

Found that my *.internal.gread.uk services were publicly reachable via the VPS TCP proxy. Built a three-layer defence to fix it.

Tags: security, traefik, nginx, wireguard

During a separate network investigation I noticed something that shouldn’t have been possible. My *.internal.gread.uk services — Grafana, Portainer, Pi-hole, Prometheus — were reachable from the public internet.

The problem

My VPS runs nginx as a dumb TCP stream proxy. It forwards all traffic on ports 80/443 to the NAS over a WireGuard tunnel without inspecting HTTP headers. Traefik on the NAS handles TLS termination and routing.
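For context, a pass-through proxy like that needs only a few lines. This is a minimal sketch of the "before" state, using the NAS's WireGuard address that appears in the fixed config below:

```nginx
# VPS: forward all TLS traffic to the NAS without inspecting it
stream {
    server {
        listen 443;
        proxy_pass 10.8.0.2:443;   # NAS over the WireGuard tunnel
    }
}
```

Nothing here looks at SNI, Host headers, or source IPs; every decision is deferred to Traefik on the NAS.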

The issue: *.internal.gread.uk DNS records point to 192.168.1.16 (my LAN IP, unreachable from the internet), so in theory anyone outside my network who resolves these hostnames gets an address they can't route to. But DNS isn't the only way to reach a service. If you connect directly to the VPS IP with the right Host header, nginx forwards it blindly to the NAS, Traefik matches the hostname, and you're in.

Proof of concept

import ssl, socket

# Deliberately skip certificate verification -- we only care whether
# the service answers, not whether the cert chain is valid.
context = ssl.create_default_context()
context.check_hostname = False
context.verify_mode = ssl.CERT_NONE

# Connect to the VPS IP directly, presenting the internal hostname as SNI
sock = socket.create_connection(("89.167.41.67", 443), timeout=5)
ssock = context.wrap_socket(sock, server_hostname="grafana.internal.gread.uk")

# ...and repeat it in the HTTP Host header so Traefik matches the router
request = b"GET / HTTP/1.1\r\nHost: grafana.internal.gread.uk\r\nConnection: close\r\n\r\n"
ssock.sendall(request)

response = b""
while True:
    chunk = ssock.recv(4096)
    if not chunk:
        break
    response += chunk

ssock.close()
print(response[:500].decode("utf-8", errors="ignore"))

Result: HTTP/1.1 302 Found — redirected to Grafana’s login page. The service was live and responding.

Three-layer fix

Each layer’s bypass condition is defeated by the next.

Layer 1: nginx SNI filtering (VPS)

Block connections at the TCP layer before they ever reach the NAS. nginx’s ssl_preread module inspects the SNI (Server Name Indication) field in the TLS ClientHello and drops anything matching *.internal.gread.uk.

stream {
    # Route by SNI before any bytes reach the NAS
    map $ssl_preread_server_name $upstream {
        ~*\.internal\.gread\.uk$  blocked;
        default                   nas_https;
    }

    upstream nas_https { server 10.8.0.2:443; }  # NAS over WireGuard
    upstream blocked   { server 127.0.0.1:9; }   # discard port: connection dies here

    server {
        listen 443;
        ssl_preread on;       # parse the ClientHello without terminating TLS
        proxy_pass $upstream;
        proxy_protocol on;    # pass the real client IP through to Traefik
    }
}

Bypass: an attacker could use a public SNI (e.g. photos.gread.uk) but send the internal hostname in the HTTP Host header. SNI filtering alone isn’t enough.
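To make that gap concrete, here is the PoC reworked into a probe whose TLS SNI and HTTP Host header are set independently, which is exactly the mismatch SNI-only filtering cannot see. A hedged sketch; the `probe` and `build_request` names are illustrative:

```python
import ssl
import socket


def build_request(host: str) -> bytes:
    """Minimal HTTP/1.1 request for the given Host header."""
    return f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n".encode()


def probe(ip: str, sni: str, host: str) -> bytes:
    """Open TLS using `sni` in the ClientHello, then send a request whose
    Host header is `host`. The two values never have to agree."""
    context = ssl.create_default_context()
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE
    with socket.create_connection((ip, 443), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=sni) as ssock:
            ssock.sendall(build_request(host))
            response = b""
            while chunk := ssock.recv(4096):
                response += chunk
            return response


# Usage: public SNI slips past the nginx filter, internal Host header
# is what Traefik actually routes on (network call, so commented out):
# probe("89.167.41.67", "photos.gread.uk", "grafana.internal.gread.uk")
```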

Layer 2: IP allowlist middleware (Traefik)

All internal Traefik routers get an IP allowlist middleware. VPS-forwarded traffic uses PROXY protocol, which exposes the real client IP — public internet IPs aren’t in the allowlist, so they get 403. Direct connections (LAN via Pi-hole DNS, Tailscale via subnet route) are masqueraded to 172.19.0.1 by Docker bridge NAT, which is in the allowlist.

CIDR              Purpose
172.19.0.1/32     Docker bridge gateway — LAN + Tailscale connections
192.168.1.0/24    Home LAN (belt-and-suspenders)
100.64.0.0/10     Tailscale CGNAT range (belt-and-suspenders)
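In Traefik's file-provider syntax, that middleware looks roughly like this (Traefik v3 calls it `ipAllowList`; in v2 the same middleware is `ipWhiteList`, and the middleware name here is illustrative):

```yaml
http:
  middlewares:
    internal-allowlist:
      ipAllowList:
        sourceRange:
          - "172.19.0.1/32"   # Docker bridge gateway (LAN + Tailscale)
          - "192.168.1.0/24"  # home LAN
          - "100.64.0.0/10"   # Tailscale CGNAT range
```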

Layer 3: Authentik forwardAuth

The final layer. Even if an attacker bypasses both SNI filtering and the IP allowlist, they need a valid Authentik session to access any internal service.
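A sketch of the corresponding middleware, following Authentik's documented Traefik forwardAuth path; the `authentik-server` hostname and port 9000 are assumptions about the Docker setup:

```yaml
http:
  middlewares:
    authentik-auth:
      forwardAuth:
        address: "http://authentik-server:9000/outpost.goauthentik.io/auth/traefik"
        trustForwardHeader: true
        authResponseHeaders:
          - X-authentik-username
          - X-authentik-groups
```

Every internal router then chains the allowlist and this middleware, so a request must pass both before reaching the service.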

Verification

Test                                              Expected              Layer
SNI grafana.internal.gread.uk direct to VPS       Connection refused    nginx SNI filter
SNI photos.gread.uk + Host grafana.internal...    403 Forbidden         IP allowlist
From LAN                                          200 OK                Allowed
Via Tailscale                                     200 OK                Allowed
Public services (photos.gread.uk)                 200 OK                Unaffected

Before fix: HTTP/1.1 302 Found (Grafana login page). After fix: HTTP/1.1 403 Forbidden.

Side discovery

Tailscale wasn’t routing subnet traffic correctly. The NAS needed --advertise-routes=192.168.1.0/24 and the client needed --accept-routes=true to route traffic through Tailscale rather than falling back to the VPS.
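For reference, the corresponding commands (run on the NAS and each client respectively; the advertised route also has to be approved in the Tailscale admin console):

```shell
# On the NAS: advertise the home LAN as a subnet route
sudo tailscale up --advertise-routes=192.168.1.0/24

# On each client: accept advertised subnet routes
sudo tailscale up --accept-routes=true
```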

Automated regression testing

Added a weekly GitHub Actions workflow that runs the PoC script against all internal service hostnames and fails if any return 200 or 302. Defence-in-depth includes knowing when your defences break.
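A sketch of what that weekly check can look like, reusing the PoC logic. The VPS IP and the Grafana hostname come from this post; the other hostnames in the list are illustrative placeholders, and the nonzero exit code is what the workflow would key on:

```python
import ssl
import socket
import sys

VPS_IP = "89.167.41.67"
# Only the Grafana hostname is confirmed; the rest are assumed names
INTERNAL_HOSTS = [
    "grafana.internal.gread.uk",
]


def status_code(response: bytes) -> int:
    """Extract the status code from a raw HTTP/1.1 response."""
    return int(response.split(b" ", 2)[1].decode())


def is_exposed(response: bytes) -> bool:
    """200 or 302 means the service answered -- the defences failed."""
    return status_code(response) in (200, 302)


def fetch(ip: str, host: str) -> bytes:
    context = ssl.create_default_context()
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE
    with socket.create_connection((ip, 443), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as ssock:
            ssock.sendall(
                f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n".encode()
            )
            response = b""
            while chunk := ssock.recv(4096):
                response += chunk
            return response


def main() -> int:
    failures = []
    for host in INTERNAL_HOSTS:
        try:
            if is_exposed(fetch(VPS_IP, host)):
                failures.append(host)
        except (OSError, ssl.SSLError):
            pass  # connection refused/reset is the expected, healthy outcome
    for host in failures:
        print(f"EXPOSED: {host}", file=sys.stderr)
    return 1 if failures else 0


# Network probe is gated behind an explicit flag so importing or testing
# this file never touches the VPS:
if __name__ == "__main__" and "--run" in sys.argv:
    sys.exit(main())
```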