Infrastructure · DevOps · Networking
Running · 2025 – Present

UN1290 — Self-Hosted Production Infrastructure

Production infrastructure on a mini PC. $0/month. Zero exposed ports.

Docker · Docker Compose · Ubuntu Server · Cloudflare Tunnel · Tailscale · WireGuard · PostgreSQL · Redis · Supabase · Nginx · Ollama

The Problem

Cloud hosting costs add up fast when you're running multiple services — databases, APIs, storage, LLM inference. At $20-50/month per service, running 6+ services means $120-300/month just to keep your projects alive. For a student and builder shipping multiple production systems, that's unsustainable. What if you could run everything for $0?

The Approach

I built a self-hosted production infrastructure on a Minisforum UN1290 mini PC (Intel i9-12900HK, 14 cores/20 threads, 32GB RAM) that runs all my project backends, databases, and services — with zero exposed ports to the public internet.

Key architectural decisions

Self-hosted Supabase instead of Supabase Cloud — 13-container stack (PostgreSQL, GoTrue, PostgREST, Realtime, Storage, Kong, Studio, etc.) giving full platform capabilities at $0.

Cloudflare Tunnel instead of port forwarding — services are accessible via HTTPS without exposing any ports on the server. Cloudflare handles TLS, DDoS protection, and routing.
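
As a sketch, the tunnel's routing lives in a single cloudflared config.yml — the hostnames, ports, and tunnel name below are placeholders, not my actual routes:

```yaml
tunnel: un1290-prod
credentials-file: /etc/cloudflared/un1290-prod.json

ingress:
  # Each public hostname maps to a service bound to localhost only
  - hostname: api.example.com
    service: http://127.0.0.1:8000
  - hostname: studio.example.com
    service: http://127.0.0.1:3000
  # Anything that matches no ingress rule gets rejected
  - service: http_status:404
```

The outbound-only connection is the whole point: cloudflared dials out to Cloudflare's edge, so no inbound port ever needs to be open.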

Tailscale for dev machine → server communication — encrypted mesh VPN connecting my IdeaPad (development, GPU) to the UN1290 (services) without touching the public internet.

WireGuard VPN for remote admin — when I need to SSH into the server from outside, I connect via WireGuard, not direct SSH over the internet.

Zero-trust networking philosophy — no port is exposed to the internet. Every service binds to localhost. External access only through Cloudflare Tunnel (public) or WireGuard/Tailscale (admin).
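
The localhost-only rule is a one-line change per service in Docker Compose — a minimal sketch (image and port are illustrative):

```yaml
services:
  postgres:
    image: postgres:16
    restart: unless-stopped
    # Bind to loopback only: reachable from the host (and its tunnels),
    # never from the LAN or the public internet
    ports:
      - "127.0.0.1:5432:5432"
```

Without the `127.0.0.1:` prefix, Docker publishes the port on all interfaces and punches through UFW's filter — an easy mistake to make.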

Technical Deep-Dive

Hardware

  1. Minisforum UN1290: Intel i9-12900HK (14C/20T), 32GB RAM
  2. Runs 24/7 (always-on server)
  3. Power draw ~25-45W (efficient for always-on operation)

Services running

  1. Self-hosted Supabase (13 containers): PostgreSQL, GoTrue (auth), PostgREST (API), Realtime, Storage, Kong (gateway), Studio (dashboard), and supporting services
  2. PostgreSQL (standalone instances for non-Supabase projects)
  3. Redis (caching + session storage)
  4. Ollama (local LLM inference — running Qwen3 for on-device AI)
  5. Project APIs (Document Q&A SaaS, IDMS backend)
  6. Nginx (reverse proxy for routing)
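
As a sketch of the reverse-proxy layer, a server block like this routes tunnel traffic to a backend — hostname and ports are placeholders:

```nginx
# Listens on loopback only; Cloudflare Tunnel forwards traffic here
server {
    listen 127.0.0.1:8080;
    server_name docqa.example.com;

    location / {
        proxy_pass http://127.0.0.1:8000;  # Document Q&A API
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```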

Networking architecture

IdeaPad (GPU dev) ←→ Tailscale (encrypted mesh) ←→ UN1290 (services)
                                                    ↓
                                          Cloudflare Tunnel
                                                    ↓
                                          Public HTTPS ($0)

Security

  1. SSH: key-only authentication (password auth disabled)
  2. fail2ban: auto-bans IPs after failed SSH attempts
  3. UFW: firewall allowing only essential traffic
  4. All Docker services: bound to 127.0.0.1 (localhost only)
  5. WireGuard: VPN for remote admin access
  6. Zero ports exposed to the public internet
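
The key-only SSH policy comes down to a few sshd_config directives — a standard hardening baseline, not my full config:

```
# /etc/ssh/sshd_config — key-only authentication
PasswordAuthentication no
PermitRootLogin no
PubkeyAuthentication yes
```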

Automation

  1. Automated PostgreSQL backups via cron (daily, 7-day retention)
  2. systemd service management for Docker Compose stacks
  3. tmux sessions for persistent process management
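
The backup-plus-rotation setup fits in two crontab lines — a sketch with placeholder database name and paths:

```
# Daily dump at 03:00, gzip-compressed and date-stamped
0 3 * * * pg_dump -U postgres mydb | gzip > /srv/backups/mydb-$(date +\%F).sql.gz
# Rotation at 03:30: delete dumps older than 7 days
30 3 * * * find /srv/backups -name '*.sql.gz' -mtime +7 -delete
```

(Note the escaped `\%` — a literal `%` has special meaning in crontab entries.)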

Key Metrics

  1. 13+ Docker containers running 24/7
  2. 14C/20T CPU, 32GB RAM
  3. $0/mo hosting cost
  4. 0 ports exposed to the public internet
  5. 7-day automated backup retention
  6. Local LLM inference via Ollama

Challenges & Solutions

Challenge 1: 13-container Supabase stack complexity

Self-hosted Supabase has 13 interdependent containers. If one crashes, others may fail. Fix: Proper Docker Compose health checks, restart policies (unless-stopped), and documented dependency chain.
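
The health-check and dependency pattern looks roughly like this in Compose — a two-service sketch of the idea, not the actual 13-container stack file:

```yaml
services:
  db:
    image: postgres:16
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5
  auth:
    image: supabase/gotrue
    restart: unless-stopped
    # Don't start auth until the database reports healthy
    depends_on:
      db:
        condition: service_healthy
```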

Challenge 2: Cloudflare Tunnel reliability

Tunnel occasionally disconnects, making services unreachable. Fix: systemd service with automatic restart, plus health monitoring script that pings the tunnel and alerts on failure.
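
A unit along these lines keeps the tunnel supervised (cloudflared can also generate one via `cloudflared service install`; binary path and tunnel name here are placeholders):

```ini
# /etc/systemd/system/cloudflared.service
[Unit]
Description=Cloudflare Tunnel
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/cloudflared tunnel run un1290-prod
Restart=on-failure
RestartSec=5s

[Install]
WantedBy=multi-user.target
```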

Challenge 3: Storage management

PostgreSQL data, file uploads, and Docker images consume disk space. Without monitoring, the disk fills up silently. Fix: Automated backups with rotation (keep 7 days, delete older), Docker image pruning cron job, and disk usage alerts.
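
The disk-usage alert can be as small as a cron-driven script. A minimal Python sketch of the idea — the threshold is a placeholder, and the alert hook (mail, ntfy, etc.) is left as a print:

```python
import shutil

ALERT_THRESHOLD_PCT = 85.0  # illustrative threshold, not my actual one

def disk_usage_pct(path="/"):
    """Used space as a percentage of total capacity for the given mount."""
    usage = shutil.disk_usage(path)
    return 100.0 * usage.used / usage.total

def should_alert(used_pct, threshold=ALERT_THRESHOLD_PCT):
    """True once usage crosses the threshold; wire this to a notifier."""
    return used_pct >= threshold

if __name__ == "__main__":
    pct = disk_usage_pct("/")
    if should_alert(pct):
        print(f"ALERT: disk at {pct:.1f}% used")
```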

Lessons Learned

01

Self-hosting forces infrastructure mastery. When your database goes down, there's no support ticket — you fix it yourself. This builds the kind of operational knowledge that cloud-only developers rarely get to acquire.

02

Zero-trust networking is simpler than it sounds. Cloudflare Tunnel + Tailscale + WireGuard + localhost-bound services = no inbound attack surface on the server. It's actually LESS configuration than managing firewall rules for exposed ports.

03

$0/month is a real competitive advantage. Most junior developers' projects die when they stop paying for hosting. Mine stay alive indefinitely because they run on hardware I own. A recruiter can visit my live projects months from now and they'll still be running.

Infrastructure project — no public repo.