UN1290 — Self-Hosted Production Infrastructure
Production infrastructure on a mini PC. $0/month. Zero exposed ports.
The Problem
Cloud hosting costs add up fast when you're running multiple services — databases, APIs, storage, LLM inference. At $20-50/month per service, running 6+ services means $120-300/month just to keep your projects alive. For a student and builder shipping multiple production systems, that's unsustainable. What if you could run everything for $0?
The Approach
I built a self-hosted production infrastructure on a Minisforum UN1290 mini PC (Intel i9-12900HK, 14 cores/20 threads, 32GB RAM) that runs all my project backends, databases, and services — with zero exposed ports to the public internet.
Key architectural decisions
Self-hosted Supabase instead of Supabase Cloud — 13-container stack (PostgreSQL, GoTrue, PostgREST, Realtime, Storage, Kong, Studio, etc.) giving full platform capabilities at $0.
Cloudflare Tunnel instead of port forwarding — services are accessible via HTTPS without exposing any ports on the server. Cloudflare handles TLS, DDoS protection, and routing.
Tailscale for dev machine → server communication — encrypted mesh VPN connecting my IdeaPad (development, GPU) to the UN1290 (services) without touching the public internet.
WireGuard VPN for remote admin — when I need to SSH into the server from outside, I connect via WireGuard, not direct SSH over the internet.
Zero-trust networking philosophy — no port is exposed to the internet. Every service binds to localhost. External access only through Cloudflare Tunnel (public) or WireGuard/Tailscale (admin).
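The localhost-binding rule can be sketched in Compose terms; the service name, image, and port here are illustrative, not the actual stack definition:

```yaml
services:
  postgres:
    image: postgres:15
    restart: unless-stopped
    ports:
      # Loopback-only publish: reachable by local proxies, cloudflared,
      # and VPN-routed traffic, but invisible to the public internet.
      - "127.0.0.1:5432:5432"
```

With this binding, only processes on the host itself (Nginx, the tunnel daemon, Tailscale-routed connections) can reach the port; a direct connection attempt from the internet never sees a listener.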
Technical Deep-Dive
Hardware
- Minisforum UN1290: Intel i9-12900HK (14C/20T), 32GB RAM
- Runs 24/7 as an always-on server
- Power draw ~25-45W (efficient for always-on operation)
Services running
- Self-hosted Supabase (13 containers): PostgreSQL, GoTrue (auth), PostgREST (API), Realtime, Storage, Kong (gateway), Studio (dashboard), and supporting services
- PostgreSQL (standalone instances for non-Supabase projects)
- Redis (caching + session storage)
- Ollama (local LLM inference, running Qwen3 for on-device AI)
- Project APIs (Document Q&A SaaS, IDMS backend)
- Nginx (reverse proxy for routing)
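The Nginx routing layer can be sketched as a loopback-only server block; the hostname and upstream port are assumptions for illustration, not the real config:

```nginx
server {
    listen 127.0.0.1:8080;          # Nginx itself stays loopback-only
    server_name docqa.example.com;  # hypothetical project hostname

    location / {
        proxy_pass http://127.0.0.1:3000;  # hypothetical API port
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Cloudflare Tunnel then forwards public HTTPS traffic to this loopback listener, so TLS termination and routing happen without any port ever being opened on the server.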
Networking architecture
IdeaPad (GPU dev) ←→ Tailscale (encrypted mesh) ←→ UN1290 (services)
                                                        ↓
                                                Cloudflare Tunnel
                                                        ↓
                                                Public HTTPS ($0)
Security
- SSH: key-only authentication (password auth disabled)
- fail2ban: auto-bans IPs after repeated failed SSH attempts
- UFW: firewall allowing only essential traffic
- All Docker services bound to 127.0.0.1 (localhost only)
- WireGuard: VPN for remote admin access
- Zero ports exposed to the public internet
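The SSH hardening above boils down to a few `sshd_config` directives; this is a minimal sketch, not the full config:

```
# /etc/ssh/sshd_config
PasswordAuthentication no
PermitRootLogin prohibit-password
PubkeyAuthentication yes
```

The UFW posture pairs with this: `ufw default deny incoming` plus explicit allow rules scoped to the VPN interfaces, so even SSH is only reachable over WireGuard or Tailscale.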
Automation
- Automated PostgreSQL backups via cron (daily, 7-day retention)
- systemd service management for Docker Compose stacks
- tmux sessions for persistent process management
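The backup-with-rotation job can be sketched as a small script run from cron; the paths, database name, and retention default are illustrative, not the actual setup:

```shell
#!/usr/bin/env bash
# Nightly PostgreSQL backup with 7-day rotation (paths are hypothetical).
set -euo pipefail

BACKUP_DIR="${BACKUP_DIR:-/srv/backups/postgres}"
RETENTION_DAYS="${RETENTION_DAYS:-7}"

backup() {
  # Dump one database, gzipped, stamped with today's date.
  mkdir -p "$BACKUP_DIR"
  pg_dump --no-owner "$1" | gzip > "$BACKUP_DIR/$1-$(date +%F).sql.gz"
}

rotate() {
  # Delete dumps older than the retention window.
  find "$BACKUP_DIR" -name '*.sql.gz' -mtime +"$RETENTION_DAYS" -delete
}
```

A crontab entry along the lines of `0 3 * * * /usr/local/bin/pg-backup.sh` (hypothetical path) runs it nightly; rotation keeps disk usage bounded without manual cleanup.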
Challenges & Solutions
Challenge 1: 13-container Supabase stack complexity
Self-hosted Supabase has 13 interdependent containers. If one crashes, others may fail. Fix: Proper Docker Compose health checks, restart policies (unless-stopped), and documented dependency chain.
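The fix can be sketched in Compose terms: a health check on the database plus `service_healthy` gating on its dependents. Service names and image tags here are illustrative, not the actual Supabase stack definition:

```yaml
services:
  db:
    image: postgres:15          # hypothetical; Supabase ships its own image
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5

  auth:
    image: supabase/gotrue:latest  # hypothetical tag
    restart: unless-stopped
    depends_on:
      db:
        condition: service_healthy  # don't start until Postgres answers
```

With `restart: unless-stopped` on every service, a single crashed container comes back on its own instead of taking the dependency chain down with it.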
Challenge 2: Cloudflare Tunnel reliability
Tunnel occasionally disconnects, making services unreachable. Fix: systemd service with automatic restart, plus health monitoring script that pings the tunnel and alerts on failure.
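The restart side of the fix can be sketched as a systemd unit; the binary path is an assumption, and `cloudflared service install` generates a similar unit for you:

```ini
# /etc/systemd/system/cloudflared.service (sketch; path is hypothetical)
[Unit]
Description=Cloudflare Tunnel
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/cloudflared tunnel run
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```

`Restart=always` covers crashes and clean exits alike, while the separate health-monitoring script catches the failure mode where the process is alive but the tunnel itself has gone silent.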
Challenge 3: Storage management
PostgreSQL data, file uploads, and Docker images consume disk space. Without monitoring, the disk fills up silently. Fix: Automated backups with rotation (keep 7 days, delete older), Docker image pruning cron job, and disk usage alerts.
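A disk-usage check along these lines can run from cron and warn before the disk fills; the threshold and the alert channel are assumptions:

```shell
#!/usr/bin/env bash
# Warn when the filesystem crosses a usage threshold (default 85%).
set -euo pipefail

disk_used_pct() {
  # Percentage of space used on the filesystem holding the given path.
  df --output=pcent "$1" | tail -1 | tr -dc '0-9'
}

check_disk() {
  local pct
  pct=$(disk_used_pct "${1:-/}")
  if [ "$pct" -ge "${THRESHOLD:-85}" ]; then
    echo "WARN: disk ${pct}% full on ${1:-/}" >&2   # swap in a real alert here
    return 1
  fi
  return 0
}
```

Image cleanup can ride the same crontab, e.g. `docker system prune -af --filter "until=168h"` to drop unused images older than a week.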
Lessons Learned
Self-hosting forces infrastructure mastery. When your database goes down, there is no support ticket to file; you fix it yourself. That builds the kind of operational knowledge cloud-only developers rarely get the chance to acquire.
Zero-trust networking is simpler than it sounds. Cloudflare Tunnel + Tailscale + WireGuard + localhost-bound services = no attack surface. It's actually LESS configuration than managing firewall rules for exposed ports.
$0/month is a real competitive advantage. Every other junior developer's projects die when they stop paying for hosting. Mine stay alive indefinitely because they run on hardware I own. A recruiter can visit my live projects months from now and they'll still be running.