March 29, 2026

Building a Kube Cluster in a Homelab I Can Actually Bend and Fix

I started this homelab for a simple reason: I wanted a place to run Kubernetes and the CNCF stack on my own terms, something that runs 24/7 on a cheap electricity bill, without making my laptop carry all the long-running work and without paying cloud bills for experiments.

That shaped the whole setup right away. I already knew I wanted something I could tear down and rebuild without caring too much about the cost of being wrong.

I could have used GKE or another managed cluster, but that gets expensive once the whole point is experimentation. I want to break things, throw them away, try operators I don't fully trust yet, and rebuild from scratch. Cloud is great when the goal is production. It is less fun when the goal is controlled chaos with a budget.

I also didn't want to live inside Minikube forever. It works, but it turns my personal laptop into shared infrastructure. I don't want Kubernetes competing with my browser, editor, and battery. I wanted a machine that could stay on, sit quietly on the network, and be ready whenever I felt like trying something.

So I bought a Lenovo M710Q.

Hardware

My starting point was simple:

  • Intel Core i5, 4 cores, 7th gen
  • 16 GB RAM
  • 512 GB SSD

I got it for around $170, which felt like a good deal given how weird hardware prices have been. What surprised me was how solid the machine felt. I expected something cheap and flimsy. Instead, it feels like a proper tiny server. Dense, sturdy, and a lot less toy-like than the price suggests.

For a first homelab box, I think this class of machine makes a lot of sense. It is cheap enough that I don't worry about every decision, but still strong enough to run a few VMs without becoming miserable.

It barely shows up on the electricity bill, which matters a lot if the box is going to stay on in the background all the time.

Once it arrived, I plugged it into my home router and treated it like a dedicated lab machine from day one.

Proxmox

I installed Proxmox immediately.

That part was an easy decision. I knew I wanted multiple isolated environments, and I wanted to be able to throw them away without reinstalling the whole machine every time. Proxmox is a virtualization platform, so it turns a single box into something much more flexible. Instead of one Ubuntu install that slowly accumulates experiments and bad decisions, I get clean boundaries between machines.

The install itself was straightforward. I wrote the Proxmox ISO to a USB stick, booted the Lenovo from it, installed Proxmox, and started carving the machine into VMs.

I ended up with three:

  1. k3s-control-plane
    • 2 vCPU
    • 4 GB RAM
    • 40 GB SSD
  2. k3s-worker
    • 2 vCPU
    • 8 GB RAM
    • 80 GB SSD
  3. extra-vm
    • 2 vCPU
    • 2 GB RAM
    • 80 GB SSD
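As a sketch, the same VM shapes can be created from the Proxmox host shell with `qm`. The VM ID, the `local-lvm` storage name, and the ISO path are assumptions about my setup, not something you can copy blindly:

```shell
# Hypothetical VM ID and storage names; adjust for your Proxmox host.
qm create 101 --name k3s-control-plane --memory 4096 --cores 2 \
  --net0 virtio,bridge=vmbr0 --scsihw virtio-scsi-pci \
  --scsi0 local-lvm:40 \
  --ide2 local:iso/ubuntu-24.04-live-server-amd64.iso,media=cdrom \
  --boot order='scsi0;ide2'
qm start 101
```

The same commands with different memory, disk, and name arguments cover the worker and the extra VM.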

That still leaves some RAM and disk on the Proxmox host, which I like. A homelab gets more interesting once you stop treating the first layout as permanent.

All three VMs run Ubuntu 24.04. Right after installing Ubuntu, I took a snapshot of each VM before touching anything else.

This is one of those things that feels optional right up until the first time you break networking or mess up a config file you barely understand yet. Proxmox snapshots are basically checkpoints. I can roll a VM back to a known-good state instead of trying to reverse every mistake by hand. I kept doing that throughout the setup. Clean install, snapshot. Networking done, snapshot. k3s working, snapshot.
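That loop is just a couple of `qm` commands on the Proxmox host (the VM ID 101 is a placeholder):

```shell
# Checkpoint a VM in a known-good state
qm snapshot 101 clean-install --description "Fresh Ubuntu 24.04, untouched"

# See what checkpoints exist
qm listsnapshot 101

# Undo everything since the checkpoint
qm rollback 101 clean-install
```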

I also installed Tailscale on the Proxmox host and all three VMs. It works like a private network that sits on top of the internet. Each machine joins the same tailnet, gets a private address, and Tailscale handles the encrypted connection between them. In practice that means I can SSH into Proxmox or any VM from my laptop or phone without opening ports on the router or wiring up a VPN by hand.
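Joining each machine to the tailnet is short. The install script is Tailscale's official one, and `tailscale up` prints a login URL the first time it runs:

```shell
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up    # prints a login URL to authenticate and join the tailnet
tailscale ip -4      # this machine's private tailnet address
```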

k3s Cluster

For Kubernetes, I picked k3s.

k3s was the obvious fit. I didn't want the full weight of a standard Kubernetes install on a small box, and I didn't need it. I wanted something close enough to real Kubernetes that the workflows still mattered, but light enough that the hardware was doing useful work instead of babysitting the control plane.

I installed k3s on the first two VMs. One is the control plane. One is the worker node. That gives me the shape I want without turning the lab into node-count theater.
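For reference, the stock k3s install is a one-liner on each node; the control-plane address and join token below stay as placeholders:

```shell
# On the control-plane VM
curl -sfL https://get.k3s.io | sh -
sudo cat /var/lib/rancher/k3s/server/node-token   # join token for workers

# On the worker VM
curl -sfL https://get.k3s.io | \
  K3S_URL=https://<control-plane-ip>:6443 K3S_TOKEN=<node-token> sh -
```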

Argo CD

GitOps was the point from the start, so Argo CD went in first.

Argo CD keeps the cluster aligned with what is in Git. I don't want to reconstruct changes from shell history or guess which manifest is actually live. If something should exist in the cluster, it belongs in the repo first.
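A minimal sketch of what that looks like as an Argo CD `Application`. The repo URL and path are hypothetical; the rest are standard fields:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: homelab-apps
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/homelab.git  # hypothetical repo
    targetRevision: main
    path: apps
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual drift back to what Git says
```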

Observability

Once that was in place, I added Prometheus and Grafana. Metrics are the first thing I reach for when the cluster starts acting strange, and I wanted that stack managed the same way as everything else. No one-off Helm runs, no snowflake installs, no mystery state. This is also what I will use to instrument the workloads I deploy later, so I can see how they behave instead of guessing from pod status and log snippets.
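One way to keep that stack in the same GitOps flow is pointing an Argo CD Application at the community kube-prometheus-stack Helm chart; a sketch, with the chart version left as a placeholder to pin yourself:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: monitoring
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://prometheus-community.github.io/helm-charts
    chart: kube-prometheus-stack
    targetRevision: <chart-version>   # pin a concrete version
  destination:
    server: https://kubernetes.default.svc
    namespace: monitoring
  syncPolicy:
    automated:
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
```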

Secrets

I also added Sealed Secrets early. GitOps is great until the manifests need credentials, and then you either solve that properly or you end up with plain secrets in a repo. Encrypted secret manifests fit the workflow cleanly and keep the cluster responsible for decrypting them.
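The sealing step itself is a short pipeline with `kubectl` and `kubeseal` against a running cluster; the secret name and value here are made up:

```shell
# Build a plain Secret locally (never applied), then encrypt it with the
# cluster controller's public key. Only the controller can decrypt it.
kubectl create secret generic grafana-admin \
  --namespace monitoring \
  --from-literal=admin-password='changeme' \
  --dry-run=client -o yaml \
  | kubeseal --format yaml > grafana-admin-sealed.yaml

# grafana-admin-sealed.yaml is safe to commit to the GitOps repo.
```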

Cloudflared

After that, I installed Cloudflared. It runs a connector inside my network and keeps an outbound tunnel open to Cloudflare. That is the part I like. The public side lives at Cloudflare's edge, not on my router, so I do not have to punch inbound holes through the home network just to expose a service. When I point a hostname at that tunnel, Cloudflare receives the request first and forwards it back through the encrypted connection to the service running inside the lab.

Inside the cluster, cloudflared is just another workload. Kubernetes keeps the pod running, restarts it if it dies, and lets it talk to internal services over the cluster network like anything else. I am not exposing a NodePort or wiring a public IP to the workload. The tunnel pod reaches the service by its internal name, and Cloudflare only ever sees the outbound connection from that pod.
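The hostname-to-service mapping lives in cloudflared's ingress rules. A sketch, with a hypothetical hostname and service plus the standard catch-all rule:

```yaml
tunnel: <tunnel-id>
credentials-file: /etc/cloudflared/creds/credentials.json
ingress:
  - hostname: grafana.example.com
    service: http://grafana.monitoring.svc.cluster.local:80
  - service: http_status:404   # catch-all for unmatched hostnames
```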

At that point the cluster had the bones I wanted from the start:

  • k3s for the cluster itself
  • Argo CD for GitOps
  • Prometheus and Grafana for metrics
  • Sealed Secrets for secrets in Git
  • Cloudflared for controlled external access

That is enough to get me started experimenting with workloads on top of it.

Extra VM

The third VM stays outside the cluster on purpose.

The temptation is to put everything into Kubernetes once you have a cluster. I know that trap well enough to leave one machine alone. This box is where I want to try agentic host stuff like OpenClaw, plus anything else that makes more sense on a plain Ubuntu machine than inside the cluster.

Keeping that VM separate gives me room to experiment without risking the cluster. Sometimes I just want a plain Linux box for tooling, automation, or a side project that does not belong in Kubernetes. That is enough reason to leave it alone.

Where This Is Going

What I like most about this setup is that it already does the job. It gives me a real place to run workloads, break things, and try ideas.

It is small. It is cheap enough to be forgiving. It is close enough to real infrastructure that the workflows matter. Most of all, it got this work off my laptop, which was the original problem.

That change is bigger than it sounds. Now the cluster stays up when I close the lid. I can try something at night, leave it running, come back the next day, and keep going. I can snapshot a VM, make a mess, roll it back, and move on without feeling like I just trashed my main machine.

For me, that is the whole point of a homelab. Not scale. Not aesthetics. Not building a tiny fake production environment in the corner of my apartment.

Just a box that is always there, cheap enough to abuse, and useful enough to teach me something every time I break it.

The homelab will keep growing, and I will write more about it as I add things to it.
