Managing compute, not machines — the Kubernetes mindset.
One of the best aspects of Software Engineering is that something new is always emerging. It’s never a dull space. Depending on your perspective, this can be exciting or overwhelming. Every few years, you may be forced to re‑evaluate your career trajectory.
Back in my early years, deploying software to a server felt like the pinnacle of achievement. We daydreamed about different servers and processors. I remember reading Oracle Magazine and learning about Sun’s servers and the Solaris operating system.
Today, the field has grown so much that we now have countless deployment targets. You can run reasonably complex software on a $40 Raspberry Pi, or custom‑build a powerful PC with enough CPU and memory to run your business from a back office (the trend of self‑hosting).
Eventually, I was exposed to the cloud, where AWS, Azure, and Google Cloud became household names. At that time, the focus was still on virtual machines. Soon after, containerization took off.
Learning Docker drastically changed how we package and deploy software. It unlocked the ability to run multiple projects with different runtimes—Java, PHP, Python, Node—on the same machine without worrying about conflicts. You could even run different versions of the same runtime in isolation. Previously, each project might have required its own VM.
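A minimal docker-compose file makes the point concrete (service names and commands here are hypothetical): two projects with different runtimes share one host, each fully isolated in its own container.

```yaml
# Hypothetical example: two apps with different runtimes on the same machine.
services:
  legacy-api:
    image: python:3.11-slim          # one project pinned to Python 3.11
    command: python -m http.server 8000
    ports:
      - "8000:8000"
  new-frontend:
    image: node:20-alpine            # another project on Node 20, no conflicts
    command: node server.js
    working_dir: /app
    volumes:
      - ./frontend:/app
    ports:
      - "3000:3000"
```

Each container carries its own runtime, so neither project can break the other’s dependencies.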
Managed Platforms
With Dockerized applications, many platforms emerged where developers no longer needed to think about servers. As long as you could package your app as a Docker image, these platforms handled deployment and monitoring.
Examples I’ve used include Heroku and AWS Elastic Container Service (ECS). These platforms dictate how your app should be structured to maximize their value. You pay for the convenience of not managing servers directly. In business, that’s fine—you can pass the cost on to customers.
But what if you’re running a hobby project and don’t want to pay for these platforms?
Enter Self Hosting
One way to keep costs low is to do it yourself. I’ve always felt comfortable setting up remote servers on IaaS providers using bash scripts and Terraform.
Self‑hosting teaches you how infrastructure works. You’ll learn the practices required to run, monitor, and scale systems yourself. The bottleneck is whether you have the time to acquire these skills.
From time to time, I want to quickly prototype, deploy, test, and maybe tear down projects. But setting up new VMs for each idea gets expensive. To deploy random apps on the same server, you need workflows that automate deployment, teardown, and external access.
After countless setups and teardowns, I grew exhausted. I realized: I have Docker, so what’s missing?
Enter Kubernetes
I’ve known about Kubernetes for years—watched talks, conferences, and heard war stories from people running large‑scale systems.
There’s a common refrain: “Not everyone needs Kubernetes.” That’s true, but what’s often unclear is the specific problems it solves and the opportunities it unlocks.
Earlier this year, I decided to give it a proper shot. With Google, AI, and the docs, I set up a simple cluster on Hetzner. My goal: spin up a Dockerized app, run it, and tear it down easily. I wanted to think only about resources—memory and CPU.
Deploying my first app revealed how many problems Kubernetes solves automatically. Secrets management? Done. Networking policies? Built‑in. No more encrypting and decrypting ENV files manually.
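As a sketch of what that looks like in practice (resource names and the image are hypothetical), a Secret is declared once and injected into a pod as environment variables, with no manual encrypt/decrypt step:

```yaml
# Hypothetical Secret, applied with: kubectl apply -f secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: mailer-secrets
type: Opaque
stringData:                  # plain text here; stored base64-encoded in the cluster
  SMTP_PASSWORD: "change-me"
---
# Consuming it from a Deployment's pod spec:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mailer
spec:
  replicas: 1
  selector:
    matchLabels: { app: mailer }
  template:
    metadata:
      labels: { app: mailer }
    spec:
      containers:
        - name: mailer
          image: ghcr.io/example/mailer:latest   # hypothetical image
          envFrom:
            - secretRef:
                name: mailer-secrets             # every key becomes an env var
```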
The light‑bulb moment came after deploying an email‑sending project: I don’t need to think about servers. I just need to think about compute.
It’s All About Compute
Instead of asking “How many servers or dynos?”, I now ask “How much compute do I have?” Kubernetes pools memory and CPU across machines and schedules apps intelligently.
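That shift shows up directly in the manifests: instead of sizing a server, you declare how much compute each app needs, and the scheduler finds room for it somewhere in the pool. A sketch with hypothetical names and numbers:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hobby-app            # hypothetical app
spec:
  replicas: 2
  selector:
    matchLabels: { app: hobby-app }
  template:
    metadata:
      labels: { app: hobby-app }
    spec:
      containers:
        - name: hobby-app
          image: ghcr.io/example/hobby-app:latest  # hypothetical image
          resources:
            requests:          # what the scheduler reserves when placing the pod
              cpu: "100m"      # a tenth of a core
              memory: "64Mi"
            limits:            # hard ceiling: CPU is throttled, memory overuse is OOM-killed
              cpu: "500m"
              memory: "128Mi"
```

The two replicas may land on different machines entirely; from the app’s point of view, only the requested CPU and memory matter.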
It also provides advanced networking features. For example, I can restrict API access with simple policies, without implementing custom security mechanisms or exchanging API keys.
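Kubernetes expresses this declaratively through the NetworkPolicy resource. A sketch (label names are hypothetical) that allows only pods labelled `app: frontend` to reach the API pods on port 8080:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow-frontend-only
spec:
  podSelector:
    matchLabels:
      app: api               # the policy applies to the API pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend  # only frontend pods may connect
      ports:
        - protocol: TCP
          port: 8080
```

One caveat: policies are only enforced if the cluster’s network plugin (CNI) supports them, so it’s worth checking that before relying on this.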
Because I’m cost‑conscious, Kubernetes has made me more aware of memory usage. I even picked up Go to write leaner apps. Imagine an idle Python server consuming 400MB—if I can cut that down, I can fit more apps on one server.
The Big Shutdown
I’ve used DigitalOcean for years, but with Kubernetes and Hetzner’s low‑cost servers, I shut down all DO services and moved to my shiny new cluster.
This is just the beginning. Kubernetes is vast, with countless tools built on top. I need to keep things simple. For now, I’m avoiding CI/CD and deploying locally until it becomes painful enough to adopt Flux.
A friend with more Kubernetes experience has shared notes on his GitHub: Odytrice.
Summary
It’s funny how things evolve—from dreaming about Sun servers to juggling containers in the cloud. Kubernetes has shifted my mindset: instead of worrying about individual servers, I now think in terms of compute power.
The best part? Kubernetes handles the tedious bits like secrets and networking. It’s made me more mindful of resources, even pushing me to write Go code for efficiency.
For hobby projects, this shift has been a game‑changer. Deployment feels less like a headache and more like an adventure.