Ndifreke Ekott

Thoughts, stories, ideas and programming

21 Nov 2025

Managing compute, not machines — the Kubernetes mindset.

One of the best aspects of our field of Software Engineering is that something new always comes up. It isn’t a dull space. Depending on your perspective, this can be a good thing or a bad thing. Every few years, you may be forced to re-evaluate your career trajectory.

In my early years in Software Engineering, deploying your software to a server was the pinnacle of one’s career. We daydreamed about all the different servers and server processors out there. I remember reading Oracle’s magazine and learning about Sun’s servers and the Solaris operating system.

Today, the software field has grown so fast that we now have many target locations to deploy software to. You can run a reasonably complex piece of software on a $40 computer like the Raspberry Pi. You can also custom build a powerful PC with plenty of CPU and power and run your business from your back office (this is the trend of self-hosting).

Over time, I got exposed to the cloud, and the big names became commonplace: Amazon Web Services, Microsoft Azure and Google Cloud Platform. At that time the focus was still on Virtual Machines. Not long after, containerization became popular.

I picked up containerization via Docker, and this drastically changed how we can package our software and deploy to numerous target platforms. Learning Docker unlocked the potential to run multiple projects with different runtimes, whether Java, PHP, Python or Node, on the same machine without worrying about where each runtime is set up. You could run multiple projects using different versions of a particular runtime and still have isolation between each app. Before this, you would potentially have had to run each project on its own virtual machine.
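
That runtime isolation can be sketched with a hypothetical docker-compose file; the service names, image tags and ports here are illustrative assumptions, not from a real project:

```yaml
# docker-compose.yml: a sketch of two apps on different runtimes
# sharing one machine, each isolated in its own container.
services:
  legacy-app:
    image: python:3.8-slim            # older Python runtime
    command: python -m http.server 8000
    ports:
      - "8000:8000"
  new-app:
    image: node:20-alpine             # Node runtime, same host
    command: node -e "require('http').createServer((q, s) => s.end('ok')).listen(8001)"
    ports:
      - "8001:8001"
```

Neither container knows or cares what the other runs; the host only needs Docker installed.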

Managed Platforms

With the ability to Dockerize software applications, a lot of deployment platforms were built so that, as the developer, you no longer needed to think about servers. As long as you can package your application as a Docker image, these platforms will take care of deploying it to a server somewhere and give you the tools to monitor your application.

Examples of platforms I have experience working with are Heroku and AWS Elastic Container Service (ECS). These platforms tell you how your app needs to be structured to effectively derive the value they offer. You will, however, pay for the convenience of not having to deal with servers directly. In a business setting this is okay: you can simply pass the cost of running the software on these platforms on to your customers.

What happens when you want to run a hobby project and don’t have the money to pay for these platforms?

Enter Self Hosting

It has to be said: one of the ways you can keep the cost of hosting your brilliant hobby project low is to Do It Yourself. I have always felt comfortable setting up remote servers on Infrastructure as a Service (IaaS) providers using tools like bash scripts and Terraform.

Self hosting will teach you how infrastructure works, and you will be forced to learn the practices required to continuously run, monitor and scale your infrastructure yourself. Whether you have the time to pick up these skills is where the bottleneck lies.

From time to time, I have ideas I would love to quickly prototype, deploy, test and maybe tear down. However, setting up a new Virtual Machine (VM) for each project could easily get expensive. To be able to deploy random applications on the same server at will, you need to build some sort of workflow that automates deployment, teardown and external access to your application.

I have done my share of setting up and tearing down projects, to the point of exhaustion. I am currently working on a hobby project that has been deployed to a VM for months. Then I quickly had an idea for another, and with that burst of enthusiasm I wanted to quickly deploy, test and maybe tear it down if it didn’t pan out. That is the point at which I felt there had to be a way to just deploy an idea. I have Docker, so what is missing?

Enter Kubernetes

I have been aware of Kubernetes for years. I have watched countless talks, seen numerous conference videos and listened to tales from the trenches of how people built and ran large scale systems.

There is a trend in our field to always jump on the latest hype; right now it is AI, but Kubernetes is still there. “Not everyone needs Kubernetes” is something you will hear routinely from people who have built and run Kubernetes in large organizations. There is some truth to that, but what is not really demonstrated clearly to the less curious is the problem it solves and the opportunities it unlocks.

I decided to read up on Kubernetes because, earlier this year, I had decided I should give it a good shot. So I opened up Google plus AI plus the docs and set up a simple cluster on Hetzner. I want to be able to spin up a simple Dockerized app, run it on the server and tear it down with ease. I only want to think about server resources: memory and CPU.
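
That spin-up-and-tear-down workflow boils down to a single manifest. Here is a minimal sketch of a Deployment; the app name, image and port are placeholders, not from my actual setup:

```yaml
# deployment.yaml: a hypothetical minimal app deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: idea-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: idea-app
  template:
    metadata:
      labels:
        app: idea-app
    spec:
      containers:
        - name: idea-app
          image: registry.example.com/idea-app:latest  # placeholder image
          ports:
            - containerPort: 8080
```

`kubectl apply -f deployment.yaml` brings the idea up; `kubectl delete -f deployment.yaml` tears it down again.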

After deploying my first application to test out my newly minted Kubernetes setup, I realized it solved a lot of problems I used to solve by hand when dealing with VMs directly. Where should I store secrets? How do I store the source of secrets? Kubernetes Secrets does that for me, and I just need to reference them in a deployment; gone are my days of encrypting and decrypting ENV files.
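
As a sketch of what replaces the encrypted ENV files, a Secret can hold the values and a container can reference them all at once; the names and value below are hypothetical:

```yaml
# secret.yaml: hypothetical app secrets stored in the cluster.
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
stringData:
  DATABASE_URL: postgres://user:pass@db:5432/app  # placeholder value
```

In the Deployment’s container spec, `envFrom: [{secretRef: {name: app-secrets}}]` then exposes every key as an environment variable, so no `.env` file ever ships with the image.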

After successfully deploying an email sending and template management project, the light bulb moment came: I just need to think about Compute, not Servers.

It is all about Compute

Rather than asking how many Servers or Dynos, ask: how much compute do I have available to run all my apps? You can create a cluster of machines, pool all that memory and CPU power, and let Kubernetes schedule where your brilliant app will run.
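
Thinking in compute means declaring what each app needs and letting the scheduler find room for it. A sketch of a container’s resource section (the numbers are assumptions, not measurements):

```yaml
# Fragment of a container spec: declare the app's appetite and
# Kubernetes picks a node in the cluster with capacity for it.
resources:
  requests:
    memory: "128Mi"   # the scheduler reserves this much
    cpu: "100m"       # a tenth of a CPU core
  limits:
    memory: "256Mi"   # the container is killed if it exceeds this
    cpu: "500m"
```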

I have an app: how much memory does it consume? How much CPU time does it require? Kubernetes also provides some advanced networking features that would be difficult to create and manage yourself. For example, I can restrict access to an API endpoint by declaring a few network policies, without needing to implement a security mechanism between apps or exchange API keys.
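
That kind of restriction might look like the following hypothetical NetworkPolicy, which allows only pods labelled `app: frontend` to reach the API pods; all labels and the port are illustrative:

```yaml
# networkpolicy.yaml: only frontend pods may talk to the API.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: api              # the pods this policy protects
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend # the only permitted callers
      ports:
        - protocol: TCP
          port: 8080
```

No code changes in either app; the cluster’s network plugin enforces the rule.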

Because I am also trying to be as cheap as possible, I have become more aware of how much memory my applications consume. I have even picked up the Go programming language in the process. Imagine an idle Python web server consuming just over 400MB. If I can bring that down to a couple of hundred megabytes, then I can chuck as many apps onto one server as possible.

The Big Shutdown

I have been a user of DigitalOcean for years, but with my new Kubernetes experiment and the low cost of Hetzner servers, well, I shut down all services on DO and moved over to my new shiny cluster.

Though this is just the beginning, I can already see how easy it is to go down an endless rabbit hole with Kubernetes. The platform is big and there are numerous tools built on it. I have to curtail my excitement and keep things simple. For starters, I am not doing CI/CD just yet; I am aware of Flux, but until I need it, I will keep deploying from my local machine until that pisses me off.

I also have a friend who has a lot more experience with Kubernetes and has shared a lot of notes on his GitHub repository - Odytrice.

Summary

You know, it’s funny how things evolve in our field - from dreaming about Sun servers back in the day to now juggling containers in the cloud. My journey with Kubernetes really opened my eyes to a different way of thinking about deployment. Instead of getting bogged down with individual servers and their setups, I now just think about how much compute power I need for my apps. It’s like having one big pool of resources that I can dip into whenever I need to spin up a new project.

The best part? Kubernetes takes care of all those annoying bits like managing secrets and networking that I used to handle manually. Plus, it’s made me more mindful about resource usage - I’m even writing Go code now to keep things lean! Sure, there’s still plenty to learn, but for someone like me who loves to tinker with hobby projects, it’s been a game-changer in making deployment less of a headache and more of an adventure.