Orchestrating my homelab self-hosted services with Komodo and git

In this post I’m going to talk about how I went from a single server running all my self-hosted services to a small fleet of “servers” that I manage with Komodo, XCP-NG, and unRAID. What prompted me to move away from a single machine running the services I use every day was a spate of unexpected downtime to install/fix/replace something in my unRAID server (what it was doesn’t matter because it’s not important to this post). In doing so, I lost access to all of my Homebridge devices in the Home app, which meant I couldn’t turn off some devices or even see my security camera feeds; my Plex server became inaccessible for me and the remote users who stream from it; and I couldn’t read my news feeds from FreshRSS, along with losing access to many other services.

To some this scenario might seem like a minor annoyance, but to me… well, in my quest for data privacy and independence, encountering this situation felt awfully devastating considering the time I’d invested not only in my abilities but in keeping everything running smoothly for so long. Self-hosting the services I need to accomplish tasks, or waste time, is something I take a lot of pride in, but extended periods of downtime started to make my blood boil.

In my day job I work with all manner of software development hijinks and I’ve seen “single-source” failures happen firsthand in production environments when unseasoned people build “scalable” systems. This got me thinking: what if I brought fault tolerance and resiliency to my homelab to mitigate these situations? I had enough hardware on hand to build out the infrastructure to support it, and I had the know-how to make it happen.

After sketching a quick breakdown of what I thought was needed, I began to split the services running on my monolithic unRAID server across multiple servers. Now that I’ve gone through this exercise I can pull one, or even two, servers offline and still have service availability (for the most part). Another benefit is that my infrastructure can be declared in code, and I can spin up new infrastructure more easily with the push of a single git commit. It’s not as simple as running an Ansible playbook, where I could orchestrate the creation of fully configured servers themselves, but it eliminates a class of error I would definitely encounter if I had to remember all the steps required to get a working set of services running again from scratch.

If you care about your privacy, using a trusted VPN is a requirement for doing anything on the internet these days. I have used ExpressVPN for over 7 years, and after reading the TorrentFreak 2025 review of VPN providers that do not log, I think you should too. If you sign up with my referral link, we BOTH get 1 month of free service 😁!

This is an affiliate link; it helps support keeping my website content up to date.

In the past I used Portainer, which was a lot to learn at the time; I quickly moved on to Dockge to simplify things, and eventually landed on Komodo as a happy medium that strikes a balance between the two.

A large part of this migration is neither original nor new information, but I will give a big shoutout to FoxxMD for the post I needed to understand how beneficial Komodo could be to my self-hosted infrastructure.

Getting started with Komodo #

Before diving in, I wanted to understand the following nomenclature that Komodo uses to describe its components. There is a lot to take in when starting, so I’ll note the important pieces:

  • Resources - The basic building blocks of Komodo; a resource can be a Server, Stack, Repo, etc.
  • Server - A machine (physical or virtual) that Komodo connects to for Docker orchestration.
  • Stack - A Docker Compose deployment that can be deployed to a configured Server.
  • Repo - A connection to a Git repository containing Komodo configuration and Docker Compose files.
  • Resource Sync - A configuration-as-code system that lets you declaratively manage Komodo resources (Servers, Stacks, Deployments, etc.) through TOML files stored in Git repositories or on the host filesystem; a rough sketch follows below.
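
To make the Resource Sync idea concrete, here is a minimal sketch of what one of those TOML files can look like. Everything below (the resource names, the Periphery address, the repo and file paths) is an illustrative assumption on my part, and field names can vary between Komodo versions, so treat the official docs as the source of truth:

```toml
# resources.toml - illustrative Resource Sync sketch; all names,
# addresses, and paths are made-up examples.

[[server]]
name = "docker-host-01"
[server.config]
# Address of the Periphery agent running on this machine
address = "https://10.0.10.21:8120"
enabled = true

[[stack]]
name = "freshrss"
[stack.config]
server = "docker-host-01"
# Deploy the compose file from a git-backed Repo instead of the UI
repo = "homelab/komodo-stacks"
file_paths = ["freshrss/compose.yaml"]
```

Komodo can also export its current state into this format, which is handy for bootstrapping the files from a setup you originally built in the UI.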

If you go down the route that FoxxMD and I went down, you’ll use Komodo to organize everything nicely into Stacks and deploy them from a git-backed repository. The benefit here is that you can make changes anywhere you have access to your repository, and you likely have some fault tolerance built in if the git repo is cloned in more than one place.

Migrating Docker Containers from unRAID #

One of the first tasks was to figure out how to translate the configurations I’d set up in unRAID over to Komodo. This is fairly simple in hindsight, but unRAID abstracts quite a bit of the Docker-related configuration away into its GUI and doesn’t make it very portable by default. Someone on Reddit (I can’t find the post anymore) recommended using the Composerize plugin to view a Docker Compose-style representation of an application’s container configuration in unRAID. This helped a lot when pulling all the configuration data into dedicated compose.yaml files for each service.
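
To give a sense of what the end result looks like, here is a hedged sketch of the kind of compose.yaml a Composerize-style conversion produces for a service like FreshRSS. The image tag, port, timezone, and host paths are illustrative assumptions, not my actual configuration:

```yaml
# compose.yaml - illustrative translation of an unRAID container template.
# Ports, paths, and environment values are examples, not real settings.
services:
  freshrss:
    image: freshrss/freshrss:latest
    container_name: freshrss
    restart: unless-stopped
    ports:
      - "8080:80"
    environment:
      - TZ=America/Toronto
    volumes:
      # unRAID kept this under /mnt/user/appdata/freshrss
      - /opt/appdata/freshrss:/var/www/FreshRSS/data
```

The bulk of the work is hunting down the host paths and environment variables that unRAID’s template filled in for you.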

Infrastructure as code #

Komodo can be used as a strictly UI-based configuration tool if you want it to be, but its strength lies in its ability to generate configuration files that represent its own state. This state can be stored in your own infrastructure-as-code files, and you can employ GitHub or some other hosted git repository to store and manage them.

When I set up Komodo I opted to use my self-hosted Gitea instance, however I quickly ran into a “chicken and egg” problem: Gitea is hosted on my unRAID server, and when that server needs to be taken offline for any reason, my Komodo servers cannot update their state in the repository. This is mitigated by bringing Gitea back online, but it can be an inconvenience.
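
One way to soften this chicken-and-egg problem (a general git technique I’m sketching here, not part of my original setup) is to give the repository a second home, so at least one remote stays reachable while unRAID is down. Git lets a single remote push to multiple URLs; the hostnames and repo paths below are hypothetical:

```sh
# Make `git push origin` update both the self-hosted Gitea instance
# and an external mirror. Both URLs are made-up examples.
git remote set-url --add --push origin https://gitea.example.lan/homelab/komodo-stacks.git
git remote set-url --add --push origin git@github.com:example/komodo-stacks.git

# Confirm both push URLs are registered
git remote -v
```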

Setting up Komodo #

Once I had the basic configuration files organized in my repository, I began setting up the servers that would run Komodo’s “Core” (the main komodo-core container and database) and the servers that would run Docker and Periphery. These “servers” were set up as virtual machines in my XCP-NG infrastructure: I started by creating a lower-resourced VM specifically to run Komodo’s “Core”, then two more with more CPU cores and memory that would be my cattle machines, which can be cloned or replaced.

During the Komodo setup process, do yourself a favour and run Periphery using systemd with a user account: not as root, and not as a Docker container (while convenient, it can cause unnecessary headaches; here be dragons and trouble, ye be warned).
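
For reference, a user-level Periphery service ends up looking something like the unit below. This is a minimal sketch with assumed paths; the Komodo docs provide an install script that generates the real unit file, so defer to that:

```ini
# ~/.config/systemd/user/periphery.service - illustrative only; the
# binary location is an assumption, use the official setup script.
[Unit]
Description=Komodo Periphery agent
After=network-online.target

[Service]
ExecStart=%h/.local/bin/periphery
Restart=on-failure

[Install]
WantedBy=default.target
```

Enable it with `systemctl --user enable --now periphery`, and if the machine boots unattended, run `loginctl enable-linger <user>` so the user service starts without anyone logging in.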

Once the Komodo “Core” server booted and the web interface was accessible, I began configuring it to connect to the other two servers now running Docker and Periphery. This was a bit of trial and error because at the time I didn’t quite understand what I was doing, and in hindsight I wish I’d made more notes along the way to share here, because it definitely tripped me up a few times; if I remember, I’ll come back and update this section.

With Komodo set up, my private Repo in Gitea configured to push and pull configuration from, and Servers that Stacks could be deployed to, the process of setting up the containers began.

Running services in Komodo #

At this point everything was configured and ready to deploy, but I didn’t know where the containers were going to store their data. I had thought about maybe using Docker-managed volumes, but they have performance issues and aren’t incredibly portable. It was at this point that I remembered reading about using a single folder to store all my containers’ app data, a pattern similar to how unRAID stores container data. I followed the pattern from the post and it worked like a charm! Once I had begun setting up my Stacks, tweaking service configurations, and monitoring logs, it was all coming together.
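
The pattern itself is just a convention: every Stack bind-mounts its state out of one well-known directory on the host, so backing up or migrating a service means copying a single folder. A minimal sketch, with /opt/appdata as an assumed location:

```yaml
# Sketch of the single app-data folder convention; /opt/appdata is an
# assumed host path, substitute whatever directory your servers use.
services:
  plex:
    image: plexinc/pms-docker:latest
    restart: unless-stopped
    volumes:
      - /opt/appdata/plex:/config   # all container state under one root
      - /mnt/media:/media:ro        # example media share, read-only
```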

Now that I’ve been running this setup for a few months, I can say with a high degree of certainty that Komodo is very stable; I’ve had literally no issues with it. One thing I have yet to do is create a disaster recovery plan, but given the excellent documentation and the fact that everything is stored in git, I’m not too worried if I need to fix a corrupt database or rebuild from scratch. Coupled with the XCP-NG backups that I do regularly, the likelihood of a complete rebuild is very low.

Closing notes #

Komodo is not for beginners; there is a lot of hidden knowledge you need to have before using it, and that comes with experience tinkering with Docker and self-hosting. That doesn’t mean you shouldn’t play with it; by all means, play around and learn, but don’t expect it to be “click-to-deploy” easy to get up and running.