Orchestrating my homelab self-hosted services with Komodo and git

This post covers how I went from a single server running all my self-hosted services to a small fleet of “servers” that I manage with Komodo, XCP-NG, and unRAID. That decision was prompted by a spell of unexpected downtime to install/fix/replace something in my unRAID server (what exactly doesn’t matter for this post). While it was down, I lost access to all of my Homebridge devices in the Home app, which meant I couldn’t turn off some devices or even see my security camera feeds; my Plex server became inaccessible to me and the remote users who stream from it; and I couldn’t read my news feeds from FreshRSS, among many other services.

To some this scenario might seem like a minor annoyance, but to me… well, in my quest for data privacy and independence, this situation felt devastating considering the time I’d invested not only in my own skills but in keeping everything running smoothly for so long. Self-hosting the services I need to accomplish tasks, or waste time, is something I take a lot of pride in, but extended periods of downtime started to boil my blood.

In my day job I work with all manner of software development hijinks and I’ve seen single-point-of-failure outages happen first hand in a production environment when unseasoned people build “scalable” systems. This got me thinking: what if I brought fault tolerance and resiliency to my homelab to mitigate these situations? I had enough hardware on hand to build out the infrastructure to support it, and I had the know-how to make it happen.

After sketching a quick breakdown of what I thought was needed, I began to split the services running on my monolithic unRAID server across multiple servers. Now that I’ve gone through this exercise I can pull one, or even two, servers offline and still have service availability (for the most part). Another benefit is that my infrastructure is now declarative code, and I can spin up new infrastructure more easily with a single git commit. It’s not as simple as an Ansible playbook that could orchestrate the creation of fully configured servers themselves, but it eliminates a whole class of error: having to remember every step required to get a working set of services running again from scratch.

If you care about your privacy, using a trusted VPN is a requirement for doing anything on the internet these days. I have used ExpressVPN for over 7 years and after reading the TorrentFreak 2025 review of VPN providers who do not log, I think you should too. If you sign up with my referral link, we BOTH get 1 month of free service 😁!

This is an affiliate link; it helps support keeping my website content up to date.

In the past I’ve used Portainer, which was a lot to learn at the time, so I quickly moved on to Dockge to simplify things, and eventually ended up on Komodo as a happy medium between the two.

A large part of this migration is neither original nor new information, but I will give a big shoutout to FoxxMD for the post that made Komodo click and showed how it could benefit my self-hosted infrastructure.

Getting started with Komodo #

Before diving in, I wanted to understand the following nomenclature that Komodo uses to describe its components. There is a lot to take in when starting and I’ll make note of the important things:

  • Resources — The basic building block of Komodo; a Resource can be a Server, Stack, Repo, etc.
  • Server — A machine (physical or virtual) running Docker and the Periphery agent, on which Komodo does its orchestration.
  • Stack — A Docker Compose deployment that can be deployed to a configured Server.
  • Repo — A connection to a Git repository containing Komodo configuration and Docker Compose files.
  • Resource Sync — A configuration-as-code system that lets you declaratively manage Komodo resources (Servers, Stacks, Deployments, etc.) through TOML files stored in Git repositories or on the host filesystem.
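To make the Resource Sync idea concrete, here is a minimal, illustrative sketch of what such a TOML file might contain. The server names, address, and repo below are placeholders, and the exact field names should be checked against the Komodo documentation:

```toml
# resources.toml — illustrative only; verify field names against
# the current Komodo Resource Sync docs.
[[server]]
name = "docker-01"
[server.config]
address = "https://10.0.0.10:8120"  # Periphery agent on this host
enabled = true

[[stack]]
name = "freshrss"
[stack.config]
server = "docker-01"                # deploy to the server above
repo = "me/homelab-stacks"          # git repo holding compose files
run_directory = "stacks/freshrss"   # folder containing compose.yaml
```

Once a sync like this is in the repo, Komodo can diff its live state against the file and apply (or report) the differences.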

If you go down the route that FoxxMD and I took, you’ll use Komodo to organize everything neatly into Stacks and deploy them from a git-backed repository. The benefit here is that you can make changes anywhere you have access to your repository, and you get some fault tolerance for free if the git repo is cloned in more than one place.
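Komodo doesn’t prescribe a repository layout, but a structure along these lines (one directory per Stack, plus the sync TOML at the root) is a common pattern; the names here are hypothetical:

```
homelab-stacks/
├── resources.toml          # Resource Sync definitions
└── stacks/
    ├── freshrss/
    │   └── compose.yaml
    ├── homebridge/
    │   └── compose.yaml
    └── plex/
        └── compose.yaml
```

Keeping one folder per service makes each Stack’s `run_directory` obvious and keeps diffs in pull requests small and reviewable.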

Migrating Docker Containers from unRAID #

One of the first tasks was figuring out how to translate the configurations I’d set up in unRAID over to Komodo. This is fairly simple in hindsight, but unRAID abstracts quite a bit of the Docker-related configuration away into its GUI and doesn’t make it very portable by default. Someone on Reddit recommended the Composerize plugin, which renders a Docker Compose-style representation of an application’s container configuration in unRAID. This helped a lot when pulling all the configuration data into dedicated compose.yaml files for each service.

Installing and using the plugin in unRAID is relatively straightforward:

  1. Navigate to the Plugins page of your unRAID instance and click the Install Plugin tab.
  2. Find the plugin installation file in the repo at plugin/composerize.plg.
  3. Use the Raw URL from the GitHub file, paste it into the “remote plugin installation” field, then click the “Install” button.
  4. A modal should pop up with the progress of the installation; it can be closed when completed.
  5. Navigate back to the Installed Plugins page, find the composerize icon, then click it.

You should now see a “Template” selection on the left side of the page and an empty “Preview Compose” box on the right. Clicking any of the template options in the dropdown should cause the preview area to update and show a textual representation of the unRAID Container Template as a Compose service.

You can go through each of the templates, copy the preview output, and paste it into the respective compose.yaml file for each service in a Stack directory to move it over to Komodo.
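For reference, the copied preview output ends up in a file shaped roughly like this. The FreshRSS values below are illustrative, not a copy of an actual unRAID template:

```yaml
# stacks/freshrss/compose.yaml — illustrative translation of an
# unRAID container template into Compose form.
services:
  freshrss:
    image: freshrss/freshrss:latest
    container_name: freshrss
    ports:
      - "8080:80"
    environment:
      - TZ=America/Toronto
    volumes:
      - /opt/appdata/freshrss:/var/www/FreshRSS/data
    restart: unless-stopped
```

The host-side paths and ports will come straight out of your unRAID template; the main cleanup work is normalizing them across services.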

Infrastructure as code #

Komodo can be used as a strictly UI-based configuration tool if you want, but its real strength is its ability to generate configuration files that represent its own state. This state can be stored in your own infrastructure-as-code files, and you can employ GitHub or another hosted git repository to store and manage them.

When I set up Komodo initially, I opted to use my self-hosted Forgejo instance; however, I quickly ran into a “chicken and egg” problem: Forgejo is hosted on my unRAID server, so when that server needs to be taken offline for any reason, my Komodo servers cannot update their state in the repository. This is resolved by bringing Forgejo back online, but it can be an inconvenience.

Setting up Komodo #

Once I had the basic configuration files organized in my repository, the next step was setting up the server that would run Komodo’s “Core” (the main komodo-core container and database) and the servers that would run Docker and Periphery. These “servers” are virtual machines in my XCP-NG infrastructure: I started by creating a lower-resourced VM specifically to run Komodo’s “Core”, then two more VMs with more CPU cores and memory to be my cattle machines that can be cloned or replaced.

During the Komodo setup process, do yourself a favour and run Periphery via systemd under a regular user account, not as root, and not as a Docker container (while convenient, that route can cause unnecessary headaches; here be dragons and trouble, ye be warned).
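As a sketch of that setup, a user-level systemd unit for Periphery might look like the following; the binary path is an assumption and depends on how you installed the agent:

```ini
# ~/.config/systemd/user/periphery.service — illustrative unit;
# adjust ExecStart to wherever the periphery binary lives.
[Unit]
Description=Komodo Periphery agent
After=network.target

[Service]
ExecStart=%h/.local/bin/periphery
Restart=on-failure
RestartSec=5

[Install]
WantedBy=default.target
```

Enable it with `systemctl --user enable --now periphery`, and run `loginctl enable-linger <user>` so the service keeps running when that user isn’t logged in.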

Once the Komodo “Core” server booted and the web interface was accessible, I began configuring it to connect to the other two servers running Docker and Periphery. This was a bit of trial and error because at the time I didn’t quite understand what I was doing, and in hindsight I wish I’d made more notes along the way to share here, because it definitely tripped me up a few times; if I remember, I’ll come back and expand this section.

With Komodo set up, my private Repo in Forgejo configured to push and pull configuration from, and Servers ready to have Stacks deployed to them, I could begin setting up the containers themselves.

Running services in Komodo #

At this point everything was configured and ready to deploy, but I hadn’t decided where the containers would store their data. I had considered named Docker volumes, but they aren’t very portable between hosts. Then I remembered reading about using a single folder to store all of my containers’ app data, a pattern similar to how unRAID stores container data. Following this pattern worked like a charm! Once I had begun setting up my Stacks, tweaking service configurations, and monitoring logs, it was all coming together.
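In practice that pattern just means bind-mounting a predictable per-service folder under one appdata root, similar to unRAID’s `/mnt/user/appdata`. The paths and image below are examples, not my exact configuration:

```yaml
# Illustrative: every service keeps its state under one root,
# e.g. /opt/appdata/<service>, making backups and VM migration simple.
services:
  homebridge:
    image: homebridge/homebridge:latest
    volumes:
      - /opt/appdata/homebridge:/homebridge
    restart: unless-stopped
```

Because all state lives under one directory tree, backing up or moving a service to another Server is mostly a matter of copying that folder and redeploying the Stack.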

Now that I’ve been running this setup for a few months, I can say with a high degree of certainty that Komodo is very stable; I’ve had literally no issues with it. One thing I have yet to do is create a disaster recovery plan. Given the excellent documentation, the fact that everything is stored in git, and the regular XCP-NG backups I take, the likelihood of ever needing a complete rebuild from scratch is very low.

Managing image updates with Renovate #

If you, like me, want to remain on the cutting edge, then it’s highly likely you’ll start using the latest tag for nearly every Docker container you spin up. By doing this, however, you open yourself up to a breaking change taking out your service after a seemingly benign image pull and container restart.

One way to manage this is to use Renovate for dependency management and have it update the Docker images in the compose.yaml files in your git repository instead of relying on Komodo to automatically pull the latest image. I won’t cover that setup here as you can accomplish it by following this post, but it is fairly trivial and the ROI is high.

The way this works is that Renovate runs on a schedule, looks at all the compose.yaml files in the repo, parses out the image tags in use, and then looks up each image on Docker Hub (and any other configured registries) to find the latest version that would supplant the one in the file. If a new version is available, Renovate opens a Pull Request detailing the update, usually including release notes. Through configuration you can enable auto-merging or disable it and manage the mergeability of the PRs yourself. Either way, doing these updates in the repo creates a history of version changes that can be referenced in the event an update causes an issue.
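A minimal `renovate.json` along these lines enables that flow. Renovate’s docker-compose manager picks up compose files by default, so the main decisions are which updates to automerge; treat these settings as a starting point, not a complete config:

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "packageRules": [
    {
      "matchDatasources": ["docker"],
      "matchUpdateTypes": ["minor", "patch"],
      "automerge": false
    }
  ]
}
```

Setting `automerge` to `true` for minor and patch bumps is a common middle ground once you trust the pipeline; major version bumps then still wait for a human to review the release notes.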

The second part of this is to enable “Poll for updates” and “Auto Update” on each Stack if you want Komodo to keep your stacks up to date, but in a way that you control.

Closing notes #

Komodo is not for beginners. There is a lot of hidden knowledge you need to possess before successfully deploying and using it, and that comes from the experience of tinkering with Docker and self-hosting; for those who don’t need a solution as full-fledged as Komodo, I recommend something like Dockhand. That’s not to say you shouldn’t play with it; by all means play around and learn, but don’t expect “click-to-deploy” levels of ease in getting it up and running.