We’ve been gradually transitioning from AWS and other cloud solutions to our dedicated bare-metal infrastructure. To many, it might seem like we’re following in the footsteps of Ruby on Rails creator DHH after his recent release of Kamal (formerly MRSK).
However, the truth is we began this migration quite some time ago, albeit with mixed success, as it initially left us reliant on certain cloud services like Docker Hub (which we’re also in the process of migrating away from).
Our journey started with heavy reliance on AWS, a common trend among tech companies. Hosting both our clients’ and our internal systems on AWS proved to be quite costly. While this didn’t immediately raise concerns, the mounting bills prompted us to explore alternative solutions – enter bare metal.
Our initial devops setup included:
- A single bare-metal DELL machine
- Docker Swarm (yes, you read that right!)
- A custom-built continuous deployment tool written in our beloved Ruby on Rails (we hadn’t discovered TeamCity at that point)
- Nginx (with manually configured files for each domain)
- Crontab for Certbot SSL
- Swarmpit for managing the entire Swarm stack
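The Certbot item in that list was nothing fancier than a cron entry. A minimal sketch of what such an entry can look like (the schedule and deploy hook are illustrative, not our exact config):

```
# Attempt renewal twice daily; certbot only renews certs close to expiry,
# then reloads nginx so the new certificate is picked up
0 3,15 * * * certbot renew --quiet --deploy-hook "nginx -s reload"
```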
At first glance, this setup might sound less than ideal (and it probably was), but it served our needs for a considerable period. As we acquired more machines, we devised a plan to transition to Kubernetes and establish a robust, scalable pipeline (more on that at the end).
Docker Swarm on Bare Metal
We chose Docker Swarm primarily due to its simplicity and ease of scaling beyond the initial applications without overthinking future challenges. It’s easy to install, relatively lightweight, and shares the same syntax as Docker Compose, which we were already familiar with. However, our Docker journey wasn’t without its challenges.
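Because Swarm stacks reuse the Compose file format, a service definition looked roughly like this (image name, replica count, and ports are illustrative, not our actual config):

```yaml
version: "3.8"
services:
  web:
    image: registry.example.com/acme/web:latest   # hypothetical image
    ports:
      - "8080:3000"
    deploy:                    # the `deploy` key is what Swarm adds on top of plain Compose
      replicas: 2
      restart_policy:
        condition: on-failure
```

The same file runs under `docker compose up` locally (minus the `deploy` section) and `docker stack deploy` on the cluster, which is exactly the familiarity we were leaning on.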
While other solutions like TeamCity exist in the market (though they have their limitations), we developed our own Ruby on Rails CD tool tailored to our specific needs. Essentially, it served as a bridge between our GitHub-hosted repositories, Docker Hub images, and our bare-metal infrastructure. Leveraging webhooks from Docker Hub and GitHub made this integration fairly straightforward. Our in-house tool maintained a registry of our self-hosted projects and handled various scenarios, including static builds, static Angular builds, and Docker images.
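At its core, the webhook side of such a tool boils down to verifying a signed payload before acting on it. A minimal sketch in plain Ruby of how GitHub's `X-Hub-Signature-256` header can be checked (HMAC-SHA256 over the raw request body; the secret here is hypothetical, and this is not our tool's actual code):

```ruby
require "openssl"

# Verify a GitHub webhook payload against the shared secret.
# GitHub sends the signature as "sha256=<hexdigest>" in the
# X-Hub-Signature-256 request header.
def valid_signature?(secret, payload, signature_header)
  expected = "sha256=" + OpenSSL::HMAC.hexdigest("SHA256", secret, payload)
  # Constant-time comparison to avoid leaking timing information
  OpenSSL.secure_compare(expected, signature_header)
end
```

Only after the signature checks out does the tool look up the repository in its project registry and kick off the matching build path (static, Angular, or Docker image).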
We manually configured routing rules for our internal applications, although we did create a starter script for common scenarios. Nginx was our choice because it’s a popular and robust open-source solution for this purpose. We never encountered significant issues with this part of our system, aside from the labor-intensive manual work.
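Each domain got a hand-written server block along these lines (the domain, certificate paths, and upstream port are illustrative):

```nginx
server {
    listen 443 ssl;
    server_name app.example.com;

    # Paths as laid out by Certbot for this domain
    ssl_certificate     /etc/letsencrypt/live/app.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;

    location / {
        # Forward to the Swarm service published on this port
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Multiply this by every domain we hosted, and the "labor-intensive manual work" part becomes obvious.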
Initially, Swarmpit served as our management UI for Docker Swarm. It featured an appealing mobile UI, allowing us to start, stop, or restart stacks directly from our phones. Additionally, it provided a convenient way for team members without direct server access to access project logs, inspect environment variables, and debug as needed.
This marked the initial phase of our bare-metal setup, which eventually revealed several challenges:
- Excessive manual work
- Fragility of the system
- Limited scalability
- Lack of control over resource allocations
- Integration difficulties with other tools
For instance, we grappled with Docker Swarm’s network IP pool issues that we never fully resolved. Consequently, we made the decision to transition to a more robust system that scales better and offers a superior developer experience.
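For reference, Swarm does expose knobs for widening the default address pool at init time, which is the usual first mitigation for overlay-network subnet exhaustion (the values below are illustrative, and in our case tweaks like this never fully resolved the problem):

```
# Re-initialize the swarm with a larger address pool
# and smaller per-network subnets
docker swarm init --default-addr-pool 10.20.0.0/16 \
                  --default-addr-pool-mask-length 26
```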
Enter Kubernetes! We decided it was time to switch gears and do what the rest of the world is doing (except in our own backyard and under our full control!).
Our Kubernetes-based system introduced:
- A custom Docker registry, eliminating our reliance on Docker Hub
- Cert-manager cluster issuer, replacing Certbot
- Nginx Ingress Controller, eliminating manual Nginx configuration
- Self-hosted GitHub Actions runners, reducing our reliance on GitHub-hosted infrastructure
- Auto-deployment and GitOps with Argo
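To give a flavor of how cert-manager takes over Certbot's job, a ClusterIssuer resource like the following (the issuer name, contact email, and secret name are illustrative) handles ACME challenges cluster-wide, and individual Ingress resources then just reference it via the `cert-manager.io/cluster-issuer` annotation:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: ops@example.com            # hypothetical contact address
    privateKeySecretRef:
      name: letsencrypt-prod-key      # where the ACME account key is stored
    solvers:
      - http01:
          ingress:
            class: nginx              # solved through the Nginx Ingress Controller
```

Certificates renew automatically, with no crontab in sight.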
Stay tuned for upcoming articles where we’ll provide a detailed, technical overview of our new system.