Like most companies transitioning to the cloud today, we face a major challenge in empowering our engineering teams to adopt cloud-native technologies such as Kubernetes and CI/CD: the high barrier to entry of simply setting up a local machine with a unified set of tools and configs to interact with those cloud environments. As a DevOps team, we would ideally like all our colleagues to have easy access to the same tools and environments, and even to extend that to our CI/CD pipelines, which have no reason to run different versions of those tools than our development machines do.
Our Cloud Journey
Containers are a natural go-to for the job. However, while they are great at encapsulating many tools at pinned versions with repeatable, predictable results, containers were designed with remote services and batch jobs in mind, not day-to-day use as a local work environment.
When you start to use containers as a daily working tool, you quickly stumble on many roadblocks. You need to master Docker CLI commands and flags to launch your container under various conditions: environment variables, volume mounts and port mappings. The syntax differs depending on whether you already started the container in the past, or whether it is already running and you just want to open a few extra shells into it. What if you need to access multiple Kubernetes clusters in parallel, each with its own container? And don’t forget that any config files you modify inside your container will be discarded the next time you rebuild and launch a fresh copy of the image! Finally, if you want to share the same tools and configs with your colleagues and (gasp!) even your CI/CD pipeline, you’re in for an extra ride!
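To make the point concrete, here is a sketch of the different incantations a single tool container can require depending on its state (the image name `my-devops-tools`, container name and mount paths are hypothetical examples):

```shell
# First launch: create the container with env vars, mounts and port mappings
docker run -it --name devtools \
  -e KUBECONFIG=/root/.kube/config \
  -v "$HOME/.kube:/root/.kube" \
  -v "$HOME/project:/work" \
  -p 8001:8001 \
  my-devops-tools

# Later: the container already exists but is stopped, so `docker run` would
# fail on the name conflict; you have to restart and re-attach instead
docker start devtools
docker attach devtools

# Already running and you want one more shell? A third syntax again
docker exec -it devtools bash
```

Multiply that by one container per Kubernetes cluster, and by every teammate who has to remember the same flags, and the friction adds up quickly.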
Naively patching up some Dockerfile with your desired tools is not the end of the road. For us, it was just the beginning.
We have long tinkered with the idea of packaging up a neat multi-purpose Docker container that would handle most use cases and make it seamless to set up and interact with different Kubernetes clusters. Starting with a few prototypes, and then with some inspiration from Cloud Posse’s Geodesic container (which does a great job of streamlining the installation and launching of the container), our experiments finally evolved into a tool generic and mature enough to be shared with the community!
Our goal has been to share such a tool with the DevOps community and, thanks to the folks at Sama, this vision has become a reality. Let me introduce you to an MIT-licensed, open source, Kubernetes-oriented, general-purpose Docker container for devs/DevOps and custom CI/CD pipelines, which we affectionately call “Factotum” (from Latin, roughly meaning an employee who does all kinds of work).
What to Know Before You Begin
Even though you can try the vanilla build of Factotum as is from GitHub and Docker Hub, please understand that Factotum is really intended to be customized and made your own in order to leverage its full potential. If you decide to try it, it is worth forking the repo, customizing the Dockerfile and following the instructions in README.md. While setting up your own customized build of Factotum can be rather involved, rest assured that using, maintaining, upgrading and sharing it with your teammates afterwards is intended to be as straightforward as possible; it just takes that little initial effort! And if you encounter any issues or can’t figure out how to set it up correctly, don’t hesitate to file an issue in the GitHub repo and give us feedback.
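As a rough sketch, the fork-and-customize workflow described above boils down to a few steps (the repo URL, org name and image tag below are hypothetical placeholders, not Factotum’s actual coordinates; see the README.md for the real instructions):

```shell
# Fork the repo on GitHub, then clone your fork
git clone https://github.com/your-org/factotum.git
cd factotum

# Edit the Dockerfile to add or pin your team's tools and configs,
# then build your customized image
docker build -t your-org/factotum:custom .

# Push it to your registry so teammates and CI/CD pipelines
# all pull the exact same toolchain
docker push your-org/factotum:custom
```

The payoff is that the Dockerfile becomes the single source of truth for tool versions: upgrading everyone, including the pipelines, is just a rebuild and a push.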