Using Kustomize for Kubernetes deployment
We recently moved our back-end from standalone Debian servers on AWS to Google Kubernetes Engine (GKE). How did we conclude that Kustomize was the best solution for us?
Some background
Software on Kubernetes (K8s) is deployed in the form of containers; mostly Docker containers, although other flavors are available. Containers are stored in a container repository. Repositories generally store all the versions built. Each container build can be tagged with different strings. Then, when creating or updating a K8s cluster, .yaml definition files use tags to specify the exact container version to be used.
One common, and much frowned upon, method of specifying a version is a special-purpose tag called :latest. It tells K8s to use whichever build is the most recent of its type in the repository. It’s considered bad practice since it can be difficult to know exactly what the latest build is. For example, if you and your co-worker build and push the same container at the same time with different code, :latest will refer to whichever build finished last. Suffice it to say, it’s a source of confusion that is best avoided when deploying software to be used by paying customers.
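To make the difference concrete, here is a rough sketch of the only part of a Deployment spec that changes (the image name is made up for illustration; the tag format is the one we describe later):

  # Risky: whatever build happened to be pushed most recently
  image: gcr.io/example-project/api-server:latest

  # Reproducible: one specific, known build
  image: gcr.io/example-project/api-server:1.0.0-70-g39e7fb0-test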
Latest concerns
Despite the shortcomings, many projects use the :latest mechanism since setting up something better takes time. So the task is not prioritized — at least not until something significant breaks.
We were no different. During development we used :latest for quite a while. As the team grew and we started setting up multiple environments it became clear that something needed to change. Then, as we investigated how to best tag our 18 different containers, we found more things to worry about.
Our concerns went like this:
- The K8s definition .yaml files are 90% the same across environments (test and production)
- Maintaining full sets of almost identical files for each deployment environment (cluster) is incredibly boring. Because it’s boring we’ll get sloppy about it. This sounds like a bunch of stupid mistakes just waiting to happen
- We need a way to tag builds to control what versions run where. It should be possible to ensure that the production environment has the “old” well-tested build, that the test environment has the next candidate release, and so on.
- It should be possible to promote battle-hardened containers from the test environment to production without re-building
Googling around, there was plenty of advice, although most of it seemed geared towards advanced CI/CD pipelines with Jenkins, Spinnaker, Ansible, or tools like Helm.
Abstracted abstractions
While all of these “grown-up” solutions are great and solve real problems, they introduce more tools, more configuration, and in most cases, yet another layer of abstraction (“YALA” anyone?). For someone still using their pink sparkly K8s training wheels and having just gotten the hang of clawing out .yaml files, writing configuration for tools to make output for tools seemed like asking for trouble. Furthermore, anyone coming on-board to the project would have to install and learn the tools as well. That’s overhead we could do without.
As other development proceeded we kept revisiting the topic. It became increasingly urgent as we were getting close to having a real system to test on a larger scale.
Kustomize to the rescue
We gave Kustomize a try, and to our amazement had a bare-bones PoC which solved our concerns up and running within a couple of hours. In a day we had something which handled configuration differences, did tagging of containers, and controlled deployment based on tags. Nothing fancy. Decidedly lo-tech. Aspiring to crude. But then again, crude is often underestimated.
I put the shotgun in an Adidas bag and padded it out with four pairs of tennis socks, not my style at all, but that was what I was aiming for: If they think you’re crude, go technical; if they think you’re technical, go crude. I’m a very technical boy. So I decided to get as crude as possible. These days, though, you have to be pretty technical before you can even aspire to crudeness.
— Johnny Mnemonic by William Gibson
The real deal — from commit to production
Our goal was to set up the smallest possible system for getting commits safely from the master git branch into the test environment and then to production. There is nothing special about it. Although in this day and age, where it sometimes seems like it takes an additional cluster dedicated to building software and a team of specialists to babysit it, that is perhaps special in itself. There are no automatically triggered builds or deployments, no automatic tests. It mainly consists of scripts which do small tasks.
Branches
Some background information before diving into the details. We use three branches:
- master this is where all development happens. Everyone builds and runs this locally. It should never break, and mostly never does
- test where we push master whenever we reach a point we’re considering for release. This branch is deployed on the K8s test cluster
- prod the code running in production. Nothing makes it into prod without having gone through validation and smoke-tests in test
For experiments and bigger changes we use short-lived feature branches.
Containers are built from the test branch and promoted/tagged to run on the production cluster once found to be stable. Considering that we rarely build anything from the prod branch, we could have managed without it. But we’ve found that it’s convenient to have a branch which contains the latest production code for quick reference — without having to look at commit logs and doing too much git-fiddling.
Similarly, it might have been possible to do away with the test branch, and only tag master when deploying to the testing environment. Possible — although the thought made people’s eyes cross. It has way more mental overhead, and we’ve concluded that “thinking” is to be avoided at all costs when deploying code.
YAML files and Kustomize
Our entire back-end system consists of one set of K8s .yaml files. The default files have settings for the test environment. There are patches (patchesStrategicMerge in Kustomize parlance) where prod differs from test. The patches for the production environment mostly consist of:
- ConfigMaps (that is, configuration — the biggest difference between the environments)
- Services with public IPs. The environments are exposed on different IP addresses
- Containers with higher CPU and memory limits in the production environment
The magic of Kustomize comes into play when we deploy to production. The patches are applied to the .yaml files. This is what enables us to use the same definition files across different environments.
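To give a feel for how little is involved, here is a stripped-down sketch (the file names, container name, and values are illustrative, not our actual setup). The prod kustomization.yaml pulls in the test definitions as a base, lists the patches, and pins the container tags:

  # prod/kustomization.yaml (sketch, Kustomize v2-style fields)
  bases:
  - ../test
  patchesStrategicMerge:
  - api-server-resources.yaml
  images:
  - name: gcr.io/example-project/api-server
    newTag: 1.0.0-70-g39e7fb0-prod

A patch only states what differs — for example, higher resource limits for one container:

  # api-server-resources.yaml (sketch)
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: api-server
  spec:
    template:
      spec:
        containers:
        - name: api-server
          resources:
            limits:
              cpu: "2"
              memory: 4Gi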
For reference, the files used to set up the testing environment are about 2700 lines long. The patches for the production environment total 350 lines. This is quite a bit more manageable than using two full sets of .yaml files which are almost, but not quite, the same.
Typical deploy to the test cluster
A couple of times per week, or when we see that there are fixes for 5 or more Trello tickets, we build and deploy for testing. We have scripts which handle most of this. Except obviously the sanity/smoke testing. The whole procedure, if we “unroll” the scripted parts, looks like this:
- Pull the latest changes from the master branch
- Run what little we have in the way of automated tests
- Manually smoke-test local build to check that nothing weird is happening
- Merge changes from master to the test branch
- Build containers from the test branch and push them to the container repository using either npm or make depending on the container
- Find the most recent tags in the container repository and list them for each container. Check that these are the tags we want
- Copy the list of new tags into the dev/kustomization.yaml file
- Check in the changed dev/kustomization.yaml
- Trigger the deployment using Kustomize: cd ./yaml && kustomize build test | kubectl apply -f -
- Finally — move all Trello tickets from “To be verified” status to “Deployed in test”, so the gals and guys in QA know what’s next
All in all, it takes about 10–15 minutes — most of which is spent on manual testing. QA then checks all the tickets and bumps them to “Verified” once they’re sure bugs are corrected and features work as expected.
The scripts we use do basic stuff like changing branch in several source repositories at once, merge master into test, and building the most common repositories. Most are 10–20 lines long, and kept simple so that anyone with a basic knowledge of shell scripts can understand them.
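For a flavor of what these scripts look like, here is a hypothetical sketch of the “merge master into test everywhere” helper (repository names and paths are made up; our real scripts are similar in spirit):

  #!/bin/sh
  # Merge the latest master into test in every back-end repo, then push.
  set -e
  for repo in api-server auth-service admin-ui; do
    (
      cd "$HOME/src/$repo"
      git fetch origin
      git checkout test
      git merge --no-edit origin/master
      git push origin test
    )
  done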
Releases
We’re not operating on a super strict release schedule. The cadence has varied from a single release in three months, to five releases per week. All very pragmatic. On average, there is usually a release every two weeks or so.
The procedure for deploying to the production environment is quite a bit shorter since we don’t build anything. It’s mainly about looking up the container tags currently used in the test environment. For example, 1.0.0-70-g39e7fb0-test. Read that as: “70 commits after git tag 1.0.0, test branch, git commit hash 39e7fb0”. We add a tag which indicates the container is ready for production by changing the suffix from test to prod. For example, 1.0.0-70-g39e7fb0-test becomes 1.0.0-70-g39e7fb0-prod. Then we push the update to the container repository, update the prod/kustomization.yaml file with the new tag, and run kustomize build prod | kubectl apply -f -.
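One way to do that re-tagging with plain Docker looks roughly like this (the registry path is made up; the tag is the example from above). The important part is that nothing is rebuilt: the exact bytes that were tested are what gets promoted.

  docker pull gcr.io/example-project/api-server:1.0.0-70-g39e7fb0-test
  docker tag gcr.io/example-project/api-server:1.0.0-70-g39e7fb0-test \
             gcr.io/example-project/api-server:1.0.0-70-g39e7fb0-prod
  docker push gcr.io/example-project/api-server:1.0.0-70-g39e7fb0-prod

  # Then point prod/kustomization.yaml at the -prod tag and deploy:
  cd ./yaml && kustomize build prod | kubectl apply -f -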
Kubernetes takes care of the rest, by starting the new container (pod) and tearing down the old one.
Demo time!
We’ve thrown together a minimal version of the directories and files we use for anyone to play around with.
Here’s how to check it out:
- Download Kustomize v2.0.3 and put it somewhere you can reach it (your PATH).
- git clone https://github.com/audunf/kustomize-example
- cd kustomize-example/yaml
- kustomize build test | tee test.yaml
- Rest your weary eyes on the output to get familiar with it
- kustomize build prod | tee prod.yaml
- diff test.yaml prod.yaml to check how the output differs
Containers are pushed to the container registry with version set to a combination of git tag (if present), number of commits since last tag, commit hash, and branch name. The prod/kustomization.yaml and test/kustomization.yaml files are used to select which version ends up in the different environments.
To see how the container version tag will look for a git repository, do this:
cd <some git repo>
echo `git describe --tags --always --dirty`-`git rev-parse --abbrev-ref HEAD`
For example:
- No tags, clean: 8024499-master
- No tags, dirty (changes which have not been checked in): 8024499-dirty-master
- Master branch tagged with 1.0.0, dirty, two commits made after the 1.0.0 tag was set, with a commit hash: 1.0.0-2-g7066a24-dirty-master
For more information, please see the repo on github.
Who we are
Celerway makes software for mobile SD-WAN routers. We use K8s for everything on the back-end.