Kubernetes has been a favorite tool at HOV – it’s versatile and verbose. There are numerous options for deployment and cluster management, but we can’t use them all. Let’s take a look at a couple of decisions we made and how we applied them.
Early Stages of Adoption
We had only a vague idea of how to bridge our existing CI/CD practices into Kubernetes, so we ran a couple of experiments.
The first was a disruptive approach using kustomize. We kept a single repository containing all the definitions reused across environments, relying on kustomize’s two main features: overlays and variants. An overlay is a set of reusable definitions layered on top of a base for a specific environment; a variant is the environment-specific output that an overlay produces.
Kustomize also lets us manage metadata and generate environment-specific config maps.
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namePrefix: development-
commonLabels:
  variant: development
  org: reitscreener-job
bases:
  - ../../base
configMapGenerator:
  - name: reitscreener-job
    envs:
      - development.env
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namePrefix: production-
commonLabels:
  variant: production
  org: reitscreener-job
bases:
  - ../../base
configMapGenerator:
  - name: reitscreener-job
    envs:
      - production.env
In the environment configurations above, the bases field points to the base folder. The base folder contains the kustomization.yml file, which pulls in all our reusable resources.
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
commonLabels:
  org: reitscreener
resources:
  # Global Jobs
  - alert.yml
  - exchange-rate.yml
  # SGX Jobs
  - sgx-annual-report.yml
  - sgx-quarter-report.yml
  - sgx-announcements.yml
  - sgx-bondyield.yml
  - sgx-index-ftse.yml
  - sgx-insight.yml
  - sgx-share-price.yml
  - sgx-insider-trade.yml
  - sgx-insider-trade-links.yml
  - sgx-volatility.yml
We can then use overlays to reuse the base definition while deploying to different environments.
sudo kustomize build overlays/production | sudo kubectl apply -f -
sudo kustomize build overlays/development | sudo kubectl apply -f -
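The build commands above assume a base-plus-overlays folder layout. As a sketch, it might look like this (the top-level folder name here is an illustrative assumption, not taken from our actual repository):

```shell
# Recreate the assumed kustomize folder layout: a shared base plus one
# overlay folder per environment, each with its own env file.
mkdir -p k8s-demo/base k8s-demo/overlays/development k8s-demo/overlays/production
touch k8s-demo/base/kustomization.yml
touch k8s-demo/overlays/development/kustomization.yml \
      k8s-demo/overlays/development/development.env
touch k8s-demo/overlays/production/kustomization.yml \
      k8s-demo/overlays/production/production.env
# List the resulting files
find k8s-demo -type f | sort
```

From inside an overlay folder, ../../base resolves to the shared base, which is why both environment kustomizations can point at the same relative path.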
The second experiment was to build a centralized deployment service and expose APIs for the CI service to call.
export VERSION=`cat .version` &&
curl --location --fail --request POST 'https://devops.inhouseproject.com/deploy' \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c' \
  --data-raw '{
    "environment": "development",
    "type": "deployment",
    "service": "inhouseservice",
    "version": "'"$VERSION"'"
  }'
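One subtle bit in the request above is how $VERSION gets spliced into the single-quoted JSON body: the "'"$VERSION"'" pattern closes the single quote, inserts the double-quoted variable, and reopens the single quote. A minimal standalone demonstration (the version number is an assumed stand-in):

```shell
# Splice a shell variable into a single-quoted JSON string, as done in the
# curl --data-raw payload above. VERSION=42 is an assumed example value.
VERSION=42
PAYLOAD='{"service": "inhouseservice", "version": "'"$VERSION"'"}'
echo "$PAYLOAD"
```

Without this quoting dance, the variable would be sent literally as $VERSION, since single quotes suppress shell expansion.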
The good thing about this approach is that we can control access at the API level, but we were already using GitHub and CircleCI, both with good access control features, for our CI pipeline. We also experienced service downtime from maintenance and from git concurrency issues: we were calling git commands in the application layer, which should have been the least of our concerns.
Lastly, we are now looking at decentralized deployments, where each repository is responsible for its own deployment pipeline.
We have looked into multiple tools (Draft, Helm, Gitkube), but each solves only part of our problem. Helm is not as straightforward as it looks, and Draft came closest, but it is no longer maintained and its repository has been marked read-only.
The next best thing we have is Skaffold. It follows our existing way of deploying containers – build, push, and deploy – and we find it simple and straightforward to use. Let’s take a closer look at how we implemented it in one of our projects in the next sections.
YAML Configuration
Skaffold relies on a configuration file that can be generated using the following command:
❯ skaffold init
Configuration skaffold.yaml was written
You can now run [skaffold build] to build the artifacts
or [skaffold run] to build and deploy
or [skaffold dev] to enter development mode, with auto-redeploy
This is a sample skaffold configuration file. It shows the ECR repository and the k8s manifest files.
apiVersion: skaffold/v2beta7
kind: Config
metadata:
  name: sample-project-api
build:
  artifacts:
    - image: xxxxxxxxxxxx.dkr.ecr.ap-southeast-1.amazonaws.com/sample-project-api
  local:
    push: true
deploy:
  kubectl:
    manifests:
      - k8s/**/*.yaml
Skaffold builds the images locally using docker commands, as configured by the local definition.
It also pushes the built image to the given ECR repository, provided you have AWS access.
It then deploys all the k8s manifests declared in the configuration using the kubectl command.
There is always room to do more, but this is the minimum setup that is required.
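For instance, Skaffold supports profiles for per-environment behavior. A sketch of how the configuration above might grow, assuming we split our manifests per environment (the profile names, context name, and manifest paths here are assumptions, not from our actual setup):

```yaml
# Hypothetical extension of the skaffold.yaml above. Profiles let one config
# target different environments; activate one with `skaffold run -p <name>`.
profiles:
  - name: development
    activation:
      - kubeContext: dev-cluster   # assumed context name
    deploy:
      kubectl:
        manifests:
          - k8s/overlays/development/*.yaml   # assumed path
  - name: production
    deploy:
      kubectl:
        manifests:
          - k8s/overlays/production/*.yaml    # assumed path
```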
Building Images
Building images in our previous setup was straightforward but still tedious.
We created a CircleCI job that calls docker commands and pushes the images into ECR.
version: 2
jobs:
  ...
  build_docker_image:
    docker:
      - image: in-house-circleci-image:latest
        environment:
          AWS_ACCESS_KEY_ID: ...
          AWS_SECRET_ACCESS_KEY: ...
          DOCKER_REGISTRY: xxxxxx.dkr.ecr.ap-southeast-1.amazonaws.com
    steps:
      - setup_remote_docker:
          version: 18.05.0-ce
          docker_layer_caching: true
      - run:
          name: save SHA to a file
          command: echo $CIRCLE_SHA1 > .circle-sha
      - restore_cache:
          keys:
            - repository-{{ checksum ".circle-sha" }}
      - restore_cache:
          keys:
            - build-{{ checksum ".circle-sha" }}
      - restore_cache:
          keys:
            - dependencies-production-{{ checksum "package-lock.json" }}
      - run:
          name: build docker image
          command: >
            docker build .
            --tag $DOCKER_REGISTRY/$IMAGE_NAME:$CIRCLE_BUILD_NUM
      - run:
          name: login to docker registry
          command: eval `aws ecr get-login --no-include-email --region ap-southeast-1`
      - run:
          name: push docker image to docker registry
          command: docker push $DOCKER_REGISTRY/$IMAGE_NAME:$CIRCLE_BUILD_NUM
      - run:
          name: tag latest image
          command: docker tag $DOCKER_REGISTRY/$IMAGE_NAME:$CIRCLE_BUILD_NUM $DOCKER_REGISTRY/$IMAGE_NAME:latest
      - run:
          name: push latest image
          command: docker push $DOCKER_REGISTRY/$IMAGE_NAME:latest
      - run:
          name: store version number
          command: echo $CIRCLE_BUILD_NUM > .version
      - store_artifacts:
          path: .version
      - save_cache:
          key: version-{{ checksum ".circle-sha" }}
          paths:
            - .version
  ...
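Note how the job above writes $CIRCLE_BUILD_NUM into a .version file; the deploy step later reads the tag back out of it. The handoff itself is just this (with an assumed build number standing in for the value CircleCI sets automatically):

```shell
# Simulate the version handoff between CI jobs: the build job writes the
# build number to .version, and a later job reads it back as the image tag.
CIRCLE_BUILD_NUM=123   # assumed value; CircleCI provides this in real runs
echo "$CIRCLE_BUILD_NUM" > .version
VERSION=$(cat .version)
echo "deploying image tag $VERSION"
```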
If you compare it to the new setup, all the details are now hidden behind a single command.
version: 2
jobs:
  ...
  build:
    docker:
      - image: circleci/node:12.18.2-stretch
    steps:
      - attach_workspace:
          at: .
      - setup_remote_docker:
          version: 18.06.0-ce
          docker_layer_caching: true
      - restore_cache:
          keys:
            - dependencies-{{ checksum "package-lock.json" }}
      - aws-cli/setup
      - run:
          name: login to aws
          command: bash ./aws-login.sh
      - run:
          name: install skaffold
          command: |
            curl -Lo skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64
            chmod +x skaffold
            sudo mv skaffold /usr/local/bin
      - run:
          name: npm run build
          command: npm run build:ci
      - run:
          name: skaffold build
          command: skaffold build --tag=$CIRCLE_BUILD_NUM
  ...
Deploying Images
The deployment part is where we migrated from API requests to the local kubectl.
This setup uses the local kubectl, which should point to the correct cluster via the kubeconfig.
version: 2
jobs:
  ...
  deploy:
    docker:
      - image: circleci/node:12.18.2-stretch
    steps:
      - attach_workspace:
          at: .
      - setup_remote_docker:
          version: 18.06.0-ce
          docker_layer_caching: true
      - restore_cache:
          keys:
            - dependencies-{{ checksum "package-lock.json" }}
      - aws-cli/setup
      - run:
          name: login to aws
          command: bash ./aws-login.sh
      - run:
          name: install skaffold
          command: |
            curl -Lo skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64
            chmod +x skaffold
            sudo mv skaffold /usr/local/bin
      - run:
          name: skaffold deploy
          command: skaffold deploy --images 498457837717.dkr.ecr.ap-southeast-1.amazonaws.com/sample-project-api:$(cat .version)
  ...
Closing Thoughts
It is easier to put tools into action when you already understand the concepts behind them, because tools come with a lot of features you might not need yet. The basic flow of our deployment is to build, push, and deploy. Fortunately, the community has made plenty of tools available; we just need to pick the ones closest to what we are already doing. It’s always a good idea to explore things outside of the accepted norms, as this is how you become holistic and not dogmatic.