IAS Technology Insider is dedicated to discussing computer science, software engineering, and emerging technologies. From deep dives into machine learning algorithms and cloud computing architectures to discussions on cybersecurity trends and data analytics methodologies, our tech experts offer insights and analyses that resonate with enthusiasts and professionals alike.
By Associate Software Engineer, IAS
At IAS, we are gradually transitioning some of our tools and libraries to Kubernetes, decoupling them from our monolithic application into a more efficient microservices architecture. However, early in this process, a common software challenge emerged: how can developers effectively test application changes in a Kubernetes environment?
Unit tests can be run against the application’s code using various mocking frameworks, which provides a foundational level of testing. The real challenge, however, is executing integration tests within the new Kubernetes environment where the application is intended to operate.
The challenge of testing functionality
Being able to run and test an application’s functionality and correctness in an environment that simulates how and where the service will be accessed provides an added layer of confidence when deploying to any production-level cluster(s).
As an added benefit, running test suites against an application living in a Kubernetes cluster will provide important insights into the effectiveness of your Kubernetes configuration (e.g., Docker images, resources, environment variables).
One way a developer can do this is to deploy any changes to an IAS Kubernetes testing cluster. To do that, they first need to create and merge a pull request from a K8s tenant repository and wait for the canary process to detect and release that change. Unfortunately, this process can quickly become long and troublesome if a developer needs to continuously make minor changes to the application’s code and observe the outcome. So, how do we achieve the same reliability as deploying directly to an IAS Kubernetes testing cluster, but locally?
Short answer? Minikube.
What exactly is minikube?
At its core, minikube is a tool that allows you to run a single-node Kubernetes cluster on your local machine. With a few basic hardware requirements (2 CPUs, 2 GB of free memory, 20 GB of free disk space) and a Docker container, minikube enables those interested in Kubernetes to experience what it’s like to work within the Kubernetes ecosystem. Once installed, you’re able to quickly start and stop your cluster, create deployments on the fly, and expose them for quick and easy access to your services (a minimal sketch of that workflow follows the list below). Even more relevant to our needs was its ability to deploy and use our Kubernetes-related tech stack, which includes but is not limited to:
- Docker
- Helm
- Kustomize
- etc.
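For reference, here is a minimal sketch of that basic workflow, assuming the Docker driver and the minimum resource values mentioned above; exact flags will vary with your setup:

```bash
# Start a single-node cluster backed by the Docker driver
# (resource values below are the documented minimums, adjust as needed)
minikube start --driver=docker --cpus=2 --memory=2048 --disk-size=20g

# Confirm the cluster and its node are healthy
minikube status
kubectl get nodes

# Stop or tear down the cluster when you're done
minikube stop
minikube delete
```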
What initially seemed like only a lightweight tool for learning within a Kubernetes environment was suddenly our answer.
Deploying applications to minikube
Our first step was obtaining a Docker image of our application. This image can be built locally with the application’s Dockerfile or pulled from a remote repository. At IAS, whenever a pull request is created in our application’s repository, a Docker image reflecting the changes in the pull request can be built and uploaded to a private Docker repository using a comment trigger. So, we were able to easily pull that Docker image and load it into our minikube cluster. Docker images are also automatically generated and uploaded to the private Docker repository on new application releases. Because Docker images are versioned and tagged, we were able to indicate which version we’d like to pull and deploy (a PR version or a release version).
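In practice that step looks roughly like the sketch below, with the registry host and image name as placeholders rather than our actual values:

```bash
# Pull the image built for a PR or release from the private registry
# (registry host, image name, and tag are placeholders)
docker pull registry.example.com/our-app:PR-123

# Make the image available inside the minikube cluster
minikube image load registry.example.com/our-app:PR-123
```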
As of today, all of our applications’ Kubernetes configuration files are packaged and versioned by Helm into their own respective “charts”. You can read more about Helm and Helm Charts here. Our Helm charts, like our Docker images, are also automatically uploaded into a private Helm repository. Once there, they’re used to configure and deploy our applications into IAS Kubernetes clusters. So our next step was pulling these charts so we could deploy our application to our local minikube cluster.
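A rough sketch of pulling a chart from a private Helm repository; the repository name, URL, and chart version below are placeholders for illustration:

```bash
# Register the private Helm repository and refresh its index
helm repo add ias-private https://helm.example.com/charts
helm repo update

# Download the chart version that matches the application release
helm pull ias-private/our-app --version 1.2.3
```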
Once we had a Docker image and a Helm chart on our local machine, we were able to deploy to our minikube cluster and watch as our application’s pods came up.
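A hedged sketch of that deployment step, assuming the chart exposes the usual image.repository and image.tag values; the release, chart, and value names here are illustrative rather than our actual configuration:

```bash
# Install the chart into the local cluster, pointing it at the loaded image
helm install our-app ./our-app-1.2.3.tgz \
  --set image.repository=registry.example.com/our-app \
  --set image.tag=PR-123

# Watch the application's pods come up
kubectl get pods -w
```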
All that was left was to make our pods accessible by running a port-forward command on whatever port we choose, and voila, we were now able to fully access and interact with our application locally!
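For example, something along these lines, with the service name and ports as placeholders:

```bash
# Forward a local port to the service running in minikube
kubectl port-forward svc/our-app 8080:8080
```

With the port-forward running, the application is reachable at localhost:8080.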
Once we connected to our pod(s), we were able to start building a test suite that acts as a client to our application.
Testing services and results
Before writing any tests, we needed to make sure that we established a connection between our test suite and our application. Whether the service is REST or gRPC based, it was important that the connection to the service existed and persisted throughout the lifetime of testing.
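As a simple illustration, assuming the port-forward from earlier and a hypothetical health endpoint, the connection can be sanity-checked before the suite runs (grpcurl is a separate CLI and requires server reflection to be enabled):

```bash
# REST: hit a health/readiness endpoint through the port-forward
curl --fail http://localhost:8080/health

# gRPC: list the services the server exposes
grpcurl -plaintext localhost:8080 list
```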
Using popular testing frameworks like Spock, we built a comprehensive test suite that encompasses our applications’ most important features. This includes but is not limited to expected responses to different requests, headers returned by various endpoints, and Kubernetes-related configuration values.
Because these tests can now run essentially anywhere, we’ve taken the opportunity to incorporate them into our CI/CD pipeline using a Jenkins-backed environment. Our Jenkinsfile is configured to listen for comment triggers like “run integration” on a pull request; once triggered, it runs the scripts we created earlier to deploy the application to minikube and execute our test suite. Today, any pull request made against our application’s repository is required to run our test suite through Jenkins before it can be merged.
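Conceptually, the job that the comment trigger kicks off is little more than a script chaining the steps above. A hypothetical sketch, where every name, tag, path, and command is a placeholder rather than our actual pipeline:

```bash
#!/usr/bin/env bash
# Hypothetical wrapper invoked by the Jenkins job.
set -euo pipefail

IMAGE_TAG="${1:-PR-123}"

minikube start --driver=docker
minikube image load "registry.example.com/our-app:${IMAGE_TAG}"

# Deploy the previously pulled chart against the loaded image
helm install our-app ./our-app-1.2.3.tgz --set image.tag="${IMAGE_TAG}"
kubectl rollout status deployment/our-app --timeout=300s

# Expose the service locally and run the test suite against it
kubectl port-forward svc/our-app 8080:8080 &
sleep 5                     # give the port-forward a moment to establish
./gradlew integrationTest   # placeholder for however the Spock suite is invoked
```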
Using minikube, we’ve been able to continuously develop and deploy our Kubernetes applications with little to no breaking changes. Our team’s engineers can now easily test and develop against an instance of our applications running in a local minikube cluster, giving us an added layer of confidence and efficiency that other testing methods weren’t able to provide.
Looking for your next challenge?
IAS is a global leader in digital media quality. Our engineers collaborate daily to design for excellence as we strive to build high performing platforms and leverage impactful tools to make every impression count. We analyze emerging industry trends in order to drive innovation, research new areas of interest, and enhance our revolutionary technology to provide top-tier media quality outcomes. IAS is an ever-expanding company in a constantly evolving space, and we are always looking for new collaborative, self-starting technologists to join our team. If you are interested, we would love to have you on board! Check out our job opportunities here.
Article originally published on August 3, 2022.