Installing Jenkins – Continuous Integration with GitHub Actions and Jenkins-2

The next step involves creating a PersistentVolumeClaim resource to store the Jenkins data, ensuring that it persists beyond the pod’s life cycle and survives even if we delete the pod.
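For reference, a minimal sketch of what the jenkins-pvc.yaml manifest contains is shown below; the claim name comes from the Deployment described next, while the requested size and the reliance on the default storage class are assumptions:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pv-claim        # referenced later by the Deployment's jenkins-pv-storage volume
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi             # assumed size; adjust to your needs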

To apply the manifest, run the following command:
$ kubectl apply -f jenkins-pvc.yaml

Then, we will create a Kubernetes Secret called regcred to help the Jenkins pod authenticate with the Docker registry. Use the following command to do so:
$ kubectl create secret docker-registry regcred --docker-username=<your_dockerhub_user> \
--docker-password=<your_dockerhub_password> --docker-server=https://index.docker.io/v1/

Now, we’ll define a Deployment resource, jenkins-deployment.yaml, that will run the Jenkins container. The pod uses the jenkins service account and defines a volume called jenkins-pv-storage backed by the jenkins-pv-claim PersistentVolumeClaim we defined. We define the Jenkins container that uses the Jenkins controller image we created. It exposes HTTP port 8080 for the Web UI and port 50000 for JNLP, which the agents use to interact with the Jenkins controller. We also mount the jenkins-pv-storage volume to /var/jenkins_home to persist the Jenkins data beyond the pod’s life cycle. We specify regcred as the imagePullSecret for pulling the pod’s image, and we use an initContainer to assign ownership of /var/jenkins_home to the jenkins user.
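The full jenkins-deployment.yaml is available in the book’s repository; the following is a minimal sketch of what the description above implies. The busybox init image, the label names, and the admin credential wiring are assumptions:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      serviceAccountName: jenkins
      imagePullSecrets:
      - name: regcred
      initContainers:
      - name: fix-permissions           # assumed name; gives the jenkins user (UID 1000) ownership of the volume
        image: busybox                  # assumed init image
        command: ["sh", "-c", "chown -R 1000:1000 /var/jenkins_home"]
        volumeMounts:
        - name: jenkins-pv-storage
          mountPath: /var/jenkins_home
      containers:
      - name: jenkins
        image: <your_dockerhub_user>/jenkins-controller-kaniko
        env:
        - name: JENKINS_ADMIN_ID        # consumed by casc.yaml
          value: admin
        - name: JENKINS_ADMIN_PASSWORD  # placeholder; replaced via sed below
          value: <your_password>
        ports:
        - name: http
          containerPort: 8080
        - name: jnlp
          containerPort: 50000
        volumeMounts:
        - name: jenkins-pv-storage
          mountPath: /var/jenkins_home
      volumes:
      - name: jenkins-pv-storage
        persistentVolumeClaim:
          claimName: jenkins-pv-claim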

As the file contains placeholders, replace <your_dockerhub_user> with your Docker Hub user and <your_password> with a Jenkins admin password of your choice. For example, for the Docker Hub user, run the following command:
$ sed -i 's/<your_dockerhub_user>/actual_dockerhub_user/g' jenkins-deployment.yaml

Apply the manifest using the following command:
$ kubectl apply -f jenkins-deployment.yaml

Now that we’ve created the deployment, we can expose it via a LoadBalancer Service using the jenkins-svc.yaml manifest. This service exposes ports 8080 and 50000 on a load balancer. Use the following command to apply the manifest:
$ kubectl apply -f jenkins-svc.yaml
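For reference, a minimal sketch of what the jenkins-svc.yaml manifest declares is shown below; the selector label is an assumption and must match the pod labels in the Deployment:
apiVersion: v1
kind: Service
metadata:
  name: jenkins-service
spec:
  type: LoadBalancer
  selector:
    app: jenkins          # assumed label; must match the Deployment's pod template labels
  ports:
  - name: http
    port: 8080
    targetPort: 8080
  - name: jnlp
    port: 50000
    targetPort: 50000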

Let’s get the service details to find the external IP, which we will use to access Jenkins:
$ kubectl get svc jenkins-service

NAME              EXTERNAL-IP                   PORT(S)
jenkins-service   LOAD_BALANCER_EXTERNAL_IP     8080,50000

Now, to access the service, go to http://<LOAD_BALANCER_EXTERNAL_IP>:8080 in your browser window:

Figure 11.9 – Jenkins login page

As we can see, we’re greeted with a login page. This means Global Security is working correctly. Let’s log in using the admin username and password we set:

Figure 11.10 – Jenkins home page

As we can see, we’ve successfully logged in to Jenkins. Now, let’s go ahead and create our first Jenkins job.

Installing Jenkins – Continuous Integration with GitHub Actions and Jenkins-1

As we’re running on a Kubernetes cluster, we only need the latest official Jenkins image from Docker Hub. We will customize the image according to our requirements.

The following Dockerfile file will help us create the image with the required plugins and the initial configuration:
FROM jenkins/jenkins
ENV CASC_JENKINS_CONFIG /usr/local/casc.yaml
ENV JAVA_OPTS -Djenkins.install.runSetupWizard=false
COPY casc.yaml /usr/local/casc.yaml
COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
RUN jenkins-plugin-cli --plugin-file /usr/share/jenkins/ref/plugins.txt

The Dockerfile starts from the Jenkins base image. Then, we declare two environment variables – CASC_JENKINS_CONFIG, which points to the casc.yaml file we defined in the previous section, and JAVA_OPTS, which tells Jenkins not to run the setup wizard. Then, we copy the casc.yaml and plugins.txt files to their respective directories within the Jenkins container. Finally, we run jenkins-plugin-cli on the plugins.txt file, which installs the required plugins.

The plugins.txt file contains a list of all Jenkins plugins that we will need in this setup.
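An illustrative plugins.txt might contain entries such as the following, in the plugin-id:version format that jenkins-plugin-cli expects; the exact plugins and versions in the book’s repository may differ:
configuration-as-code:latest
kubernetes:latest
git:latest
workflow-aggregator:latest
matrix-auth:latest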

Tip

You can customize and install more plugins for the controller image based on your requirements by updating the plugins.txt file.

Let’s build the image from the Dockerfile file using the following command:
$ docker build -t <your_dockerhub_user>/jenkins-controller-kaniko .

Now that we’ve built the image, use the following command to log in and push the image to Docker Hub:
$ docker login
$ docker push <your_dockerhub_user>/jenkins-controller-kaniko

We must also build the Jenkins agent image to run our builds. Remember that Jenkins agents need all the supporting tools required to run your builds. You can find the resources for the agents in the following directory:
$ cd ~/modern-devops/ch11/jenkins/jenkins-agent

We will use the following Dockerfile to do that:
FROM gcr.io/kaniko-project/executor:v1.13.0 as kaniko
FROM jenkins/inbound-agent
COPY --from=kaniko /kaniko /kaniko
WORKDIR /kaniko
USER root

This Dockerfile uses a multi-stage build to take the kaniko base image and copy the kaniko binary from it into the inbound-agent base image. Let’s go ahead and build and push the container using the following commands:
$ docker build -t <your_dockerhub_user>/jenkins-jnlp-kaniko .
$ docker push <your_dockerhub_user>/jenkins-jnlp-kaniko

To deploy Jenkins on our Kubernetes cluster, we will first create a jenkins service account. A Kubernetes service account resource helps pods authenticate with the Kubernetes API server. We will give the service account permission to interact with the Kubernetes API server as cluster-admin using a cluster role binding. A Kubernetes ClusterRoleBinding resource helps provide permissions to a service account to perform certain actions in the Kubernetes cluster. The jenkins-sa-crb.yaml manifest describes this. To access these resources, run the following command:
$ cd ~/modern-devops/ch11/jenkins/jenkins-controller
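For reference, the jenkins-sa-crb.yaml manifest boils down to something like the following sketch; the ClusterRoleBinding name and the default namespace are assumptions:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jenkins-crb           # assumed binding name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: jenkins
  namespace: default          # assumed namespace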

To apply the manifest, run the following command:
$ kubectl apply -f jenkins-sa-crb.yaml

Connecting Jenkins with the cluster – Continuous Integration with GitHub Actions and Jenkins

We will install the Kubernetes plugin to connect the Jenkins controller with the cluster. We’re doing this because we want Jenkins to dynamically spin up agents for builds as Kubernetes pods.

We will start by creating a kubernetes configuration under jenkins.clouds, as follows:
jenkins:
  clouds:
  - kubernetes:
      serverUrl: "https://<kubernetes_control_plane_ip>"
      jenkinsUrl: "http://jenkins-service:8080"
      jenkinsTunnel: "jenkins-service:50000"
      skipTlsVerify: false
      useJenkinsProxy: false
      maxRequestsPerHost: 32
      name: "kubernetes"
      readTimeout: 15
      podLabels:
      - key: jenkins
        value: agent

As we have a placeholder called <kubernetes_control_plane_ip> within the configuration, we must replace it with the Kubernetes control plane’s IP address. Run the following command to fetch the control plane’s IP address:
$ kubectl cluster-info | grep “control plane”

Kubernetes control plane is running at https://35.224.6.58

Now, replace the placeholder with the actual IP address you obtained from the preceding command by using the following command:
$ sed -i 's/<kubernetes_control_plane_ip>/actual_ip/g' casc.yaml

Let’s look at each attribute in the config file:
• serverUrl: This denotes the Kubernetes control plane server URL, allowing the Jenkins controller to communicate with the Kubernetes API server.
• jenkinsUrl: This denotes the Jenkins controller URL. We’ve set it to http://jenkins-service:8080.
• jenkinsTunnel: This describes how the agent pods will connect with the Jenkins controller. As the JNLP port is 50000, we’ve set it to jenkins-service:50000.
• podLabels: We’ve also set up some pod labels, key=jenkins and value=agent. These will be set on the agent pods.

Other attributes are also set to their default values.

Every Kubernetes cloud configuration consists of multiple pod templates describing how the agent pods will be configured. The configuration looks like this:
kubernetes:
  templates:
  - name: "jenkins-agent"
    label: "jenkins-agent"
    hostNetwork: false
    nodeUsageMode: "NORMAL"
    serviceAccount: "jenkins"
    imagePullSecrets:
    - name: regcred
    yamlMergeStrategy: "override"
    containers:

Here, we’ve defined the following:
• The template’s name and label. We set both to jenkins-agent.
• hostNetwork: This is set to false as we don’t want the container to interact with the host network.
• serviceAccount: We’ve set this to jenkins as we want to use this service account to interact with Kubernetes.
• imagePullSecrets: We have also provided an image pull secret called regcred to authenticate with the container registry to pull the jnlp image.

Every pod template also contains a container template. We can define that using the following configuration:

containers:
- name: jnlp
  image: "<your_dockerhub_user>/jenkins-jnlp-kaniko"
  workingDir: "/home/jenkins/agent"
  command: ""
  args: ""
  livenessProbe:
    failureThreshold: 1
    initialDelaySeconds: 2
    periodSeconds: 3
    successThreshold: 4
    timeoutSeconds: 5
volumes:
- secretVolume:
    mountPath: /kaniko/.docker
    secretName: regcred

Here, we have specified the following:
• name: Set to jnlp.
  • image: Here, we’ve specified the Docker agent image we will build in the next section. Ensure that you replace the <your_dockerhub_user> placeholder with your Docker Hub user by using the following command:

$ sed -i 's/<your_dockerhub_user>/actual_dockerhub_user/g' casc.yaml
• workingDir: Set to /home/jenkins/agent.
• We’ve set the command and args fields to blank as we don’t need to pass them.
• livenessProbe: We’ve defined a liveness probe for the agent pod.
  • volumes: We’ve mounted the regcred secret as a volume at /kaniko/.docker. As regcred contains the Docker registry credentials, Kaniko will use them to authenticate with your container registry.

Now that our configuration file is ready, we’ll go ahead and install Jenkins in the next section.

Spinning up Google Kubernetes Engine – Continuous Integration with GitHub Actions and Jenkins

Once you’ve signed up and are in your console, open the Google Cloud Shell CLI to run the following commands.

You need to enable the Kubernetes Engine API first using the following command:

$ gcloud services enable container.googleapis.com

To create a two-node autoscaling GKE cluster that scales from one to five nodes, run the following command:

$ gcloud container clusters create cluster-1 --num-nodes 2 \
--enable-autoscaling --min-nodes 1 --max-nodes 5 --zone us-central1-a

And that’s it! The cluster will be up and running.

You must also clone the following GitHub repository for some of the exercises provided: https://github.com/PacktPublishing/Modern-DevOps-Practices-2e.

Run the following command to clone the repository into your home directory and cd into the following directory to access the required resources:

$ git clone https://github.com/PacktPublishing/Modern-DevOps-Practices-2e.git modern-devops

$ cd modern-devops/ch11/jenkins/jenkins-controller

We will use the Jenkins Configuration as Code feature to configure Jenkins as it is a declarative way of managing your configuration and is also GitOps-friendly. You need to create a simple YAML file with all the required configurations and then copy the file to the Jenkins controller after setting an environment variable that points to the file. Jenkins will then automatically configure all aspects defined in the YAML file on bootup.

Let’s start by creating the casc.yaml file to define our configuration.

Creating the Jenkins CaC (JCasC) file

The Jenkins CaC (JCasC) file is a simple YAML file that helps us define Jenkins configuration declaratively. We will create a single casc.yaml file for that purpose, and I will explain parts of it. Let’s start by defining Jenkins Global Security.

Configuring Jenkins Global Security

By default, Jenkins is insecure – that is, if you fire up a vanilla Jenkins from the official Docker image and expose it, anyone can do anything with that Jenkins instance. To ensure that we protect it, we need the following configuration:

jenkins:
  remotingSecurity:
    enabled: true
  securityRealm:
    local:
      allowsSignup: false
      users:
      - id: ${JENKINS_ADMIN_ID}
        password: ${JENKINS_ADMIN_PASSWORD}
  authorizationStrategy:
    globalMatrix:
      permissions:
      - "Overall/Administer:admin"
      - "Overall/Read:authenticated"

In the preceding configuration, we’ve defined the following:

  • remotingSecurity: We’ve enabled this feature, which will secure the communication between the Jenkins controller and agents that we will create dynamically using Kubernetes.
  • securityRealm: We’ve set the security realm to local, which means that the Jenkins controller itself will do all authentication and user management. We could have also offloaded this to an external entity such as LDAP:
  • allowsSignup: This is set to false. This means you won’t see a sign-up link on the Jenkins home page, and the Jenkins admin must manually create users.
  • users: We’ll create a single user with id and password sourced from two environment variables called JENKINS_ADMIN_ID and JENKINS_ADMIN_PASSWORD, respectively.
  • authorizationStrategy: We’ve defined a matrix-based authorization strategy where we provide administrator privileges to admin and read privileges to authenticated non-admin users.

Also, as we want Jenkins to execute all their builds in the agents and not the controller machine, we need to specify the following settings:

jenkins:
  systemMessage: "Welcome to Jenkins!"
  numExecutors: 0

We’ve set numExecutors to 0 so that no builds run on the controller, and we’ve also set a systemMessage that appears on the Jenkins welcome screen.

Now that we’ve set up the security aspects of the Jenkins controller, we will configure Jenkins to connect with the Kubernetes cluster.

Scalable Jenkins on Kubernetes with Kaniko – Continuous Integration with GitHub Actions and Jenkins-2

Jenkins follows a controller-agent model. Though you can technically run all your builds on the controller machine itself, it makes sense to offload your CI builds to other servers in your network for a distributed architecture. This avoids overloading your controller machine, which you can then use to store build configurations and other management data and to manage the entire CI build cluster, along the lines of what’s shown in the following diagram:

Figure 11.7 – Scalable Jenkins

In the preceding diagram, multiple static Jenkins agents connect to a Jenkins controller. This architecture works well, but it is not very scalable. Modern DevOps emphasizes optimal resource utilization, so we only want to roll out an agent machine when we need to run a build. Therefore, automating your setup to roll out an agent machine when required is a better way to do it. Rolling out new virtual machines for this can be overkill, though, as it takes several minutes to provision a new VM, even when using a prebuilt image with Packer. A better alternative is to use a container.

Jenkins integrates quite well with Kubernetes, allowing you to run your build on a Kubernetes cluster. That way, whenever you trigger a build on Jenkins, Jenkins instructs Kubernetes to create a new agent container that will then connect with the controller machine and run the build within itself. This is build on-demand at its best. The following diagram shows this process in detail:

Figure 11.8 – Scalable Jenkins CI workflow

This sounds great, and we can go ahead and run this build, but there are issues with this approach. We must understand that the Jenkins controller and agents run as containers and aren’t full-fledged virtual machines. Therefore, if we want to run a Docker build within the container, we must run the container in privileged mode. This isn’t a security best practice, and your admin should already have turned that off. This is because running a container in privileged mode exposes your host filesystem to the container, so a hacker who gains access to your container has full access to your host and can do whatever they want in your system.

To solve that problem, you can use a container build tool such as Kaniko. Kaniko is a build tool provided by Google that helps you build your containers without access to the Docker daemon, and you do not even need Docker installed in your container. It is a great way to run your builds within a Kubernetes cluster and create a scalable CI environment. It is effortless, not hacky, and provides a secure method of building your containers, as we will see in the subsequent sections.
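To make this concrete, here is a minimal sketch of how Kaniko can run as a standalone Kubernetes Pod to build and push an image. The Git context URL, image name, and tag are placeholders, and regcred is the Docker registry secret we create for this setup:
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-build-example            # illustrative name only
spec:
  restartPolicy: Never
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:v1.13.0
    args:
    - --context=git://github.com/<GitHub_Username>/<your_repo>.git   # build context; placeholder repository
    - --dockerfile=Dockerfile
    - --destination=<your_dockerhub_user>/<your_image>:latest        # target image; placeholder name and tag
    volumeMounts:
    - name: docker-config
      mountPath: /kaniko/.docker        # Kaniko reads registry credentials from here
  volumes:
  - name: docker-config
    secret:
      secretName: regcred
      items:
      - key: .dockerconfigjson
        path: config.json               # expose the secret as config.json for Kaniko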

This section will use Google Kubernetes Engine (GKE). As mentioned previously, Google Cloud provides a free trial worth $300 for 90 days. You can sign up at https://cloud.google.com/free if you have not already done so.

Scalable Jenkins on Kubernetes with Kaniko – Continuous Integration with GitHub Actions and Jenkins-1

Imagine you’re running a workshop where you build all sorts of machines. In this workshop, you have a magical conveyor belt called Jenkins for assembling these machines. But to make your workshop even more efficient and adaptable, you’ve got a team of tiny robot workers called Kaniko that assist in constructing the individual parts of each machine. Let’s draw parallels between this workshop analogy and the technology world:

  • Scalable Jenkins: Jenkins is a widely used automation server that helps automate various tasks, particularly those related to building, testing, and deploying software. “Scalable Jenkins” means configuring Jenkins in a way that allows it to efficiently handle a growing workload, much like having a spacious workshop capable of producing numerous machines.
  • Kubernetes: Think of Kubernetes as the workshop manager. It’s an orchestration platform that automates the process of deploying, scaling, and managing containerized applications. Kubernetes ensures that Jenkins and the team of tiny robots (Kaniko) work seamlessly together and can adapt to changing demands.
  • Kaniko: Kaniko is equivalent to your team of miniature robot workers. In the context of containerization, Kaniko is a tool that aids in building container images, which are akin to the individual parts of your machines. What makes Kaniko special is that it can do this without needing elevated access to the Docker daemon. Unlike traditional container builders, Kaniko doesn’t require special privileges, making it a more secure choice for constructing containers, especially within a Kubernetes environment.

Now, let’s combine the three tools and see what we can achieve:

  • Building containers at scale: Your workshop can manufacture multiple machines simultaneously, thanks to Jenkins and the tiny robots. Similarly, with Jenkins on Kubernetes using Kaniko, you can efficiently and concurrently create container images. This ability to scale is crucial in modern application development, where containerization plays a pivotal role.
  • Isolation and security: Just as Kaniko’s tiny robots operate within a controlled environment, Kaniko ensures that container image building takes place in an isolated and secure manner within a Kubernetes cluster. This means that different teams or projects can use Jenkins and Kaniko without interfering with each other’s container-building processes.
  • Consistency and automation: Similar to how the conveyor belt (Jenkins) guarantees consistent machine assembly, Jenkins on Kubernetes with Kaniko ensures uniform container image construction. Automation is at the heart of this setup, simplifying the process of building and managing container images for applications.

To summarize, scalable Jenkins on Kubernetes with Kaniko refers to the practice of setting up Jenkins to efficiently build and manage container images using Kaniko within a Kubernetes environment. It enables consistent, parallel, and secure construction of container images, aligning perfectly with modern software development workflows.

So, the analogy of a workshop with Jenkins, Kubernetes, and Kaniko vividly illustrates how this setup streamlines container image building, making it scalable, efficient, and secure for contemporary software development practices. Now, let’s dive deeper into Jenkins.

Jenkins is the most popular CI tool available in the market. It is open source, simple to install, and runs with ease. It is a Java-based tool with a plugin-based architecture designed to support several integrations, for example with source code management tools such as Git, SVN, and Mercurial, or with popular artifact repositories such as Nexus and Artifactory. It also integrates well with well-known build tools such as Ant, Maven, and Gradle, aside from the standard shell scripting and Windows batch file executions.

Creating a GitHub repository – Continuous Integration with GitHub Actions and Jenkins-2

You must define two secrets within your repository using the following URL: https://github.com/<GitHub_Username>/mdo-posts/settings/secrets/actions.

Define the following two secrets within the repository:
DOCKER_USER=<your_dockerhub_user>
DOCKER_PASSWORD=<your_dockerhub_password>

Now, let’s move the build.yaml file to the workflows directory by using the following command:
$ mv build.yaml .github/workflows/

Now, we’re ready to push this code to GitHub. Run the following commands to commit and push the changes to your GitHub repository:
$ git add --all
$ git commit -m 'Initial commit'
$ git push

Now, go to the Workflows tab of your GitHub repository by visiting https://github.com/<GitHub_Username>/mdo-posts/actions. You should see something similar to the following:

Figure 11.2 – GitHub Actions

As we can see, GitHub has run a build using our workflow file, and it has built the code and pushed the image to Docker Hub. Upon visiting your Docker Hub account, you should see your image present in your account:

Figure 11.3 – Docker Hub image

Now, let’s try to break our code somehow. Let’s suppose that someone from your team changed the app.py code, and instead of returning post in the create_post response, it started returning pos. Let’s see what would happen in that scenario.

Make the following changes to the create_post function in the app.py file:
@app.route('/posts', methods=['POST'])
def create_post():

    return jsonify({'pos': str(inserted_post.inserted_id)}), 201

Now, commit and push the code to GitHub using the following commands:
$ git add --all
$ git commit -m 'Updated create_post'
$ git push

Now, go to GitHub Actions and find the latest build. You will see that the build will error out and give the following output:

Figure 11.4 – GitHub Actions – build failure

As we can see, the Build the Docker image step has failed. If you click on the step and scroll down to see what happened with it, you will find that the app.test.py execution failed. This is because of a test case failure with AssertionError: 'post' not found in {'pos': '60458fb603c395f9a81c9f4a'}. As the expected post key was not found in the output, {'pos': '60458fb603c395f9a81c9f4a'}, the test case failed, as shown in the following screenshot:

Figure 11.5 – GitHub Actions – test failure

We uncovered the error when someone pushed the buggy code to the Git repository. Are you able to see the benefits of CI already?

Now, let’s fix the code and commit the code again.

Modify the create_post function of app.py so that it looks as follows:
@app.route('/posts', methods=['POST'])
def create_post():

    return jsonify({'post': str(inserted_post.inserted_id)}), 201

Then, commit and push the code to GitHub using the following commands:
$ git add --all
$ git commit -m 'Updated create_post'
$ git push

This time, the build will be successful:

Figure 11.6 – GitHub Actions – build success

Did you see how simple this was? We got started with CI quickly and implemented GitOps behind the scenes since the config file required to build and test the code also resided with the application code.

As an exercise, repeat the same process for the reviews, users, ratings, and frontend microservices.
You can play around with them to understand how it works.

Not everyone uses GitHub, so the SaaS offering might not be an option for them. Therefore, in the next section, we’ll look at the most popular open source CI tool: Jenkins.

Creating a GitHub repository – Continuous Integration with GitHub Actions and Jenkins-1

Before we can use GitHub Actions, we need to create a GitHub repository. As we know that each microservice can be independently developed, we will place all of them in separate Git repositories. For this exercise, we will focus only on the posts microservice and leave the rest to you as an exercise.

To do so, go to https://github.com/new and create a new repository. Give it an appropriate name. For this exercise, I am going to use mdo-posts.

Once you’ve created it, clone the repository by using the following command:

$ git clone https://github.com/<GitHub_Username>/mdo-posts.git

Then, change into the repository directory and copy the app.py, app.test.py, requirements.txt, and Dockerfile files into it using the following commands:

$ cd mdo-posts

$ cp ~/modern-devops/blog-app/posts/* .

Now, we need to create a GitHub Actions workflow file. We’ll do this in the next section.

Creating a GitHub Actions workflow

A GitHub Actions workflow is a simple YAML file that contains the build steps. We must create this workflow in the .github/workflows directory within the repository. We can do this using the following command:

$ mkdir -p .github/workflows

We will use the following GitHub Actions workflow file, build.yaml, for this exercise:

name: Build and Test App
on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    - name: Login to Docker Hub
      id: login
      run: docker login -u ${{ secrets.DOCKER_USER }} -p ${{ secrets.DOCKER_PASSWORD }}
    - name: Build the Docker image
      id: build
      run: docker build . --file Dockerfile --tag ${{ secrets.DOCKER_USER }}/mdo-posts:$(git rev-parse --short "$GITHUB_SHA")
    - name: Push the Docker image
      id: push
      run: docker push ${{ secrets.DOCKER_USER }}/mdo-posts:$(git rev-parse --short "$GITHUB_SHA")

This file comprises the following:

•    name: The workflow’s name – Build and Test App in this case.

  • on: This describes when this workflow will run. In this case, it will run if a push or pull request is made on the main branch.
  • jobs: A GitHub Actions workflow contains one or more jobs that run in parallel by default. This attribute includes all jobs.
  • jobs.build: This is a job that does the container build.
  • jobs.build.runs-on: This describes where the build job will run. We’ve specified ubuntu-latest here. This means that this job will run on an Ubuntu VM.
  • jobs.build.steps: This consists of the steps that run sequentially within the job. The build job consists of four steps: checkout, which will check out the code from your repository; login, which will log in to Docker Hub; build, which will run a Docker build on your code; and push, which will push your Docker image to Docker Hub. Note that we tag the image with the Git commit SHA. This relates the build to the commit, making Git the single source of truth.

  • jobs.build.steps.uses: This is the first step and describes an action you will run as a part of your job. Actions are reusable pieces of code that you can execute in your pipeline. In this case, it runs the checkout action. It checks out the code from the current branch where the action is triggered.

Tip

Always use a version with your actions. This will prevent your build from breaking if a later version is incompatible with your pipeline.

  • jobs.build.steps.name: This is the name of your build step.
  • jobs.build.steps.id: This is the unique identifier of your build step.
  • jobs.build.steps.run: This is the command it executes as part of the build step.

The workflow also contains variables within ${{ }}. We can define multiple variables within the workflow and use them in the subsequent steps. In this case, we’ve used two variables – ${{ secrets.DOCKER_USER }} and ${{ secrets.DOCKER_PASSWORD }}. These variables are sourced from GitHub secrets.

Tip

It is best practice to use GitHub secrets to store sensitive information. Never store these details directly in the repository with code.

Building a CI pipeline with GitHub Actions – Continuous Integration with GitHub Actions and Jenkins-2

This directory contains multiple microservices and is structured as follows:
├── frontend
│   ├── Dockerfile
│   ├── app.py
│   ├── app.test.py
│   ├── requirements.txt
│   ├── static
│   └── templates
├── posts
│   ├── Dockerfile
│   ├── app.py
│   ├── app.test.py
│   └── requirements.txt
├── ratings ...
├── reviews ...
└── users ...

The frontend directory contains files for the frontend microservice, and notably, it includes app.py (the Flask application code), app.test.py (the unit tests for the Flask application), requirements.txt (which contains all Python modules required by the app), and Dockerfile. It also includes a few other directories catering to the user interface elements of this app.

The posts, reviews, ratings, and users microservices have the same structure and contain app.py, app.test.py, requirements.txt, and Dockerfile files.

So, let’s start by switching to the posts directory:
$ cd posts

As we know that Docker is inherently CI-compliant, we can run the tests using Dockerfile itself.

Let’s investigate the Dockerfile of the posts service:
FROM python:3.7-alpine
ENV FLASK_APP=app.py
ENV FLASK_RUN_HOST=0.0.0.0
RUN apk add --no-cache gcc musl-dev linux-headers
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
EXPOSE 5000
COPY . .
RUN python app.test.py
CMD ["flask", "run"]

This Dockerfile starts with the python:3.7-alpine base image, installs the requirements, and copies the code into the working directory. It runs the app.test.py unit tests to check whether the code would work if we deploy it. Finally, the CMD instruction defines a flask run command to run when we launch the container.

Let’s build our Dockerfile and see what we get:
$ docker build --progress=plain -t posts .
#4 [1/6] FROM docker.io/library/python:3.7-alpine
#5 [internal] load build context
#6 [2/6] RUN apk add --no-cache gcc musl-dev linux-headers
#7 [3/6] COPY requirements.txt requirements.txt
#8 [4/6] RUN pip install -r requirements.txt
#9 [5/6] COPY . .
#10 [6/6] RUN python app.test.py
#10 0.676 ----------------------------------------------------------------------
#10 0.676 Ran 8 tests in 0.026s
#11 exporting to image
#11 naming to docker.io/library/posts done
As we can see, it built the container, executed the tests on it, and responded with Ran 8 tests in 0.026s and an OK message. Therefore, we could use the Dockerfile to build and test this app. We used the --progress=plain argument with the docker build command because we wanted to see the stepwise log output rather than Docker collapsing the progress into a single message, which is the default behavior.

Now, let’s look at GitHub Actions and how we can automate this step.

Building a CI pipeline with GitHub Actions – Continuous Integration with GitHub Actions and Jenkins-1

GitHub Actions is a SaaS-based tool that comes with GitHub. So, when you create your GitHub repository, you get access to this service out of the box. Therefore, GitHub Actions is one of the best tools for people new to CI/CD and who want to get started quickly. GitHub Actions helps you automate tasks, build, test, and deploy your code, and even streamline your workflow, making your life as a developer much easier.

Here’s what GitHub Actions can do for you:

  • CI: GitHub Actions can automatically build and test your code whenever you push changes to your repository. This ensures that your code remains error-free and ready for deployment.
  • CD: You can use GitHub Actions to deploy your application to various hosting platforms, such as AWS, Azure, and GCP. This allows you to deliver updates to your users quickly and efficiently.
  • Workflow automation: You can create custom workflows using GitHub Actions to automate repetitive tasks in your development process. For example, you can automatically label and assign issues, trigger builds on specific events, or send notifications to your team.
  • Custom scripts: GitHub Actions allows you to run custom scripts and commands, giving you full control over your automation tasks. Whether you need to compile code, run tests, or execute deployment scripts, GitHub Actions can handle it.
  • Community actions: GitHub Actions has a marketplace where you can find pre-built actions created by the community. These actions cover a wide range of tasks, from publishing to npm to deploying to popular cloud providers. You can easily incorporate these actions into your workflow.
  • Scheduled jobs: You can schedule actions to run at specific times or intervals. This is handy for tasks such as generating reports, sending reminders, or performing maintenance during non-peak hours.
  • Multi-platform support: GitHub Actions supports various programming languages, operating systems, and cloud environments, which means you can build and deploy applications for different platforms with ease.
  • Integration: GitHub Actions seamlessly integrates with your GitHub repositories, making it a natural extension of your development environment. You can define workflows by using YAML files directly in your repository.

GitHub Actions revolutionizes the way developers work by automating routine tasks, ensuring code quality, and streamlining the SDLC. It’s a valuable tool for teams and individual developers looking to enhance productivity and maintain high-quality code.

Now, let’s create a CI pipeline for our sample Blog App. Blog App consists of multiple microservices, and each microservice runs on an individual Docker container. We also have unit tests written for each microservice, which we can run to verify the code changes. If the tests pass, the build will pass; otherwise, it will fail.

To access the resources for this section, cd into the following directory:

$ cd ~/modern-devops/blog-app