The Blog App and its deployment configuration – Continuous Deployment/Delivery with Argo CD

Since we discussed the Blog App in the last chapter, let’s look at the services and their interactions again:

Figure 12.5 – The Blog App and its services and interactions

So far, we’ve created CI pipelines for building, testing, and pushing our Blog App microservice containers. These microservices need to run somewhere, so we need an environment for this. We will deploy the application in a GKE cluster; for that, we will need Kubernetes YAML manifests. We built the container for the posts microservice as an example in the previous chapter, and I also left building the rest of the services as an exercise for you. Assuming you’ve built them, we will need the following resources for the application to run seamlessly (a brief manifest sketch follows the list):

  • MongoDB: We will deploy an auth-enabled MongoDB database with root credentials. The credentials will be injected via environment variables sourced from a Kubernetes Secret resource. We also need to persist our database data, so for that, we need a PersistentVolume mounted to the container, which we will provision dynamically using a PersistentVolumeClaim. As the container is stateful, we will use a StatefulSet to manage it and, therefore, a headless Service to expose the database.
  • Posts, reviews, ratings, and users: The posts, reviews, ratings, and users microservices will interact with MongoDB through the root credentials injected via environment variables sourced from the same Secret as MongoDB. We will deploy them using their respective Deployment resources and expose all of them via individual ClusterIP Services.
  • Frontend: The frontend microservice does not need to interact with MongoDB, so there will be no interaction with the Secret resource. We will also deploy this service using a Deployment resource. As we want to expose the service on the internet, we will create a LoadBalancer Service for it.
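
To make the Secret interaction concrete, here is a minimal sketch (not the full manifests from the book’s repository) of how the MongoDB root credentials could live in a Secret and be injected into the posts Deployment as environment variables. The names mongodb-creds, MONGO_INITDB_ROOT_USERNAME, and MONGO_INITDB_ROOT_PASSWORD, as well as the <your_dockerhub_user> image prefix, are illustrative assumptions:

apiVersion: v1
kind: Secret
metadata:
  name: mongodb-creds            # illustrative name
type: Opaque
stringData:
  MONGO_INITDB_ROOT_USERNAME: root
  MONGO_INITDB_ROOT_PASSWORD: <a-strong-password>
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: posts
spec:
  replicas: 1
  selector:
    matchLabels:
      app: posts
  template:
    metadata:
      labels:
        app: posts
    spec:
      containers:
      - name: posts
        image: <your_dockerhub_user>/mdo-posts
        envFrom:
        - secretRef:
            name: mongodb-creds  # same Secret the MongoDB StatefulSet sources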

We can summarize these aspects with the following diagram:

Figure 12.6 – The Blog App – Kubernetes resources and interactions

Now, as we’re following the GitOps model, we need to store the manifests of all the resources on Git. However, since Kubernetes Secrets are not inherently secure, we cannot store their manifests directly on Git. Instead, we will use another resource called SealedSecrets to manage this securely.
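
For illustration, a SealedSecret is a custom resource that holds only the encrypted form of a Secret, so it is safe to commit to Git; the cluster-side controller decrypts it back into a regular Secret. A minimal sketch could look like the following, where the resource name, namespace, and <encrypted-ciphertext> values are assumptions (the real ciphertext is generated by the kubeseal CLI):

apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: mongodb-creds
  namespace: blog-app                                     # illustrative namespace
spec:
  encryptedData:
    MONGO_INITDB_ROOT_USERNAME: <encrypted-ciphertext>    # produced by kubeseal
    MONGO_INITDB_ROOT_PASSWORD: <encrypted-ciphertext>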

In Chapter 2, Source Code Management with Git and GitOps, we discussed application and environment repositories forming the fundamental building blocks of GitOps-based CI and CD, respectively. In the previous chapter, we created an application repository on GitHub and used GitHub Actions (and Jenkins) to build, test, and push our application container to Docker Hub. As CD focuses on the Ops part of DevOps, we will need an Environment repository to implement it, so let’s go ahead and create our Environment repository in the next section.
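
As a preview of what Argo CD will eventually consume from that Environment repository, here is a minimal sketch of an Argo CD Application manifest. The repository URL, path, and namespace shown here are placeholders and assumptions, not the exact values used later in the chapter:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: blog-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<GitHub_Username>/<environment-repo>.git   # assumption
    targetRevision: main
    path: manifests                                                        # assumption
  destination:
    server: https://kubernetes.default.svc
    namespace: blog-app
  syncPolicy:
    automated:
      prune: true
      selfHeal: true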

Complex deployment models – Continuous Deployment/Delivery with Argo CD

Complex deployment models, unlike simple deployment models, try to minimize disruptions and downtimes within the application and make rolling out releases more seamless to the extent that most users don’t even notice when the upgrade is being conducted. Two main kinds of complex deployments are prevalent in the industry; let’s take a look.

Blue/Green deployments

Blue/Green deployments (also known as Red/Black deployments) roll out the new version (Green) in addition to the existing version (Blue). You can then do sanity checks and other activities with the latest version to ensure that everything is good to go. Then, you can switch traffic from the old to the new version and monitor for any issues. If you encounter problems, you switch back traffic to the old version. Otherwise, you keep the latest version running and remove the old version:

Figure 12.3 – Blue/Green deployments
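
One simple way to picture the traffic switch is a Kubernetes Service whose selector is flipped from the blue Deployment’s labels to the green one’s once the new version checks out. This is only an illustrative sketch with assumed names and ports, not the mechanism Argo CD provides out of the box:

apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer
  selector:
    app: frontend
    version: blue      # switch to "green" to cut traffic over; set back to "blue" to roll back
  ports:
  - port: 80
    targetPort: 5000   # illustrative container port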

You can take Blue/Green deployments to the next level using canary deployments.

Canary deployments and A/B testing

Canary deployments are similar to Blue/Green deployments but are generally utilized for risky upgrades. So, like Blue/Green deployments, we deploy the new version alongside the existing one. Instead of switching all traffic to the latest version at once, we only switch traffic to a small subset of users. As we do that, we can understand from our logs and user behaviors whether the switchover is causing any issues. This is called A/B testing. When we do A/B testing, we can target a specific group of users based on location, language, age group, or users who have opted to test Beta versions of a product. That will help organizations gather feedback without disrupting general users and make changes to the product once they’re satisfied with what they are rolling out. You can make the release generally available by switching all traffic to the new version and getting rid of the old version:

Figure 12.4 – Canary deployments
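
A rough way to approximate a canary split without dedicated tooling is to run the stable and canary versions behind the same Service and control the traffic share through replica counts. The following sketch uses illustrative names, tags, and ratios, and sends roughly 10% of requests to the canary because the shared Service load-balances across all matching pods:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-stable
spec:
  replicas: 9                      # ~90% of traffic
  selector:
    matchLabels:
      app: frontend
      track: stable
  template:
    metadata:
      labels:
        app: frontend
        track: stable
    spec:
      containers:
      - name: frontend
        image: <your_dockerhub_user>/mdo-frontend:stable   # illustrative tag
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-canary
spec:
  replicas: 1                      # ~10% of traffic
  selector:
    matchLabels:
      app: frontend
      track: canary
  template:
    metadata:
      labels:
        app: frontend
        track: canary
    spec:
      containers:
      - name: frontend
        image: <your_dockerhub_user>/mdo-frontend:canary    # illustrative tag
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: frontend                  # matches both tracks, so traffic is split by replica ratio
  ports:
  - port: 80
    targetPort: 5000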

While complex deployments cause the least disruption to users, they are generally complex to manage using traditional CI tools such as Jenkins. Therefore, we need to get the tooling right. Several CD tools are available in the market, including Argo CD, Spinnaker, CircleCI, and AWS CodeDeploy. As this entire book is focused on GitOps, and Argo CD is a GitOps-native tool, for this chapter, we will focus on Argo CD. Before we delve into deploying the application, let’s revisit what we want to deploy.

CD models and tools – Continuous Deployment/Delivery with Argo CD

A typical CI/CD workflow looks as described in the following figure and the subsequent steps:

Figure 12.1 – CI/CD workflow

  1. Your developers write code and push it to a code repository (typically a Git repository).
  2. Your CI tool builds the code, runs a series of tests, and pushes the tested build to an artifact repository. Your CD tool then picks up the artifact and deploys it to your test and staging environments. Depending on whether you practice continuous deployment or continuous delivery, it then deploys the artifact to the production environment automatically or after a manual approval.

Well, what do you choose for a delivery tool? Let’s look at the example we covered in Chapter 11, Continuous Integration. We picked up the posts microservice app and used a CI tool such as GitHub Actions/Jenkins that uses Docker to create a container out of it and push it to our Docker Hub container registry. Well, we could have used the same tool for deploying to our environment.

For example, if we wanted to deploy to Kubernetes, it would have been a simple YAML update and kubectl apply. We could easily do this with any of those tools, but we chose not to do it. Why? The answer is simple – CI tools are meant for CI, and if you want to use them for anything else, you’ll get stuck at a certain point. That does not mean that you cannot use these tools for CD. It will only suit a few use cases based on the deployment model you follow.
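
To make the “simple YAML update” concrete, the change in a Kubernetes setup usually amounts to nothing more than bumping the image tag in the Deployment manifest and re-applying it. The file name, container name, and tag below are illustrative assumptions:

# posts-deployment.yaml (excerpt) – bump the tag, then run: kubectl apply -f posts-deployment.yaml
spec:
  template:
    spec:
      containers:
      - name: posts
        image: <your_dockerhub_user>/mdo-posts:4f9c1ab   # change this tag to roll out a new build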

Several deployment models exist based on your application, technology stack, customers, risk appetite, and cost consciousness. Let’s look at some of the popular deployment models that are used within the industry.

Simple deployment model

The simple deployment model is one of the most straightforward of all: you deploy the required version of your application after removing the old one. It completely replaces the previous version, and rolling back involves redeploying the older version after removing the deployed one:

Figure 12.2 – Simple deployment model

As it is a simple way of deploying things, you can manage this using a CI tool such as Jenkins or GitHub Actions. However, the simple deployment model is not the most desired deployment method because of some inherent risks. This kind of change is disruptive and typically needs downtime. This means your service would remain unavailable to your customers for the upgrade period. It might be OK for organizations that do not have users 24/7, but disruptions eat into the service-level objectives (SLOs) and service-level agreements (SLAs) of global organizations. Even if there isn’t a formal SLA, disruptions hamper the customer experience and the organization’s reputation.

Therefore, to manage such kinds of situations, we have some complex deployment models.

Always use post-commit triggers – Continuous Integration with GitHub Actions and Jenkins

Post-commit triggers help your team significantly. They will not have to log in to the CI server and trigger the build manually. That completely decouples your development team from CI management.

Configure build reporting

You don’t want your development team to log in to the CI tool and check how the build runs. Instead, all they want to know is the result of the build and the build logs. Therefore, you can configure build reporting to send your build status via email or, even better, using a Slack channel.

Customize the build server size

Not all builds perform the same on the same kind of build machine. You may want to choose machines based on what suits your build environment best. If your builds tend to consume more CPU than memory, it makes sense to choose CPU-optimized machines to run your builds instead of the standard ones.

Ensure that your builds only contain what you need

Builds move across networks. You download base images, build your application image, and push that to the container registry. Bloated images not only take a lot of network bandwidth and time to transmit but also make your build vulnerable to security issues. Therefore, it is always best practice to only include what you require in the build and avoid bloat. You can use Docker’s multi-stage builds for these kinds of situations.

Parallelize your builds

Run tests and build processes concurrently to reduce overall execution time. Leverage distributed systems or cloud-based CI/CD platforms for scalable parallelization, allowing you to handle larger workloads efficiently.
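
As an example of parallelization in GitHub Actions, a matrix strategy fans a job out into parallel runs. The sketch below builds each Blog App microservice concurrently; the service list and the single-repository layout with one directory per service are assumptions (the book actually keeps each microservice in its own repository):

jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        service: [posts, reviews, ratings, users, frontend]   # assumed service list
    steps:
    - uses: actions/checkout@v2
    - name: Build the Docker image
      run: docker build ./${{ matrix.service }} --tag ${{ secrets.DOCKER_USER }}/mdo-${{ matrix.service }}:${{ github.sha }}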

Make use of caching

Cache dependencies and build artifacts to prevent redundant downloads and builds, saving valuable time. Implement caching mechanisms such as Docker layer caching or use your package manager’s built-in caches to minimize data transfer and build steps.
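
In GitHub Actions, for example, the actions/cache action can persist a dependency directory between runs. A minimal sketch for Python’s pip cache follows; the path and key are typical choices, so adjust them for your stack:

steps:
- uses: actions/checkout@v2
- name: Cache pip dependencies
  uses: actions/cache@v3
  with:
    path: ~/.cache/pip
    key: ${{ runner.os }}-pip-${{ hashFiles('requirements.txt') }}
    restore-keys: |
      ${{ runner.os }}-pip-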

Use incremental building

Configure your CI/CD pipeline to perform incremental builds, rebuilding only what has changed since the last build. Maintain robust version control practices to accurately track and identify changes.
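
One lightweight way to approximate this in GitHub Actions is to trigger a workflow only when the files it cares about change, using path filters. This is only a sketch with assumed paths, not a full incremental-build system:

on:
  push:
    branches: [ main ]
    paths:
    - 'src/**'             # assumed source directory
    - 'requirements.txt'
    - 'Dockerfile'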

Optimize testing

Prioritize and optimize tests by running quicker unit tests before slower integration or end-to-end tests.

Use testing frameworks such as TestNG, JUnit, or PyTest to categorize and parallelize tests effectively.

Use artifact management

Efficiently store and manage build artifacts, preferably in a dedicated artifact repository such as Artifactory or Nexus. Implement artifact versioning and retention policies to maintain a clean artifact repository.

Manage application dependencies

Keep a clean and minimal set of dependencies to reduce build and test times. Regularly update dependencies to benefit from performance improvements and security updates.

Utilize Infrastructure as Code

Utilize Infrastructure as Code (IaC) to provision and configure build and test environments consistently.

Optimize IaC templates to minimize resource utilization, ensuring efficient resource allocation.

Use containerization to manage build and test environments

Containerize applications and utilize container orchestration tools such as Kubernetes to manage test environments efficiently. Leverage container caching to accelerate image builds and enhance resource utilization.

Automating a build with triggers – Continuous Integration with GitHub Actions and Jenkins

The best way to allow your CI build to trigger when you make changes to your code is to use a post-commit webhook. We looked at such an example in the GitHub Actions workflow. Let’s try to automate the build with triggers in the case of Jenkins. We’ll have to make some changes on both the Jenkins and the GitHub sides to do so. We’ll deal with Jenkins first; then, we’ll configure GitHub.

Go to Job configuration | Build Triggers and make the following changes:

Figure 11.16 – Jenkins GitHub hook trigger

Save the configuration by clicking Save. Now, go to your GitHub repository, click Settings | Webhooks | Add Webhook, and add the following details. Then, click Add Webhook:

Figure 11.17 – GitHub webhook

Now, push a change to the repository. The job on Jenkins will start building:

Figure 11.18 – Jenkins GitHub webhook trigger

This is automated build triggers in action. Jenkins is one of the most popular open source CI tools on the market. The most significant advantage of it is that you can pretty much run it anywhere. However, it does come with some management overhead. You may have noticed how simple it was to start with GitHub Actions, but Jenkins is slightly more complicated.

Several other SaaS platforms offer CI and CD as a service. For instance, if you are running on AWS, you’d get their inbuilt CI with AWS CodeCommit and CodeBuild; Azure provides an entire suite of services for CI and CD in their Azure DevOps offering; and GCP provides Cloud Build for that job.

CI follows the same principle, regardless of the tooling you choose to implement. It is more of a process and a cultural change within your organization. Now, let’s look at some of the best practices regarding CI.

Building performance best practices

CI is an ongoing process, so you will have a lot of parallel builds running within your environment at a given time. In such situations, we can optimize them using several best practices.

Aim for faster builds

The faster you can complete your build, the quicker you will get feedback and run your next iteration. A slow build slows down your development team. Take steps to ensure that builds are faster. For example, in Docker’s case, it makes sense to use smaller base images, as the base image is downloaded from the image registry every time a build runs. Using a single base image for most builds will also speed up your build time. Using tests will help, but make sure that they aren’t long-running. We want to avoid a CI build that runs for hours. Therefore, it would be good to offload long-running tests into another job or use a pipeline. Run activities in parallel if possible.

Installing Jenkins – Continuous Integration with GitHub Actions and Jenkins-2

The next step involves creating a PersistentVolumeClaim resource to store Jenkins data so that it persists beyond the pod’s life cycle and survives even when we delete the pod.
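
A minimal sketch of what jenkins-pvc.yaml could look like is shown below; the storage size and default storage class are assumptions, so the manifest in the book’s repository may differ:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pv-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi        # assumed size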

To apply the manifest, run the following command:
$ kubectl apply -f jenkins-pvc.yaml

Then, we will create a Kubernetes Secret called regcred to help the Jenkins pod authenticate with the Docker registry. Use the following command to do so:
$ kubectl create secret docker-registry regcred --docker-username=<your_dockerhub_user> \
  --docker-password=<your_dockerhub_password> --docker-server=https://index.docker.io/v1/

Now, we’ll define a Deployment resource, jenkins-deployment.yaml, that will run the Jenkins container. The pod uses the jenkins service account and defines a volume called jenkins-pv-storage backed by the PersistentVolumeClaim resource called jenkins-pv-claim that we defined. We define the Jenkins container that uses the Jenkins controller image we created. It exposes HTTP port 8080 for the web UI and port 50000 for JNLP, which the agents will use to interact with the Jenkins controller. We will also mount the jenkins-pv-storage volume to /var/jenkins_home to persist the Jenkins data beyond the pod’s life cycle. We specify regcred in the pod’s imagePullSecrets attribute so that the image can be pulled from the registry. We also use an initContainer to assign ownership of /var/jenkins_home to the jenkins user.
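
An abridged sketch of jenkins-deployment.yaml, based on the description above, follows; the initContainer details and label names are illustrative, so treat the manifest in the book’s repository as authoritative:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      serviceAccountName: jenkins
      imagePullSecrets:
      - name: regcred
      initContainers:
      - name: fix-permissions                    # assumed name
        image: busybox
        command: ["sh", "-c", "chown -R 1000:1000 /var/jenkins_home"]
        volumeMounts:
        - name: jenkins-pv-storage
          mountPath: /var/jenkins_home
      containers:
      - name: jenkins
        image: <your_dockerhub_user>/jenkins-controller-kaniko
        ports:
        - containerPort: 8080      # web UI
        - containerPort: 50000     # JNLP for agents
        volumeMounts:
        - name: jenkins-pv-storage
          mountPath: /var/jenkins_home
      volumes:
      - name: jenkins-pv-storage
        persistentVolumeClaim:
          claimName: jenkins-pv-claim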

As the file contains placeholders, replace <your_dockerhub_user> with your Docker Hub user and the admin password placeholder with a Jenkins admin password of your choice. The following command substitutes the Docker Hub user:
$ sed -i 's/<your_dockerhub_user>/actual_dockerhub_user/g' jenkins-deployment.yaml

Apply the manifest using the following command:
$ kubectl apply -f jenkins-deployment.yaml

As we’ve created the deployment, we can expose it via a LoadBalancer Service using the jenkins-svc.yaml manifest. This service exposes ports 8080 and 50000 on a load balancer. Use the following command to apply the manifest:
$ kubectl apply -f jenkins-svc.yaml
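
For reference, a minimal sketch of what jenkins-svc.yaml could contain follows; the selector labels are assumptions, and the actual manifest in the book’s repository may differ slightly:

apiVersion: v1
kind: Service
metadata:
  name: jenkins-service
spec:
  type: LoadBalancer
  selector:
    app: jenkins
  ports:
  - name: http
    port: 8080
    targetPort: 8080
  - name: jnlp
    port: 50000
    targetPort: 50000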

Let’s get the service to find the external IP to use that to access Jenkins:
$ kubectl get svc jenkins-service

NAME              EXTERNAL-IP                   PORT(S)
jenkins-service   LOAD_BALANCER_EXTERNAL_IP     8080,50000

Now, to access the service, go to http://<LOAD_BALANCER_EXTERNAL_IP>:8080 in your browser window:

Figure 11.9 – Jenkins login page

As we can see, we’re greeted with a login page. This means Global Security is working correctly. Let’s log in using the admin username and password we set:

Figure 11.10 – Jenkins home page

As we can see, we’ve successfully logged in to Jenkins. Now, let’s go ahead and create our first Jenkins job.

Installing Jenkins – Continuous Integration with GitHub Actions and Jenkins-1

As we’re running on a Kubernetes cluster, we only need the latest official Jenkins image from Docker Hub. We will customize the image according to our requirements.

The following Dockerfile file will help us create the image with the required plugins and the initial configuration:
FROM jenkins/jenkins
ENV CASC_JENKINS_CONFIG /usr/local/casc.yaml
ENV JAVA_OPTS -Djenkins.install.runSetupWizard=false
COPY casc.yaml /usr/local/casc.yaml
COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
RUN jenkins-plugin-cli --plugin-file /usr/share/jenkins/ref/plugins.txt

The Dockerfile starts from the Jenkins base image. Then, we declare two environment variables – CASC_JENKINS_CONFIG, which points to the casc.yaml file we defined in the previous section, and JAVA_OPTS, which tells Jenkins not to run the setup wizard. Then, we copy the casc.yaml and plugins.txt files to their respective directories within the Jenkins container. Finally, we run jenkins-plugin-cli on the plugins.txt file, which installs the required plugins.

The plugins.txt file contains a list of all Jenkins plugins that we will need in this setup.

Tip

You can customize and install more plugins for the controller image based on your requirements by updating the plugins.txt file.

Let’s build the image from the Dockerfile file using the following command:
$ docker build -t <your_dockerhub_user>/jenkins-controller-kaniko .

Now that we’ve built the image, use the following command to log in and push the image to Docker Hub:
$ docker login
$ docker push <your_dockerhub_user>/jenkins-controller-kaniko

We must also build the Jenkins agent image to run our builds. Remember that Jenkins agents need all the supporting tools you need to run your builds. You can find the resources for the agents in the following directory:
$ cd ~/modern-devops/ch11/jenkins/jenkins-agent

We will use the following Dockerfile to do that:
FROM gcr.io/kaniko-project/executor:v1.13.0 as kaniko
FROM jenkins/inbound-agent
COPY --from=kaniko /kaniko /kaniko
WORKDIR /kaniko
USER root

This Dockerfile uses a multi-stage build: it takes the kaniko base image and copies the kaniko binary from it into the inbound-agent base image. Let’s go ahead and build and push the container using the following commands:
$ docker build -t <your_dockerhub_user>/jenkins-jnlp-kaniko .
$ docker push <your_dockerhub_user>/jenkins-jnlp-kaniko

To deploy Jenkins on our Kubernetes cluster, we will first create a jenkins service account. A Kubernetes service account resource helps pods authenticate with the Kubernetes API server. We will give the service account permission to interact with the Kubernetes API server as cluster-admin using a cluster role binding. A Kubernetes ClusterRoleBinding resource helps provide permissions to a service account to perform certain actions in the Kubernetes cluster. The jenkins-sa-crb.yaml manifest describes this. To access these resources, run the following command:
$ cd ~/modern-devops/ch11/jenkins/jenkins-controller
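
For reference, a minimal sketch of what jenkins-sa-crb.yaml could contain is shown below; the binding name and namespace are assumptions, and the manifest in the book’s repository may differ slightly:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jenkins-crb          # assumed name
subjects:
- kind: ServiceAccount
  name: jenkins
  namespace: default         # assumed namespace
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io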

To apply the manifest, run the following command:
$ kubectl apply -f jenkins-sa-crb.yaml

Connecting Jenkins with the cluster – Continuous Integration with GitHub Actions and Jenkins

We will install the Kubernetes plugin to connect the Jenkins controller with the cluster. We’re doing this because we want Jenkins to dynamically spin up agents for builds as Kubernetes pods.

We will start by creating a kubernetes configuration under jenkins.clouds, as follows:
jenkins:
  clouds:
  - kubernetes:
      serverUrl: "https://<kubernetes_control_plane_ip>"
      jenkinsUrl: "http://jenkins-service:8080"
      jenkinsTunnel: "jenkins-service:50000"
      skipTlsVerify: false
      useJenkinsProxy: false
      maxRequestsPerHost: 32
      name: "kubernetes"
      readTimeout: 15
      podLabels:
      - key: jenkins
        value: agent

As we have a placeholder called <kubernetes_control_plane_ip> within the configuration, we must replace this with the Kubernetes control plane’s IP address. Run the following command to fetch the control plane’s IP address:
$ kubectl cluster-info | grep "control plane"

Kubernetes control plane is running at https://35.224.6.58

Now, replace the placeholder with the actual IP address you obtained from the preceding command by using the following command:
$ sed -i 's/<kubernetes_control_plane_ip>/actual_ip/g' casc.yaml

Let’s look at each attribute in the config file:
• serverUrl: This denotes the Kubernetes control plane server URL, allowing the Jenkins controller to communicate with the Kubernetes API server.
• jenkinsUrl: This denotes the Jenkins controller URL. We’ve set it to http://jenkins-service:8080.
• jenkinsTunnel: This describes how the agent pods will connect with the Jenkins controller. As the JNLP port is 50000, we’ve set it to jenkins-service:50000.
• podLabels: We’ve also set up some pod labels, key=jenkins and value=agent. These will be set on the agent pods.

Other attributes are also set to their default values.

Every Kubernetes cloud configuration consists of multiple pod templates describing how the agent pods will be configured. The configuration looks like this:
kubernetes:
  templates:
  - name: "jenkins-agent"
    label: "jenkins-agent"
    hostNetwork: false
    nodeUsageMode: "NORMAL"
    serviceAccount: "jenkins"
    imagePullSecrets:
    - name: regcred
    yamlMergeStrategy: "override"
    containers:

Here, we’ve defined the following:
• The template’s name and label. We set both to jenkins-agent.
• hostNetwork: This is set to false as we don’t want the container to interact with the host network.
• serviceAccount: We’ve set this to jenkins as we want to use this service account to interact with Kubernetes.
• imagePullSecrets: We have also provided an image pull secret called regcred to authenticate with the container registry to pull the jnlp image.

Every pod template also contains a container template. We can define that using the following configuration:

containers:
- name: jnlp
  image: "<your_dockerhub_user>/jenkins-jnlp-kaniko"
  workingDir: "/home/jenkins/agent"
  command: ""
  args: ""
  livenessProbe:
    failureThreshold: 1
    initialDelaySeconds: 2
    periodSeconds: 3
    successThreshold: 4
    timeoutSeconds: 5
volumes:
- secretVolume:
    mountPath: /kaniko/.docker
    secretName: regcred

Here, we have specified the following:
• name: Set to jnlp.
  • image: Here, we’ve specified the Docker agent image we will build in the next section. Ensure that you replace the <your_dockerhub_user> placeholder with your Docker Hub user by using the following command:

$ sed -i 's/<your_dockerhub_user>/actual_dockerhub_user/g' casc.yaml
• workingDir: Set to /home/jenkins/agent.
• We’ve set the command and args fields to blank as we don’t need to pass them.
• livenessProbe: We’ve defined a liveness probe for the agent pod.
• volumes: We’ve mounted the regcred secret to the /kaniko/.docker directory as a volume. As regcred contains the Docker registry credentials, Kaniko will use this to connect with your container registry.

Now that our configuration file is ready, we’ll go ahead and install Jenkins in the next section.

Creating a GitHub repository – Continuous Integration with GitHub Actions and Jenkins-2

You must define two secrets within your repository using the following URL: https://github.com/<GitHub_Username>/mdo-posts/settings/secrets/actions.

Define two secrets within the repository:
DOCKER_USER=<your_dockerhub_user>
DOCKER_PASSWORD=<your_dockerhub_password>

Now, let’s move this build.yaml file to the workflows directory by using the following command:
$ mv build.yaml .github/workflows/

Now, we’re ready to push this code to GitHub. Run the following commands to commit and push the changes to your GitHub repository:
$ git add --all
$ git commit -m 'Initial commit'
$ git push

Now, go to the Actions tab of your GitHub repository by visiting https://github.com/<GitHub_Username>/mdo-posts/actions. You should see something similar to the following:

Figure 11.2 – GitHub Actions

As we can see, GitHub has run a build using our workflow file, and it has built the code and pushed the image to Docker Hub. Upon visiting your Docker Hub account, you should see your image present in your account:

Figure 11.3 – Docker Hub image

Now, let’s try to break our code somehow. Let’s suppose that someone from your team changed the app.py code, and instead of returning post in the create_post response, it started returning pos. Let’s see what would happen in that scenario.

Make the following changes to the create_post function in the app.py file:
@app.route('/posts', methods=['POST'])
def create_post():

    return jsonify({'pos': str(inserted_post.inserted_id)}), 201

Now, commit and push the code to GitHub using the following commands:
$ git add --all
$ git commit -m 'Updated create_post'
$ git push

Now, go to GitHub Actions and find the latest build. You will see that the build will error out and give the following output:

Figure 11.4 – GitHub Actions – build failure

As we can see, the Build the Docker image step has failed. If you click on the step and scroll down to see what happened with it, you will find that the app.test.py execution failed. This is because of a test case failure with AssertionError: 'post' not found in {'pos': '60458fb603c395f9a81c9f4a'}. As the expected post key was not found in the output, {'pos': '60458fb603c395f9a81c9f4a'}, the test case failed, as shown in the following screenshot:

Figure 11.5 – GitHub Actions – test failure

We uncovered the error when someone pushed the buggy code to the Git repository. Are you able to see the benefits of CI already?

Now, let’s fix the code and commit the code again.

Modify the create_post function of app.py so that it looks as follows:
@app.route('/posts', methods=['POST'])
def create_post():

    return jsonify({'post': str(inserted_post.inserted_id)}), 201

Then, commit and push the code to GitHub using the following commands:
$ git add --all
$ git commit -m 'Updated create_post'
$ git push

This time, the build will be successful:

Figure 11.6 – GitHub Actions – build success

Did you see how simple this was? We got started with CI quickly and implemented GitOps behind the scenes since the config file required to build and test the code also resided with the application code.

As an exercise, repeat the same process for the reviews, users, ratings, and frontend microservices.
You can play around with them to understand how it works.

Not everyone uses GitHub, so the SaaS offering might not be an option for them. Therefore, in the next section, we’ll look at the most popular open source CI tool: Jenkins.

Creating a GitHub repository – Continuous Integration with GitHub Actions and Jenkins-1

Before we can use GitHub Actions, we need to create a GitHub repository. As we know that each microservice can be independently developed, we will place all of them in separate Git repositories. For this exercise, we will focus only on the posts microservice and leave the rest to you as an exercise.

To do so, go to https://github.com/new and create a new repository. Give it an appropriate name. For this exercise, I am going to use mdo-posts.

Once you’ve created it, clone the repository by using the following command:

$ git clone https://github.com/<GitHub_Username>/mdo-posts.git

Then, change the directory into the repository directory and copy the app.py, app.test.py, requirements.txt, and Dockerfile files into the repository’s directory using the following commands:

$ cd mdo-posts

$ cp ~/modern-devops/blog-app/posts/* .

Now, we need to create a GitHub Actions workflow file. We’ll do this in the next section.

Creating a GitHub Actions workflow

A GitHub Actions workflow is a simple YAML file that contains the build steps. We must create this workflow in the .github/workflows directory within the repository. We can do this using the following command:

$ mkdir -p .github/workflows

We will use the following GitHub Actions workflow file, build.yaml, for this exercise:

name: Build and Test App
on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    - name: Login to Docker Hub
      id: login
      run: docker login -u ${{ secrets.DOCKER_USER }} -p ${{ secrets.DOCKER_PASSWORD }}
    - name: Build the Docker image
      id: build
      run: docker build . --file Dockerfile --tag ${{ secrets.DOCKER_USER }}/mdo-posts:$(git rev-parse --short "$GITHUB_SHA")
    - name: Push the Docker image
      id: push
      run: docker push ${{ secrets.DOCKER_USER }}/mdo-posts:$(git rev-parse --short "$GITHUB_SHA")

This file comprises the following:

  • name: The workflow’s name – Build and Test App in this case.

  • on: This describes when this workflow will run. In this case, it will run if a push or pull request is sent on the main branch.
  • jobs: A GitHub Actions workflow contains one or more jobs that run in parallel by default. This attribute includes all jobs.
  • jobs.build: This is a job that does the container build.
  • jobs.build.runs-on: This describes where the build job will run. We’ve specified ubuntu-latest here. This means that this job will run on an Ubuntu VM.
  • jobs.build.steps: This consists of the steps that run sequentially within the job. The build job consists of four build steps: checkout, which will check out the code from your repository; login, which will log in to Docker Hub; build, which will run a Docker build on your code; and push, which will push your Docker image to Docker Hub. Note that we tag the image with the Git commit SHA. This relates the build with the commit, making Git the single source of truth.

  • jobs.build.steps.uses: This is the first step and describes an action you will run as a part of your job. Actions are reusable pieces of code that you can execute in your pipeline. In this case, it runs the checkout action. It checks out the code from the current branch where the action is triggered.

Tip

Always use a version with your actions. This will prevent your build from breaking if a later version is incompatible with your pipeline.

  • jobs.build.steps.name: This is the name of your build step.
  • jobs.build.steps.id: This is the unique identifier of your build step.
  • jobs.build.steps.run: This is the command it executes as part of the build step.

The workflow also contains variables within ${{ }}. We can define multiple variables within the workflow and use them in the subsequent steps. In this case, we’ve used two variables – ${{ secrets.DOCKER_USER }} and ${{ secrets.DOCKER_PASSWORD }}. These variables are sourced from GitHub secrets.

Tip

It is best practice to use GitHub secrets to store sensitive information. Never store these details directly in the repository with code.