The importance of CD and automation – Continuous Deployment/Delivery with Argo CD-1

CD forms the Ops part of your DevOps toolchain. So, while your developers are continuously building and pushing code and your CI pipeline is building, testing, and publishing the builds to your artifact repository, the Ops team will deploy the build to the test and staging environments. The QA team is the gatekeeper that will ensure that the code meets a certain quality, and only then will the Ops team deploy the code to production.

Now, for organizations implementing only the CI part, the rest of the activities are manual. For example, operators will pull the artifacts and run commands to do the deployments manually. Therefore, your deployment’s velocity will depend on the availability of your Ops team to do it. As the deployments are manual, the process is error-prone, and human beings tend to make mistakes in repeatable jobs.

One of the essential principles of modern DevOps is to avoid toil. Toil is nothing but repeatable jobs that developers and operators do day in and day out, and all of that toil can be removed by automation. This will help your team focus on the more important things at hand.

With continuous delivery, standard tooling can deploy code to higher environments based on certain gate conditions. CD pipelines will trigger when a tested build arrives at the artifact repository or, in the case of GitOps, if any changes are detected in the Environment repository. The pipeline then decides, based on a set configuration, where and how to deploy the code. It also establishes whether manual checks are required, such as raising a change ticket and checking whether it’s approved.
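
As a preview of what such GitOps-driven configuration looks like, here is a minimal sketch of an Argo CD Application that watches an Environment repository and syncs it to a cluster. The repository URL, path, and namespaces are hypothetical placeholders, not the exact configuration used later in this chapter:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: blog-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<your_github_user>/mdo-environments.git  # hypothetical Environment repository
    targetRevision: main
    path: manifests/blog-app                                             # hypothetical path to the manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: blog-app
  syncPolicy:
    automated:
      prune: true      # remove resources that are deleted from the repository
      selfHeal: true   # revert manual drift in the cluster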

While continuous deployment and delivery are often confused with being the same thing, there is a slight difference between them. Continuous delivery enables your team to deliver tested code in your environment based on a human trigger. So, while you don’t have to do anything more than click a button to do a deployment to production, it would still be initiated by someone at a convenient time (a maintenance window). Continuous deployments go a step further when they integrate with the CI process and will start the deployment process as soon as a new tested build is available for them to consume. There is no need for manual intervention, and continuous deployment will only stop in case of a failed test.
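
To make the distinction concrete, the following is a rough GitHub Actions sketch of a continuous delivery trigger: the workflow only runs when a human starts it, and it pauses for approval before deploying. It assumes a production environment configured with required reviewers and a hypothetical deploy.sh script:

name: Deploy to production
on:
  workflow_dispatch: {}            # human trigger: someone clicks Run workflow
jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: production        # pauses here until a reviewer approves
    steps:
      - uses: actions/checkout@v2
      - name: Deploy
        run: ./deploy.sh production   # hypothetical deployment script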

The monitoring tool forms the next part of the DevOps toolchain. The Ops team can learn from managing their production environment and provide developers with feedback regarding what they need to do better. That feedback ends up in the development backlog, and they can deliver it as features in future releases. That completes the cycle, and now you have your team churning out a technology product continuously.

Technical requirements – Continuous Deployment/Delivery with Argo CD

In the previous chapter, we looked at one of the key aspects of modern DevOps – continuous integration (CI). CI is the first thing most organizations implement when they embrace DevOps, but things don’t end with CI, which only delivers a tested build in an artifact repository. Instead, we would also want to deploy the artifact to our environments. In this chapter, we’ll implement the next part of the DevOps toolchain – continuous deployment/delivery (CD).

In this chapter, we’re going to cover the following main topics:

  • The importance of CD and automation
  • CD models and tools
  • The Blog App and its deployment configuration
  • Continuous declarative IaC using an Environment repository
  • Introduction to Argo CD
  • Installing and setting up Argo CD
  • Managing sensitive configurations and secrets
  • Deploying the sample Blog App

Technical requirements

In this chapter, we will spin up a cloud-based Kubernetes cluster, Google Kubernetes Engine (GKE), for the exercises. At the time of writing, Google Cloud Platform (GCP) provides a free $300 trial for 90 days, so you can go ahead and sign up for one at https://console.cloud.google.com/.

You will also need to clone the following GitHub repository for some exercises: https://github.com/PacktPublishing/Modern-DevOps-Practices-2e.

Run the following command to clone the repository into your home directory, and cd into the ch12 directory to access the required resources:

$ git clone https://github.com/PacktPublishing/Modern-DevOps-Practices-2e.git \
  modern-devops

$ cd modern-devops/ch12

So, let’s get started!

Utilize cloud-based CI/CD – Continuous Integration with GitHub Actions and Jenkins

Consider adopting cloud-based CI/CD services such as AWS CodePipeline, Google Cloud Build, Azure DevOps, or Travis CI for enhanced scalability and performance. Harness on-demand cloud resources to expand parallelization capabilities and adapt to varying workloads.

Monitor and profile your CI/CD pipelines

Implement performance monitoring and profiling tools to identify bottlenecks and areas for improvement within your CI/CD pipeline. Regularly analyze build and test logs to gather insights for optimizing performance.

Pipeline optimization

Continuously review and optimize your CI/CD pipeline configuration for efficiency and relevance.

Remove unnecessary steps or stages that do not contribute significantly to the process.

Implement automated cleanup

Implement automated cleanup routines to remove stale artifacts, containers, and virtual machines, preventing resource clutter. Regularly purge old build artifacts and unused resources to maintain a tidy environment.
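
For example, if you use GitHub Actions, setting a retention period on uploaded artifacts keeps old build outputs from piling up. The artifact name and path below are placeholders:

- name: Upload build artifact
  uses: actions/upload-artifact@v3
  with:
    name: build-output        # placeholder artifact name
    path: dist/               # placeholder path to the build output
    retention-days: 7         # purge the artifact automatically after a week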

Documentation and training

Document best practices and performance guidelines for your CI/CD processes, ensuring that the entire team follows these standards consistently. Provide training and guidance to team members to empower them to implement and maintain these optimization strategies effectively.

By implementing these strategies, you can significantly enhance the speed, efficiency, and reliability of your CI/CD pipeline, ultimately leading to smoother software development and delivery processes. These high-level best practices are not exhaustive, but they are a good starting point for optimizing your CI environment.

Summary

This chapter covered CI: you learned why CI is needed and the basic CI workflow for a container application. We then looked at GitHub Actions, which we can use to build an effective CI pipeline. Next, we looked at the Jenkins open source offering and deployed a scalable Jenkins on Kubernetes with Kaniko, setting up a Jenkins controller-agent model. We then understood how to use hooks for automating builds, both in the GitHub Actions-based workflow and the Jenkins-based workflow. Finally, we learned about build performance best practices and dos and don’ts.

By now, you should be familiar with CI and its nuances, along with the various tooling you can use to implement it.

Always use post-commit triggers – Continuous Integration with GitHub Actions and Jenkins

Post-commit triggers help your team significantly. They will not have to log in to the CI server and trigger the build manually. That completely decouples your development team from CI management.

Configure build reporting

You don’t want your development team to log in to the CI tool and check how the build runs. Instead, all they want to know is the result of the build and the build logs. Therefore, you can configure build reporting to send your build status via email or, even better, using a Slack channel.
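
One simple way to do this, sketched here with GitHub Actions, is to post the build status to a Slack incoming webhook at the end of the job. The SLACK_WEBHOOK_URL secret is an assumption you would need to create yourself:

- name: Notify Slack of build status
  if: always()                       # run this step whether the build passed or failed
  run: |
    curl -X POST -H 'Content-type: application/json' \
      --data "{\"text\":\"Build ${{ github.run_number }} finished with status: ${{ job.status }}\"}" \
      "${{ secrets.SLACK_WEBHOOK_URL }}"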

Customize the build server size

Not all builds work the same in similar kinds of build machines. You may want to choose machines based on what suits your build environment best. If your builds tend to consume more CPU than memory, it will make sense to choose such machines to run your builds instead of the standard ones.
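
With GitHub Actions, for instance, you could direct CPU-heavy builds to a dedicated self-hosted runner pool using labels. The labels here are hypothetical and depend on how you register your runners:

jobs:
  build:
    runs-on: [self-hosted, linux, high-cpu]   # hypothetical labels for a CPU-optimized runner pool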

Ensure that your builds only contain what you need

Builds move across networks. You download base images, build your application image, and push that to the container registry. Bloated images not only take a lot of network bandwidth and time to transmit but also make your build vulnerable to security issues. Therefore, it is always best practice to only include what you require in the build and avoid bloat. You can use Docker’s multi-stage builds for these kinds of situations.

Parallelize your builds

Run tests and build processes concurrently to reduce overall execution time. Leverage distributed systems or cloud-based CI/CD platforms for scalable parallelization, allowing you to handle larger workloads efficiently.
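
In GitHub Actions, a matrix strategy is one straightforward way to fan work out across parallel jobs. The test groups and runner script below are placeholders for illustration:

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        test-group: [unit, integration, e2e]   # placeholder test groups
    steps:
      - uses: actions/checkout@v2
      - name: Run ${{ matrix.test-group }} tests
        run: ./run-tests.sh ${{ matrix.test-group }}   # hypothetical test runner script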

Make use of caching

Cache dependencies and build artifacts to prevent redundant downloads and builds, saving valuable time. Implement caching mechanisms such as Docker layer caching or use your package manager’s built-in caches to minimize data transfer and build steps.
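
For example, with GitHub Actions you might cache a package manager's download directory, keyed on the dependency file so the cache is rebuilt only when dependencies change. The path and key below are illustrative:

- name: Cache pip downloads
  uses: actions/cache@v3
  with:
    path: ~/.cache/pip                              # illustrative cache location
    key: pip-${{ hashFiles('requirements.txt') }}   # new key whenever dependencies change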

Use incremental building

Configure your CI/CD pipeline to perform incremental builds, rebuilding only what has changed since the last build. Maintain robust version control practices to accurately track and identify changes.
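
In a GitHub Actions workflow, one coarse-grained way to approximate this is to trigger a microservice's build only when its own files change. The path filter below is illustrative:

on:
  push:
    branches: [ main ]
    paths:
      - 'posts/**'        # illustrative filter: rebuild only when the posts microservice changes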

Optimize testing

Prioritize and optimize tests by running quicker unit tests before slower integration or end-to-end tests.

Use testing frameworks such as TestNG, JUnit, or PyTest to categorize and parallelize tests effectively.

Use artifact management

Efficiently store and manage build artifacts, preferably in a dedicated artifact repository such as Artifactory or Nexus. Implement artifact versioning and retention policies to maintain a clean artifact repository.

Manage application dependencies

Keep a clean and minimal set of dependencies to reduce build and test times. Regularly update dependencies to benefit from performance improvements and security updates.

Utilize Infrastructure as Code

Utilize Infrastructure as Code (IaC) to provision and configure build and test environments consistently.

Optimize IaC templates to minimize resource utilization, ensuring efficient resource allocation.

Use containerization to manage build and test environments

Containerize applications and utilize container orchestration tools such as Kubernetes to manage test environments efficiently. Leverage container caching to accelerate image builds and enhance resource utilization.

Running our first Jenkins job – Continuous Integration with GitHub Actions and Jenkins

Before we create our first job, we’ll have to prepare our repository to run the job. We will reuse the mdo-posts repository for this. We will copy a build.sh file to the repository, which will build the container image for the posts microservice and push it to Docker Hub.

The build.sh script takes IMAGE_ID and IMAGE_TAG as arguments and passes them to the Kaniko executor, which builds the container image using the Dockerfile and pushes it to Docker Hub, as shown in the following code:

IMAGE_ID=$1 && \
IMAGE_TAG=$2 && \
export DOCKER_CONFIG=/kaniko/.dockerconfig && \
/kaniko/executor \
  --context $(pwd) \
  --dockerfile $(pwd)/Dockerfile \
  --destination $IMAGE_ID:$IMAGE_TAG \
  --force

We will need to copy this file to our local repository using the following command:

$ cp ~/modern-devops/ch11/jenkins/jenkins-agent/build.sh ~/mdo-posts/

Once you’ve done this, cd into your local repository – that is, ~/mdo-posts – and commit and push your changes to GitHub. Once you’ve done this, you’ll be ready to create a job in Jenkins.

To create a new job in Jenkins, go to the Jenkins home page and select New Item | Freestyle Job.

Provide a job name (preferably the same as the Git repository name), then click Next.

Click on Source Code Management, select Git, and add your Git repository URL, as shown in the following example. Specify the branch from where you want to build:

Figure 11.11 – Jenkins Source Code Management configuration

Go to Build Triggers, select Poll SCM, and add the following details:

Figure 11.12 – Jenkins – Build Triggers configuration

Then, click on Build | Add Build Step | Execute shell. The Execute shell build step executes a sequence of shell commands on the Linux CLI. In this example, we’re running the build.sh script with the <your_dockerhub_user>/<image> argument and the image tag. Change the details according to your requirements. Once you’ve finished, click Save:

Figure 11.13 – Jenkins – Execute shell configuration

Now, we’re ready to build this job. To do so, you can either go to your job configuration and click Build Now or push a change to GitHub. You should see something like the following:

Figure 11.14 – Jenkins job page

Jenkins will successfully create an agent pod in Kubernetes, where it will run this job, and soon, the job will start building. Click Build | Console Output. If everything is OK, you’ll see that the build was successful and that Jenkins has built the posts service and executed a unit test before pushing the Docker image to the registry:

Figure 11.15 – Jenkins console output

With that, we’re able to run a Docker build using a scalable Jenkins server. As we can see, we’ve set up polling on the SCM settings to look for changes every minute and build the job if we detect any. However, this is resource-intensive and does not help in the long run. Just imagine that you have hundreds of jobs interacting with multiple GitHub repositories, and the Jenkins controller is polling them every minute. A better approach would be if GitHub could trigger a post-commit webhook on Jenkins. Here, Jenkins can build the job whenever there are changes in the repository. We’ll look at that scenario in the next section.

Spinning up Google Kubernetes Engine – Continuous Integration with GitHub Actions and Jenkins

Once you’ve signed up and are in your console, open the Google Cloud Shell CLI to run the following commands.

You need to enable the Kubernetes Engine API first using the following command:

$ gcloud services enable container.googleapis.com

To create a two-node autoscaling GKE cluster that scales from one to five nodes, run the following command:

$ gcloud container clusters create cluster-1 --num-nodes 2 \
  --enable-autoscaling --min-nodes 1 --max-nodes 5 --zone us-central1-a

And that’s it! The cluster will be up and running.

You must also clone the following GitHub repository for some of the exercises provided: https://github.com/PacktPublishing/Modern-DevOps-Practices-2e.

Run the following command to clone the repository into your home directory and cd into the following directory to access the required resources:

$ git clone https://github.com/PacktPublishing/Modern-DevOps-Practices-2e.git \
  modern-devops

$ cd modern-devops/ch11/jenkins/jenkins-controller

We will use the Jenkins Configuration as Code feature to configure Jenkins as it is a declarative way of managing your configuration and is also GitOps-friendly. You need to create a simple YAML file with all the required configurations and then copy the file to the Jenkins controller after setting an environment variable that points to the file. Jenkins will then automatically configure all aspects defined in the YAML file on bootup.
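
The environment variable in question is CASC_JENKINS_CONFIG. As a rough sketch, assuming the file is copied to /var/jenkins_home/casc.yaml, the relevant fragment of the controller’s container spec would look something like this:

env:
  - name: CASC_JENKINS_CONFIG
    value: /var/jenkins_home/casc.yaml   # assumed location of the copied casc.yaml file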

Let’s start by creating the casc.yaml file to define our configuration.

Creating the Jenkins CaC (JCasC) file

The Jenkins CaC (JCasC) file is a simple YAML file that helps us define Jenkins configuration declaratively. We will create a single casc.yaml file for that purpose, and I will explain parts of it. Let’s start by defining Jenkins Global Security.

Configuring Jenkins Global Security

By default, Jenkins is insecure – that is, if you fire up a vanilla Jenkins instance from the official Docker image and expose it, anyone can do anything with it. To ensure that we protect it, we need the following configuration:

jenkins:
  remotingSecurity:
    enabled: true
  securityRealm:
    local:
      allowsSignup: false
      users:
        - id: ${JENKINS_ADMIN_ID}
          password: ${JENKINS_ADMIN_PASSWORD}
  authorizationStrategy:
    globalMatrix:
      permissions:
        - "Overall/Administer:admin"
        - "Overall/Read:authenticated"
In the preceding configuration, we’ve defined the following:

  • remotingSecurity: We’ve enabled this feature, which will secure the communication between the Jenkins controller and agents that we will create dynamically using Kubernetes.
  • securityRealm: We’ve set the security realm to local, which means that the Jenkins controller itself will do all authentication and user management. We could have also offloaded this to an external entity such as LDAP:
  • allowsSignup: This is set to false. This means you don’t see a sign-up link on the Jenkins home page, and the Jenkins admin should manually create users.
  • users: We’ll create a single user with id and password sourced from two environment variables called JENKINS_ADMIN_ID and JENKINS_ADMIN_PASSWORD, respectively.
  • authorizationStrategy: We’ve defined a matrix-based authorization strategy where we provide administrator privileges to admin and read privileges to authenticated non-admin users.

Also, as we want Jenkins to execute all builds on the agents and not on the controller machine, we need to specify the following settings:

jenkins:
  systemMessage: "Welcome to Jenkins!"
  numExecutors: 0

We’ve set numExecutors to 0 so that no builds run on the controller, and we’ve also set a systemMessage that appears on the Jenkins welcome screen.

Now that we’ve set up the security aspects of the Jenkins controller, we will configure Jenkins to connect with the Kubernetes cluster.
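
As a rough preview, a JCasC Kubernetes cloud definition looks something like the following sketch. The URLs, namespace, and agent image are assumptions for illustration, not the book’s exact configuration:

jenkins:
  clouds:
    - kubernetes:
        name: kubernetes
        serverUrl: https://kubernetes.default          # assumed in-cluster API endpoint
        namespace: jenkins                             # assumed namespace for agent pods
        jenkinsUrl: http://jenkins-service:8080        # assumed controller service URL
        templates:
          - name: jenkins-agent
            label: jenkins-agent
            containers:
              - name: jnlp
                image: jenkins/inbound-agent:latest    # assumed agent image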

Scalable Jenkins on Kubernetes with Kaniko – Continuous Integration with GitHub Actions and Jenkins-2

Jenkins follows a controller-agent model. Though technically, you can run all your builds on the controller machine itself, it makes sense to offload your CI builds to other servers in your network to have a distributed architecture. This does not overload your controller machine. You can use it to store the build configurations and other management data and manage the entire CI build cluster, something along the lines of what’s shown in the following diagram:

Figure 11.7 – Scalable Jenkins

In the preceding diagram, multiple static Jenkins agents connect to a Jenkins controller. This architecture works well, but it is not very scalable. Modern DevOps emphasizes efficient resource utilization, so we only want to roll out an agent machine when we need to run a build. Therefore, automating your builds to roll out an agent machine on demand is a better approach. Rolling out new virtual machines for this can be overkill, as it takes several minutes to provision a new VM, even when using a prebuilt image created with Packer. A better alternative is to use a container.

Jenkins integrates quite well with Kubernetes, allowing you to run your build on a Kubernetes cluster. That way, whenever you trigger a build on Jenkins, Jenkins instructs Kubernetes to create a new agent container that will then connect with the controller machine and run the build within itself. This is build on-demand at its best. The following diagram shows this process in detail:

Figure 11.8 – Scalable Jenkins CI workflow

This sounds great, and we can go ahead and run this build, but there are issues with this approach. We must understand that the Jenkins controller and agents run as containers and aren’t full-fledged virtual machines. Therefore, if we want to run a Docker build within the container, we must run the container in privileged mode. This isn’t a security best practice, and your admin should already have turned that off, because running a container in privileged mode exposes your host filesystem to the container. A hacker who gains access to your container would have full access to the host and could do whatever they want on your system.

To solve that problem, you can use a container build tool such as Kaniko. Kaniko is a build tool provided by Google that helps you build your containers without access to the Docker daemon, and you do not even need Docker installed in your container. It is a great way to run your builds within a Kubernetes cluster and create a scalable CI environment. It is effortless, not hacky, and provides a secure method of building your containers, as we will see in the subsequent sections.
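
To make this concrete, here is a minimal sketch of a standalone Kaniko build pod, independent of Jenkins. The Git context, image destination, and the regcred registry credentials secret are assumptions for illustration:

apiVersion: v1
kind: Pod
metadata:
  name: kaniko-build
spec:
  restartPolicy: Never
  containers:
    - name: kaniko
      image: gcr.io/kaniko-project/executor:latest
      args:
        - --context=git://github.com/<your_github_user>/mdo-posts.git   # assumed source repository
        - --dockerfile=Dockerfile
        - --destination=<your_dockerhub_user>/mdo-posts:latest          # assumed target image
      volumeMounts:
        - name: docker-config
          mountPath: /kaniko/.docker/
  volumes:
    - name: docker-config
      secret:
        secretName: regcred                  # assumed Docker registry credentials secret
        items:
          - key: .dockerconfigjson
            path: config.json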

This section will use Google Kubernetes Engine (GKE). As mentioned previously, Google Cloud provides a free trial worth $300 for 90 days. You can sign up at https://cloud.google.com/free if you have not already done so.

Scalable Jenkins on Kubernetes with Kaniko – Continuous Integration with GitHub Actions and Jenkins-1

Imagine you’re running a workshop where you build all sorts of machines. In this workshop, you have a magical conveyor belt called Jenkins for assembling these machines. But to make your workshop even more efficient and adaptable, you’ve got a team of tiny robot workers called Kaniko that assist in constructing the individual parts of each machine. Let’s draw parallels between this workshop analogy and the technology world:

  • Scalable Jenkins: Jenkins is a widely used automation server that helps automate various tasks, particularly those related to building, testing, and deploying software. “Scalable Jenkins” means configuring Jenkins in a way that allows it to efficiently handle a growing workload, much like having a spacious workshop capable of producing numerous machines.
  • Kubernetes: Think of Kubernetes as the workshop manager. It’s an orchestration platform that automates the process of deploying, scaling, and managing containerized applications. Kubernetes ensures that Jenkins and the team of tiny robots (Kaniko) work seamlessly together and can adapt to changing demands.
  • Kaniko: Kaniko is equivalent to your team of miniature robot workers. In the context of containerization, Kaniko is a tool that aids in building container images, which are akin to the individual parts of your machines. What makes Kaniko special is that it can do this without needing elevated access to the Docker daemon. Unlike traditional container builders, Kaniko doesn’t require special privileges, making it a more secure choice for constructing containers, especially within a Kubernetes environment.

Now, let’s combine the three tools and see what we can achieve:

  • Building containers at scale: Your workshop can manufacture multiple machines simultaneously, thanks to Jenkins and the tiny robots. Similarly, with Jenkins on Kubernetes using Kaniko, you can efficiently and concurrently create container images. This ability to scale is crucial in modern application development, where containerization plays a pivotal role.
  • Isolation and security: Just as Kaniko’s tiny robots operate within a controlled environment, Kaniko ensures that container image building takes place in an isolated and secure manner within a Kubernetes cluster. This means that different teams or projects can use Jenkins and Kaniko without interfering with each other’s container-building processes.
  • Consistency and automation: Similar to how the conveyor belt (Jenkins) guarantees consistent machine assembly, Jenkins on Kubernetes with Kaniko ensures uniform container image construction. Automation is at the heart of this setup, simplifying the process of building and managing container images for applications.

To summarize, scalable Jenkins on Kubernetes with Kaniko refers to the practice of setting up Jenkins to efficiently build and manage container images using Kaniko within a Kubernetes environment. It enables consistent, parallel, and secure construction of container images, aligning perfectly with modern software development workflows.

So, the analogy of a workshop with Jenkins, Kubernetes, and Kaniko vividly illustrates how this setup streamlines container image building, making it scalable, efficient, and secure for contemporary software development practices. Now, let’s dive deeper into Jenkins.

Jenkins is the most popular CI tool available in the market. It is open source, simple to install, and runs with ease. It is a Java-based tool with a plugin-based architecture designed to support several integrations, such as with source code management tools such as Git, SVN, and Mercurial, or with popular artifact repositories such as Nexus and Artifactory. It also integrates well with well-known build tools such as Ant, Maven, and Gradle, aside from the standard shell scripting and Windows batch file executions.

Creating a GitHub repository – Continuous Integration with GitHub Actions and Jenkins-2

You must define two secrets within your repository using the following URL: https://github.com/<your_github_user>/mdo-posts/settings/secrets/actions.

Define two secrets within the repository:
DOCKER_USER=<your_dockerhub_user>
DOCKER_PASSWORD=<your_dockerhub_password>

Now, let’s move this build.yml file to the workflows directory by using the following command:
$ mv build.yml .github/workflows/

Now, we’re ready to push this code to GitHub. Run the following commands to commit and push the changes to your GitHub repository:
$ git add --all
$ git commit -m 'Initial commit'
$ git push

Now, go to the Actions tab of your GitHub repository by visiting https://github.com/<your_github_user>/mdo-posts/actions. You should see something similar to the following:

Figure 11.2 – GitHub Actions

As we can see, GitHub has run a build using our workflow file, and it has built the code and pushed the image to Docker Hub. Upon visiting your Docker Hub account, you should see your image present in your account:

Figure 11.3 – Docker Hub image

Now, let’s try to break our code somehow. Let’s suppose that someone from your team changed the app.py code, and instead of returning post in the create_post response, it started returning pos. Let’s see what would happen in that scenario.

Make the following changes to the create_post function in the app.py file:
@app.route('/posts', methods=['POST'])
def create_post():

    return jsonify({'pos': str(inserted_post.inserted_id)}), 201

Now, commit and push the code to GitHub using the following commands:
$ git add --all
$ git commit -m 'Updated create_post'
$ git push

Now, go to GitHub Actions and find the latest build. You will see that the build will error out and give the following output:

Figure 11.4 – GitHub Actions – build failure

As we can see, the Build the Docker image step has failed. If you click on the step and scroll down to see what happened with it, you will find that the app.test.py execution failed. This is because of a test case failure with AssertionError: 'post' not found in {'pos': '60458fb603c395f9a81c9f4a'}. As the expected post key was not found in the output, {'pos': '60458fb603c395f9a81c9f4a'}, the test case failed, as shown in the following screenshot:

Figure 11.5 – GitHub Actions – test failure

We uncovered the error when someone pushed the buggy code to the Git repository. Are you able to see the benefits of CI already?

Now, let’s fix the code and commit the code again.

Modify the create_post function of app.py so that it looks as follows:
@app.route('/posts', methods=['POST'])
def create_post():

    return jsonify({'post': str(inserted_post.inserted_id)}), 201

Then, commit and push the code to GitHub using the following commands:
$ git add --all
$ git commit -m 'Updated create_post'
$ git push

This time, the build will be successful:

Figure 11.6 – GitHub Actions – build success

Did you see how simple this was? We got started with CI quickly and implemented GitOps behind the scenes since the config file required to build and test the code also resided with the application code.

As an exercise, repeat the same process for the reviews, users, ratings, and frontend microservices.
You can play around with them to understand how it works.

Not everyone uses GitHub, so the SaaS offering might not be an option for them. Therefore, in the next section, we’ll look at the most popular open source CI tool: Jenkins.

Creating a GitHub repository – Continuous Integration with GitHub Actions and Jenkins-1

Before we can use GitHub Actions, we need to create a GitHub repository. As we know that each microservice can be independently developed, we will place all of them in separate Git repositories. For this exercise, we will focus only on the posts microservice and leave the rest to you as an exercise.

To do so, go to https://github.com/new and create a new repository. Give it an appropriate name. For this exercise, I am going to use mdo-posts.

Once you’ve created it, clone the repository by using the following command:

$ git clone https://github.com/<GitHub_Username>/mdo-posts.git

Then, change into the repository directory and copy the app.py, app.test.py, requirements.txt, and Dockerfile files into it using the following commands:

$ cd mdo-posts

$ cp ~/modern-devops/blog-app/posts/* .

Now, we need to create a GitHub Actions workflow file. We’ll do this in the next section.

Creating a GitHub Actions workflow

A GitHub Actions workflow is a simple YAML file that contains the build steps. We must create this workflow in the .github/workflows directory within the repository. We can do this using the following command:

$ mkdir -p .github/workflows

We will use the following GitHub Actions workflow file, build.yml, for this exercise:

name: Build and Test App
on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Login to Docker Hub
        id: login
        run: docker login -u ${{ secrets.DOCKER_USER }} -p ${{ secrets.DOCKER_PASSWORD }}
      - name: Build the Docker image
        id: build
        run: docker build . --file Dockerfile --tag ${{ secrets.DOCKER_USER }}/mdo-posts:$(git rev-parse --short "$GITHUB_SHA")
      - name: Push the Docker image
        id: push
        run: docker push ${{ secrets.DOCKER_USER }}/mdo-posts:$(git rev-parse --short "$GITHUB_SHA")

This file comprises the following:

  • name: The workflow’s name – Build and Test App in this case.

  • on: This describes when this workflow will run. In this case, it will run if a push or pull request is made to the main branch.
  • jobs: A GitHub Actions workflow contains one or more jobs that run in parallel by default. This attribute includes all jobs.
  • jobs.build: This is a job that does the container build.
  • jobs.build.runs-on: This describes where the build job will run. We’ve specified ubuntu-latest here. This means that this job will run on an Ubuntu VM.
  • jobs.build.steps: This consists of the steps that run sequentially within the job. The build job consists of four build steps: checkout, which will check out the code from your repository; login, which will log in to Docker Hub; build, which will run a Docker build on your code; and push, which will push your Docker image to Docker Hub. Note that we tag the image with the Git commit SHA. This relates the build to the commit, making Git the single source of truth.

  • jobs.build.steps.uses: This is the first step and describes an action you will run as a part of your job. Actions are reusable pieces of code that you can execute in your pipeline. In this case, it runs the checkout action. It checks out the code from the current branch where the action is triggered.

Tip

Always use a version with your actions. This will prevent your build from breaking if a later version is incompatible with your pipeline.

  • jobs.build.steps.name: This is the name of your build step.
  • jobs.build.steps.id: This is the unique identifier of your build step.
  • jobs.build.steps.run: This is the command it executes as part of the build step.

The workflow also contains variables within ${{ }} . We can define multiple variables within the workflow and use them in the subsequent steps. In this case, we’ve used two variables – ${{ secrets.DOCKER_USER }} and ${{ secrets.DOCKER_PASSWORD }}. These variables are sourced from GitHub secrets.

Tip

It is best practice to use GitHub secrets to store sensitive information. Never store these details directly in the repository with code.