The importance of CD and automation – Continuous Deployment/Delivery with Argo CD

CD offers several advantages. Some of them are as follows:

  • Faster time to market: CD and CI reduce the time it takes to deliver new features, enhancements, and bug fixes to end users. This agility can give your organization a competitive edge by allowing you to respond quickly to market demands.
  • Reduced risk: By automating the deployment process and frequently pushing small code changes, you minimize the risk of large, error-prone deployments. Bugs and issues are more likely to be caught early, and rollbacks can be less complex.
  • Improved code quality: Frequent automated testing and quality checks are an integral part of CD and CI. This results in higher code quality as developers are encouraged to write cleaner, more maintainable code. Any issues are caught and addressed sooner in the development process.
  • Enhanced collaboration: CD and CI encourage collaboration between development and operations teams. It breaks down traditional silos and encourages cross-functional teamwork, leading to better communication and understanding.
  • Efficiency and productivity: Automation of repetitive tasks, such as testing, building, and deployment, frees up developers’ time to focus on more valuable tasks, such as creating new features and improvements.
  • Customer feedback: CD allows you to gather feedback from real users more quickly. By deploying small changes frequently, you can gather user feedback and adjust your development efforts accordingly, ensuring that your product better meets user needs.
  • Continuous improvement: CD promotes a culture of continuous improvement. By analyzing data on deployments and monitoring, teams can identify areas for enhancement and iterate on their processes.
  • Better security: Frequent updates mean that security vulnerabilities can be addressed promptly, reducing the window of opportunity for attackers. Security checks can be automated and integrated into the CI/CD pipeline.
  • Reduced manual intervention: CD minimizes the need for manual intervention in the deployment process. This reduces the potential for human error and streamlines the release process.
  • Scalability: As your product grows and the number of developers and your code base complexity increases, CD can help maintain a manageable development process. It scales effectively by automating many of the release and testing processes.
  • Cost savings: Although implementing CI/CD requires an initial investment in tools and processes, it can lead to cost savings in the long run by reducing the need for extensive manual testing, lowering deployment-related errors, and improving resource utilization.
  • Compliance and auditing: For organizations with regulatory requirements, CD can improve compliance by providing a detailed history of changes and deployments, making it easier to track and audit code changes.

It’s important to note that while CD and CI offer many advantages, they also require careful planning, infrastructure, and cultural changes to be effective.

There are several models and tools available to implement CD. We’ll have a look at some of them in the next section.

The importance of CD and automation – Continuous Deployment/Delivery with Argo CD

CD forms the Ops part of your DevOps toolchain. So, while your developers are continuously building and pushing code and your CI pipeline is building, testing, and publishing the builds to your artifact repository, the Ops team will deploy the build to the test and staging environments. The QA team is the gatekeeper that will ensure that the code meets a certain quality, and only then will the Ops team deploy the code to production.

Now, for organizations implementing only the CI part, the rest of the activities are manual. For example, operators will pull the artifacts and run commands to do the deployments manually. Therefore, your deployment’s velocity will depend on the availability of your Ops team to do it. As the deployments are manual, the process is error-prone, and human beings tend to make mistakes in repeatable jobs.

One of the essential principles of modern DevOps is to avoid toil. Toil is nothing but repeatable jobs that developers and operators do day in and day out, and all of that toil can be removed by automation. This will help your team focus on the more important things at hand.

With continuous delivery, standard tooling can deploy code to higher environments based on certain gate conditions. CD pipelines will trigger when a tested build arrives at the artifact repository or, in the case of GitOps, if any changes are detected in the Environment repository. The pipeline then decides, based on a set configuration, where and how to deploy the code. It also establishes whether manual checks are required, such as raising a change ticket and checking whether it’s approved.
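
To make this concrete, the following is a hedged sketch of such a GitOps gate using Argo CD, which we'll introduce later in this chapter. The repository URL, path, and namespace are illustrative assumptions rather than this chapter's actual configuration:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: blog-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/environment-repo.git  # assumed Environment repository
    targetRevision: main
    path: manifests/prod            # assumed path watched for changes
  destination:
    server: https://kubernetes.default.svc
    namespace: blog-app             # assumed target namespace
  syncPolicy:
    automated:
      prune: true      # remove resources deleted from the repository
      selfHeal: true   # revert manual drift in the cluster

With an automated sync policy like this, any change detected in the Environment repository is rolled out without manual intervention.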

While continuous deployment and delivery are often confused with being the same thing, there is a slight difference between them. Continuous delivery enables your team to deliver tested code in your environment based on a human trigger. So, while you don’t have to do anything more than click a button to do a deployment to production, it would still be initiated by someone at a convenient time (a maintenance window). Continuous deployments go a step further when they integrate with the CI process and will start the deployment process as soon as a new tested build is available for them to consume. There is no need for manual intervention, and continuous deployment will only stop in case of a failed test.
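
As a hedged illustration of this difference in a GitHub Actions-style pipeline (the workflow name and deploy command are assumptions): a workflow_dispatch trigger models continuous delivery's human button-click, while a push trigger would model continuous deployment:

name: deploy-to-production
on:
  workflow_dispatch: {}    # continuous delivery: a human clicks Run workflow
  # push:                  # continuous deployment: uncomment to deploy on every tested change
  #   branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: production   # can be configured to require a manual approval
    steps:
      - uses: actions/checkout@v4
      - name: Deploy the manifests   # assumes cluster credentials are already configured
        run: kubectl apply -f manifests/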

The monitoring tool forms the next part of the DevOps toolchain. The Ops team can learn from managing their production environment and provide developers with feedback regarding what they need to do better. That feedback ends up in the development backlog, and they can deliver it as features in future releases. That completes the cycle, and now you have your team churning out a technology product continuously.

Technical requirements – Continuous Deployment/Delivery with Argo CD

In the previous chapter, we looked at one of the key aspects of modern DevOps – continuous integration (CI). CI is the first thing most organizations implement when they embrace DevOps, but things don’t end with CI, which only delivers a tested build to an artifact repository. Instead, we would also want to deploy the artifact to our environments. In this chapter, we’ll implement the next part of the DevOps toolchain – continuous deployment/delivery (CD).

In this chapter, we’re going to cover the following main topics:

  • The importance of CD and automation
  • CD models and tools
  • The Blog App and its deployment configuration
  • Continuous declarative IaC using an Environment repository
  • Introduction to Argo CD
  • Installing and setting up Argo CD
  • Managing sensitive configurations and secrets
  • Deploying the sample Blog App

Technical requirements

In this chapter, we will spin up a cloud-based Kubernetes cluster, Google Kubernetes Engine (GKE), for the exercises. At the time of writing, Google Cloud Platform (GCP) provides a free $300 trial for 90 days, so you can go ahead and sign up for one at https://console.cloud.google.com/.

You will also need to clone the following GitHub repository for some exercises: https://github.com/PacktPublishing/Modern-DevOps-Practices-2e.

Run the following command to clone the repository into your home directory, and cd into the ch12 directory to access the required resources:

$ git clone https://github.com/PacktPublishing/Modern-DevOps-Practices-2e.git \
modern-devops

$ cd modern-devops/ch12

So, let’s get started!

Utilize cloud-based CI/CD – Continuous Integration with GitHub Actions and Jenkins

Consider adopting cloud-based CI/CD services such as AWS CodePipeline, Google Cloud Build, Azure DevOps, or Travis CI for enhanced scalability and performance. Harness on-demand cloud resources to expand parallelization capabilities and adapt to varying workloads.

Monitor and profile your CI/CD pipelines

Implement performance monitoring and profiling tools to identify bottlenecks and areas for improvement within your CI/CD pipeline. Regularly analyze build and test logs to gather insights for optimizing performance.

Pipeline optimization

Continuously review and optimize your CI/CD pipeline configuration for efficiency and relevance.

Remove unnecessary steps or stages that do not contribute significantly to the process.

Implement automated cleanup

Implement automated cleanup routines to remove stale artifacts, containers, and virtual machines, preventing resource clutter. Regularly purge old build artifacts and unused resources to maintain a tidy environment.
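
As a hedged illustration (the schedule and retention window are arbitrary choices, and pruning only makes sense on persistent, self-hosted runners), a scheduled pipeline could automate this cleanup:

name: nightly-cleanup
on:
  schedule:
    - cron: '0 2 * * *'    # every night at 02:00 UTC
jobs:
  prune:
    runs-on: self-hosted   # ephemeral cloud runners don't accumulate clutter
    steps:
      - name: Remove stopped containers and unused images older than a week
        run: docker system prune -af --filter "until=168h"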

Documentation and training

Document best practices and performance guidelines for your CI/CD processes, ensuring that the entire team follows these standards consistently. Provide training and guidance to team members to empower them to implement and maintain these optimization strategies effectively.

By implementing these strategies, you can significantly enhance the speed, efficiency, and reliability of your CI/CD pipeline, ultimately leading to smoother software development and delivery processes. These are some of the best practices at a high level, and they are not exhaustive, but they are good enough so that you can start optimizing your CI environment.

Summary

This chapter covered CI, and you understood the need for CI and the basic CI workflow for a container application. We then looked at GitHub Actions, which we can use to build an effective CI pipeline. Next, we looked at the Jenkins open source offering and deployed a scalable Jenkins on Kubernetes with Kaniko, setting up a Jenkins controller-agent model. We then understood how to use hooks for automating builds, both in the GitHub Actions-based workflow and the Jenkins-based workflow. Finally, we learned about build performance best practices and dos and don’ts.

By now, you should be familiar with CI and its nuances, along with the various tooling you can use to implement it.

Automating a build with triggers – Continuous Integration with GitHub Actions and Jenkins

The best way to allow your CI build to trigger when you make changes to your code is to use a post-commit webhook. We looked at such an example in the GitHub Actions workflow. Let’s try to automate the build with triggers in the case of Jenkins. We’ll have to make some changes on both the Jenkins and the GitHub sides to do so. We’ll deal with Jenkins first; then, we’ll configure GitHub.

Go to Job configuration | Build Triggers and make the following changes:

Figure 11.16 – Jenkins GitHub hook trigger

Save the configuration by clicking Save. Now, go to your GitHub repository, click Settings | Webhooks | Add Webhook, and add the following details. Then, click Add Webhook:

Figure 11.17 – GitHub webhook

Now, push a change to the repository. The job on Jenkins will start building:

Figure 11.18 – Jenkins GitHub webhook trigger

This is automated build triggering in action. Jenkins is one of the most popular open source CI tools on the market. Its most significant advantage is that you can run it pretty much anywhere. However, it does come with some management overhead. You may have noticed how simple it was to get started with GitHub Actions; Jenkins is slightly more complicated.

Several other SaaS platforms offer CI and CD as a service. For instance, if you are running on AWS, you’d get their inbuilt CI with AWS CodeCommit and CodeBuild; Azure provides an entire suite of services for CI and CD in its Azure DevOps offering; and GCP provides Cloud Build for that job.

CI follows the same principle, regardless of the tooling you choose to implement. It is more of a process and a cultural change within your organization. Now, let’s look at some of the best practices regarding CI.

Build performance best practices

CI is an ongoing process, so you will have a lot of parallel builds running within your environment at a given time. In such situations, we can optimize them using several best practices.

Aim for faster builds

The faster you can complete your build, the quicker you will get feedback and run your next iteration. A slow build slows down your development team. Take steps to ensure that builds are faster. For example, in Docker’s case, it makes sense to use smaller base images, as the base image is downloaded from the image registry every time you run a build. Using a single base image for most builds will also speed up your build time. Using tests will help, but make sure that they aren’t long-running. We want to avoid a CI build that runs for hours, so it would be good to offload long-running tests into another job or a separate pipeline, as the following sketch shows. Run activities in parallel if possible.
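
For instance, here is a hedged sketch of that split in a GitHub Actions-style pipeline. The make targets are assumptions used for illustration:

name: tests
on: push
jobs:
  unit-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run the fast unit suite for quick feedback
        run: make unit-tests         # assumed target
  integration-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run the long suite in a parallel job
        run: make integration-tests  # assumed target

Because the jobs have no dependency on each other, they run in parallel, so the slow suite never blocks the fast feedback loop.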

Connecting Jenkins with the cluster – Continuous Integration with GitHub Actions and Jenkins

We will install the Kubernetes plugin to connect the Jenkins controller with the cluster. We’re doing this because we want Jenkins to dynamically spin up agents for builds as Kubernetes pods.

We will start by creating a kubernetes configuration under jenkins.clouds, as follows:
jenkins:
  clouds:
    - kubernetes:
        serverUrl: "https://<kubernetes_control_plane_ip>"
        jenkinsUrl: "http://jenkins-service:8080"
        jenkinsTunnel: "jenkins-service:50000"
        skipTlsVerify: false
        useJenkinsProxy: false
        maxRequestsPerHost: 32
        name: "kubernetes"
        readTimeout: 15
        podLabels:
          - key: jenkins
            value: agent

As we have a placeholder called <kubernetes_control_plane_ip> within the configuration, we must replace this with the Kubernetes control plane’s IP address. Run the following command to fetch the control plane’s IP address:
$ kubectl cluster-info | grep "control plane"

Kubernetes control plane is running at https://35.224.6.58

Now, replace the placeholder with the actual IP address you obtained from the preceding command by using the following command (substituting actual_ip with that IP address):
$ sed -i 's/<kubernetes_control_plane_ip>/actual_ip/g' casc.yaml

Let’s look at each attribute in the config file:
• serverUrl: This denotes the Kubernetes control plane server URL, allowing the Jenkins controller to communicate with the Kubernetes API server.
• jenkinsUrl: This denotes the Jenkins controller URL. We’ve set it to http://jenkins-service:8080.
• jenkinsTunnel: This describes how the agent pods will connect with the Jenkins controller. As the JNLP port is 50000, we’ve set it to jenkins-service:50000 (see the Service sketch below).
• podLabels: We’ve also set up some pod labels, key=jenkins and value=agent. These will be set on the agent pods.

Other attributes are also set to their default values.
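
Both jenkinsUrl and jenkinsTunnel assume a Kubernetes Service named jenkins-service in front of the controller. Here is a hedged sketch of such a Service (the selector label is an assumption, not taken from this chapter’s manifests):

apiVersion: v1
kind: Service
metadata:
  name: jenkins-service
spec:
  selector:
    app: jenkins    # assumed label on the Jenkins controller pod
  ports:
    - name: http    # web UI; matches jenkinsUrl
      port: 8080
    - name: jnlp    # agent tunnel; matches jenkinsTunnel
      port: 50000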

Every Kubernetes cloud configuration consists of multiple pod templates describing how the agent pods will be configured. The configuration looks like this:
kubernetes:
  templates:
    - name: "jenkins-agent"
      label: "jenkins-agent"
      hostNetwork: false
      nodeUsageMode: "NORMAL"
      serviceAccount: "jenkins"
      imagePullSecrets:
        - name: regcred
      yamlMergeStrategy: "override"
      containers:

Here, we’ve defined the following:
• The template’s name and label. We set both to jenkins-agent.
• hostNetwork: This is set to false as we don’t want the container to interact with the host network.
• serviceAccount: We’ve set this to jenkins as we want to use this service account to interact with Kubernetes.
• imagePullSecrets: We have also provided an image pull secret called regcred to authenticate with the container registry to pull the jnlp image (see the sketch after this list).
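
For reference, here is a hedged sketch of what the regcred secret generally looks like (the payload is a placeholder; such secrets are usually generated with kubectl create secret docker-registry rather than written by hand):

apiVersion: v1
kind: Secret
metadata:
  name: regcred
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64-encoded Docker config.json>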

Every pod template also contains a container template. We can define that using the following configuration:

containers:
  - name: jnlp
    image: "<your_dockerhub_user>/jenkins-jnlp-kaniko"
    workingDir: "/home/jenkins/agent"
    command: ""
    args: ""
    livenessProbe:
      failureThreshold: 1
      initialDelaySeconds: 2
      periodSeconds: 3
      successThreshold: 4
      timeoutSeconds: 5
volumes:
  - secretVolume:
      mountPath: /kaniko/.docker
      secretName: regcred

Here, we have specified the following:
• name: Set to jnlp.
• image: Here, we’ve specified the Docker agent image we will build in the next section. Ensure that you replace the <your_dockerhub_user> placeholder with your Docker Hub user by using the following command (substituting actual_dockerhub_user with your username):

$ sed -i 's/<your_dockerhub_user>/actual_dockerhub_user/g' casc.yaml
• workingDir: Set to /home/jenkins/agent.
• We’ve set the command and args fields to blank as we don’t need to pass them.
• livenessProbe: We’ve defined a liveness probe for the agent pod.
• volumes: We’ve mounted the regcred secret to the /kaniko/.docker directory as a volume. As regcred contains the Docker registry credentials, Kaniko will use this to connect with your container registry.

Now that our configuration file is ready, we’ll go ahead and install Jenkins in the next section.

Spinning up Google Kubernetes Engine – Continuous Integration with GitHub Actions and Jenkins

Once you’ve signed up and are in your console, open the Google Cloud Shell CLI to run the following commands.

You need to enable the Kubernetes Engine API first using the following command:

$ gcloud services enable container.googleapis.com

To create a two-node autoscaling GKE cluster that scales from one to five nodes, run the following command:

$ gcloud container clusters create cluster-1 --num-nodes 2 \
--enable-autoscaling --min-nodes 1 --max-nodes 5 --zone us-central1-a

And that’s it! The cluster will be up and running.

You must also clone the following GitHub repository for some of the exercises provided: https://github.com/PacktPublishing/Modern-DevOps-Practices-2e.

Run the following command to clone the repository into your home directory and cd into the following directory to access the required resources:

$ git clone https://github.com/PacktPublishing/Modern-DevOps-Practices-2e.git \
modern-devops

$ cd modern-devops/ch11/jenkins/jenkins-controller

We will use the Jenkins Configuration as Code feature to configure Jenkins as it is a declarative way of managing your configuration and is also GitOps-friendly. You need to create a simple YAML file with all the required configurations and then copy the file to the Jenkins controller after setting an environment variable that points to the file. Jenkins will then automatically configure all aspects defined in the YAML file on bootup.
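
With the Jenkins Configuration as Code plugin, that environment variable is CASC_JENKINS_CONFIG. As a hedged sketch, on Kubernetes it could be set on the controller’s pod spec like this (the image and file path are assumptions):

containers:
  - name: jenkins
    image: jenkins/jenkins:lts             # assumed controller image
    env:
      - name: CASC_JENKINS_CONFIG          # the JCasC plugin reads its config from this path
        value: /var/jenkins_home/casc.yaml # assumed location of the copied file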

Let’s start by creating the casc.yaml file to define our configuration.

Creating the Jenkins CaC (JCasC) file

The Jenkins CaC (JCasC) file is a simple YAML file that helps us define Jenkins configuration declaratively. We will create a single casc.yaml file for that purpose, and I will explain parts of it. Let’s start by defining Jenkins Global Security.

Configuring Jenkins Global Security

By default, Jenkins is insecure – that is, if you fire up a vanilla Jenkins instance from the official Docker image and expose it, anyone can do anything with it. To ensure that we protect it, we need the following configuration:

jenkins:
  remotingSecurity:
    enabled: true
  securityRealm:
    local:
      allowsSignup: false
      users:
        - id: ${JENKINS_ADMIN_ID}
          password: ${JENKINS_ADMIN_PASSWORD}
  authorizationStrategy:
    globalMatrix:
      permissions:
        - "Overall/Administer:admin"
        - "Overall/Read:authenticated"

In the preceding configuration, we’ve defined the following:

  • remotingSecurity: We’ve enabled this feature, which will secure the communication between the Jenkins controller and agents that we will create dynamically using Kubernetes.
  • securityRealm: We’ve set the security realm to local, which means that the Jenkins controller itself will do all authentication and user management. We could have also offloaded this to an external entity such as LDAP:
  • allowsSignup: This is set to false. This means you won’t see a sign-up link on the Jenkins home page, and the Jenkins admin must manually create users.
  • users: We’ll create a single user with id and password sourced from two environment variables called JENKINS_ADMIN_ID and JENKINS_ADMIN_PASSWORD, respectively.
  • authorizationStrategy: We’ve defined a matrix-based authorization strategy where we provide administrator privileges to admin and read privileges to authenticated non-admin users.

Also, as we want Jenkins to execute all their builds in the agents and not the controller machine, we need to specify the following settings:

jenkins:
  systemMessage: "Welcome to Jenkins!"
  numExecutors: 0

We’ve set numExecutors to 0 to allow no builds on the controller and have also set a systemMessage that appears on the Jenkins welcome screen.

Now that we’ve set up the security aspects of the Jenkins controller, we will configure Jenkins to connect with the Kubernetes cluster.

Scalable Jenkins on Kubernetes with Kaniko – Continuous Integration with GitHub Actions and Jenkins

Jenkins follows a controller-agent model. Though you can technically run all your builds on the controller machine itself, it makes sense to offload your CI builds to other servers in your network to have a distributed architecture. This avoids overloading your controller machine, which you can then use to store the build configurations and other management data and to manage the entire CI build cluster, something along the lines of what’s shown in the following diagram:

Figure 11.7 – Scalable Jenkins

In the preceding diagram, multiple static Jenkins agents connect to a Jenkins controller. This architecture works, but it is not very scalable. Modern DevOps emphasizes resource utilization, so we only want to roll out an agent machine when we need a build. Automating your builds to roll out agents on demand is therefore a better approach. However, rolling out new virtual machines on demand might be overkill, as provisioning a new VM takes several minutes, even when using a prebuilt image with Packer. A better alternative is to use a container.

Jenkins integrates quite well with Kubernetes, allowing you to run your build on a Kubernetes cluster. That way, whenever you trigger a build on Jenkins, Jenkins instructs Kubernetes to create a new agent container that will then connect with the controller machine and run the build within itself. This is build on-demand at its best. The following diagram shows this process in detail:

Figure 11.8 – Scalable Jenkins CI workflow

This sounds great, and we can go ahead and run this build, but there are issues with this approach. We must understand that the Jenkins controller and agents run as containers and aren’t full-fledged virtual machines. Therefore, if we want to run a Docker build within the container, we must run the container in privileged mode. This isn’t a security best practice, and your admin should already have turned that off, because running a container in privileged mode exposes your host filesystem to the container. A hacker who gains access to your container would then have full access to your host and could do whatever they want on your system.

To solve that problem, you can use a container build tool such as Kaniko. Kaniko is a build tool provided by Google that helps you build your containers without access to the Docker daemon, and you do not even need Docker installed in your container. It is a great way to run your builds within a Kubernetes cluster and create a scalable CI environment. It is effortless, not hacky, and provides a secure method of building your containers, as we will see in the subsequent sections.
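
To give a feel for how Kaniko runs without a Docker daemon, here is a hedged sketch of a standalone Kaniko build pod. The repository and image names are assumptions, and the regcred secret is the same registry credential we use elsewhere in this chapter:

apiVersion: v1
kind: Pod
metadata:
  name: kaniko-build
spec:
  restartPolicy: Never
  containers:
    - name: kaniko
      image: gcr.io/kaniko-project/executor:latest
      args:
        - --context=git://github.com/example-org/example-app.git   # assumed source repository
        - --dockerfile=Dockerfile
        - --destination=docker.io/example-user/example-app:latest  # assumed target image
      volumeMounts:
        - name: docker-config
          mountPath: /kaniko/.docker
  volumes:
    - name: docker-config
      secret:
        secretName: regcred
        items:
          - key: .dockerconfigjson
            path: config.json   # Kaniko expects /kaniko/.docker/config.json

Kaniko executes each Dockerfile instruction in userspace and pushes the result straight to the registry, which is why no privileged mode is needed.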

This section will use Google Kubernetes Engine (GKE). As mentioned previously, Google Cloud provides a free trial worth $300 for 90 days. You can sign up at https://cloud.google.com/free if you have not already done so.

Scalable Jenkins on Kubernetes with Kaniko – Continuous Integration with GitHub Actions and Jenkins

Imagine you’re running a workshop where you build all sorts of machines. In this workshop, you have a magical conveyor belt called Jenkins for assembling these machines. But to make your workshop even more efficient and adaptable, you’ve got a team of tiny robot workers called Kaniko that assist in constructing the individual parts of each machine. Let’s draw parallels between this workshop analogy and the technology world:

  • Scalable Jenkins: Jenkins is a widely used automation server that helps automate various tasks, particularly those related to building, testing, and deploying software. “Scalable Jenkins” means configuring Jenkins in a way that allows it to efficiently handle a growing workload, much like having a spacious workshop capable of producing numerous machines.
  • Kubernetes: Think of Kubernetes as the workshop manager. It’s an orchestration platform that automates the process of deploying, scaling, and managing containerized applications. Kubernetes ensures that Jenkins and the team of tiny robots (Kaniko) work seamlessly together and can adapt to changing demands.
  • Kaniko: Kaniko is equivalent to your team of miniature robot workers. In the context of containerization, Kaniko is a tool that aids in building container images, which are akin to the individual parts of your machines. What makes Kaniko special is that it can do this without needing elevated access to the Docker daemon. Unlike traditional container builders, Kaniko doesn’t require special privileges, making it a more secure choice for constructing containers, especially within a Kubernetes environment.

Now, let’s combine the three tools and see what we can achieve:

  • Building containers at scale: Your workshop can manufacture multiple machines simultaneously, thanks to Jenkins and the tiny robots. Similarly, with Jenkins on Kubernetes using Kaniko, you can efficiently and concurrently create container images. This ability to scale is crucial in modern application development, where containerization plays a pivotal role.
  • Isolation and security: Just as Kaniko’s tiny robots operate within a controlled environment, Kaniko ensures that container image building takes place in an isolated and secure manner within a Kubernetes cluster. This means that different teams or projects can use Jenkins and Kaniko without interfering with each other’s container-building processes.
  • Consistency and automation: Similar to how the conveyor belt (Jenkins) guarantees consistent machine assembly, Jenkins on Kubernetes with Kaniko ensures uniform container image construction. Automation is at the heart of this setup, simplifying the process of building and managing container images for applications.

To summarize, scalable Jenkins on Kubernetes with Kaniko refers to the practice of setting up Jenkins to efficiently build and manage container images using Kaniko within a Kubernetes environment. It enables consistent, parallel, and secure construction of container images, aligning perfectly with modern software development workflows.

So, the analogy of a workshop with Jenkins, Kubernetes, and Kaniko vividly illustrates how this setup streamlines container image building, making it scalable, efficient, and secure for contemporary software development practices. Now, let’s dive deeper into Jenkins.

Jenkins is the most popular CI tool available on the market. It is open source, simple to install, and runs with ease. It is a Java-based tool with a plugin-based architecture designed to support several integrations, such as with source code management tools like Git, SVN, and Mercurial, or with popular artifact repositories such as Nexus and Artifactory. It also integrates well with well-known build tools such as Ant, Maven, and Gradle, aside from standard shell scripting and Windows batch file executions.

Creating a GitHub repository – Continuous Integration with GitHub Actions and Jenkins

You must define two secrets within your repository using the following URL: https://github.com/<your_github_user>/mdo-posts/settings/secrets/actions.

Define two secrets within the repository:
DOCKER_USER=<your_dockerhub_user>
DOCKER_PASSWORD=<your_dockerhub_password>

Now, let’s move this build.yml file to the workflows directory by using the following command:
$ mv build.yml .github/workflows/
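
For context, here is a hedged sketch of what such a build.yml might contain. This is an illustration, not the book’s exact file: the image name and the detail that app.test.py runs during the image build are inferred from the surrounding text:

name: Build and push the posts microservice
on: push
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Login to Docker Hub
        run: echo "${{ secrets.DOCKER_PASSWORD }}" | docker login -u "${{ secrets.DOCKER_USER }}" --password-stdin
      - name: Build the Docker image   # tests such as app.test.py are assumed to run inside this build
        run: docker build -t ${{ secrets.DOCKER_USER }}/mdo-posts:latest .
      - name: Push the Docker image
        run: docker push ${{ secrets.DOCKER_USER }}/mdo-posts:latest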

Now, we’re ready to push this code to GitHub. Run the following commands to commit and push the changes to your GitHub repository:
$ git add --all
$ git commit -m 'Initial commit'
$ git push

Now, go to the Workflows tab of your GitHub repository by visiting https://github.com/<your_github_user>/mdo-posts/actions. You should see something similar to the following:

Figure 11.2 – GitHub Actions

As we can see, GitHub has run a build using our workflow file, and it has built the code and pushed the image to Docker Hub. Upon visiting your Docker Hub account, you should see your image present in your account:

Figure 11.3 – Docker Hub image

Now, let’s try to break our code somehow. Let’s suppose that someone from your team changed the app.py code, and instead of returning post in the create_post response, it started returning pos. Let’s see what would happen in that scenario.

Make the following changes to the create_post function in the app.py file:
@app.route('/posts', methods=['POST'])
def create_post():
    ...
    return jsonify({'pos': str(inserted_post.inserted_id)}), 201

Now, commit and push the code to GitHub using the following commands:
$ git add --all
$ git commit -m 'Updated create_post'
$ git push

Now, go to GitHub Actions and find the latest build. You will see that the build will error out and give the following output:

Figure 11.4 – GitHub Actions – build failure

As we can see, the Build the Docker image step has failed. If you click on the step and scroll down to see what happened, you will find that the app.test.py execution failed. This is because of a test case failure with AssertionError: 'post' not found in {'pos': '60458fb603c395f9a81c9f4a'}. As the expected post key was not found in the output, {'pos': '60458fb603c395f9a81c9f4a'}, the test case failed, as shown in the following screenshot:

Figure 11.5 – GitHub Actions – test failure

We uncovered the error when someone pushed the buggy code to the Git repository. Are you able to see the benefits of CI already?

Now, let’s fix the code and commit the code again.

Modify the create_post function of app.py so that it looks as follows:
@app.route('/posts', methods=['POST'])
def create_post():
    ...
    return jsonify({'post': str(inserted_post.inserted_id)}), 201

Then, commit and push the code to GitHub using the following commands:
$ git add --all
$ git commit -m 'Updated create_post'
$ git push

This time, the build will be successful:

Figure 11.6 – GitHub Actions – build success

Did you see how simple this was? We got started with CI quickly and implemented GitOps behind the scenes since the config file required to build and test the code also resided with the application code.

As an exercise, repeat the same process for the reviews, users, ratings, and frontend microservices.
You can play around with them to understand how it works.

Not everyone uses GitHub, so the SaaS offering might not be an option for them. Therefore, in the next section, we’ll look at the most popular open source CI tool: Jenkins.