This is another article about Docker pipelines with Jenkins.
I have already written about Jenkins libraries to standardize Jenkins pipelines within an organization. This article can be seen as a complement, in an OpenShift context.
OpenShift is a solution based on Kubernetes that brings additional features, such as a portal, a catalog of solutions (including Jenkins), many security features and new K8s resources. Among these new resources is the notion of build config. A build config object allows you to associate a source configuration with a Jenkins project. So, basically, you write a YAML file indicating, for example, the location of a Git repository, the branch you want to build, etc. You then pass it to the oc client (oc apply -f my.yaml) and it will create a Jenkins project. You will find more information about build configurations in the OpenShift documentation.
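To give an idea, here is a minimal sketch of such a build config. The names and the repository URL are made up for the example; the JenkinsPipeline strategy is what makes OpenShift generate a Jenkins project from it.

# A minimal build config sketch (hypothetical names and URL).
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: my-docker-project
spec:
  source:
    type: Git
    git:
      # The repository and branch to build (placeholders)
      uri: "https://some.gitlab/path/to/a/project.git"
      ref: "master"
  strategy:
    type: JenkinsPipeline
    jenkinsPipelineStrategy:
      # The pipeline definition is read from this file in the repository
      jenkinsfilePath: Jenkinsfile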
The problem I had with build configs is that the generated Jenkins project is a simple one.
It does not support multiple branches. Generating a multi-branch pipeline would be much better, but it is not feasible this way. I was then advised to look at Jenkins’ Job DSL. It relies on a Jenkins plug-in that, from a seed project, can populate and configure Jenkins projects. Thanks to this plug-in, I quickly got a Jenkins configuration as code.
Using the Job DSL
The first thing to do is to create a seed project in Jenkins.
It is a free-style project with a build step that executes a Job DSL script. Everything is documented in the Job DSL plug-in’s documentation.
The following script is stored in a git repository.
I copy it into my seed’s configuration, and that’s it. Here is what it looks like…
def gitBaseUrl = "https://some.gitlab/path/to/a/my/gitlab/group"
def gitRepos = [
  ["docker-project-name-1":"path-to-project-1"],
  ["docker-project-name-2":"path-to-project-2"]
]

for (gitRepo in gitRepos) {
  for (e in gitRepo) {

    // Create Jenkins folders and reference shared libraries
    folder("${e.value}") {
      properties {
        folderLibraries {
          libraries {
            libraryConfiguration {
              name("my-library-for-docker")
              defaultVersion('master')
              implicit(false)
              allowVersionOverride(true)
              includeInChangesets(true)
              retriever {
                modernSCM {
                  scm {
                    git {
                      remote('https://some.gitlab/path/to/a/jenkins/library')
                      credentialsId('my-credentials-id')
                    }
                  }
                }
              }
            }
          }
        }
      }
    }

    // Create multi-branch pipeline projects
    multibranchPipelineJob("${e.value}/${e.key}") {
      branchSources {
        branchSource {
          source {
            git {
              id("${e.value}-${e.key}")
              remote("${gitBaseUrl}/${e.value}/${e.key}.git")
              credentialsId('middleware-jenkins-gittoken')
            }
          }
          strategy {
            defaultBranchPropertyStrategy {
              props {
                // Do not trigger a build on branch scan
                noTriggerBranchProperty()
              }
            }
          }
        }
      }

      // Listen to changes in branches and new tags
      configure { node ->
        node / sources / data / 'jenkins.branch.BranchSource' / source / traits {
          'jenkins.plugins.git.traits.BranchDiscoveryTrait'()
          'jenkins.plugins.git.traits.TagDiscoveryTrait'()
        }
      }

      // Verify new branches and new tags every day
      triggers {
        cron('@daily')
      }

      // What to do with old builds?
      orphanedItemStrategy {
        discardOldItems {
          numToKeep(10)
          daysToKeep(-1)
        }
      }
    }
  }
}
Here, I explicitly reference the projects I want to generate.
I reuse the Git folder structure and project it into Jenkins. Shared libraries can be defined globally in Jenkins, but also on folders. The Job DSL does not allow configuring global settings, but it does allow configuring them on folders. The multi-branch pipelines are quite easy to understand: we consider both branches and tags.
If you need to customize the script, you can find examples on GitHub and, most of all, in your Jenkins instance.
Once the script is set in your seed project, just build it in Jenkins and it will update your Jenkins projects. The build is idempotent: you can run it as many times as you want, it will overwrite the current settings. So, if you need to update the DSL script, just do it and run a new build; everything will be updated. This is useful if you add new shared libraries. In the same way, the Jenkins plug-in tracks the projects it has created. So, if I remove a project from my list, the default behavior will not delete the related Jenkins project but only disable it (it will become read-only).
I have investigated directly referencing the script instead of copying it.
It seems you cannot automatically propagate changes from the sources; you have to approve the changes first. I guess this is a security feature. I have not searched a lot about this, as it is not a big matter for us for the moment.
Normalized Pipeline for Docker Images
The Job DSL defines a shared library at the folder level.
Here is a simple pipeline library (myDockerPipeline.groovy) for our Docker images.
1. Checkout the sources.
2. Verify some assertions on our Dockerfile.
3. Build the image.
4. Publish it in the right repository (not the same for branches and tags).
5. Perform an AQUA analysis of development images. We assume another pipeline handles images built from a tag (not shown here).
There is no test in this pipeline, although we could add some.
In Jenkins, it is achieved with…
// Shared library that defines the generic pipeline for Docker images.
def call( Map pipelineParams ) {

  // Basic properties
  def tokenString = pipelineParams.gitRepoPath.replace('/', '-') + "--" + pipelineParams.gitRepoName
  def imageName = pipelineParams.gitRepoName.replace('-dockerfile', '')
  def label = 'base'

  // Complex properties (configure the build trigger through a URL and a token)
  properties([
    pipelineTriggers([
      [$class: 'GenericTrigger',
        genericVariables: [
          [key: 'ref', value: '$.ref'],
          [
            key: 'before',
            value: '$.before',
            expressionType: 'JSONPath',
            regexpFilter: '',
            defaultValue: ''
          ]
        ],
        genericRequestVariables: [
          [key: 'requestWithNumber', regexpFilter: '[^0-9]'],
          [key: 'requestWithString', regexpFilter: '']
        ],
        genericHeaderVariables: [
          [key: 'headerWithNumber', regexpFilter: '[^0-9]'],
          [key: 'headerWithString', regexpFilter: '']
        ],
        causeString: 'Triggered after a change on $ref',
        token: "${tokenString}",
        printContributedVariables: true,
        printPostContent: true,
        regexpFilterText: '$ref',
        regexpFilterExpression: 'refs/heads/' + BRANCH_NAME
      ]
    ])
  ])

  podTemplate(
    label: label,
    cloud: 'openshift',
    containers: [
      containerTemplate(
        name: "jnlp",
        image: "my-jenkins-jnlp:v3.11",
        envVars: [
          envVar(key: 'ENV_DOCKER_HOST', value: 'remote-docker-engine'),
          envVar(key: 'ENV_LOCAL_IMG_NAME', value: 'my-team/' + imageName),
          envVar(key: 'ENV_DEV_IMG_NAME', value: 'my-team/dev/' + imageName),
          envVar(key: 'ENV_RELEASE_IMG_NAME', value: 'my-team/releases/' + imageName)
        ]
      ),
      containerTemplate(
        name: "aqua",
        image: "our-aqua-image:v3.11",
        command: 'cat',
        ttyEnabled: true,
        envVars: [
          envVar(key: 'ENV_DEV_IMG_NAME', value: 'my-team/dev/' + imageName)
        ]
      )
    ],
    serviceAccount: "jenkins") {

    node(label) {
      container(name: 'jnlp') {

        // Checkout
        stage('Checkout') {
          checkout scm
        }

        // Lint
        stage('Linting') {
          // Do we have the right labels in the Dockerfile?
          verifyDockerfile()
        }

        // Build
        stage('Build') {
          sh 'docker -H "${ENV_DOCKER_HOST}" build -t "$ENV_LOCAL_IMG_NAME" .'
        }

        // Stages executed for a TAG
        if (env.TAG_NAME) {
          stage('Publish') {
            sh '''#!/bin/bash
            # Push to the releases repository
            docker -H "${ENV_DOCKER_HOST}" tag \
              "$ENV_LOCAL_IMG_NAME" \
              "${ENV_RELEASE_IMG_NAME}":"${TAG_NAME}"

            docker -H "${ENV_DOCKER_HOST}" push \
              "${ENV_RELEASE_IMG_NAME}":"${TAG_NAME}"
            '''
          }
        }

        // Stages executed for a simple branch
        else if (env.BRANCH_NAME) {
          stage('Publish') {
            sh '''#!/bin/bash
            # Push to the development repository
            docker -H "${ENV_DOCKER_HOST}" tag \
              "$ENV_LOCAL_IMG_NAME" \
              "${ENV_DEV_IMG_NAME}":"${BRANCH_NAME}"

            docker -H "${ENV_DOCKER_HOST}" push \
              "${ENV_DEV_IMG_NAME}":"${BRANCH_NAME}"
            '''
          }
        }
      }

      // Here, we use the AQUA plug-in to scan images
      // (we reference a remote AQUA installation).
      container(name: 'aqua') {
        if (env.BRANCH_NAME) {
          stage("Hosted : Aqua CI/CD Scan Image") {
            ansiColor('css') {
              aqua customFlags: '',
                hideBase: false,
                // The image that was pushed for this branch
                // (same value as ENV_DEV_IMG_NAME, built with Groovy interpolation)
                hostedImage: "my-team/dev/${imageName}:${env.BRANCH_NAME}",
                localImage: '',
                locationType: 'hosted',
                notCompliesCmd: 'echo "The AQUA scan has failed."',
                onDisallowed: 'fail',
                showNegligible: true,
                registry: 'our-registry-id',
                register: true
            }
          }
        }
      }
    }
  }
}
The main thing to notice is the part commented as the complex properties.
By default, our Jenkins installation has a global listener activated: https://our-jenkins-url/generic-webhook-trigger/invoke
So, anyone sending an HTTP notification to this address could trigger something. The question is: how can we use it to trigger a specific job, for a specific branch? Well, we use a simple token for that. Here, the token is based on the repository name and location. As an example, our Git repo “some-path/some-project” will be associated with the following token: some-path--some-project
So, if someone notifies https://our-jenkins-url/generic-webhook-trigger/invoke?token=some-path--some-project, then the job’s configuration will catch it. The other properties allow filtering the right branch, so that only the right Jenkins job is triggered.
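As an illustration, here is what a manual notification could look like (a sketch: the payload mimics what GitLab would send on a push to the master branch):

curl -X POST \
     -H "Content-Type: application/json" \
     -d '{"ref": "refs/heads/master"}' \
     "https://our-jenkins-url/generic-webhook-trigger/invoke?token=some-path--some-project"

Only the job whose token and branch filter match this request will be triggered.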
Another element to notice is the custom verifyDockerfile library.
Here is its code (verifyDockerfile.groovy).
def call() {
  def dockerfileContent = readFile('Dockerfile')
  assert dockerfileContent.contains('LABEL maintainer=') : "No maintainer was found."
  assert dockerfileContent.contains('"common-mailbox@our-domain.com"') : "The maintainer must be common-mailbox@our-domain.com"
  // OK, the check is somewhat basic
}
It allows verifying some parts of the Dockerfile.
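As an example, a Dockerfile starting like this (the base image is just an assumption) would pass both assertions:

# Hypothetical base image, for the example only
FROM alpine:3.10
LABEL maintainer="common-mailbox@our-domain.com"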
Finally, here is an example of our pipeline (Jenkinsfile).
@Library('my-library-for-docker') _

myDockerPipeline(
  gitRepoPath: 'repo-path',
  gitRepoName: 'repo-name'
)
This way, the content of our Jenkinsfile is minimalist.
We can update our library at any time without having to update the Jenkinsfile. No matter how many Docker images you maintain, you are sure all of them follow the same pipeline.
As a reminder, all the Groovy libraries must be located under the vars directory in your project.
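Assuming the library repository only contains the two libraries shown in this article, its layout would look like this:

my-library-for-docker/
└── vars/
    ├── myDockerPipeline.groovy
    └── verifyDockerfile.groovy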
About the Branch Source Plug-ins
You must have noticed I referenced the projects by hand at the beginning of the seed’s script.
It is possible to avoid this and use a plug-in to scan your sources directly. There are existing ones for GitHub, Bitbucket and GitLab.
You have to define an organization folder and its properties.
Once the seed is built, it will scan the Git forge and create multi-branch pipeline projects (for the branches that have a Jenkinsfile). Here is a sample for GitLab.
organizationFolder('GitLab Organization Folder') {
  displayName('GitLAB')

  // "Projects"
  organizations {
    gitLabSCMNavigator {
      projectOwner("my-gitlab-group")
      credentialsId("personal-token")
      serverName("my-gitlab-url")
      traits {
        subGroupProjectDiscoveryTrait() // discover projects inside sub-groups
        gitLabBranchDiscovery {
          strategyId(3) // discover all branches
        }
      }
    }
  }

  // "Project Recognizers"
  projectFactories {
    workflowMultiBranchProjectFactory {
      scriptPath 'Jenkinsfile'
    }
  }

  // "Orphaned Item Strategy"
  orphanedItemStrategy {
    discardOldItems {
      daysToKeep(10)
      numToKeep(5)
    }
  }

  // "Scan Organization Folder Triggers"
  triggers {
    periodicFolderTrigger {
      interval('60')
    }
  }
}
As you can see, it is a little bit less verbose.
We have not chosen this approach though. Overall, the manual declaration is suitable for now. We also noticed some glitches with the GitLab plug-in, mainly about character encoding and avatars. This is not a big issue by itself; fixes will come for that.