Shared Responsibilities in Pipelines for Docker and Kubernetes

DevOps, and in fact DevSecOps, tends to be the norm today.
Or rather, let's say it tends to be an ideal: many organizations are far from it, even if more and more are trying to adopt this way of working.

This kind of approach fits small structures better. With limited means, it is not unusual for people to do everything themselves. Big structures, in particular among the GAFAM, have split their teams into small units (two-pizza teams are one example) that behave more or less like small structures. The issue with big organizations is that they face additional challenges, mainly keeping the information system coherent, containing entropy and preventing anarchy. This means having a common security policy, using identified languages, libraries and frameworks, and knowing what is used and where. In one word: governance.

These may seem like obsolete debates from a technical point of view, but from an organizational point of view they are essential. Big structures often have support contracts: they need to know what runs and what does not, especially when their contractors can audit them (think of Oracle or IBM as an example). They also need to know what kinds of technologies run. If a single person in the company masters this or that framework, how will the project be maintained? Knowing what runs makes it possible to identify the required skills. Structures with many projects also need to know how projects interact with each other; this matters when a crisis arises (which parts of the system depend on which). Last but not least, security is a crucial aspect that goes beyond individual projects. Having common guidelines (and checks) is imperative, in particular as things move faster and faster, which automating delivery processes has made possible.

One of the keys to being agile is continuous integration, and now continuous deployment. The more things are automated, the faster they can be delivered. The governance I described above should find its place in these delivery pipelines. In fact, such a pipeline can be split into responsibility domains. Each domain is associated with a goal, a responsibility and a role that is in charge. In small organizations, all the roles may be held by the same team. In big ones, they could be held by different teams or councils. The key is that they need to work together.

This does not diminish the benefits of DevSecOps approaches. The more versatile a team is, the faster it can deliver its project. And if you only have excellent developers, you can have several DevOps teams and expect them to cope on their own. But not every organization is Google or Facebook. Some projects might need help, and keeping a set of common procedures is healthy. You will keep having councils or pools of experts, even if your teams are DevSecOps. The only requirements are that the project teams are aware of these procedures (regular communication, training sessions) and that all of these verifications are part of automated delivery pipelines.

Responsibility Domains

I have identified three responsibility domains for automated pipelines (a minimal pipeline sketch follows the list below):

  • Project is the first one, where the project team can integrate its own checks and tests. It generally includes functional tests. This is the usual part of continuous integration.
  • Security is the obvious one. It may involve verifications that developers might not think about.
  • Software Governance is the last domain. It may include checks on the programs being used (do we have a support contract?), version checks, notifying some cartography API, etc.

Different stages in the build pipeline cover various concerns
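
As an illustration, here is a minimal Jenkinsfile sketch with one stage per responsibility domain (Jenkins being the tool I retained further below). The shell scripts are placeholders for whatever checks each role decides to run, not real project files.

    // Hypothetical Jenkinsfile: one stage per responsibility domain.
    // The shell scripts are placeholders for whatever each role decides to run.
    pipeline {
        agent any
        stages {
            stage('Project') {
                steps {
                    // Checks and functional tests owned by the project team
                    sh './mvnw clean verify'
                }
            }
            stage('Security') {
                steps {
                    // Verifications owned by the security team/role
                    sh './run-security-checks.sh'
                }
            }
            stage('Software Governance') {
                steps {
                    // Version checks, cartography notifications, etc.
                    sh './run-governance-checks.sh'
                }
            }
        }
    }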

The goal is that each domain is a common, reusable set of steps, defined in its own (Git) repository. Only the security team/role should be able to modify the security stage. Only the software governance team/role should be able to modify its stage. And only the project team should be able to specify its tests. Nobody should be able to prevent another domain from upgrading its part. If the security team needs to add a new verification, it can commit it anytime. The global build process should include this addition as soon as possible. The composition remains, but the components can evolve independently.

Every stage of the pipeline is controlled from a different Git repository
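
With Jenkins, one way to implement this (a sketch, assuming we rely on shared libraries, with hypothetical names) is to host each domain in its own repository as a shared library. The security repository could expose a step like the one below; since the library is configured in Jenkins to track that repository, any verification the security team commits is picked up by the next build of every pipeline that loads it.

    // Hypothetical vars/securityChecks.groovy in the security team's repository.
    // Shared libraries expose such files to pipelines as a 'securityChecks' step.
    def call() {
        // These scripts are placeholders for whatever the security team verifies.
        sh './scan-dependencies.sh'
        sh './scan-docker-images.sh'
    }

The global Jenkinsfile would then load the library with @Library('security-pipeline-lib') _ and simply call securityChecks() in its Security stage; the Software Governance domain would follow the same pattern with its own repository and steps.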

This article is the first in a series about implementing such a pipeline for Kubernetes. Since the projects I am involved in all consist of providing solutions as Kubernetes packages, I focus on Docker images and Helm packages. The Docker part raises no question. Helm provides a convenient way to deliver ready-to-use packages for Kubernetes. Assuming a team has to provide a solution for clients, this is in my opinion the best option if one wants to support it efficiently.

Assumptions

To keep things simple, I assume we have one deliverable per Git repository. That is to say:

  • Project sources, whose build result is stored somewhere (Maven repo, NPM…).
  • A Git repo per Docker image used in the final application.
  • A Git repo for the Helm package that deploys everything (such a package can depend on other packages and reference several Docker images).

A project that develops its own sources would thus have at least three Git repositories. We could merge some of them and make the pipeline more complex; again, I avoided that to keep things easy to understand.
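
As an illustration of this layout, here is a hypothetical sketch of what the Helm repository's pipeline could run (the chart directory and version are placeholders); each Docker image repository would have a similar pipeline running docker build and docker push.

    // Hypothetical Jenkinsfile for the Helm package repository.
    pipeline {
        agent any
        stages {
            stage('Lint the chart') {
                steps {
                    sh 'helm lint ./my-app-chart'
                }
            }
            stage('Package the chart') {
                steps {
                    // Produces my-app-chart-1.0.0.tgz, to be published to whatever
                    // chart repository the team uses.
                    sh 'helm package ./my-app-chart --version 1.0.0'
                }
            }
        }
    }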

There are also two approaches among CI/CD tools: building on nodes, and building in containers. There are many solutions, including Jenkins, GitLab CI, Travis CI, etc. Given my context, I started with Jenkins, with builds performed on nodes. I might add other options later. The overall idea is very similar for all of them; only the implementation varies.
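
For the record, here is how the node-based approach typically appears in a declarative Jenkinsfile, with the container-based alternative left as a comment; the label, image and build command are hypothetical.

    // Building on a node: the pipeline runs on an agent carrying a given label.
    pipeline {
        agent { label 'docker-build' }

        // Building in containers would instead use something like this
        // (with the Docker Pipeline plugin):
        // agent { docker { image 'maven:3.9-eclipse-temurin-17' } }

        stages {
            stage('Build') {
                steps {
                    sh './mvnw clean verify'
                }
            }
        }
    }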