Use continuous delivery to avoid the big bang

I hope “continuous delivery” hasn’t already crossed into buzzword territory for you; I wouldn’t write about it if it weren’t genuinely helpful. In this article I explain what continuous delivery is, why it’s great for your product, your customers and your developers, and show by example how to set up a flexible pipeline for continuously integrating and delivering your software.

Continuous integration (CI) and continuous delivery (CD)

When your team develops any kind of software, you need to integrate different pieces with each other. The common approach is a version control system that contains your source code. All developers commit their changes, and the result is a project that can be compiled or interpreted. To make sure everything works, you can use Jenkins or a similar tool to compile your software and run its tests. The result is a system that always contains a runnable version (the last revision that passed the build and tests), so developers can add small pieces and get fast feedback whenever something goes wrong. The end of your build pipeline is usually some kind of bundling job. In JavaScript, for example, you join all the code together and minify it to decrease loading time in your customers’ browsers. This may change in the future with dynamic module loaders, but I still expect some kind of optimization step at the end. In a compiled language like Java you would add a packaging step: for example, you can use Maven to create an executable jar, or at least deploy your project to a repository like Nexus.
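To make this concrete, a CI job for such a stack can boil down to a handful of commands. The tools and script names below are illustrative, not prescriptive; adapt them to your project:

```shell
#!/bin/sh
# Illustrative CI build steps for a Java back end and a JavaScript front end.
set -e

# Back end: compile, run the tests and install the jar (Maven)
mvn install -B

# Front end: install dependencies, run the tests, then bundle and minify
npm install     # fetch dependencies
npm test        # assumes a "test" script in package.json
npm run build   # assumes a "build" script that bundles and minifies
```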

CI is a very powerful practice: your build server constantly tells you important things, and it should execute a range of different tests. I have noticed that teams often do not put enough effort into maintaining their build, and the result is a very slow one (more than ten minutes). That can be fixed by parallelizing your compile step and your tests, and you can also scale by adding more horsepower to your server. This is crucial: if you cut costs on your build infrastructure, you should already be leaving the sinking ship. Your build server should run your build and tests faster than a local computer would, because you can simply buy a really fast machine. Scaling by adding people is far more expensive and difficult than scaling by infrastructure. So keep your builds fast and enjoy building software continuously. Things can go in the wrong direction as your software grows, your code rots and your tests slow down. The build should be helpful rather than annoying.
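For a Maven build, one cheap way to parallelize is the following (the “1C” figures mean one thread or fork per CPU core and are only a starting point):

```shell
# Build modules in parallel (one Maven thread per core) and let the
# Surefire test plugin fork one test JVM per core.
mvn install -B -T 1C -DforkCount=1C
```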

Once you have a solid continuous integration system, you can do more than just build your software as a product of your process. Software products typically have release cycles: your IDE has a version, and you are notified when it can be updated. Nowadays, updates to your tools are released far more often than they were ten years ago, because teams realized that releasing frequently takes less quality-assurance effort per release than shipping only a few times per year. For example, if you release twice a year, developers, product owners and everyone else involved are relaxed for perhaps four months of the year; as each release date approaches, things become more stressful for all participants. History tells us it can get even worse. Here's a little snapshot of the project manager's thought process:

Alright, it takes a lot of effort to release our software twice a year. We will probably save a lot of effort and be less stressed if we release our software once a year.

As time goes by people realize that this does not really solve the issue. Things get even more difficult with a longer release cycle. The months before releasing your software become more stressful and your customers won’t be happy if they only get an update once a year.

Continuous delivery means that as soon as your build succeeds, you release your software; there are no release dates at all. It usually only works for software that can be updated in the background without annoying your customers with non-stop update notifications. It works great for websites, for example, because your customers never notice that they have downloaded a new version: there is no update to install, since the browser fetches the latest version as soon as they hit refresh. But conventional installed software is moving in the same direction: most products now ship updates as regularly as possible to keep each change small and avoid a big bang. Keeping every version stable and always being ready to release produces a similar effect.

Imagine that whenever you commit your changes and the build is green, your product gets shipped. As a developer you will check your tests multiple times, and pushing an update straight into production is an exciting moment. Testing and committing changes will require more care, but you will never face a big bang. You won’t plan releases any more because you are constantly releasing; you just work continuously on new features and ship them immediately. A good foundation of unit, functional, integration and end-to-end tests will save you a lot of trouble. A mechanism to roll back a release when there is a critical bug is just as important as good measurement indicators. You will need far less manual QA, and a set of metrics that tells you whether your product works correctly is a great benefit; those metrics can also be compared when testing new features (e.g. in A/B tests).
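A rollback mechanism can be as simple as a symlink switch. Here is a self-contained sketch; the release layout is an assumption made up for the demo: each release lives in its own directory, and a “current” symlink points at the active one, so rolling back just repoints the symlink.

```shell
#!/bin/sh
# Rollback sketch: repoint "current" at the newest release that is not active.
set -e

# --- demo setup: fake two releases in a temp dir (illustrative paths) ---
APP=$(mktemp -d)
mkdir -p "$APP/releases/2016-01-01" "$APP/releases/2016-02-01"
ln -s "$APP/releases/2016-02-01" "$APP/current"   # the newer release is live

# --- rollback ---
active=$(readlink "$APP/current")
previous=$(ls -1d "$APP"/releases/* | grep -Fxv "$active" | sort -r | head -n 1)
ln -sfn "$previous" "$APP/current"

echo "rolled back to $(readlink "$APP/current")"
```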
There are some requirements that should be noted:

  • Keep your pipeline flexible. Software evolves, and your software will also grow and change. You might add some mobile-related build steps or end-to-end tests. Making changes to the pipeline should be as easy as making changes to software. Your pipeline will get more and more complex over time, and this needs to be managed to avoid a messy build pipeline.
  • Use the tools that fit the needs of your software. You can easily use different tools for different parts of your build. Most of them are easy to extend and offer a plugin interface for communicating with other tools.
  • Your pipeline needs to be transparent. If you push an update, you want to know what’s happening right now, and you especially want to know as soon as possible when something goes wrong. A visualization like the Jenkins pipeline view is a great way to watch all your pipelines at work.

Continuous Delivery using Git, Gitlab-CI and Jenkins as an example

Our target is to set up a basic example based upon a Java back end, a JavaScript front end and a NodeJS web server. This technology stack covers a lot of popular scenarios.
Set up the following tools:

  • Git
  • Gitlab-CI
  • Jenkins 2
  • Java
  • Maven
  • NodeJS

First, create a git repository for each project (back end, front end), and create a build job for each within GitLab. Add some sample code and some tests that will run in the GitLab build.
You need to define a Gitlab-CI runner. I suggest using a docker-based runner because it encapsulates your builds nicely. You should then see a green check mark for each of your projects.
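Registering such a runner might look like this; the URL and token are placeholders taken from your GitLab settings, and the exact binary name and flags depend on your runner version:

```shell
# Register a docker-based runner non-interactively. Each build then runs
# in a fresh container started from the given image.
gitlab-runner register \
  --non-interactive \
  --url "https://yourGitlabUrl/" \
  --registration-token "someToken" \
  --executor docker \
  --docker-image maven:3-jdk-8
```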

Now you have a simple continuous integration example. Gitlab-CI will execute whatever you define after changes are made. You also get the full power of git integrated into great tooling: branches, merges, code review, issue tracking, milestones and much more.
My experience is that GitLab works great for the integration part, but the delivery process is a lot more complex, and for that part Jenkins has become great over time. If you want, you can also use Jenkins for the integration part; that depends on your needs. So we need a way to trigger Jenkins from GitLab, and Jenkins jobs offer remote triggers that you can configure for exactly this.

With such a trigger configured, you can call the Jenkins job by adding a deploy stage to your GitLab project:

# .gitlab-ci.yml

image: maven:3-jdk-8

stages:
  - test
  - deploy

test:
  stage: test
  script: "mvn install -B"

deploy:
  stage: deploy
  script: "curl -u username:apiPassword 'https://yourJenkinsUrl:somePort/job/someJob/build?token=someToken'"

Now the Jenkins job is triggered once all tests pass. So let's look at the deployment jobs in Jenkins. These kinds of jobs can look very different depending on your technology and infrastructure. The web server part will be something like:

  1. Clone the web server repository that contains our web app
  2. Bundle the web app
  3. Start the web server

The back end part looks like:

  1. Clone the back end repository
  2. Build the software without executing tests (they already ran in the GitLab build)
  3. Stop the service if it is already running
  4. Copy the jar to some bin directory
  5. Start the new jar

You can, for example, create a shell script or use other tools to implement these steps. Create a Jenkins agent that runs on your external machine and configure the delivery job to use it. Jenkins can authenticate on your server via ssh, clone the repository and execute the script.
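Put together as a shell script, the back end delivery steps might look like the following. The host, directories and service name are all placeholders, and this sketch assumes the target machine uses systemd:

```shell
#!/bin/sh
# Sketch of the back end delivery steps, run by Jenkins over ssh.
set -e

ssh deploy@yourAppServer <<'EOF'
set -e
cd /opt/myapp/src
git pull                              # 1. update the back end repository
mvn package -B -DskipTests            # 2. build without re-running the tests
sudo systemctl stop myapp || true     # 3. stop the service if it is running
cp target/myapp.jar /opt/myapp/bin/   # 4. copy the jar to the bin directory
sudo systemctl start myapp            # 5. start the new jar
EOF
```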

Your back end should be stateless, so restarting the service after an update should not interrupt what a user is doing. I won’t go deeper into detail because it depends on your technology stack. Running multiple service instances on different hosts, reboot behavior, dockerizing your services, load balancing, branch deployment strategies and other related topics are interesting details, though out of this article's scope.

What did we achieve? When you push a change to your repository, your build starts, your tests run, your delivery tool is notified and your software is live in under a minute. Jenkins has some nice plugins that make delivery very flexible. The new pipeline feature in particular makes it possible to have one pipeline with different parameters, combined with nice workflow visualizations.
Have fun improving your build and experimenting with CI and CD.


