
Introduction

Every software development company eventually faces challenges in managing and releasing new software versions. Whatever the company's stage, these challenges affect every type of environment: testing, quality assurance and production. Automating the process is imperative to reduce both costs and errors.
In Sherweb's development team, we launched a full project in January 2015 to improve our production delivery systems.

 

Our project lead highlighted the following issues caused by dysfunctional automation:

  • New configuration missing from the production environment, causing application crashes. Several error-prone configuration files were maintained manually and scattered across multiple places.
  • New components not deployed to all environments. During major deployments involving several implementation activities, we sometimes forgot to deploy one component, which made the whole project fail and later troubleshooting difficult.
  • Discrepancies between the versions tested in lab environments and those released to production.
  • A mix of automatic and manual deployments for the same project, or simply fully manual deployments.
  • Partial deployments, with the new app version deployed but the system still running the old version in memory.
  • Deployed apps not working properly because of environment misconfiguration, forcing teams into long debugging sessions.
  • Merge nightmares between branches due to undetected errors or build errors. We had frequent integration issues that we tried to solve with branching.
  • Failed deployments on the test environment causing automated test errors.
  • Automation failing completely.
  • And more.

When using TFS for deployment, we can't use the same flow for staging and production, for security reasons.

 

What We’ve Initially Agreed To

Our project focused on Continuous Delivery (CD) and Continuous Integration (CI). The main CD goal is to be able to reliably release new versions at any time in less than a day. CI helps us quickly detect any new error on the master branch and fix it in less than 10 minutes, or revert the change when a bug can't be fixed that fast.

Some imperatives:

  • Each code change must trigger a build that tests both the code and the deployment process.
  • The code is built once, and the same binaries move through the pipeline, from acceptance to production.
  • We need an easier and more secure way to manage environment configuration.

We focused on two initial questions to get a clear view of every product's stage and state: Which version is deployed in each environment? In which stage is a specific commit, and which actions are performed in that stage?

Based on these, we built a pipeline for our project, as shown below in our reference view. The book Continuous Delivery by Jez Humble and David Farley helped us set a goal for the project; the next phase was finding the right tools to reach it.

dev_production_delivery1
Illustration: Go screenshot with all commits and stages

Finding the Right Tools

We considered different tools for our CI/CD automation project. Our short list was: Release Management, Jenkins, TeamCity, Bamboo, Go and Octopus Deploy.

Our final choice for CI was Jenkins. It outmatched the other tools thanks to its flexibility, our team's expertise with the product, and its ability to best represent a pipeline. The other tools would not give us a clear view of both stages and steps.

For CD, we chose Octopus Deploy, mostly for its enterprise-grade security and features. Octopus was the best fit to solve all our configuration issues, offering both basic settings and enhanced security for critical configuration.

 

Configuring Jenkins and Automating CI

Jenkins offers two major plugins for pipelines: the Build Pipeline Plugin and the Delivery Pipeline Plugin. We preferred the Delivery Pipeline Plugin for its look and functionality.

dev_production_delivery6
Illustration: Build Pipeline Plugin sample

dev_production_delivery7
Illustration: Delivery Pipeline Plugin sample

Many tools, Jenkins included, share a major issue: configuring multiple projects and tasks is not easy. For instance, if you set up several nearly identical jobs with small differences and then need to change one action, you have to modify each job separately.

With over a dozen projects and several pipeline steps expected, we needed a good configuration system for Jenkins jobs. One solution is to create a template job, then reuse that template with different parameters for each specific job. The other option is the Job DSL Plugin.

With this plugin, we first created a bundle of pipeline generator scripts. After that, we only need to fill in project configuration settings in a JSON file. A Groovy script parses them and automatically creates and updates more than 500 Jenkins jobs. This solution makes adding new products, changing the pipeline configuration or adding new steps extremely easy. We've created several Groovy scripts and many PowerShell script modules to maximize code reuse.
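Our actual generators are Groovy Job DSL scripts, but the core idea can be sketched in a few lines of Python: a compact JSON config is expanded into one job definition per project, branch and pipeline step. The config keys and job-name pattern below are illustrative, not our real schema:

```python
import json

# Hypothetical project configuration, normally read from a JSON file.
CONFIG = json.loads("""
{
  "projects": [
    {"name": "Billing", "branches": ["master", "release"]},
    {"name": "Portal",  "branches": ["master"]}
  ],
  "steps": ["build", "unit-tests", "deploy"]
}
""")

def generate_jobs(config):
    """Expand the compact JSON config into one job definition per
    project x branch x pipeline step."""
    jobs = []
    for project in config["projects"]:
        for branch in project["branches"]:
            for step in config["steps"]:
                jobs.append({
                    "name": f"{project['name']}-{branch}-{step}",
                    "project": project["name"],
                    "branch": branch,
                    "step": step,
                })
    return jobs

jobs = generate_jobs(CONFIG)
print(len(jobs))        # 3 project/branch pairs x 3 steps = 9 jobs
print(jobs[0]["name"])  # Billing-master-build
```

Changing one step in the config regenerates every affected job, which is exactly what makes a single pipeline change propagate to hundreds of Jenkins jobs.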

 

Configuring Octopus Deploy Project and Automating Your Deployment

Octopus is not immune to the problem of maintaining configuration for multiple deploy jobs. We also had to create a bundle of generator scripts for deployments. These scripts use JSON configuration files to automatically create and update Octopus Deploy steps.
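To give an idea of the approach, the shape of such a per-project configuration file might look like the fragment below. Every key and value here is illustrative, not our real schema:

```json
{
  "project": "Billing",
  "octopusProject": "Billing-Web",
  "environments": ["Acceptance", "Staging", "Production"],
  "steps": [
    { "type": "deploy-package", "package": "Billing.Web" },
    { "type": "run-script", "script": "warmup.ps1" }
  ]
}
```

The generator scripts read files like this one and create or update the matching Octopus Deploy steps, so the JSON file stays the single source of truth for a project's deployment.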

This is how we were able to easily add pipelines and automate the deployment of projects with minimal effort and knowledge of Jenkins and Octopus configuration.

dev_production_delivery5

Illustration: Octopus Auto-Generated Deploy step sample

The Result

Now each Visual Studio solution has its pipeline with essentially these steps:

  • Build step: builds the code and detects code rule violations (TFS or Git sources supported)
  • Octopus Project step: creates or updates Octopus Deploy jobs
  • Submit step: jobs that push all deploy packages to the NuGet server and create new Octopus Deploy releases
  • Validation step: runs multiple validations, like checking configuration on all environments or checking translation update status in Transifex
  • Sonar step: code quality control with SonarQube
  • Unit tests: runs unit tests or integration tests
  • Deploy: performs deployments with Octopus
  • Deployment tests: checks the application's self-diagnostics after deployment
  • Change log: automatically analyzes the source repository to list changes
  • Approbation step: for multiple-team approval
  • Check deploy step: validates that the production deploy was made with the approved version and without errors
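As an example of what a "Deployment tests" step can do, a post-deploy check can compare the version reported by the application's self-diagnostics with the version that was just released, and flag unhealthy components. This is only a sketch: the payload shape and field names below are hypothetical, not the actual Sherweb diagnostics format:

```python
import json

def check_deployment(diagnostics_json, expected_version):
    """Parse the app's self-diagnostics payload (hypothetical shape)
    and verify the deployed version and component health."""
    diag = json.loads(diagnostics_json)
    problems = []
    if diag.get("version") != expected_version:
        problems.append(
            f"version mismatch: got {diag.get('version')}, "
            f"expected {expected_version}")
    for component, status in diag.get("components", {}).items():
        if status != "ok":
            problems.append(f"{component}: {status}")
    return problems  # an empty list means the deployment looks healthy

payload = '{"version": "2.4.1", "components": {"db": "ok", "cache": "ok"}}'
print(check_deployment(payload, "2.4.1"))  # []
print(check_deployment(payload, "2.4.2"))  # version mismatch reported
```

In practice the payload would come from an HTTP call to the deployed application, and a non-empty result would fail the pipeline stage.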

dev_production_delivery3

Illustration: Summary pipeline view showing which version is actually in each stage

 

dev_production_delivery4

Illustration: Two-commit pipeline view showing each commit's progression

 

dev_production_delivery2_crop

Illustration: Excerpt of the dashboard showing each product's state

Conclusion

Our new system solves many of our initial problems. We can now reliably deploy new features or small hotfixes to production in less than an hour. We spend much less time investigating configuration bugs or errors across environments. It's also easier to identify which change caused a bug.

Of course, there are still improvements to be made, especially reducing our deployment time to a few minutes, making configuration easier, improving error identification, and building faster pipelines for hotfixes.

One thing is sure: developers at Sherweb have gained much more time to play pool and table tennis and, obviously, to program.

 

Written by Pascal Martin, Employee @ Sherweb

Pascal is a Software Developer at SherWeb with more than a decade of experience in web application development. Pascal has a degree in Computer Science from the Université de Sherbrooke, with a concentration in Intelligent Systems. He’s participated in multiple projects to implement continuous deployment and continuous integration.