How to ensure quality in continuous delivery
Our team at Klipfolio is often asked: “How do you release quality software to production every day?”
I have touched on some of our quality strategies in my continuous delivery blog series. In this post, I'll go a little deeper and focus on how we do it.
It comes down to following a series of strategies and protocols. Here is what we do:
1. Produce small, shippable increments and improvements
It’s easier to have good quality control over small items. That’s why we break the work down into small increments.
This is a different way of thinking about software delivery. Gone are the days of launching huge releases. We ship very small chunks of code every day.
The code may or may not get exposed to users, but the smaller the changes, the lower the risk.
We may have a launch day for a large feature, but most parts of the feature are shipped in small increments before the release date. Deciding when to expose the feature to customers using feature switches (see item 12) is a business decision rather than one based on deployment success.
2. Define the development process steps
It’s easier to keep track of things (and make corrections) when you have defined a development process.
We use an issue tracking system in our development process. Every single code change is associated with a work item and goes through a very well-defined set of steps.
These steps—including peer reviews, UX reviews, and sometimes customer success reviews—are designed to make sure the code changes meet a high-quality bar before they are deployed to production.
3. Use work item templates
Defined templates allow teams to follow a consistent standard. For instance, each user story and code change contains at least the following items:
- User story: Defines the core and high-level requirement from the user’s point of view
- Acceptance criteria: Defines the details of the requirement, including the edge cases
- Quality checklist: Verifies that all the steps in the development process (such as automated tests) were executed, and serves as a reminder for team members.
4. Use branches and shippable head-of-code stream
If used properly, code branches can help reduce risk and improve quality. You have to protect that master branch (the trunk) at all costs.
We’ve been using GitHub Flow as our branching strategy. At its core, it relies on a branch for every single change. Not everyone in the community believes that you can practice continuous integration (CI) and deployment while using GitHub Flow, but I think we’ve shown otherwise.
The trick is to not have long-lasting branches and instead have small shippable increments, as I noted in item number 1.
The other point to keep in mind is that your head-of-code stream must always be shippable. This means your CI build on the head-of-stream must always be green. Keeping it healthy should be the team’s top priority.
This is important because if the head-of-stream stays in a red state (broken) for too long, it blocks releases and any hotfixes (quick fixes for regression issues introduced by a release) that need to be deployed immediately.
5. Incorporate continuous integration into your development process
CI is at the core of any deployment pipeline and a must-have for a team that produces high-quality software.
I can’t emphasize enough the importance of including CI in your practices:
We run CI builds for every single branch as well as for the master branch (the head-of-code stream, also referred to as trunk). Every time developers create a new branch, our CI system automatically detects the branch and starts running CI on it, including merging the master into their branch to identify any integration issues.
Additionally, every time a branch is merged into the master branch, we run a CI build on the master branch. Keeping that CI build green is everyone’s priority.
Our CI builds include a few steps, including various automated tests and static analysis, which I will cover in the next couple of items.
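The article doesn't publish Klipfolio's actual build configuration, but the fail-fast shape of such a pipeline can be sketched as follows. This is a minimal illustration (the step names and callables are hypothetical stand-ins for real compile, test, and analysis commands), not a real CI system:

```python
# A minimal sketch (not an actual build config) of a fail-fast CI
# pipeline: each step runs in order, and the build goes "red" as
# soon as one step fails.

def run_pipeline(steps):
    """Run named steps in order; return (status, results).

    `steps` is a list of (name, callable) pairs where the callable
    returns True on success. A failing step stops the build.
    """
    results = []
    for name, step in steps:
        ok = step()
        results.append((name, ok))
        if not ok:
            return "red", results   # a broken build blocks the release
    return "green", results

# Hypothetical steps standing in for real merge/test/lint commands.
steps = [
    ("merge master into branch", lambda: True),
    ("unit tests", lambda: True),
    ("static analysis", lambda: False),   # simulate a lint failure
    ("integration tests", lambda: True),  # never reached
]

status, results = run_pipeline(steps)
print(status)         # red
print(len(results))   # 3 — the build stopped at static analysis
```

The fail-fast behavior is the point: the moment any step breaks, the head-of-stream is flagged red and fixing it becomes the team's priority.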
6. Use test automation
Test automation is another essential part of a quality-driven software development process, yet it is something too many people avoid.
I always cringe when I hear people say:
"We need to move faster. There's not enough time to write automated tests."
Or when people ask if they can write the tests after they ship.
Those two mindsets, unfortunately still common in the industry, usually indicate a lack of understanding of why automated tests are written.
The reasons are simple. Write automated tests because:
- You want to save time and deliver high-quality software;
- You want to save money by avoiding running the same repetitive tests manually;
- It’s much more expensive to fix the bugs after the feature is shipped than it is earlier in the development cycle; and,
- Introducing regression issues can be very costly for your brand.
In other words, automated tests are valuable because they save time and money.
Automated tests are often categorized into the following three groups:
- Unit tests (written by developers);
- Integration tests;
- Functional tests.
Unit tests are cheaper to write and maintain, so try to have more of those and fewer integration and functional tests.
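To make the "cheap and fast" point concrete, here is what a typical unit test looks like in Python's built-in unittest framework. The function under test, `normalize_email`, is a hypothetical example, not code from any real product:

```python
import unittest

# Hypothetical function under test — just an illustration of the
# kind of small, isolated behavior a unit test pins down cheaply.
def normalize_email(email):
    """Lowercase and strip whitespace so lookups are consistent."""
    return email.strip().lower()

class NormalizeEmailTest(unittest.TestCase):
    def test_strips_whitespace(self):
        self.assertEqual(normalize_email("  user@example.com "),
                         "user@example.com")

    def test_lowercases(self):
        self.assertEqual(normalize_email("User@Example.COM"),
                         "user@example.com")

# Run the suite programmatically so the sketch works outside a runner.
unittest.main(argv=["normalize_email_test"], exit=False)
```

Tests like these run in milliseconds, which is why a healthy suite has many of them and comparatively few of the slower integration and functional tests.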
7. Use static analysis/automated code reviews
Automated tools that do static analysis on your code can improve the quality of your code base and educate the team on the best practices.
There are many tools out there, like FindBugs and Lint, that can be used to find issues in code.
Having these tools can help developers learn best practices, avoid bugs and security issues, and make the code more consistent.
If you are planning to use one of these, I recommend making sure the developers either have tools right in their IDE (Integrated Development Environment) to get in-context feedback or can run them locally.
In addition, the same analysis should also be run on every code commit to a repository. You don’t want your developers to be frustrated because there is no way for them to get the same results in their local environment that they see on the CI system.
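As an illustration of the kind of issue these tools surface, here is a classic Python example: a mutable default argument, which linters such as pylint flag as W0102 ("dangerous-default-value"). It is an analogue of the bug classes FindBugs and Lint catch in other languages:

```python
# The kind of subtle bug static analysis catches early: a mutable
# default argument is created once and shared across calls.

def add_tag_buggy(tag, tags=[]):      # linter warning: W0102
    tags.append(tag)
    return tags

first = add_tag_buggy("a")
second = add_tag_buggy("b")   # reuses the same list!
print(second)                 # ['a', 'b'] — surprising shared state

# The fix the linter nudges you toward:
def add_tag(tag, tags=None):
    if tags is None:
        tags = []
    tags.append(tag)
    return tags

print(add_tag("a"))           # ['a']
print(add_tag("b"))           # ['b'] — independent lists
```

A developer who sees this warning in their IDE learns the underlying language pitfall once and avoids it everywhere, which is the educational payoff mentioned above.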
8. Make code reviews mandatory
If you are going to choose only one item from this list, choose code reviews. They are one of the most effective ways to improve the quality of your code and the overall software development craftsmanship of your team.
Code reviews, or peer reviews, have several benefits. These include improving the design, finding bugs, mentoring, and knowledge-sharing.
You can pair the developers even before any code is written and start the discussions in the design phase.
Use a tool that makes code reviews easy. In-line commenting and discussion is a critical feature. We use GitHub and GitHub’s Pull Requests for code reviews, and it’s been working for us.
From a cultural point of view, developers must see code reviews as important as writing code and delivering features.
Unfortunately, too many teams see code reviews as chores. If great reviews are not rewarded as much as features are, code reviews won’t have the desired outcome.
Here’s a good read at IBM developerWorks on practices for more effective code reviews.
9. Include teams from User Experience (UX), Customer Success (CS), and Product Managers or Owners (PM/PO) in the review process
In parallel to the development team, other teams like UX, CS, and PM should also monitor the status of the features being worked on.
Depending on the feature, we involve those teams and get their feedback as part of the verification process.
The UX team often helps us make the user experience better before the feature goes out. Even if the development team has implemented the feature to the exact specifications of the UX team, we still often find opportunities to make things better when we use the software as a customer would.
On a related note, the customer success team always provides critical input because they know exactly how customers use the product. As a result, they do a great job of reminding us how customers don’t always use the software as we expect.
10. Use canary releases
We deploy the releases internally for about two hours before releasing the changes to the public. This helps us test them and uncover any issues.
When the changes are ready and merged into the master branch, they are first deployed to a set of servers that are only used internally by people in the office as they do their daily activities.
These activities include building dashboards for ourselves or our customers, investigating customer issues, or demoing the product. During this two-hour period, if anyone notices any issues they report them through our internal messaging tool.
While we don’t like issues to be found during a canary release (it’s rather late in the game), the process has saved us from shipping an embarrassing bug to the entire customer base many times, and it’s totally worth it.
I highly recommend this step, especially if your automated tests do not have a high coverage. It’s amazing how effective this short period and implicit testing can be.
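The routing logic behind this kind of internal-first canary can be sketched in a few lines. This is an illustrative model, not Klipfolio's actual infrastructure; the company domain, pool names, and the two-hour soak window are stand-ins:

```python
# A minimal sketch of canary routing: internal traffic is sent to the
# newly deployed servers during a soak window; everyone else stays on
# the stable pool. All names and durations here are illustrative.

from datetime import datetime, timedelta

CANARY_SOAK = timedelta(hours=2)

def pick_pool(user_email, deployed_at, now):
    """Route internal users to the canary pool during the soak window."""
    in_soak = now - deployed_at < CANARY_SOAK
    internal = user_email.endswith("@example.com")  # stand-in company domain
    if in_soak and internal:
        return "canary"
    return "stable"

deployed = datetime(2017, 1, 1, 9, 0)
print(pick_pool("dev@example.com", deployed, datetime(2017, 1, 1, 10, 0)))      # canary
print(pick_pool("customer@other.com", deployed, datetime(2017, 1, 1, 10, 0)))   # stable
print(pick_pool("dev@example.com", deployed, datetime(2017, 1, 1, 12, 0)))      # stable
```

Once the soak window passes without reported issues, the release is promoted and all traffic sees the new version.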
11. Use a crowdsourced quality assurance (QA) platform
Using a crowdsourced QA platform is another way to run a large number of regression tests in a short amount of time.
These platforms provide automated means to distribute suites of tests to be run manually by testers. They also provide means of aggregating results and sending them back as notifications either to your favorite CI tools (e.g., Jenkins) or messaging app (e.g., Slack).
There are many players in this market. We’ve been experimenting with Rainforest to get more eyes on the new releases before they are deployed to the public.
Using such platforms helps you to:
- Complement your automated tests with manual tests that are better run by humans (e.g., when you need visual validation);
- Run the manual tests quickly since they are distributed among a pool of QA resources and are run in parallel;
- Avoid using your own QA team for repetitive regression testing and focus them instead on exploratory and in-depth testing of the releases and features.
12. Use feature switches often
Feature switches—switches that turn features on and off for a targeted set of customers—are another tool that helps increase the quality of features and reduce risk before exposing them to the entire customer base.
This is a good way to deploy the feature all the way to production in order to observe the impact without exposing it to the customers.
You can use this approach both for risky features and for features that have been in development for months but are not ready to be exposed to customers yet.
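At its simplest, a feature switch is a lookup that call sites branch on, so code can ship to production "dark" while a targeting rule controls who sees it. Here is a minimal sketch; the flag names and account list are made up for illustration, and real systems typically load this configuration from a database or a flag service rather than a hard-coded dict:

```python
# A minimal feature-switch sketch: code ships to production dark, and
# a targeting rule decides which customers see it. Flag names and the
# allowed-accounts lists are illustrative.

FLAGS = {
    "new-dashboard": {"enabled": True, "allowed_accounts": {"acme", "globex"}},
    "bulk-export":   {"enabled": False, "allowed_accounts": set()},
}

def is_enabled(flag_name, account):
    """Return True if the feature is switched on for this account."""
    flag = FLAGS.get(flag_name)
    if flag is None or not flag["enabled"]:
        return False
    return account in flag["allowed_accounts"]

# Call sites branch on the switch instead of on deploy timing:
if is_enabled("new-dashboard", "acme"):
    print("render new dashboard")   # only targeted accounts see this path
else:
    print("render old dashboard")
```

Because the decision lives in configuration rather than in the deployment, exposing the feature to everyone later is a flag flip, not another release.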
13. Accept the risk
No process is risk-free, and mistakes are inevitable. In fact, I will go so far as to say this:
If you aren't making mistakes, you aren't moving fast enough.
Although we adopt all of the above steps, there is still risk in deploying to production every day. We take the risk and embrace it. Every time a problem arises, we step back and look at how we can improve things.
14. Perform post-release monitoring
Post-release monitoring, along with the final item we'll cover, continues the thread of accepting risk.
While we expose ourselves to risk, we are also ready to deal with the consequences. We use various tools to monitor production:
- ELK stack for monitoring logs;
- New Relic for monitoring performance and client-side errors;
- Our internal fleet of monitoring and self-healing bots.
15. Have hotfix and rollback processes in place
Make sure you have a process in place to easily fix mistakes or roll back to previous versions.
The process should be automated and reliable so that it works flawlessly—even when you are in crisis mode because your production is affected by a bad release.
It’s also a good idea to make sure that more than one person, ideally many people on the team, know how to use it.
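One common way to make rollback a one-step, scriptable operation is a symlink-based release layout: a `current` link points at a versioned release directory, and rolling back simply repoints it. This is a generic sketch of that pattern, not the article's specific tooling; the paths are illustrative:

```python
# A sketch of a symlink-style release layout: "current" points at a
# versioned release directory, and rollback just repoints the link.
# Paths are illustrative.

import os
import tempfile

def deploy(root, version):
    """Point the 'current' link at a versioned release directory."""
    release = os.path.join(root, "releases", version)
    os.makedirs(release, exist_ok=True)
    link = os.path.join(root, "current")
    tmp = link + ".tmp"
    os.symlink(release, tmp)
    os.replace(tmp, link)      # atomic swap on POSIX systems
    return os.readlink(link)

root = tempfile.mkdtemp()
deploy(root, "v42")            # the bad release goes out...
deploy(root, "v41")            # ...and rollback is just another deploy
print(os.readlink(os.path.join(root, "current")).endswith("v41"))   # True
```

Because a rollback is mechanically identical to a deploy, anyone on the team who can deploy can also roll back, which addresses the "more than one person knows how" point above.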
Final words on ensuring quality in continuous delivery
Releasing each day does not mean rushing the code changes out to production. Even though we watch cycle time every week to find opportunities to improve our process and velocity, we consider quality one of our highest priorities.
Work hard to use the right tooling and processes as described above to deliver high-quality software.
Ali Pourshahid, Ph.D., is the Director of Software Development at Klipfolio. He can be reached on Twitter @ali_pourshahid.