I'm the living repository provided as companion material to the workshop “Confident Software Delivery”, so far facilitated at:
- Agile Testing Days 2018 @ Berlin (November 13th, 2018).
Crafted with ❤️ by Daniel Carral (@dcarral) & Sergio Álvarez (@codecoolture).
Note: workshop participants are guided to complete several tasks on top of a web application specifically designed for this activity. Although this material isn't publicly available yet, we'd be happy to provide access upon request.
- Confident Software Delivery
Who doesn’t want to deliver value to their customers as soon as possible? In a world where time-to-market dictates the success or failure of many companies, confidently and quickly releasing software isn’t a competitive advantage anymore; it’s a must-have.
In this workshop we apply modern tools and techniques that allow developers and testers to run automated test suites, set up new testing environments with ease, and deliver new features and changes automatically and with confidence.
On top of that, the facilitators strive to create a psychologically safe space where participants, besides getting to know each other, are able to engage in healthy discussions about software delivery and decide what to put into practice back in their teams.
This activity has been designed to leverage the power of Liberating Structures (LS), which are a collection of “simple rules that make it easy to include and unleash everyone in shaping the future” meant to improve the way we work together.
Here you can find more information about the specific structures we are using in this workshop:
Would you like to implement them at your workplace? Have you already used them and would like to share how it went? We'd be happy to hear from you :)
Since the 2018 release of the book Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations, we can assert with confidence that high IT performance correlates with strong business performance, helping to boost productivity, profitability, and market share.
When we talk about software delivery, we're using Martin Fowler's definition: “the journey from commit to production”.
Improvements in software delivery are possible for every team and in every company, as long as leadership provides consistent support (including time, actions, and resources) that demonstrates a true commitment to improvement, and as long as team members commit themselves to the work.
We expect you to leave this workshop with a solid understanding of key principles behind software delivery performance, hands-on experience creating and optimizing a deployment pipeline, as well as a couple of prioritized action items to put into practice back in your team.
There are four key metrics we can use to effectively measure software delivery performance:
- Delivery lead time
- Deployment frequency
- Mean Time to Restore (MTTR)
- Change fail rate
Below you can find a brief explanation of each of them, all being excerpts from Accelerate:
[...] The elevation of lead time as a metric is a key element of Lean Theory. We measured product delivery lead time as the time it takes to go from code committed to code successfully running in production.
[...] The second metric to consider is batch size. Reducing batch size is another central element of the Lean paradigm. In software, batch size is hard to measure and communicate across contexts as there is no visible inventory. Therefore, we settled on deployment frequency as a proxy for batch size since it is easy to measure and typically has low variability.
[...] Delivery lead times and deployment frequency are both measures of software delivery performance tempo. However, we wanted to investigate whether teams who improved their performance were doing so at the expense of the stability of the systems they were working on.
[...] Traditionally, reliability is measured as time between failures. However, in modern software products and services, which are rapidly changing complex systems, failure is inevitable, so the key question becomes: how quickly can service be restored? (and therefore our two other measures: Mean Time to Restore (MTTR) and Change Fail Rate)
- Continuous Integration (CI): “Development practice that requires developers to integrate code into a shared repository several times a day. Each check-in is then verified by an automated build, allowing teams to detect problems early” (from https://www.thoughtworks.com/continuous-integration).
- Continuous Delivery (CD): “Software development discipline where you build software in such a way that it can be released to production at any time” (from https://martinfowler.com/bliki/ContinuousDelivery.html).
- On the other hand, Continuous Deployment:
[...] Sometimes confused with Continuous Delivery, it means that every change goes through the pipeline and automatically gets put into production, resulting in many production deployments every day.
[...] Continuous Delivery just means that you are able to do frequent deployments but may choose not to do it, usually due to businesses preferring a slower rate of deployment.
from https://martinfowler.com/bliki/ContinuousDelivery.html.
Please note that, beyond the aforementioned differences, all of these practices need one thing in order to exist: automation.
Martin Fowler took some time in May 2013 to document the concept of a deployment pipeline as a way to deal with one of the challenges of an automated build and test environment: you want your build to be fast, so that you can get fast feedback, but comprehensive tests take a long time to run.
A deployment pipeline allows this by breaking up the build into stages. Each stage provides increasing confidence, usually at the cost of extra time. Early stages can find most problems, yielding faster feedback, while later stages provide slower and more thorough probing.
Please note that the article hasn't been updated in a while, so it no longer reflects the current state of the art. As frequently highlighted in Accelerate, top performers are the ones who improve the most, and the fastest.
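To make that idea concrete, here is a rough sketch of a build broken into stages that trade speed for confidence, written in the GitLab CI syntax we introduce next (stage names, job names, and commands are ours, purely illustrative):

```yaml
# Illustrative only: stage/job names and commands are assumptions,
# not taken from Fowler's article or the workshop application.
stages:
  - commit      # fast feedback: linting and unit tests run in minutes
  - acceptance  # slower, more thorough probing against a deployed build

lint:
  stage: commit
  script: npm run lint

unit_tests:
  stage: commit
  script: npm test

end_to_end_tests:
  stage: acceptance
  script: npm run test:e2e  # only runs if every job in the commit stage succeeded
```

The commit stage catches most problems within minutes; the acceptance stage takes longer but exercises the application more thoroughly.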
This hands-on workshop leverages GitLab CI/CD, GitLab's built-in support for Continuous Integration, Continuous Delivery, and Continuous Deployment to build, test, and deploy applications.
Simply put, a deployment pipeline consists of groups of jobs that get executed in stages (batches):
- Jobs run independently from each other and are executed within the environment of a so-called runner.
- Stages group jobs so they can be executed in parallel. If all the jobs in a stage succeed, the pipeline moves on to the next stage.
All you need is a .gitlab-ci.yml file placed at the root directory of your repository to specify how the project should be built.
What do we need to include there? Time to dive into some of the resources below.
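Before diving into them, here is a minimal, hypothetical .gitlab-ci.yml touching on the concepts covered by the articles below (the Docker image, commands, script path, and URL are assumptions for a generic Node.js project, not the workshop application):

```yaml
# Hypothetical configuration: adapt the image, scripts, and URLs
# to your own application.
image: node:10

variables:
  NODE_ENV: test

# Reuse installed dependencies across jobs and pipelines.
cache:
  paths:
    - node_modules/

stages:
  - build
  - test
  - deploy

install_dependencies:
  stage: build
  script:
    - npm ci

run_tests:
  stage: test
  script:
    - npm test

deploy_staging:
  stage: deploy
  environment:
    name: staging
    url: https://staging.example.com  # assumed URL, replace with your own
  script:
    - ./scripts/deploy.sh staging     # hypothetical deploy script
  only:
    - master
```

Restricting the deploy job to the master branch and declaring an environment keeps deployments traceable in GitLab's environments view.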
Here we have cherry-picked some articles from the GitLab CI/CD docs:
- Configuration of your jobs with .gitlab-ci.yml
- GitLab CI/CD variables
- Introduction to pipelines and jobs
- Cache dependencies in GitLab CI/CD
- Introduction to environments and deployments
Additionally, it might be worth reading: