Measuring Agile Transformations
Organisations run their agile transformations with teams staffed with trainers and coaches. They often track the progress of their work through the numbers of teams trained, coached and up to speed (sprinting). This is not without value, but at best, these are leading metrics for something much more important – the goal of the transformation.
The “why” of agile in a particular context might be about time to market, generating more innovation, responding to market forces more quickly, being more resilient in a crisis or delighting customers more thoroughly, or a combination of these things.
So make sure you also measure the “needle” moving on these things.
Both leading and lagging metrics are helpful. The leading metrics are not the goal but can help us to move the needle for things that count, such as delighting customers. The lagging metrics measure the goal more directly. Leading metrics are faster to change; lagging metrics are typically slower to change.
So, as well as measuring the number of teams using Scrum, make sure you measure the success of these teams, particularly the success of the products they develop.
Measuring Agile Teams
Agile teams often track metrics in five categories:
- Customer satisfaction.
- Team satisfaction.
- Product quality.
- Return on investment.
- Progress towards a goal.
The ultimate goal of an agile team should be to create and sustain satisfied, or even better, delighted customers.
A commonly used metric for customer satisfaction is Net Promoter Score (NPS).
NPS is based on the question “How likely is it that you would recommend [brand] to a friend or colleague?” using a scale from 0 to 10, where 0 represents not at all likely and 10 represents extremely likely.
NPS divides the scores as follows:
- Promoters (score 9-10) are loyal enthusiasts who will keep buying and refer to others, fueling growth.
- Passives (score 7-8) are satisfied but unenthusiastic customers who are vulnerable to competitive offerings.
- Detractors (score 0-6) are unhappy customers who can damage your brand and impede growth through negative word-of-mouth.
NPS is calculated as follows:
NPS = % Promoters – % Detractors
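The calculation above can be sketched in a few lines. This is an illustrative helper (the function name and sample scores are my own, not from the article), assuming raw survey responses on the 0–10 scale:

```python
def nps(scores):
    """Compute Net Promoter Score from a list of 0-10 survey responses."""
    promoters = sum(1 for s in scores if s >= 9)   # scores 9-10
    detractors = sum(1 for s in scores if s <= 6)  # scores 0-6
    # Passives (7-8) count toward the total but not the numerator
    return 100 * (promoters - detractors) / len(scores)

# 4 promoters, 3 passives, 3 detractors out of 10 responses
print(nps([10, 9, 9, 10, 8, 7, 7, 5, 3, 6]))  # → 10.0
```

Note that NPS can range from -100 (all detractors) to +100 (all promoters).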
This is just one example of a metric for customer satisfaction; other metrics, such as the number of active customers, are also very common. The main thing is to think about how you measure product success. Customer satisfaction is likely to be a large part of this, so adopt a metric that is easy and quick to gather and interpret. Avoid long delays between shipping a new version of your product and getting customer satisfaction data; such delays are very common in large, bureaucratic organisations and must be tackled if that is the case for you.
An unhappy team will not be able to deliver great products sustainably. A common metric is team happiness, captured regularly by asking each team member to rate their satisfaction as a member of the team on a scale of 1 to 5, where 1 means very dissatisfied and 5 means very satisfied. Note that this is effectively a measure of morale, not a generic measure of personal happiness.
Capturing team happiness can be easily built into sprint retrospectives. It should primarily be something for the team to respond to – if happiness is low, a skilled Scrum Master will facilitate helping the team to discover why and what to do about it.
If you do publish happiness, for example, to management, make sure the data is anonymous. Otherwise, the team members will be less able to reveal their concerns during the retrospective.
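A minimal sketch of how such a retrospective check-in might be summarised, assuming anonymous 1–5 ratings are collected (the function name and the low-morale threshold of 3.0 are illustrative choices, not from the article):

```python
from statistics import mean

def happiness_summary(ratings):
    """Summarise anonymous 1-5 team happiness ratings for a retrospective.

    Returns the team average and a flag the team can use as a prompt
    to dig into causes during the retrospective."""
    avg = mean(ratings)
    return {"average": round(avg, 1), "low_morale": avg < 3.0}

print(happiness_summary([4, 3, 5, 2, 4]))  # → {'average': 3.6, 'low_morale': False}
```

Publishing only the aggregate, never the individual ratings, keeps the data anonymous.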
Tracking product quality can help an agile team to get an early warning signal of problems before they result in a decrease in customer satisfaction.
A commonly used and important metric is the number of defects escaping into production, which ideally should be zero! Tracking defects discovered during development is less valuable; discovering bugs is a normal part of development and shouldn't be discouraged by turning it into a KPI!
Calls to a support or help desk or the number of support tickets raised are other commonly used metrics.
Return on Investment (ROI)
Many agile teams work, sprint by sprint, towards a product goal. A good product goal usually has a business case associated with it – the expected benefit might be additional revenue, cost savings, avoiding additional costs, preventing the loss of existing revenue or a combination of one or more of these things (see cost of delay).
Product owners should actively manage the return on investment of their products. There are various ways of measuring ROI, such as internal rate of return (IRR) and payback period. Try to align your agile teams and products with whatever ROI metrics are in use in your organisation. Whichever approach you use, you will need to measure value (which can only be measured after shipping) and costs (which you can easily measure at the end of each sprint).
Make sure you avoid asking the team to record engineering time, by filling in timesheets or any other method. This is wasteful and results in lower quality by establishing something allied to a “build to budget” or “build to what we estimated” culture. Instead, simply calculate the costs of a sprint, something that should be easy to do by combining day rates and days people spent working on the team during the sprint.
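The sprint-cost calculation described above can be sketched as follows. The day rates and names are made-up illustrative values, not from the article:

```python
def sprint_cost(day_rates, days_worked):
    """Cost of a sprint: each person's day rate times the days they
    spent working on the team during the sprint."""
    return sum(day_rates[person] * days for person, days in days_worked.items())

rates = {"dev_a": 600, "dev_b": 600, "designer": 550}   # currency per day
days = {"dev_a": 10, "dev_b": 10, "designer": 5}        # days in the sprint
print(sprint_cost(rates, days))  # → 14750
```

Because this uses only rates and allocation, no timesheets are needed from the team.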
Progress Towards a Goal
For teams which work towards a product goal, it can be important to report on the progress towards that goal, on a sprint by sprint basis. This enables the Product Owner to answer questions such as “how much is left to do?”, “which features can we ship within the agreed budget?” or “how many sprints do we need to complete this set of features?”.
Measuring velocity can be useful here but there are also some problems with velocity which I will describe shortly.
The velocity of a sprint is the sum of the story points of the product backlog items completed during the sprint. So, it’s a measured value.
By getting the developers to estimate the product backlog items in story points and measuring velocity on a sprint by sprint basis, the product owner can answer these questions. Creating such estimates can be part of product backlog refinement.
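A simple forecast of the kind the product owner needs can be sketched like this, assuming story-point estimates and measured velocities from recent sprints (the function name and numbers are illustrative):

```python
import math
from statistics import mean

def sprints_remaining(remaining_points, recent_velocities):
    """Forecast how many sprints are left, using the average velocity
    of recent sprints as the expected pace."""
    avg_velocity = mean(recent_velocities)
    return math.ceil(remaining_points / avg_velocity)

# 120 story points left; last three sprints delivered 28, 32 and 30 points
print(sprints_remaining(120, [28, 32, 30]))  # → 4
```

Using an average over several sprints smooths out sprint-to-sprint noise; rounding up is the conservative choice for planning.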
The problem with velocity is that it is often misused to compare teams, with higher-velocity teams judged to be better than lower-velocity ones. This results in conscious or unconscious inflation of teams' story point estimates, which destroys any utility velocity would otherwise have – self-defeating in the extreme!
If all concerned avoid this misuse and use techniques such as affinity estimation or bucket estimation, velocity can be a very powerful tool for agile teams, particularly for product owners.
Beware of Perverse Incentives
A perverse incentive has an unintended and undesirable result that is contrary to the intention. When designing a set of metrics for agile teams, make sure you avoid the pitfall of perverse incentives.
One commonly seen example is measuring lines of code in an attempt to quantify developer productivity – what you will most likely get is more lines of bad-quality code!
In this article we’ve discussed some of the most common metrics used by agile teams and for measuring agile transformations. We’ve also highlighted some common pitfalls.
This table summarises a starting point for a set of metrics for agile teams. Do be careful with velocity, however!
| Metric | Frequency | Leading or lagging (with respect to “product success”) |
|---|---|---|
| Net Promoter Score | Track continuously; pay particular attention to changes when you ship a new version of your product | Lagging. Likely to be well aligned with product success |
| Team happiness | Once a sprint, as part of the sprint retrospective | Leading, in the sense that it sets you up for success but doesn’t directly measure it |
| Number of defects escaping into production | Track continuously; often reported as part of a sprint review | Lagging, in the sense that quality problems have already occurred. Likely to be inversely aligned with product success |
| Payback period | Manage on a product-goal basis, usually based around major releases; remember that return only happens after you have shipped | Lagging |
| Velocity | Measure at the end of every sprint | Leading |
For IT operations or devops teams it is also worth looking at the four key metrics proposed in the book Accelerate:
- Change Lead Time. The time to implement, test, and deliver code for a feature.
- Deployment Frequency. The number of deployments in a given duration of time.
- Change Failure Rate. The percentage of failed changes over all changes.
- Mean Time to Recovery (MTTR). The time it takes to restore service after production failure.
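The four metrics above can be computed from a simple record of deployments. This is a hedged sketch, assuming each deployment record carries a lead time, a failure flag and, for failures, a recovery time; the record shape and function name are my own, not from Accelerate:

```python
from datetime import timedelta

def four_key_metrics(deployments, period_days):
    """Compute the Accelerate four key metrics from deployment records.

    Each record is a dict with 'lead_time' (timedelta), 'failed' (bool),
    and, for failed deployments, 'recovery' (timedelta)."""
    n = len(deployments)
    failures = [d for d in deployments if d["failed"]]
    mttr = (sum((d["recovery"] for d in failures), timedelta()) / len(failures)
            if failures else timedelta(0))
    return {
        "deployment_frequency": n / period_days,  # deployments per day
        "change_lead_time": sum((d["lead_time"] for d in deployments), timedelta()) / n,
        "change_failure_rate": 100 * len(failures) / n,  # percent
        "mttr": mttr,
    }

# Two deployments over a 30-day window, one of which failed
history = [
    {"lead_time": timedelta(days=2), "failed": False},
    {"lead_time": timedelta(days=4), "failed": True, "recovery": timedelta(hours=2)},
]
print(four_key_metrics(history, 30))
```

In practice these figures would come from your deployment pipeline and incident tracker rather than hand-built records.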