Setting up metrics that help the Scrum Team can indeed be a difficult job, and in this article I want to write about some smells that you may observe. These smells are an early-warning system: indications that your metrics work needs some tweaking.
- Metrics That Take Time To Prepare If a Scrum Team needs time to prepare a set of metrics, that is usually a red flag indicating that the metrics are not coming from one place and need some collation effort. Such collation does not add any specific value and should be discouraged. There is also a risk that manually calculated metrics may be based on somebody's perception rather than a universally accepted single source of truth. Where possible, it is a better idea to automate the metrics calculation so that the team's effort is not spent calculating the metric.
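As a minimal sketch of what "automate the calculation from a single source" can mean: compute the metric directly from the raw export of one tool, with no manual collation step. The field names and the sample tickets below are illustrative assumptions, not any specific tracker's API.

```python
from datetime import date

# Hypothetical ticket export from a single source of truth (an issue
# tracker); in practice this would be read from the tool's export or API.
tickets = [
    {"id": "T-1", "started": date(2024, 5, 1), "done": date(2024, 5, 4)},
    {"id": "T-2", "started": date(2024, 5, 2), "done": date(2024, 5, 7)},
    {"id": "T-3", "started": date(2024, 5, 3), "done": date(2024, 5, 5)},
]

def average_cycle_time(items):
    """Average days from start to done, computed from the raw export."""
    durations = [(t["done"] - t["started"]).days for t in items]
    return sum(durations) / len(durations)

print(round(average_cycle_time(tickets), 2))  # prints 3.33
```

Because the number is derived mechanically from the tool's own records, nobody's perception enters the calculation and the team spends no time collating spreadsheets.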
- Teams Seem Hassled About Metrics / Teams Cannot Explain the Rationale Metrics are meant to help Scrum Teams continually improve their performance. This purpose cannot be achieved if the teams are hassled about the additional burden that collecting and tracking metrics puts on them. If the Scrum Team is unhappy about some metrics they have to track or capture, it usually indicates that the team does not understand either why they are being asked to capture certain data or how the data is going to be used. In such a case, being transparent with the Scrum Team about the rationale behind asking for this data helps the team align with the ask. Where possible, enabling automated data capture also reduces the overhead the Scrum Team has to bear, which can further reduce their resistance.
- Metrics That Are Judgy / Random R-A-G Windows The purpose of metrics should be to help the Scrum Team continuously improve its performance. Often, metrics are structured not as a way to improve performance continually but to create arbitrary service-level agreements that are tied to agile contracts. Many times such metrics come with artificially designed acceptable tolerance limits or random Red-Amber-Green ranges rather than tolerance limits that Scrum Teams can understand. In such a scenario, the Scrum Team might focus on representing data in such a way that their own performance falls into the desirable range. This defeats the very purpose of metrics, which is to find ways to improve, not to judge a Scrum Team's performance.
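One way to replace arbitrary Red-Amber-Green cut-offs with limits the team can understand is to derive them from the team's own historical variation. The sketch below assumes a hypothetical list of past cycle times and flags only values well outside normal variation; the data and the two-standard-deviation rule are illustrative choices, not a prescribed standard.

```python
import statistics

# Hypothetical historical cycle times (days) for one team. Limits derived
# from the team's own data are explainable; arbitrary R-A-G bands are not.
history = [2, 3, 3, 4, 4, 5, 5, 6, 8, 12]

mean = statistics.mean(history)          # 5.2
stdev = statistics.pstdev(history)       # population std dev of history

# Flag only values well outside the team's normal variation.
upper_limit = mean + 2 * stdev

def status(cycle_time_days):
    """A conversation-starter, not a judgment."""
    return "investigate" if cycle_time_days > upper_limit else "normal"

print(status(5))   # prints normal
print(status(12))  # prints investigate
```

The point is not the particular formula but that the threshold has a rationale the team can inspect and challenge, which keeps the metric in the service of improvement rather than judgment.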
- Metrics That Create Barriers The purpose of metrics is to enable the Scrum Team to inspect and adapt in ways that continually improve its performance. This becomes difficult to achieve when there are metrics that set team members against each other. Most efficient tester, the developer with zero defects against his or her name, or star of the month are examples of such individual metrics. Such metrics can hinder the Scrum Team from working as a single unit and encourage team members to act as independent players, since each wants to protect their own metric. We have to take similar care when we pit one Scrum Team against another, especially when these teams are working on the same Product Goal and Product Backlog. Metrics that compare one team against another may discourage collaboration among the teams, which in the end might hamper the value delivered to the customer.