|Category|Metric Name|Definition|Data Required|Use Scenarios and Recommended Practices|Value|
|---|---|---|---|---|---|
|Delivery Velocity|Requirement Count|Number of issues with type "Requirement"|Issue/Task Management entities: Jira issues, GitHub issues, etc.|1. Analyze the number and delivery rate of requirements across time cycles to assess the stability and trend of the development process.<br>2. Compare the number of delivered requirements and the delivery rate across projects/teams, and compare the requirement scale of different projects.<br>3. From historical data, establish a baseline of single-iteration delivery capacity (optimistic, probable, and pessimistic values) as a reference for iteration estimation.<br>4. Drill down into the number and percentage of requirements in each SDLC phase; assess whether the distribution is reasonable and identify requirements stuck in the backlog.|1. From historical data, establish a baseline of single-iteration delivery capacity to improve the organization and planning of R&D resources.<br>2. Evaluate whether delivery capacity matches the business phase and demand scale; identify key bottlenecks and allocate resources accordingly.|
||Requirement Delivery Rate|Ratio of delivered requirements to all requirements|Issue/Task Management entities: Jira issues, GitHub issues, etc.|||
||Requirement Lead Time|Lead time of issues with type "Requirement"|Issue/Task Management entities: Jira issues, GitHub issues, etc.|1. Analyze the trend of requirement lead time to see whether it improves over time.<br>2. Compare the requirement lead time across projects/teams to identify projects with abnormal lead time.<br>3. Drill down into how long a requirement stays in each SDLC phase to locate delivery-velocity bottlenecks and improve the workflow.|1. Analyze key projects and critical points, identify good and to-be-improved practices that affect requirement lead time, and reduce the risk of delays.<br>2. Focus on the end-to-end velocity of the value delivery process; coordinate the different parts of R&D to avoid efficiency silos, and make targeted improvements at bottlenecks.|
||Requirement Granularity|Number of story points associated with an issue|Issue/Task Management entities: Jira issues, GitHub issues, etc.|1. Analyze the story points and lead time of requirements to evaluate whether the ticket size, i.e. requirement complexity, is appropriate.<br>2. Compare the estimated requirement granularity with the actual outcome, and judge whether the difference is reasonable with the help of more fine-grained workload metrics (e.g. lines of code or code equivalents).|1. Encourage product teams to split requirements carefully and improve requirement quality, helping developers understand requirements clearly and deliver efficiently with high quality, and improving the team's project-management capability.<br>2. Establish a data-supported workload estimation model to help R&D teams calibrate their estimation methods and assess requirement granularity more accurately, which supports better issue planning in project management.|
||Commit Count|Number of commits|Source Code Management entities: Git/GitHub/GitLab commits|1. Identify the main causes of an unusual commit count, and the factors that may influence it, through comparison.<br>2. Evaluate whether the commit count is reasonable with the help of more fine-grained workload metrics (e.g. lines of code or code equivalents).|1. Identify potential bottlenecks that may affect output.<br>2. Encourage small, frequent commits and cultivate good coding habits.|
||Added Lines of Code|Accumulated number of added lines of code|Source Code Management entities: Git/GitHub/GitLab commits|1. From the project/team dimension, observe the accumulated added lines to assess team activity and code growth rate.<br>2. From the version-cycle dimension, observe the active time distribution of code changes and evaluate the effectiveness of the project development model.<br>3. From the member dimension, observe the trend and stability of each member's code output, and identify the key factors that affect it through comparison.|1. Identify potential bottlenecks that may affect output.<br>2. Encourage the team to adopt a development model that matches the business requirements, and cultivate good coding habits.|
||Deleted Lines of Code|Accumulated number of deleted lines of code|Source Code Management entities: Git/GitHub/GitLab commits|||
||Pull Request Review Time|Time from a Pull/Merge Request's creation until it is merged|Source Code Management entities: GitHub PRs, GitLab MRs, etc.|1. Observe the mean and distribution of code review time across the project/team/individual dimensions to assess whether review time is reasonable.|1. Take inventory of project/team code review resources to avoid resource shortages and a backlog of reviews, which lead to long waiting times.<br>2. Encourage teams to implement an efficient and responsive code review mechanism.|
||Bug Age|Lead time of issues with type "Bug"|Issue/Task Management entities: Jira issues, GitHub issues, etc.|1. Observe the trend of bug age and locate its key drivers.<br>2. Count and observe bug and incident age by severity level, type (business or functional classification), affected module, and source.|1. Help the team establish an effective tiered response mechanism for bugs and incidents, and focus on resolving the important problems in the backlog.<br>2. Improve the team's and individuals' bug/incident fixing efficiency; identify good and to-be-improved practices that affect bug age or incident age.|
||Incident Age|Lead time of issues with type "Incident"|Issue/Task Management entities: Jira issues, GitHub issues, etc.|||
|Delivery Quality|Pull Request Count|Number of Pull/Merge Requests|Source Code Management entities: GitHub PRs, GitLab MRs, etc.|1. From the developer dimension, evaluate code quality by combining task complexity with the review pass count and review rounds.<br>2. From the reviewer dimension, observe the reviewer's review style by taking into account task complexity, pass count, and review rounds.<br>3. From the project/team dimension, aggregate the review pass count and review rounds together with the project phase and team task complexity, and identify modules with an abnormal code review process and possible quality risks.|1. Code review metrics are process indicators that provide quick feedback on developers' code quality.<br>2. Encourage the team to establish a unified coding specification and standardize code review criteria.<br>3. Identify modules with quality risks early, optimize practices, and distill them into reusable knowledge and tools to avoid the accumulation of technical debt.|
||Pull Request Pass Rate|Ratio of merged Pull/Merge Requests to all Pull/Merge Requests|Source Code Management entities: GitHub PRs, GitLab MRs, etc.|||
||Pull Request Review Rounds|Number of cycles of commits followed by comments or a final merge|Source Code Management entities: GitHub PRs, GitLab MRs, etc.|||
||Pull Request Review Count|Number of Pull/Merge Request reviewers|Source Code Management entities: GitHub PRs, GitLab MRs, etc.|1. As a secondary indicator, assess the labor cost invested in the code review process.|1. Take inventory of project/team code review resources to avoid long review waits caused by insufficient resource input.|
||Bug Count|Number of bugs found during testing|Issue/Task Management entities: Jira issues, GitHub issues, etc.|1. From the project/team dimension, observe the total defect count, the distribution of defects by severity level/type/owner, the cumulative defect trend, and the trend of defects per thousand lines of code.<br>2. From the version-cycle dimension, observe the cumulative trend of defect count and defect rate to determine whether defect growth is slowing and converging, an important reference for judging the stability of a software version's quality.<br>3. From the time dimension, analyze the trend of test defect count and defect rate to locate key projects and critical points.<br>4. Evaluate whether the software quality and test plan are reasonable by referring to CMMI standard values.|1. Drill down into defects to inform design and code review strategies and improve the internal QA process.<br>2. Help teams locate projects/modules with higher defect severity and density, and pay down technical debt.<br>3. Analyze critical points and identify good and to-be-improved practices that affect defect count or defect rate, to reduce future defects.|
||Incident Count|Number of incidents found after shipping|Issue/Task Management entities: Jira issues, GitHub issues, etc.|||
||Bug Count per 1k Lines of Code|Number of bugs per 1,000 lines of code|Issue/Task Management entities plus Source Code Management entities: Git/GitHub/GitLab commits|||
||Incident Count per 1k Lines of Code|Number of incidents per 1,000 lines of code|Issue/Task Management entities plus Source Code Management entities: Git/GitHub/GitLab commits|||
|Delivery Cost|Commit Author Count|Number of contributors who have committed code|Source Code Management entities: Git/GitHub/GitLab commits|1. As a secondary indicator, this helps assess the labor cost of coding work.|1. Take inventory of project/team R&D resource inputs, assess the input-output ratio, and rationalize resource deployment.|
|Delivery Capability|Build Count|Number of builds started|CI/CD entities: Jenkins builds, GitLab CI pipelines, etc.|1. From the project dimension, compare build count and success rate in light of the project phase and task complexity.<br>2. From the time dimension, analyze the trend of build count and success rate to see whether they improve over time.|1. As a process indicator, it reflects the value-flow efficiency of the upstream development stages.<br>2. Identify good and to-be-improved practices that affect builds, and help the team distill reusable tools and mechanisms as infrastructure for fast, high-frequency delivery.|
||Build Duration|Duration of successful builds|CI/CD entities: Jenkins builds, GitLab CI pipelines, etc.|||
||Build Success Rate|Percentage of successful builds|CI/CD entities: Jenkins builds, GitLab CI pipelines, etc.|||
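Several of the requirement metrics above reduce to simple computations over issue records. A minimal sketch in Python, assuming each issue is a dict with `created_at`/`resolved_at` date strings (the field names and date format are illustrative, not a fixed schema):

```python
from datetime import datetime

DATE_FMT = "%Y-%m-%d"  # assumed date format for the illustrative records

def lead_time_days(created_at, resolved_at):
    """Requirement lead time: days from creation to resolution."""
    start = datetime.strptime(created_at, DATE_FMT)
    end = datetime.strptime(resolved_at, DATE_FMT)
    return (end - start).days

def delivery_rate(issues):
    """Requirement delivery rate: delivered (resolved) / all requirements."""
    if not issues:
        return 0.0
    delivered = sum(1 for i in issues if i.get("resolved_at"))
    return delivered / len(issues)

issues = [
    {"created_at": "2023-01-02", "resolved_at": "2023-01-10"},
    {"created_at": "2023-01-05", "resolved_at": None},
]
print(delivery_rate(issues))                       # 0.5
print(lead_time_days("2023-01-02", "2023-01-10"))  # 8
```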
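The iteration-capacity baseline (optimistic, probable, pessimistic) mentioned under Requirement Count can be derived from historical per-iteration counts. One simple choice, shown here as an assumption rather than a prescribed method, is to use quartiles:

```python
import statistics

def capacity_baseline(per_iteration_counts):
    """Pessimistic / probable / optimistic delivery baseline from
    historical per-iteration requirement counts, using quartiles."""
    q1, q2, q3 = statistics.quantiles(sorted(per_iteration_counts), n=4)
    return {"pessimistic": q1, "probable": q2, "optimistic": q3}

history = [8, 10, 12, 9, 11, 13, 10]  # requirements delivered per past iteration
print(capacity_baseline(history))
# {'pessimistic': 9.0, 'probable': 10.0, 'optimistic': 12.0}
```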
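The per-1k-lines quality metrics are a straightforward ratio of defect count to code size; a sketch:

```python
def defects_per_kloc(defect_count, lines_of_code):
    """Bugs (or incidents) per 1,000 lines of code."""
    if lines_of_code <= 0:
        raise ValueError("lines_of_code must be positive")
    return defect_count * 1000 / lines_of_code

print(defects_per_kloc(18, 45_000))  # 0.4
```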
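Pull Request Review Time, as defined above, is the interval between a PR's creation and its merge. A sketch assuming ISO-8601 UTC timestamps of the kind SCM APIs commonly return (the exact format is an assumption):

```python
from datetime import datetime

TS_FMT = "%Y-%m-%dT%H:%M:%SZ"  # assumed ISO-8601 UTC timestamp format

def review_time_hours(created_at, merged_at):
    """Pull Request Review Time: hours from PR creation until merge."""
    delta = datetime.strptime(merged_at, TS_FMT) - datetime.strptime(created_at, TS_FMT)
    return delta.total_seconds() / 3600

print(review_time_hours("2023-03-01T09:00:00Z", "2023-03-02T15:00:00Z"))  # 30.0
```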
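Build Success Rate can be computed over finished builds only, so that in-progress builds do not skew the percentage. A sketch in which the `result` field and its values are assumptions, not a fixed CI schema:

```python
def build_success_rate(builds):
    """Build Success Rate: percentage of finished builds that succeeded."""
    finished = [b for b in builds if b.get("result") in ("SUCCESS", "FAILURE")]
    if not finished:
        return 0.0
    succeeded = sum(1 for b in finished if b["result"] == "SUCCESS")
    return 100.0 * succeeded / len(finished)

builds = [{"result": "SUCCESS"}, {"result": "SUCCESS"},
          {"result": "FAILURE"}, {"result": "RUNNING"}]
print(build_success_rate(builds))  # two of the three finished builds succeeded
```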