Agile PM Scorecard
… or the six things I review with my steering committee.
Every three weeks I meet with my steering committee to review the status of our system redesign project. Over the course of a year, what we talk about and the information I provide have changed. We started out talking about scope, budget, and quality in general terms, but in many cases we lacked real statistics to support our conversations. And given how much I was hammering the importance of quality, my committee wanted quantifiable data to back up what I was saying. I finally hit on the idea of a pseudo balanced scorecard to show project data. After a few iterations we’ve settled on the most useful and concise set of data for us:
- Budget. This is an obvious one. Each sprint we spend money, mostly on resources. I report actual versus planned spend.
- Scope. This is a little more complex, but VersionOne helps. The scope of the project is tracked in story points. Specifically, we take the total remaining scope from the last report and do some math: add the story points (SP) introduced through new stories, subtract SP removed through re-estimation or backlog grooming, and subtract SP burned down, leaving us with a new total. What we’re looking for here is a trend of falling scope, obviously. (The first sketch after this list walks through the arithmetic.)
- Remaining Scope per Release. The project is divided into somewhat-logical production releases. Our production releases are set at the quarter level, although the scope may or may not fit into that time frame. We use this data to manage the project/product at the release level, deciding whether we need to redefine the scope of a release by adding or removing functionality. As part of this, we estimate the total sprints remaining to complete the release; this includes a best guess at unknown and yet-to-be-added SP (our unknown unknowns).
- Un-estimated Stories per Release. This is so we know what’s out there, and represents one aspect of risk against the releases.
- Quality. Reporting on quality is probably the most difficult part of the scorecard. It is challenging to explain the intricacies of software quality to executives, and coming up with straight numbers to quantify the system’s quality isn’t easy either. That said, for better or worse I show them: (1) code coverage, (2) defect count, and (3) a ‘quality index’, a formula I developed from statistics generated by our continuous-integration style checker. (A hypothetical version of such an index is sketched after this list.)
- Velocity. This is reported in the first section, but we expand on it here and include a rolling average (see the last sketch after this list). This is often the focus of our discussion. I’ll leave it at that.
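
To make the scope arithmetic and the release projection concrete, here is a minimal sketch in Python. Every name and number in it is a hypothetical illustration, not our actual figures; the 15% buffer for unknown unknowns in particular is just a stand-in for the "best guess" mentioned above.

```python
import math

# Hypothetical numbers for illustration only.
previous_total_sp = 420   # remaining scope at the last report
sp_added = 35             # new stories written this period
sp_removed = 12           # re-estimation / backlog grooming
sp_burned = 58            # completed this period

# New remaining total: start from the last report, add new work,
# subtract removed and completed work.
new_total_sp = previous_total_sp + sp_added - sp_removed - sp_burned
print(new_total_sp)       # 385 -- we want this trending down over time

# Release-level projection: estimate sprints remaining, padding the
# known backlog with a guessed allowance for unknown and
# yet-to-be-added SP (the unknown unknowns).
rolling_velocity = 45     # average SP burned per sprint (hypothetical)
unknown_buffer = 0.15     # assumed 15% allowance, purely illustrative
sprints_remaining = math.ceil(
    new_total_sp * (1 + unknown_buffer) / rolling_velocity
)
print(sprints_remaining)  # 10
```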
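The actual quality index formula isn’t given here, so the following is only one hypothetical way such an index could be derived from style-checker output and coverage; it is not the formula used on this project.

```python
def quality_index(violations, lines_of_code, coverage):
    """Hypothetical quality index -- NOT the project's actual formula.
    Scales style-checker violations per KLOC against test coverage,
    producing a 0-100 score where higher is better."""
    violations_per_kloc = violations / (lines_of_code / 1000)
    # Assumption: 50+ violations per KLOC scores zero on cleanliness.
    cleanliness = max(0.0, 1.0 - violations_per_kloc / 50)
    return round(100 * cleanliness * coverage, 1)

# Example: 300 violations across 120k lines with 78% coverage.
print(quality_index(300, 120_000, 0.78))  # 74.1
```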
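And a rolling-average velocity is just the mean of the last few sprints’ burned SP. A small sketch, assuming a four-sprint window (the window size is my assumption, not stated above):

```python
def rolling_velocity(history, window=4):
    """Average SP burned per sprint over the last `window` sprints.
    The window size is an assumption; use whatever smooths your data."""
    recent = history[-window:]
    return sum(recent) / len(recent)

burned_per_sprint = [38, 52, 41, 47, 55, 44]  # hypothetical history
print(rolling_velocity(burned_per_sprint))    # (41+47+55+44)/4 = 46.75
```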
Overall this makes for a meeting where we can get into good discussion about why some data looks good or bad. I can use it as a focus to ask for additional resources or help, or we can discuss realistic target dates for functionality. The level of transparency this project has is beyond any other project in our organization, or any project I’ve ever run. Sometimes it makes for difficult conversations, no doubt, but those conversations can be had sooner rather than later.