Tell me how you measure me, and I’ll tell you how I behave.
- Eli Goldratt
This conversation tends to spark some interesting sentiments in agile developers and, like everything else wrong in the world, I think Waterfall is to blame.
You see, developers have labored for years under unrealistic deadlines set by uninformed or just downright crazy project estimates that, even in the best of circumstances, are still bound to be wrong. An antipathy grew between developers and the business as the business kept setting deadlines (sometimes based on developer estimates) and developers kept missing them.
Odd metrics began to pop up around lines of code written or deviations from estimates. Developers were rewarded for writing more code and being closer on their estimates – two things that provide no direct value to the business, incidentally. And when these things were off, the hammer came down.
Then agile came, and all was unicorns eating Pop Tarts and pooping gumdrops.
Now, the nasty managers couldn’t impose some huge project deadline. Now, lines of code were meaningless (they always were, by the way). Now, even individual productivity was subsumed under the productivity of the team. Surely, now, developers and management could work together on meaningful metrics and standards for improvement.
But, no! Ensconced in impenetrable sprints and standing with one’s team as a veritable fist of defiance, the oppressed became the oppressors. Software development is knowledge work! You can’t measure productivity! It’s about learning and sharing and growing! It’s not always about getting visible results! There’s lots of… knowledge… work. So, suck on that, management! You come around here with your metrics and incentives, and I’ll tell you where to stick it. We cannot be measured. Software development is just too different.
Let’s assume, in Unicorn Gumdrop Poop World, that both managers and developers are primarily interested in delivering value to the business as defined by the business’ strategic goals. If this is true – if both of these parties are actually on the same team trying to accomplish the same thing – then they should be able to agree on a way to measure this activity that is reasonable, valuable, and equitable.
For a developer to resist any measurement whatsoever is to place a big roadblock in the way of improvement. Operating at peak performance and operating at poor performance look exactly the same from the outside. Trying to avoid making this visible might cause someone to wonder if you have something to hide. I confess, even as a veteran developer, I often wonder if this is the case when I hear another developer arguing at length about how esoteric and unmeasurable software development is.
I believe there are at least two metrics that both parties can agree are useful and quantifiable.
How long does it take to turn a request into a reality?
I’m not talking about an entire project; I’m talking about a feature that can be relatively sized. How long does it take to turn that smallest viable unit of functionality into working software?
Contrary to popular belief, businesses do not reward hard work; they reward the production of value. If Employee A works eight hours to enter 50 customer records, and Employee B spends two hours writing a script that enters 500 customer records that day, the business as a whole will appreciate Employee B because she delivered more value than Employee A.
Effort means nothing unless it produces something valuable for the business. Since this is the goal that unites managers and developers, it makes sense to measure Throughput as opposed to lines of code, classes written, documentation created, etc.
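As a rough sketch of what measuring Throughput this way could look like, here is a minimal Python example. The feature records and dates are entirely invented for illustration; the point is only that lead time and throughput fall out of two timestamps per feature, nothing more exotic.

```python
from datetime import date

# Hypothetical feature records: (date requested, date delivered as working
# software). Names and dates are invented for illustration.
features = [
    (date(2024, 1, 2), date(2024, 1, 9)),
    (date(2024, 1, 3), date(2024, 1, 17)),
    (date(2024, 1, 10), date(2024, 1, 15)),
]

# Lead time: days from request to working software, per feature.
lead_times = [(delivered - requested).days for requested, delivered in features]

# Throughput over the observed window: features delivered per week.
window_days = (max(d for _, d in features) - min(r for r, _ in features)).days
throughput_per_week = len(features) / window_days * 7

print(lead_times)                     # [7, 14, 5]
print(round(throughput_per_week, 2))  # 1.4
```

Notice that nothing here counts lines of code or hours of effort; the only inputs are when a request arrived and when working software shipped.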
High throughput is bad if you are creating crap at high velocity.
Although we want higher throughput, this cannot happen at the expense of quality. This metric is a little trickier, but not impossible. You could measure the number of bugs. You could measure rework time per feature. You could state that a feature is not done until all bugs are fixed and fold that time into lead time, which will lower Throughput. You could state that a feature is not done until it has passed a code review.
Despite quality being a little ephemeral, you can define it in a way that’s quantifiable, and that metric is very useful in conjunction with Throughput.
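To make that concrete, here is one possible quantification, using the rework-time option mentioned above. The feature names and hours are invented for illustration, and this is only one of several reasonable definitions:

```python
# Hypothetical per-feature effort records; names and hours are invented.
features = {
    "export-csv":    {"build_hours": 10, "rework_hours": 1},
    "login-audit":   {"build_hours": 6,  "rework_hours": 4},
    "search-filter": {"build_hours": 8,  "rework_hours": 0},
}

# One possible quality metric: the share of total effort that was NOT
# spent reworking bugs. 1.0 means no rework at all.
total_hours = sum(f["build_hours"] + f["rework_hours"] for f in features.values())
rework_hours = sum(f["rework_hours"] for f in features.values())
quality = 1 - rework_hours / total_hours

print(round(quality, 2))  # 0.83
```

Whether you count bugs, rework hours, or failed code reviews matters less than picking a definition the whole team accepts and tracking it consistently.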
If Quality is high and Throughput is low, perhaps there’s a bottleneck that’s keeping the team from producing as fast as they could. If Throughput is high and Quality is low, then the team is producing too quickly and needs to further limit their work in progress.
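That interpretation rule can be sketched as a tiny function. The thresholds here are placeholders I made up; a real team would calibrate them against its own history, and the output is a conversation starter, not a verdict:

```python
def diagnose(throughput, quality, throughput_target=1.0, quality_target=0.9):
    """Turn the two metrics into a starting point for a team conversation.

    Thresholds are hypothetical; pick ones that fit your team's history.
    """
    if quality >= quality_target and throughput < throughput_target:
        return "look for a bottleneck limiting delivery"
    if throughput >= throughput_target and quality < quality_target:
        return "producing too fast: limit work in progress"
    if throughput < throughput_target and quality < quality_target:
        return "step back: both metrics are suffering"
    return "on track"

print(diagnose(throughput=0.5, quality=0.95))  # look for a bottleneck limiting delivery
print(diagnose(throughput=1.4, quality=0.83))  # producing too fast: limit work in progress
```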
No single metric can tell the whole story of software development, but metrics can provide useful points of conversation and collaboration about how the team in general is doing, where we might start looking to incrementally increase performance, and whether our adjustments are having the intended effect.