How do you measure programmer productivity? Some managers measure productivity by counting the number of software lines of code (usually abbreviated as either LoC or SLoC) written each day. I recall, decades ago, a friend’s father telling me his company required programmers to write at least five lines of code per day, on average. At the time, the figure seemed low, but the key is on average. As every professional programmer will explain, you don’t write code every day. You also participate in planning, meetings, testing, meetings, design, meetings, bug fixing, and meetings.
LoC is not considered the best measurement of programmer progress, and it’s certainly not an indication of quality. It’s even debatable what counts as a line of code: Do you include comments? Most say no. Others feel that because leaving code undocumented is frowned upon, comments should be counted. Andrew Binstock recently made a similar argument while discussing the value of comments in code.
What about lines that consist of only braces? What about if statements split with line feeds? Do you count switch statement conditionals? Perhaps you should, because, after all, good structure counts, right?
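To see how arbitrary the count can get, consider this contrived Java snippet (the class and method names are hypothetical, not from the article): both methods do exactly the same thing, yet a naive line counter scores them wildly differently.

```java
// A contrived example: the same behavior written compactly and verbosely,
// producing very different LoC counts.
public class LocCounting {

    // Version 1: one logical statement on a single line.
    static int absCompact(int n) { return n < 0 ? -n : n; }

    // Version 2: identical behavior spread across several lines, with the
    // split condition and the lone braces each "earning" a line of their own.
    static int absVerbose(int n) {
        if (n
                < 0)
        {
            return -n;
        }
        return n;
    }

    public static void main(String[] args) {
        System.out.println(absCompact(-5));  // 5
        System.out.println(absVerbose(-5));  // 5
    }
}
```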
Then there are advancements in Java such as lambdas, which are designed to accomplish in a few lines what previously required many. No developer should be penalized for using modern constructs to write fewer lines of more-effective code, right?
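For instance, here is a small, hypothetical example of the lambda effect: sorting the same list first with a pre-Java 8 anonymous class and then with a lambda. The behavior is identical, but the anonymous-class version costs several times as many lines.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hypothetical illustration: the same sort expressed two ways.
public class LambdaLoc {
    public static void main(String[] args) {
        List<String> names = new ArrayList<>(List.of("Carol", "Al", "Bo"));

        // Pre-lambda style: an anonymous Comparator spanning several lines.
        names.sort(new Comparator<String>() {
            @Override
            public int compare(String a, String b) {
                return Integer.compare(a.length(), b.length());
            }
        });

        // Lambda style: the same comparison in a single line.
        names.sort((a, b) -> Integer.compare(a.length(), b.length()));

        System.out.println(names); // [Al, Bo, Carol]
    }
}
```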
There are times I marvel at how many lines of code I’ve written for whatever project I’m working on. Additionally, common sense says that as you learn a new programming language, the more code you write, the more proficient you get. Looking at it this way, using LoC to measure progress can be rationalized.
However, it’s hard to get past the fact that elegant code is often more concise—and easier to maintain—than poorly written brute-force code. You don’t want to incentivize developers to be overly verbose or inefficient to achieve a desirable LoC metric.
Bill Gates once said, “Measuring programming progress by lines of code is like measuring aircraft building progress by weight.” Indeed, I’ve worked with folks who have valued lines of code removed more than new lines of code added. The most rewarding tasks often involve improving a system design so much that multiple screens-worth of code can be removed.
Given this, here are a few alternatives that can measure programmer progress.
Even when the goal is to track developer productivity using actual numbers, developers often have ample opportunity to make a positive impression with their problem-solving skills. Effective problem-solving requires both thought and research. If you can consistently provide meaningful and timely solutions to even the thorniest of problems, management will know you’ve put in the time and effort to be prepared.
On a developer team, a person who can solve tricky problems has as much value as someone else who cranks out lots of fast code that implements that solution. Yet who gets the credit? Often, it’s the coder, not the problem solver.
Another issue is that problem-solving can fall into the perception-beats-reality category. Some people are louder and bolder than others, and some are good at building on the suggestions of others or, to put it nicely, simply overhearing solutions.
A “problems solved” metric probably won’t fly.
You know the value of Agile development practices. Regardless of whether it’s Scrum, Scaled Agile Framework (SAFe), Kanban, or Scrum at Scale, each method uses iterative development with feedback loops to measure developer progress and effectiveness for meeting user needs and providing business value. Each of these practices also measures progress in terms of completed user stories instead of LoC, using measurements such as velocity, sprint burndown, and cycle time.
In fact, user satisfaction, business value, and software quality are important measurements that counting LoC completely misses.
Measuring software product success is challenging, but it can be done. For example, you can measure sales, downloads, or user feedback to indicate how well the product satisfies the needs of its users. Measuring customer satisfaction assesses how well the software has met the requirements you originally defined. Then, by tracking the completion of individual features along with the developers who worked on them, you can get an accurate feel for how well individuals or entire development teams contributed to that satisfaction relative to others.
There are other soft measurements of developer productivity and effectiveness, such as the satisfaction of other developers and of the developer’s manager. Someone who is easy to work with yet average in terms of productivity and software quality will often be valued more highly than someone who is above average yet hard to work with or just negative overall. This, once again, is something LoC will never tell you.
Borrowing from Gates’ statements on aircraft weight, there is an alternative measurement of programmer productivity that involves counting how many times a section of code has been modified. The theory is that the more you modify a body of code, the more effort is going into that part of the system. This is a measurement of hits of code (HoC) instead of lines of code, and I find this intriguing.
HoC, described nicely by Yegor Bugayenko in 2014, measures overall effort per developer for specific parts of the system being built or enhanced.
Beyond comparing the developer effort, HoC helps you gain an understanding of which parts of your system are getting the most attention—and then you can assess if that’s in line with what the business or customer needs. Further, you can measure the effort against your estimates to determine if you planned properly or if you need to assign more developers to work in an area with high HoC—both of which are reasons HoC could be a more valuable unit of measurement than LoC.
In his essay, Bugayenko shows how HoC is more objective than LoC and is more consistent, because it’s a number that always increases and takes the complexity of the codebase into account.
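If you want to experiment with the idea, here is a minimal sketch (my own, not Bugayenko’s tool) that approximates HoC for a local Git repository, assuming the git command-line client is on the PATH. It sums the added and deleted line counts that git log --numstat reports across every commit; adding git’s --author option would scope the number to a single developer.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

// Minimal sketch: approximate Hits of Code for the Git repository in the
// current directory by summing lines added and deleted across all commits.
public class HitsOfCode {
    public static void main(String[] args) throws Exception {
        Process git = new ProcessBuilder(
                "git", "log", "--numstat", "--pretty=format:").start();

        long hits = 0;
        try (BufferedReader out = new BufferedReader(
                new InputStreamReader(git.getInputStream()))) {
            String line;
            while ((line = out.readLine()) != null) {
                // Each numstat line is "<added>\t<deleted>\t<path>";
                // blank separator lines and binary files ("-") are skipped.
                String[] parts = line.split("\t");
                if (parts.length < 3) continue;
                try {
                    hits += Long.parseLong(parts[0]) + Long.parseLong(parts[1]);
                } catch (NumberFormatException binaryEntry) {
                    // ignore binary files
                }
            }
        }
        git.waitFor();
        System.out.println("Hits of code: " + hits);
    }
}
```

Run from a repository’s root directory, it prints a single number that only ever grows, which is exactly the property that makes HoC hard to game.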
Whether it’s measuring HoC, using expanded Agile metrics, leveraging consistent soft-skills considerations, or valuing developers who have good ideas and problem-solving skills, there are many alternatives to measuring with LoC. Using even a small combination of these alternatives may finally slay the LoC-ness monster.
Eric J. Bruno is in the advanced research group at Dell focused on Edge and 5G. He has almost 30 years of experience in the information technology community as an enterprise architect, developer, and analyst with expertise in large-scale distributed software design, real-time systems, and edge/IoT. Follow him on Twitter at @ericjbruno.