A while back, a group of us in the Winter Tech Forum community were discussing the age-old question: how do you measure engineer productivity without making engineers hate life? It is hard to do without becoming Big Brother. I don't have a neat answer, but below are some things I have experimented with.
Most of these don't involve "measurement" of efficiency. In my experience, measurement is extremely difficult to implement without mangling the incentive structure for engineers. In some cases it may be worthwhile, but you can get a lot of mileage out of simply fixing things known to kill productivity:
- Let people work on things they care about.
If people are working on something they believe in and are excited about, output will go up. Give them control & autonomy, and they'll run through a wall for you.
- How does the team choose work?
I have worked on teams where salespeople went directly to developers whenever they needed something, so everyone was putting out little fires instead of making progress on big-picture tasks.
We resolved this by forming a committee with representatives from sales, dev, support, & management. They chose the work, and the scrum master guarded the dev team against interruptions. The developers in this group also met separately to make sure they presented dev-team priorities: paying down technical debt, big-picture architecture changes, etc.
- How often are developers interrupted for support calls?
Interruptions are the mortal enemy of deep, focused work. Support ticketing systems can sometimes be mined to see which features/modules are causing the most support calls.
- What is our level of technical debt?
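The ticket-mining idea above can be sketched like this. The CSV export and its `module` column are invented for illustration; real ticketing systems export different shapes, and you would point this at an actual export or API instead.

```python
import csv
from collections import Counter
from io import StringIO

# Hypothetical ticket export; column names are made up.
TICKETS_CSV = """ticket_id,module,summary
101,billing,Invoice totals wrong
102,billing,PDF export fails
103,reports,Chart renders blank
104,billing,Tax rate not applied
"""

def support_hotspots(csv_text):
    """Count support tickets per feature/module to find hotspots."""
    rows = csv.DictReader(StringIO(csv_text))
    return Counter(row["module"] for row in rows)

print(support_hotspots(TICKETS_CSV).most_common())
# [('billing', 3), ('reports', 1)]
```

The modules at the top of that list are candidates for bug-fixing or UX work that would reduce interruptions at the source.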
A company that cannot answer this is in for trouble. One method I have seen work is simply creating Jira cards for all known tech debt, each with a rough estimate, so the percentage of tech debt in the backlog can be calculated.
- How early in the build process are quality checks run?
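The backlog-percentage calculation above could look like the following. The hard-coded cards stand in for a real Jira query, and the field names are invented; a real version would pull issues from the Jira API and read a tech-debt label.

```python
# Hypothetical backlog snapshot; a real version would query Jira.
backlog = [
    {"key": "APP-101", "estimate_days": 3,  "tech_debt": False},
    {"key": "APP-102", "estimate_days": 5,  "tech_debt": True},
    {"key": "APP-103", "estimate_days": 2,  "tech_debt": True},
    {"key": "APP-104", "estimate_days": 10, "tech_debt": False},
]

def tech_debt_percentage(cards):
    """Share of total estimated effort that is labelled tech debt."""
    total = sum(c["estimate_days"] for c in cards)
    debt = sum(c["estimate_days"] for c in cards if c["tech_debt"])
    return 100.0 * debt / total if total else 0.0

print(f"{tech_debt_percentage(backlog):.0f}% of the backlog is tech debt")
# 35% of the backlog is tech debt (7 of 20 estimated days)
```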
Having static code analysis, code coverage, etc. run on the build server is great, but also having them run on developer machines is better. Otherwise, you risk people wrapping up a task, pushing, & context switching to another task, only to find out later that static analysis failed.
- Track the ratio of closed bugs to open bugs.
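One common way to get those checks onto developer machines is a git pre-push hook. Here is a minimal sketch in Python; the `flake8` and `pytest` commands are placeholders for whatever your build server actually runs.

```python
import subprocess
import sys

# Placeholder commands; substitute your project's real linter,
# coverage, and static-analysis invocations.
CHECKS = [
    ["flake8", "."],
    ["pytest", "--quiet"],
]

def run_checks(checks):
    """Run each command in order; report the first failure."""
    for cmd in checks:
        if subprocess.run(cmd).returncode != 0:
            print(f"pre-push check failed: {' '.join(cmd)}", file=sys.stderr)
            return False
    return True

# Saved as .git/hooks/pre-push (and made executable), a hook would
# end with: sys.exit(0 if run_checks(CHECKS) else 1)
# Demo with a harmless placeholder command:
print(run_checks([[sys.executable, "-c", "pass"]]))  # prints True
```

Hooks are per-clone, so teams usually wire this up via a bootstrap script or a tool that installs hooks automatically.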
If open bugs are growing faster than closed ones, there may be a problem. Over time, you can also track the history of that ratio against other events.
- Measure the amount of duplicate code in the codebase using something like PMD.
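Tracking that trend can be as simple as the sketch below, using hypothetical weekly snapshots of the bug tracker's opened/closed counts.

```python
# Hypothetical weekly snapshots of the bug tracker.
history = [
    {"week": "2023-W01", "opened": 12, "closed": 15},
    {"week": "2023-W02", "opened": 14, "closed": 10},
    {"week": "2023-W03", "opened": 20, "closed": 9},
]

def open_to_closed_ratio(snapshot):
    """Bugs filed per bug fixed in a given period."""
    return snapshot["opened"] / snapshot["closed"]

# A ratio trending above 1.0 means bugs are being filed
# faster than they are being fixed.
for s in history:
    print(s["week"], round(open_to_closed_ratio(s), 2))
```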
If there is duplication, and the duplicates grow apart, maintenance costs go up.
- Schedule uninterrupted developer time.
Developers should be able to block out uninterrupted time to cut down on context switching and guard against interruptions.
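The PMD duplication measurement mentioned above can be summarized from CPD's report with a short script. The toy report below is shaped after CPD's XML output; the exact elements and attributes may vary between PMD versions, so check your own report before relying on this.

```python
import xml.etree.ElementTree as ET

# Toy report in the shape of CPD's XML output; attribute names
# may differ between PMD versions.
REPORT = """<?xml version="1.0"?>
<pmd-cpd>
  <duplication lines="40" tokens="120">
    <file line="10" path="src/OrderService.java"/>
    <file line="55" path="src/InvoiceService.java"/>
  </duplication>
  <duplication lines="15" tokens="60">
    <file line="3" path="src/Utils.java"/>
    <file line="88" path="src/Helpers.java"/>
  </duplication>
</pmd-cpd>
"""

def duplicated_lines(report_xml):
    """Total duplicated lines across all reported clone pairs."""
    root = ET.fromstring(report_xml)
    return sum(int(d.get("lines")) for d in root.iter("duplication"))

print(duplicated_lines(REPORT))  # 55
```

Plotting that total over time gives a simple early-warning signal that duplication is accumulating.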
- Do they have the best machines money can buy? (& the other points from the Joel Test)
I have seen developers using 15-year-old machines! What a ridiculous false economy. Sometimes fixing this means pushing back on budgets. The same is true of tooling shared by the team.
- Designate a person or group of people to run interference between support & developers.
In our case, developers were losing significant time helping with support tickets. We had a developer move into a role as head of support, and he was knowledgeable enough to intercept many of the inquiries that would otherwise reach a developer. Generally, developers don't want to move to support permanently, so perhaps this could be a rotation.
- Fix your local builds! (Thanks Drew Stephens)
Many companies have projects that won't build on a fresh machine, a fresh clone, etc. Developers can spend countless hours fighting brittle dependency trees & tooling.
- Choose performant tools & languages that integrate well (Thanks Marshall Pierce)
Your choice of programming language & tooling should prioritize developer productivity. At a minimum, you should provide a "paved path" - a set of tools which integrate well.