How should companies measure the effectiveness of application development?

on Sep 10, 13 • by Chris Bubinas • with 1 Comment

Software has become an increasingly important component of nearly every business over the last two decades, and building better applications can be a very real source of competitive advantage in the enterprise. However, according to a recent McKinsey report, many businesses do little to track how effective their application development practices are. While companies tend to measure inputs such as the cost of development or whether the project came in under budget and on time, they rarely evaluate outputs such as how functional the product was or how productive developers were in putting it together.

One of the major challenges with such productivity metrics is that they can easily become a way to goad or punish developers, fostering resistance that renders the metrics counterproductive. The McKinsey study offered a new approach to measuring developer productivity based on software functionality, raising some important questions: Should companies be measuring the output of their development projects, and, if so, how should they be doing it?

Common approaches to measuring developer productivity might account for the number of lines of code written in a specific timeframe or the quality of the software based on defect rates. The best way to evaluate the effectiveness of application development – or whether it should be done at all – is a subject of continuous debate, and objections to existing metrics are myriad.
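To see why those conventional metrics invite objections, it helps to notice how little they actually compute. The minimal Python sketch below uses purely hypothetical project figures and function names; it shows that lines-of-code throughput and defect density reduce to simple arithmetic that says nothing about the functional value delivered.

    # Illustrative only: the conventional metrics discussed above, computed on
    # hypothetical project figures. Neither number captures functional output.

    def lines_per_developer_day(lines_of_code: int, developers: int, days: int) -> float:
        """Raw coding throughput: lines of code per developer per working day."""
        return lines_of_code / (developers * days)

    def defect_density(defects_found: int, lines_of_code: int) -> float:
        """Quality proxy: defects per thousand lines of code (KLOC)."""
        return defects_found / (lines_of_code / 1000)

    # Hypothetical project: 120,000 LOC, 8 developers, 90 working days, 240 defects.
    print(round(lines_per_developer_day(120_000, 8, 90), 1))  # 166.7 LOC per developer-day
    print(defect_density(240, 120_000))                       # 2.0 defects per KLOC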

What makes software effective?
McKinsey suggested that these metrics can be improved upon by adopting an approach that evaluates “use cases” and “use-case points.” Use cases are tangible functional outcomes of a program, and defining them can give a more comprehensive view of how the software works as opposed to simply drawing up a laundry list of features. Use-case points are essentially scores for how well each of these user functions works in practice. By measuring the quality of each use case, businesses can gain a clear idea of how well their software works. And by thinking of design through the lens of use cases, development teams and business units can communicate more clearly and avoid building in unnecessary functions.
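The McKinsey report, at least as summarized here, does not publish a scoring formula, so the following Python sketch is only one way the idea could be made concrete: it borrows the complexity weights of the classic Use Case Points technique and discounts each use case by an assumed zero-to-one assessment of how well it works in practice. The weights, example use cases and scores are assumptions for illustration, not McKinsey's methodology.

    # A minimal sketch of scoring software by use cases. The weighting scheme and
    # the quality discount are illustrative assumptions, not the McKinsey method.

    from dataclasses import dataclass

    @dataclass
    class UseCase:
        name: str
        transactions: int     # number of user-system interactions in the use case
        quality_score: float  # 0.0-1.0 assessment of how well it works in practice

    def use_case_weight(transactions: int) -> int:
        """Classic Use Case Points buckets: simple, average, complex."""
        if transactions <= 3:
            return 5
        if transactions <= 7:
            return 10
        return 15

    def delivered_points(use_cases: list[UseCase]) -> float:
        """Weight each use case by complexity, then discount by how well it works."""
        return sum(use_case_weight(uc.transactions) * uc.quality_score for uc in use_cases)

    cases = [
        UseCase("Search product catalog", transactions=4, quality_score=0.9),
        UseCase("Check out and pay", transactions=9, quality_score=0.7),
    ]
    print(delivered_points(cases))  # 10 * 0.9 + 15 * 0.7 = 19.5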

While the McKinsey approach offers a worthwhile alternative to more statistically clear-cut measurements such as defect rates or developer speed, the underlying challenge remains the same: To meet executives’ productivity, effectiveness or quality standards, what tools do developers have at their disposal? Clearer design methodologies are just one part of the patchwork of approaches programmers will likely have to use to ultimately create better code – code that satisfies either defect rate metrics or use-case point metrics.

Building in quality
Supplementing good design with tools that improve quality during the initial coding process is a good start, according to software quality experts Capers Jones and Olivier Bonsignour, authors of The Economics of Software Quality. In an interview with TechTarget, they explained that many companies do not take even basic steps toward defect prevention or pre-testing, such as automated source code analysis. By adopting existing technologies and straightforward practices such as peer code review, companies can improve their performance across a number of effectiveness metrics at once.

“If you approach software quality using state-of-the-art methods, you will achieve a synergistic combination of high levels of defect removal efficiency, happier customers, better team morale, shorter development schedules, lower development costs, lower maintenance costs and total cost of ownership (TCO) that will be less than 50 percent of the same kinds of projects that botch up quality,” Jones and Bonsignour told TechTarget.
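Defect removal efficiency, the first benefit in that list, is one of the measurements Jones is best known for: the share of all defects that are found and removed before the software reaches customers. A back-of-the-envelope version, with hypothetical figures:

    # Defect removal efficiency (DRE): the proportion of total defects removed
    # before release. All figures below are hypothetical.

    def defect_removal_efficiency(found_before_release: int, found_after_release: int) -> float:
        total = found_before_release + found_after_release
        return found_before_release / total if total else 1.0

    # Reviews, static analysis and testing catch 460 defects; customers report 40 more.
    print(f"{defect_removal_efficiency(460, 40):.0%}")  # 92%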

While debate is likely to continue over the best way to measure effectiveness, or whether doing so is even possible, any tools that raise the quality of the code being written are likely to be seen as an improvement by both developers and their managers. Implementing solutions such as static analysis can be an effective way to build quality in beyond the design level.
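As a rough illustration of what catching defects before testing can mean, here is a deliberately trivial Python scan. Commercial static analyzers such as Klocwork perform deep semantic and data-flow analysis; the regex patterns, command-line usage and exit-code convention below are illustrative assumptions only.

    # A toy source scan, to illustrate the idea of flagging problems before
    # testing. Real static analyzers go far deeper than these regex checks.

    import re
    import sys
    from pathlib import Path

    CHECKS = {
        r"\bexcept\s*:": "bare 'except:' silently swallows all errors",
        r"\beval\(": "eval() on untrusted input is a security risk",
        r"TODO|FIXME": "unfinished work left in the source",
    }

    def scan(path: Path) -> int:
        findings = 0
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for pattern, message in CHECKS.items():
                if re.search(pattern, line):
                    print(f"{path}:{lineno}: {message}")
                    findings += 1
        return findings

    if __name__ == "__main__":
        # Usage (hypothetical): python scan.py path/to/source
        total = sum(scan(p) for p in Path(sys.argv[1]).rglob("*.py"))
        sys.exit(1 if total else 0)  # non-zero exit lets CI or a pre-commit hook fail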

Software news brought to you by Klocwork Inc., dedicated to helping software developers create better code with every keystroke.

One Response to How should companies measure the effectiveness of application development?

  1. Chris,
    Interesting post. In my view, the McKinsey authors may have misrepresented function points: their assessment is biased towards manual counting, yet automated function point counting eliminates many of the barriers (scalability, cost) that they list as limitations. Bill Curtis recently published his insights into measuring ADM effectiveness. I’d be interested in your take on it. http://www.castsoftware.com/resources/document/whitepapers/modern-software-productivity-measurement-the-pragmatic-guide

    Pete
