We've seen our software projects go up and come down; some of us have seen them go down in flames or up in smoke. We've experienced periods when all the work and effort invested in a piece of software paid off in product quality, stability and competitive features. But we've also lived to see times when all the work led to more problems--the more we did, the worse things got! It is embedded in a developer's subconscious that, once complexity reaches a certain limit, we lose control of the whole system and its maintenance becomes a nightmare. However, we rarely try to quantify this intuitive idea, although these issues were addressed as early as the 1970s. Among the pioneers of a scientific approach (*) to software project management were IBM's Belady and Lehman. In their 1976 paper, "A Model of Large Program Development", they quantify what we all know intuitively.
This paper postulates two rather obvious "Laws of Program Evolution Dynamics": the law of continuing change and the law of increasing entropy. However, the third postulated law, that of statistically smooth growth, is much more interesting. Based on a statistical model the authors built from data gathered over more than a decade of developing OS/360, an operating system for IBM's mainframes (**), this law is easily stated in the form of a curve. We may call it the Belady-Lehman curve.
The graph depicts the expected number of bugs in a software product throughout its lifecycle. As the B-L curve tells us, there is a period when the bug count decreases with the passing of time (and the doing of work). However, as a result of constant change to the system (bug-fixing, further development, refactoring and so on), the structure of the whole system degenerates, and it becomes harder to maintain, harder to control. At some point the scales tip, and further work increases the bug count--we introduce more bugs than we are able to fix.
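To make the shape concrete, here is a minimal sketch in Python. To be clear, this is not Belady and Lehman's actual statistical model; it just sums a decaying term (early debugging pays off quickly) and a growing term (entropy compounds with every change), with every constant made up for illustration.

```python
# A toy rendering of the B-L curve's shape -- not the authors' model.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 60, 600)        # months of development
debugging = 400 * np.exp(-t / 8)   # early work removes bugs fast
entropy = 5 * np.exp(t / 15)       # structural decay grows with change
bugs = debugging + entropy

t_min = t[np.argmin(bugs)]         # the B-L minimum point
plt.plot(t, bugs)
plt.axvline(t_min, linestyle="--")
plt.xlabel("time (months)")
plt.ylabel("expected open bugs")
plt.title(f"Toy B-L curve, minimum near month {t_min:.0f}")
plt.show()
```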
The minimum point of the B-L curve is the moment when the software system is at its most stable version. Continuing to work on the project the same way we did before this B-L minimum point would result in poorer product quality. So what do we do? Daniel Berry throws us a couple of ideas in "The Inevitable Pain of Software Development, Including of Extreme Programming, Caused by Requirements Volatility" (I found this through Scott Rosenberg's blog at www.wordyard.com):
1. We can roll back to the most stable version, declare all bugs to be features and stop making any changes to the software, which is what was done with the classic Unix tools. This renders the software dead, which might work out fine if we hold a special position in the market, or if we're doing it just for kicks. In real life, the product at this point is never shippable, so the only place this path leads is the unemployment line.
2. We try our best to change our wicked ways and somehow move the B-L minimum point further to the right. As Berry points out, most modern coding paradigms, methodologies and industry practices were invented merely to push that point a bit further. I'll throw in a few random ones just to prove the point:
- Modularity and Information Hiding. If we reduce complexity by breaking the product into modules whose implementations are independent of each other, the B-L point of each module will be reached later. The whole is not a simple sum, given the complexity of integration and of the system's overall behavior, but it is still safe to assume that the whole product's B-L point will then be further to the right (see the sketch after this list).
- Unit Testing. If a bug is found and fixed immediately after its creation, is it a bug at all? It shouldn't be counted into the value plotted on the curve--thus unit testing flattens the slope at which the B-L curve grows.
- Bug Tracking. Obviously, we use this to assess our position on the graph.
- Code Control. Again obviously, we use this to move freely left and right on the graph.
- Agile Methodologies and Iterative Development. By reviewing our position more often, we are able to alter our actions earlier and avoid reaching the steeply growing part of the B-L curve.
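As a back-of-the-envelope check of the modularity claim, we can reuse the toy curve from above. The assumption here is mine, not Berry's: splitting a system into k independent modules slows each module's entropy growth, at the price of a small integration term, and all the constants remain made up.

```python
# Compare the B-L minimum of a monolith with that of a modular system,
# using the same made-up curve shape as in the earlier sketch.
import numpy as np

t = np.linspace(0, 120, 1200)

def toy_curve(t, debug_scale, entropy_scale, entropy_horizon):
    return debug_scale * np.exp(-t / 8) + entropy_scale * np.exp(t / entropy_horizon)

monolith = toy_curve(t, 400, 5, 15)

k = 4
module = toy_curve(t, 400 / k, 5 / k, 30)  # each module decays more slowly
integration = 2 * np.exp(t / 40)           # the glue decays too, but gently
modular = k * module + integration

print(f"monolith B-L minimum at month {t[np.argmin(monolith)]:.0f}")
print(f"modular  B-L minimum at month {t[np.argmin(modular)]:.0f}")
```

With these numbers, the modular system's minimum lands roughly ten months later than the monolith's, which is all the bullet above claims: not immunity, just a B-L point pushed further to the right.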
And so on. However, Berry leaves us with little optimism, stating that:
> Consequently, the software tends to decay no matter what. The B-L upswing is inevitable.
Unsurprisingly, however, real-life data never looks much like Belady and Lehman's curve. Here's the recorded count of open bugs at my company, for a large, classic distributed system that is heavily customized for most corporate clients.
Real-life data is corrupted by one or more of the following factors:
- The number of reported bugs is not equal to the real number of existing bugs: duh;
- History "starts" at one point in time: in our example, Bugzilla was installed in 2003;
- Testing teams change through time: growth of the team means more found and reported bugs, altering our perspective on the matter;
- Pushing the development teams prior to deliveries produces local minima in the B-L curve: in our chart, the local minima fall suspiciously close to shipping dates.
If our graph doesn't even resemble the Belady-Lehman curve, how do we know where on the B-L curve our current project stands? Why don't we all compare notes? I encourage readers to post their own charts; they are easily generated if you use Bugzilla, the free bug tracking system.
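Here's a minimal sketch of how such a chart could be produced. It assumes a hypothetical CSV export with one row per bug and two ISO-formatted date columns, "opened" and "closed" (the latter empty while the bug is still open); this is not a built-in Bugzilla report, just the shape of data any tracker can be coaxed into exporting.

```python
# Chart the number of open bugs over time from a CSV export.
# Assumed (hypothetical) columns: opened,closed -- ISO dates; closed may be empty.
import csv
from collections import Counter
from datetime import date, timedelta

import matplotlib.pyplot as plt

deltas = Counter()  # +1 the day a bug is opened, -1 the day it is closed
with open("bugs.csv") as f:
    for row in csv.DictReader(f):
        deltas[date.fromisoformat(row["opened"])] += 1
        if row["closed"]:
            deltas[date.fromisoformat(row["closed"])] -= 1

day, end = min(deltas), max(deltas)
days, counts, open_bugs = [], [], 0
while day <= end:
    open_bugs += deltas.get(day, 0)  # running total of still-open bugs
    days.append(day)
    counts.append(open_bugs)
    day += timedelta(days=1)

plt.plot(days, counts)
plt.ylabel("open bugs")
plt.title("Open bug count over time")
plt.show()
```

Smooth the resulting series (a moving average will do) before trying to eyeball where its B-L minimum might be; the raw counts will be dominated by the release-cycle dips mentioned above.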
(*) For a recent idea that can be filed under "scientific approach", see Joel Spolsky's "Evidence-Based Scheduling".
(**) Development of OS/360 also gave us Fred Brooks' "The Mythical Man-Month".