Scary Quantification
greenspun.com : LUSENET : TimeBomb 2000 (Y2000) : One Thread
Make sure you read the new article "Quantifying Effects of y2k" on this host (Yourdon) site. Sample scary quote:
"But the question remains: how good a testing job did your staff really do? The only way you're going to find out is by subjecting some, if not all, of your code to a detailed third-party, independent audit. Vendors like Cap Gemini, Reasoning Systems, and MatriDigm Corp do this for a living; on samples ranging from 9 million lines of code up to 50 million lines of code, involving dozens of different companies, these vendors have found between 450 and 900 date-related bugs per million lines of code. Remember: this is code that had already been remediated, had already been tested, and was ready to be put back into operation. And keep in mind that we're not talking about millions of lines of modified code, but rather millions of lines of total code in the system; thus, if your organization has 100 million lines of code, these vendors are suggesting that you will have 45,000 to 90,000 bugs on 1 Jan 2000."
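The extrapolation in the quote is straightforward arithmetic, and easy to check. A quick sketch (the per-million-line rates come from the quote itself; the 100-million-line portfolio is the quote's own hypothetical):

```python
# Residual date-bug rates reported by the third-party auditors (per the quote).
bugs_per_mloc_low, bugs_per_mloc_high = 450, 900
total_loc = 100_000_000             # hypothetical portfolio: 100 million lines

mloc = total_loc / 1_000_000        # 100 million lines = 100 MLOC
low = bugs_per_mloc_low * mloc      # expected residual bugs, low end
high = bugs_per_mloc_high * mloc    # expected residual bugs, high end
# low, high come out to 45,000 and 90,000 - matching the quote's figures.
```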
-- Runway Cat (firstname.lastname@example.org), December 21, 1998
Hi RC: Yep, those same metrics jumped out at me too when I read the article. One other thing I saw this week was a cover story in PCWEEK about the number of bugs increasing each year. They had a nice chart covering 1994 - 1998 that showed a dramatic increase in the overall total each year. From memory (ballpark only), the first year shown had only about a dozen bugs or so, while this year is closer to 100.
The info on bugs was compiled, in part, from the www.bugnet.com database, which is where I found that humorous "12 bugs of Christmas." Anyway, when you add up the increasing number of bugs and what Ed wrote, you get some hard ammo to use against the pollyannas.
-- Rob Michaels (email@example.com), December 21, 1998.
It's been a long time (say 20 years) since I did my time doing COBOL, so if this question is silly, I apologize. But anyway: has anybody done any studies on how long it takes, on average, to fix a bug that was introduced while attempting to fix something else?
In other words, it would seem fairly easy to put paid to the "three days to a week" fix time being bandied about by the pollyannas if we could say that it takes an average of X hours to fix one such bug, and then multiply by the number of bugs. (Yes, I know that's oversimplifying, and yes, I know that the fixes will themselves cause some bugs, but I'm trying for round numbers here, okay?)
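Arlin's back-of-the-envelope can be sketched directly. Every number below is a placeholder guess, not a measured value, and (as he notes himself) the model ignores the bugs the fixes themselves introduce:

```python
def repair_calendar_months(residual_bugs, avg_hours_per_bug, staff, hours_per_month=160):
    """Naive serial-effort model: total fix hours divided by available staff-hours.
    Ignores coordination overhead and bugs introduced by the fixes themselves."""
    total_hours = residual_bugs * avg_hours_per_bug
    return total_hours / (staff * hours_per_month)

# Illustrative only: 45,000 residual bugs at 4 hours each, 50 programmers.
months = repair_calendar_months(45_000, 4, 50)   # 22.5 calendar months
```

Even with generous guesses, the multiplication lands nowhere near "three days to a week."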
-- Arlin H. Adams (firstname.lastname@example.org), December 21, 1998.
Arlin: Not a silly question at all. I haven't seen any metrics for this. Maybe Capers Jones or someone on the forum knows. We do know some other things though.
Consider that the 'when' part of the question (when the bug is found after the code is remediated) plays a large part in the 'how much'. It has been widely documented that the later in the life-cycle a bug is found (user testing or production, for example, as opposed to requirements/analysis), the more expensive it is to fix. Time is money. Since Y2K remediation projects also go through stages, there is a correlation here. As of 1997, the cost was about $1.50 per line of code for inspection and correction. From what I have heard, this has about doubled in 1998. So while we may not have the 'how long', we have solid metrics for the 'how much', and for the quantity of introduced bugs too. This is helpful to know.
The 'how long' average may not exist. There are over 500 computer languages in use, so there may be several, perhaps many, averages, depending on a number of variables including the language itself.
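Rob's per-line cost figures multiply out the same way as the bug counts. In the sketch below, the 1997 rate is from his post, the 1998 rate is his "about doubled" estimate, and the portfolio size is a hypothetical:

```python
rate_1997 = 1.50             # dollars per line, inspection and correction (per the post)
rate_1998 = rate_1997 * 2    # "about doubled" in 1998

loc = 10_000_000             # hypothetical 10-million-line portfolio
cost_1997 = loc * rate_1997  # $15,000,000
cost_1998 = loc * rate_1998  # $30,000,000
```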
-- Rob Michaels (email@example.com), December 21, 1998.
Arlin: My favorite quote on 'fix on failure' for large software systems comes from Cory Hamasaki's DC Y2K Weather Report #105:
These mega systems cannot be fixed in a few hours or days. It will take months. If these systems can be fixed on failure, then let's just roll the clock forward RIGHT NOW, fix these mega systems over this weekend, and be done with it.
-- Arnie Rimmer (Arnie_Rimmer@usa.net), December 22, 1998.
My feeling is that 90% of bugs are trivial to fix once discovered, 90% of what's left are 10x harder, 90% of the 1% are a further 10x harder, and so on to infinity. This is why only the very simplest of software projects is ever bug-free, and the really big ones often never reach usability at all.
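One way to formalize that tiering (my framing, not the poster's): if the fraction of bugs at tier k is 0.9 x 0.1^k and each tier costs 10x the one before, then every tier contributes the same 0.9 units of expected effort, so the mean fix cost per bug grows without bound as tiers are added:

```python
def expected_effort(tiers):
    """Mean fix effort per bug when the 90%/10x hierarchy is truncated at `tiers` levels.
    Tier k holds a 0.9 * 0.1**k fraction of the bugs, each costing 10**k effort units."""
    return sum(0.9 * (0.1 ** k) * (10 ** k) for k in range(tiers))

# Each extra tier adds roughly the same 0.9 units, so the series never
# converges - a toy account of why the biggest projects never quite finish.
effort_1 = expected_effort(1)     # ~0.9
effort_10 = expected_effort(10)   # ~9.0
```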
A nontrivial bug that's also a showstopper is the nightmare. The bigger the organisation or system, the more likely it is to happen.
Cory isn't saying that the systems can't be fixed. He's saying that they can't be fixed IN TIME given what he sees as the current state of play. I'm hoping that because he's a specialist in obsolescent big-company mainframe systems, he gets to see the worst cases and this biases his world-view. I have to say, Cory's posts worry me more than any of the doom-mongers do.
-- Nigel Arnot (firstname.lastname@example.org), December 22, 1998.
You are right on, Nigel.
Average-size software projects: 25% fail and are cancelled prior to implementation, 15% finish late (averaging 6-7 months' delay), and 60% finish on time but are unreliable until debugged.
Large software projects: 65%, 21%, and 14% respectively.
Source: Sunset Research
-- Rob Michaels (email@example.com), December 22, 1998.
Rob - Bugs which are caught and fixed early in the software lifecycle should be _less_ expensive than those which show up later. We've always used the admittedly informal "decimal point rule": catch 'em in design, it costs you 50 cents. Move the decimal one place for every phase they get through. If they show up all the way out in deployment/production, you've got a $5,000 problem (including some very annoyed customers). It always helps management understand when I have to justify the Test phase of the project: "Factor of 10 in the costs if we don't do all the testing, boss!"
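The decimal point rule is easy to put in code. The 50-cent base comes from the post; the phase names and their count are illustrative assumptions:

```python
# Illustrative phase ordering; the rule multiplies cost by 10 per phase survived.
PHASES = ["design", "code", "unit test", "system test", "production"]

def cost_when_caught(phase):
    """Dollar cost to fix a bug caught in the given phase, per the decimal point rule."""
    return 0.50 * 10 ** PHASES.index(phase)

cost_when_caught("design")       # 0.5
cost_when_caught("production")   # 5000.0
```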
That's why the approach of deploying without testing will be such a disaster. Those of you who've worked with Microsoft NT 4.x: imagine what NT 4.0 would have done to important systems if Microsoft hadn't done all that Beta testing. It was bad (and expensive) enough when they did!
Now imagine that it's not your messaging server, but your enterprise financial system that's gone four paws in the air. Ugly...
-- Mac (firstname.lastname@example.org), December 22, 1998.
Just a point of reference here: the initial commercial release of Windows 95 was estimated to have 14,000 bugs. It still worked.
CS theory says you will never get all the bugs out of a complex system - check it out - the standard equations give logarithmically growing time to drive down the remaining bugs.
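One standard way to make that claim concrete (an assumed model, not necessarily the poster's source) is an exponential defect-discovery curve: cumulative bugs found by time t is a(1 - e^(-bt)). Solving for the time at which only k bugs remain gives t = ln(a/k)/b, so each tenfold reduction in the remaining-bug count costs the same fixed amount of calendar time, and driving the count toward zero takes ever longer:

```python
import math

def time_until_remaining(a, b, k):
    """Time until only k of a initial latent bugs remain undiscovered,
    under the exponential discovery model m(t) = a * (1 - exp(-b * t))."""
    return math.log(a / k) / b

# Illustrative numbers: 14,000 latent bugs, detection rate b = 1.0.
# Going from 14,000 down to 14 remaining takes ln(1000)/b time units;
# cutting the remainder by another factor of 1,000 takes as long again.
t = time_until_remaining(14_000, 1.0, 14)
```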
-- Paul Davis (email@example.com), December 22, 1998.