Heavy Sigh, is this OUR BITR Koskinen?????


First, I am a lurker; this is my first post here, and I wanted to say THANK YOU! I am prepared, not paranoid, because of the intelligent and excellent discourse I have read here and on other sites. I am hopeful that in the new society we will have this time next year, we will all have the self-respect and self-responsibility to take good care of each other and ourselves. I know it ain't gonna be easy, but if we have to end up starting over, I hope we all learn that compassion, empathy, co-operation, and honesty are far more important than profit margins, stock portfolios, and luxury for a few at the expense of many.

Blessings to all from all is much more sustainable and desirable.

Yourdon's Warning on the Trade-Offs: Deadline, Budget, and Bugs Remaining

Link: http://www.yourdon.com/index2.html

Comment: There are no free lunches. You must pay for compliant code: in time and money. The time limit was imposed by 1/1/00. The money limit was established by managers who probably did not understand how large-scale software projects operate.

Yourdon might have added another constraint: experienced mainframe programmers. They are in declining supply. The fact that their salaries did not soar to $400/hour indicates that businesses and governments did not hire most of them.

He says, categorically, that the traditional schedules of large-scale software projects will appear in the y2k repair projects. This means that a significant percentage will be late.

Then there is the matter of bugs. These projects will not be delivered bug-free. Testing must be performed to remove them, and testing takes time and money, both of which have been in short supply.

This is from his site.

* * * * * * * * * * * *

. . . One of the things that has amazed me throughout the Y2K episode is the ease with which government spokesmen, industry leaders, television reporters, pundits, analysts, consultants, and individual citizens assert that they know such-and-such, or that they can prove that such-and-such is a fact. Having been educated as a mathematician [1], and having spent a career working with computer software, I tend to be cautious about such strong statements. . . .

In this spirit, I thought it might be useful to describe those aspects of Y2K about which I feel very confident, based on axioms that have served me throughout my adult life, or conclusions that can be derived from those axioms. . . .

I Know What I Know. . . .

And what I know about large projects in large companies is that a substantial percentage of them are finished late, and/or over budget, and/or riddled with bugs. An individual computer project, or an individual company, may beat the odds from time to time; but I know that over the past 40 years, which is roughly the period of time that measurements have been kept about software projects, roughly 25% of large projects have been canceled before completion, and only about 60% have been delivered on time or ahead of schedule. I also know, from long experience, that software project managers have been notoriously optimistic about meeting their deadlines, right up until the last moment -- quite literally until the day before the deadline, in some cases.

Further, I know that software project managers have been notoriously optimistic about the absence of bugs (or "glitches," as they are frequently called in Y2K discussions), regardless of how much or how little testing they have done. And finally, I know, from visits to hundreds of companies around the world, that the political environment in most large software projects makes it difficult, if not impossible, for bad news to percolate up to the top of the organization. At best, the bad news is filtered as it rises through each layer of management; at worst, it is completely squelched. . . .

If we use the Software Engineering Institute's Capability Maturity Model as a technical measure of the caliber of a software organization, it's worth noting that at last count, there were roughly 49 organizations around the world at level 4 or 5 ... but roughly 70-75% of the software organizations are at level 1 on the SEI scale, which implies that they have no orderly process for developing their systems, and cannot be counted on to meet schedules or budgets in a consistent fashion.

Here's another thing I know that I know: there is a tradeoff between the schedule, the budget, and the "bugginess" of a large software project. . . .

. . . I know that I know this is an unpopular truth -- non-technical business managers typically operate under the illusion that they can bully the programmers into delivering the same amount of software in half the originally scheduled time, with the same budget and the same level of quality, through the simple expedient of working 80-hour weeks instead of 40-hour weeks. But with rare exceptions, this has not worked for the past 40 years, at least not in large software projects. This is something I know, deep down in my bones. It's something that most senior managers don't want to hear, and refuse to believe, but I know it -- and I know that I know it. . . .
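One way to see why a schedule cannot simply be compressed is the trade-off law implied by Larry Putnam's software equation, under which the effort required grows roughly as the inverse fourth power of the schedule for a fixed-size system. The sketch below (in Python, purely for illustration) is a toy model with invented constants; only the shape of the curve is the point.

    # Toy model of the schedule/effort trade-off law implied by
    # Putnam's software equation: for a fixed-size system,
    # effort ~ 1 / schedule^4.  All constants here are invented.

    def effort_person_months(baseline_effort, baseline_months, compressed_months):
        # Effort required if the baseline plan is squeezed into fewer months.
        return baseline_effort * (baseline_months / compressed_months) ** 4

    BASELINE_EFFORT = 200.0   # person-months in the uncompressed plan (assumed)
    BASELINE_MONTHS = 24.0    # originally scheduled duration (assumed)

    for months in (24, 18, 12):
        needed = effort_person_months(BASELINE_EFFORT, BASELINE_MONTHS, months)
        print(f"{months:>2} months -> {needed:6.0f} person-months")

    # 24 months ->    200 person-months
    # 18 months ->    632 person-months
    # 12 months ->   3200 person-months
    #
    # Halving the schedule multiplies the required effort sixteen-fold,
    # which is why 80-hour weeks cannot buy back a halved deadline.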

So, how does this confident knowledge of mine about large software projects square with the many confident predictions and announcements of Y2K readiness? How does it square, for example, with President Clinton's recent statement that 99.9% of the Federal government's mission-critical systems are now Y2K-compliant -- i.e., that the Federal government is ready for Y2K? A cynic might observe that this statement was made by a man who said, in sworn testimony, that "it depends on what the meaning of 'is' is." A careful observer of the Federal government's Y2K efforts might observe that the government had identified some 9,000 mission-critical systems in 1997, but the number mysteriously dropped to roughly 6,000 by early 1999 -- so perhaps what Mr. Clinton was really saying is "all of the systems we managed to repair are mission-critical," rather than "all of the systems that are mission-critical have been successfully repaired."

But let's leave the cynicism aside, and take the statement at face value; given the industry's mediocre track record for the past 40 years, how is it possible that 99.9% of the mission-critical systems were actually finished in time for the "ultimate" deadline? One aspect of the apparent contradiction is easily explained, based on the tradeoff between schedule, budget, and bugs that I mentioned above. The initial 1997 estimate for Federal government Y2K repairs by the GAO was approximately $2 billion; the most recent estimate that I saw in September 1999 was approximately $8 billion -- and we don't know how accurate the estimates are for post-Y2K repairs. In the best case, this means that the Federal government managed to achieve the politically crucial objective of finishing its repairs in time, at the expense of the politically acceptable sacrifice of over-spending its budget by a factor of four. Whether we think this is reasonable or unreasonable, whether we think it demonstrates competence or incompetence, is not the issue here; the main point is that one could plausibly argue that the Federal government achieved the spectacular record of 99.9% schedule compliance because it was willing to tolerate whatever budget over-run was necessary.
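Both the over-run factor and the more cynical reading reduce to a few lines of arithmetic, using only the figures quoted above:

    # Back-of-envelope check on the Federal figures quoted above.

    est_1997 = 2e9            # GAO's initial repair estimate, dollars
    est_1999 = 8e9            # estimate as of September 1999, dollars
    print(f"budget over-run factor: {est_1999 / est_1997:.0f}x")   # 4x

    systems_1997 = 9_000      # mission-critical systems identified in 1997
    systems_1999 = 6_000      # mission-critical count by early 1999
    compliant = 0.999         # claimed compliance of the 1999 list

    # The cynical reading: measure compliance against the original inventory.
    share = compliant * systems_1999 / systems_1997
    print(f"repaired share of the 1997 list: {share:.1%}")         # 66.6%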

Unfortunately, this leaves out the third factor in the "eternal triangle" of software projects: bugs. Neither Mr. Clinton, nor Mr. Koskinen, nor anyone else in the government has told us how many bugs we should expect in the Y2K-compliant systems that will begin running in "production mode" on January 1st. For that matter, they didn't tell us how much testing was done, how "complete" the test coverage was, how many bugs were identified during the testing process, how many "bad fixes" were identified, what the "mean time to repair" was, or anything else about the nature of the bug-elimination activity. . .
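For readers who have never seen one, here is a minimal sketch of the kind of defect being counted: the classic two-digit-year failure, the "windowing" repair that many remediation projects applied, and an example of the "bad fix" category mentioned above. The pivot year of 50 is an arbitrary assumption, and the example is illustrative, not drawn from any real remediated system.

    # The classic two-digit-year defect and the "windowing" repair that
    # many remediation projects applied.  The pivot year of 50 is an
    # arbitrary, assumed choice; the example is purely illustrative.

    def age_buggy(birth_yy, current_yy):
        # Pre-remediation logic: breaks once the year wraps to 00.
        return current_yy - birth_yy

    def expand_windowed(yy, pivot=50):
        # Windowing: two-digit years below the pivot are 20xx, the rest 19xx.
        return 2000 + yy if yy < pivot else 1900 + yy

    def age_fixed(birth_yy, current_yy):
        return expand_windowed(current_yy) - expand_windowed(birth_yy)

    print(age_buggy(65, 0))   # -65: the raw Y2K failure (born 1965, year 2000)
    print(age_fixed(65, 0))   # 35: correct after windowing
    print(age_fixed(48, 0))   # -48: a "bad fix" -- a 1948 birth year falls
                              # on the wrong side of the pivot and still breaks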

Here again, there are some things that I know about large software projects. I know, from the extensive work carried out by software metrics gurus like Capers Jones, Howard Rubin, and Larry Putnam, that the software industry has typically had approximately 75 defects per 10,000 lines of "delivered code" (or 0.75 defects per function point) -- i.e., software that has been supposedly tested, delivered to the customer, and put into production. And I know, from the reports furnished by such Y2K vendors as Cap Gemini and MatriDigm, that there are between 10 and 100 bugs per 10,000 lines of Y2K-related code that has been remediated and tested. In short, we know that under normal project conditions, software is delivered in a state that is far from error-free; and we know that under normal Y2K project conditions, the best we can hope for is that a Y2K project will have roughly 7 times fewer bugs than a normal project. Indeed, it's more likely that we'll find that a Y2K project has approximately the same number of bugs as any other kind of software project we've carried out for the past 40 years. . . .
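To make those defect densities concrete, here is the same arithmetic applied to a hypothetical portfolio; the 10-million-line code-base size is an assumption chosen only for illustration, while the rates are the ones quoted above:

    # Residual-defect arithmetic using the densities quoted above.
    # The 10-million-line portfolio size is a hypothetical assumption.

    portfolio_loc = 10_000_000

    normal_rate = 75 / 10_000     # defects per line, typical delivered code
    y2k_low = 10 / 10_000         # best reported rate for remediated code
    y2k_high = 100 / 10_000       # worst reported rate for remediated code

    print(f"typical delivery: {portfolio_loc * normal_rate:>7,.0f} latent defects")
    print(f"Y2K best case:    {portfolio_loc * y2k_low:>7,.0f}")
    print(f"Y2K worst case:   {portfolio_loc * y2k_high:>7,.0f}")

    # 75,000 versus 10,000-100,000: even the best case leaves thousands
    # of bugs to surface in production, and the worst case is no better
    # than an ordinary project.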

More than anything else, I know that there is no "silver bullet" for escaping the realities of schedule, budget, and bugginess interactions in a large software project. Appeals to loyalty won't do it. Threats and edicts and patriotic speeches won't do it. . . .

How much of a high-volume business environment can be run manually if the computers are down? It's one thing to suggest that a small business can fall back to manual operations if their PC's are down; but how realistic is it to assume that a large bank, or airline reservation system, can operate manually if its computers fail? An example of this situation is the Internal Revenue Service: in statements describing its Y2K contingency plans, the IRS has stated that if its computers are unavailable for processing tax returns and issuing refunds in early 2000, then agents will be assigned to carry out the process manually. The aggregate output of all agents qualified to do this work, according to the IRS, is 10,000 tax refunds per day. Unfortunately, the IRS receives nearly 100 million tax returns, a reasonable percentage of which involve refunds; there is simply no way that the existing workforce could accommodate this volume of work, and it's unrealistic to assume that additional agents could be hired and trained quickly enough to prevent the entire operation from being overwhelmed. Indeed, the "manual fallback" option is only useful to think about as a short-term remedy for handling the highest-priority "transactions" in a business; but that leads to the next question...

How many mission-critical systems can fail before an organization cannot operate? How long can a business sustain Y2K disruptions before it must close its doors? Unless a business operates in a hurricane zone, or a war zone, or some other environment where crises are a commonplace event, the chances are that there is little or no "organizational memory" for dealing with disruptions to one or more mission-critical systems that last for more than a few days. . . .
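The scale mismatch in that IRS fallback plan is worth spelling out. A quick back-of-envelope sketch, in which the share of returns involving refunds is my own assumption (the essay says only "a reasonable percentage"):

    # Scale mismatch in the IRS "manual fallback" plan.  The 70% refund
    # fraction is my own assumption (the essay says only "a reasonable
    # percentage"); the other figures are quoted above.

    returns_per_year = 100_000_000
    refund_fraction = 0.70            # assumed
    manual_capacity = 10_000          # refunds per day, per the IRS

    refunds = returns_per_year * refund_fraction
    backlog_days = refunds / manual_capacity
    print(f"{backlog_days:,.0f} working days, ~{backlog_days / 250:.0f} years")
    # 7,000 working days -- roughly 28 years for a single filing season.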

How many non-mission-critical systems can fail before the aggregate impact is equivalent to that of a mission-critical failure? This issue is an obvious one; I raise it only as a reminder that most of the Y2K discussions have focused only on the mission-critical systems, which typically comprise only 10-20% of an organization's total systems. . . .

There are a few organizations that claim to have repaired all of their systems, but most will admit that they have focused their efforts only on the arbitrarily-defined category of mission-critical; and even if their efforts have been perfectly successful in repairing those mission-critical systems, there is bound to be an impact from the Y2K failures in the fairly important, moderately useful, and slightly useful systems. For example, the largest U.S. organizations have more than 100,000 PC's, and most of the Fortune 500 companies have approximately 10,000 PC's. If 20% of those PC's are associated with mission-critical business processes, perhaps we can assume that they have all been checked individually. But as for the other 80% -- many of which are in regional sales offices, or laptops assigned to traveling sales representatives -- there's a very good chance that the organization has simply sent a memo to all employees telling them what steps they should carry out to check their own machines. It's not unreasonable to imagine that half of those non-mission-critical PC's will have serious problems -- because the users ignored the Y2K memo, or didn't carry out the Y2K testing activities properly, or added non-compliant programs and data to their machine after the testing was completed. So, what happens if 40% of the organization's PC's turn out to be non-compliant when everyone shows up for work on January 3, 2000? Perhaps it's not enough of a problem to cause bankruptcy, but it's got to have an impact on the overall productivity of the organization. How much of an impact? I don't know. . . .
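Spelling out the arithmetic in that PC scenario, using exactly the figures in the paragraph above (a 10,000-PC fleet, 20% individually checked, and the posited 50% failure rate among the rest):

    # The PC-fleet scenario above, spelled out.  Every figure comes from
    # the paragraph; none of them are measured data.

    fleet = 10_000               # PCs at a typical Fortune 500 company
    mission_critical = 0.20      # individually checked; assume all pass
    memo_failure_rate = 0.50     # assumed share that ignored or botched the memo

    broken = fleet * (1 - mission_critical) * memo_failure_rate
    print(f"{broken:,.0f} of {fleet:,} PCs non-compliant ({broken / fleet:.0%})")
    # 4,000 of 10,000 PCs non-compliant (40%)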

The ultimate question that a Y2K planner has to ask is: what are the things that are not even on my radar screen? What are the things that are so unexpected that, if they occur, I will have absolutely no contingency plan or backup strategy? One hopes that national governments have a larger "radar screen" than the typical organization, and that the typical organization has a larger radar screen than the typical individual. But given the unprecedented nature of the Y2K phenomenon, I am convinced that Y2K planners at every level will find that there are things that they simply did not plan for. . . .

Surely, all of this activity cannot be justified for dealing with the "known known" Y2K problems -- if 99.9% of the Federal government systems are indeed ready for Y2K, why is there a need for any command center? And it seems like overkill for the "known unknown" problems; for example, terrorists and hackers are a known problem, their specific plans and targets are unknown, but is that enough to justify a brand-new $40 million Y2K command center in Washington? Perhaps the greatest value in all of these emergency command centers will be having a group of savvy, experienced problem-solvers who can all work together, under pressure, to cope with the "unknown-unknown" problems that may crop up. . . .

As for me -- an individual citizen, responsible only for myself and my family -- I can only say that I wish I had been told the truth. I know enough about Y2K to be strongly convinced that I have not been told the truth -- and I know enough about the philosophy of government to know that, common practice notwithstanding, the ideal standard is one of truthfulness. . .



-- Laurie in Idaho (laurelayn@yahoo.net), December 27, 1999

Answers

Laurie,

Ummm... thanks for posting this again, but it's the essay that I posted on my web site, at http://www.yourdon.com/index2.html a couple of days ago. Looks like you pulled it off Gary North's site, because that's where I saw his reference to the shortage of mainframe programmers...

Ed

-- Ed Yourdon (ed@yourdon.com), December 27, 1999.


Welcome Laurie, great post.

"the ideal standard is one of truthfulness. . . " - amen

-- snooze button (alarmclock_2000@yahoo.com), December 27, 1999.


Echoes the Naval War College's comment -

Our bottom line: The future is transparency--get used to it!

-- Mac (sneak@lurk.com), December 27, 1999.


Nice summary - thank you for the effort.

(From somebody who has to de-bug those programs all-too-often after delivery.)

-- Robert A Cook, PE (Marietta, GA) (cook.r@csaatl.com), December 27, 1999.


I'm going to print this out and frame it. This will become a historic document.

-- Carl (no3daystorm@hotmail.com), December 27, 1999.


Laurie, please don't post anything like this again, as it gives me a headache reading something that is logical and makes sense. Not used to it.

Give me the spin and the BS, any day of the week, that's what I am used to.

Seriously, everything you've stated makes perfectly good common sense. Therefore the conclusions you've reached also make sense. It would only be blind luck that would carry this nation through this without some serious problems erupting somewhere. Or for that matter in a number of locations, all at one time.

I thank you for your observations. Interesting reading.

-- Richard (Astral-Acres@webtv.net), December 27, 1999.


*<) Yipes!

Well - here you go again Ed! Hope he transcribed it correctly (over there) and attributed it correctly (over there) to you (over here) for us to re-read (over hear...)

-- Robert A Cook, PE (Marietta, GA) (cook.r@csaatl.com), December 27, 1999.


Laurie, God bless. It sure would be wonderful if we all practiced decency, honesty, and compassion. It's kind of what I'm hoping for. Does it take the end of the world as we know it for this to happen?

-- Mara (MaraWayne@aol.com), December 27, 1999.
