Yourdon's Latest Ramblings

greenspun.com : LUSENET : TimeBomb 2000 (Y2000) : One Thread

Sort of a yawner of an essay. Some good points but most certainly no revelations here. Y2K will happen. Glitches will occur. No big deal. We're used to delays. We blame technology for most of our problems today. Tomorrow will be no different. The next six days will be remixed drivel from journalists and so-called Y2K experts. Yes, more airlines will reduce flights. Yes, people on this forum will find more creative ways to find so-called subliminal doomer statements from electric company press releases. Yes, perhaps a million Microsoft software items may be considered non-compliant by Jan 1. And finally, some Walmart in butthole, Kansas may run out of water this week. Who cares, close your eyes, use a mental windowing technique to buy you some time over the rollover period and hopefully you'll wake up on the other side unscathed.

Bernard Llama

-- Bernard (Llama man@cool.net), December 26, 1999

Answers

Lol - Ignorance is bliss, eh Bernard!

"Who cares, close your eyes, use a mental windowing technique to buy you some time" That is exactly the kind of attitude that brought us to this stupid dilemma, and is going to cost consumers trillions of dollars.

You are a funny guy, and perhaps ideally suited for a career as a corporate CEO or a politician.

-- Hawk (flyin@high.again), December 26, 1999.


Llama Brain is the only yawner here. Thanks again, Ed, for helping to wake up the world. When will you be starting that nap, llama? Hopefully sooner rather than later?

Ed Yourdon's Web Site

----------------------------------------------------------------------

Y2K: I Know What I Know

"I know what I know
I'll sing what I said
We come and we go
That's a thing that I keep
In the back of my head"

-- "I Know What I Know," by Paul Simon, from the album Graceland, 1986

During these final days of 1999, I've been getting a rash of phone calls and e-mail messages from newspaper journalists, TV reporters, and concerned individuals, asking for my "latest thinking" and/or "final predictions" about Y2K -- as if there is some last-minute epiphany that will make the outcome undeniably clear to everyone. But while there are now more and more frequent reports and updates on the Y2K situation, I don't think the "big picture" has changed much at all during the past year. I still think that the readiness/compliance claims being made by many organizations and government agencies are optimistic at best. I still don't think that the U.S. can escape the effects of serious Y2K problems elsewhere in the world, given the nature of today's interconnected global economy. And I still don't think the consequences of Y2K disruptions will be overcome within the short time-frame of a three-day "winter storm." I'm sure that critics can find details to quibble with, but in general, I still stand by the arguments and conclusions in the various Y2K essays that you can find in the articles and essays section of this site.

One of the things that has amazed me throughout the Y2K episode is the ease with which government spokesmen, industry leaders, television reporters, pundits, analysts, consultants, and individual citizens assert that they know such-and-such, or that they can prove that such-and-such is a fact. Having been educated as a mathematician [1], and having spent a career working with computer software, I tend to be cautious about such strong statements. Just as much of our traditional mathematics is based on certain axioms -- e.g., the axiom that two parallel lines will never meet, even if extended indefinitely -- it seems to me that many of the arguments and conclusions about Y2K are based on some basic axioms -- i.e., things that we assume to be true, even though we will never be able to prove them to be true.

In this spirit, I thought it might be useful to describe those aspects of Y2K about which I feel very confident, based on axioms that have served me throughout my adult life, or conclusions that can be derived from those axioms. However, unlike some of the other commentators on the Y2K scene, I can't say that I know everything; indeed, my knowledge and wisdom are so limited that there are things that I don't even know that I don't know. There is even a third area of wisdom, namely the things that I know that I don't know; these are worth identifying too.

I Know What I Know

"If you want to know the taste of a pear, you must change the pear by eating it yourself. . . . If you want to know the theory and methods of revolution, you must take part in revolution. All genuine knowledge originates in direct experience."

-- Mao Zedong, speech, July 1937, Yenan, China

I know that the sun has come up, each and every day, for the slightly more than 20,000 days that I have been alive on this planet. I've experienced this with my own senses, and I have no doubt that it's true. My knowledge of physics and celestial mechanics is not sufficient to use this firm knowledge to prove that the sun will come up tomorrow, but it's enough for me to be able to predict tomorrow's sunrise with great confidence. And in the absence of compelling evidence and scientific argument to the contrary, I'm so confident of tomorrow's sunrise that I'm even willing to recommend that others base their plans on this highly likely event taking place tomorrow, and the day after, and several million more days after that.

When it comes to software and computer systems, I know certain things, too -- simply because I have experienced them, day after day, throughout a career that has lasted more than 35 years. But I also know that what I know about software projects is associated with big companies, big computer agencies, and big projects. That's the environment I've worked in, that's where I've provided my consulting advice to hundreds of companies, and that's what I've written some 25 textbooks about.

And what I know about large projects in large companies is that a substantial percentage of them are finished late, and/or over budget, and/or riddled with bugs. An individual computer project, or an individual company, may beat the odds from time to time; but I know that over the past 40 years, which is roughly the period of time that measurements have been kept about software projects, roughly 25% of large projects have been canceled before completion, and only about 60% have been delivered on time or ahead of schedule. I also know, from long experience, that software project managers have been notoriously optimistic about meeting their deadlines, right up until the last moment -- quite literally until the day before the deadline, in some cases. Further, I know that software project managers have been notoriously optimistic about the absence of bugs (or "glitches," as they are frequently called in Y2K discussions), regardless of how much or how little testing they have done. And finally, I know, from visits to hundreds of companies around the world, that the political environment in most large software projects makes it difficult, if not impossible, for bad news to percolate up to the top of the organization. At best, the bad news is filtered as it rises through each layer of management; at worst, it is completely squelched.

Obviously, there are exceptions -- and I know that, too. There are good managers, successful projects, exemplary organizations that thrive on open, forthright discussions, and honest, ethical behavior at every level of the organization. If we use the Software Engineering Institute's Capability Maturity Model as a technical measure of the caliber of a software organization, it's worth noting that at last count, there were roughly 49 organizations around the world at level 4 or 5 ... but roughly 70-75% of the software organizations are at level 1 on the SEI scale, which implies that they have no orderly process for developing their systems, and cannot be counted on to meet schedules or budgets in a consistent fashion.

Here's another thing I know that I know: there is a tradeoff between the schedule, the budget, and the "bugginess" of a large software project. A competent software manager can create an "optimal" plan that will employ the right number of people, for the right amount of time, in order to develop the right software with an acceptable level of quality (i.e., absence of bugs). If such a plan is created, and if it is then carried out by competent software engineers in a rational corporate environment, then it's possible to deliver the software on time, within budget, and with an acceptably low number of bugs. But if any one of these three critical parameters is "compressed" -- e.g., if a nominal two-year schedule is compressed into one year -- then one or both of the other two parameters will "expand" in a greater-than-linear proportion. To put it simply: if you cut the schedule of a software project in half, then you'll more than double the budget, and/or more than double the number of bugs in the delivered system. This hardly seems like rocket science, but I know that I know this is an unpopular truth -- non-technical business managers typically operate under the illusion that they can bully the programmers into delivering the same amount of software in half the originally scheduled time, with the same budget and the same level of quality, through the simple expedient of working 80-hour weeks instead of 40-hour weeks. But with rare exceptions, this has not worked for the past 40 years, at least not in large software projects. This is something I know, deep down in my bones. It's something that most senior managers don't want to hear, and refuse to believe, but I know it -- and I know that I know it.
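To make the shape of that tradeoff concrete, here is a toy model in Python -- a sketch of my own, in which the power-law exponent is an assumption chosen purely to illustrate the greater-than-linear behavior, not a measured industry constant:

# Toy model of the schedule/budget/bugs tradeoff described above.
# ASSUMPTION: a power-law penalty with exponent 1.6 -- an illustrative
# value, not a figure from any published study.

def compression_penalty(nominal_months, compressed_months, exponent=1.6):
    """Multiplier applied to budget and/or bug count when a nominal
    schedule is squeezed into a shorter one."""
    compression = nominal_months / compressed_months  # e.g., 24/12 = 2.0
    return compression ** exponent

# Halving a two-year schedule:
multiplier = compression_penalty(24, 12)
print(f"Cutting the schedule in half multiplies cost and/or bugs by ~{multiplier:.1f}x")
# ~3.0x -- i.e., "more than double," matching the claim above.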

Just as 20,000 consecutive sunrises in the past doesn't necessarily guarantee that I'll see the sun peeking over the Sangre de Cristo mountains tomorrow morning, so it could be argued that 40 years of generally mediocre performance by software organizations doesn't necessarily prove that they'll fail at their Y2K efforts. But the optimists' argument -- "This time it's different!", or "This time, they know that they have to do things right!" -- is generally made by people who have never participated in a large software project, and generally wouldn't recognize a computer if they fell over one. By analogy: it's very charitable to trust a long-time alcoholic when he tells you that after 40 years of binge drinking, he is really, truly, definitely going to stop this time -- but the odds are pretty good that you're going to be sadly disappointed.

So, how does this confident knowledge of mine about large software projects square with the many confident predictions and announcements of Y2K readiness? How does it square, for example, with President Clinton's recent statement that 99.9% of the Federal government's mission-critical systems are now Y2K-compliant -- i.e., that the Federal government is ready for Y2K? A cynic might observe that this statement was made by a man who said, in sworn testimony, that "it depends on what the meaning of 'is' is." A careful observer of the Federal government's Y2K efforts might observe that the government had identified some 9,000 mission-critical systems in 1997, but the number mysteriously dropped to roughly 6,000 by early 1999 -- so perhaps what Mr. Clinton was really saying is "all of the systems we managed to repair are mission-critical," rather than "all of the systems that are mission-critical have been successfully repaired."

But let's leave the cynicism aside, and take the statement at face value; given the industry's mediocre track record for the past 40 years, how is it possible that 99.9% of the mission-critical systems were actually finished in time for the "ultimate" deadline? One aspect of the apparent contradiction is easily explained, based on the tradeoff between schedule, budget, and bugs that I mentioned above. The initial 1997 estimate for Federal government Y2K repairs by the GAO was approximately $2 billion; the most recent estimate that I saw in September 1999 was approximately $8 billion -- and we don't know how accurate the estimates are for post-Y2K repairs. In the best case, this means that the Federal government managed to achieve the politically crucial objective of finishing its repairs in time, at the expense of the politically acceptable sacrifice of overspending its budget by a factor of four. Whether we think this is reasonable or unreasonable, whether we think it demonstrates competence or incompetence, is not the issue here; the main point is that one could plausibly argue that the Federal government achieved the spectacular record of 99.9% schedule compliance because it was willing to tolerate whatever budget over-run was necessary.

Unfortunately, this leaves out the third factor in the "eternal triangle" of software projects: bugs. Neither Mr. Clinton, nor Mr. Koskinen, nor anyone else in the government has told us how many bugs we should expect in the Y2K-compliant systems that will begin running in "production mode" on January 1st. For that matter, they didn't tell us how much testing was done, how "complete" the test coverage was, how many bugs were identified during the testing process, how many "bad fixes" were identified, what the "mean time to repair" was, or anything else about the nature of the bug-elimination activity. (They did tell us, in a few specific instances like Social Security and FAA, that independent verification and validation, or IV&V, was carried out by an independent third party, but they didn't tell us who did the IV&V, or what the detailed results were.) Mr. Koskinen, in particular, has gone to great pains in recent weeks to remind us that "glitches" are a common, everyday occurrence -- and that we should not be shocked or disappointed if there are some glitches associated with Y2K repairs. In concept, he's right, especially when he warns us about incorrectly blaming everyday glitches on Y2K. But it's only useful if we have some way to make relative comparisons between everyday glitches and Y2K-related glitches. Mr. Koskinen has told us, for example, that at any given moment, approximately 2% of the nation's ATM machines are inoperable -- but he has given us no estimates of the percentage of ATM's that will be rendered inoperable because of Y2K errors.

Here again, there are some things that I know about large software projects. I know, from the extensive work carried out by software metrics gurus like Capers Jones, Howard Rubin, and Larry Putnam, that the software industry has typically had approximately 75 defects per 10,000 lines of "delivered code" (or 0.75 defects per function point) -- i.e., software that has been supposedly tested, delivered to the customer, and put into production. And I know, from the reports furnished by such Y2K vendors as Cap Gemini and MatriDigm, that there are between 10 and 100 bugs per 10,000 lines of Y2K-related code that has been remediated and tested. In short, we know that under normal project conditions, software is delivered in a state that is far from error-free; and we know that under normal Y2K project conditions, the best we can hope for is that a Y2K project will have roughly 7 times fewer bugs than a normal project. Indeed, it's more likely that we'll find that a Y2K project has approximately the same number of bugs as any other kind of software project we've carried out for the past 40 years. In other words, it's deja vu all over again -- a concept that I discussed in more detail in an essay by the same title.
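To see what those defect densities imply in absolute terms, consider a back-of-the-envelope calculation; the 10-million-line portfolio size is my own illustrative assumption, while the rates are the ones cited above:

# Expected residual defects for a hypothetical 10-million-line portfolio,
# using the defect densities cited above. The portfolio size is an
# assumption for illustration; the rates come from the essay's figures.

LINES = 10_000_000

normal_rate = 75 / 10_000   # industry norm: 75 defects per 10,000 LOC
y2k_best    = 10 / 10_000   # best reported rate for remediated code
y2k_worst   = 100 / 10_000  # worst reported rate for remediated code

print(f"Normal delivery:         {LINES * normal_rate:,.0f} latent defects")
print(f"Y2K remediation (best):  {LINES * y2k_best:,.0f} latent defects")
print(f"Y2K remediation (worst): {LINES * y2k_worst:,.0f} latent defects")
# Best case: 10,000 vs. 75,000 -- roughly 7x fewer, as noted above.
# Worst case: 100,000 -- more bugs than a normal project, not fewer.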

But we do know that one thing is different about Y2K projects: the deadline is immovable. Thus, it's far more likely that the deadline (December 31st, 1999) was determined first, and the other project parameters -- budget and bugginess -- were considered second. That is, the typical Y2K project manager was probably told, "You're starting your project tomorrow morning, and since you have to be finished by December 31, 1999, that means you have no more than X calendar days. Tell me how much money and how many people you need to finish in time." In the worst case, the project manager was told, "You have X calendar days, and we can only spare Y programmers to work on the project, and we can only squeeze Z dollars out of the budget. Figure out how to make all of this work." Notice that these "marching orders" don't say anything about the number of defects -- but if you constrain the schedule and the budget and the human resources available to work on the project, then the only "variable" left to the manager's discretion is the number of bugs. Or, to put it another way, if the project manager finds that he's stuck with a Y2K project that has to be finished in half the amount of time that should have been allowed, and with half the number of programmers, and half the amount of money, the only way to make up for the shortfall (over and above the inevitable "death march" behavior of heavy overtime throughout the project) is to reduce the amount of time that should have been allocated for testing, and thus suffer the consequences of a higher-than-normal number of bugs.
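One way to visualize why testing is the variable that absorbs the squeeze: a minimal sketch, assuming (purely for illustration) that each full test pass removes half of the remaining defects -- a rule-of-thumb shape, not a measured figure:

# Sketch of why a squeezed test phase is the hidden "variable" the
# manager adjusts. ASSUMPTION: each test pass removes 50% of the
# remaining defects; the initial defect count is also illustrative.

def shipped_defects(initial_defects, test_passes, removal_rate=0.5):
    """Defects remaining after a number of full test passes."""
    return initial_defects * (1 - removal_rate) ** test_passes

planned  = shipped_defects(1_000, test_passes=4)  # the plan: 4 passes
squeezed = shipped_defects(1_000, test_passes=2)  # half the test time
print(f"Planned testing ships ~{planned:.0f} defects")   # ~62
print(f"Squeezed testing ships ~{squeezed:.0f} defects") # ~250
# Halving the test time quadruples the shipped-defect count in this model.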

More than anything else, I know that there is no "silver bullet" for escaping the realities of schedule, budget, and bugginess interactions in a large software project. Appeals to loyalty won't do it. Threats and edicts and patriotic speeches won't do it. On a small project, I know that some of these difficulties can be overcome by brute force -- i.e., young, energetic programmers can work 120 hours per week in order to overcome the obstacles of an aggressive schedule, miserly budget, and severe quality constraint. But on a large software project, there is no such magic. There's no reason that President Clinton or John Koskinen should be expected to know such things; they are the products of Yale Law School, and while I'm willing to assume that they're experts in the field of law, they shouldn't be faulted for not knowing certain fundamental truths about software. They just don't know -- but I know. I may or may not be able to convince or persuade anyone else, but I am absolutely certain of my knowledge in this area. I know, and I know that I know.

So much for the world of large businesses, large government agencies, and large Y2K projects. What about small businesses, small government agencies (especially at the local and county level), and small countries (not the sophisticated small countries like Singapore and Switzerland, but the technologically primitive, developing nations)? As I'll suggest in the next section of this essay, there is a lot that I don't know about these entities. But I do know that if an organization has not started its Y2K remediation effort as of mid-December 1999, and if it has publicly announced that it has no intention of doing so before the end of the year, then the chances of it being Y2K-"compliant" or Y2K-"ready" are close to zero. I'm sure there must be some small companies in this country -- and probably many more in the developing nations -- that have no computers, no telephones, no fax machines, no photocopiers, and no embedded systems of any kind. And perhaps there are a few companies that are miraculously Y2K-compliant by virtue of having acquired new, Y2K-compliant hardware, operating systems (preferably Macintosh, since Microsoft has been sending out Y2K updates to its system software all through 1999), application software, and embedded systems. And I'm willing to accept the possibility that there are some companies that have never bothered to set the date on any of their computerized systems, because nothing they do in their business is date-sensitive. I believe that all of these represent a small percentage of the overall total, but I know that I don't know the exact percentage. All that I do know is this: if you haven't started your Y2K work by December 31, 1999, then there's no way you can hope to finish by January 1, 2000.

The reason this is significant is that virtually every survey -- whether conducted by optimists, pessimists, or neutral pollsters -- indicates that somewhere between 1/3 and 1/2 of the "small" part of the world (small businesses, small government agencies, and small countries) have not started their Y2K work, and will not do any Y2K repairs until they see what problems they encounter after the beginning of the New Year. Though I have been involved in surveying large companies about various aspects of Y2K, I have not personally conducted any of the small-world Y2K surveys. Thus, the knowledge that I have in this area is, technically, not first-hand knowledge, but second-hand knowledge taken from numerous sources that are generally considered reputable and credible. And this means that I should preface my assertion that "one-third to one-half of 'small' entities are going to discover that they're non-compliant on or after January 1, 2000" with the qualifier "I know that I'm confident" rather than "I know that I know."

I Know Some Of The Things I Don't Know

"The whole problem with the world is that fools and fanatics are always so certain of themselves, but wiser people are so full of doubts."

-- Bertrand Russell

There are lots of things that we know enough about to have opinions, often very strong opinions -- but when pressed, we have to admit that we don't have facts, or any kind of certain knowledge. Here are some of the things about which I have opinions, but am willing to admit that I don't know for certain:

How will the small businesses differ from the large businesses, in terms of Y2K outcomes?

It's reasonable to assume that small businesses have less automation than large businesses, but I don't know whether today's small businesses are more dependent, or less dependent, on their computers than large businesses. It's reasonable to assume that small businesses have less sophistication, and are less likely to have done a thorough job of Y2K remediation, testing, and contingency planning than large businesses; but I don't know just how sloppy a job they may have done. Similarly, I know from my own experience in running small businesses that there is less bureaucracy and overhead in a small company, and that it can therefore be more flexible and quick to respond. But on the other hand, small businesses usually have far less cash reserves, and are often just one payroll away from bankruptcy; if Y2K disruptions anywhere in the supply chain (with or without disruptions in their own computers) cause them to lose a month's orders, a month's production, a month's invoices and receivables, will that be enough to put them out of business? I know that these are likely to be significant issues for many small businesses in the New Year, but I don't know what the overall effect will be.

How realistic is it to assume that a small business, which normally depends on one or two PC's, can operate manually?

I can imagine a situation where a small business has just made a transition to computers within the past year or two; in that case, the old manual forms, procedures, and tools are probably still around. There are manual Rolodex files with the names and addresses of customers and vendors; there are manual typewriters gathering dust in the closet; and there are three-part carbon forms for the invoices, the purchase orders, etc. Thus, in theory, the loss of the company's PC's might not be fatal, as the staff could manage to carry on with their old, manual approach. But if the transition to computers took place more than 2-3 years ago, it's more difficult to imagine falling back to a manual mode of operation -- the Rolodex files are out of date, the typewriters have been thrown out, the carbon forms have disappeared, and there has been so much turnover that only one or two of the employees remember how to do things manually. And since PC's have been part of the U.S. corporate culture for nearly 20 years, it's more likely that a new business created any time in the 1990s would have been automated from the very beginning -- i.e., there are no Rolodex files, no manual typewriters, no carbon forms, and no "legacy" skills of doing things the old-fashioned way. When faced with the prospect of bankruptcy, employees and owners in a small business will make heroic efforts to innovate and jury-rig whatever they've got available; but the loss of their computerized records (compounded by the fact that many small businesses have sloppy or non-existent backup procedures) may turn out to be fatal. I know that this is going to be an issue, but I don't know how serious it will be.
How much of a high-volume business environment can be run manually if the computers are down?

It's one thing to suggest that a small business can fall back to manual operations if their PC's are down; but how realistic is it to assume that a large bank, or airline reservation system, can operate manually if its computers fail? An example of this situation is the Internal Revenue Service: in statements describing its Y2K contingency plans, the IRS has stated that if its computers are unavailable for processing tax returns and issuing refunds in early 2000, then agents will be assigned to carry out the process manually. The aggregate output of all agents qualified to do this work, according to the IRS, is 10,000 tax refunds per day. Unfortunately, the IRS receives nearly 100 million tax returns, a reasonable percentage of which involve refunds; there is simply no way that the existing workforce could accommodate this volume of work, and it's unrealistic to assume that additional agents could be hired and trained quickly enough to prevent the entire operation from being overwhelmed.
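A back-of-the-envelope calculation makes the shortfall vivid; the share of returns involving refunds is my own assumption for illustration, while the other figures are the IRS's:

# Back-of-the-envelope check on the IRS manual-fallback plan.
# The 100 million returns and 10,000 refunds/day come from the text;
# the share of returns involving refunds is an illustrative assumption.

returns_per_year = 100_000_000
refund_share     = 0.70      # ASSUMPTION: ~70% of returns involve refunds
manual_per_day   = 10_000    # IRS's stated manual capacity

refunds = returns_per_year * refund_share
days = refunds / manual_per_day
print(f"{refunds:,.0f} refunds at {manual_per_day:,}/day = {days:,.0f} days")
print(f"That is roughly {days / 365:.0f} years of manual processing.")
# 70,000,000 / 10,000 = 7,000 days, i.e., about 19 years.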
Indeed, the "manual fallback" option is only useful to think about as a short-term remedy for handling the highest-priority "transactions" in a business; but that leads to the next question...

How many mission-critical systems can fail before an organization cannot operate? How long can a business sustain Y2K disruptions before it must close its doors?

Unless a business operates in a hurricane zone, or a war zone, or some other environment where crises are a commonplace event, the chances are that there is little or no "organizational memory" for dealing with disruptions to one or more mission-critical systems that last for more than a few days. Indeed, the very term "mission-critical" is vague and ambiguous -- how else can one explain the Federal government's decision to re-prioritize its own systems, so that the approximately 9,000 systems that had been categorized as mission-critical in 1997 shrank to a mere 6,000 mission-critical systems in 1999? Whatever it is that we mean by a mission-critical system, there are two other things that we know are going to be issues, but we don't know enough about to predict accurate outcomes -- i.e., what percentage of an organization's mission-critical systems have to fail before the entire organization fails? And second, how long can a mission-critical system be out of service before it's regarded as having "failed"? If one interprets the phrase literally, the failure of any single mission-critical system is sufficient to prevent the "mission" of the organization from being carried out; but that only makes sense if we assume that the mission-critical system is dysfunctional for a long enough period of time that the organization has "lost control" of the environment with which it was interacting. In the case of a real-time process control system, that might be as short as a few seconds; in the case of a payroll system, it might take weeks or months before the employees get so fed up that they walk away from their jobs. Aside from the "extreme" cases (e.g., a chemical plant or nuclear reactor), it's likely that more than one mission-critical system would have to fail, and the failure(s) would have to last for more than a few days, before the consequences were fatal to the organization -- but since we have little or no historical precedent to guide us, we have to admit that even though the nature of the problem is known, the outcome is unknown.

How many non-mission-critical systems can fail before the aggregate impact is equivalent to that of a mission-critical failure?

This issue is an obvious one; I raise it only as a reminder that most of the Y2K discussions have focused only on the mission-critical systems, which typically comprise only 10-20% of an organization's total systems. Of course, the notion of simply dividing all of an organization's systems into two categories -- mission-critical and unimportant (which is the obvious implication of a system described as less than mission-critical) -- is overly simplistic. It's more appropriate to say that there is an entire spectrum of systems within a typical organization, ranging from utterly useless (i.e., the organization would be better off if it were thrown away) to slightly useful, to moderately useful, to fairly important, to very important, to essential (i.e., mission-critical). There are a few organizations that claim to have repaired all of their systems, but most will admit that they have focused their efforts only on the arbitrarily-defined category of mission-critical; and even if their efforts have been perfectly successful in repairing those mission-critical systems, there is bound to be an impact from the Y2K failures in the fairly important, moderately useful, and slightly useful systems. For example, the largest U.S. organizations have more than 100,000 PC's, and most of the Fortune 500 companies have approximately 10,000 PC's. If 20% of those PC's are associated with mission-critical business processes, perhaps we can assume that they have all been checked individually. But as for the other 80% -- many of which are in regional sales offices, or laptops assigned to traveling sales representatives -- there's a very good chance that the organization has simply sent a memo to all employees telling them what steps they should carry out to check their own machines. It's not unreasonable to imagine that half of those non-mission-critical PC's will have serious problems -- because the users ignored the Y2K memo, or didn't carry out the Y2K testing activities properly, or added non-compliant programs and data to their machine after the testing was completed. So, what happens if 40% of the organization's PC's turn out to be non-compliant when everyone shows up for work on January 3, 2000? Perhaps it's not enough of a problem to cause bankruptcy, but it's got to have an impact on the overall productivity of the organization. How much of an impact? I don't know.

How long will it take for Y2K problems to "ripple" through a supply-chain network?

Dell Computer Company is proud of the fact that many of the parts used to assemble a Dell PC spend less than eight hours in the factory, from the moment they arrive from the parts producer, until the finished PC is shipped out to a customer. While it's an admirable example of just-in-time (JIT) manufacturing, it also suggests that if the required parts don't show up (because the parts manufacturer was having problems, or the shipping agent had problems, or the roads were closed, or whatever), then Dell will be impacted within eight hours. Obviously, the time-delay is longer for other industries, particularly since some industries have followed a practice that individual citizens have been explicitly warned not to do: stockpiling. The strategy is simple and obvious: if you have a week's worth of spare parts sitting in your inventory, then you won't notice the impact of a Y2K problem unless it takes more than a week for your parts-producer, together with the rest of the supply "pipeline", to solve their respective Y2K problems.
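A minimal sketch of this buffering logic, with illustrative numbers (the 72-hour outage and the buffer sizes are assumptions of mine, with the eight-hour JIT figure taken from the Dell example above):

# When does an upstream Y2K outage reach a manufacturer? A minimal
# sketch of the buffering argument above. All numbers are illustrative.

def hours_until_felt(buffer_hours, outage_hours):
    """Hours of production lost once an upstream outage outlasts
    the on-hand inventory buffer."""
    return max(0, outage_hours - buffer_hours)

jit_factory = hours_until_felt(buffer_hours=8, outage_hours=72)    # Dell-style JIT
stockpiler  = hours_until_felt(buffer_hours=168, outage_hours=72)  # one week of parts
print(f"JIT factory loses {jit_factory} hours of production")  # 64
print(f"Stockpiler loses {stockpiler} hours of production")    # 0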
In some cases, it could take weeks or months for a Y2K problem to ripple through the economy; for example, consider the impact of Y2K problems in the agricultural industry that seriously disrupt the spring planting season in the U.S. We're currently eating crops and foodstuffs that were grown and harvested during the fall and winter of 1999; we might not feel the full impact of a spring-2000 agricultural disruption until late summer or fall of 2000. Again, we know that this is an issue we should be looking at, but we don't know what the magnitude of the problem will be.

A final comment about this category: the fact that there are so many things that I know that I don't know -- and that nobody really knows -- ought to be a cause for caution and concern, not casual optimism. Given the kind of problems I've outlined above, it seems naive at best to suggest (as the U.S. Secretary of Commerce recently suggested) that Y2K will not have a noticeable impact on the U.S. economy.

Remember: There Are Things We Don't Know That We Don't Know

"Only the unknown frightens men. But once a man has faced the unknown, that terror becomes the known."

-- Antoine de Saint-Exupéry, Wind, Sand and Stars (originally published as Terre des Hommes, 1939), chapter 2, section 2

The ultimate question that a Y2K planner has to ask is: what are the things that are not even on my radar screen? What are the things that are so unexpected that, if they occur, I will have absolutely no contingency plan or backup strategy? One hopes that national governments have a larger "radar screen" than the typical organization, and that the typical organization has a larger radar screen than the typical individual. But given the unprecedented nature of the Y2K phenomenon, I am convinced that Y2K planners at every level will find that there are things that they simply did not plan for. There may be military consequences, political consequences, socio-economic consequences, and religious consequences that have never appeared in anyone's contingency plans. And there may be "wild card" events whose occurrence can be postulated -- e.g., a Y2K-inspired political assassination, or a Y2K-related speech by the Pope to a billion Catholics, or a Y2K-related nuclear meltdown of several Eastern European reactors on February 29th, when most people thought the Y2K problem was behind us -- but whose consequences turn out to be far more severe than anyone anticipated. Perhaps this category -- the "unknown unknowns" -- is the best explanation for the extensive preparations by government authorities all over the world to put their police forces and armed forces on special alert for the Millennium Rollover. There has been no published evidence, for example, to suggest that the Japanese population is going to behave in anything other than its normal sedate fashion; yet the Japanese government has announced plans to have nearly 100,000 soldiers mobilized for a Y2K emergency. Similar mobilization plans have been announced by the Canadian and British governments, despite repeated indications that the population is facing the arrival of the millennium with indifferent complacency. In the U.S., a $40 million Y2K command center has been set up by the Federal government, and each of the 50 states has set up an emergency center to monitor events.

Surely, all of this activity cannot be justified for dealing with the "known known" Y2K problems -- if 99.9% of the Federal government systems are indeed ready for Y2K, why is there a need for any command center? And it seems like overkill for the "known unknown" problems; for example, terrorists and hackers are a known problem, their specific plans and targets are unknown, but is that enough to justify a brand-new $40 million Y2K command center in Washington? Perhaps the greatest value in all of these emergency command centers will be having a group of savvy, experienced problem-solvers who can all work together, under pressure, to cope with the "unknown-unknown" problems that may crop up.

Conclusion

Perhaps, when the Y2K dust settles, we will discover that the largest and most severe of the unknown-unknown problems was the human, sociological reaction to Y2K technological problems. Bank runs and food hoarding fall into the category of "known-unknown" problems -- i.e., we know that there is a potential for such problems to occur, but the extent and timing of such problems is unknown. Beyond this, though, what if ... Y2K causes an enraged population to march on Washington, burn the city to the ground, and lynch every politician in sight? What if Y2K leads an enraged population to burn every computer programmer at the stake, thereby making it impossible to fix any of the technological problems? What if a new religious messiah emerges from a Y2K crisis and convinces his followers to launch a religious jihad on the rest of the human race? What if ... the list is endless, and is indeed limited entirely by our own imagination at this point.

But in that case, one question will remain: would the unknown-unknown human reaction have been less severe and less unpredictable if the governments of the world had made more of an effort to tell their citizens the truth about Y2K, rather than dismissing it as the proverbial "bump in the road"? Or, to put it another way: will the unknown-unknown human reaction -- e.g., combinations of panic, terror, frustration, rage, and betrayal -- be worse if the technologically-oriented Y2K problems turn out to be more severe, widespread, and long-lasting than the governments of the world have led us to believe?

Hopefully, we'll be able to look back upon all of this at some point in the future, and make an objective judgment about whether our leaders and our governments did the right thing with Y2K. As for me -- an individual citizen, responsible only for myself and my family -- I can only say that I wish I had been told the truth. I know enough about Y2K to be strongly convinced that I have not been told the truth -- and I know enough about the philosophy of government to know that, common practice notwithstanding, the ideal standard is one of truthfulness. Even Richard Nixon, a President whose truthfulness was severely questioned, proposed a standard that I believe would have led to a more successful Y2K outcome than what we will be facing in a few days:

"Let us begin by committing ourselves to the truth -- to see it like it is, and tell it like it is -- to find the truth, to speak the truth, and to live the truth." Richard M. Nixon, Presidential nomination acceptance speech, Miami, Aug. 9, 1968.

Amen.

----------------------------------------------------------------------

Footnote 1. Lest one get the impression that mathematicians know more than other folks, it's worth repeating Bertrand Russell's comment: "Mathematics may be defined as the subject in which we never know what we are talking about, nor whether what we are saying is true." (Mysticism and Logic, chapter 4, 1917; article first published in International Monthly, vol. 4, 1901).

----------------------------------------------------------------------

-- Gordon (g_gecko_69@hotmail.com), December 26, 1999.


Thanks Hawk,

That was the nicest thing anyone has said about me on this forum. I'll cherish it even after the CDC. I am a soccer player first, and politician second.

Berny

-- Bernard (Llama man@cool.net), December 26, 1999.


Also see this thread:

"New Y2K Essay"

http://www.greenspun.com/bboard/q-and-a-fetch-msg.tcl?msg_id=0026a1


-- Linkmeister (link@librarian.edu), December 26, 1999.

Only one of those jobs requires you to use your head too, Bernard, and we both know which one that is! ;-)

-- kritter (kritter@adelphia.net), December 26, 1999.


Anyone ever see a smart llama? ... The only ones that I have ever seen just stand around, gum a lot and ooze lanolin. < vbg >

-- John (jh@NotReal.ca), December 26, 1999.

Soccer! That explains it: too many balls to the head!

-- W (me@home.now), December 26, 1999.

Llama man, no offense, but you say you are a "soccer player first and a politician second."

But do you know anything about SOFTWARE? Ed Yourdon DOES. And I BELIEVE him when he says "I know what I know". I don't TRUST that Koskinen or Clinton - the lawyers - know a damn thing about software. As Ed says, he will assume that they're experts in law based on (their) having graduated from Yale Law School. But neither of them knows computers, and I'm not trusting MY safety and the safety of my family to a couple of lawyers, or even a soccer player. Or a musician. Or a chief justice of the Supreme Court. Or a "feel-good" preacher. Or a salesclerk with Wal-Mart. You get my drift.

Ed may not know everything - but based on what he DOES know -and he KNOWS he knows it - about computers and large software projects - I'm sticking with Ed.

And I found the essay extremely interesting, not at All "a yawner"; but unlike some people, I don't already Know everything.

It's actually quite reassuring to me - the article, I mean. Because I know that the man - or one of them - to whom I have listened - is emminently(sp? Meaning? chuckle) qualified to speak on this subject, and just as important to Me, he is equally as Trustworthy on this subject. Something politicians just don't seem to know how to be.

Thanks, Ed. Thanks a LOT.

-- DB (tomG@h.com), December 26, 1999.


ps - I don't mean to sound like I'm denigrating "clerks at Wal-Mart", cause I'm Not. I'm just saying that based on the enormity of the problem, I am more likely to take what Ed has to say about computers and Bugs and large software projects as being closer to Fact. Because it IS his area of expertise. Just as I pay attention to Cory H. when he talks mainframes. That is His area of expertise. And like Gary North, and probably others on This site, I may not Be a computer expert, but I Read the experts. They're the ones I listen to. Not Lawyers. No. That is all I meant.

-- DB (tomG@h.com), December 26, 1999.
