To Hoffmeister: Your Arguments Are Persuasive But Not Convincing : LUSENET : TimeBomb 2000 (Y2000) : One Thread

Dear Hoffmeister,

I read one of your recent responses to Ed Yourdon. I was interested to see arguments from an articulate polly with an IT background. Although I have no IT background (I'm an MD with one semester hour of computer programming about 30 years ago), I want to tell you that your arguments are pretty persuasive. However, with an issue so potentially important, I must try to gauge the level of concern among large numbers of IT professionals, especially those taking a "big picture" view. As you know, those levels of concern are quite variable.

I'm confident that the organization that has devoted the most resources to assessing and dealing with the Y2K problem is the US government, without a close second. Although people deeply concerned about Y2K often feel the government has let us down, there is a paradox that has been operating in my Y2K psyche. Whenever I have felt soothed or confused, as the case may be, by polly talk within or outside of the government, it is the memory of statements made and written by government officials that keeps me, more than anything else, concerned about Y2K. Although most of the alarming statements were not made directly by IT professionals, I have no doubt that they reflect the advice of some of the most highly esteemed IT people in this country.

Last week's Senate report was consistent with previous ones with respect to the level of concern that I believe is reflected. One statement that got my attention was this, from the Executive Summary. "The cost to regain operational capability for any mission-critical failure will range from $20,000 to $3.5 million, with an average of 3 to 15 days necessary to regain lost function." Such a generic statement is difficult to interpret, since there will doubtless be a lot of trivial failures, and since we might expect failures in Italy, for example, to last longer than those in the US.

But it does tell me one thing. The IT professionals advising our federal government must believe that many failures next year will be qualitatively different from the trivial failures of 1999.

I use the word "trivial" because you used it to describe Y2K failures as a general class. I wonder if you would have used that word one year ago, or if instead you have been led to believe in the triviality of Y2K by the non-events of 1999 (incidentally, I was wrong about 1999).

Here is why I believe failures next year may not be so trivial. Even though Gartner Group says we are already experiencing a high rate of Y2K "failures", and that the frequency of failures next year will not be vastly greater than now (maybe even less if you include failures from side effects of code remediation), many of those 1999 failures may be ghosts. Many may have been averted by a single quick fix or data-entry workaround. Each quick fix in 1999 may have paved the way to preventing other failures later in the year, failures already accounted for in Gartner's graph. In practice, these failures may have been removed from the 1999 graph and dumped onto the heap of failures in the 2000 graph, where a new and probably much more time-consuming fix will be necessary. Many of them, of course, have already been fixed properly, the percentage depending on the quality of the Y2K project.
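A typical quick fix of the kind described was "windowing": leave the two-digit year in the data and reinterpret it at read time. A minimal sketch of the idea (the function name and the pivot value of 50 are assumptions for illustration, not any particular vendor's implementation):

```python
def expand_year(yy, pivot=50):
    """Windowing quick fix: map a two-digit year onto a 100-year window.

    Two-digit years below the pivot are read as 20xx, the rest as 19xx.
    Note the bug is deferred, not removed: the window itself expires
    once real data crosses the pivot.
    """
    return 2000 + yy if yy < pivot else 1900 + yy

print(expand_year(99))  # -> 1999
print(expand_year(0))   # -> 2000
```

A fix like this can be dropped into one program in an afternoon, which is exactly why a 1999 "failure" handled this way never becomes visible outside the IT department.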

This dumping phenomenon could significantly alter the ratio of 1999 failures to 2000 failures, meaning that we may not now be experiencing anything remotely resembling the frequency and severity of the actual failures we'll see in 2000. In that case, one of Cory Hamasaki's favorite predictions, "Times up, rules change," would be appropriate.

There is a good example of a unique Y2K failure that found its way into public awareness because of the impossibility of a quick fix. The problem of year 2000 expiration dates on credit cards could not be immediately contained within IT departments, because the consumers were holding a piece of the system in their wallets. This problem, which caused significant inconveniences for a large number of people, was first addressed by reissuing credit cards with pre-2000 expiration dates, allowing time for the remediation and testing of the involved programs. It was only weeks or months later that systems were fixed to the point that banks could reissue cards with 2000 dates.

What would have happened if every part of these systems had been immediately accessible to the banks' IT departments, as is true in almost all systems? The companies would have instructed all data entry personnel to enter only pre-2000 expiration dates, or use a purely software-contained workaround, and the general public would never have heard about it. How many tens of thousands of systems around the world are working that way right now? Every time they function in this way, a 1999 failure has "happened." But that failure exists only in an abstract world.
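The credit-card problem came down to a two-digit year comparison. A hypothetical validation routine (the names and shape are illustrative, not the banks' actual code) shows why a card stamped "00" was rejected:

```python
def card_expired(expiry_yy, current_yy):
    # Naive two-digit comparison, as in unremediated code: the year
    # "00" (meaning 2000) compares as smaller than "99" (meaning 1999),
    # so a perfectly valid card looks long expired.
    return expiry_yy < current_yy

# In 1999 (current_yy = 99), a card expiring in 2000 is wrongly rejected:
print(card_expired(0, 99))   # -> True, though the card is good until 2000
print(card_expired(99, 99))  # -> False, a card expiring this year passes
```

Because the failing comparison ran at millions of merchant terminals rather than inside one data center, the workaround of "just enter a pre-2000 date" was unavailable, and the bug became public.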

Here's an analogy. Imagine a lobster being held on the end of a vertical string which is being lowered into a pot of boiling water. The lobster represents a computing system, the boiling water represents the year 2000, and the string is a timeline from early pre-Y2K dates (lower end of string) to the year 2000 (top of string). Ultimately, the lobster's only defense is to put on a protective coating, which it is capable of doing, but only at a given rate of body coverage. As the lobster is being lowered, it is doing two things: putting on its armor and climbing up the string. The armor represents Y2K compliance, and climbing up the string is the workarounds and quick fixes for pre-2000 failures.

A number of near-sighted ants happen to be hanging around on the lobster's back. They are not particularly worried, because they've noticed that whenever heat approaches, the lobster moves further up the string, away from the heat. They can't see far enough to tell that when January arrives, the lobster will have run out of string. Although they don't realize it, their only hope is for the lobster's armor to be intact by January. And although the lobster's ability to climb the string has been quite helpful, it is only a temporizing measure which may provide little predictive value about the lobster's ability to cover itself in time. This lack of predictive value is analogous to the apparent ability of hopelessly-behind Y2K organizations or countries to be doing quite well in 1999.

Am I sure of the degree of seriousness of all this? Not at all. But I keep going back to those statements by government officials that caught my attention. Sherry Burns of the CIA said last year that most people expect things will continue to work next year the way they always have, but "that will not be the case." The release of the US Senate report last spring was accompanied by a statement to the effect that Y2K would have profound effects, that it could bring one of the largest crises in the history of this country, and that anyone who thinks it will be a bump in the road is "simply misinformed." Admittedly, statements issued almost simultaneously were reassuring.

I am grateful to those government officials who have called it the way they see it despite pressure to sound optimistic. I'm also grateful to their advisors from the IT world.

Maybe those alarming statements were based on old data (though statements reflecting new data are still pretty alarming). Maybe the optimistic turn in the embedded systems outlook changes things so favorably that those earlier statements are truly outdated. Maybe the lobster analogy is off track. I hope so. But my wife and I are not betting our family's well-being on it, we have tried to warn our community, and we are preparing for trouble.

-- Bill Byars (, September 26, 1999


Just a quick note.

Your main point seems to derive from this paragraph:

Last week's Senate report was consistent with previous ones with respect to the level of concern that I believe is reflected. One statement that got my attention was this, from the Executive Summary. "The cost to regain operational capability for any mission-critical failure will range from $20,000 to $3.5 million, with an average of 3 to 15 days necessary to regain lost function." Such a generic statement is difficult to interpret, since there will doubtless be a lot of trivial failures, and since we might expect failures in Italy, for example, to last longer than those in the US.

Realize that statement also comes from the Gartner Group:


The same Gartner Group that states here:

http://

1999 failures are likely to have a greater negative impact on customer-facing services and interruptions than failures occurring throughout 2000...

-- Hoffmeister (, September 26, 1999.

The figures for the monetary cost of the average failure clearly come from that Gartner report. There's a pretty big discrepancy, though, between the predicted duration of the average failure in the Senate report (3 to 15 days) and Gartner's paper (only 10% lasting 3 days or longer, which would put the average under 2 days). Hopefully, the committee is listening to more than one organization, and it looks as though they are. The main reason is that the overall tone of the report seems more alarming than anything I've ever seen from Gartner Group. Another is that I haven't seen much emphasis, if any, from the government on the fact that, according to Gartner Group, a lot of Y2K is already over.

-- Bill Byars (, September 26, 1999.

"Y2K is already over" is a polly fiction created to keep the masses snoozing as long as possible.

-- don't believe it (, September 26, 1999.

I find this concept of most of the failures occurring in 1999 a little odd. Of course many Y2K failures will have already occurred: anything expiring after the first of the year, and so on. However, surely there are real-time applications that don't look forward whatsoever whose failures will be significant and disruptive come January. There might even be some significant systems that look forward that have not been fixed but that won't fail until November and December, when forward dates come into play...

-- Mara Wayne (, September 26, 1999.

I'll have to reread the Senate report; but, I think they were referring to "mission-critical" failures, not "average" failures. It appears they were just using the Gartner Group's definition of a mission-critical failure.

-- Hoffmeister (, September 26, 1999.


I think most of this is a semantic issue. Hoffmeister is trying to distinguish between what I've called first-order and second-order y2k problems.

First-order problems are actual bugs because of software mishandling of dates, one way or another, stemming from an inability to understand 00 as 2000. The only first-order bugs encountered so far are due to various forms of look-aheads in code (fiscal years, planning and budgeting, etc.). Second-order problems are any problems stemming from the effort to avoid first-order problems, which don't involve date handling directly at all.
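A first-order look-ahead bug of the kind described can be sketched in a few lines (the fiscal-year routine is hypothetical, but the symptom is the classic one):

```python
def fiscal_year_end(current_yy):
    # Look-ahead: plan one year past the current two-digit year.
    # In 1999 this yields 100; depending on how the field is stored
    # downstream, it prints as "19100" or silently truncates to 00.
    return current_yy + 1

# The look-ahead fires a year early, which is why these bugs are the
# only first-order ones seen before the rollover itself:
print("19" + str(fiscal_year_end(99)))  # -> "19100", the classic symptom
```

As Flint notes, bugs of this shape are easy to localize: the bad value appears exactly where the date arithmetic happens, so a programmer can find and patch it in one place.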

Examples of second-order problems are: new implementations (replacing old software with SAP, junking old computers for new ones, switching from mainframe to client-server architectures, etc), upgrades, patches, problems with testing, productivity lost due to redirecting efforts to remediation and testing or to code freezes, non-date bugs introduced (and suffered from) while trying to fix date bugs, allocation of valuable resources to test environments (time-machine testing), and so on. There have even been some economy-level disruptions due to buying some things at an accelerated pace this year, leading to very soft sales next year. This includes stockpiling as well as sweeping hardware upgrading across the board. Not to mention the hundreds of billions spent remediating rather than moving forward.

Hoffmeister's basic argument is that when all is said and done, these second-order problems are worse than the first-order date bugs themselves. That actual date bugs tend to be easy to locate (even before they strike, and often stone obvious after), trivial to repair for the most part, and highly localized. Second-order problems run the gamut of everything that could possibly go wrong. First-order bugs are fixable by nerds, while second-order problems are management issues.

And Hoffmeister concludes that MOST of the second-order problems have already been faced and solved, and most of the remainder will be behind us by rollover. Therefore, the worst is already over (or we are right now in the thick of it). There may be some newsworthy glitches here and there after rollover (embedded failures, some royal screwups) but basically the worst is over, and remediation has been a success.

Notice that Hoffmeister has carefully avoided any consideration of what might be called third-order problems -- problems stemming from public FEAR of y2k, and irrational public reactions resulting from that fear, which have the potential to do grave damage even if every single y2k bug were in fact fixed perfectly. This is a real threat, but all indications so far are that the public has adopted a react-to-failure strategy, rather than an anticipatory posture. This is mostly good news (no bank runs) but not entirely (no sensible preparations either).

-- Flint (, September 26, 1999.

well i am very low tech, know very little about these things... read the stuff, try to learn. so what it all boils down to is your gut feeling; does not matter what the government says, does not matter what the IT folks say, what matters is what you say. My dad worked hard all his life, my mom always wanted a Cadillac... well they got one, got all this fancy stuff too. I went to the store with dad and he filled it up with gas and wrote down the mileage, and i said hey, why you doing that dad, and natch he told me he never knew when that stuff would break, and when it did he wanted to know how much gas he had. got some of my dad in me, and i know if you rope a deer you better be prepared to get drug through the woods, because you will get drug. I am ready no matter what anyone says

-- sandy (, September 26, 1999.

The code remediation and testing (and the errors) that are going on now, in 1999 when everyone's system is intact, are completely different from the real time problems we're going to see in 2000. The testing is not being done under the actual conditions of use after rollover, nor is the error correction. Right now we're just playing softball; it doesn't matter how good we are at softball now because next year we'll all be mudwrestling instead.

-- cody (, September 27, 1999.

As always, Hoffmeister does an excellent job of showing why nobody can prove that Y2K is going to be anything other than a bump in the road -- all you have to do is believe self-reported, unverified data, that claims everything is going to be well. And, of course, believe that an absence of Y2K problems in 1999 means that there will be an absence of Y2K problems in 2000.

To which, I reply, bullshit. Self-verified data is hardly even worth considering; you might as well hire a team of cheerleaders. The only thing that a lack of Y2K problems thus far shows is that Y2K -- you know, Year 2000 -- has not yet come. Somewhere, in all the statistics and definitions of mission-critical, this seems to get omitted.

The question of how severe the impact of Y2K will be often is tied to one's personal preparation plans -- i.e., perceived small problems yield a small amount of preparation, envisioned large problems yield a large amount of preparation. Large preparation for small problems is essentially innocuous (e.g., you can always eat your stored food, you can always put your money back in the bank). Small preparation for large problems can be deadly. You pays your money and you takes your chances.

95 days.

-- Jack (jsprat@eld.~net), September 27, 1999.

We must kindly remember that Hoff's, Flint's, etc. work does not end 1/1/2000, in 95 days.

The possibility of bank runs doesn't decrease for at least another year.

-- lisa (, September 27, 1999.

Sorry, Lisa, but I'm outta here by Feb 01, 2000 at the latest.

Y'all can keep up the "skeer", as Nathan Bedford Forrest used to say, as long as you want.

-- Hoffmeister (, September 27, 1999.
