Debate Round 1

greenspun.com : LUSENET : TimeBomb 2000 (Y2000) : One Thread

For anyone interested:

--------------

This post addresses IT systems and Y2k errors.

No one seriously questions the fact that IT systems operate at less than 100% today. Errors in IT systems are a fact of life. And the "system", if you will, has not collapsed to date from IT errors.

From this, it would appear that the "burden of proof" rests on those claiming that Y2k will collapse the system. Past experience shows that this sort of collapse has not happened to date.

It has been postulated, however, that Y2k is a singular event. That Y2k is "systemic", and has the potential to generate massive, simultaneous errors, across IT systems. That past experience is no predictor in this case, since it is a singular event.

So, we have no doubt that global IT systems have a built-in "fault tolerance". The question is, will the Y2k rollover exceed this "fault tolerance"?

While it may be virtually impossible to estimate the fault tolerance level, it is my contention that IT systems in general have, almost without question, in the last 8 months experienced errors orders of magnitude greater than what will be experienced at the rollover to 2000. And that in all likelihood, we have already experienced several "singular events" relating to Y2k errors that are at least of the same order of magnitude as the Y2k rollover. And that little or no effect of these has been felt by the average person. And certainly, no collapse has taken place.

To estimate the actual error rates, the following assumptions and sources will be used:

1) Starting point is the universe of non-compliant systems at the start of 1998. These are the systems at risk for generating Y2k failures. The base unit of measure for a failure is a "function point". For those interested, see Capers Jones' article What Are Function Points? for a detailed definition. But in a nutshell, a function point is an Input, Output, Inquiry, Data File, or Interface in an application. Function points are then derived by weighting each of these.

2) Metrics for error rates, date logic, etc., are taken from Capers Jones' article Probabilities of Year 2000 Damages. Jones is probably the foremost authority and source on software metrics, and is quoted by many, including Ed Yourdon in his analysis.

3) The "singular event" that has the most potential to cause simultaneous errors, and as such the greatest risk of collapse, is the two-week period surrounding Jan 1, 2000. Gartner Group has estimated that 10% of potential Y2k errors will occur during this time frame (source: Slide 2). To add a level of certainty, this analysis will double the Gartner Group estimate, and assume 20% of Y2k errors will occur during this time frame.

4) An assumption is made that 66% of non-compliant systems will be either remediated or replaced, leaving 33% untouched. This assumption I would characterize as somewhat middle of the road. Capers Jones has estimated a "Best Case" scenario of 85% in the US, 75% in Europe, and 65% elsewhere (source). Other surveys, such as an often quoted Cap Gemini survey of US firms (source), cite 78% expecting to have more than 76% of their code fully fixed and tested.

5) An assumption that, of the non-compliant systems that are addressed, 50% were replaced and 50% remediated. Based on personal experience, the percentage replaced would be much higher, but I recognize the bias within my SAP experience. A quick review of the Fortune 500 level SEC statements will find, for example, SAP mentioned in a large number of cases. And, as the business size decreases, the likelihood of the business using canned applications, as opposed to custom-developed systems, increases. As well, I think the failure of the predicted COBOL programmer shortage to materialize is also an offshoot of the fact that so many systems were replaced.

6) Finally, that system implementations typically occur at month-end intervals. This is particularly true of financial systems, but holds in general as well. This again is based on my experience with system implementations.

With the above sources and assumptions, I will first make an estimate of the percentage of function points that will fail during the Y2k rollover.

Capers Jones estimates that, on average, 15% of the function points within an application contain Year 2000 date errors. Our assumption is that 33% of these are untouched, and remain within the code. This leaves 5% of the function points with untouched Y2k errors.

Jones also estimates that in general, IT is only 95% efficient at finding and fixing Y2k errors. 10% of the function points are either fixed or replaced, and our assumption is that 50% of those are fixed. So 5% of the function points have gone through remediation. With the 95% efficiency, that leaves .25% of the function points that have been remediated still containing Y2k errors. Adding these together, 5.25% of function points will still contain Y2k errors.

But these errors are spread out. The doubling of Gartner Group's estimate gave 20% of errors occurring during the rollover. Applying this gives 1.05% of function points failing during the rollover.

So, our baseline is 1.05% of function points failing on rollover.
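The baseline arithmetic above can be sketched out in a few lines. All of the figures are the post's assumptions (Jones' 15% error rate and 95% repair efficiency, the 33% untouched share, the doubled Gartner rollover share), not measured data:

```python
# Baseline: fraction of function points failing at the Y2k rollover,
# under the assumptions stated in the post (a sketch, not measurements).
ERROR_RATE = 0.15        # Jones: share of function points with Y2k date errors
UNTOUCHED = 1 / 3        # assumption 4: share of non-compliant systems untouched
FIXED_SHARE = 0.5        # assumption 5: half of addressed systems remediated
EFFICIENCY = 0.95        # Jones: remediation finds and fixes 95% of errors
ROLLOVER_SHARE = 0.20    # doubled Gartner estimate for the rollover window

untouched_errors = ERROR_RATE * UNTOUCHED                 # ~5% of function points
remediated = ERROR_RATE * (1 - UNTOUCHED) * FIXED_SHARE   # ~5% went through remediation
residual = remediated * (1 - EFFICIENCY)                  # ~0.25% missed by repairs
total_errors = untouched_errors + residual                # ~5.25% still contain errors
baseline = total_errors * ROLLOVER_SHARE                  # ~1.05% fail at rollover
```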

Is this a singular event, with an unprecedented number of errors? I truly do not think so.

Our universe of errors begins at the start of 1998. We have 24 month-end periods until the year 2000, where systems implementations occur.

Start by distributing the unremediated or missed Y2k errors. Gartner Group estimates 25% of the errors will occur in 1999. So 80% of the 5.25% of function points with Y2k errors, or 4.2%, will not occur at rollover. 25% of these will occur during 1999, or 1.05%. Spreading these out over the 12 periods in 1999 gives .0875% of function points failing due to Y2k errors at each period.

Next, we need to deal with errors introduced through remediation. Capers Jones calls these "Bad Fixes". In essence, any time code is modified, errors can and will be introduced.

Jones estimates that in fixing function points, 10% will introduce new errors, 70% of which will be caught. So 5% of the function points are being fixed. 10%, or .5%, will contain new errors, of which 70% will be caught. So .15% of the function points will contain new errors. These are not spread out according to Y2k error distributions, but can occur literally anywhere. The vast majority of remediated applications are being reimplemented in 1999. So again, spreading this out over the 12 periods in 1999 leaves .0125% of function points failing due to new errors introduced due to remediation at each period.

Adding this to the .0875% of function points failing because of unremediated Y2k errors gives .1% of function points failing at each period in 1999.
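Continuing the same sketch for the 1999 month-ends, again using only the percentages assumed above:

```python
# Monthly 1999 error rate from leftover Y2k errors plus "Bad Fixes",
# using the post's assumed percentages (a sketch, not measurements).
total_errors = 0.0525                    # function points still containing Y2k errors
outside_rollover = total_errors * 0.80   # 80% of errors fall outside the rollover window
in_1999 = outside_rollover * 0.25        # Gartner: 25% of Y2k errors surface in 1999
y2k_per_period = in_1999 / 12            # ~0.0875% per month-end

remediated = 0.05                            # share of function points remediated
bad_fixes = remediated * 0.10 * (1 - 0.70)   # 10% inject errors, 70% of those caught
bad_per_period = bad_fixes / 12              # ~0.0125% per month-end

monthly = y2k_per_period + bad_per_period    # ~0.1% of function points per month-end
```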

Finally, we need to consider errors due to system replacement. Implementations introduce errors at a much higher rate than normal remediation. Jones estimates that 5 errors per function point are introduced through new development. To be fair, not all errors are equal, just as many types of Y2k errors can be lived with. But to be ultra-conservative, let's assume only 15% of delivered errors in software implementations are comparable to Y2k errors. That leaves .75 comparable errors per function point, or effectively 75% of the function points in replacement systems containing errors on par with Y2k errors.

In our universe of non-compliant systems, only 33% were left untouched, leaving 66%. 50% of these were replaced, as opposed to remediated, leaving 33% of function points in non-compliant applications being replaced. From above, 75% of these will contain errors on par with Y2k errors, or 24.75% of function points.

These errors are spread out over the 24 periods in 1998 and 1999. I'll assume a uniform distribution, though again it probably should be weighted more heavily to this year. But uniform distribution leaves 1.03% of function points generating errors on par with Y2k errors at each of the 24 periods in 1998 and 1999.

Adding the previous error rate of .1% gives 1.13% of function points generating errors during each of the 12 month-ends in 1999. This rate compares with the estimated baseline of 1.05% of function points generating errors at rollover.
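Putting the replacement figures together with the earlier monthly rate, still as a sketch of the post's assumed percentages:

```python
# Replacement-induced errors, spread over the 24 month-ends in 1998-99,
# added to the ~0.1% monthly rate derived earlier (assumptions, not data).
replaced_share = 0.33            # 66% addressed, half of those replaced outright
errors_per_fp = 5 * 0.15         # 5 injected errors per FP, 15% Y2k-comparable
replacement_errors = replaced_share * errors_per_fp   # ~24.75% of function points
replacement_per_period = replacement_errors / 24      # ~1.03% per month-end

monthly_total = replacement_per_period + 0.001        # plus the ~0.1% from above
rollover_baseline = 0.0105                            # the ~1.05% rollover baseline
# monthly_total (~1.13%) lands slightly above the rollover baseline (~1.05%)
```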

Now, the point is not some false level of precision, but to estimate a general error level. And the metrics and estimates show that yes, in all likelihood, each of the 8 month-end periods so far in 1999 has generated errors of the same order of magnitude as can be expected at the rollover.

Thus far, IT departments have dealt with the errors. No collapse has occurred. And I don't expect that to change at the rollover to the year 2000.



-- Hoffmeister (hoff_meister@my-deja.com), August 11, 1999

Answers

I think the basic error in this analysis is very simple. There is no reason whatsoever to believe the claims of organizations as to their "expected" rate of remediation of their systems. The reason I say this is that, both by personal experience as well as by studies of the likelihood of projects being completed on time, any claims of future completion of projects must be discounted very heavily. Remember, many organizations were planning to be finished by December 31st, 1998, with a full year for testing, and yet almost no one has announced that they made that deadline. So the studies that you cite, indicating the numbers of companies that "expect" to be finished with a specific percentage of their work, presumably by December 31st, 1999, simply point out that these organizations are not yet finished. Why should we believe them this time?

-- Steve Heller (stheller@koyote.com), August 11, 1999.

Actually, the analysis is based in no way on any single company completing their Y2k project. It is based on a percentage of function points that have been addressed, either through replacement or remediation.

It is undeniable that some percentage of function points have been addressed. For example, merely the 20,000+ installations of SAP illustrate that fact. And, unless you are contending that corporations are out and out lying about the status of Y2k remediation, and that they are actually spending money on nothing, it is also undeniable that remediation is actually taking place.

The argument seems then to be about the percentage. I made an estimate, using some fairly pessimistic sources. I could have, for example, cited the recently mentioned IDG survey of 1,000 companies here, where less than 2% expect to miss the Jan 1 deadline. I did not, to allow a fairly large margin of error.

But, since you bring up studies on projects being completed on time, let's take a look. Ed Yourdon references Capers Jones' statistics on projects being late in this article. I'll completely disregard the difference between software development projects and maintenance projects, in order to provide another margin of error. (This is a distinction, by the way, that does not escape Jones.)

The statistics show that, on average, 62.47% of projects are either delivered early or on time. 13.71% are late, and 23.82% are cancelled.

Now, it's doubtful many Y2k projects are merely cancelled. So let's lump those in as late, and use 37.53% as late.

Off the bat, the 62.47% is very close to the 66% I used, and in fact would alter the analysis very little, and would not change the conclusion at all. But let's look further.

You mention the 12/31/98 deadline. Again, I have doubts about the meaning here; it seems to me that most who made this claim were speaking only of code renovation, as the "year for testing" implies they were not expecting to be fully complete then.

But taking it at face value, and using the study of software completions, it would appear that some large percentage would have, in fact, completed on time. The study suggests 62.47%. The study also states that, on average, projects that are late are late by 7.65 months. That means that, using the study, half of those late would be completed by about now, or another 18.765%, giving a total of 81.235%. This percentage could then be extended to account for those completing later in the year.
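The completion arithmetic above, sketched out. The percentages come from the cited Jones statistics; the "half of late projects done by now" step is the post's own assumption:

```python
# Share of Y2k projects plausibly complete by mid-1999, using the
# cited completion statistics (the halving step is an assumption).
on_time = 62.47              # delivered early or on time
late = 13.71 + 23.82         # late plus cancelled, lumped together as "late"
avg_months_late = 7.65       # average lateness of late projects

half_late_done = late / 2    # assume half the late projects have finished by now
completed = on_time + half_late_done   # ~81.235% complete at this point
```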

So, even using the studies you reference, I could make a very strong case for a much higher percentage, over 80%, than the 66% I used.

-- Hoffmeister (hoff_meister@my-deja.com), August 12, 1999.


I think you've missed a very important change in Cap Gemini's estimates of the Y2K situation. It's true that according to your citation, from May 17th:

Seventy-four percent of major corporations had expected to have more than half of their code "completely tested and compliant" by January 1 of this year, according to the last quarterly survey issued in December. But the current tracking poll reveals that only 55 percent actually reached this goal.

But according to their most recent news, as of August 10, 1999:

Fewer than half of America's largest companies (48 percent) expect all of their critical systems to be prepared for the Year 2000, according to a new survey by Cap Gemini America, Inc., an information technology and management consulting leader. One in five companies (18 percent) expect that 75 percent or less of their critical systems will be "completely tested and compliant" by December 31, 1999. Thirty-six percent expect between 76 and 99 percent of their applications to be ready for Year 2000, and two percent anticipate completing work on 50 percent or less of their systems. [emphasis added]

Please note that although the earlier survey said that 55% were "done" as of January 1st, the later survey says that only 48% "expect" to be done by December 31st, 1999. Apparently some of them who thought they were "done" at the beginning of the year now aren't even expecting to be done by the end!

But even worse is the little word "critical" that somehow slipped into the survey results between the earlier and later surveys. Far from being "done" at the end of last year, or even this year, it's only their critical systems that are even being talked about anymore. What about their "noncritical" systems? What percentage of their systems are "critical"? How do they decide which are critical? We don't know the answer to any of these questions, so their reported statistics are meaningless. However, it is clear that anyone who isn't going to be finished with their "critical" systems has given up on the non-critical systems. In other words, the situation must be much worse than it appeared to be when they were talking about completion dates for all of their systems.

-- Steve Heller (stheller@koyote.com), August 12, 1999.


As for your first point: Please note that although the earlier survey said that 55% were "done" as of January 1st, the later survey says that only 48% "expect" to be done by December 31st, 1999. Apparently some of them who thought they were "done" at the beginning of the year now aren't even expecting to be done by the end!

Steve, please reread the first article:

Seventy-four percent of major corporations had expected to have more than half of their code "completely tested and compliant" by January 1 of this year, according to the last quarterly survey issued in December. But the current tracking poll reveals that only 55 percent actually reached this goal.

The first is talking of a percentage expecting to have "more than half" their code done; the second is talking of those expecting "all" their code done.

This is a tracking poll, which began, I believe, in 1995. As such, it is extremely doubtful that they have "changed" the questions. Cap Gemini has included the term "critical" in the past, as well as omitted it.

In any case, you seem to be focusing far too much on this one survey. I picked it because it was the most pessimistic I found, and it also backed up the 66% I used. Capers Jones' direct estimates also make this a conservative estimate. And, as I demonstrated in the previous post, applying the history of IT project completions also makes this 66% a conservative estimate.

-- Hoffmeister (hoff_meister@my-deja.com), August 13, 1999.


Steve, please reread the first article: Seventy-four percent of major corporations had expected to have more than half of their code "completely tested and compliant" by January 1 of this year, according to the last quarterly survey issued in December. But the current tracking poll reveals that only 55 percent actually reached this goal. The first is talking of a percentage expecting to have "more than half" their code done; the second is talking of those expecting "all" their code done.

You're correct; I misread that part of the article. However, that doesn't change the basic point, which is that their estimate of when they would be a certain percentage completed with their project was way off; 74 percent expected to be halfway done by a particular date, and only 55 percent claimed that they were halfway done by that date. This is especially telling when you consider that it's very difficult to determine when a software project is half done: many software projects are "almost done" for most of their entire lifespan.

This is a tracking poll, which began, I believe, in 1995. As such, it is extremely doubtful that they have "changed" the questions. Cap Gemini has included the term "critical" in the past, as well as omitted it.

I don't think this is something that you should guess about. Do you know for a fact whether they have changed the question? If not, why can't they even report their own survey questions and answers correctly? If so, my point stands.

In any case, you seem to be focusing far too much on this one survey. I picked it because it was the most pessimistic I found, and it also backed up the 66% I used. Capers Jones' direct estimates also make this a conservative estimate. And, as I demonstrated in the previous post, applying the history of IT project completions also makes this 66% a conservative estimate.

The reason I'm focusing on this survey is that it is your source. Clearly, if their reporting of their latest question and the latest results is correct, whether or not they previously had used the term "critical", there is no reason whatsoever for us to believe that these companies are going to finish their remediation in time. The definition of "critical" is subjective, which renders the statistics in question invalid. Therefore, the only prudent course of action is to prepare for the consequences of failed remediation by many large companies.

-- Steve Heller (stheller@koyote.com), August 13, 1999.


I don't think this is something that you should guess about. Do you know for a fact whether they have changed the question? If not, why can't they even report their own survey questions and answers correctly? If so, my point stands.

Without having hardcopy of the actual survey, I cannot state unequivocally that the question has not changed.

However, you base your point on the fact that the term "critical" appeared in the latest release. Going back through previous releases by Cap Gemini:

Dec, 1997: Critical NOT used.
April, 1998: Critical NOT used.
July, 1998: Critical NOT used.
August, 1998: Critical IS used.
October, 1998: Critical NOT used.
December, 1998: Critical IS used.
May, 1999: Critical NOT used.
August, 1999: Critical IS used.

So, unless you believe they change the question every quarter or so, there is no doubt that the use of the term "critical" in the latest release does not indicate they have changed the question.

The reason I'm focusing on this survey is that it is your source. Clearly, if their reporting of their latest question and the latest results is correct, whether or not they previously had used the term "critical", there is no reason whatsoever for us to believe that these companies are going to finish their remediation in time. The definition of "critical" is subjective, which renders the statistics in question invalid. Therefore, the only prudent course of action is to prepare for the consequences of failed remediation by many large companies.

As I said previously, the analysis assumes in no way that any organization fully completes their remediation on time.

The analysis merely makes the assumption that, in total, 66% of the function points with Year 2000 errors are either replaced or remediated.

No single survey is the basis for that assumption. This survey provides some support. Previously referenced estimates by Capers Jones do as well. Applying the history of software project completions, which you suggested, makes this a conservative estimate. The referenced survey by IDC makes this a very conservative estimate as well.

The analysis is an attempt to determine just what the consequences of partial completion of remediation will be. The results show that the consequences, in all likelihood, will be no worse than what we are currently experiencing.

-- Hoffmeister (hoff_meister@my-deja.com), August 13, 1999.


The analysis merely makes the assumption that, in total, 66% of the function points with Year 2000 errors are either replaced or remediated.

I guess I haven't been clear about my main problem with your analysis. The question is what that 66% is a percentage of. If it is of "critical systems", however defined, as it appears to be, what percentage of total systems does that represent? Without knowing that, I don't understand how anyone can draw conclusions as to the percentage of function points that are or will be remediated. Do you know the answer to this question?

-- Steve Heller (stheller@koyote.com), August 13, 1999.


No, the analysis is not based only on "critical" systems. The analysis is of active applications with Y2k errors.

The only reference to "critical" systems is from the addition of the term in the Cap Gemini survey. And that still seems open to conjecture. From this article Some Fortune 500 Y2K Late:

Fewer than half of US Fortune 500 companies expect all of their computer systems to be ready for Year 2000-related failures, in part because they are devoting much of their attention to ensuring that their top "mission-critical" systems are Y2K-compliant, a new survey has found.

So, even this survey seems open to interpretation.

The analysis is meant to estimate error rates. As such, it deals with active applications. Non-active or dormant applications do not have the potential to generate any appreciable errors for Y2k.

As stated, the 66% represents the percentage of active applications with Y2k errors, that are either replaced or put through remediation.

33% then represents the percentage of active applications with Y2k errors that go completely untouched. This would include active, non-critical systems, as well as critical systems not remediated.

Error rates are based on function points. An underlying assumption then is made that function points are evenly distributed between systems, which again is conservative. Though not an exact correlation, larger systems tend to be more critical, and thus are more likely to fall in the 66%.

As for what percentage of all systems are "critical", this without a doubt varies from site to site. Ed Yourdon in this article uses between 50% and 66% of systems being "critical".

But a large portion of those systems deemed "non-critical" are in fact "dormant", or non-active systems. Yourdon makes this point in the referenced article. In this Capers Jones paper, The Global Economic Impact of the Year 2000 Software Problem, a study of IBM data centers is cited that found between 40% and 70% of the applications could be classified as "dormant", i.e. not run in over a year. While IBM may not be completely representative, a large portion of systems labelled "non-critical" are in fact those found to be dormant.

-- Hoffmeister (hoff_meister@my-deja.com), August 13, 1999.


No, the analysis is not based only on "critical" systems. The analysis is of active applications with Y2k errors.

The only reference to "critical" systems is from the addition of the term in the Cap Gemini survey. And that still seems open to conjecture.

So now you are claiming that it is merely "conjecture" whether the percentage completion figures in the Cap Gemini studies refer to only "critical" systems or to all systems, even though they use the modifier "critical" in half of their reports? Clearly, if you can't even agree on the significance of the figures from the source that you yourself cite, there's no point in continuing the discussion any further.

-- Steve Heller (stheller@koyote.com), August 13, 1999.


Steve, we have spent far too much time on this one survey.

You raised a valid question regarding the survey. Without having the actual survey and definitions, no definitive answer can be made.

To further the discussion, completely disregard the survey, as being inconclusive. I have supplied multiple other sources as backup for the figure this survey referenced, including the sources you yourself suggested.

Is this the only problem you have with my post?

-- Hoffmeister (hoff_meister@my-deja.com), August 13, 1999.




-- Hoffmeister (hoff_meister@my-deja.com), August 17, 1999

Answers

Continued....

---------

Steve, we have spent far too much time on this one survey. You raised a valid question regarding the survey. Without having the actual survey and definitions, no definitive answer can be made. To further the discussion, completely disregard the survey, as being inconclusive.

Okay, fine with me.

I have supplied multiple other sources as backup for the figure this survey referenced, including the sources you yourself suggested.

Could you please give me those sources again? I've re-read the entire thread and haven't been able to find any reference to surveys containing any actual information about percentage of completed remediation, other than the Cap Gemini survey that we have already beaten into the ground. All I've seen is a couple of slides and articles that mention in passing the number of companies that do or don't "expect" to make the deadline, with no actual information about how far along they are now. Do you have any references to any surveys that actually ask the companies what they have completed, other than the Cap Gemini surveys?

Is this the only problem you have with my post?

The problem I have with your post is that your estimates of the number of companies that will finish any particular percentage of the remediation are so far unsupported by evidence, as far as I can tell. I'll be happy to examine the evidence, as soon as you provide it.

-- Steve Heller (stheller@koyote.com), August 13, 1999.


Well, since you suggested it, let's look at using the history of IT projects.

These metrics support an estimate of 80% or greater. 66% then leaves a fairly large margin of error.

-- Hoffmeister (hoff_meister@my-deja.com), August 13, 1999.


Well, since you suggested it, let's look at using the history of IT projects. These metrics support an estimate of 80% or greater. 66% then leaves a fairly large margin of error.

That's very nice, but as we've already seen, the question is 80 percent of what? Have the companies actually been doing the work required to get their systems fixed? Is there any way we can find out this information? Apparently, there is no way to find this out, or at least neither you nor I can come up with one.

The problem is that trying to use statistics like "80 percent completion rates" assumes the very thing that we're trying to find out here (or that I'm trying to find out, at least): have they been working on all their systems, or just their critical systems, or just their most critical systems, or just their "top" mission critical systems, or what? You can get a different answer to that question from every survey.

Unlike "normal" projects, which can be laid out to a timetable selected by the company, Y2K projects have a fixed, immovable deadline. If companies didn't start all of their projects on time, and there seems to be no way we can tell whether they did, they almost certainly won't finish on time.

I guess the only conclusion we can draw from this discussion is that there is no way to find out any useful information about how much of the remediation companies are going to finish before the ball drops. Under such circumstances, the only prudent course is to take precautions against massive failures to remediate.

-- Steve Heller (stheller@koyote.com), August 13, 1999.


First, your comment regarding "normal" projects flies in the face of my experience. Virtually every project I've been involved with, for good or bad, has been worked backwards from a given target date.

Second, the distinction between "critical" and "non-critical" systems matters very little to the analysis. The vast bulk of "non-critical" systems are, in fact, dormant or inactive. Capers Jones uses a general estimate of 50% of a software portfolio as being inactive (source), which corresponds with the previous study mentioned by IBM.

Unless you have other information, every Y2k project at the least addresses critical systems. Using the past metrics on software project completion yielded somewhere above 80% of the projects being completed. Note this again is very conservative; a project not completely finished does not mean 0% of its applications were addressed. Dropping this down to 66% of total active applications takes into account the non-critical applications.

Further support is found from Capers Jones, who uses an average of 75% in his Best Case scenario for overall errors addressed. Again providing support that 66% is at least in the ballpark.

Note this is not an exact estimate, which you seem to require. Again, I've provided support for the number I used. While questioning it, you have provided no evidence the number is any less. But use even 50%, if you wish. The result changes very little, and the conclusion changes not at all; that the current error rate being experienced right now is of the same magnitude as what can be expected at the rollover.
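The sensitivity claim above can be illustrated by parametrizing the whole model on the untouched share. This is only a sketch, reusing the assumed percentages from the opening post:

```python
def monthly_vs_rollover(untouched):
    """Return (monthly 1999 error rate, rollover error rate) for a given
    untouched share, using the opening post's assumed percentages."""
    # Y2k errors remaining: untouched share plus the 5% missed by remediation
    errors = 0.15 * untouched + 0.15 * (1 - untouched) * 0.5 * (1 - 0.95)
    rollover = errors * 0.20                      # doubled Gartner rollover share
    monthly_y2k = errors * 0.80 * 0.25 / 12       # 1999 share, spread over 12 periods
    bad_fix = 0.15 * (1 - untouched) * 0.5 * 0.10 * 0.30 / 12
    replacement = (1 - untouched) * 0.5 * (5 * 0.15) / 24
    return monthly_y2k + bad_fix + replacement, rollover

# At 33% untouched: monthly ~1.1% vs ~1.05% at rollover.
# At 50% untouched: monthly ~0.9% vs ~1.5% -- still the same order of magnitude.
```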

-- Hoffmeister (hoff_meister@my-deja.com), August 14, 1999.


First, your comment regarding "normal" projects flies in the face of my experience. Virtually every project I've been involved with, for good or bad, has been worked backwards from a given target date.

While I've been involved in some projects like that, all of which were vastly late if not canceled entirely, the vast majority of the ones I've been involved with have actually been estimated in some reasonable manner. As this indicates, any project "estimated" like that has approximately 100 percent chance of being late, as the "estimate" has no relation whatsoever to the amount of time it actually takes to do the project. I believe any reputable source in the information-processing field would agree with me.

Second, the distinction between "critical" and "non-critical" systems matters very little to the analysis. The vast bulk of "non-critical" systems are, in fact, dormant or inactive. Capers Jones uses a general estimate of 50% of a software portfolio as being inactive (source), which corresponds with the previous study mentioned by IBM.

Sorry, your link points to the front page of the large site, not to any specific information about dormant systems. I'll be happy to look at your evidence, if you provide a valid link.

Unless you have other information, every Y2k project at the least addresses critical systems. Using the past metrics on software project completion yielded somewhere above 80% of the projects being completed. Note this again is very conservative; a project not completely finished does not mean 0% of its applications were addressed. Dropping this down to 66% of total active applications takes into account the non-critical applications.

I have no doubt that they are addressing critical systems. However, I still haven't seen any evidence whatsoever that critical systems make up the vast majority or even any majority of all systems that need to be remediated. Let's take a look at what Ed Yourdon has to say about this issue as regards the federal government's systems:

But even the most passionate optimists find it difficult to argue that the non-mission-critical systems will be repaired in time; after all, the Federal government will barely be able to finish repairing its 6,399 mission-critical systems in time, and has had virtually nothing to say about the fate of another 66,000 non-critical systems.

While I have no specific information about the prevalence of noncritical systems in industry, I have no reason to believe that it is vastly different from this proportion, which would mean that noncritical systems are over 90 percent of the total. If you have some evidence to the contrary, please present it.

Further support is found from Capers Jones, who uses an average of 75% in his Best Case scenario for overall errors addressed. Again providing support that 66% is at least in the ballpark.

Where does he get his information? Did he do a survey? Do you have a link to anything more than a slide or sentence in an article?

Note this is not an exact estimate, which you seem to require. Again, I've provided support for the number I used. While questioning it, you have provided no evidence the number is any lower. But use even 50%, if you wish. The result changes very little, and the conclusion changes not at all: the current error rate being experienced right now is of the same magnitude as what can be expected at the rollover.

Okay, I've provided some evidence that the proportion of critical systems is less than 10 percent. Does that change your conclusion any?

-- Steve Heller (stheller@koyote.com), August 14, 1999.


Sorry, your link points to the front page of the large site, not to any specific information about dormant systems. I'll be happy to look at your evidence, if you provide a valid link.

Sorry. The link should be here.

As well, this essay references the IBM study that found between 40% and 70% of a software portfolio inactive or dormant.

Where does he get his information? Did he do a survey? Do you have a link to anything more than a slide or sentence in an article?

Unfortunately, Steve, the internet does not provide full access to all information. But some more info:

This slide gives a fairly detailed breakdown, by industry, of the support for his 85% figure. Again, it would not appear he is simply pulling a figure out of the air.

Capers Jones here also makes the statement:

Given the strong probability that somewhere between 10% and perhaps 35% of potential year 2000 software problems will still be present at the dawn of the next century, it is now time to begin to start contingency planning for minimizing the damages which unrepaired year 2000 problems might cause.

So his high-end estimate of unrepaired problems matches my assumption of 66% addressed.
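That correspondence can be checked directly; the following sketch simply restates the figures quoted above in code:

```python
# Jones: 10% to 35% of potential Y2k problems still present at rollover,
# which implies 65% to 90% of problems addressed beforehand.
unrepaired_low, unrepaired_high = 0.10, 0.35

addressed_worst = 1 - unrepaired_high   # worst case: 65% addressed
addressed_best = 1 - unrepaired_low     # best case: 90% addressed

print(round(addressed_worst, 2), round(addressed_best, 2))
```

So the 66%-addressed assumption sits at the pessimistic end of Jones' own range.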

I have no doubt that they are addressing critical systems. However, I still haven't seen any evidence whatsoever that critical systems make up the vast majority or even any majority of all systems that need to be remediated. Let's take a look at what Ed Yourdon has to say about this issue as regards the federal government's systems:...

While I have no specific information about the prevalence of noncritical systems in industry, I have no reason to believe that it is vastly different from this proportion, which would mean that noncritical systems are over 90 percent of the total. If you have some evidence to the contrary, please present it.

Okay, I've provided some evidence that the proportion of critical systems is less than 10 percent. Does that change your conclusion any?

Seriously? You're actually making the statement that you have no reason to believe the Federal Government is less efficient than business?

But OK, let's take a look even here.

What you have provided is evidence that 10% of the total Federal systems are being addressed as "critical". Not of active systems that require remediation or replacement, which is what the analysis is based upon.

The previous links support a range of 40-70% of applications being "dormant" or "inactive". My guess would be that the Fed is at the high end here, but let's just use Jones' general estimate of 50%.

The latest OMB report lists 6,399 systems identified as "mission-critical". Using your 90%, this approximates 64,000 total systems. Applying Jones' 50% yields a total of 32,000 "active" systems.

The analysis starts with the universe of applications non-compliant at the beginning of 1998; some percentage of applications are already compliant. From the OMB report at the beginning of 1998, available here, 40% of the mission-critical systems are either compliant or being retired, leaving 60% of systems requiring remediation or replacement.

Applying this percentage to the 32,000 "active" applications leaves 19,200 active, non-compliant systems.

So the percentage of active, non-compliant systems being addressed by the Federal Government as "mission-critical" would be approximately 33%.

Note also that the Fed is not only addressing "mission-critical" systems. A quick review of the latest OMB report here lists a number of agencies complete with "non-critical" systems as well.

So, even for the Federal Government, support for using at least a 50% number for active, non-compliant systems addressed can be found.
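The chain of figures above can be restated as a short calculation (a sketch only; each rate is the assumption argued for in the preceding paragraphs, not a measured value):

```python
# Federal-systems estimate, using the figures cited in this post.
mission_critical = 6399             # OMB count of mission-critical systems
total = mission_critical / 0.10     # assume 90% of systems are noncritical
active = total * 0.50               # Jones' general 50% "dormant" estimate
noncompliant = active * 0.60        # OMB: 40% already compliant or retiring

print(round(total), round(active), round(noncompliant))
print(f"{mission_critical / noncompliant:.0%} addressed as mission-critical")
```

Rounding the intermediate figures gives the ~64,000, 32,000, and 19,200 cited above, and a mission-critical share of roughly one third.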

-- Hoffmeister (hoff_meister@my-deja.com), August 14, 1999.


Okay, let's assume for the moment that your estimate of 66% of "systems addressed" is correct. Let's even assume that most of those systems will be remediated more or less successfully, even though this is a vastly different statement. You've made an estimate of the proportion of errors that will be experienced this year rather than next, and concluded that we are already seeing as many or more errors this year as we will early next year. Since the current error rate is being handled by IT, you conclude that next year will be more of the same.

But you're still missing an important point: the types of errors that are experienced this year vs. next. Since it is not yet 2000, the types of errors that would have occurred so far are lookahead errors, not real-time information system errors. As has been discussed at length in c.s.y2k and other venues, the so-called "Jo Anne effect" errors, which are of this sort, are relatively easy to handle. This is because they do not impact the day-to-day functioning of the organization. Of course, it is important to be able to balance the books of the organization, but failure to do that or difficulty in doing that is not a show-stopper in most cases. That is why I, for one, have not predicted massive publicly visible IT problems this year.

Next year, we will see a new class of errors: inability to properly process live, real-time or near-real-time information with post-1999 dates. This is the sort of error that can bring an organization to a screeching halt. Even if the actual rate of errors is the same as this year, the seriousness of this type of error will be much greater than that of the lookahead errors. Therefore, I feel safe in predicting tremendous IT systems problems next year, always assuming that the lights are on and the other infrastructure pieces are working.

-- Steve Heller (stheller@koyote.com), August 14, 1999.


Okay, let's assume for the moment that your estimate of 66% of "systems addressed" is correct. Let's even assume that most of those systems will be remediated more or less successfully, even though this is a vastly different statement. You've made an estimate of the proportion of errors that will be experienced this year rather than next, and concluded that we are already seeing as many or more errors this year as we will early next year. Since the current error rate is being handled by IT, you conclude that next year will be more of the same.

OK. But realize the analysis did account for estimated rates of missed errors in remediated systems, and for bad fixes.

But you're still missing an important point: the types of errors that are experienced this year vs. next. Since it is not yet 2000, the types of errors that would have occurred so far are lookahead errors, not real-time information system errors. As has been discussed at length in c.s.y2k and other venues, the so-called "Jo Anne effect" errors, which are of this sort, are relatively easy to handle. This is because they do not impact the day-to-day functioning of the organization. Of course, it is important to be able to balance the books of the organization, but failure to do that or difficulty in doing that is not a show-stopper in most cases. That is why I, for one, have not predicted massive publicly visible IT problems this year.

No, Steve. A very small percentage of errors expected this year were of the "look-ahead" or "JAE" type, less than 10%. I agree, these errors have been vastly overblown.

The bulk of the errors I'm considering are the errors due to system implementations, which account for virtually every instance of system problems identified as "Y2k-related" to date.

Next year, we will see a new class of errors: inability to properly process live, real-time or near-real-time information with post-1999 dates. This is the sort of error that can bring an organization to a screeching halt. Even if the actual rate of errors is the same as this year, the seriousness of this type of error will be much greater than that of the lookahead errors. Therefore, I feel safe in predicting tremendous IT systems problems next year, always assuming that the lights are on and the other infrastructure pieces are working.

While the seriousness of rollover errors may be much greater than that of look-ahead problems, the seriousness of implementation errors is at the very least on par with that of possible rollover errors, and in fact probably exceeds it.

Even so, I have heavily discounted implementation errors, by 85%, to provide an even greater margin of error. The analysis merely assumes that only 15% of the errors due to implementations are on par with all Y2k rollover errors, an assumption I think of as extremely conservative.

-- Hoffmeister (hoff_meister@my-deja.com), August 14, 1999.


While the seriousness of rollover errors may be much greater than that of look-ahead problems, the seriousness of implementation errors is at the very least on par with that of possible rollover errors, and in fact probably exceeds it.

Yes, perhaps, but the implementation errors that you are referring to are different from rollover errors in one important way: rollover errors will not occur until next year. Therefore, however many of them exist, they will all show up next year. This means that attempts to determine what percentage of problems have already been seen or will be seen before the end of the year cannot succeed unless and until we know how many rollover errors there are. This is impossible to determine, and therefore any attempt to calculate the percentage of problems that have already been seen is doomed to failure.

-- Steve Heller (stheller@koyote.com), August 14, 1999.


Now I want to revisit another point that I made to which you have not responded:

First, your comment regarding "normal" projects flies in the face of my experience. Virtually every project I've been involved with, for good or bad, has been worked backwards from a given target date.

While I've been involved in some projects like that, all of which were vastly late if not canceled entirely, the vast majority of the ones I've been involved with have actually been estimated in some reasonable manner. As this track record indicates, any project "estimated" like that has approximately a 100 percent chance of being late, as the "estimate" has no relation whatsoever to the amount of time it actually takes to do the project. I believe any reputable source in the information-processing field would agree with me.

Let me add a little bit more to my reply to your comments: are you saying that no matter what date is arbitrarily assigned as the ending date of a project, the likelihood of the project's successful completion is the same? For example, let's suppose that we have the project to rewrite all of the IRS's information systems, and the deadline is nine months from now. Would such a project have the same chance of completion as if the ending date were nine years later? Or, for that matter, if the ending date doesn't matter, why not make it tomorrow?

Of course, the point I'm making is that, without any information about the means of estimating how long a project is supposed to take, there is no way to know whether or not it is likely to be done on time, or how late it is likely to be. I've tried to find references on the Internet for studies on the effects of arbitrarily assigned ending dates on the likelihood of successful project completion within those dates. So far, I haven't had any luck. However, common sense and experience indicate that projects with arbitrarily assigned ending dates will be finished on time only on the rarest occasions, if ever. The very notion of estimation of how long a project will take implies that the ending date cannot be determined arbitrarily, but must be calculated according to estimates of how long each individual section of the project will take. Any other means of "estimation" is merely political in nature, and has nothing to do with reality.

The naive answer to this, of course, is that if you have to get the project done in a fixed amount of time, you just add manpower to make the "man-months" calculation come out properly. However, equally of course, this is a well-known fallacy. I'm sure you're aware of the book The Mythical Man-Month, by Frederick Brooks, which explodes this fallacy once and for all. One of his most relevant comments is (from memory, so it may not be word for word): More software projects have gone astray from lack of calendar time than for any other cause, gross incompetence included.

-- Steve Heller (stheller@koyote.com), August 14, 1999.


Yes, perhaps, but the implementation errors that you are referring to are different from rollover errors in one important way: rollover errors will not occur until next year. Therefore, however many of them exist, they will all show up next year. This means that attempts to determine what percentage of problems have already been seen or will be seen before the end of the year cannot succeed unless and until we know how many rollover errors there are. This is impossible to determine, and therefore any attempt to calculate the percentage of problems that have already been seen is doomed to failure.

Estimating the potential error rate due to Y2k is actually fairly straightforward. Capers Jones references 15% of function points containing Y2k errors. Other studies, for example by Howard Rubin, cite findings of 30 date references per 1000 lines of code, or 3%. The potential error rate for Y2k errors in code is far from an unknown phenomenon.

As well, the fact that system implementations do cause high rates of errors is undeniable.

While it is no doubt impossible to determine exact figures on error rates, it is not impossible to demonstrate that the potential error rates are in line with what we are currently experiencing.
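To make that kind of estimate concrete, here is the same arithmetic applied to an invented example portfolio (the portfolio sizes are hypothetical; the 15% and 30-per-1000-lines rates are the Jones and Rubin figures just cited):

```python
# Back-of-the-envelope Y2k exposure for a hypothetical portfolio.
function_points = 50_000         # invented portfolio size, in function points
lines_of_code = 5_000_000        # invented portfolio size, in lines of code

fp_at_risk = function_points * 0.15       # Jones: 15% of function points
date_refs = lines_of_code * 30 // 1000    # Rubin: 30 date refs per 1000 lines

print(f"{fp_at_risk:,.0f} function points with potential Y2k errors")
print(f"{date_refs:,} date references to examine")
```

Either rate, applied to a portfolio of known size, bounds the potential error sites; that is all the analysis requires.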

As for target dates: the idea that target dates are based on detailed assessments of each individual section, while idealistic, has very little to do with reality.

At best, a large-scale project begins with a small number of people doing very broad-brush scoping. Depending on size, this lasts between two weeks and a month. The result is a budget estimate and a target date, all long before individual tasks are determined and scoped. And this target date remains, except in extreme circumstances.

I did not say this was ideal, or good. Just reality.

The actual tasks are then scoped backwards from the date. Yes, I'm familiar with the "Mythical Man-Month". I'm also very familiar with the reality of software implementations in large corporations.

In using the software metrics for project completions, I was again very conservative. These should be scaled to reflect the fact we are speaking of remediation projects, and not development projects. But I left them untouched, again to provide a larger margin for error.

-- Hoffmeister (hoff_meister@my-deja.com), August 14, 1999.


Estimating the potential error rate due to Y2k is actually fairly straightforward. Capers Jones references 15% of function points containing Y2k errors. Other studies, for example by Howard Rubin, cite findings of 30 date references per 1000 lines of code, or 3%. The potential error rate for Y2k errors in code is far from an unknown phenomenon.

Can you provide some research results that indicate the proportion and seriousness of century rollover errors? When have we had a previous century rollover on which to do the research?

I notice that you did not answer my point that no rollover errors have occurred yet, and therefore cannot be estimated by looking at the number so far encountered. Do you have a response to that?

At best, a large-scale project begins with a small number of people doing very broad-brush scoping. Depending on size, this lasts between two weeks and a month. The result is a budget estimate and a target date, all long before individual tasks are determined and scoped. And this target date remains, except in extreme circumstances.

While not absolutely ideal, this is still a far cry from deciding the target date completely arbitrarily in advance of any analysis at all, as has been done with every Y2K project that started too late (almost certainly a large proportion of the whole, although again it is impossible to quantify the proportion). Of course, other projects have been "estimated" in the same way, and as I have pointed out, in such a case, there is NO reason to believe that the target date has ANY relationship to how long the project will actually take. Virtually all such projects that I have been involved with have run far over their target dates, and I see no reason to believe that Y2K projects are mysteriously immune to this problem.

-- Steve Heller (stheller@koyote.com), August 14, 1999.




-- Hoffmeister (hoff_meister@my-deja.com), August 17, 1999.

Last one...

-----------

Can you provide some research results that indicate the proportion and seriousness of century rollover errors? When have we had a previous century rollover on which to do the research?

As for the proportion of Y2k errors, again, Capers Jones' work puts it at 15% of function points. Howard Rubin cites an extensive study in his article, More Millenium Metrics, where 3% of the code base contains date references.

As for seriousness, these run the gamut from trivial errors to abends. We aren't talking about reading tea leaves here. Running programs through a "time-machine" system provides fairly precise answers as to how an application responds to 2000 dates. It is not necessary to have previously experienced a century rollover to know how an application reacts. Or, more precisely, applications have experienced the century rollover on thousands of test machines.
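A toy illustration of the point, not any particular shop's test suite: the same two-digit-year routine looks healthy under a 1999 clock and fails the moment the simulated clock reads 2000.

```python
def days_between(yy_start, yy_end):
    """Legacy-style routine: years stored as two digits (00-99)."""
    return (yy_end - yy_start) * 365

# Under a 1999 "clock", the routine looks fine:
print(days_between(98, 99))   # 365

# Run the same call through the "time machine" into 2000 (year 00):
print(days_between(99, 0))    # -36135 instead of 365
```

No actual rollover is needed to observe the failure; feeding the code post-1999 dates is enough.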

I notice that you did not answer my point that no rollover errors have occurred yet, and therefore cannot be estimated by looking at the number so far encountered. Do you have a response to that?

I didn't respond, because the estimate of rollover errors is based in no way on the number of errors encountered so far. It is based on fairly well-documented studies of the potential for rollover errors.

And again, target dates. The analysis does not depend on any entity's project completing on time. This is not an all or nothing situation. A company may miss the deadline for completing all of its systems, but still complete a substantial portion of applications.

-- Hoffmeister (hoff_meister@my-deja.com), August 14, 1999.



Running programs through a "time-machine" system provides fairly precise answers as to how an application responds to 2000 dates. It is not necessary to have previously experienced a century rollover, to know how an application reacts. Or more precisely, applications have experienced the century rollover on thousands of test machines.

Okay, that is a valid point. I will accept time machine simulation as a proxy for real operation after rollover, of course with the usual caveat that testing and production are not identical.

And again, target dates. The analysis does not depend on any entity's project completing on time. This is not an all or nothing situation. A company may miss the deadline for completing all of its systems, but still complete a substantial portion of applications.

My point does not rely on assuming that a company must fix all of its systems. The question is whether setting a projected completion date that is fixed by the calendar rather than by any analysis of the problem reduces the likelihood of achieving any particular state of completion. Both common sense and experience indicate very strongly that such completion dates are much more likely to be missed by a large margin than completion dates set after even a cursory analysis of the problem (even though the latter are often overrun as well). Do you disagree with this point? If so, on what basis?

-- Steve Heller (stheller@koyote.com), August 15, 1999.


My point does not rely on assuming that a company must fix all of its systems. The question is whether setting a projected completion date that is fixed by the calendar rather than by any analysis of the problem reduces the likelihood of achieving any particular state of completion. Both common sense and experience indicate very strongly that such completion dates are much more likely to be missed by a large margin than completion dates set after even a cursory analysis of the problem (even though the latter are often overrun as well). Do you disagree with this point? If so, on what basis?

If the calendar date is arbitrary, I'd agree.

The year 2000 is not an arbitrary, unknown date. It allowed organizations to work backwards from a date, to determine when actual remediation had to begin. It also allowed a buffer to be built in, such as the 12/31/98 date you mentioned previously, and a variety of 1999 dates, with probably June-July being most prevalent, to allow for schedule overruns.

To varying degrees, we have seen evidence that projects with similar date constraints at the very least follow past experience, if not eclipse it. The Euro implementation, while not without problems, demonstrated this. The airline reservations systems did as well. Even the state benefit systems that Yourdon mentions at least mirror past experience with software projects.

The fact that the predicted shortage in COBOL programmers failed to materialize points this out, as well as the declining business of "remediation specialists". While I won't question that some organizations may have started too late, and are truly in a struggle, the evidence does not point to this being the case in general.

-- Hoffmeister (hoff_meister@my-deja.com), August 16, 1999.


If the calendar date is arbitrary, I'd agree.

The year 2000 is not an arbitrary, unknown date. It allowed organizations to work backwards from a date, to determine when actual remediation had to begin. It also allowed a buffer to be built in, such as the 12/31/98 date you mentioned previously, and a variety of 1999 dates, with probably June-July being most prevalent, to allow for schedule overruns.

To varying degrees, we have seen evidence that projects with similar date constraints at the very least follow past experience, if not eclipse it. The Euro implementation, while not without problems, demonstrated this. The airline reservations systems did as well. Even the state benefit systems that Yourdon mentions at least mirror past experience with software projects.

The fact that the predicted shortage in COBOL programmers failed to materialize points this out, as well as the declining business of "remediation specialists". While I won't question that some organizations may have started too late, and are truly in a struggle, the evidence does not point to this being the case in general.

I'd like to see any evidence that you have that most organizations took Y2K seriously enough to start their projects sufficiently before the deadline that they had any reasonable chance of making it, where "making it" means that they finished enough of their projects to survive as a corporation. I haven't seen any evidence like that; in fact, as far as I can tell, most organizations did not start their Y2K projects in earnest until 1997 or even 1998. Given the decades of previously accumulated software that they had to fix or replace, I have great difficulty believing that a year or two would do the job, especially when they have devoted only a fraction of their IT resources to the project.

As for the meaning of the apparently ample supply of COBOL programmers, I'd draw a diametrically opposite conclusion from it: that the organizations have given up. Otherwise, we should have seen a significant number of large organizations announcing completion of their Y2K projects. So far, I haven't seen more than perhaps one or two such announcements. For the rest, I have seen statements that they are "working on it, and on track". If you know where I could find any sizable number of statements by large organizations that they are done with their Y2K projects, I would greatly appreciate your providing the URL(s). Otherwise, I'll have to conclude that almost no large organizations are finished yet.

-- Steve Heller (stheller@koyote.com), August 16, 1999.


I'd like to see any evidence that you have that most organizations took Y2K seriously enough to start their projects sufficiently before the deadline that they had any reasonable chance of making it, where "making it" means that they finished enough of their projects to survive as a corporation. I haven't seen any evidence like that; in fact, as far as I can tell, most organizations did not start their Y2K projects in earnest until 1997 or even 1998. Given the decades of previously accumulated software that they had to fix or replace, I have great difficulty believing that a year or two would do the job, especially when they have devoted only a fraction of their IT resources to the project.

I would agree that remediation efforts probably did not start in earnest until 1998. But companies began replacing systems before that. For example, look at SAP. In 1994, there were about 1,000 installations. Today, more than 20,000. SAP's revenues more than tripled from 1995 to 1998. And the SAP market has been fuelled, in a very large part, by Y2k. Even a cursory review of corporate Y2k disclosures finds SAP mentioned throughout.

As for the meaning of the apparently ample supply of COBOL programmers, I'd draw a diametrically opposite conclusion from it: that the organizations have given up.

Yes, I've heard this statement before, and find absolutely no credibility in it.

I can literally point to thousands of pieces of information that say corporations are working on the problem.

Can you cite even one reference to a company that has "given up"? Do you truly believe that a corporation would, even at this late stage, just throw their hands in the air and stop working on the problem? I certainly haven't seen it, and it absolutely does not fit with my experience with corporations in general.

Otherwise, we should have seen a significant number of large organizations announcing completion of their Y2K projects. So far, I haven't seen more than perhaps one or two such announcements. For the rest, I have seen statements that they are "working on it, and on track". If you know where I could find any sizable number of statements by large organizations that they are done with their Y2K projects, I would greatly appreciate your providing the URL(s). Otherwise, I'll have to conclude that almost no large organizations are finished yet.

No, and I doubt any organization will truly "finish" until well after the rollover. I expect testing and contingency planning to run almost to the rollover.

Year 2000 projects involve much more than just the remediation. Even applications already completed must be continually change-controlled for possible Y2k problems. This is one of the reasons that active systems being remediated were left until last; it just isn't feasible for a large corporation to somehow "freeze" application modifications for a year or more in its active systems.

-- Hoffmeister (hoff_meister@my-deja.com), August 16, 1999.


I would agree that remediation efforts probably did not start in earnest until 1998. But companies began replacing systems before that. For example, look at SAP. In 1994, there were about 1,000 installations. Today, more than 20,000. SAP's revenues more than tripled from 1995 to 1998. And the SAP market has been fuelled, in a very large part, by Y2k. Even a cursory review of corporate Y2k disclosures finds SAP mentioned throughout.

Yes, I'm sure that SAP has done very well with Y2K replacements. But I've still been unable to find any indication that very many companies had even started planning their Y2K effort before late 1997. Therefore, whatever systems were replaced before that with SAP implementations may have been okay, but the other systems weren't even being analyzed before that time. As far as I'm aware, SAP can only do a portion, possibly a minor fraction, of the systems in a very large corporation. Therefore, my comments about the serious hazards of making a late start still apply to those other systems.

Can you cite even one reference to a company that has "given up"? Do you truly believe that a corporation would, even at this late stage, just throw their hands in the air and stop working on the problem? I certainly haven't seen it, and it absolutely does not fit with my experience with corporations in general.

I should have been more clear. I'm not saying that anyone is going to stop working on it entirely. However, most companies appear to be expending a significant percentage of their information systems resources on other projects before their Y2K project is as finished as it can possibly be this year. Maybe this isn't giving up, but it certainly isn't taking the problem seriously enough.

No, and I doubt any organization will truly "finish" until well after the rollover. I expect testing and contingency planning to run almost to the rollover.

Year 2000 projects involve much more than just the remediation. Even completed applications must be continually change-controlled for possible Y2k problems. This is one of the reasons that active systems being remediated were held until last; it just isn't feasible for a large corporation to somehow "freeze" application modifications for a year or more in its active systems.

No, I can't agree with that. Although of course it would be painful, it would be entirely possible for them to freeze their application systems for a year, if the threat were taken seriously enough. Of course, that would require that they finish their Y2K modifications last year, which as far as I can tell almost no one has done. Apparently, top management is either unwilling or unable to understand the threat to the organization posed by Y2K, or unwilling or unable to convey this to the line managers. I have worked for a number of large corporations, and unfortunately I find it very easy to believe that top management is out of touch with the people who could explain this to them.

-- Steve Heller (stheller@koyote.com), August 16, 1999.


Yes, I'm sure that SAP has done very well with Y2K replacements. But I've still been unable to find any indication that very many companies had even started planning their Y2K effort before late 1997. Therefore, whatever systems were replaced before that with SAP implementations may have been okay, but the other systems weren't even being analyzed before that time. As far as I'm aware, SAP can only do a portion, possibly a minor fraction, of the systems in a very large corporation. Therefore, my comments about the serious hazards of making a late start still apply to those other systems.

Other than very specialized, custom applications, SAP can handle the bulk of a large corporations applications. This is the very reason SAP has become such a ubiquitous solution, especially as it relates to Y2k.

I should have been more clear. I'm not saying that anyone is going to stop working on it entirely. However, most companies appear to be expending a significant percentage of their information systems resources on other projects before their Y2K project is as finished as it can possibly be this year. Maybe this isn't giving up, but it certainly isn't taking the problem seriously enough.

Again, Y2k is not an unknown problem. Corporations are moving on to other projects because they feel they have their Y2k work well in hand.

No, I can't agree with that. Although of course it would be painful, it would be entirely possible for them to freeze their application systems for a year, if the threat were taken seriously enough. Of course, that would require that they finish their Y2K modifications last year, which as far as I can tell almost no one has done. Apparently, top management is either unwilling or unable to understand the threat to the organization posed by Y2K, or unwilling or unable to convey this to the line managers. I have worked for a number of large corporations, and unfortunately I find it very easy to believe that top management is out of touch with the people who could explain this to them.

Anything is possible. The point is would it make sense?

In any case, I think we're beginning to stray. The original analysis did not make overly optimistic assumptions as to remediation. Originally I used 66%; even 50% doesn't change the conclusion. Yourdon, I believe, uses 80%; Jones acknowledges figures in the same range. I've provided backup for these numbers. And even companies who did start late will get some percentage of their applications completed.

-- Hoffmeister (hoff_meister@my-deja.com), August 16, 1999.


Okay, I think we've covered this ground pretty thoroughly. I suggest that each of us write a summary of where we think we've ended up, post them, and then throw this into the public arena. What do you say?

-- Steve Heller (stheller@koyote.com), August 16, 1999.

Fine by me. Go ahead when you're ready.

-- Hoffmeister (hoff_meister@my-deja.com), August 16, 1999.

Here is my summary of the arguments and evidence presented on this thread. First, statements for which there is some supporting evidence that a reasonable person would consider indicative if not conclusive:
  1. There is some information about the likely effect of rollover errors, derived from time machine experiments. These experiments make it possible to determine the likely severity of rollover errors.
  2. There is a study cited by Hoffmeister that provides some information indicating that a fairly large percentage of systems that are not "critical" are "dormant" and therefore do not have to be remediated, based on data from studying IBM's software portfolio. However, that same study also stated that "it turned out to be very difficult to separate the active portions from the dormant portions of software portfolios", which reduces the positive effect of this factor.
  3. A number of large organizations have been replacing a significant portion of their software portfolio with packages like SAP, which would reduce the number of systems that would need to be remediated.

Second, here are the undefined and inconclusive elements of the situation:

  1. The original study by Cap Gemini on the progress of remediation in large companies cited by Hoffmeister has turned out to be ambiguous, in that the wording of the results does not clearly distinguish between "all systems" and "critical systems". As a result, he has abandoned it as a source. This leaves open the question of what stage in remediation the average company has reached or will reach by the end of the year.
  2. There is no study cited that provides reliable information about what percentage of systems are "critical systems". The only study cited was that of the federal government, in which critical systems were less than 10 percent of all systems.
  3. There is no way of determining how companies decide which systems are "critical" and which systems are not. In fact, the distinction may be largely meaningless, as pointed out in an article cited by Hoffmeister in this discussion: "After all, a system that's non-critical to your organization may be very mission-critical to some of your external suppliers, vendors, or customers."
  4. There is no study cited that indicates the effect of an arbitrary ending date on the likelihood of success of any stage of remediation. However, experience and common sense indicates that such projects have a much lower chance of finishing even close to the target date than projects that have been analyzed in advance of setting the target date.
  5. There is no study cited which indicates that the average large company began its Y2K remediation task early enough to have any reasonable probability of finishing or even mostly finishing the task. However, we have agreed that most companies did not start the remediation in earnest until late 1997 or early 1998. This leaves them only a very short period of time to do a massive amount of remediation work, and cannot be construed positively.

What are we to make of all of this? I think the only reasonable conclusion to be drawn from this discussion is that there is no way for us to even reasonably accurately estimate the degree of remediation that large organizations will complete by the end of this year. Therefore, I conclude that it is only prudent to prepare for the possible consequences of a massive failure of remediation.

-- Steve Heller (stheller@koyote.com), August 17, 1999.


The original analysis stands basically unchallenged.

Various aspects have been questioned here, only to be immediately dropped.

One assumption has been questioned extensively, that being the percentage of remediation completed. The basis for this questioning has involved almost solely the difference between "critical" and "non-critical" systems. Almost by definition, this distinction is trivial to the analysis; "non-critical" systems are those the organization feels they can do business without.

Even so, no evidence has been cited that the percentage of remediation completed on active systems is any lower than the figure I used. The one attempt was to use the Federal Government, an admittedly poor example as compared to virtually any corporation. But an examination of even the Fed revealed that support for the figures I used can be found.

Other sources suggested also provide support. Using the history of past software projects results in a much higher percentage. Capers Jones uses a higher percentage in his analysis.

These are sources that have been used to support the more pessimistic analysis of the effects of Y2k.

The analysis is based in no way on optimistic assumptions. In fact, very pessimistic estimates have been used. Indeed, assuming that 50% of the active systems with Y2k errors remain completely untouched qualifies as an extremely "massive failure in remediation". And the reason is simple; my opinion on Y2k is not based on an assumption that no problems will occur, but that those problems will be dealt with, as problems are being dealt with today.

The conclusion remains basically unchallenged: that we are currently experiencing IT systems error rates, simultaneously, that at the least are of the same magnitude as those which can be expected at the rollover to the Year 2000. These errors account for virtually every instance of "Y2k-related" errors to date, be it the World Bank, or MCI, or SUN Hydraulics. Without a doubt, implementation errors far exceed the severity of Year 2000 errors; even so, I discounted these errors by 85% within the analysis.

While these errors have caused problems, as system errors have in the past, they have not even begun to approach a level that would cause any form of overall collapse. Neither will the rollover to the Year 2000.

-- Hoffmeister (hoff_meister@my-deja.com), August 17, 1999.



This format works for me. NEAT GUYS!! You have done something I suspect a LOT of folks would have liked to have seen done in open forum but I understand why it couldn't be done that way. I'm going to actually have to READ and THINK on the stuff up there, and look HARD at the quoted numbers.

THANKS AGAIN for finding a format that WORKS!!!

Chuck, who was NOT looking forward to moderating this in open forum.

CR

-- Chuck, a night driver (rienzoo@en.com), August 17, 1999.


Good discussion. The timeline for remediation is still developing: first it was Dec 31st for all systems, leaving a year for testing; then March, then June, and so on. Companies are also switching their focus to critical systems.

At this point corporations are unable to verify remediation of critical systems. Critical means critical: by their own definition, they would not function if Y2k were today.

So companies are working on fewer systems which are taking longer to fix than they initially said and the project completion dates keep slipping away. No wonder the number of "crisis centers" at the corporate level has more than doubled in the last few months.

-- Mike Lang (webflier@erols.com), August 17, 1999.


Very nicely done debate guys.

No one wins it though, since neither of you gave irrefutable proof of anything. I'm no further along than I was before I read this thread.

-- Chris (%$^&^@pond.com), August 17, 1999.



A monumental effort from both sides! Thanks, guys. And thanks to the moderators, too.

-- Lane Core Jr. (elcore@sgi.net), August 17, 1999.

Hoffy, nice try. Steve, good job.

Hoffy:

(1) 60-85% of Fortune 500 revenues come from abroad. Foreign sources of imported stuff are also essential to the US economy. The world economy is globalized, banking is globalized (think about Alan Greenspan's "cascading cross-defaults" scenarios). Your analysis reveals the "peanut-butter-and-jelly-sandwich" syndrome. Question: Do you live inside a peanut butter and jelly sandwich (wrapped in aluminum foil)?

(2) What about SMBs? What about government and state agencies? Your analysis, as weak as it is, only concerns the Fortune 500, right? Do you care to learn what the Y2K exposure is in foreign governments, agencies, and SMBs? The latest poll reveals that 72% of Italians think that Y2K is a typo error Hoff.

(3) In view of the "serious risks and uncertainties" that the Chevrons and the BankBostons of this world are making public (in writing), what about people's reactions, bank runs, etc.?

(4) Embedded chips Hoffy?

-- George (jvilches@sminter.com.ar), August 17, 1999.


No, George, to answer your question, the analysis is not based just on Fortune 500 companies.

The Capers Jones estimate of remediation completed, for example, includes the US, Europe, and the rest.

-- Hoffmeister (hoff_meister@my-deja.com), August 17, 1999.


Hoffmeister:

(a) You didn't answer my questions (note the final "s"). Please read them again, and try harder.

(b) Hoff, gimme a break. Just HOW did Capers Jones, or anyone else for that matter, find out about Y2K remediation practices in Europe, Russia, and "the rest"??? ("the REST" is the cheapest shot I've read in a long time, by the way). Just WHAT does Capers Jones know about remediation practices in, say, the "Banco do Estado do Sao Paulo" in Brazil??? Or maybe you think Latin banks are not important?? You'll find out how important they are as soon as the Latins don't pay up, my dear, "because our computers don't work very well" and other junk you'll get to hear come Jan. 2000. And sure enough Capers Jones also studied SMBs in Indonesia too...

-- George (jvilches@sminter.com.ar), August 17, 1999.


George, I think the fact that your case depends on questioning the most respected and authoritative source of metrics available, a source relied on by Ed Yourdon, among others, speaks volumes.

-- Hoffmeister (hoff_meister@my-deja.com), August 17, 1999.


Hoffmeister=Spinmeister:

The only thing that "speaks VOLUMES" is that you have NOT answered any of my 4 (four) questions above.

-- George (jvilches@sminter.com.ar), August 17, 1999.


Sorry, George. My answer was specifically to 1) and 2).

As for "bank runs" and "people's reactions", these have been predicted how many times now? How are people reacting today to system problems? I know, the typical description of unthinking "sheeples" is popular, but the fact is people are just exhibiting far more common sense than these scenarios have predicted.

I can virtually guarantee that "Embedded Chips" will be discussed, in fair detail. Sorry, just have to wait.

-- Hoffmeister (hoff_meister@my-deja.com), August 17, 1999.


You can waste a lot of bandwidth discussing this analysis. It stands on one assumption. That Gartner Group is right with:

"The "singular event" that has the most potential to cause simultaneous errors, and as such the greatest risk of collapse, is the two-week period surrounding Jan 1, 2000. Gartner Group has estimated that 10% of potential Y2k errors will occur during this time frame (source: Slide 2). To add a level of certainty, this analysis will double the Gartner Group estimate, and assume 20% of Y2k errors will occur during this time frame. "

This allows "fix on failure" to address the things planned maintenance doesn't find.

Unfortunately, our experience is that Gartner Group is wrong in two ways:

1. With business systems, approximately 20% of the failures are at rollover. The distribution of failures is normal (a bell curve) around that date, not flat. It is EXTREMELY difficult to recover a system with multiple interacting errors; it takes time. The best thing to do is triage systems, subsystems, and processes now.

2. Embedded systems are basically real-time, with 95% of the errors occurring at rollover. Some fail catastrophically. These make up by far the majority of systems.

Fix on failure won't work at rollover.

-- ng (cantprovideemail@none.com), August 17, 1999.
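The disagreement between ng's bell-curve claim and Gartner's 10%-in-two-weeks figure can be made concrete with a small sketch. Assuming failure dates are normally distributed around the rollover (the standard deviations below are purely illustrative assumptions, not figures from this thread), the fraction of failures landing in the two-week window follows directly from the normal CDF:

```python
import math

def normal_cdf(x, mu, sigma):
    """Normal CDF computed via the error function (no external libraries)."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def fraction_in_window(half_width_days, sigma_days):
    """Fraction of failures falling within +/- half_width_days of the
    rollover, assuming failure dates are normally distributed around it."""
    return (normal_cdf(half_width_days, 0.0, sigma_days)
            - normal_cdf(-half_width_days, 0.0, sigma_days))

# Two-week window (+/- 7 days) under different assumed spreads:
for sigma in (7.0, 30.0, 90.0):
    frac = fraction_in_window(7.0, sigma)
    print(f"sigma = {sigma:5.1f} days -> {frac:.1%} of failures in window")
```

Under a tight spread (sigma of a week) the window captures roughly two-thirds of all failures, near ng's experience; under a very wide spread (sigma of a quarter) it captures well under 10%, closer to Gartner. The real point of contention is which spread matches reality.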


In summary Hoff:

Questions 1 and 2: Your answer is that Capers Jones has solved the unfathomable uncertainties of Fortune 10,000 Y2K remediation, plus SMBs, plus state and federal government agencies and institutions, both domestic and FOREIGN, worldwide (China included)!!!!!! You are cute Hoffy. I like you. Your ideas are young and refreshing. Immature, I would say.

Question 3: Your answer is that bank runs have already been attempted and failed. Bwwahhhhaaaahhhahhhhhhaaahahahahahahah!!!!!!!!!!!!!!!!! I better not tell you what I think of that one Hoff. You would rather take your hard-earned, free, one-way ticket to the Falkland Islands prematurely.

Question 4: Your answer is that embedded chips will be discussed later. However you make no reference whatsoever to Y2K failures caused by mutual cross-dysfunctional defects between IT and non-IT systems. Just fine Hoff, just like your water-tight reasoning. Sorry to inform you that it is neither static nor modular, you dummy, it's DYNAMIC and systemic, for Crissake !!

Hoff, your answers are not serious. And if you think that no one will notice, let me remind you of Abraham Lincoln's : "You may fool part of the people all of the time, or all of the people part of the time, but you can't fool all the people all the time". Sorry Hoff.

-- George (jvilches@sminter.com.ar), August 17, 1999.


Again, your 20% matches the estimate I actually used. Agreed, it is better to fix the systems. No doubt about it. But multiple errors occur with even greater frequency during system implementations, and can occur in virtually any area of the system. While rollover failures may also occur simultaneously, the problem identification is simplified by a wide margin.

As I said above, I can guarantee embedded systems will be discussed. But I hope I misread your statement. You certainly aren't suggesting that a majority of embedded systems will fail at rollover, are you?

-- Hoffmeister (hoff_meister@my-deja.com), August 17, 1999.



You're drowning fast Hoff, you better get some help.

-- George (jvilches@sminter.com.ar), August 17, 1999.

Sorry, George, my last post wasn't in response to you.

As for bank runs, etc., you miss my point. I keep hearing that "panic" is just around the corner, that "bank runs" are certain when the "sheeples" start moving.

Thus far, those counting on these scenarios to cause Y2k problems have vastly underestimated the common sense of the average person.

-- Hoffmeister (hoff_meister@my-deja.com), August 17, 1999.


Danger Hoff... Danger Hoff. Don't continue to have a battle of wits with an unarmed person such as George. He asks indirect questions and refuses any answer. He cannot move the discussion forward. Careful Hoff, landmines ahead!

BTW Hoff great debate. You make your points extremely well. Steve kind of rambled a bit but you kept bringing him back to focus on the points.

-- Maria (anon@ymous.com), August 17, 1999.


Hoff:

So you are tacitly accepting my summary of your responses to my original four questions, right?

And as far as bank runs, domestic and FOREIGN, you are soon to find out that what you call 'common sense' is the least common of all senses. Rots of ruck.

-- George (jvilches@sminter.com.ar), August 17, 1999.


No, George, I don't agree, "tacitly" or otherwise.

If you have better estimates with backup, by all means let's hear them. I attempted to use the most widely accepted numbers available, and even then discounted them.

Waving your hands blindly and chanting "it's going to be bad" may be entertaining, but doesn't really forward any discussion of just how bad it may be.

-- Hoffmeister (hoff_meister@my-deja.com), August 17, 1999.


Great Hoff, so you finally got the help you needed, from no less than MCI-failure-in-Chief Marma !!! Congratulations Hoff, she'll help for sure, just watch ! MCI and the CIA and NASA and the US Army and Navy and Air Force are all seeking Marma's help too, so you'll have to be patient now. In the meantime, try to sketch some sort of answer to my four unanswered questions, will ya?

-- George (jvilches@sminter.com.ar), August 17, 1999.

It's obvious you have no answers Hoff. Just understand what that means.

-- George (jvilches@sminter.com.ar), August 17, 1999.

I take it then, George, you are "tacitly" saying you have no evidence that the figures are incorrect?

-- Hoffmeister (hoff_meister@my-deja.com), August 17, 1999.

Wrong again Hoff. I am explicitly saying what I clearly outlined in my 4 point summary above. Please read it again, and try to give real answers next time.

-- George (jvilches@sminter.com.ar), August 17, 1999.

So let's see if I got this right:

Hoff says that right now (and all this year, really):

(1) Remediated code is being returned to production at a substantial rate. Not all of that code works correctly, causing lots of problems. Sometimes it needs to be taken back out of production, remediated some more, and then returned for take 2 (or 3...). We are *currently* dealing with a high rate of bugs resulting from incomplete/incorrect remediation.

(2) Upgrades/replacements are happening at a rate well beyond the historical all-time high. Very seldom do such upgrades (especially the MAJOR changes) go smoothly. We are *currently* dealing with a high rate of bugs as a result of significant upgrades and replacements.

(3) There might be a little bit of lookahead code causing problems, but we know those problems are fewer than expected and not the thrust of this argument.

(4) All of these are happening to the code considered most critical by the organizations currently doing remediation and/or upgrades and replacements. Disruptions now taking place as a result of these efforts are therefore highly likely to be about as serious as disruptions are ever likely to become.

Bottom line: BIG organizations don't just reach down and start stirring up their critical code bigtime without substantial penalties, now being paid. NOT being paid later, being paid NOW.

In contrast, Steve seems to argue much more by the book -- that y2k problems have nowhere near reached their peak, that the problems now happening are only a hint of what's coming. After rollover, both the rate and severity of bugs remediation missed will exceed current problems by orders of magnitude.

Steve's arguments are obliged to assume that all current surveys are very wrong, that the experts can't be trusted, that our current metrics can only be applied if they lead to the desired result, and that there is no practical difference between "not done" and "not started".

Steve makes a strong enough argument so that Hoffmeister's points are placed into a good, informative context. The fact that Hoff's argument can only be opposed dogmatically ("Y2K will be very bad because that's the TRUTH!") becomes even more obvious when people like George are forced to stoop to name-calling, playing childish games, and changing the subject.

My personal sense of all this is that Steve's points are good ones -- we can make reasonable approximations, but it's best to err even more conservatively than Hoff does. As a result, I anticipate problems more serious than we are currently experiencing, at least for a while. But for the meltdown the extremists predict, things would have to become so much worse as to defy any analysis beyond sheer, unsupportable policy statements.

By the end of the debate, the doomy side is pretty much reduced to arguing that things will be bad because we can't know otherwise. Or as Cory says, the debate is not symmetrical, and only optimists are obliged to support their positions, while pessimists can play with the net down.

I look forward to the embedded system debate.

-- Flint (flintc@mindspring.com), August 17, 1999.


A request for clarification to Hoffmeister and Steve (after congratulations on good arguments from both sides). It seems to me that your summaries could be further boiled down to the following:

It is clear that we do not yet know the success or failure rates of Y2K remediation efforts. You did disagree on the reliability of the studies and estimates of failure rates that have been made to date, as well as which systems (critical v. non-critical) those estimates might apply to. However, those disagreements were ultimately not very large. Ultimately, your greatest difference seems to be purely one of how you each react to the uncertainty. Steve chooses to assume the worst possible result while Hoffmeister chooses to assume that Y2K will prove to be no worse (or no better) than past experience.

You touched on this theme in several messages, but it tended to get lost in the other areas of debate. I would suggest as a possible future topic that you focus on this difference. Steve, explain why you believe Y2K is so greatly different from other system failure issues, and Hoffmeister, explain why you draw the opposite conclusion. I feel that this discussion would probably do more to explain the difference between the "doomer" and the "polly" than almost any other you could have.

-- Paul Neuhardt (neuhardt@ultranet.com), August 17, 1999.


George, you have now proven to be most childish. "so you finally got the help you needed, from no less than MCI-failure-in-Chief Marma !!! Congratulations Hoff, she'll help for sure, just watch ! MCI and the CIA and NASA and the US Army and Navy and Air Force are all seeking Marma's help too, so you'll have to be patient now" Which rule does this break? Thanks for helping to prove my point that you place landmines throughout your posts.

Paul, good points about the difference in doomer and polly logic. But if you subtract a lot of the noise in past threads, you would have seen it there too.

-- Maria (anon@ymous.com), August 17, 1999.


Flint, you turn up just when Hoff needs it bad, what a coincidence!

You also suffer from the same peanut-butter-and-jelly-sandwich syndrome. Obviously you did not read my four point summary above.

Capers Jones never claimed to cover China's SMBs, Brazil's Banco do Estado do Sao Paulo, and the "rest" as Hoff says (now that sounds like a real ugly international gringo boo boo, I tell ya). What good are "averages" in this kind of analysis anyway?

Flint buddy, if any given single individual comes back from a 30-year stay in different cities of Africa leading an admitted promiscuous sex life without ever using condoms, the burden of proof to be considered HIV-negative is on HIM, not on averages, or statistics, or anything else. You ignore this fact as if AIDS and Y2K were not two relatively similar, unprecedented phenomena for which previous paradigms do not hold true.

-- George (jvilches@sminter.com.ar), August 17, 1999.


Alright Paul, in a nutshell:

I'm used to dealing with errors in IT systems. Especially errors during large-scale system implementations, where every aspect of the software is prone to failure.

By comparison, Y2k errors are truly trivial. Yes, they can cause abends, etc, but the fact remains that the actual fix and cause are trivial, in comparison to other types of errors. And I've yet to see anyone truly dispute this.

The argument has always been that the sheer number of these errors, happening simultaneously, will overwhelm the ability to deal with them, and cause a system collapse. And in the abstract, this argument has some validity. There is certainly a "fault tolerance" level, above which errors would truly cascade.

This was the point behind my post. It is no doubt impossible to determine exactly what the fault tolerance level is. But demonstrating that the system, in fact, has already dealt with error levels at least of the same magnitude as those that can be expected, without even approaching a collapse, accomplishes the next best thing: showing that Y2k will not breach that fault tolerance level.

-- Hoffmeister (hoff_meister@my-deja.com), August 17, 1999.
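Hoffmeister's claim that the cause and fix of a typical Y2k error are trivial compared with other defects can be illustrated with a minimal sketch of the classic two-digit-year comparison bug and the common "windowing" repair. The example is hypothetical, and in Python rather than the COBOL of most affected systems; the pivot value is an illustrative assumption, chosen per project in practice:

```python
# Classic two-digit-year bug: with years stored as two digits,
# "00" compares as less than "99", so code concludes that something
# expiring in 2000 already expired when checked during 1999.
def is_expired_buggy(expiry_yy, current_yy):
    return expiry_yy < current_yy          # 0 < 99 -> wrongly "expired"

# The common cheap repair was "windowing": interpret two-digit years
# below a pivot as 20xx and the rest as 19xx. PIVOT = 50 is an
# illustrative assumption; real projects chose pivots to fit their data.
PIVOT = 50

def expand_year(yy):
    """Expand a two-digit year to four digits using a fixed window."""
    return 2000 + yy if yy < PIVOT else 1900 + yy

def is_expired_fixed(expiry_yy, current_yy):
    return expand_year(expiry_yy) < expand_year(current_yy)

# A card expiring in "00" (year 2000), checked during "99" (1999):
print(is_expired_buggy(0, 99))   # True  (bug: flags the card as expired)
print(is_expired_fixed(0, 99))   # False (windowed comparison is correct)
```

The hard part of Y2k work was never the fix itself but finding every such comparison across millions of lines of code, which is exactly the scale question the two sides dispute.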


Hoff,

Re IT: 20% of the problems is a WHOLE LOT of problems at rollover. If the OS is hosed, file maintenance is destroying records, applications are writing bad data, and the users are doing the wrong things the system is telling them to do, things can get VERY f**ked up in a hurry. I saw this happen in a logistics system with a 1-digit year in the key field in 1980. Billions of wasted dollars and years to fix. This time it is much worse because the commercial software (OS, DBMS, and commercial applications) can be hosed and interfacing systems can be hosed.

The BEST organizations we deal with, who have been diligent working on this problem for years and fixed and tested and certified their systems (or we did) are still finding problems in those systems - bad fixes, overlooked cryptic code, etc. My sense is those organizations have a chance at "fix on failure" when the hump hits.

Re Embedded: I think of embedded on a "box" basis. So the vast majority of systems are embedded boxes. (This isn't strictly true. "Embedded" comes from the military weapon system world, where you can have a processor integrated into a weapon platform, communication system, etc. It is useful, however, in organizing the effort of finding all the things, analyzing them, figuring out what to do, and doing it.) The percentage of these boxes which fail varies between 0.5% and 7%, varying by industry. They fail in often very interesting ways. I've seen them rewrite their PROMs and effectively destroy the hardware. Sometimes they stop; sometimes they incorrectly control a process. If one looks at a factory, an ocean-going vessel, or a helicopter, it can be awfully hard to inventory all these things and get all the software and figure out what to do and do it. Fix on failure will not work in this environment, but that is what a lot of organizations are planning on.

Seven years ago, I thought this was an interesting problem to focus on and help the customers take care of it in normal maintenance. Nobody (not true, but pretty close - over 95%) would put money into actual fixes until about 15 months ago. Yes, if someone wanted to replace a system with a commercial package like SAP or a new "box", they used it as an excuse.

-- ng (cantprovideemail@none.com), August 17, 1999.
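ng's per-box failure rates translate into expected failure counts by simple multiplication. A small sketch, using his quoted range and an inventory size invented purely for illustration:

```python
# Rough expected-failure arithmetic from ng's quoted per-box failure
# rates (0.5% to 7%, varying by industry). The inventory size of
# 10,000 boxes is a made-up illustration, not a figure from the thread.
def expected_failures(num_boxes, low_rate=0.005, high_rate=0.07):
    """Return the (low, high) expected number of failing boxes."""
    return num_boxes * low_rate, num_boxes * high_rate

low, high = expected_failures(10_000)
print(f"Of 10,000 embedded boxes, expect roughly {low:.0f} to {high:.0f} failures.")
```

Even at the low end of the range, a large site would face dozens of simultaneous embedded failures at rollover, which is why ng argues fix-on-failure cannot work there.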


ng, no doubt 20% of the problems is a whole lot.

So are the errors we're currently experiencing. And these can be generated virtually anywhere in the code, not just through invalid date logic.

The point was not to minimize the problems. The point was to show we're going through at least as many errors right now.

-- Hoffmeister (hoff_meister@my-deja.com), August 17, 1999.


Does "we" include the 'Banco de la Nación Argentina' Hoff?

Does "we" include 'Pedevesa' Hoff?

Does "we" include Italian SMBs Hoff?

Does "we" include India's power stations?

You'll never admit that the scope of your data, of Capers Jones' data, or of anybody's data cannot include what your ugly gringo attitude has called, errr.. "the rest". Hoff, you just plain ignore what the rest of the world's reality is like and how much it affects the US. You talk about remediation effectiveness; they haven't concluded the awareness stage. Hoff, you live inside a jar of diet mayonnaise (lid tightly closed).

-- George (jvilches@sminter.com.ar), August 17, 1999.


Flint, you summarized Hoff and Steve's debate very well. I appreciate your input a lot, as someone who differs with my opinion, because you try hard to stay objective and refrain from name calling. This debate is refreshing; it's been a long time since I've seen one like this on this forum.

It's clear that you are more trusting than either Steve or I am that human nature is honest and straightforward. Especially when money and big egos are involved.

-- Chris (%$^&^@pond.com), August 17, 1999.


Flint does not trust human nature nor Y2K as per the fact that he has withdrawn all of his money from the bank and will/would continue to do so in the near future if he had any left.

When money and big egos are involved, care is advisable. Flint knows this very well and has acted accordingly, despite any wishy-washy, shave-the-shavings analysis that he might project otherwise.

-- George (jvilches@sminter.com.ar), August 17, 1999.


Maria, you said:

Paul, good points about the difference in doomer and polly logic. But if you subtract a lot of the noise in past threads, you would have seen there too. 

Probably, but I find I no longer have the patience to sift through the noise of most threads. The signal-to-noise ratio on most threads here is so low as to make them useless. It's why I like this thread so much: two people with honest differences having an honest and orderly discussion about them. I got something out of both arguments presented.

Hoffmeister, you said:

But demonstrating that the system, in fact, has already dealt with error levels at least of the magnitude as what can be expected, without even approaching a collapse, accomplishes the next best thing. That Y2k will not breach that fault tolerance level.

The only problem I have with the arguments you presented was that I didn't advance them myself. Sorry if it seemed otherwise. They are a very accurate statement of my own views on the likely severity of Y2K.

It strikes me that this question is the fundamental difference between the pessimists and the optimists: has enough work been done to bring the magnitude of faults that can be expected back down to an acceptable level? The pessimist's answer is "Probably not" while the optimist says "Probably." Of course, the doomer says "No, because the task is impossible" and the polly says "Of course, since the problem was never that big to begin with."

-- Paul Neuhardt (neuhardt@ultranet.com), August 17, 1999.


Steven & Hoffmeister,

Great debate! I thought your analysis was brilliant, Hoff! Steven, I know you don't want to hear this, but I think Hoff got the better end of this debate. I haven't seen any statistics anywhere that demonstrate that we haven't made enough enough progress on the critical systems to avoid a major disaster. The best you can come up with is that the Federal government is fixing only 10% of its total systems--a point that Hoff took care of quite nicely.

However, I must say this, Steven. You have conducted yourself admirably in this debate. There was no name-calling (a la George) and you argued your side as logically as possible, given the relatively weak hand you started with. The facts just aren't there to back you up, but you played your hand well. You have earned my respect for that.

George,

You quoted Abraham Lincoln: "You may fool part of the people all of the time, or all of the people part of the time, but you can't fool all the people all the time".

Lincoln also said that it is better to remain silent and be thought a fool than to open one's mouth and remove all doubt.

If you were wise you would heed this advice.

BTW, George, you sound a lot like Ray. You aren't related to him by any chance, are you?

Robin Messing

-- Robin S. Messing (rsm7@cornell.edu), August 18, 1999.


Steve

You wrote:

But you're still missing an important point : the types of errors that are experienced this year vs. next. Since it is not yet 2000, the types of errors that would have occurred so far are lookahead errors, not real-time information system errors. As has been discussed at length in c.s.y2k and other venues, the so-called "Jo Anne effect" errors, which are of this sort, are relatively easy to handle. This is because they do not impact the day-to-day functioning of the organization. Of course, it is important to be able to balance the books of the organization, but failure to do that or difficulty in doing that is not a show-stopper in most cases. That is why I, for one, have not predicted massive publicly visible IT problems this year.

Steve, maybe you didn't predict massive failures for this year, but a lot of other y2k gurus did, including Ed Yourdon, Michael Hyatt and Cory Hamasaki.

http://greenspun.com/bboard/q-and-a-fetch-msg.tcl?msg_id=001FmM

But I believe that we will begin seeing Y2K problems that do cause noticeable disruptions in our day to day lives; I believe we'll start seeing them by this summer, and I believe they'll continue for at least a year. As many people are now aware, 46 states (along with Australia and New Zealand) will begin their 1999-2000 fiscal year on July 1, 1999; New York (and Canada) will already have gone through their Y2K fiscal rollover on April 1, and the remaining three states begin their new fiscal year on August 1, September 1, and October 1. We also have the GPS rollover problem to look forward to on August 22nd, as well as the Federal government's new fiscal year on October 1st. There is, of course, some finite probability that all of these rollover events will occur without any problems; but there's also a finite probability that pigs will learn to fly.

--Ed Yourdon

See also:

http://www.geocities.com/Area51/Vault/1157/jo-anne.htm

http://207.158.205.162/Computech/Issues/hyatt9840.htm

Evidently, you didn't make the predictions that this triumvirate made. Did you make any predictions at all in 1998 about what would happen in 1999 or did you just not make any predictions? In 1998, did you ever state that these three were full of it because they were exaggerating the effects of lookahead errors, and that we wouldn't have much of a problem in 1999? If so, could you provide a URL for that prediction?

I am asking this question because I want to know if you really thought the lack of major failures in 1999 was predictable or if your statement is merely after-the-fact justification for your point of view.

-- Robin S. Messing (rsm7@cornell.edu), August 18, 1999.


Robin:

Evidently, you didn't make the predictions that this triumvirate made. Did you make any predictions at all in 1998 about what would happen in 1999 or did you just not make any predictions?

The latter.

In 1998 did you ever state that these three were full of it because they were exaggerating the effects of lookahead errors and that we wouldn't have much of a problem in 1999?

No.

I am asking this question because I want to know if you really thought the lack of major failures in 1999 was predictable or if your statement is merely after-the-fact justification for your point of view.

I'm not sure I understand your question, but my opinion in 1998 was that predicting failures in 1999 was chancy at best. The only kind of Y2K system failures that I thought had a reasonable probability of occurring at any time during this year were accounting system failures that would be unlikely to cause disasters (aside possibly from the results to the shareholders).

My position has been consistent: the operational software and embedded systems problems will probably not occur until January 1st, and some of them after that.

-- Steve Heller (stheller@koyote.com), August 18, 1999.


Dear Mr.Messing:

You say you "haven't seen any statistics anywhere that demonstrate that 'we' haven't made enough enough (sic) progress on the critical systems to avoid a major disaster". Now I wonder who you mean by 'we' and why you decided to revert the burden of proof, but let's leave that aside for the moment.

At any rate, what you are saying, Mr. Messing, is that avoiding a "major disaster" for 'us' is what Y2K remediation and testing is all about. Thus, to be consistent with your line of thought, I gather that a "medium-size disaster" would make you feel comfortable enough, in which case I dare say that 99% of the world's population doesn't have a clue about your risk management philosophies concerning Y2K, and that this, by itself, will bring about very serious political consequences.

Still, Mr. Messing, please be aware that there are "lies, damn lies, and statistics", which in the Y2K case mean very little as probable completion percentages are almost worthless indicators if

(1) iron triangle isn't almost 100% functional, both here and ABROAD

(2) international banking isn't 99-100% compliant (Alan Greenspan). Please let me remind you that there are approximately 200,000 (two hundred thousand) banks worldwide and that the SWIFT system is far from being compliant.

Furthermore Mr. Messing please also be advised that Hoff's point (4) above concerning world Y2K status was so weak that even his own cited "sources" (check them out please) deny what he is supposedly trying to prove.

Cordially yours

-- George (jvilches@sminter.com.ar), August 18, 1999.


"Has enough work been done to bring the magnitude of faults that can be expected back into an acceptable level. The pessimists answer is Probably not while the optimist says Probably. Of course, the doomer says No, because the task is impossible and the polly says Of course, since the problem was never that big to begin with. " Paul N.

Good observation, Paul. Whatever the run-of-the-mill pollys and doomers on this board think and argue is wasted bandwidth in the grand scheme of things. I'll listen to an Ed Yourdon and Bruce Webster before I will a Hoffmeister on this issue.

Hoff, since you like to cite Capers Jones, here is what he said, as stated by Bruce Webster in his book "The Y2K Survival Guide, Getting to, Getting Through, and Getting Past the Year 2000 Problem", page 19.

"Based on progress through mid-1998, Capers Jones of Swoftware Productivity Research estimated that up to 75 percent of U.S. enterprises (including corporations, small businesses, military installations, and federal, state, and urban governments) will face significant to severe Y2K problems."

And as George likes to remind us, this is not taking into account the entire world we depend on.

-- Chris (%$^&^@pond.com), August 18, 1999.


Chris, I see you are getting well focused on the reality checks of Y2K.

You now surely understand the reasons and tone behind many of my posts on other threads, especially in "Re: Debate". I sort of feel you are losing your patience a bit too. Under current circumstances that's healthy, I think.

Concerning what you've just mentioned above, please take the slight trouble of clicking on Hoff's own "sources" indicated in point (4) of the opening presentation on this very thread. He cites Cap Gemini and Capers Jones as his references. Please click on them and you wouldn't believe how he could ever present that as 'evidence', as it disproves his own case. It's important to emphasize, though, that Hoff's point (4) is not only false, but also the underpinning of his entire argument.

Y2K is a spanking new animal which implies a change of many paradigms. Welcome to the 21st century, Chris, in a changing environment of constant uncertainty.

Take care.

PS: I can be as polite and as gentlemanly as the occasion requires. Sometimes the occasion doesn't require it, such as this thread, simply because we are many times reading what 7 x 24 x 365 vested-interest individuals have to say under the misleading disguise of objective analysis. And that doesn't leave too much room for politeness. Ever played football in the rain? Sometimes this is pretty much the same, Chris.

-- Argentina (jvilches@sminter.com.ar), August 18, 1999.


Steve,

Thank you for your answers. I just wanted to set the historical record straight. You originally wrote:

Of course, it is important to be able to balance the books of the organization, but failure to do that or difficulty in doing that is not a show-stopper in most cases. That is why I, for one, have not predicted massive publicly visible IT problems this year.

One could infer from this that in 1998 you predicted the impacts of y2k would not be a big deal in 1999. If that were the case, then you would have a fairly credible track record for predictions.

Instead, you have no track record at all. This is not necessarily a bad thing. In fact, I congratulate you for not sticking your neck out in making a prediction. That was a much better stance to take than that of many of the leading y2k "experts" who have demonstrated their lack of expertise by making some predictions that turned out to be real howlers.

Still, your record is not as impressive as it would have been if you had taken a public stance and said back in 1998 that Yourdon, Hamasaki, Hyatt, North et al. didn't know what they were talking about when they talked about failures in 1999.

George

Thank you for your courteous reply. You seem much more intelligent when you respond this way than when you go into your rabid, frothing-at-the-mouth attack mode against Hoff. Keep it up.

Your response deserves a more considered response than I have time for now. I will post a response tonight after work.

Robin Messing

-- Robin S. Messing (rsm7@cornell.edu), August 19, 1999.


"Chris, I see you are getting well focused on the reality checks of Y2K."

I have been for 1 1/2 years. Never lost my focus. Whatever gave you the impression that I wasn't? My plea to George to calm down and let a polly and a doomer debate civilly? When things are presented to "newbies" in an unemotional, non-personal manner, they can make up their own minds as to who's making more sense. Flame wars tend to disgust and scare people away.

-- Chris (%$^&^@pond.com), August 19, 1999.


Chris, Mr. Messing, everyone,

One of the reasons why I consider Hoff's underpinnings to be false is explained in the "Gartner Group Report" thread.

I cordially suggest that you take a look at the evidence presented there. Thank you.

-- George (jvilches@sminter.com.ar), August 19, 1999.


George,

You wrote:

You say you "haven't seen any statistics anywhere that demonstrate that 'we' haven't made enough enough (sic) progress on the critical systems to avoid a major disaster". Now I wonder who you mean by 'we' and why you decided to revert the burden of proof, but let's leave that aside for the moment.

By "we", I meant the programmers and technicians doing the remediation work. Perhaps I should have used the word "they" since I am not one of them.

As far as the burden of proof goes, I think the side making a case for a major investment in preparation bears the burden of demonstrating that there is a reasonably high probability of disaster.

I have often heard the phrase "prepare for the worst and hope for the best." This sounds reasonable at first glance, but it falls apart when one examines it through a cost/benefit analysis. What exactly is the worst, and should it be prepared for at any cost, no matter how improbable it may be? There is a possibility that I could get run over when I cross the street. If I really want to prepare for the worst, I can mitigate this danger entirely by either refusing to cross the street at all or by crossing it only when riding inside a tank. I suggest the probability of getting hit is so small that it does not merit these costly countermeasures. The burden of proof would rest upon anyone who claims otherwise.

To give you another example, Andy and a few others were very concerned in June about the threat posed by either Comet Lee, or another comet that Nostradamus had predicted would be discovered during the August 11th solar eclipse.

http://www.greenspun.com/bboard/q-and-a-fetch-msg.tcl?msg_id=000x1n

http://www.greenspun.com/bboard/q-and-a-fetch-msg.tcl?msg_id=000whs

http://www.greenspun.com/bboard/q-and-a-fetch-msg.tcl?msg_id=000wTn

If we were to take this threat seriously, then the obvious thing to do would have been to buy twenty years of specially preserved food and move into deep underground caves to protect us from the impact. That would truly be preparing for the worst and hoping for the best. But most of us would do a cost/benefit analysis and conclude that the risk was too low to warrant such costly measures. The burden of proof that we were about to be impacted by a comet should have fallen upon Andy and his fellow believers.

At any rate, what you are saying Mr.Messing is that avoiding a "major disaster" for 'us' is what Y2K remediation and testing is all about. Thus to be consistent with your line of thought I gather that a "medium size disaster" would make you feel comfortable enough, in which case I dare to say that 99% of the world population doesn't have a clue of your risk management philosophies concerning Y2K and that this, by itself, will bring about very serious political consequences.

Maybe I should have clarified what I meant by "major disaster". By this I was referring to the type of disaster that many on this forum were preparing for, i.e., electricity down for more than a few days, months without food, etc. I have seen no evidence that we will have prolonged power outages. For most people, there is no need to buy electric generators with weeks or months worth of fuel, nor is there a need to put away more than two weeks' worth of food. (I'm not even sure there is a need to put away even one week's worth, but I personally feel more comfortable with a supply of at least two weeks, y2k or no y2k.)

Still, Mr. Messing, please be aware that there are "lies, damn lies, and statistics", which in the Y2K case mean very little as probable completion percentages are almost worthless indicators if

(1) iron triangle isn't almost 100% functional, both here and ABROAD

(2) international banking isn't 99-100% compliant (Alan Greenspan). Please let me remind you that there are approximately 200,000 (two hundred thousand) banks worldwide and that the SWIFT system is far from being compliant.

I am not sure from this whether you've been reading the Hoff vs. Heller debate, since you talk in terms of the necessity of compliance. Even Heller would agree with Hoffmeister that organizations don't have to be compliant to work. The major point in their disagreement is how far away from compliance our software is and whether or not that is too far.

Here is an interesting article on Debugging the y2k story:

http://www.ghsport.com/public/y2k.htm

The Compliance-Metric Bug

A common abstraction appearing in many stories is "Y2k compliance", which lumps all possible date-related failure modes and consequences for each system into a simple binary ("true/false") measure. Essentially, this concept says that if a system works the same (or as well) with 21st century dates as with 20th century dates, it's considered "Y2k compliant" -- if not, it isn't. You really can't talk about partial compliance unless you have some way to meaningfully map all of the real, multi-dimensional details into this single-dimensional measure.

The "compliance" abstraction works only in the limit of total compliance: obviously, if everything is Y2k-compliant there is no problem. But, if only 90% of individual systems are compliant and the rest are not, "compliance" tells us nothing about what the result will really be. This is because the results depend on more details than "compliance" measures. An expert can wave his hands and imagine whatever he likes, but the percentage of compliant systems is useless as a measure of the outcome -- unless it is very close to 100%.

Therefore, any story that bases its predictions on projected levels of compliance is flawed. Its predictions could turn out to be correct, of course, but only by accident.

"Compliance" is a poor way to measure even an individual system. When you look more closely at the root of the Y2k problem you find more interesting and complex things going on than what you may have been led to imagine. Explanations of the Y2k bug tend to be so simplified that the real technical issues are missed or misrepresented. This may be necessary in order to "inform" more people, but it leads to a false sense of confidence in the resulting story.

In regards to the SWIFT system, from what I have read, the system itself is y2k-ready, but many foreign banks have not yet "plugged" into it to test how their fixes mesh with the system.

Furthermore Mr. Messing please also be advised that Hoff's point (4) above concerning world Y2K status was so weak that even his own cited "sources" (check them out please) deny what he is supposedly trying to prove.

This is your strongest point, and you make it very well on the Gartner Group thread. Undoubtedly, the effects of y2k will be greater in other countries than in the U.S. Still, the types of impact this would have in the U.S. are economic. It may well cause a recession here. But going off and moving to a farm or storing a year's worth of food seems like a poor reaction. In fact, the most likely impact of y2k may be that you could get laid off from your job. If that is the case then you might be worse off having spent your money on preparations instead of saving it for a rainy day.

Robin Messing

-- Robin S. Messing (rsm7@cornell.edu), August 19, 1999.


"I cordially suggest that you take a look at the evidence presented there. Thank you."

George, how handsome you look when you speak this way! ;-)

"In fact, the most likely impact of y2k may be that you could get laid off from your job. If that is the case then you might be worse off having spent your money on preparations instead of saving it for a rainy day."

Robin, I was about to shut off the computer when I said to myself "one more thread, then I go", and so I clicked this one and scrolled down to the new answers. I saw your long post and thought "I'll read this tomorrow, don't have time now", when all of a sudden this last paragraph leaped out of my screen and bit my Illogicness Detector, and now the lights and alarms won't go off until I respond to it.

If I lost my job because of Y2K and didn't have a good store of food and other necessities, and assuming I'd still have money in the bank and assuming that I was a saver to begin with, I'd now have to go out and buy the food and other necessities, right? I'd have to pay higher prices than last year, wouldn't I? What if I were one of those people that reports say mainstream Americans are, that is, living paycheck to paycheck with no savings? How would I get the money to buy the food and other necessities?

Y2K preparations ARE for rainy days. Stormy days too. Hurricane days even.

-- Chris (%$^&^@pond.com), August 20, 1999.


You look pretty cute too Chris this late at night...

Mr. Messing, Chris,

We have all made our points I guess, and sometimes it boils down to the glass half empty or half full dilemma. As far as the burden of proof is concerned, I insist on the example I mentioned in a post above in reference to the similarities between AIDS and Y2K.

By the way, where is everybody else? Is it Jim Lord's Y2K Navy report that keeps everyone away ?

I suggest we all three meet at the "Gartner Group report" thread because I will present new stuff concerning asset-based analysis and the required granularity for any statistics to be valid. Self- reporting bias is another big deal discussed at the Gartner Group report thread.

Take care

-- George (jvilches@sminter.com.ar), August 20, 1999.


I think the discussion to date is interesting, but it is somewhat myopic in that it has been totally focused on the validity and number of function points that will fail. My programming experience tells me that the bigger problem, and the one that will take the most programmer time, is finding and fixing the errors in the database(s)/data files that will inevitably result from the program bugs. Having done both, I have found the data problem to be harder and take longer to fix.

Another facet of the same problem is that if a compliant program is fed erroneous data from a program with a bug in it, the compliant program may itself fail (best case) or corrupt its data as well. Since most of the discussion was based around mission-critical systems, there seems to be the unstated assertion that the companies can do without (work around) the non-mission-critical systems. I submit from my own experience that the initial data for the mission-critical systems often comes from non-mission-critical systems.

I have often found that the numbers entered into many mission-critical programs are derived from spreadsheets or other special-purpose programs. These "programs" are not usually under the control of the corporate IT department and are likely NOT to be counted at all in the Y2K remediation effort. Some of these spreadsheets are fairly complex and were often written by some summer intern or manager who has since left the company. (I have seen spreadsheets that were 100MB in size and had thousands of calculations in them -- some of which were date-dependent.) I think it unlikely that all of these "programs" will be Y2K-ready. If these programs start feeding erroneous data to the mission-critical applications, it will be a case of Garbage In, Garbage Out (GIGO). But worse yet, it is likely to corrupt the corporate data.

Along the lines of GIGO -- remember that those function points that were not fixed at all, or were fixed incorrectly, will still provide output (would that they would fail hard). Because of the interconnectedness of today's software environment, that data will often be sent to two or more other computers. These computers most often assume that the data are good (it costs too much to do the defensive programming that would rigorously check the data for errors, and it would not have been done for the same reason that Y2K was not programmed out at the outset) and will use it in their processing. Their output will also be corrupted (GIGO) before it is fed to the next step(s) in the processing chain. At each step along the way, more and more stored data will be corrupted. Herein lies the real problem with Y2K and the business applications.
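The corruption chain just described can be sketched in a few lines of Python. This is a purely hypothetical illustration -- the record layout, the two-digit truncation, and the windowing pivot of 50 are all invented for the example, not drawn from any real system:

```python
# A minimal sketch of the GIGO chain: an unremediated feeder emits a
# two-digit year, a "compliant" downstream loader applies a windowing
# rule to guess the century, and every later stage stores the result.

def feeder_record(day, month, year):
    # Non-compliant upstream system: silently truncates the year.
    return {"day": day, "month": month, "year": year % 100}

def downstream_load(record, pivot=50):
    # Remediated consumer: windows two-digit years around a pivot.
    # It can only guess what the feeder meant -- 1999 and 2000 survive,
    # but a 1949 date (yy == 49) gets pushed into the future as 2049.
    yy = record["year"]
    century = 1900 if yy >= pivot else 2000
    return {**record, "year": century + yy}

# The "database" after one pass through the chain: 1949 is now corrupt.
db = [downstream_load(feeder_record(15, 6, y)) for y in (1949, 1999, 2000)]
print([r["year"] for r in db])  # [2049, 1999, 2000]
```

The downstream program never fails hard; it stores a plausible-looking but wrong date, which is exactly the silent corruption the paragraph above warns about.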

We, as a community, should be interested in what happens to the data (since that is what the computer programs are there to provide), not to the programs themselves. I believe that is the explanation for the large companies turning to their Y2K business continuity command centers. Two major questions are: 1) Where will they get good data to run their businesses? 2) What automation will they use to distill all this data into information that can be acted upon?

In many cases the embedded-system problem will just add to the aggravation and stress levels of the people who are trying to fix the problems. This will make the work after the Y2K transition harder than the work before it. It will also push out schedules, because someone critical will not be as available as prior to the transition. Another difficulty will be the "help" that management will be applying to the software work. A late project that is affecting corporate bottom lines will capture a lot of attention. Programmers will be called away from fixing code to provide status checks on their activities. I have seen some late projects where this took up to 2 hours a day of programmer time, every day (once in the morning and once in the afternoon). Obviously this management "help" significantly slowed down progress.

The late programs also had their testing time significantly curtailed because they had to get the product out. Of course, as expected, the combination of all these factors yielded a high software error rate with the data suitably mangled at the end. But management had it their way and their way had to be right because they were paid more than the programmers and software managers that they hired to do the job right. -- I am not trying to be cynical here, these are just observations that I have made in my 20 years as a developer, development manager, system designer, and interested onlooker.

-- M Hoptiak (mhoptiak@compsysarch.com), August 20, 1999.


Mr. Hoptiak,

Your points are well taken and quite pertinent. IMO you have detected yet another valid reason for disregarding the arguments presented by Hoff. Remediation and testing of data files and databases are actually turning out to be as mission-critical as core IT functions. And as there isn't any trustworthy indicator/monitor/survey tool available with which to verify their existence or status, this whole exercise is futile, at best. It's like running in circles of varying radii and believing you are making headway.

And then there's cross-contamination of data, not dates.

And then there's the 15% defects that Gartner Group has found in fully remediated code.

All of this defies common sense guys. Resolution is far off. Sorry.

Take care

-- George (jvilches@sminter.com.ar), August 20, 1999.


One of the reasons for performing this analysis was to address the corrupt data question.

Without a doubt, data file/database corruption can be a large problem. Oftentimes the actual fix for a software error is straightforward, but repairing the resulting data files is time-consuming, or extremely complex.

However, unless there is some reason to believe that Year 2000 errors corrupt data files or data exchanges at a higher rate than implementation errors, the impact on the analysis posted is basically a wash. That is, the implementation errors we are currently experiencing are corrupting data, probably at an even higher rate than Year 2000 errors.

-- Hoffmeister (hoff_meister@my-deja.com), August 20, 1999.


Hoff, what is your take on the up to 15% defects that Gartner Group has just found in fully remediated code?

-- George (jvilches@sminter.com.ar), August 21, 1999.

Please check the "Milne: Days counted for the House of Cards" thread posted today,

Take care

-- George (jvilches@sminter.com.ar), August 21, 1999.


To: Hoffmesiter

I have a question that I'm pretty sure has yet to be asked here, though I think it goes to the very essence of your original thesis on this thread. You claim that we are unlikely to experience any dramatic increase in y2k-related failures because, according to the Gartner Group, we must have already been experiencing approximately that number of such errors as we've passed through 1998 and 1999, with no impact even close to the limits of systemic fault tolerance. My concern is that the chart that goes with the Gartner study that you use as the source for this presumption clearly implies that the level of errors will increase arithmetically as we get closer to 1/1/00, while you have distributed the 25% of y2k errors estimated to occur before that date evenly over the 24-month '98-'99 period.

In reality, of course, probably no one can really know how these failures will distribute themselves chronologically, but if we take the chart at face value, note how a dramatic increase in y2k failures only starts in mid-1999 and rises to a peak level in the immediate weeks surrounding the rollover. Wouldn't this seem to suggest that while it's true that Gartner sees 25% of the errors as occurring before the rollover, they in no way would agree with you that it's "conservative" to assume that they are evenly distributed from 1/98 to 12/99, as you have done? Indeed, aren't they implying that they estimate that sometime around mid-1999 the errors will increase dramatically? Thus, for example, if the error rate is several times greater in October '99 than it was in May '99, then the fact is we really can't know yet if the pre-rollover period is as uneventful as you imply it has been, and will be.

I am writing this on August 22nd, 1999, and from this vantage point, I think such a question is pertinent. In addition, I would note that on that same Gartner chart, it appears that 90%+ of the embedded-chip failures WILL be taking place during the immediate week or two surrounding the rollover, which I think most people agree is going to at least increase the risk of systemic failures. Finally, and I admit I may have missed something here, but it seems to me that you came up with an estimated baseline of 1.05% of function points generating errors at rollover, but that you did this by using a month-long period, when Gartner specifically refers to the rollover period as the two weeks surrounding January 1. If this is true, wouldn't you have to double the projected error rate?
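The arithmetic behind this objection can be sketched with illustrative numbers. The 25% pre-rollover share comes from the thread's Gartner discussion; the linear ramp below is an invented stand-in for the shape of the chart, not Gartner's actual curve:

```python
# Compare two ways of spreading the pre-rollover 25% of failures over
# the 24 months of 1998-99: a uniform distribution (as the thread's
# baseline assumes) versus a linear ramp peaking in December 1999.

PRE_ROLLOVER_SHARE = 0.25   # fraction of all failures before 1/1/2000
MONTHS = 24                  # Jan 1998 .. Dec 1999

# Uniform assumption: the same fraction fails every month.
uniform_monthly = PRE_ROLLOVER_SHARE / MONTHS        # ~0.0104 per month

# Linear ramp: weight 1 for Jan '98 rising to 24 for Dec '99, rescaled
# so the 24 months still total exactly 25%.
weights = list(range(1, MONTHS + 1))
ramp_monthly = [PRE_ROLLOVER_SHARE * w / sum(weights) for w in weights]

print(round(uniform_monthly, 4))   # 0.0104
print(round(ramp_monthly[-1], 4))  # 0.02 -> Dec '99 runs ~2x the uniform rate
```

Both distributions account for the same 25% total, which is the point of the question: the total alone can't tell us whether late 1999 (or the rollover fortnight) carries a much higher monthly error rate than what has been observed so far.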

In any case, these are the various issues that bothered me when focusing on your assumptions, and I would be very appreciative if you can address these specific concerns. In the meantime, I would also like to thank both you and Steve Heller in advance for one of the more interesting exchanges that I've read on this subject...Raymond W.

-- Raymond Weschler (RWBerkeley@aol.com), August 22, 1999.


Raymond, the GartnerGroup charts and estimates are for date-related failures.

The errors I'm talking about are in addition to these purely date errors: errors that happen because you are implementing new systems.

-- Hoffmeister (hoff_meister@my-deja.com), August 22, 1999.


Chris,

Sorry for taking so long to respond to you. Yesterday I was busy just trying to keep up with the Pentagon Papers Folly.

You state that Y2k preparations would come in useful even if you lose your job. That is true up to a point. You certainly will have food to eat, and that money will not have been wasted. But the type of food you buy is important.

A few people here still see Y2k as TEOTWAWKI and believe a Gary Northian scenario of years without power is a possibility. They would tend to buy more expensive dehydrated food (as well as sell their property and move off to a farm to become totally self-sufficient). This would be an overreaction, and people could end up shooting themselves in the foot by overpaying for fancy long-term storage food when they could buy less expensive canned food at the store.

There is also a price to be paid if you buy a year's worth of food and then lose your job. If you have to move to another area to find a new job, you will end up either abandoning the food or paying the cost of moving it. Food may be more expensive next year, but you may come out ahead by saving your money and being less tied down with things to move. This is, of course, an individual decision; there is no hard rule of thumb that will suit everyone. Each person must decide for himself whether he is likely to have to move to get a new job.

Some Y2k preps will end up just being a drain of money if we only see economic repercussions and not a collapse of the infrastructure. Generators and large quantities of diesel oil come to mind. You just can't eat those mistakes. I don't mind olive oil in my diet, but diesel oil is another thing.

Again, there will be exceptions. We could experience a day or two without electricity. Most people can get through this with warm clothes and blankets. But if you are on a dialysis machine and absolutely need uninterrupted electricity, then a generator would be essential to have.

-- Robin Messing (rsm7@cornell.edu), August 22, 1999.


Off?

-- Hoffmeister (hoff_meister@my-deja.com), August 22, 1999.
