French curves and cumulative failures
greenspun.com : LUSENET : TimeBomb 2000 (Y2000) : One Thread
Just doing a bit o' thinking about that Gartner Group curve. For one thing, it is a RATE of failure plot. Second, they expect that failures will start to take off from nominal sometime in June, and there is a discontinuity in the curve at the July transition. From there GG plots what amounts to a standard sine curve with a peak in December/January 1999, then dropping off in 2000. But AGAIN, this is a RATE curve, not a cumulative one.
So for grins I plotted a similar line in Excel and did a cumulative line for comparison. Results follow. I did this because I am concerned with failure effects on people. If my car stops working one day, then I have a problem until it is fixed. You have all been there. You know, the day where everything went wrong and you were just glad to be able to fall into bed at night, exhausted from the constant strain of hitting one problem after another. Car dies, missed appointments, loss of income, cranky family, cost to repair, etc. This is the after-effect of ONE failure (it goes on for days and weeks). The effect is domino-like, and it is cumulative. Every time something goes wrong there is a greater likelihood of other problems developing.
We will hit accumulated failures in August equal to the failure rate for one day in December (approx). This may not sound so bad, but it is. By the time December rolls around, you and I will have had to deal with four and a half times the failures that will occur in December, and many of those failures will not be fixed. Can you see what this means?
The cumulative failures will peak at about 8 times the peak failure RATE. 50% of the failures will occur before December 1999. These failures will overwhelm the IT staffs. At what point will they simply have to admit that they cannot handle the failures? Gartner estimates that significant numbers of failures will really start in mid-June and escalate from there on out till the December/January time frame.
So here are the basic numbers. Dec/Jan rate of failure is 1.0 as an anchor point (relative).
Cumulative failures in relationship to peak rate
April    0.1
May      0.2
June     0.3
July     0.5
August   0.8   (breaks lower 10% of standard deviation)
Sept     1.5
Oct      2.1
Nov      3.1
Dec      4.0
Jan 00   5.0
Feb      5.9
Mar      6.5
Apr      7.2   (breaks upper 10% of standard deviation)
May      7.6
June     7.9

(90% of the failures fall in the Aug 99 through Apr 00 time frame.)
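For anyone who wants to reproduce the running sum, here is a minimal sketch in Python. The monthly rates are inferred by differencing the cumulative figures above; they are assumptions for illustration, not Gartner's actual numbers.

```python
from itertools import accumulate

# Assumed monthly failure rates (relative to the Dec/Jan peak = 1.0),
# inferred by differencing the cumulative figures in the post above.
months = ["Apr 99", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov",
          "Dec", "Jan 00", "Feb", "Mar", "Apr", "May", "Jun"]
rates = [0.1, 0.1, 0.1, 0.2, 0.3, 0.7, 0.6, 1.0,
         0.9, 1.0, 0.9, 0.6, 0.7, 0.4, 0.3]

# Cumulative failures = running sum of the rate series.
cumulative = [round(c, 1) for c in accumulate(rates)]

for m, r, c in zip(months, rates, cumulative):
    print(f"{m:6s}  rate={r:.1f}  cumulative={c:.1f}")
```

Note that the cumulative column keeps rising even while the rate falls off after January; that is the whole point of the original post.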
My opinion and projection from this is that in June 1999 people will start to become tense due to workplace-related failures. In July 1999 people will start to panic. In August 1999 we will see increasing social chaos triggering governmental actions, which will culminate in martial law.
This will calm people down for a month, and then we will see an escalation of conflict in our nation as people's tension level rises due to things failing, remaining broken, having to constantly work around things, and finally the anticipation of 'rollover'. As things continue to fail significantly during the first half of 2000, people will lose all patience because they 'thought it would be over' when the date changed. Then it will sink in that this is here to stay for quite a while (the effects, and the dead systems which will never be brought back to life).
I will not be able to look at responses till this evening. All constructive contributions welcome. Let's gnaw this bone together shall we???
All trolls and Poly's will be ignored. No time to waste on you folks.
-- David (C.D@I.N), April 29, 1999
David, would it be possible to get a copy of your Excel file? Rickjohn
-- Rickjohn (firstname.lastname@example.org), April 29, 1999.
This is mathematically unsound. You are using "points on a chart" and suggesting this is a function. One may do so if one has a statistically valid sample where the "unit" is defined, but at best it is guesswork.
In the case above, you have neither the "necessary" nor the "sufficient" conditions to discuss any "function".
You may have a "first approximation" to a function but you have absolutely no "Function" to attempt to establish any "extrapolations with".
A "curve" or even a straight line is a locus of points. That is all you have and even that is problematical.
Constructing a "function" from that because it "looks like a Sine curve" and then extrapolating that is a no-no in Mathematics.
Suggesting that one can obtain useful information from it vis a vis the "rate of change" is invalid.
A "rate of change" is by definition the First Derivative of a FUNCTION.
Your "connect the dots" methodology ignores the minor detail that no FUNCTION has been, or perhaps can be, defined here. You must prove that at every point, lim F(x) is defined. Your "looks like a Sine" does not count.
Reasons should be obvious: you are dealing with random "reported data" not a "continuous function".
From Freshman College Algebra, one must recall Functions are defined on intervals and that at all points must satisfy the rules of the "Limit". Few people get past memorizing the definition of limits much less being able to construct a function.
-- I. Newton (Isaac@newton.com), April 29, 1999.
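"Newton's" objection can be made concrete with a toy example: two different smooth functions can pass through the same sample points yet disagree about the rate of change everywhere in between, so "connect the dots" does not pin down a derivative. The two functions below are invented for illustration and have nothing to do with Gartner's data.

```python
import math

# Three sample points: (0, 0), (1, 1), (2, 0).
# Both curves below pass through all three exactly.
parabola = lambda x: -x * x + 2 * x              # f(x) = -x^2 + 2x
sine = lambda x: math.sin(math.pi * x / 2)       # f(x) = sin(pi*x/2)

def slope(f, x, h=1e-6):
    """Numerical rate of change of f at x (central difference)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Identical at every sample point...
for x in (0.0, 1.0, 2.0):
    assert abs(parabola(x) - sine(x)) < 1e-12

# ...but their rates of change between the samples disagree.
print(slope(parabola, 0.5))   # about 1.00
print(slope(sine, 0.5))       # about 1.11
```

So a locus of reported data points admits many underlying curves, and each gives a different "rate of change" off the samples, which is exactly the extrapolation hazard described above.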
Very interesting and makes sense, that's for sure. HOWEVER... where do you get your cumulative figures? Is there some formula that I don't know about? I mean, what says it can't be better or worse? So far nothing with y2k has gone according to predictions, it seems to me.
I gotta go hit WalMart for more rice and macaroni
-- Taz (Tassie @aol.com), April 29, 1999.
A lot of us expected that by now we'd have heard about more failures from JAE, April fiscal year ends, etc. Have all these systems been fixed, or have quick and dirty workarounds been made which are allowing them to continue functioning? After all, we don't *have* to use computer systems in exactly the way we used to in order to get by. For instance, certain data processing jobs are just not done, contracts which expire after rollover are entered into systems with a 31 December 1999 expiry, and so on. At some point (I would assume rollover), these workarounds are no longer effective, and if the systems aren't fixed by then, they produce faulty data. So the *apparent* failure of systems would be much greater on 1 January 2000, as it would include all the systems which have failed earlier but have been kept going by fudging. By making a lot of assumptions, I guess some kind of graph of this could be constructed.
-- David Binder (email@example.com), April 29, 1999.
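The masking argument above lends itself to a small sketch: suppose some fraction of each month's failures is hidden by workarounds that all stop working at rollover. Every number here (the monthly counts and the masked fraction) is invented for illustration.

```python
# Toy model of failures hidden by workarounds until rollover.
# All figures are illustrative assumptions, not real data.
actual = {"Sep": 3, "Oct": 5, "Nov": 8, "Dec": 12, "Jan": 20}
mask_fraction = 0.8   # assumed share of failures hidden by fudges

hidden = 0
apparent = {}
for month, n in actual.items():
    if month == "Jan":
        # Workarounds stop working at rollover: the backlog surfaces.
        apparent[month] = n + hidden
        hidden = 0
    else:
        masked = int(n * mask_fraction)
        hidden += masked
        apparent[month] = n - masked

print(apparent)   # the January spike includes all the masked failures
```

The total number of failures is unchanged; only their *visibility* shifts toward January, which is the point being made.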
What about the effect of system failures on the stock market. Paper wealth will plummet causing an increase in "social chaos". Consumer confidence will drop through the floor causing further Govt actions that will be too little too late.
-- Johnny (firstname.lastname@example.org), April 29, 1999.
Are you trying to use Murphy's law to predict the future? Just because:
Your car doesn't work (you should have had it serviced)
Your wife is grumpy (you should have serviced her)
Missed appointments (you should have had contingency plans)
Loss of income (you should have kept your money in the bank where it was safe)
Cost to repair ((car) you fixed on failure) ((wife) anticipation of 'rollover')
Any number of words can be thrown together to "prove" an idea.
-- Cherri (email@example.com), April 29, 1999.
As many of you can tell (Thanks, Dr. A), I don't understand charts and graphs. But, I like a good story.
David, illustrating cumulative failure is a powerful way to educate a curious public about the quality of cascading failure. One technical failure sets the initial conditions for a chain of events that might require redirecting normal activities even without the introduction of additional technical failures.
I wonder about the resilience of our technical/social interface and interoperability.
-- Critt Jarvis (firstname.lastname@example.org), April 29, 1999.
Some of the first problems the public might notice would be in non- accounting software that uses dates in '00. Let's say someone in October 1999 places an order for a product whose arrival date is due in January 2000, or someone in November tries to schedule a doctor's appointment in January.
A friend of mine has personal experience with this kind of problem. In October 1998, he entered an end date of October '00 for a commercial he just recorded and entered into the radio station's computer. The commercial was supposed to play for the first time three days later.
The day the commercial ("spot") was scheduled to run for the first time, the automation, instead of playing the spot, crashed, and it took a secretary a few phone calls and about twenty minutes to get the radio station back on the air.
-- Kevin (email@example.com), April 29, 1999.
Thank you all. And specifically, thank you Sir Isaac for exhuming yourself for a bit of elementary math.
It is obvious that there is no function. The chart that I have seen of Gartner Group's rate of failure looks somewhat sinuous. I do not really care how they got that or what the line looks like.
My point was that we often look at this as if it were a static event that simply passes by, when in many cases the 'failure' will live onward in our experience. It is a cumulative effect (the first generation of failures) as well as a cascading effect due to dependencies. The original post does not address the dependencies issue.
-- David (C.D@I.N), April 29, 1999.
Did you assign any values for fixing the failures? That is, you added the instantaneous failures from the Gartner graph. Did you subtract from the accumulation the fixes that occur (hopefully) after a period of time?
> The cumulative failures will peak at about 8 times the failure RATE for peak. 50% of the failures will occur before December 1999. These failures will overwhelm the IT staffs. At what point will they simply have to admit that they can not handle the failures? Gartner estimates that the significant numbers of failures will really start in mid June and escalate from there on out till December/January time frame.
This is the supposition on which Infomagic bases his scenario.
-- Dean -- from (almost) Duh Moines (email@example.com), April 29, 1999.
RickJohn I'll try to send it along to you ASAP (early next week).
Dean, I thought about that but declined to do it. My thought was to simply illustrate the accumulation of failed systems. In terms of big systems, I think we are not really talking about days, weeks, or months of repair effort. MO is that at some early point the IS staffs will be so swamped with failures that they will be paralyzed altogether. Another consideration is that different kinds of systems will crump as we get closer. A quarterly reconciliation may be very important financially, but a failed stocking order for a grocery will be more keenly felt by most average people. ???
Anyway, just some thoughts to chew on.
Thanks for your response.
-- David (C.D@I.N), April 29, 1999.
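For what it's worth, Dean's subtract-the-fixes idea can still be sketched as a simple backlog model, even though David chose not to build one: failures arrive each month, and the staff fix up to a fixed capacity. All numbers here are assumptions for illustration, not anyone's estimate.

```python
# Backlog model: cumulative failures minus fixes.
# Monthly failure rates (relative units) and the monthly repair
# capacity are both invented for illustration.
failures = [0.1, 0.1, 0.2, 0.3, 0.7, 0.6, 1.0, 0.9, 1.0]
repair_capacity = 0.4   # how much the staff can fix per month

backlog = 0.0
history = []
for f in failures:
    backlog += f                               # new failures pile on
    backlog -= min(repair_capacity, backlog)   # staff fix what they can
    history.append(round(backlog, 2))

print(history)   # backlog stays at zero until the rate outruns repairs
```

With a repair capacity below the peak failure rate, the backlog grows month over month, which is David's "swamped staff" point; raise repair_capacity above the peak rate and the backlog drains instead.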
Though your math is a little suspect, I heartily agree with your contention that if Y2K failures are ever going to have any significant impact, we should see plenty of evidence well in advance of January 1, 2000. We have been seeing these dates down the left hand side of Yourdon's site for a long time, but when the date passes and nothing happens, they simply disappear without comment and without explanations of how disaster was averted. This fact says more about the reality of the situation than all of the words that Ed has written about Y2K put together.
This being the case, I'm going to stick my neck out and make a couple of predictions. Here is what you will see in your Excel spreadsheet.
Month       # of Significant Failures   Cumulative # of Significant Failures
June        0                           0
July        0                           0
August      0                           0
September   0                           0
October     0                           0
November    0                           0
Now, let's assume for a moment that it is December 1, 1999. David, if the above table proves to be correct, will you then adjust your estimate for the impact on January 1? Maybe, being the open-minded fellow that you are, you will. But I'll make another little prediction. My guess is that the vast majority of the folks who have bought into Y2K scare-mongering will not adjust their beliefs one iota if the quota of expected pre-millennial failures does not show up.
Then when January rolls around and the lights are on, people are yakking on the telephone, and there is cash in the ATM, they still will not relent. Every glitch, no matter how trivial, will be trumpeted. Stories will abound of desperate workarounds and programmers working like mad in back rooms, while the front office maintains that everything is normal. We'll be told to just wait: wait until everyone is back at work, wait until those year 2000 transactions start getting processed, wait until the disruptions in trade goods show up, wait until month end, wait until February 29.
And then, several months later, when "hope" that Y2K will be anything bigger than a bump in the road has finally drained away, this web site, and so many others like it, will simply blink out of existence.
-- Computer Pro (firstname.lastname@example.org), April 30, 1999.
April 1st has demonstrated that accounting software has been fixed, or "bandaged", or doesn't cause problems that affect manufacturing or distribution. The jury is still out on non-accounting software and embedded systems.
-- Kevin (email@example.com), April 30, 1999.
Pro - Go your way. I go mine. We shall see. As I have stated the original post described a problem of accumulating failures. That was the intent, not prediction. Bait as you like. End of post. Place your bets as we all do.
-- David (C.D@I.N), April 30, 1999.
Pro, there actually already are problems. See the compilation work done by Rob. Those are published problems, which I assume are FAR fewer than the actual problems. The fact that most problems can presumably be worked around now is great. What David points out is that the workarounds will become more difficult. That makes good sense to me.
-- Tricia the Canuck (firstname.lastname@example.org), May 01, 1999.