Revisiting The Problem Distribution Curve
greenspun.com : LUSENET : TimeBomb 2000 (Y2000) : One Thread
We have discussed this issue before but thought it might be time to take a fresh look.
One argument we have heard for predicting that things will not get very bad goes like this (I'm paraphrasing - this is not a direct quote):
The number of problems associated with Y2K has a typical bell-curve distribution. The bell curve peaks on Jan 1, 2000. Y2K-related problems began many years before the peak and will continue for several years after the peak. But since we are already very close to the peak, problems will not be much worse than they already are.
Please comment on the above statement; whether or not you believe the argument 'holds water' and why or why not.
The main problem that I personally see with it is that it ignores the 'convergence of problem threads' issue. That is, it ignores how problems can interact with each other, like wave ripples in a pond.
The bell curve also implicitly defines an 'average problem' - that on average, problems one month before rollover are equivalent to problems one month after. If the consequences of post-rollover problems are more serious than those of pre-rollover problems, the argument can be made that the bell curve view is incomplete.
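Arnie's severity point can be sketched numerically. Everything below is invented for illustration: a symmetric bell curve of monthly problem counts centered on rollover, and an assumed severity weight that simply doubles after rollover. Even with symmetric counts, the total impact ends up lopsided toward 2000.

```python
# A minimal sketch (hypothetical numbers): even if the COUNT of Y2K
# problems is symmetric around rollover, weighting each problem by a
# severity that is higher post-rollover skews the total IMPACT curve.
import math

def gaussian(t, mu=0.0, sigma=6.0):
    """Problem count per month, symmetric around rollover (t = 0)."""
    return math.exp(-((t - mu) ** 2) / (2 * sigma ** 2))

def severity(t):
    """Assumed severity weight: post-rollover problems count double."""
    return 2.0 if t > 0 else 1.0

months = range(-12, 13)          # 12 months before to 12 after Jan 1, 2000
impact = [gaussian(t) * severity(t) for t in months]

pre  = sum(i for t, i in zip(months, impact) if t <= 0)
post = sum(i for t, i in zip(months, impact) if t > 0)
print(f"impact before rollover: {pre:.2f}, after: {post:.2f}")
```

The severity function and the curve parameters are pure assumptions; the point is only that a count histogram and an impact histogram need not have the same shape.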
-- Arnie Rimmer (Arnie_Rimmer@usa.net), November 28, 1999
We have an event that has NEVER happened before. Therefore we do not know what shape or formula it will take.
Why a bell curve? Because we want it to be one?
How about a circle, where it comes back and bites us in the butt?
-- bob brock (email@example.com), November 28, 1999.
10% within two weeks after rollover
55% during 6-9 months after rollover
15% into 2001 etc.
The problem is: what about just the 10% after rollover? Will it be power grids? Water treatment facilities? Telecom? Banking? Food distribution and transportation? Will that 10% affect your LIFE, TOWN OR STATE?
Then the 55%: will that bankrupt 20% of companies, causing 20% more to fail? How much unemployment will this 55% cause?
Fill in the blanks for all the rest of the problems/chaos.
-- dw (firstname.lastname@example.org), November 28, 1999.
Bob is exactly right. While I suppose we could argue from past experience that the pattern will resemble a curve, Arnie is 100% correct that all remediation/fixes are not created equal. While it is not provable before rollover, logic would suggest that the same errors after rollover will take longer to fix due to interaction issues, supply chain slowdown, etc. These might, in turn, reinforce one another. Taken to the limit, that is where the Infomagic scenario comes from, but one doesn't need that scenario to apply the basic logic.
Multiple waves might be a better analogy or, better still, think of errors within a sector as a wave and sectors influencing one another as multiple waves overlapping with one another.
What is certain is that the standard bell curve distribution takes us back to the core question: is Y2K "just another" fix-it effort and not even one of interesting complexity (this is EXACTLY the polly argument), or not. If yes, the bell curve may only be slightly skewed post-rollover.
If different, think fractals. Think chaos. And hope, mystically, that chaos theory will be on "our" side instead of its own ....
-- BigDog (BigDog@duffer.com), November 28, 1999.
"The bell curve peaks on Jan 1, 2000. Y2K-related problems began many years before the peak and will continue for several years after the peak."
I might suggest that the total number of problems will be the summation of a number of bell curves, each representing a different sector. If this is true, we can expect a definite spike due to embedded systems, occurring in a short time frame.
And, the loss of manufacturing capability will have a different effect than the loss of the payroll function for a company. Therefore, the impact of a specific problem downstream is different. Some problems will be easy to fix. These primarily will be stand-alone (as are most of the problems we're seeing now). Others will cause greater difficulty because one problem will impact another system downstream.
For the most part the problems that are occurring now are related to accounting, are not huge problems, and have been without major downstream impacts.
But, problems in oil refineries, pipelines or wells may have a much more dramatic effect, as will any power problems, or manufacturing problems.
We've yet to see any impact from the embedded systems, and we won't until January 1.
Finally, the impact of problems on the distribution systems won't be known until all votes have been tallied.
All of the above point to a non-gaussian [not bell shaped] distribution.
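The "summation of a number of bell curves" idea can be sketched in a few lines. All sector names and parameters below are invented: a couple of broad sector curves plus a narrow, tall embedded-systems spike at rollover add up to a total that is clearly not one smooth bell.

```python
# Sketch of the sum-of-sector-curves argument (all parameters invented).
import math

def bell(t, mu, sigma, height):
    """One sector's problem rate over time, as a scaled Gaussian."""
    return height * math.exp(-((t - mu) ** 2) / (2 * sigma ** 2))

sectors = {
    "accounting": (-3.0, 4.0, 1.0),   # broad, peaks before rollover
    "supply":     ( 4.0, 5.0, 1.0),   # broad, peaks after rollover
    "embedded":   ( 0.0, 0.5, 3.0),   # narrow, tall spike at rollover
}

months = [m / 2 for m in range(-24, 25)]   # half-month steps, -12 to +12
total = [sum(bell(t, *p) for p in sectors.values()) for t in months]

peak = months[total.index(max(total))]
print(f"aggregate peaks at t = {peak} (rollover), driven by the spike")
```

A sum of Gaussians with different centers, widths, and heights is not itself Gaussian, which is the point kkkk is making about the aggregate distribution.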
-- kkkk (email@example.com), November 28, 1999.
82% of corporations have already felt disruptions because of Y2K. It's just that they are able to keep the problems in-house. Barely. An increase in financial miscalculations will send a lot of them over the edge. Like an iceberg, we only see a small percentage of the whole ice cube.
-- spider (firstname.lastname@example.org), November 28, 1999.
I prefer to think of the problem in terms of Complexity Theory. We live in a mass of interconnected complex adaptive systems. The Year 2000 programming flaw threatens to change the "simple rules" that apply to all the agents of the systems at one time. That has never happened before. There is no way anyone could know what will happen.
Most discussions on the subject focus on the "agent-level" scale. Even analyses of distribution systems or "cascading" fail to recognize the emergent behavior that will occur at a different and larger scale. That emergent behavior is not yet visible.
Heavy dependence on initial conditions translates into a much different world for us all come the new year. What will that world look like? I don't know.
-- Pete (email@example.com), November 28, 1999.
That statement in some ways is reminiscent of the "The Jo Anne Effect wasn't bad so January won't be bad either" type of argument. The weakness of that argument is that many of the problems that have happened so far have involved accounting or financial forecasting software.
Almost all non-accounting software problems, PC BIOS chip and PC operating system problems, and embedded system/process control system problems are still ahead of us. Those are the ones with the potential of being "show-stoppers."
And that argument also doesn't take into account complications, like this one John Koskinen himself has pointed out:
http://www.usia.gov/cgi-bin/washfile/display.pl?p=/products/washfile/latest&f=99050401.glt&t=/products/washfile/newsitem.shtml
We are running events in the United States focusing on small businesses, trying to provide them technical information, trying to encourage them to take action in the face of what we find increasingly is a position where many of them are saying they're simply going to wait, see what breaks, and then they will fix it once it's broken. We are trying to tell them that that's a very high risk roll of the dice, because when they go to get the fix, whether it's an upgrade in their software or a replacement for the software or the hardware, it will be obvious what the fix is, everyone will know how to do it, but the risk is, they will be at the end of a very long line of other people who waited to see what broke and then decided to fix it. And the fix will work just fine when it arrives, but it may not arrive until March, April or May of the year 2000, and these companies and governments and those who decided to wait and see may find that they're going to be severely challenged in continuing their operations while they're waiting for that fix to arrive.
-- Linkmeister (firstname.lastname@example.org), November 28, 1999.
I side with Pete. This issue is MUCH too complex to cram into one bell!!! I think the field of "Linear Chaos Theory" might best apply. On the surface the pattern may seem random, but when charting the individual trends and synergistic effects, a pattern will emerge. This theory is used in medical research, where, for example, graphing a 3-D image of an irregular heartbeat actually yields a "regular" pattern.
-- Hokie (email@example.com), November 28, 1999.
Arnie's Q revives an old doubt about the bell distribution. The cause has a discontinuity in it, which is the reference date (NOW). All date handling operates on PAST/FUTURE and NOW. As NOW goes into 2000, there is a step. Past/future dates exist as stored data or as the results of calculations, but NOW is retrieved from clocks. I know the effect in a very simple application, and I have the feeling that this is precisely the reason that early on "experts" advocated running remediation/testing and production on totally independent systems. Only then can the NOW discontinuity be simulated without affecting a running production system. My knowledge about this is limited, but I have always felt that doubt when anyone proclaimed to have "advanced the clock and nothing happened". Why did nothing happen if they were still connected to the real world? -- There was one item in the Y2k movie that touched on it: "the computer thinks it's 1900". In that situation all stored data were in the future instead of the past in regard to real time. Still puzzled...
-- W (firstname.lastname@example.org), November 29, 1999.
The number of Y2K-related problems can't possibly follow a bell curve. There are too few discrete problems -- multiplied millions of times -- to produce that kind of curve. What's needed for a bell curve is a large number of discrete events or individuals, each having varying "strength."
The part of the Y2K situation that would most likely produce a bell curve is a count of the number of organizations that are Y2K compliant. That's because there is a very large number of organizations, each becoming compliant on some date.
For example, you could use the Fortune 1000 and ask them in which week they became, or will become, compliant. Plot the number of companies for each week and a bell curve should result (albeit, a bell curve with a peak in the last few weeks of 1999). Bell curves can be skewed, with the tail (the side of the curve with the flatter slope) being on the front or back end of the curve.
What's disturbing is that, to me, it looks as if the peak of the compliance curve for the Fortune 1000 will be some time in 2000, with a long tail on the back end. Let's not even talk about all the other organizations and where they are on their curve.
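Dean's compliance histogram can be sketched directly. The compliance weeks below are entirely invented (week 0 = first week of 2000, negative = compliant during 1999); the sketch just tallies firms per week, finds the peak, and compares tail lengths to show the skew he describes.

```python
# Sketch of the compliance-date histogram (data entirely invented).
from collections import Counter

# Hypothetical weeks in which each firm becomes Y2K compliant.
compliance_weeks = [-8, -5, -3, -2, -1, 0, 1, 1, 1, 2, 3, 5, 8, 13, 21]

histogram = Counter(compliance_weeks)
peak_week, peak_count = histogram.most_common(1)[0]

front_tail = peak_week - min(compliance_weeks)   # weeks before the peak
back_tail = max(compliance_weeks) - peak_week    # weeks after the peak
print(f"peak week: {peak_week}, front tail: {front_tail} weeks, "
      f"back tail: {back_tail} weeks (long tail into 2000)")
```

With these made-up numbers the peak lands just after rollover and the back tail is much longer than the front one, which is the shape Dean is worried about for the Fortune 1000.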
-- Dean -- from (almost) Duh Moines (email@example.com), November 29, 1999.
Thanks for the feedback on this.
-- Arnie Rimmer (Arnie_Rimmer@usa.net), November 29, 1999.