What's the deal!

greenspun.com : LUSENET : TimeBomb 2000 (Y2000) : One Thread

I realize that we are looking for the slightest "glitch" these days to post as possible proof.

It just seems that there are a lot of computer errors these days. The question is:

Percentage-wise, have we always had this many computer failures in the world, or has there been a sizable increase since the rollover?

Has anybody made up a graph yet? I would do it myself, but I'm not allowed to talk about Y2K anymore ;)

Have a nice day

-- voynik (voynik@aol.com), January 07, 2000

Answers

If you're going down that road, be sure to only track past errors that had a date-sensitive or other timing-critical component. You know, like the ones being reported *now*.

-- Ron Schwarz (rs@clubvb.com.delete.this), January 07, 2000.

The date/time issue isn't relevant. The issue is (all faults before Y2K remediation) vs (all faults after Y2K remediation including existing faults, non-date/time faults added during remediation/upgrade, and genuine time/date based faults).

It's irrelevant WHY failures are happening; all that matters is the volume, the impact, the time to fix, and the knock-on effects. All the speculation over the veracity of the ubiquitous "Not Y2K related" claim is a pointless sideshow.

In fact, even making a comparison is irrelevant. All that matters is: are there an escalating number of faults now; and are they serious enough to materially impact a significant number of people?

So far, the impact seems to be slight. Let's keep watching though.

-- Servant (public_service@yahoo.com), January 07, 2000.


Exactly!

So, who will volunteer?

-- voynik (voynik@aol.com), January 07, 2000.


Nothing like a "servant" with an agenda!

Spare us the ex cathedra statements, eh?

Hey, "servant" -- your moniker would seem to indicate that you're a government employee. How about divulging the *details*? I think we have a right to see how our "servants" are using our money.

-- Ron Schwarz (rs@clubvb.com.delete.this), January 07, 2000.


Ummm Ron, I think the point "servant" has is valid. The question is NOT how we compare to pre-rollover; the question is, are there enough companies like the one I heard about last night, where the folks are working 20-22 hour days with bosses running around screaming at people, or like my bride's, where they haven't been able to get into their UNIX boxes since Saturday. (Nothing important there except payroll/time accounting/job time accounting... for the week pre-rollover.)

Proper focus at this point in the comedy is to look at the actual, fluid, current situation, and what it bodes for the next 6 - 9 months.

Chuck

-- Chuck, a night driver (rienzoo@en.com), January 07, 2000.



I found this way of calculating Y2K failures a long time ago. Perhaps it is useful for statistics:

The actual rate of residual failure depends on a number of factors, but mostly on the size of the system and the scope of the changes. Under average conditions, modest changes to a moderately sized system, the rate would be about 7%. The scope of Y2K changes is, of course, much more extensive than this and many of the systems are extremely large, so the residual failure rate is also likely to be higher. Nevertheless, for the sake of argument, let us again assume an overly optimistic residual failure rate of only 5% for Y2K related changes. But this is only for one system. For a business with multiple systems (which they all have), the chance of a system failure can be computed as: 1 - (1-f)^n, where "f" is the failure rate and "n" is the number of systems.

An average small business would have perhaps 5 systems so, assuming a residual rate of 5%, they have about a 23% chance of at least one system failure [ 1 - (1-0.05)^5 = 0.226 ]. A medium size business would typically have about 25 systems and, therefore, a 72% chance of a failure [ 1 - (1-0.05)^25 = 0.723 ]. A large business with 100 or more systems would have a 99% chance of a failure [ 1 - (1-0.05)^100 = 0.994 ]. This is EVEN IF ALL OF THE SYSTEMS ARE FIXED! Of course, many of these failures will be relatively easy to fix, but others will require an effort beyond the capabilities of the business and they will not be fixed before the business itself fails (this is particularly true for small and medium businesses using packaged software). In addition, the great majority of these failures will have at least some domino effect on related customers and vendors. To make it even worse, virtually everybody will be facing these problems at about the same time, leading to a chaos in which actually fixing the problems becomes almost impossible.
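The arithmetic above is easy to verify. Here is a minimal sketch of the calculation, assuming the poster's 5% residual failure rate and the stated system counts (5, 25, and 100) for small, medium, and large businesses:

```python
def chance_of_failure(f: float, n: int) -> float:
    """Probability that at least one of n independent systems fails,
    given a per-system residual failure rate f: 1 - (1 - f)**n."""
    return 1 - (1 - f) ** n

# The assumed 5% residual rate applied to the three business sizes:
for label, n in [("small  (5 systems)", 5),
                 ("medium (25 systems)", 25),
                 ("large  (100 systems)", 100)]:
    print(f"{label}: {chance_of_failure(0.05, n):.3f}")
# small  (5 systems): 0.226
# medium (25 systems): 0.723
# large  (100 systems): 0.994
```

Note the formula assumes failures are independent across systems, which the post takes for granted; correlated failures (shared vendors, shared date libraries) would change the numbers.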

-- hzlz (mph@netbox.org), January 07, 2000.


Past computer errors were never significant enough to "break the camel's back" or to hinder productivity as a whole for any major amount of time. Let's just see if the camel's back breaks in the future, or if we have a major productivity drag. That might show us whether the Y2K reports were true.

Does anyone have any informed inside insights as to manufacturing processes and how they are looking at this point? Not govt spin, but INSIDE.

-- tt (cuddluppy@nowhere.com), January 07, 2000.


Hey Morons, you people act as if computer glitches are something new. The only reason it seems like a big deal is because all of a sudden every stinking one of them is being reported on this site. Why? Because a lot of you are feeling stupid right now for buying North's book and 800 lbs. of spam.

Nobody ever said that there would be absolutely no computer problems after the rollover. We had them before and we will continue to have them.

A lot of you need to grow up, get a life, and get a grip on reality.

-- Ben Frederickson (bogrites_rulz31@hotmail.com), January 07, 2000.


-- Ben

Don't you think it's time you got an apartment of your own and moved out of your parents' basement?

-- distgustedwithtrolls (disgusted@trollsmakemepuke.com), January 07, 2000.


Clearly there have been more problems since the rollover, and the problems are on the rise, both in frequency and magnitude. I don't have a graph at this point, but so far it looks like close to 30% more problems than normal. The magnitude is harder to calculate.

-- (sender@horef.net), January 07, 2000.


Anybody remember when Koskinen, around the middle of December, introduced the idea of "benchmarking" Y2K failures against a "normal failure rate", so that people wouldn't take every failure as a Y2K failure? It's interesting that they aren't doing anything with that "spin" now. Why? Food for thought.

-- lanina (lanina1963@yahoo.com), January 08, 2000.
