If only 48% of big business will be ready....

greenspun.com : LUSENET : TimeBomb 2000 (Y2000) : One Thread

It now appears that about 50% of all businesses will NOT be ready for y2k, according to the latest Cap Gemini findings. Now, since these statistics come from the horse's mouth (the businesses themselves), that only leaves one very important question to ponder. How badly will y2k problems affect the various thousands of systems that admittedly won't be ready? Does TSHTF, or can businesses work around it?

-- citizen (lost@sea.com), August 12, 1999

Answers

Considering the disasters we've already seen with systems that were supposedly "fixed," it's pretty clear that the ones that aren't fixed will fare even worse. With half of the companies in the US gone, there won't be any workarounds.

-- (its@coming.soon), August 12, 1999.

citizen,

It's not a binary, all-or-nothing situation. Even in good times, I think you'll find that only 99% of the mission-critical systems in a large company are working. There's always a problem somewhere, always a glitch with one or more computers, vendors, business partners, etc.

The significant question is "what percentage of your mission-critical systems do you expect to have fully remediated and tested?" I'm working on a survey now that's similar to the Cap Gemini survey -- and mine indicates that approx 92.7% of the organizations believe that 99-100% of their mission-critical systems will be fully remediated and tested. That's the good news ...

The bad news is that approx 1% believe that less than half of their mission-critical systems will be ready; and approx 6% believe that 90% or less of their mission-critical systems will be ready. The remaining 1% believe that somewhere between 90% and 99% will be ready.

Even if half of the mission-critical systems are NOT ready, that doesn't necessarily imply that the business will shut down right away (on the other hand, the failure of even a single mission-critical system might do the trick, depending on the circumstances). But for most companies, the other key question to ask is, "If you discover that a mission-critical system is broken on January 1, 2000, how long will it take to fix?" Everyone can guess and predict and prognosticate ... but there is very little "hard" data out there to back it up.

Ed

-- Ed Yourdon (HumptyDumptyY2K@yourdon.com), August 12, 1999.


I have always trusted my intuition to a large extent, generally with favorable results. And as news emerges on various events I like to think I'm ahead of the pack on what is happening and what the result will be. But I must admit that y2k has me stymied! Will it be a 1, or will it be a 10? I will prepare, but I wish I could find a good yardstick to measure with. I think maybe between a 6 and an 8. Maybe not TEOTWAWKI, but much hardship and pain.

-- citizen (lost@sea.com), August 12, 1999.

Ed: is your 92.7% based on an anonymous survey? I assume it is self-reported by the company?

-- a (a@a.a), August 12, 1999.

Ed:

The significant question is "what percentage of your mission-critical systems do you expect to have fully remediated and tested?" I'm working on a survey now that's similar to the Cap Gemini survey -- and mine indicates that approx 92.7% of the organizations believe that 99-100% of their mission-critical systems will be fully remediated and tested. That's the good news ...

How can you account for the fact that the Cap Gemini survey is so much more pessimistic than yours? Besides that, there are several more questions that I haven't seen the answer to anywhere:

1. How is a critical system defined?
2. What happens if a number of noncritical systems fail?
3. Why should anyone believe these latest "expectations", when all the previous ones have apparently been incorrect?

To demonstrate the latter point, if you go back and look at the May 10th Cap Gemini finding and compare it with the latest one, you'll see that the situation has worsened dramatically. Formerly, 55 percent of the large companies claimed that they had made their December 31st, 1998 deadline for fixing all their systems, while now only 48 percent claim that they will make the December 31st, 1999 deadline for only their critical systems!

Given that, I think it's pretty clear that none of the statistics mean anything -- except that we are in for a heap of trouble.

-- Steve Heller (stheller@koyote.com), August 12, 1999.



ED:

I am confused. If 50% of all businesses will "NOT be ready" according to the latest Cap Gemini findings, then how can your figures be so optimistic? If Microsoft takes several years to upgrade an operating system (such as Win 2000, at 40 million lines of code), bugs and all, how can these huge companies, many of which started in the past one, two, maybe three years, get so much done?

I realize many are not designing a system from the ground up, but with so many operating systems, so many software packages and so many platforms, embedded devices, etc., most online on a daily basis -- how can so many be so far along? The 99 or 97% stuff seems so hard to fathom.

-- dw (y2k@outhere.com), August 12, 1999.


There are now just 85 workdays 'til the END.

If you allow that no real heavy lifting occurs after Thanksgiving (too many distractions), it means there are just 59 workdays left!



-- K. Stevens (kstevens@It's ALL going away in January.com), August 12, 1999.
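K. Stevens's countdown arithmetic above is easy to check. A quick sketch (not from the thread; plain weekday counting, ignoring holidays) tallies Monday-Friday days between two dates. A raw count from August 13 through December 31, 1999 comes out higher than 85, so the figures above presumably net out holidays and other lost days:

```python
from datetime import date, timedelta

def workdays(start, end):
    """Count Monday-Friday days from start through end, inclusive (no holidays)."""
    count, day = 0, start
    while day <= end:
        if day.weekday() < 5:  # 0 = Monday ... 4 = Friday
            count += 1
        day += timedelta(days=1)
    return count

print(workdays(date(1999, 8, 13), date(1999, 12, 31)))  # 101 raw weekdays to year-end
print(workdays(date(1999, 8, 13), date(1999, 11, 24)))  # 74 raw weekdays before Thanksgiving (Nov 25)
```

Either way, the point stands: subtract weekends (and whatever slack you believe in), and the remaining time is far shorter than "140 days" sounds.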


K. Stevens,

I think there will be a lot of sweating and heavy lifting after Thanksgiving. This is not a normal year.

-- Mara Wayne (MaraWayne@aol.com), August 12, 1999.


I think there will be a lot of sweating and heavy lifting after Thanksgiving. This is not a normal year.

I agree, Mara. And I think that situation, in and of itself, will cause problems. People who are (rightly or wrongly) accustomed to slowing down the work-pace at the end of the year are not going to be able to do that. In fact, if more and more system implementations are delayed (for one reason or another) until as late as possible, then not only are they going to not be slowing down, they're going to have to speed up: result, increased stress: result, more errors of all kinds.

-- Lane Core Jr. (elcore@sgi.net), August 13, 1999.


I keep asking, but no one seems to be able to answer...

When did the same short-sighted, duplicitous, incompetent managers who have been Dilbert fodder for the last X years become the world's most effective, scrupulous, technologically competent people on earth?

If they couldn't be trusted before, what has any of them done to become more trustworthy now?

If, for example, a major airline's mission-critical reservations system works, but its non-mission-critical baggage-handling system doesn't, at what point does it break down?

If airlines can't move people or baggage, what effect does this have on those who were relying on air transportation?

Help me understand...

-- Scott R. Lucado (srlucado@aol.com), August 13, 1999.



Scott, you do understand. (Reality Check)

-- Lane Core Jr. (elcore@sgi.net), August 13, 1999.

There are two important lessons in this Cap Gemini poll. First, it is stated that 52% *admit* that they may not make it. And this is a trend going in the wrong direction. Second, therefore, how many of those that are saying they will be ready actually won't be ready, but either don't want to say that just yet or are mistaken about their progress?

-- Gordon (gpconnolly@aol.com), August 13, 1999.

Steve et al,

The main point I was trying to make in my response to "citizen" is that there is a big difference between saying you're "more-or-less" ready, and saying that you're COMPLETELY ready. To say "we're not going to finish repairing all of our mission-critical systems" sounds pretty awful until you find out that they fully expect to repair 97%, or 98%, etc.

A couple of people made an obviously important point: all of this stuff, including the survey I'm working on for the Cutter Consortium, the Cap Gemini reports prepared by Professor Howard Rubin, the various Gartner reports, etc., is based on self-reported opinions from people who are probably paid to be optimistic, "can-do" problem solvers. All of this so-called data has to be taken with a large grain of salt (if not several pounds of salt) unless it's confirmed by some kind of independent audit.

Several other points to keep in mind when trying to figure out why the conclusions from survey A don't seem to match those from survey B:

1. In many cases, the sample-size is relatively small (e.g., a few hundred respondents), so a statistical variation is not unusual.

2. The surveys may have been taken at different points in time. Even a few weeks can make a difference in terms of people's perceptions of how they're doing with Y2K.

3. The surveys may ask similar "how-are-ya-doing?" questions, but phrased in slightly different ways, which in turn elicits different responses on the part of the respondents.

Again, the main point I was trying to make to "citizen" is that one must be careful when reading pessimistic surveys to avoid the trap of falling into the "all-or-nothing" mindset. Keep in mind that I'm pretty pessimistic about all of this stuff myself, but I think it's important to avoid getting carried away by the presumably innocent tendency on the part of media reports to express things in extreme terms...

Ed

-- Ed Yourdon (HumptyDumptyY2K@yourdon.com), August 13, 1999.
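Ed's first point above -- that small sample sizes produce statistical variation -- can be put into numbers with the standard normal-approximation margin of error for a proportion. A hedged sketch (the sample size of 300 is an assumption, not a figure from the thread):

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for an observed proportion p from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# A "48% ready" finding from an assumed 300 respondents carries roughly
# a +/- 5.7-point margin, so two honest surveys of the same population
# could easily report figures several points apart.
print(round(100 * margin_of_error(0.48, 300), 1))  # -> 5.7
```

That alone can account for a fair chunk of the disagreement between survey A and survey B, before question wording or timing even enters into it.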


Scott,

Your point is a valid one, but you probably need a slightly different example. When we all learned that the new Denver International Airport had a tendency to "eat" baggage because its new computer system didn't work, we veteran travellers adopted such obvious work-around strategies as: (a) don't check any baggage, just carry it on board; (b) don't schedule any flights through Denver; (c) don't schedule any business in Denver; (d) don't talk to anyone in Denver; (e) don't acknowledge that Denver exists....

Sorry, I'm getting carried away. Nearly four years after my first experience with Denver International Airport, I'm still grumpy.

Ed

-- Ed Yourdon (HumptyDumptyY2K@yourdon.com), August 13, 1999.


But then Ed, shouldn't the survey ask:

1. Has your company completed all surveys of its computer hardware, software, customers and vendors for year 2000 compliance? If not, when will you finish?

2. Has your company completed testing its y2k solutions on all critical systems? If not, what is your current scheduled date? What was the original scheduled completion date?

3. Have these critical systems been re-installed, and are they in-service now for routine business operations? If not, when will the updated systems be installed for routine operations?

4. Has your company completed testing its year 2000 solutions on all non-critical systems? If not, when will you finish working on non-critical systems?

5. Do you have overseas vendors, customers, or suppliers of raw materials? If so, when did these overseas systems complete testing? If they have not yet tested their systems, what is the current test completion date?

___

It would appear that surveys that "allow" unjustified optimism by self-reporting "opinions" rather than completion dates tend to hide the problem.

Granted, false dates and overly optimistic dates will still occur, but the simple passage of completion dates - with no successes reported - will raise or resolve alarm levels.

-- Robert A. Cook, PE (Kennesaw, GA) (cook.r@csaatl.com), August 13, 1999.



Robert,

Good questions -- the survey I'm working on has indeed asked some things in the area of #2 through #5 on your list. Also, lots of questions about whether companies intend to institute a "code freeze," and if so, when. When will contingency planning be finished, when will "go/no-go" decisions be made regarding non-compliant vendors, etc, etc.

Unfortunately, the willingness of companies to spend time answering all of these questions is somewhat limited -- and the resources to check the answers, to ensure at least a reasonable amount of consistency, are also limited.

Bottom line: we're all groping in the dark, to one extent or another. We do the best we can, with limited data from folks who are self-reporting, to see if we can spot any meaningful trends.

But with only 140 days to go, opinions and trends are beginning to matter less than "reality" ...

Ed

-- Ed Yourdon (HumptyDumptyY2K@yourdon.com), August 13, 1999.


Ed,

I loved your comment on the new Denver airport. But, by golly, that roof is something to see, isn't it? Like a huge tent set up for the greatest "revival" meeting of all time. Sure looked good on paper.

-- Gordon (gpconnolly@aol.com), August 13, 1999.


Robert:

Much as I hate to disagree with Ed, these are NOT good questions. Good questions are MUCH harder to create than you might think. I'll try to illustrate here, not to criticize but just to highlight what you're dealing with when you construct a survey.

[1. Has your company completed all surveys of its computer hardware, software, customers and vendors for year 2000 compliance? If not, when will you finish?]

This question assumes that such surveys are required, that they are useful, that they are scheduled, etc. Let's say the office coffee maker (to take a rather silly example) has not been 'surveyed'. If you're answering this, does your answer become 'no'? If you have sent surveys to your customers and vendors but some have never been returned, can you say 'yes', you did the survey? If you're still waiting for replies, is it sufficient to answer "When will you finish" by saying "I don't know"?

In other words, just what are you trying to find out here?

[2. Has your company completed testing its y2k solutions on all critical systems? If not, what is your current scheduled date? What was the original scheduled completion date?]

As you should know, testing is NEVER completed. It's theoretically impossible. Most reasonable organizations, whatever their project planning software shows, basically intend to complete all remediation as soon as they can and test right up until testing becomes moot for one reason or another. They might have a time machine for testing, and running battles over who gets to schedule tests during the day, and who gets stuck with nights and weekends. Are you really interested in how this internal scheduling gets bumped around, and why? Again, what are you really asking here?

[3. Have these critical systems been re-installed, and are they in-service now for routine business operations? If not, when will the updated systems be installed for routine operations?]

Robert, you are now getting into deep water, I'm afraid. OK, let's say the respondent broke 'critical systems' down real fine, into maybe a few thousand such systems. Now say your survey solicits a spreadsheet (a BIG one) showing the exact status of every one of these thousands of systems, on a day-to-day basis, for (say) the last six months. And you notice some were never taken out of service, and some were re-installed and removed several times, and some were retired altogether, and parts of the original system were replaced by a new package or set of packages, and a hopeless (and unprofitable) division of the company was simply dropped rather than remediated, but this led to other systems needing modification not because of date bugs but because of all the shuffling around, etc.

In other words, you're drowning in data. How should all these data be summarized in light of what you were originally asking? What *were* you asking, really?

[4. Has your company completed testing its year 2000 solutions on all non-critical systems? If not, when will you finish working on non-critical systems?]

A lot of the same problems apply here as well. Also, you're assuming that the respondent broke systems down into critical and non-critical. Why assume such a breakdown was made? What if they broke systems down into 5 divisions of importance? What if they divided up systems along some other lines besides criticality (like functional division within the company, or by line vs. staff, or plant-by-plant, or many other reasonable mechanisms)? How should they answer when your question makes a MAJOR assumption not applicable in their case?

[5. Do you have overseas vendors, customers, or suppliers of raw materials? If so, when did these overseas systems complete testing? If they have not yet tested their systems, what is the current test completion date?]

Let's start with the first question. How do you deal with multinational corporations? If your supplier is headquartered in Taiwan but has a plant in California supplying you, is this an overseas supplier? What if they also have plants in Mexico and Finland, and you get materials from all of them?

Next, what do you mean by a test? Remember that testing CANNOT be completed, though anyone can arbitrarily declare testing DONE at any time for any reason. Robert, this isn't just hair-splitting. If you're GM and your foreign supplier is Jose's Garage, you betcha they completed their testing, si senor! Jose may not know what you're talking about, but he knows what you want to hear.

OK, on a more general level about surveys: you are often faced with a really lousy choice. If you ask essay questions, you can be maximally flexible, let the respondents describe the way things REALLY work at their shop, and gain real and useful insight into their status, their operations, their plans and goals, etc. The downside is, people often don't fill out essay questions (they take too long), and the responses you get are DAMN hard to quantify into useful statistical patterns (because they're so different from one response to the next that they defy meaningful compilation). And of course, you STILL might have asked the wrong questions.

Conversely, nice multiple choice questions or yes/no questions or questions asking for specific comparable information (like you asked) have so many built-in assumptions about how the respondent does business and handles remediation that although you can do great statistical stuff with the responses, those responses aren't nearly as meaningful as they seem.

Constructing a survey to collect *useful* data is extremely difficult, believe me. So difficult that most surveys are constructed in such a way as to maximize the quantifiability of the responses, at the *often considerable* cost of accuracy and meaningfulness. To create good questions, you need to sit down and really give a LOT of thought to EXACTLY what you're trying to find out. Then you need to construct several sample questions. Then you need to find some guinea pigs and ask them these questions in meetings, getting feedback as to why the answers aren't descriptive of reality and how the questions might have been better phrased or made fewer wrong assumptions. And you iterate on each question, many times, with many test audiences. NOT a simple process.

And after all this, you'll still get responses you wouldn't believe, and notes in the margins, and non-responses to some questions, and multiple answers to pick-ONE questions, and on and on and on.

FINALLY (groan), after all this effort, all this question honing and validation and analysis and statistical effort, someone is going to take the whole schmeer and boil it down to a single sound bite -- "ONLY 48% WILL BE READY" -- and you can only sigh.



-- Flint (flintc@mindspring.com), August 13, 1999.


This is likely much worse than a "mere" 52%! We need to look at that 52% not being fully compliant for mission-critical systems in terms of the trend... A year or so ago, the number was 12%, increasing to 16%, and earlier this year, to 22%. I don't know what the final percentage will be, but it will apparently be a LOT worse than 52%.

Remind me to order a couple dozen chickens (early!) tomorrow. And an extra hundred pounds of chicken feed. That is, if I can get them in before Hurricane Dora is due Monday night/Tuesday...

-- Mad Monk (madmonk@hawaiian.net), August 14, 1999.
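Mad Monk's extrapolation above can be sketched numerically. This is only an illustration, not anything from the thread: the survey dates are an assumption (the four reported figures are simply spaced evenly), and a plain least-squares line is fitted through them:

```python
# Reported "not fully compliant" percentages, evenly spaced in time
# (the spacing is an assumption; the thread gives only rough dates).
readings = [(0.0, 12.0), (1.0, 16.0), (2.0, 22.0), (3.0, 52.0)]

def fit_line(points):
    """Ordinary least-squares slope and intercept for (x, y) pairs."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    slope = (sum((x - mx) * (y - my) for x, y in points)
             / sum((x - mx) ** 2 for x, _ in points))
    return slope, my - slope * mx

slope, intercept = fit_line(readings)
projected = intercept + slope * 4.5  # a notional "end of 1999" point
print(round(slope, 1), round(projected, 1))  # -> 12.6 63.3
```

And if the growth is faster than linear, as the jump from 22 to 52 suggests, the year-end figure would be worse still -- which is the point of the post above.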


Good points, good criticism!

Ah, Sir Flint of the Hard-nose, your observations show exactly why it's so difficult to respond to the newspaper's "two-sentence" summary of any given "survey" sent to an entire industry, where not only are the questions not defined, but the answers aren't provided, and often the only statistic given is the number of "companies" that replied.

I was deliberately trying to get as "broad-based" a reply as possible, but agree with you that the "Si, Senor GM- we are ready" response cannot be eliminated.

Now, with fewer than 140 days to go, how good are the un-Fortunate 500? How many of them are facing this "we don't know, can't predict, and can't figure out how to figure it out" paradox?

How many of the 500 are complete themselves?

How many are complete, but individually vulnerable to "local" failures of systems or smaller vendors at each operation?

-- Robert A. Cook, PE (Kennesaw, GA) (cook.r@csaatl.com), August 16, 1999.

