A pessimist's best case scenario


To paraphrase Ed Yardeni: For Y2K to be a non-event, the global computer network must be 100% fixed. This seems a safe statement to make. If we can agree that this is the case, here is my best case scenario:

Assume the "iron triangle" survives intact (no power loss, banks survive, phone lines continue working). This would require, IMHO, nothing short of intervention from the Y2K fairy, but for the sake of argument, I'll make that assumption.

A wildly optimistic estimate would say that the global network would be 90% fixed. Won't happen, of course, given that 70% of the world is doing nothing, but for the sake of argument, I'll use 90%. A wildly pessimistic estimate might be 30%. The actual number is probably unquantifiable, but will be somewhere in between.

A conservative estimate of the number of times a year is used as data in some computer, and then shared with another computer, would be a billion times a day. The real figure is much higher, considering that almost a billion shares of stock are traded daily on the NYSE alone, but a billion is a nice round number.

What happens to the 100 million (10% of a billion) bad or lost pieces of data? Can anyone play optimist for a minute and please tell me, because I simply can't find an answer to this...
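To put rough numbers on the question, here is a quick back-of-the-envelope sketch in Python. The billion-a-day volume and the 30%-90% fix-rate range are the round guesses from above, not measurements:

```python
# Bad or lost date-bearing data items per day, for a range of
# "percent fixed" estimates. Both the daily volume and the fix
# rates are the round guesses assumed in the discussion above.

DAILY_DATE_SHARES = 1_000_000_000  # dates passed between computers per day

for fixed in (0.90, 0.70, 0.50, 0.30):
    bad = DAILY_DATE_SHARES * (1 - fixed)
    print(f"{fixed:.0%} fixed -> {bad:,.0f} bad or lost items/day")

# 90% fixed -> 100,000,000 bad or lost items/day
# 30% fixed -> 700,000,000 bad or lost items/day
```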

I constantly run into people who are in denial or blindly optimistic. This argument seems to stump them as well. Lots of ideas, of course, but nothing workable. Manual workarounds, file backups, and data quarantines are the most common "solutions", but none of them is possible on a global scale, and none addresses the problem.

The problem, of course, is: how can global information sharing remain viable? If the answer is that it can't, how can otherwise respected economists say that Y2K will be nothing more than a speed bump?

Sorry, my pessimism is showing. Can someone "enlighten" me?

-- Steve Hartsman (hartsman@ticon.net), October 11, 1998

Answers

Steve, you presented an excellent analysis of why things will fall apart. This has nothing to do with optimism/pessimism. Facts are facts and you assumed very conservative numbers that few people could argue with.

The pervasive optimism game is partly to blame for the Y2K bugs. Optimism, at the expense of truth, is inculcated into our educational system beginning with kindergarten. How many programmers and/or morally correct (MC) people have informed their bosses/management that this or that won't work or will eventually blow up? How often have they been dismissed as pessimists in response to their warnings? How many good deeds/truths were punished?

-- Sutterlin (winners@magiclink.com), October 11, 1998.


You know I'm not a pollyanna, but I will poke one small hole in your argument. Your assumption is that all 10% of the "bad" data usage is vital. That's not true. Give me any large organization and let's examine how they use computer data. Lots and lots of paper reports and screen displays that are occasionally useful at best. Some reports go directly from delivery to the trash.

I can remember a project for a large oil company where I was contracted to massively change a huge report (financial stuff). Being the good systems analyst/codehead, I visited the department that actually used the report (in another city) and just watched for a couple of days. Didn't tell them why I was there. They used ONE count number and trashed the daily report!! A bad date wouldn't have mattered in this instance.

It's my general belief that far less than half of all computer output is actually vital to a company. (I know, there is the rub - which part?) The numbers and the dependence are still huge, but it does give a ray of hope. Figure out the core function of a company and remediate/isolate that part.

-- R. D. Herring (drherr@erols.com), October 11, 1998.

There is no time to figure out that kind of stuff (any more than there is time to just fix it all). And even with this "ray of hope" approach, you can see that you are in effect hoping that the missing (or messed-up) data will be extraneous. That's like hoping that the 1%-7% of embedded chips that are going to fail will likewise be only in unimportant things. In theory, of course, it could happen. In practice, no way, Jose.

-- Jack (jsprat@eld.net), October 11, 1998.

A billion times a day? That seems like an excessive estimate to me. In any case, it all depends on what you are doing with the date, and on whether the machine gives the correct date. If the machine gives the correct date and a program messes up the date calculations, that is one set of problems, and the output will often have correct dating, but sometimes not. If the MACHINE date is wrong, then all the output will be wrong.

In most data communications of a critical financial nature, incoming data is checked before being added to the database. In such cases, a bad date will sometimes cause the data to be discarded and a request for a resend to be sent. It is possible for the machines to 'lock' on such a request, sending and resending the data until either some counter times out the request and it is dumped into an error file, or, if there is no timeout on this type of request, the operator must intervene to stop the process.

In other cases, such as a date to credit an account being a day earlier or later than it otherwise would be, the data would probably be accepted, and someone might (for example) be paid a day or so early or late. In many cases, the date would be completely non-critical and would be ignored - as when a date is included on a report used by another machine that only wants three numbers out of that report, none of them the date. It all depends on what you are doing.
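A minimal Python sketch of that check-and-resend pattern follows; the record layout, the plausibility window, the retry limit, and the error-file handling are all invented for illustration:

```python
# Inbound-date check with a bounded resend loop, so two machines
# can't "lock" forever on a bad record. All details here (field
# layout, retry limit, error-file handling) are illustrative.

MAX_RESENDS = 3

def date_is_plausible(yyyymmdd: str) -> bool:
    """Reject malformed dates and the classic Y2K symptom of a
    century rollback (e.g. 1900 showing up on year-2000 data)."""
    if len(yyyymmdd) != 8 or not yyyymmdd.isdigit():
        return False
    return 1990 <= int(yyyymmdd[:4]) <= 2099  # illustrative window

def accept_record(record, request_resend, error_file):
    """Post the record if its date passes; otherwise re-request it,
    giving up after MAX_RESENDS and dumping it to the error file."""
    for _ in range(MAX_RESENDS):
        if date_is_plausible(record["date"]):
            return True                   # good date: add to the database
        record = request_resend(record)   # bad date: discard, ask for a resend
    error_file.append(record)             # counter timed out: dump and move on
    return False
```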

I have heard that a number of companies that got into the game late are using a triage approach to solving their anticipated problems. That would work something like this: first, examine the end-of-year report for 1999. If it works, or will work with only a minor patch, you test it and put the EOY work aside for last. Then you would (must) add space to all the databases to allow room for the longer date information. Then you would test your daily reports (no, these aren't the production databases yet, they are copies) against the new data. You must find all the spots in the code that can't handle the longer dates, and fix them.

After that, you must repair the data input screens (usually the easy part - relatively speaking) and get everything ready for a quick changeover (some training may be needed if management has demanded changes to some things they always wanted and is gonna put them through now - as if you didn't have enough trouble working under the gun). Then you convert the databases again (to get the latest data) and have everyone do duplicate data entry (in an ideal world) while you run tests of the daily and weekly reports against each database. Hopefully, all the reports match up. Then you get to test against foobar data that mimics what you expect on the various critical dates during the Y2K changeover. After all that, you get to put the new database online and start on the reports you put off as not critical.

This process must be repeated for every data store a large organization maintains. You also have problems where cross-tab reports cover several databases; you generally have to convert those databases all at once, and then be busier than a hen raising ducks solving all the cross-tab report problems at the same time. It can be done, but the message to the guys just now starting is 'GET OFF THE STICK'.
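For one concrete piece of the conversion step - widening two-digit years in a copy of the data - a Python sketch might look like this. The 1950 pivot and the YYMMDD layout are assumptions for illustration; every shop's file layouts differed:

```python
# Widening a two-digit YYMMDD field to YYYYMMDD during database
# conversion, using a fixed pivot year to pick the century. The
# pivot value and field layout are assumptions for illustration.

PIVOT = 50  # two-digit years 00-49 -> 20xx, 50-99 -> 19xx

def widen_date(yymmdd: str) -> str:
    yy = int(yymmdd[:2])
    return ("20" if yy < PIVOT else "19") + yymmdd

print([widen_date(d) for d in ("991231", "000101", "670704")])
# -> ['19991231', '20000101', '19670704']
```

A pivot like this only guesses the century; the full four-digit expansion described above is what makes the guess permanent in the data.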

-- Paul Davis (davisp1953@yahoo.com), October 12, 1998.


Paul: Your response is supposed to make us optimistic? Were you being sarcastic and I just missed it?

-- cody varian (cody@y2ksurvive.com), October 14, 1998.

