Alright all you paranoid, half-witted Yourdonite obsessive-compulsives, just try rebutting THESE Y2K debunkers!


Good. Now that I've got everyone's attention, I can turn off the troll act and go back to being a certified, high-strung, spam-packing, gold-hoarding GI. (However, since I secretly fantasize about returning to my blissfully skeptical days of DGI-hood, it felt GOOD to write that sentence!)

But seriously folks, I recently came across one of the better, more logically argued anti-alarmist Y2K "debunking" papers I've seen so far (and I have an entire IE favorites folder chock full of them in case anyone ever wants me to forward a few choice URLs). It almost--and I emphasize the word ALMOST--made me want to chuck my petromax and potty shovel into the remote recesses of a storage shed and donate my 100 cans of creamed corn--actually they're canned beans and such, but "creamed corn" sounds better--to a local food bank.

Supposedly written by two programmers, it skillfully points out the simplistic flaws in a lot of Y2K reporting (which by itself doesn't exactly take a lot of mental muscle) as well as a lot of naïve assumptions being made by those whacko survivalist Internet discussion forum posters. (Actually, it's relatively free of ad hominem attacks; but because I've always been proud of bucking mainstream convention, I rather like being called a "whacko survivalist"; it has a certain ring to it, don't you think?) In any case,

Here's the Link

I'd appreciate any response and/or rebuttal from the hardcore programming/computer geeks around here (Andy, Sysman, or even Ed?) to dissect and discard some of the finer points of the essay, which, you gotta admit, at least on the surface makes some very good points.

For those of you less inclined to pore through the entire 9-page article, here are some choice excerpts for your thoughtful perusal. For those who took the time to digest the entire article, take note that its authors actually make a lot of points that have already been discussed and supported by assorted posters in this forum (such as the observations about simplistic coverage of Y2K in the media), but I'm quoting only those outtakes that challenge a lot of other assumptions I've seen posted here. (And please, although the occasional flaming, obscenity-laden, but cleverly delivered quip is music to my ears, I'd appreciate some more rigorous comment on these statements, since the caliber of their logic is much higher than most of what I've seen infesting all those glibly dismissive editorials that characterize Y2K as a non-problem.)

[snip 1]

A common abstraction appearing in many stories is "Y2k compliance", which lumps all possible date-related failure modes and consequences for each system into a simple binary ("true/false") measure. Essentially, this concept says that if a system works the same (or as well) with 21st century dates as with 20th century dates, it's considered "Y2k compliant" -- if not, it isn't. You really can't talk about partial compliance unless you have some way to meaningfully map all of the real, multi-dimensional details into this single-dimensional measure.

The "compliance" abstraction works only in the limit of total compliance: obviously, if everything is Y2k-compliant there is no problem. But, if only 90% of individual systems are compliant and the rest are not, "compliance" tells us nothing about what the result will really be. This is because the results depend on more details than "compliance" measures. An expert can wave his hands and imagine whatever he likes, but the percentage of compliant systems is useless as a measure of the outcome -- unless it is very close to 100%. Therefore, any story that bases its predictions on projected levels of compliance is flawed. Its predictions could turn out to be correct, of course, but only by accident.

"Compliance" is a poor way to measure even an individual system. When you look more closely at the root of the Y2k problem you find more interesting and complex things going on than what you may have been led to imagine. Explanations of the Y2k bug tend to be so simplified that the real technical issues are missed or misrepresented. This may be necessary in order to "inform" more people, but it leads to a false sense of confidence in the resulting story.
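
(A quick aside from yours truly, NOT from the article: their point about flat compliance percentages is easy to make concrete. Below is a toy sketch in Python -- the shops, the 0-10 "impact" scale, and every number in it are my own invention -- showing two shops with the identical "90% compliant" figure and completely different outcomes, depending on WHICH system is the non-compliant one.)

    # Two hypothetical shops, each "90% compliant" (9 of 10 systems OK).
    # Each system: (is_compliant, impact if it fails, on a 0-10 scale).
    shop_a = [(True, 10)] * 9 + [(False, 1)]   # only a trivial report breaks
    shop_b = [(True, 1)] * 9 + [(False, 10)]   # the core ledger breaks

    def failure_impact(systems):
        # Sum the impact of every non-compliant system.
        return sum(impact for ok, impact in systems if not ok)

    print(failure_impact(shop_a))   # 1  -- barely noticed
    print(failure_impact(shop_b))   # 10 -- the shop is down

Same percentage, opposite outcomes; the single number by itself predicts nothing.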

[snip 2]

Under normal circumstances, when things go wrong or threaten to go wrong with computer systems, they're fixed or prevented in the most expedient way without any other pressure than the need to carry on the work at hand. This goes on every day and is seldom of any interest to anyone other than the programmers and technicians maintaining the systems. The driving force for maintaining a system is close to the technical problems and solutions involved.

But in the case of the Y2k crisis, the driving forces are far removed from the problems. Budgets, schedules and even decisions about how to proceed and measure results are driven by management under pressure from lawyers, vendors, government, shareholders and public opinion. In short, this is a social phenomenon that is not truly based on the reality of the technicalities involved in maintaining the computer systems. Under such conditions it's certain that a great deal of time and effort is being spent unwisely. It's no wonder that it's taking a lot of money.

Therefore, any story that measures the seriousness of the problem by the amount of money being spent is ignoring some important factors. Nevertheless, simply the amount of money being spent seems to be a major component shaping the belief of many people. But the world is not so simple. There are indirect reasons for the money being spent. Exactly what it is being spent on, and how effectively, are questions that need answers before this evidence can be trusted.

Given that management decides, for whatever reasons, to increase the budget of the data processing department, few people in the department will complain. Whether the money goes exactly or directly to enhancing Y2k compliance is perhaps a matter of interpretation. If the company needs to do it in order to meet legal or external expectations, it must be done -- perhaps even at the risk of causing other bugs. No doubt, some are profiting from the increased budgets in diverse ways. Only a close look at individual DP departments will tell the whole story of how the money is being spent. But some likely scenarios are not hard to imagine. Maybe it's a chance to get rid of some old unmaintainable code that should have been rewritten long ago. Fixing the Y2k bug may not be a big deal, but in certain cases the chance to do the rewrites could be too good to pass up. Tax law may require charging maintenance separately from development, but who draws the line between the two?

[snip 3]

It simply isn't true that every company making extensive use of computers has Y2k bugs that will shut down operation if not fixed. Some have tested their systems and found no significant problems at all. There comes a point, much earlier than you may imagine, when the residual date anomalies are less important than other daily issues that have nothing to do with Y2k. Just because someone finds a date anomaly in a certain computer model doesn't mean that it affects every system where that model of computer is used. When you see a long list of computers that have Y2k problems, remember that in a great many applications these problems are non-problems.

[snip 4]

Y2k-disaster scenarios assume that relatively few failures can shut down a large part of the world's economic engine because everything is so interconnected. What is unreal about this story is that most of the interconnections are far from being completely automated. In cases of important services or safety-related links, people with flexibility and ingenuity stand by or are even directly involved in the interconnections. And these people (assuming they aren't conditioned to panic) are all Y2k compliant. Every important link has bypasses, workarounds and emergency procedures, depending on its importance, for the simple reason that it probably has not always worked perfectly in the past and is not expected to work perfectly in the future.

[snip 5]

It's probably true that almost every Y2k expert stresses the seriousness of the problem. But that's virtually the definition of being an expert. Anyone who has promoted himself into such a position is not about to say it isn't a big deal. On the other hand, it seems that many are trying to cover themselves, saying that they don't really know how big the problem will turn out to be, or even approximately what would happen if nothing were done in preparation. In other words, they have no theories that they are willing to back -- because, they maintain, it's an unprecedented event and no theories apply.

Apparently it escapes most people that these experts stop far short of what might be done toward modeling their subject. Real experts (though they're not usually called that) know enough to work out conclusions with useful accuracy and reliability. Until they reach that point they're merely guessing. The first time a man went to the moon, the engineers weren't guessing whether it would work. They knew a lot about a complicated and unprecedented scenario before it was played out.

Because they have promoted the inherent unreliability of their art, many Y2k experts are in a position where they cannot lose. If the lights stay on New Year's day, they can claim success for their efforts. If lights go out, they can say, "We told you so." When a story hedges its bet by saying that the outcome could be anywhere in a wide range, it really has nothing to say.

[snip 6]

A major underpinning of the stories that predict Y2k Doomsday is the assumption that an overwhelming stress will hit the world's infrastructure at 12:00 AM, January 1, 2000. This is the stuff Hollywood movies are made of. In reality, any effects of the Y2k bug will be spread out over some time, giving us hours, days, weeks and sometimes months to handle them without imposing significant hardship, depending on the nature of the application. In some programs, date calculations involving year 2000 and beyond have been going on for many years, and any problems in those particular routines were fixed long ago. During calendar year 1999, accounting programs will be called on to handle fiscal-year 2000 dates, and any problems arising will be fixed this year.

Another essential ingredient in these stories is the idea that if 1000 systems fail at the same instant it's 1000 times worse than each one failing at random as they normally do. A little thought shows this isn't necessarily true. It would be true if all 1000 systems were maintained by the same person, who could only be fixing one at a time. Of course, in reality almost every system has its own maintenance facility, and it makes little difference whether they all happen to be fixing something at the same time or at random times.
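
(Another aside, mine and not the authors': for anyone who hasn't actually seen the bug everyone keeps alluding to, here is a minimal sketch in Python of the classic two-digit-year arithmetic failure and the common "windowing" repair. Note that a 1999 program projecting fiscal-year 2000 dates hits exactly this subtraction a year ahead of the rollover, which is the point the snip makes about problems surfacing early.)

    # The classic bug: years stored as two digits, 0-99.
    def years_elapsed(start_yy, end_yy):
        return end_yy - start_yy             # naive subtraction

    print(years_elapsed(99, 0))              # -99, not 1, for a 1999->2000 span

    # A common repair ("windowing"): pivot two-digit years around 50.
    def to_four_digits(yy, pivot=50):
        return 1900 + yy if yy >= pivot else 2000 + yy

    print(to_four_digits(0) - to_four_digits(99))   # 1, as intended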

[snip 7]

What's left after debugging the Y2k stories? Maybe not much. If that be the case, it doesn't prove that nothing will happen; it only proves that there have been no convincing arguments for major Y2k consequences beyond inconvenience and overtime work for some people. Here's a very important point: It's not necessary to prove that an unprecedented future event will not happen. If there is no good reason to believe that it will happen, then, as a rational being, you owe it to yourself to assume that it will not happen.

[End snips]

-- Hoarding Fool (hidden@waydeep.com), March 05, 1999

Answers

If the public followed the advice of [snip 7], insurance companies would go out of business.

-- Kevin (mixesmusic@worldnet.att.net), March 05, 1999.

Sure, the media does oversimplify the fundamentals of the technical aspects of the problem. I thought everyone knew that already. My question is: How does this support the authors' prediction that there will be "no problem"?

If anything, this piece underscores the fact that this problem cannot be solved by a cookie-cutter solution. Programs are much more fragile than the document would lead you to believe. This doc just scratches the surface.

-- Codejockey (codejockey@geek.com), March 05, 1999.


Could you imagine if GOD told Noah to build the ark with a small hole in the bottom of it and not to worry about it until it leaked?

-- steve pratton (spratten@worldnet.att.net), March 05, 1999.

The authors impress me as having an academic understanding of computing, with some experience at a small software company, but no experience with corporate IT. They keep saying: "It's an easy problem to fix." "No one would do things this way unless they were an idiot, not fit to program."

This is why US corporations got such a late start on fixing things - the problem looked too easy or too boring. Then when they started looking at the real problem, they realized that things would not be so easy, and a lot of code was in fact written by idiots who shouldn't have been programming. The authors also fail to realize how many corporations are running legacy COBOL (or PL/I or Assembler) code with few programmers actively maintaining it.

I think that other posters on this forum have dealt with the embedded chip problem better than this article, which just assumes that it can't be a problem, because no one would do it that way.

-- fran (fprevas@ccdonline.com), March 05, 1999.


Warning: sexist joke; the sensitive should skip this.

The joke concerns a (male) engineer and (male) mathematician, and a beautiful naked woman. All three are against the walls of a small room, the men on one side and the woman on the other. The rules are that each man can take as many steps toward the woman as he wants, but each step must be half the length of the previous step.

The engineer starts out with a huge stride, and keeps on trucking. The mathematician (standing still) says "don't you realize that according to the rules, you can never get there?" And the engineer replies "Of course I know that. But I'll get *close enough*!"

Same with y2k remediation. Remediation approaches complete compliance asymptotically. But we find joy at arm's length.
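
For the pedants, the arithmetic behind the joke: with a first stride of length s and each stride half the last, after n strides the engineer has covered

    s * (1 + 1/2 + 1/4 + ... + 1/2^(n-1)) = 2s * (1 - 1/2^n)

which never quite reaches 2s, but the remaining gap, 2s / 2^n, halves with every step. Close enough.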

-- Flint (flintc@mindspring.com), March 05, 1999.



For some people, it's just too hard to find the energy to respond to these kinds of articles. How many times can you answer the same questions over and over and over and over and over and over? (I feel better now, thank you)

This article interests me because it illustrates an issue that IT Managers don't want to hear...and this gets me in big trouble every time I say this to one of you guys face-to-face:

You guys screwed up. Let me say it again...You guys really screwed this one up.

I can see you reaching for the keyboard already.

IT Managers of large enterprises are usually at an age where their roots in the industry go back to the days when most of them were referred to as "the new computer guy in the accounting department." Their backgrounds, knowledge and developmental experience were grinding financial reports and payroll. They were swept into inventory and purchasing systems as the hardware capacity increased. Their bosses were accountants. To an accountant, fine-tuning the monthly inventory-turn report was real important.

Many large enterprises appointed the IT manager the y2k manager. The IT/y2k manager didn't have the background in process-control and factory automation systems. In addition, many thought like accountants. To the IT/y2k manager, the early schedules for remediation included all the "important" stuff like..."Monthly Inventory-Turn Reports" and "Bi-Monthly Fiscal Year Projection of Direct Material Cost Variance Report." (Verrrrry important paperwork.)

They forgot that manufacturers manufacture and factories build. The fastest way to kill a company is to have an accountant (or someone who thinks like one) run the company. The choices and priorities made by y2k managers could decide the very survival of the company. Many IT/y2k managers forgot the fundamentals of business - deliver the product. Many wasted too much time on the wrong priorities.

The greatest threat we face is the loss of production of the fundamental building blocks of our economy: fuel to convert to electrical energy to power machinery and equipment that produces goods to be distributed. If we can do those things reasonably well on a global scale, anything above that will feel like a bonus.

Well, I guess I'm going to need a flame retardant suit...should I click on "Submit?"

-- PNG (png@gol.com), March 05, 1999.


Dealton - what on earth are you talking about? You have read so much innuendo into something that was very, very clear (to those of us who **really** understand applications programming from both the mainframe and (yes, you friggin moron) the PC side). Anyone who presents a side of the story which does not support your *favored* doom & gloom is routinely stood up against the wall and shot. I've been working in the MIS field for close to 30 years and make a very good living at it. These guys are a lot closer to the target than you give them credit for.

What makes Yourdon such a fantastic expert - are you his brother-in-law or something? Ed is a Systems man. Yeah, he works(ed) with Operating Systems and Utility programs. Canned software. Any seasoned vet of this industry will tell you in a heartbeat that the Systems Group is only the TAIL of the dog. From the nose to the butt is the Applications Group.

You can just go right ahead and flame me all you want - I've certainly been flamed by better than the likes of a gloomer such as yourself. I really wonder just what in God's name you doomers are going to do when all this washes out without so much as a "bump in the night". Think about how friggin' stupid you are going to look to all those people you told to "head for the hills". The biggest problem here is not so much in date processing as it is people such as yourself and Yourdon spreading this DOOM crap. Get off your horse and quit reading all this crap and go be a member of the human race again. What a loser.

-- Y2K_Certified (RealProgrammer@ibm.net), March 06, 1999.

Pascal's wager (ver. Y2K) sounds just as good as it did 300 years ago.

"It's better to prepare, and be thought a fool, than not to prepare, and remove all doubt."

One of the witnesses at today's hearing before Bennett's Y2K committee (Friday, 5 March) mentioned that his firm's experience had been that for every 100 errors fixed, about 10 new errors were introduced in the process, and were not caught in testing.
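
To put a number on what that implies (purely back-of-the-envelope, my arithmetic and not the witness's): if every remediation pass fixes the bugs it finds but seeds a tenth as many new ones that slip through testing, repeated passes shrink the bug count tenfold each time, yet never quite reach zero.

    # Hypothetical: 1000 bugs to start; each pass fixes everything it
    # finds but introduces 10% as many new, untested-for bugs.
    bugs = 1000.0
    for n in range(1, 6):
        bugs = bugs * 0.10
        print("after pass %d: ~%.2f bugs remain" % (n, bugs))
    # 100.00, 10.00, 1.00, 0.10, 0.01 ... and each pass costs a full sweep.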

Personally, I hope Plice & Shumacher are right, and only trivial problems, if any, will be seen. I guess we'll all find out early next year who has the right of it.

-- Tom Carey (tomcarey@mindspring.com), March 06, 1999.


First a note to Y2K_Certified. Instead of putting down Dealton, why don't you give us YOUR analysis of the article, since that was the original request? I've been in the field for more than 30 years, and I agree with many of Dealton's remarks. I'll give you one good point. Systems is only the tail, but most systems software has few Y2K problems. The applications are the problem.

Now, who are you really, "Hoarding Fool"? I almost didn't bother with this because I didn't feel like a troll fight!

The one thing that bothers me most about this is the way they just blow off the sorting problem. Sorting is usually done for a very important reason, far more than just the listing order on a report. It's also one of the harder ones to fix, since it usually requires expanding a date field instead of using windowing. And once you expand a field, you've got to change every program that accesses that file, even if a program doesn't use the expanded field.
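
Here's a quick toy example of what I mean, in Python (made-up keys, but the behavior is exactly this):

    # Records keyed on a two-digit year, YYMMDD.
    keys = ["991231", "000101"]       # Dec 31 1999, then Jan 1 2000
    print(sorted(keys))               # ['000101', '991231'] -- wrong order!

    # Windowing can fix comparisons inside a program, but a file sorted
    # on the raw field is still wrong. Only expanding the field fixes
    # the sort key:
    keys = ["19991231", "20000101"]
    print(sorted(keys))               # ['19991231', '20000101'] -- correct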

Fix on failure is just nonsense. Many companies have thousands of programs and have been working on the problem for years. Most don't have even a hundred programmers to jump on a problem when it shows up.

And I'm getting tired of this 80-column card story. The real problem is the two-digit hardware year that was in mainframes for decades. I think they are just PC guys.

I could go on but it's late and I'm beat. See ya. <:)=

-- Sysman (y2kboard@yahoo.com), March 06, 1999.


For my money, having spent a few years dickin' with Government and Health Insurance applications, as well as development, DeAlton has made a few fairly good points. If I were to go back to fix the systems I had a part in, it would be a BIG DEAL. One, which mercifully was done in 4-digit year format, we don't need to talk about, because I won that battle as a green systems analyst. The next couple, however, are IMS systems, and a LOT of systems touched the records I worked on. If we were to reset the date to 4 digits, EVERY SYSTEM that LOOKED at even ONE of our segments (errr, records, for the IMS challenged) would have to look at its INTERNAL CODE to make sure that the logic of the program was actually set up to count the right number of bytes over to get the info required. Just re-compiling with the new data dictionary entries and the new DBDs and the new LOGICAL DBDs won't do the job, as a LOT of companies are finding out, or will be in a few months.

Anyone who thinks this is even 90% automatable hasn't had the pleasure of trying to trace someone else's path of reasoning through 5 levels of REDEFINE clauses. With or without the original programmer (sober or otherwise).
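
If you want the flavor of it without the COBOL, here's the byte-counting problem in miniature, in Python (a made-up record layout, obviously):

    # Hypothetical fixed-width segment: 6-char date, then a 5-char amount.
    old_rec = "991231" + "00042"
    print(old_rec[6:11])              # '00042' -- downstream program is happy

    # Now widen the date to 8 chars without touching downstream code:
    new_rec = "19991231" + "00042"
    print(new_rec[6:11])              # '31000' -- silent garbage

    # EVERY program that slices this record by position has to be found
    # and fixed, even the ones that never look at the date itself.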

Chuck, who has had his share of 3 AM calls when the partitions go **POOF** and then take the WHOLE MACHINE to LALA LAND.

-- Chuck, night driver (rienzoo@en.com), March 06, 1999.


