Absolutely must have a year for testing?

greenspun.com : LUSENET : TimeBomb 2000 (Y2000) : One Thread

In 1998 it was said over and over again by virtually everyone that all remediation had to be complete by Dec. of '98 in order to leave time for the ABSOLUTELY essential year of testing. Of course this didn't happen, and most of these same people are now saying something quite different. Many remediation projects will finish (they say) by Dec. 31, 1999, and they are quite self-congratulatory because of it. Do we know something now that we didn't know a year ago? Why is a year of testing not necessary now? I suspect their story changed to fit the circumstances.

-- Lumber Jack (johnsellis@webtv.net), November 22, 1999

Answers

A year is not necessary because the C.I.O.s knew that the deadline for completion would never be met, much less the testing.

-- zoobie the corporate cynic (zoobiezoob@yahoo.com), November 22, 1999.

We don't know yet whether "we" collectively succeeded or not....any formal, structured testing is better than a hurried job deliberately left incomplete due to time constraints.

Best case is to run the new software, the changed databases, and all the links to other systems through an entire business cycle/fiscal year cycle while there is still a valid backup (original) system intact on both ends.
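[Editor's illustration: the parallel run described above can be sketched in a few lines. This is a minimal, hypothetical example, not code from any real project; the function names and the payroll calculation are invented for the sake of the sketch.]

```python
# Minimal sketch of a parallel run: feed the same inputs to the
# original and remediated systems and diff the outputs.

def legacy_payroll(record):
    # Old system: stores only two digits of the year, so the year
    # 2000 becomes "00" and is treated as 1900.
    yy = record["year"] % 100
    return (1900 + yy) - record["hire_year"]   # years of service

def remediated_payroll(record):
    # Fixed system: carries the full four-digit year.
    return record["year"] - record["hire_year"]

def parallel_run(records):
    """Compare both systems over a full cycle; report mismatches."""
    mismatches = []
    for rec in records:
        old, new = legacy_payroll(rec), remediated_payroll(rec)
        if old != new:
            mismatches.append((rec, old, new))
    return mismatches

records = [
    {"year": 1999, "hire_year": 1985},   # both systems agree: 14
    {"year": 2000, "hire_year": 1985},   # legacy says -85, new says 15
]
print(parallel_run(records))
```

Run through a full fiscal year of real inputs, every date a system handles gets exercised on both sides of the rollover while the original system is still there to fall back on.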

This "best case" logic is the reason "1 year for testing" was the goal. Obviously, this goal was not met. Hence, you drop back to the "not quite as good" solution, which is to finish as soon as possible, install the new program as early as possible, and hope for the best.

That is the current method.

If the changes are not finished in time, then you can always try "fix on failure..." - not because you want to, but because that is the only other choice.

We can (at this point) only hope and pray the "failures" are not too severe.

-- Robert A. Cook, PE (Marietta, GA) (cook.r@csaatl.com), November 22, 1999.


My theory on the reason for so little testing has to do with the systemic nature of the problem. Everything will fail at once, so there will be plenty of others to blame. The secret internal Y2K problems that every major corporation has will be covered up as much as possible by IS executives focusing on all the external glitches that are impacting the business from the outside. So much will be going wrong at once that they might even be believed!

-- Slobby Don (slobbydon@hotmail.com), November 22, 1999.

Everybody wanted a nice neat mantra to chant so companies came up with the '.....leaving one full year for testing' line.....

Truth is, testing is done at the same time as remediation. You don't remediate it all and then test it all.....that's a huge fallacy.

I know the company I worked for did it that way and completed their large Y2K project on time and on budget. Honestly guys, the people on the front line of Y2K projects are nowhere near as worried about it as many of you here are!

-- Craig (craig@ccinet.ab.ca), November 22, 1999.


Craig's experience matches my own. I think Gary North was responsible for the false impression that everyone would do nothing but fixing through 1998, and nothing but testing during 1999. It doesn't work that way AT ALL! And if you dig into this, you'll find that North only located (at most) a couple of dozen firms that made such a claim, but he pounded on it so hard that those who read North and little else thought these couple of dozen were "virtually everyone". North is WELL aware that if you repeat something often enough, some people will believe it. That's the key to advertising, and advertising works.

Fixing-and-testing is a closely coupled, iterative process. You grab a code module, fix all the date bugs you can find, and run it through a local test. This test identifies both some of the date bugs you missed and some of the fixes you got wrong. So you go back and fix *those*, and run the local test again. Repeat until you can't find any more bugs in that module.
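[Editor's illustration: the kind of date bug this loop catches, and the "windowing" fix that was common in remediation. Sketched in Python for readability; real remediation was largely COBOL, and the names and pivot value here are illustrative assumptions, not anyone's actual code.]

```python
# A typical two-digit-year bug and its windowed fix, with the
# module-level test from the fix-and-test cycle.

def expand_year_buggy(yy):
    # Original code: assumes every two-digit year is 19xx.
    return 1900 + yy

def expand_year_windowed(yy, pivot=50):
    # Remediated code: "windowing" -- two-digit years below the
    # pivot are taken as 20xx, the rest as 19xx.
    return 2000 + yy if yy < pivot else 1900 + yy

def local_test(expand):
    """The local test: known two-digit years and the full years
    they should expand to; returns the cases that fail."""
    failures = []
    for yy, expected in [(99, 1999), (0, 2000), (5, 2005), (75, 1975)]:
        got = expand(yy)
        if got != expected:
            failures.append((yy, expected, got))
    return failures

print(local_test(expand_year_buggy))     # the test finds the misses
print(local_test(expand_year_windowed))  # and passes after the fix
```

The loop is exactly as described: run the test, fix what it flags, run it again until it comes back clean.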

When all of the modules in a system have gone through this process, you link them all together and test the system-as-a-whole. Iterate again until the system works properly.

As you complete each system (hopefully most critical first, working down the list as time permits), you add it to the system-of-systems and test *that* iteratively. Some organizations use a time machine for this, others (shudder!) use their production systems, and let the remaining bugs reduce their productivity. Most (but by no means all) organizations have reached this point, and those that used time machines have returned the tested code to production as well.
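[Editor's illustration: a "time machine" was typically a separate machine or partition with its clock set forward past the rollover. One way to get the same effect in a test, sketched below under the assumption that the code takes its notion of "today" as a parameter rather than reading the system clock; the function and dates are invented for the example.]

```python
# Simulating the rollover without touching a production clock:
# the system under test accepts "today" as an input, so a test
# can drive it across the century boundary.
import datetime

def days_until_renewal(renewal, today):
    # Passing `today` in (instead of calling date.today()) is what
    # lets the test run on either side of the rollover.
    return (renewal - today).days

# Exercise dates on both sides of the boundary:
before = days_until_renewal(datetime.date(2000, 1, 15),
                            datetime.date(1999, 12, 31))
after = days_until_renewal(datetime.date(2000, 1, 15),
                           datetime.date(2000, 1, 1))
print(before, after)  # 15 14
```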

It was never clear to me just what was intended to be done during this "year of testing" that wasn't done during the fix-and-test process. Possibly this refers to testing between separate organizations, although interface testing is unavoidable as code is returned to production. I suspect the "year of testing" was just a code-phrase created to emphasize that by rollover, testing was to be *very thorough*. In all too many cases, it's not. It's only as thorough as normal new or modified system testing has been historically. If it works, it works.

In practice, I believe most organizations never really got all the way down their lists to the least critical systems. By all appearances, government organizations never even *considered* looking at the 90% of their "noncritical" systems. And I imagine there have been some soul-searching meetings throughout the economy these last few months, as businesses realized they couldn't possibly get to everything they wanted to, and had to decide which systems would be the least painful to postpone (and therefore do without for a while).

-- Flint (flintc@mindspring.com), November 22, 1999.



'.....leaving one full year for testing' line.....

That was what the Y2K remediation IT companies were saying so they could gouge the big corporations for more money. Something they are taught in college, because they actually don't have a whole lot to do except implement a previously designed software package to fit a job. They had gotten so spoiled demanding big salaries and extra time that they thought they would get away with it in Y2K remediation. When it turned out that they couldn't even do the job (Cobol software remediation) and had to bring in the unemployed software programmers who had lost their jobs to the "golden boys of IT", big business got suspicious and learned real quick to hire the people who knew what to do in the first place, and all those Y2K remediation businesses suddenly became unneeded with their 1 year of testing. They actually thought they would get away with sitting around for a year, while others that knew what to do "tested", because they (the ITs) said it had to be done.

Normally you have a problem, fix it so you can get back to work, not sit around for a year testing!

It was pretty convenient that that testing was to be done from Dec 1998 to Dec 1999, allowing them to gouge up to the very last minute before the rollover.

-- Cherri (sams@brigadoon.com), November 22, 1999.


To show why as much time should be made available as possible, look at what happens when you DON'T have enough time....

http://greenspun.com/bboard/q-and-a-fetch-msg.tcl?msg_id=001pS9

Oakland faces Payroll Problems due to insufficient time to train operators.

http://greenspun.com/bboard/q-and-a-fetch-msg.tcl?msg_id=001pNt

LA needed extra time to re-fix sewage system....still isn't finished in all plants.

-- Robert A. Cook, PE (Marietta, GA) (cook.r@csaatl.com), November 22, 1999.


It was always a DAY for testing! We must have misunderstood, misread, or misheard their constant bleating! 'Course I wonder what they'll say when they don't even get that one DAY for testing?

-- unknowing (sceptic@whoknows.com), November 22, 1999.

Sir Flint,

I agree with you, but recognize that "a year for testing" would have allowed the time to meet that kind of schedule and repair "non-critical" systems as well. Training, data exchange, and "oops - it broke again" errors would have had a nice long period to crop up over time, rather than all at once. Rare events - quarterly reports, tax reports, payroll and accounting cycles - all could have been worked through with good data in 1999, rather than from questionable data in 2000. (Whether you got good data, partially good data, or garbaged data from xyz will become a critical question in the next few months.)

It was a nice phrase, and was a nice intent, but simply wasn't met in the real world....might not have ever been met in the real world, either.

-- Robert A. Cook, PE (Marietta, GA) (cook.r@csaatl.com), November 22, 1999.


Craig et al are right. I've just put a "100% bug free" system into beta test, basically to get them off my back while I keep hammering away at the bugs that I (and they) know are still in there. Testing is a cyclic, iterative, swirly kind of thing. It's only QA parasites and Gary North who make the mistake of artificially compartmentalising the development process.

-- Colin MacDonald (roborogerborg@yahoo.com), November 23, 1999.


Let me get this straight: TESTING can be done concurrently with REMEDIATION. Sorry, folks, but my "goes against common sense" alarm just went off, big time. And if indeed this were the case, then we would never hear phrases like "final testing phase", regardless of Y2K.

To those that claim that you can be happily remediating code while you go about testing a system, I say to you: You are full of crap. How could you expect the test to be successful if what is being tested is still "in work"? And if the test were successful, but things were still supposed to be "in work", what would that mean??

It's like saying that you can test whether a boat will float while working on the underside. Nice try, pollies, but it don't fly. (Doesn't float, either.)

-- King of Spain (madrid@aol.cum), November 23, 1999.
