"TESTING HAS BEEN COMPLETED" - - - The real definition

greenspun.com : LUSENET : TimeBomb 2000 (Y2000) : One Thread

Numerous companies have proudly announced that testing has been completed. Translated, in reality this means: we have done all of the testing that we know how to do, and since it is so difficult to do comprehensive end-to-end testing and testing of embedded systems, it is not practical or cost-effective to do additional tests. Therefore our testing has been completed.

We are not saying that we have tested all systems and made sure that they would all function. We do not know if they will work or not, but we are done, because we do not have the knowledge, resources, or ability to isolate the systems, fix them, and test them as part of the larger systems they will be required to successfully interact with. It would appear that the companies are withholding these facts, or that top management and the spin people writing the press releases do not know the difference. The newsreaders surely do not understand, or are misreporting it if they do.

Curly and Larry were trying to figure out how all of this testing could be completed so quickly when many of these companies started fixing these systems only last year. This is kinda like the President when he redefined what "is" is. Like Curly said: I washed one half of the car and I am not going to wash it anymore, so I am finished. It would appear that some of these companies and gobmint agencies will be finished too - defined as bankrupt, no longer functioning, out of business, etc.

-- Moe (Moe@3stooges.gom), October 16, 1999



In my understanding, Y2K is about connections. Companies reporting readiness are just that: ready with their contingency plans, hoping their plans will work around the disruption of unfinished remediation of internal connections, critical and non-critical, and somehow survive any disruptions from the unfinished remediation of external connections.

They are not ready for end-to-end testing in any industry, and they know it is probably too risky to attempt; it might cause panic, and panic is, no matter the stakes, to be avoided. All industries know this; they are simply placating whoever needs placating, doing what remediation they can, checking their external partners' self-evaluated status, and hoping they will survive the next 12-18 months of disruptions.

No one knows what will happen. Active imaginations can imagine, and hope they are wrong. Beyond that: prepare, prepare, prepare for the worst, and hope for the best.

Blessings and peace to you, Moe. Thanks for the thread.

-- Leslie (***@***.net), October 16, 1999.

You are partially correct in your observation - and partially incorrect.

All computer testing can do (in particular when, as in this case, testing can only be done "ahead of the fact"!) is simulate the y2k environment - in the "tested" computer and operating system, and in the simulated data.

Then you go through - with simulated connections to other simulated computers, operating systems, and data - all the expected external "connections" to other processes that use the affected system's data.

So, you can't really be sure you have simulated everything exactly correctly, nor can you be sure (beyond all doubt) that you have "tested" all the actual interfaces, all the inbound data, outbound data, and all the actual responses. Given time, effort, and lots of money, you can get some/many/most of the interfaces, but you'll never really be certain you've gotten all of them. So errors are likely to "leak through" even the most thorough testing sequence.
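To make the interface problem concrete, here is a minimal hypothetical sketch (not any real company's remediation code): the "external partner" below is a stand-in we wrote ourselves, so a passing test only proves our simulation behaves as we assumed, not that the real partner's feed does.

```python
# Hypothetical interface-simulation sketch. Both the "partner" and the
# assumed record format are inventions for illustration.

def simulated_partner_feed():
    # Simulated inbound records: "date,account,amount".
    # The real feed's format is assumed here, not verified.
    return ["2000-01-03,ACME,100.00", "2000-01-04,ACME,250.50"]

def process_feed(lines):
    total = 0.0
    for line in lines:
        date, _account, amount = line.split(",")
        # Our assumption about the partner's date format; if the real
        # system sends "00-01-03" instead, this check never saw it.
        assert date.startswith("2000")
        total += float(amount)
    return total

print(process_feed(simulated_partner_feed()))  # 350.5, under our simulation
```

The test passes, but only against data we invented; the actual interface could still send something the simulation never modeled.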

Question then becomes: did you test enough to make the "leakers" so trivial as to be negligible, or at least easily solvable, once these "leakers" actually occur?

Answer: We don't know. Most likely, only a few companies and governments have tested this thoroughly. Ironically, it appears that banks and some finance companies have done the best testing - and they are a small minority of all companies affected!

Question then becomes: Did you (generically, in all companies and governments worldwide) fix enough and test enough to identify enough problems to avoid potentially disastrous failures?

Answer: We don't know - most likely, no. Very, very few companies have remediated more than "critical systems," and testing has been very spotty. In nuclear systems in particular - the ONE utility system that we KNOW has been tested and audited - numerous failures have been publicized during the TESTING phase.

This would indicate that the remaining hundreds of thousands of utility systems have NOT been adequately tested and had their errors identified during testing and thus removed. --- [Or it indicates that ONLY the nuclear plants have been tested by idiots and incompetents, and that ALL OTHER utility systems have been tested thoroughly and completely by incredibly competent and perfectly trained engineers, so no errors were made in testing and no faulty remediation was conducted. 8<) ]


The only thing simulated computer testing absolutely proves is that "the system tested, under exactly the same conditions it was tested under, with exactly the same input conditions it was tested under, using exactly the same test data as was used in testing, will respond in exactly the same way when it is next tested under exactly the same conditions using exactly the same test input data. (Further, we are assuming the output information received from the test was correct for the input data used, but we aren't really too sure about that either.)"
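A tiny hypothetical example of that point, in the spirit of the classic two-digit-year bug (the function and its test data are invented for illustration): a test suite only proves behavior for the inputs it actually exercises.

```python
# Hypothetical pre-remediation code: compare two two-digit years.
def later_than(yy_a, yy_b):
    return yy_a > yy_b

# The "completed" test suite: every case uses pre-2000 data, so it passes.
assert later_than(99, 98)
assert not later_than(97, 98)

# The input the tests never simulated: the year 2000, stored as 00.
# 0 > 99 is False, so the system concludes that 2000 comes BEFORE
# 1999 - a "leaker" that re-running the same tests a thousand times
# would never catch.
print(later_than(0, 99))  # False
```

The suite passing proves exactly what the paragraph above says: the system behaves the same way on the same inputs, and nothing more.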

-- Robert A. Cook, PE (Marietta, GA) (cook.r@csaatl.com), October 18, 1999.

Robert's description is pretty good. Maybe an illustration would be helpful as well.

Say you're developing a new car. Each component is tested individually. If it fails, figure out why and fix it. Most remediation seems to have reached this point.

Next, build the whole car. Does the engine run? Will the gears shift? Electronics (lights, dials, etc.)? Doors open and close properly? Any sign of fluids leaking anywhere? If so, fix it. Somewhat less remediation has been taken this far.

Now, take the car to the test track. How well does it run? Acceleration and braking acceptable? Steering performance? Gas mileage and emissions OK? If we get this far, we know the car does all it's supposed to do. Banks and Wall Street (and some others) have reached this point. Maybe not a whole lot more.

Will this car perform properly on the public highways? Well, those highways are in most respects less stressful than the test track, so our confidence is pretty good. But there are some things we didn't test at all. How well will the car protect its occupants in a collision? We don't know. How well will it work after 50,000 miles? We don't know. These questions can only be answered by real-world operation, to be done Real Soon Now.

There are limitations to testing, to be sure. My experience is that depending on testing rigor and coverage, you can find anywhere from 90% to 99% of the problems. Never all of them. This does NOT mean testing is useless by any means. Even simple basic-functionality testing will weed out the large majority of significant problems. A little testing goes a long way, while a LOT of testing cannot go all the way.
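Flint's point - basic testing weeds out the big problems while some fraction always slips through - can be sketched with the common "windowing" style of Y2K fix. Everything below is hypothetical: the pivot year 50 is an assumption for illustration, not any particular vendor's choice.

```python
# Hypothetical windowing fix: two-digit years below the pivot are read
# as 20xx, the rest as 19xx.
PIVOT = 50

def expand_year(yy):
    return 2000 + yy if yy < PIVOT else 1900 + yy

# Basic-functionality tests catch the large majority of problems...
assert expand_year(99) == 1999
assert expand_year(1) == 2001

# ...but boundary cases near the pivot are exactly where the remaining
# 1-10% hide: a record meaning 1949 comes back as 2049.
print(expand_year(49))  # 2049 - wrong if the record meant a 1949 date
```

Simple tests make the fix look complete; the pivot-boundary failure only shows up if someone thinks to test that specific range - which is why coverage approaches, but never reaches, 100%.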

-- Flint (flintc@mindspring.com), October 18, 1999.
