Are these reported compliant systems in use now? Interfacing with non-compliant systems? Using screening?

I read a lot of claims of compliant systems that imply they're already in use online or interfacing with non-compliant systems. Is this possible without risking corruption? Sorry for posing what is probably a stupid question from a novice.

-- William D. King (derryking@uswest.net), June 26, 1999

Answers

Certainly, all of the time. Remediated code is put into production as it is tested. It should work today just as the old code would have. If you have literally hundreds of programs to change, you don't fix them all and then wait to put them into production all at once. There are many reasons for this. One of the most important is that if something goes wrong in production, it is likely to be in one of the few programs you put in last night.

-- curtis schalek (schale1@ibm.net), June 26, 1999.

>Is this possible without risking corruption?

Short answer: Yes, it's possible.

Long answer:

Some folks talk about "corruption" of Y2k-compliant systems by non-Y2k-compliant systems as though Y2k-noncompliance were a computer virus of some sort. That simply is not so. Y2k-noncompliance is not something that is "catching".

Now, it _is_ possible that Y2k-noncompliant software from one system, "A", could be transferred or copied to another system, "B", on which all resident software had previously been Y2k-compliant. In that case, system "B" would, after such a transfer, no longer be 100% Y2k-compliant, but that would be only because of the new presence of the noncompliant software, not because of "corruption" of the Y2k-compliant software that had been on system "B" before the transfer.

Another aspect of the "corruption" argument is whether data can itself be Y2k-compliant, independently of the software that uses it. Personally, I think it depends on whether the year numbers in the data are unambiguous. If all the year numbers in the data are four-digit (or longer!), and we are speaking of dates in recent history or the not-too-distant future, then the data contains all the information needed to handle both pre-2000 and post-2000 dates correctly, and thus is Y2k-compliant.

OTOH, if the year numbers in the data are only two digits long, it may not be absolutely certain whether they refer to the 20th century or the 21st century (or the 19th ...), and thus the data does _not_ contain all the information needed to handle both pre-2000 and post-2000 dates correctly. In this case, the data may not be Y2k-compliant. Alternatively, if the data has, in addition to two-digit year fields, extra information about the century to which the years belong (perhaps a header field specifying the century to be used), then perhaps the data is unambiguous, and thus Y2k-compliant, after all.
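To make the ambiguity concrete, here is a small Python sketch. The 1950 pivot and the field name are my own illustrative assumptions, not taken from any particular standard:

    # Illustrative only: the same two-digit year can be expanded two ways.
    def expand_two_digit_year(yy, pivot=50):
        # "Windowing": yy below the pivot becomes 20yy, otherwise 19yy.
        # The pivot is an arbitrary convention; nothing in the data itself
        # says which guess is right.
        return 2000 + yy if yy < pivot else 1900 + yy

    print(expand_two_digit_year(1))    # 2001, but a birth year of '01 may have meant 1901
    print(expand_two_digit_year(99))   # 1999
    # A record carrying a four-digit year (or an explicit century field)
    # needs no guessing at all:
    unambiguous_record = {"year": 1901}

The point is that the receiving software can only guess (window) the century; it cannot recover information the sender never put into the data.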

Some people argue that introducing Y2k-noncompliant data into an otherwise Y2k-compliant system of software and data can "corrupt" the system to which it is added. My view is that this is potentially possible, but not necessarily so -- it depends on the details. Show adequate specifications for the software and data in question to a qualified analyst, and he/she can probably determine whether or not "corruption" is possible.

A "corruption" argument with which I generally agree is that Y2k-noncompliant software in one system *may*, because of Y2k-related errors in the software, produce incorrect data even though that data may not contain any year numbers (e.g., incorrect amount of interest added to financial account balances, or incorrect telephone toll charges calculated, because of a mistake in computing the length of time involved). If such incorrect (even though technically Y2k-compliant according to the above criteria) data is then transferred to another system, the fact that the second system now has incorrect data that was the result of a Y2k bug in software elsewhere means that the second system has been corrupted, according to this view. The general answer to this problem is that Y2k-compliance needs to include safeguards against the acceptance and proliferation of such corrupted data. That won't be easy, but there are already many safeguards against the transference of incorrect data betweem systems.

-- No Spam Please (nos_pam_please@hotmail.com), June 27, 1999.


NSP, I thought it might take a bit of explaining, so thank you for your excellent response. It cleared up a lot for me.

-- William D. King (derryking@uswest.net), June 27, 1999.

No Spam did a pretty good job in explaining all the vagaries and possibilities of "corruption".

We must not underestimate how erroneous date arithmetic due to the y2k bug can corrupt otherwise valid data. In certain environments this apparently valid but still corrupt data will pass downstream to other entities, which perform further calculations on it. Those downstream entities may or may not introduce further corruption, depending on their own y2k compliance and the robustness of their checking interfaces.

You see where this is leading?

How does one protect oneself from incoming corrupt data? How can you set up bridges, filters or firewalls to verify the accuracy of ALL incoming data, from multiple sources, perhaps hundreds of thousands of them...

My contention is that with the best will in the world you cannot effectively screen all data - it is simply impossible. No one has yet come up with a viable method other than "isolation", in which case you defeat the object of linking the computer systems in the first place.
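For what it's worth, the kind of bridge or filter being discussed tends to boil down to something like the Python sketch below. The record layout, the 1950 pivot and the 1990-2010 acceptance window are my own assumptions for illustration, not anything taken from a real bank interface. A filter like this can reject dates that are impossible or outside an agreed range, but it has no way of knowing that a plausible-looking amount was computed from a bad date somewhere upstream, which is exactly my point:

    # Hypothetical inbound-data filter ("bridge"): normalise two-digit years
    # and reject records whose dates fall outside an agreed window.  It
    # cannot tell that a numerically plausible amount was derived from a
    # bad date before it ever arrived here.
    def bridge(record, pivot=50, window=(1990, 2010)):
        yy = record["yy"]
        year = 2000 + yy if yy < pivot else 1900 + yy
        if not (window[0] <= year <= window[1]):
            return None, "rejected: year %d outside window" % year
        fixed = dict(record, year=year)
        del fixed["yy"]
        return fixed, "accepted"

    print(bridge({"account": 1234, "yy": 0,  "amount": 6.00}))   # accepted, year 2000
    print(bridge({"account": 1234, "yy": 85, "amount": 6.00}))   # rejected (1985)
    print(bridge({"account": 1234, "yy": 0,  "amount": 72.94}))  # accepted, yet the
                                                                 # amount may be wrong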

This is particularly the case in banking. If you are interested, I suggest you start by looking at the "imported data" archives at www.garynorth.com, and then take a look at the "banking" archives on this forum, doing a search on "data", "imported", "corruption", etc.

All will be revealed.

-- Andy (2000EOD@prodigy.net), June 27, 1999.

