U.S. Data Networks Successful Y2K Test

greenspun.com : LUSENET : TimeBomb 2000 (Y2000) : One Thread


U.S. Data Networks Sail Through Y2K Tests By Jim Wolf

WASHINGTON (Reuters) - A trade group said Wednesday its testing had found no 2000-related glitches in networks that may carry more than a trillion dollars a day in U.S. credit card and other financial transactions.

``It's 'D' minus six months and all systems are go,'' said Martin McCue, chairman of the Washington-based Alliance for Telecommunications Industry Solutions, or ATIS.

The group's latest drill dealt with frame relay networks, the web of systems that authorize U.S. credit card purchases and zap financial data to Federal Reserve clearing houses.

Nancy Pierce, ATIS Director of Industry Forums, estimated more than $1.1 trillion in transactions were processed daily over the U.S. telecommunications infrastructure.

``No Year 2000 date change anomalies were found during the testing,'' which went all the way from swiping a credit card at a simulated point of sale to settlement, ATIS reported.

At issue were fears that some computers may crash or scramble data by misreading 2000 as 1900, the result of old space constraints that pared the date field to two digits.
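The two-digit shortcut described above can be sketched in a few lines (a hypothetical illustration in Python, not code from any system under test; the pivot value of 50 is one common remediation choice, not a standard):

```python
# Sketch of the classic two-digit year bug (illustration only).
def years_elapsed(start_yy: int, end_yy: int) -> int:
    # Naive two-digit arithmetic: "00" minus "99" goes negative,
    # i.e. 2000 gets treated as 1900.
    return end_yy - start_yy

def years_elapsed_windowed(start_yy: int, end_yy: int, pivot: int = 50) -> int:
    # A common remediation: a "pivot" window maps 00-49 to 20xx
    # and 50-99 to 19xx before subtracting.
    def expand(yy):
        return (2000 + yy) if yy < pivot else (1900 + yy)
    return expand(end_yy) - expand(start_yy)

print(years_elapsed(99, 0))           # -99: the 1900-vs-2000 misread
print(years_elapsed_windowed(99, 0))  # 1: correct with windowing
```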

Tested in the latest round were the rollovers from Dec. 31, 1999, to Jan. 1, 2000; Feb. 28, 2000, to Feb. 29, 2000; Dec. 31, 2000, to Jan. 1, 2001; and Feb. 28, 2001, to March 1, 2001. The industry-wide testing was carried out in partnership with the Y2K Financial Networks Readiness Consortium, or FNRC, an industry group concerned with 2000's possible impact on data transmission.

The FNRC is made up of American Express, Bank of America, First Data Corp. (NYSE:FDC), JP Morgan, Mastercard International, MBNA America, Total System Services, Visa International and Wells Fargo & Co. (NYSE:WFC).

ATIS member companies Bell Atlantic, MCI WorldCom and SBC Communications served as primary participants in the latest round of internetwork interoperability testing.

The next phase of drills by the ATIS-sponsored Interoperability Test Coordination Committee will assess the date changes' possible impact on international calling. It will take place in August and September, said Daniel Currie, chairman of the test panel.

In test results released in February and April, the ATIS test panel reported no glitches in U.S. interconnected telecommunications networks and public switched telephone networks.

Nearly 2,500 representatives from 500 companies take part in ATIS panels, which develop and test U.S. network interconnection standards.

-- Mr. Decker (kcdecker@worldnet.att.net), July 15, 1999


See this thread: Y2K Testing of Credit Card Transactions, Data Transmissions Conducted <:)=

-- Sysman (y2kboard@yahoo.com), July 15, 1999.

Worthless article.

Was it a "test" or was it a "drill"? A test is not a drill, and a drill is not a test.

Worthless article.

-- Lane Core Jr. (elcore@sgi.net), July 15, 1999.

Thanks for posting great news! Yeah, that's right, Lane, the doomer chant lives on: I hear nothing, I see nothing. No, this was a test, not a drill, but you go ahead and continue to pick apart the words so that you can justify your opinion. Now let's see how the rest of the doomers weigh in on this.

-- Maria (anon@ymous.com), July 15, 1999.

The story in the other thread says:

"MCI WorldCom and SBC Communications served as primary participants in the testing activities and donated substantial laboratory and staff resources. FNRC member companies also contributed multiple laboratory sites and test support."

Sure looks like a lab test to me, having nothing to do with the real world. <:)=

-- Sysman (y2kboard@yahoo.com), July 15, 1999.

Sysman, how do you test in the real environment? How do you partition the network, roll the clock forward, and isolate the test data from the real data? It causes more problems than you could imagine. It almost started WWIII in the seventies and I haven't heard of it since. Beta versions go into the production environment, but Y2K can't be beta tested that way because of the clock. Please tell us how you solved this problem.

-- Maria (anon@ymous.com), July 15, 1999.

Good news is great news! So again, everything is just fine in the US. Now then, was this testing environment good enough for the millions of daily transactions from overseas? What about foreign POS status and interfaces? Same for banks and phone lines abroad. This is essential because point-of-origin transactions cannot be re-routed; they can only be triggered from the original location. Furthermore, foreign transactional testing would require the participation of many diverse companies and institutions completely outside the US data network testing. Many of these are state-owned agencies (Brazil, Russia, China, Southeast Asia, etc.)

-- George (jvilches@sminter.com.ar), July 15, 1999.

Uh, excuse me, but do you really think they would stage a test that showed they were screwed? These results are worthless. Best to use Westergaard inferential analysis: throw out all of the official reports, look for patterns in the ancillary data.

Never in human history have so many humans blindly trusted that so many other humans won't screw up. - Dr. Ed Yardeni

-- a (a@a.a), July 15, 1999.


I don't have an answer for that one, but I do have some more questions about this test. Were these labs set up using existing devices and protocols? Did they have to change anything to make it work? If so, when will the changes be made in the real world? And this statement: "swiping a credit card at a simulated point of sale to settlement" - simulated? We already know of problems with card readers. Sorry, not enough info here to make me feel that all is A-OK. <:)=

-- Sysman (y2kboard@yahoo.com), July 15, 1999.

Good - they have conducted a "test" - and I don't care if it is a "test" or a "drill" - on the US systems.

But - Maria - as noted above - this was a _test_ on a "lab system" simulating the real world. GOOD!

It shows that, like some other national networks of interconnected computer systems, they HAVE developed a SOLUTION to the problem. Further, this test shows that under the simulated conditions they tested the simulated solution under, the solution works. (There are many other national networks, power and Air Traffic Control come to mind, that have NOT yet tested any theoretical integrated solution, and so don't know if they have a solution.)

Regardless, they have not yet INSTALLED the new system, nor has it been tested (yet) in service to validate that it carries the current traffic. Nor do we know whether it will carry the intended traffic next year. This test indicates that it "probably will succeed" next year. Good. Now, let us see if it will succeed.

By the way - the international test hasn't been done yet. And this international test (scheduled for Sept) can (in the best of conditions) only verify that international data will be exchanged correctly, not whether international DATA itself is correct from the international users.

-- Robert A Cook, PE (Kennesaw, GA) (cook.r@csaatl.com), July 15, 1999.

Sysman, sorry for the lack of info. I know a little about ATIS and the testing (I know the people involved from my company). I can't directly answer your question (but even if I did, I suspect more questions would pop up). My impression was that the devices and protocols used in the test were from the production environment. I can't say what they needed to change, if anything. A simulated sale means they didn't connect to an actual reader passing real-time data; they duplicated the devices in a test environment.

My take on Y2K testing: you test as much as you can in the simulated environment. If this includes some kind of integration or end-to-end test, that's great but not necessary. At the very minimum, you need to test the interfaces. These tests should test the rollover and leap year; if you can do more, great but not necessary. Once the code goes back into production, we know that Y2K changes didn't screw up current functionality.
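The four rollovers listed in the article make a natural minimal self-check, including both leap-year cases. A sketch in Python (the date list comes from the article's test description; the harness itself is hypothetical):

```python
from datetime import date, timedelta

# The rollovers the article says were exercised, as a minimal
# self-check. Note 2000 IS a leap year; 2001 is not.
ROLLOVERS = [
    (date(1999, 12, 31), date(2000, 1, 1)),
    (date(2000, 2, 28),  date(2000, 2, 29)),
    (date(2000, 12, 31), date(2001, 1, 1)),
    (date(2001, 2, 28),  date(2001, 3, 1)),
]

for before, expected in ROLLOVERS:
    after = before + timedelta(days=1)
    assert after == expected, (before, after, expected)
print("all rollovers OK")
```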

a, give up on the conspiracy chant. The "no news means it's bad and the good news proves a cover up" song is getting old.

-- Maria (anon@ymous.com), July 15, 1999.

I agree Maria - my caution is strictly from the standpoint that the "system" - in whole, and under actual operating conditions, has not yet been "used" in service.

It should work: stress tests (minimum and maximum operating loads) are not easy to simulate in a test environment, but they should be included in the test package if reliable conclusions are to be made from the test. Let us hope they were. Let us hope the repaired (remediated) system can be installed and put in service as planned. BUT, remember that it is not yet in service. It has only passed one in a series of lab-level simulated tests. BUT, a's comment is valid in that we have immediately seen every such test publicized by the government, and in most cases (all that I have read!) the scope, duration, and extent of the test was certainly "exaggerated" in the headline, in the text of the first paragraph, and by the "summary quote" from Mr. K's office congratulating the participants. Remarkable that every such story gets an individual comment from Mr. K, isn't it?

Only in the middle paragraphs, deep in the story, do the actual limits and "work yet to be done" get mentioned.

And that is the basis for his "conspiracy" - it is your job to show that more things have been completed and tested than reported - and so far, there is little evidence of this.

-- Robert A Cook, PE (Kennesaw, GA) (cook.r@csaatl.com), July 15, 1999.

Robert, I agree it won't be stressed in a test world. But how would a stress test help verify compliance? Y2K is a fix on the dates, not functionality. How can changing the date affect the software's ability to handle larger or lesser volumes of data? Maybe I'm missing something. That functionality has been tested and already proven in the real world.

I can't speak for other companies, but my code is already installed in the production environment. This "system" is at least partially installed.

As far as the article goes and Mr. K's comments, I take them with a grain of salt. Maybe I've been in requirements definition too long but I always strip away the adjectives and adverbs to get to the meat of the sentence.

-- Maria (anon@ymous.com), July 15, 1999.

It depends on the nature of the change, and the extent of the change with respect to hardware and date reads/writes.

For example, several years ago, in one FAA ATC system replacement effort that was eventually cancelled, the tests were proceeding okay: the new displays were in place and were "properly" reading data from the (existing) radars and were exchanging data correctly with the (existing) computers and the (future) computers.

A similar "test environment" was actually built for the new Denver Airport baggage handling system, which, like the credit card data management system, must be processed in "real time." All the new processes worked correctly in the test environment, at the use levels simulated in the test.

But once put into place in the "real world," both failed and had to be rebuilt; the functional design of the whole system simply was not "fast enough" to keep up with the volume of transactions required. The data was accurately exchanged (in these cases), it just wasn't usably exchanged.

Either Yourdon or Hamasaki (don't recall which) referenced this too in an earlier book, pointing out, using the metrics for a credit card "batch" process, how correctly solving the Y2K problem by an incorrect method (improperly "re-translating" dates to-and-fro as data was exchanged internally) would slow the batch transaction from overnight to over-day-and-night.

In that case, he showed that the extra time required to translate data would require massive increases in data processing power, and thus would require replacing the mainframe. Exactly what needed changing in this case, I don't know - I won't guess arbitrarily. The actual system designers in this case (we hope) are making the correct decisions, and will remain on-track to actually implement their system.
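The slowdown described here is easy to model: if each internal hand-off re-expands and re-compresses the two-digit date, the conversion work multiplies with the number of hops. A toy Python sketch (record counts, hop counts, and the pivot window are all invented for illustration):

```python
# Toy model of the batch slowdown: windowing a two-digit date again
# at every internal hand-off, versus expanding it once at the boundary.
CALLS = {"expand": 0}

def expand(yymmdd: str) -> str:
    """Two-digit -> four-digit year, pivot window at 50."""
    CALLS["expand"] += 1
    yy = int(yymmdd[:2])
    return ("20" if yy < 50 else "19") + yymmdd

def pipeline_retranslate(records, hops=5):
    # Each hop expands the date, uses it, then strips it back down.
    for _ in range(hops):
        records = [expand(r)[2:] for r in records]
    return records

def pipeline_expand_once(records, hops=5):
    # Expand once at the edge; later hops pass the records along as-is.
    return [expand(r) for r in records]

recs = ["991231"] * 1000
pipeline_retranslate(list(recs))
after_retrans = CALLS["expand"]                       # 5 hops x 1000 records
pipeline_expand_once(list(recs))
print(after_retrans, CALLS["expand"] - after_retrans)  # 5000 vs 1000 conversions
```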

We hope it will "work-as-advertised" next year.

-- Robert A Cook, PE (Kennesaw, GA) (cook.r@csaatl.com), July 15, 1999.

Yes ma'am, Maria. Dr. Yardeni is a conspiracy monger. And I guess you're right, the banks' self-reporting shows Y2K is nearly fixed, this being the "ultimate" in testing and all.


-- a (a@a.a), July 15, 1999.

The original story quoted 1 trillion dollars in transactions per day, or just over 11 million dollars in transactions per second.

A stress test then would simulate $15,000,000.00 per second, which (at $10.00 per purchase, or some other value based on actual amounts!) would mean 1.5 million transactions per second. (Aside: some testers would recommend checking other limits as well. For example, what is the maximum number of "big" transactions that could be handled (one $15,000,000.00 transaction?), what happens if 15,000,000 $0.01 transactions were actually attempted, what if somebody tries 1.5 million "refunds" (negative transactions) at once, what if somebody types in a negative number somewhere, etc. Once the routine operations are successful, most software failures will occur at "weird" occasions and at transitions between steady-state routine conditions. Both of these, of course, are typical of year 2000 operations.)
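The arithmetic in the post checks out, and can be verified directly (the $10 average ticket and $15M/sec stress figure are the post's own assumptions):

```python
# Back-of-envelope check of the stress-test numbers in the post.
DOLLARS_PER_DAY = 1e12            # "more than a trillion dollars a day"
SECONDS_PER_DAY = 24 * 60 * 60    # 86,400

dollars_per_sec = DOLLARS_PER_DAY / SECONDS_PER_DAY
print(f"${dollars_per_sec:,.0f}/sec")            # roughly $11.6 million/sec

# Stress margin and transaction count at the assumed $10 average ticket:
stress_rate = 15_000_000          # the post's $15M/sec stress figure
txns_per_sec = stress_rate / 10
print(f"{txns_per_sec:,.0f} transactions/sec")   # 1,500,000
```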

The latest French rocket, for example, blew up when its guidance computers shut down. A proven, older guidance processing program (used in the Ariane 3 and Ariane 4 rockets) was copied into the new computer, in the new rocket. The new rocket was much faster than the old rocket, and the guidance limit (the maximum "readable" sideways velocity) was exceeded by the greater thrust of the Ariane 5 rocket.

Result? The first control computer tripped off-line automatically. As it should have, because it was receiving "bad" data. The second computer tripped off immediately thereafter, because it too was receiving the same "bad" data, data which had exceeded the preset limits of the original program. With no guidance computers, the rocket steered off course and needed to be blown up manually.

Why manually? Because the self-destruct program had been re-programmed correctly, and did not sense any problem in its data. Thus it thought it was still safely on course, until the guidance computers shut themselves off.
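The failure mode described here, a proven routine shutting down when reused outside its original operating envelope, can be caricatured in a few lines (the values and function name are illustrative only; the actual Ariane 501 fault involved a 64-bit floating-point value converted to a 16-bit signed integer):

```python
# Schematic of the failure mode: an old, "proven" routine hits a
# hard-coded range limit when reused in a faster vehicle.
INT16_MAX = 32767   # largest value a 16-bit signed integer can hold

def convert_velocity(horizontal_bias: float) -> int:
    value = int(horizontal_bias)
    if not -32768 <= value <= INT16_MAX:
        # The old code treated out-of-range data as a hardware fault
        # and took the guidance channel offline.
        raise OverflowError("operand error: guidance channel offline")
    return value

print(convert_velocity(20000.0))      # fine within the old flight profile
try:
    convert_velocity(64000.0)         # the new, faster profile
except OverflowError as e:
    print(e)
```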

The result would then tell the program testers if the changes in the system _overall_ could handle the load needed to remain in operation next year. If not, then the system, regardless of anything else, needs to be rebuilt/redesigned to avoid a Y2K-induced failure next year.

-- Robert A Cook, PE (Kennesaw, GA) (cook.r@csaatl.com), July 15, 1999.

I start reading the thread thinking, "hey, good news!".

I continue reading all the great posts and enjoy the insight given.

I make it all the way to this point and now I think, "ah, f*ck, more spin."

-- forum regular (don't spin me @ny.more), July 15, 1999.


I agree with your point that it depends how the changes were made. Yes, some implementations can slow the system. In my case, it didn't. So maybe the transaction doesn't go through as quickly; then you'll need to analyze the tolerance levels. What's acceptable? I don't know, but I suspect that the current system can take that increase. I don't think we're even close to stressing the current system today. I also suspect that, just as we have performed throughput analysis and performance testing, the other companies did as well.

Your example of the Denver baggage system explains the problems they found. The functional design was flawed. Y2K doesn't change the functional design of any system; it changes the dates. I agree that improperly "re-translating" dates to-and-fro as data is exchanged internally will slow the system. But this is obvious to those of us doing Y2K remediation, and we have already considered this in our solution.

What's with the discussion of rockets? Frankly, I wouldn't want a computer to automatically blow up my rocket. Manual destruction has been done before, nothing new here. If you put this discussion in here as a case in point about stress testing, sorry, I don't buy it. Y2K remediation is not like rocket science, no comparison in any form. Y2K is not a new system development as in your example, which shows the problems of systems integration, a very difficult task. Again, Y2K cannot be compared to system integration; two totally different projects.

a, I assume you're a fairly intelligent person; after all, you have a degree in science. Sometimes you don't actually display that intelligence, and your posts on this thread prove (that's right, prove) my point.

-- Maria (anon@ymous.com), July 15, 1999.

So, to make a long story short, María, what you are trying to tell us through a lot of fancy footwork is that there ain't no better way to test these gizmos. And you are probably right, in view of the fact that time is up and resources and commitment were never really available. That's the reason they are doing it this way. But what Robert is trying to tell you is what counts: if this is as good as you can test, it ain't by no means enough, María. Same goes for the international 'lab test,' which will necessarily be even less reliable.

Your "not necessary" confidence doesn't fly with Y2K, María.

-- George (jvilches@sminter.com.ar), July 15, 1999.

I don't believe the financial industry can say much about their testing results or examination results at this time other than to say "we're ready". ALL OF THAT comes from the FFIEC (at least in our experience). Go to their website www.ffiec.gov and check it out. The FFIEC governs the whole shooting match when it comes to Y2K and the financial industry. The OTS, OCC, FRB, NCUA and FDIC (which make up the FFIEC) do all the audits and release all the findings.

I can't even tell our clients their findings. They have to call the FFIEC themselves to get them.

When they talk of setting up an infrastructure, I doubt very seriously they're referring to a lab. Our external vendor testing has been accomplished via EDI all over the country the past 4 months. The entire infrastructure was future-date tested, from our mainframe to our servers to frame relay to their servers to their mainframe. As close to real-world testing as you're gonna get.

FWIW - the MBA testing has had NO Y2K issues at all. NONE.


-- Deano (deano@luvthebeach.com), July 15, 1999.

Maria said:

"Y2K is a fix on the dates not functionality. How can changing the date affect the software's ability to handle larger or lesser volumes of data? Maybe I'm missing something. That functionality has been tested and already proven in the real world."

THAT'S the crux of the Y2K problem. It's simple on the surface ("what's the big deal, just add a couple of digits to the database" - that was my first "intelligent" reaction to Y2K - about 4 years ago).

The problem is the scope, which in turn is the complexity. Y2K is everywhere; it affects everything - even stuff outside of date calculations. It can't be explained except that it is insidious - and it is everywhere all at the same time.

-- Jim (x@x.x), July 15, 1999.

Whoops - I'm sorry you missed my point in discussing the rocket failure:

What I was trying to show was that a "working" and entirely satisfactory existing computer program, proven in many rocket launches before, was completely unacceptable in new circumstances. In that case, it wasn't even a condition of "software testing" failing to identify a fatal error. The program had actually BEEN USED in the field (in space, actually, but you get the point).

The fatal error was in assuming that the same program would work in the same computer in the same rocket control system in the same way it did in the last launch. Well, one factor was missed: the new rocket went too fast. Well, actually, the new rocket went exactly as fast as it was supposed to go; it just so happened that the old program went slower.

Fatally slower. Most "undiscovered" year 2000 computer problems will be of this kind: no matter how good the program design, how good the remediation effort, or how good the testing, some things will fail. Some things will have small problems; we hope most will be minor. But others will be flawed,

Fatally flawed. But it might not only be rockets that will come to a burning end.

Unfortunately, it now appears that many things will not be remediated at all, many of those that have been remediated will be only partially tested, and many of those tests are artificially limited, and so are less likely to discover hidden (potentially fatal) flaws.

-- Robert A Cook, PE (Kennesaw, GA) (cook.r@csaatl.com), July 15, 1999.


What you say is quite true, but your emphasis might be misleading. The question is whether this test, *as conducted*, was capable of uncovering y2k errors in the equipment. It may well be the case that this test does not cover (or cover adequately) the question of whether remediation has rendered the equipment incapable of supporting the required transaction rate. However, it seems from the article that the transaction rate was certainly not ignored.

I feel you're right in implying that testing is like a map of the territory, and isn't the territory itself. To the degree that the map is simplified, it runs the risk of omitting what may turn out to be key features of the actual territory; this is true. I just feel you're wrong in emphasizing this shortcoming, and glossing over the genuine and valuable aspects of the tests themselves.

And I don't really mean to single you out. This report is pure, unadulterated good news. The recognition that ALL testing is necessarily limited to some degree does NOT turn this report into spin. On the contrary, the real spin being applied here is the cascade of criticism of these tests, for the sin of not being more than they can be. One could easily come away from this thread with the misimpression that these tests don't really count and don't mean much, because of this negative spin. And that would be a mistake.

-- Flint (flintc@mindspring.com), July 15, 1999.

Good points - which is why I was pleased with the preliminary reports that the DOD logistics systems were being tested in a simultaneous, on-line, cross-service integrated fashion. Further, they were planning on re-testing later, etc. THAT'S WHAT SHOULD BE DONE, at a minimum, everywhere.

Instead, it appeared to be the first such test anywhere. (Not really true, but then Mr. K comes up with his comment that directly said it was the "best test," "... the most integrated test ever.")

But the spin was deliberately that this ONE test of 44/1000 logistic systems PROVED that the ENTIRE military would be okay next year.

And that level of "spin" is clearly a blatant lie.

-- Robert A Cook, PE (Kennesaw, GA) (cook.r@csaatl.com), July 15, 1999.

Howdy Flint,

I agree that the results of this test are good news, but my questions still stand. Was this test done in a lab... Did they have to change anything to make it work... If so, WHEN will these changes be put "in production"???

A few points from the other thread:

This story sounds like the FAA lab. Remember what Kenneth Mead said about the FAA and "problems installing test-center solutions"?

And how complex is the frame-relay network? I spent a few years doing TP, but never did frame-relay. Come on folks, somebody must know something about frame-relay. More info, Maria!?!?

Got any comments for me, Flint? <:)=

-- Sysman (y2kboard@yahoo.com), July 15, 1999.

1939 (1999)

Neville Chamberlain (FNRC) steps off the plane singing and waving the (worthless) treaty, "Peace in our time" (Y2K is OK!)

Maria says "Great news Neville"!

Less than a full deck, gushes, "Great news indeed, regards"!

1940, Dunkirk. (December 15, 1999 and after)

BUT, he (THEY) promised it would be ok (ready)


-- brother rat (rldabney@usa.net), July 16, 1999.

George, "Your 'not necessary' confidence doesn't fly with Y2K, María." You're entitled to your opinion, but how do you arrive at it? How many systems have you tested, and of those, how many have been related to Y2K? I'm currently doing Y2K integration testing (all code is remediated and component tested; do you know what that means?). I've worked on about 10 - 15 test programs. When you get to that level, then maybe you can explain your disbelief of "not necessary".

Jim, I don't think you read the rest of my post. It's a management problem: not technically difficult, but difficult in the management sense. How do you eat an elephant? One bite at a time. Pieces become compliant (which can be done in parallel), and before you know it the entire system is compliant.

Robert, OK, now I understand your point and I agree. Again, system integration (or fitting pieces into an already existing system) is extremely difficult because you can easily overlook something.

Sysman, What do you mean by a lab? It was done with test equipment (isolated from the real world) located in various places. Does that constitute a lab? The connectivity used is the same connectivity (lines and protocols) used in the real world. The software used was the remediated and tested software currently in the production environment (copied into the test environment). Even though FAA conducted tests in a "lab" environment, they also conducted tests in the "real world" with connectivity to an aircraft using the Denver airport as the controller. That reduces the risk at least for the Denver configuration (and any similar configuration).

-- Maria (anon@ymous.com), July 16, 1999.

We all know too well what you are NOT doing, María. The name of the game is integration and regression testing (present and future), besides logical testing and component testing, under real-life conditions. And if you do things well, honey, then you'll find out that you need to go back to the remediation you thought you had finished but had not (completely). María, I find you very conceited and egotistical. Furthermore, not only do you have bad breath, but you also miss the accent on María, a beautiful name, but ugly without the accent on the 'i'...

-- George (jvilches@sminter.com.ar), July 16, 1999.

I'm confused - why are we arguing about whether or not the test/drill was successful? Whether or not it is indicative of reality? Whether or not the results using a very small sample of third parties represents success with 100% of third parties?

Even the gloomiest doomer will acknowledge some level of Y2K remediation success. Big Deal.

Why aren't there thousands of claims of Y2K compliance, complete with independent verification? Not even hundreds of claims/verifications? What about dozens of verified compliance claims? Could it be that there aren't many significant financial organizations who have successfully passed third party audit?

Why are we quibbling about the veracity (second time today with that word) of this one claim?


Give it up! Open your eyes to the possibility - yes, it's scary.


Give it up! Keep prepping and don't look back - the time is short.


-- tangbang (get@yours.now), July 16, 1999.


Out of curiosity, just who would do this verifying? Where would they draw their expertise from? Do you have any suggestions?

In any case, on GN today was an article from an IV&V outfit for banks (these *do* exist) saying 92% of ALL financial programs audited have NO errors, while the remainder (in the private sector) have about 141 errors per million lines of code. And that one outfit had audited billions of lines of code. So at least where an auditing industry exists, this stuff *has* been done, even if you're not aware of it. Check it out.

-- Flint (flintc@mindspring.com), July 16, 1999.

Flint, where you said that "this stuff *has* been done," you should have also added "on a VERY LIMITED scale," simply because what you called "billions" of LOC are only 60 million. (Please check the original Gary North thread you mentioned as source.) The amount of code throughout the world is several dozen billion lines, as you should know.

In the second place, it's funny you did not mention how POOR the IV&V results were for the government sector. They do not matter, Flint? How come you are not strict when analyzing the flip side, Flint?

-- George (jvilches@sminter.com.ar), July 17, 1999.


You're right. That one outfit reported 60 million lines, and government organizations were much worse. As expected, which is why I emphasized the private sector. But in the US, banks are private.

But don't exercise the Cook Principle -- that is, assuming that the IV&V we know about must be anomalous, and what we don't know will be worse. I saw no suggestion that this one outfit cherry-picked only the very best code to audit. In fact, North featured this article. And face it, North selects only the worst he can find. If this is the worst record *Gary North* can find, things can't be all that bad.

-- Flint (flintc@mindspring.com), July 17, 1999.
