Break the bank part II - the compliance myth
greenspun.com : LUSENET : TimeBomb 2000 (Y2000) : One Thread
Just wanted to comment on the myth (perpetuated by North and his ilk) of compliance and 'data pollution' between systems of varying compliance.
In a nutshell: it's bogus.
Firstly, there are many companies that - today - aren't 1998 compliant. I know a large distributor whose inventory management system has had problems for 18 months. When the system goes out - and it does regularly - it can be for 36 hours or more. At this point, the inventory department goes into 'manual mode', updating bills and attendant paperwork in hardcopy until it can be batch loaded back into the system. Given the fact that a surprisingly high percentage of systems don't work 100% today, one should be considerably less concerned about full Y2K compliance.
Secondly, regarding systems of various levels of compliance exchanging data: this is the _easiest_ part of the system to remediate. In fact, most EDI or EFT systems already have filtering to ensure that incoming data 'makes sense' (this goes for quantities, currency values, date fields, etc.). This is the simplest part of the remediation - there are fewer calculations and fewer side-effects. Testing is considerably easier.
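The kind of incoming-data filtering described above can be sketched in a few lines. This is an illustrative sketch only — the field names and limits are invented for the example, not taken from any real EDI gateway:

```python
# Sketch of field-level "reasonableness" edits an EDI/EFT gateway
# might apply to an incoming record. Field names and limits are
# hypothetical, chosen only to illustrate the idea.

def validate_record(rec):
    """Return a list of problems found in one incoming record."""
    problems = []
    qty = rec.get("quantity")
    if not isinstance(qty, int) or not (0 < qty <= 100000):
        problems.append("quantity out of range: %r" % (qty,))
    amount = rec.get("amount")
    if not isinstance(amount, (int, float)) or not (0 <= amount < 10000000):
        problems.append("amount out of range: %r" % (amount,))
    yy = rec.get("year")  # two-digit year field
    if not isinstance(yy, int) or not (0 <= yy <= 99):
        problems.append("bad two-digit year: %r" % (yy,))
    return problems

good = {"quantity": 12, "amount": 49.95, "year": 0}    # year "00"
bad = {"quantity": -3, "amount": 49.95, "year": 1900}
print(validate_record(good))  # []
print(validate_record(bad))   # flags quantity and year
```

Note that these edits catch malformed fields only; whether such checks can catch plausible-looking but wrongly calculated values is exactly what the rest of this thread argues about.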
If North had even a tiny bit of development background he would do a much better job of filtering the content on his site. But then, his vendetta against the fractional reserve system wouldn't be quite as effective, would it?
Worry about Medicare, but personally, I don't worry about this side of things.
-- Bob Knight (firstname.lastname@example.org), December 19, 1998
-- Arlin H. Adams (email@example.com), December 19, 1998.
And what exactly is your experience with mainframes? And why pick on North, a historian/economist? Let us hear you call forth Hamasaki or Infomagic to be your strawman. Go down to Infomagic's reply to Webster, posted yesterday. Respond to his article, Charlotte's Web, or to Hamasaki's Weather Reports. Convince me by taking on the experts, not lightning rods like North or Milne.
-- BBrown (firstname.lastname@example.org), December 19, 1998.
Bob Knight: I have worked on mainframes. North may be exaggerating somewhat, but you are seriously misguided.
-- a (email@example.com), December 19, 1998.
# # # 19981219
BK: I second the opinions of 'BBrown' and 'a.'
From professional experience, the majority of applications responsible for "reasonableness" editing of _raw_ data have been limited to checking whether a datum is numeric or alphabetic, plus punctuation. Management has short-circuited proper editing rules in the name of speed. Edit rules beyond that (i.e., range limits, checks for duplicates, etc.) were, and still are(!), discouraged. The usual appellations/canards: too much "superfluous" development time -- up-front specifications, then coding, then testing -- for such "trivia," and too much CPU "overhead."
Don't do it "right!" Just "do it." -- Management wanted a house of cards instead of a house of brick and mortar in the name of "feel good" budget bottom lines. They've gotten what they've paid for. The piper is knocking at the door ...
'Tis past time to pay up ...
Regards, Bob Mangus # # #
-- Robert Mangus (firstname.lastname@example.org), December 19, 1998.
> you are seriously misguided
Enlighten me (us) please. I'd like to know specifically how data interchange (take an X.12 invoice transaction as an example) into a system can pollute incoming date fields once a windowing scheme (or similar) is in place.
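The windowing scheme mentioned here can be sketched as follows. The pivot year of 30 is an arbitrary assumption for illustration; real shops picked their own:

```python
# Sketch of a 100-year windowing scheme for expanding two-digit
# years. The pivot value below is an assumption for illustration;
# actual systems chose their own pivot (and had to agree on it
# with their trading partners).

PIVOT = 30  # hypothetical pivot year

def window(yy):
    """Expand a two-digit year: at or below the pivot -> 20xx, else 19xx."""
    if not 0 <= yy <= 99:
        raise ValueError("not a two-digit year: %r" % (yy,))
    return 2000 + yy if yy <= PIVOT else 1900 + yy

print(window(98))  # -> 1998
print(window(0))   # -> 2000
```

One caveat worth noting: two systems interchanging data with different pivots will silently disagree about which century a year belongs to, which is the coordination question the rest of this thread argues over.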
I can assure you that any company large enough to have EDI/EFT integrated into their systems has an IT staff large enough to address the data interchange issue. Most, if not all, have already done so.
My background is 20 years experience in IT and embedded systems (firmware) revolving around the manufacturing and conversion industries. This includes payment, billing and similar systems all the way down to PLCs on a factory floor.
Now, explain to me, if you would be so kind:
o How many failures are there today in systems of all kinds - and how many failures will there be from 1/1/1999 through 1/1/2000? My contention is that there will be more failures, but not anything that every IT department has not already dealt with. Recently, a consultant with a multi-billion dollar organization (you've all heard of it, one of the largest companies in the world) told me that he had - just a few years ago - found an algorithm problem in billing in which tens of thousands of customers were never billed for services. The result was many millions of dollars in invoices which were never sent. Very similar to a Y2K-style problem - and one which occurs in many, many companies world-wide... yesterday, today and tomorrow. Or are you insisting that we have no IT failures today? Seriously?
o As I noted in my original message, the simplest piece of a system to remediate and test is the data-interchange component(s). If you disagree with this contention, please elucidate with a transactional example. The high-priority problem exists elsewhere (say, in a pharmacy system that extends valid refill dates out a year).
Don't mistake me for a Pollyanna. I am not one. However, I would like lurkers to understand that there is a bell-curve for everything, even Y2K problems. If the left side of the bell-curve represents easier remediation problems and the other signifies more difficult issues, my assessment is that the 'data-interchange' problem set lies well to the left. Let's assume that Medicare, for instance, lies to the right :-> ...
Or does a bell-curve for remediation not apply to Y2K? Seriously?
-- Bob Knight (email@example.com), December 19, 1998.
Mr. Knight, let us assume for one misguided moment that your analysis is correct, and let's just focus on your Medicare concerns. Which of the following is the source of your anxiety?
The fact that over the next 10 years, we're going to run a cumulative deficit in the Part A Trust Fund, the hospital care trust fund for Medicare, of almost $600 billion? Over the next 10 years, over $1 trillion of general revenue will go into Part B of Medicare. In total, we're going to have a general revenue inflow into Part A and Part B of Medicare of $1.6 trillion in the next 10 years?
Or is it:
Total federal expenditures in FY98 were about $300 billion?
Or is it:
The unfunded liability of Medicare is $2.6 trillion?
Or is it:
When this program started in 1965, we had 5.9 workers per retiree. We are now down to 3.9 workers per retiree. We are headed to 2.2 workers per retiree?
Or could it possibly be the economic impact to the overall economy and 'your side of things' when this massive, non-compliant entitlement program comes sliding down around your bare ankles and puts the masses of beneficiaries, doctors, hospitals, intermediaries, contractors, bureaucrats, banks, and suppliers into a recession shuffle?
-- MVI (firstname.lastname@example.org), December 19, 1998.
Bob Knight: the problem is you're looking at the data interchange situation in a vacuum. Data interchange will be hampered by
-actual miscalculation of the data interface parameters (which is your focus)
-data that is valid but has been corrupted by erroneous processing prior to the interchange
-failures from no or insufficient remediation
-failures from no or insufficient testing
-failures caused by other failures happening in parallel because the errors cannot be addressed in a timely fashion
-failures resulting from the usual assortment of related y2k problems; commerce, transportation, power, telecom, etc.
-a host of other types of failures and processing errors that I'm sure other programmers can add to this list.
To look at one system or one type of interface and decide that -- hey, it won't be such a big deal to safeguard this code from corruption -- is very naive and fails to address the real nature of the problem. Namely, that the code driving the system of systems is COMPLEX and should have been remediated starting around 1990 or 1995 at the latest. Not 1998 or 1999. This is the opinion of the vast majority of DP professionals that have the big picture of the situation (at least the geeks, maybe not the managers).
When the exceptions in the code begin, there is no way on God's green earth that IT is going to be able to muster the response necessary to prevent the whole enchilada from going up in smoke. They're essentially in over their heads right now. They simply will not be able to keep up with the failures, and the resulting snowball will wreak havoc worldwide.
-- a (email@example.com), December 19, 1998.
Bob Knight, Don't you have a basketball game today?
-- Alive in 2001 (Outthere@somewhere.com), December 19, 1998.
I've been lurking for some time and feel I now have enough information to speak up.
Bob Knight, you have stumbled into the classic definition of a cult and are wasting your time here. The insiders are superior and the outsiders are to be pitied, dismissed, and abused. There really is no valuable information here, as most of the participants have been duped into turning this board into a testimonial for Mr. Yourdonefor's propaganda.
If you want to be a voice of reason, I would suggest posting your thoughts and leaving, as very few on this forum seem to be interested in an objective discussion of the issues. By just doing quick posts we may be able to avoid the panic and hoarding which Mr. Yourdonefor is hoping for in order to make more money.
If I've ruffled any feathers I apologize, but I just felt it was time I expressed my opinion.
-- KarenKurious (firstname.lastname@example.org), December 19, 1998.
KarenKurious says, "...very few on this forum seem to be interested in an objective discussion of the issues."
Based on my understanding of the concept of objectivity, this forum is the poster child for it. Before you leave, provide us with your comments on Y2k which meet the criteria of the definition below.
ob-jec-tive adj. 1. Of or having to do with a material object as distinguished from a mental concept. 2. Having actual existence. 3.a. Uninfluenced by emotion or personal prejudice. b. Based on observable phenomena.
-- MVI (email@example.com), December 19, 1998.
"My letter to a pollyanna":
-- Kevin (firstname.lastname@example.org), December 19, 1998.
Bob - you are obviously fairly intelligent, but that does not mean you are immune to short-sightedness or are not wearing blinkers. And having worked in IT, it seems to me that you are focussing on areas you are experienced in, which is not necessarily the way to approach the big picture. Throwing in a few straw men of systems that are wasteful now - "hey, we're screwing up now, so in 2000 we'll be able to get by with a few more screw-ups, just like we are now, we'll muddle through" - is naive in the extreme (I'm being polite).
We are all worried about the Iron Triangle, if we lose any of these three we are in deep merde : Banking, Communications, Electricity.
I would put Electricity at the top.
I know about Banking, this is my field, so Bob, consider the following comments from your peers, (mainly) programmers in at the deep end, like yourself, but slightly more switched on I would wager.
"how long will it be until they (VISA/MC) notice it is bad data? Not everything will "look" contminated until each person gets his bill in early-mid Feb 2000. And even then, many people will probably have good statements. Or maybe none will.
After how many milion transactions, some of which will be good ones, of course, some will be mixed up good ones, and some will be fraudulant "good" ones taking advantage of the confusion.
Adding: Each transaction (worldwide) represents a merchant's and a supplier's life. Without these legitimate sales, the merchants can't continue in business. Without real expectation of getting repaid from VIA/MC/AMEX to their bank account - these guys would be better off not selling - not accepting any credit, but rather relying only on cash.
Then you have the run on the bank starting - for those who've not already pulled their money out."
"I think you just illustrated the full magnitude of the problem. It's worldwide, especially in the financial arena, it's highly computerized, and bad data can get propagated. Firewalls and filters only work to shut off the sources of bad data. If it's a financial business you simply can't disconnect the spigot without also shutting off the flow of money. Oh, my."
"Ask yourself just *one* crucially important question. Why the hell did Visa issue x million credit cards (via the issuing banks) to umpteen banks worldwide with expiry dates of 2000-2001 when the software, be it at Visa, the Banks or POS (point of sale) terminals could not handle it?"
"Somebody also mentioned triage or workarounds (Paul) - i.e. cutting off mainframe to mainframe discourse from the USA to/from certain countries...logical idea and valid... but not in this lifetime. Who's gonna make the decisions, who will be the central point, who will swallow the legal implications...? Good idea, proposed before, but in the time frame and with the "leadership" we have now, unworkable. The very thought of this would collapse the world banking system..."
"One good reason why bad data *will* be received is that it won't be bad in the sense you are talking about (scrambled - garbage). It might just be inflated or deflated numbers, a product of faulty calculations. No interface program is going to do validity checking like that. That would presume one knew the range of the numbers coming in before they came in.
> This is the misguided, unsupported Gary North idea of "corrupt" data
See, you are calling invalid data corrupt data. Corrupt is corrupt. Invalid can be either corrupt or deceptively out of range.
Even corruption will be received, though, in some (maybe many) cases. That is because the interfaces (between bankA and bankB) are going to be buggy. IMO, the interfaces will be the last piece of the puzzle to get tested ..... way too late.
I've done interface testing and the number of "surprise" failure conditions that arise when interfaces change is astounding. There is no way to quantify it or explain it to somebody that has not done the testing of interfaces. They are quirky at best - total wild animals on their worst days. And 00 will bring in their worst days."
"Date-dependant calculations have nothing to do with data-interchange validations routines. What Andrew is pointing out is that non- complient programs will produce data that is wrongly calculated; these errors will spread magnitudianlly throughout the global financial system. Validation routines between data interchanges simply verify that the parameters are correct: not the calculations forming the data. This is the meaning of corrupt data: bad information, not bad parameters. Andrew (and Gary North) are precisely correct. You are espousing the "misguided, unsupported idea of "corrupt" data " equalling bad parameter transfers. That is incorrect and a straw dummy. Corrupt data = data correctly parametered yet wrongly calculated. Wrong calculations beget wrong calculations ad nauseum. Within 24 hours of the turnover, the Global Finacial System will either A)be completely corrupt B)be completely shut down so as to avoid A. The result is the same in either case; even if we don't go Milne, you are going to see a mess bigger than you can imagine. Alan Greenspan was entirely correct when he stated that 99% is not good enough. We will be nowhere close--not even in the ballpark. The engines have shutdown; the plane is falling--we simply haven't hit the ground yet. Scoff if you must; as a professional working with professionals, I know the score. It's going down. This is why at least 61% of IT professionals are pulling their money out before it hits--of course, in 10 or 11 months, that number will rise to 100%; but then, it will be to late. We know for a fact that that 50% of all businesses in this, the best prepared of countries, will not perform real-time testing. As a Programmer/Test Engineer, I can therefore assure you that at least 50% of all businesses in this, the best prepared of countries, are going to experience mission-critical failures, Gartners new optimistic spin not withstanding. Remediation sans testing is not remediation. 
The code will still be broken, just in new and unknown ways."
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
">The "amount" of a financial transaction is typically not "calculated" - it just "is". If I lay down a charge card for a $49.95 purchase - the amount is not "calculated" and then sent to Visa - it just "is" $49.95.
Unless you are in Timbuktu.
That's where the interconnecting banks with their currency conversion programs come into play.
>And to top it all off, the Visa machine has to echo back to me the amount that I agree to pay for - if there is some "miscalculation" (sic) or mis-transmission of the amount (and what would 1/1/2000 have to do with that?), then I will simply not agree to authorize that other amount.
Precisely. There will be a magnitude of "declines" in the trillions, ergo the collapse of the Banking System."
"I have been a computer programmer since 1961, and programmed in at least 15 languages for mainframes and personal computers.
Several respondents, including Bill Moquin on July 27, have challenged your statement that non-compliant data can infect a Y2k compliant computer. However, their reasoning always seems to be that the non- compliant data would appear in the form of a date that will be edited by the receiving computer and found to be erroneous and thus rejected.
They are failing to consider that the non-compliant data may be in the form of the result of date arithmetic done on a non-compliant computer and an erroneous answer that appears to be compliant transmitted to the compliant computer.
For example, if a non-compliant system determines in error that some payment should be made or some product should be shipped, and forwards that information to a Y2k compliant computer system in error, the receiving system (bank?) may merely edit for accurate account number, valid amount, valid item number, etc, and perform debits and credits of the amount or issue a shipping manifest, thus compounding the error. The compliant receiving system would thus have participated in an erroneous, non-compliant transaction, and its accounts would not be correct."
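The failure mode this programmer describes can be made concrete with a small sketch. Everything here is hypothetical - the field names, edits, and upstream logic are invented to illustrate the point, not drawn from any real system:

```python
# Sketch of the point above: receiving-side edits check format and
# range, not whether the upstream decision was right. All names and
# logic are hypothetical, for illustration only.

def passes_edits(txn):
    """Typical receiving-side edits: valid account, amount, item."""
    return (txn["account"].isdigit()
            and 0 < txn["amount"] < 1000000
            and txn["item"] in ("A100", "B200"))

# Upstream, a non-compliant system subtracts two-digit years:
# 00 - 98 gives -98 instead of 2, so it wrongly concludes a renewal
# is due and emits a payment transaction.
years_elapsed = 0 - 98          # year "00" minus year "98"
renewal_due = years_elapsed < 0 or years_elapsed > 1

txn = {"account": "12345678", "amount": 49.95, "item": "A100"}
print(renewal_due)        # the upstream logic fires in error...
print(passes_edits(txn))  # ...and the receiver accepts the result
```

The transaction carries no malformed date at all, which is why it sails past the kind of editing the earlier respondents described.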
"On April 16, 1996, the Assistant Secretary of Defense in charge of y2k testified before a Congressional committee. He offered this warning:
"The management aspects associated with the Year 2000 are a real concern. With our global economy and the vast electronic exchange of information among our systems and databases, the timing of coordinated changes in date formats is critical. Much dialogue will need to occur in order to prevent a 'fix' in one system from causing another system to 'crash.' If a system fails to properly process information, the result could be the corruption of other databases, extending perhaps to databases in other government agencies or countries. Again, inaction is simply unacceptable; coordinated action is imperative."
"Here is the assessment of Action 2000, which the British government has set up to warn businesses about y2k. The problem is not just software; faulty embedded chips/systems can transmit bad data:
"In the most serious situation, embedded systems can stop working entirely, sometimes shutting down equipment or making it unsafe or unreliable. Less obviously, they can produce false information, which can mislead other systems or human users."
"A bank that is taken out of the banking system for a month -- possibly a week -- will go bankrupt. But if it imports noncompliant data, it will go bankrupt. A banking system filled with banks that lock out each other is no longer a system.
There is no universally agreed-upon y2k compliance standard. There is also no sovereign authority possessing negative sanctions that can impose such a standard. Who can oversee the repairs, so that all of the participants in an interdependent system adopt a technical solution that is coherent with others in the system?
Corrupt data vs. no system: here is a major dilemma. Anyone who says that y2k is a solvable problem ought to be able to present a technically workable solution to this dilemma, as well as a politically acceptable way to persuade every organization on earth to adopt it and apply it in the time remaining, including all those that have started their repairs using conflicting standards and approaches.
Some people say that y2k is primarily a technical problem. Others say it is a primarily managerial problem. They are both wrong. It is primarily a systemic problem. To fix one component of a system is insufficient. Some agency in authority (none exists) must fix most of them. Those organizations whose computer systems are repaired must then avoid bankruptcy when those organizations whose systems are not compliant get locked out of the compliant system and go bankrupt.
If there is a solution to this dilemma, especially in banking, I do not see it."
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
From Federal Computer Week
"Interface Issues While most policy-makers have focused on creating awareness and ensuring compliance, most agencies have not addressed the problem of interfaces, or how communications with the outside world will affect compliant systems. What happens when systems are linked and defects are passed over networks?
"You may have a system in which you address the problem, but that system also talks to five or six or several hundred systems, and they may be passing you bad data which you think is good data, and if you process [that data, it] will give you a bad answer on your machine,'' said Terry Zagar, chief scientist for BDM Federal Inc.
To ensure compliance among systems, the National Institute of Standards and Technology has instituted a federal information processing standard change notice that strongly urges all agencies to use four-digit year elements. OMB has endorsed this standard for adoption by federal agencies. In addition, the Year 2000 Interagency Committee decided "early on'' that it would request a four-digit year on data exchanges.
One hang-up, however, is that the standards are voluntary, and while they recommend four-digit year elements, they do not specify a sequence for the day and month elements of an eight-digit date. Again, time is of the essence.
"I think what is important is the fact that we all are ready, and we all can recognize which century we're in when we try to do date calculations,'' SSA's Draper said. "But whether we put the date before the month or the month before the date I think is immaterial as long as we know what we're getting and can react to it.''
Most agencies should make an inventory of interfaces when they look at and analyze their source code and data, said Ken Heitkamp, technical director of the Air Force's Standard Systems Group, Hanscom Air Force Base, Mass.
"Certainly if systems aren't going to be brought into compliance at the same time, then the interface is going to have to be of concern,'' he said."
From Westergaard's site.
"With physical things, it is quite easy to determine where "The Battleship" ends and "The Aircraft Carrier" begins. Not so with software systems.
Software systems have been put into place over long periods of time. While they may have begun life several decades ago as freestanding systems with clearly defined edges, what happens over time is that subsystems are slapped on the periphery. Interfaces are added between previously freestanding systems. Boundaries become blurred.
After an amazingly short period of time, it becomes virtually impossible to know precisely where one system begins and another ends. What is even more difficult to understand is how processing in one system affects processing in another system.
It is entirely probable that due to incomplete historical knowledge, analysts will assume that system A just feeds system B. What is missed is that data that comes from A can pass untouched through B and then trigger something in system C. The connection between A and C can be easily obscured, unless you happen to know about them.
In the real world, of course, it's more likely that data from system A moves passively through systems B through Y and then triggers something in system Z. The chances of the owners of A and Z recognizing their co-dependency are very slim. Now do we understand why testing is expected to consume 50%+ of Y2K project efforts?"
Link: http://www.y2ktimebomb.com/Techcorner/DE/de9816.htm
From Ed Yardeni
"This is the likelihood of what will happen in the year 2000. The information may still actually be available, but it will be contaminated. It will be corrupt. I can't tell you how many chief information officers I've spoken to over the past year or so who had this as their number one concern that they will spend hundreds of millions of dollars fixing their own system.
And then come January 1, 2000, some systems 50 degrees of separation away will send some contaminated data that will ultimately wind up passing through their system, and information will be filed in the wrong place because of the lack of coordinating the fix of the year 2000 between the systems, and then you're still going to have one heck of a problem. . . .
What level of business could you do in your own profession if your computer systems were down for two hours? Hey, that's easy. That happens on a good day. Now you get on your cellular phone and make some phone calls. You know, maybe you write a personal note, you know, the old-fashioned way, with a pen. That's no big deal.
What if it's down for two days? Hey, well, all right. Well, you figure out something else to do. Two weeks? Two months? Ask yourself that question. What would happen to your personal GDP? And then add it up and put in the number that you think is the kind of disruption that we're going to have in terms of time. There's no way it's just going to be two hours. No way it's going to be just two days. It's going to be at least two weeks. Two months is kind of reasonable.
Could it be six months of major disruptions to our computer systems? Absolutely. Could it be an entire year? Absolutely. Even if we fix everything in our country, even if we all together get alarmed about it and get the message out and make this the top priority that it needs to be, and freeze all changes, stop, you know, changes of tax laws and stop merger and acquisition activity, just focus on only one thing, even if we did that, the rest of the world would still be in trouble.
Now, I say this recognizing it's a strong statement. But, you know, I think I say it with a certain amount of confidence.
Today, Asia is toast. In the year 2000, Asia will be burnt toast. In Asia, they have a year 1998 problem. Most companies have a 90-day business plan of how they're going to stay in business for the next 90 days. They are doing this month by month over in Asia. They have totally been distracted and resourced away from dealing with the year 2000 problem...."
Link: http://www.csis.org/html/y2ktran.html#yardeni
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Bob, I'm really not sure if you are just having a laugh, but this is a useful exercise anyway.
-- Andy (2000EOD@prodigy.net), December 19, 1998.
Has anybody got $50 so Bob or Karen can buy a vowel?
-- eatme (email@example.com), December 19, 1998.
Andy, dammit I asked KarenKurious to be objective, not you. LOL
-- MVI (firstname.lastname@example.org), December 19, 1998.
Saturday December 19, 3:19 AM
Cheques Cleared For Millennium Bug
The Association for Payment Clearing Systems, the body responsible for transferring money between banks and for clearing cheques, has declared that all its computer systems are Year 2000 compliant.
The Association - known as Apacs - plays a vital role in the UK's economic infrastructure, regularly handling 50 million payments a day which can be worth some £240 billion.
The system is now geared up to handle both sterling and euro payments in the Year 2000 when it is feared that some computers will crash as they will read the year 2000 as the year "00".
-- Bob Knight (email@example.com), December 19, 1998.
The computers worked fine, but the best of intentions caused an embarrassing year 2000 problem for a Fort Worth, Texas, bank.
Although a test to determine whether the bank's computers were Y2K ready went off without a hitch at Bank One, good old-fashioned human error turned the event into a public relations nightmare.
Soon after the test was completed, 2,013 Bank One customers received a surprise in the mail -- notices saying they'd bounced checks.
The notices were generated during the test, and all carried dates after Dec. 31, 1999. They were supposed to go in the trash can.
But a diligent worker inside Bank One wasn't in on the test. The notices were mailed out.
"This was awful, receiving a notice like that just before Christmas," an Arlington woman, whose 83-year-old mother received a letter, told the Fort Worth Star-Telegram.
The bank is sending out apologies to account holders who received the notices.
"The customers are being told that their accounts are still in order, that they won't be charged for being overdrawn," said Joe Bowles, a Bank One spokesman. "This was purely a mistake that shouldn't have happened."
-- Bob Knight (firstname.lastname@example.org), December 19, 1998.
That VISA amount that "just is" will be echoed back by the LOCAL CLEARING BANK and NOT YOUR BANK. At the end of the day, that clearing bank will have a chat with your bank and the two computers will exchange data, and the amount will be converted to dollars (if you are outside America). You need to hope to God that the two computers are using the same or compatible Y2K coping strategies, or your little purchase will be very different from what you thought. The data will look OK but it won't be.
-- Chuck a night driver (email@example.com), December 19, 1998.
Oh well guys we tried. Win some/lose some.
Or as Dagny Taggart mused in Atlas Shrugged, "It was strange, she thought, to obtain news by means of nothing but denials, as if existence had ceased, facts had vanished and only the frantic negatives uttered by officials and columnists gave any clue to the reality they were denying."
-- MVI (firstname.lastname@example.org), December 19, 1998.
Dang, that quote is too prophetic to waste here. Wish I could take it back for later use. Maybe I'll just recycle it.
-- MVI (email@example.com), December 20, 1998.