October 1 NEI Update: 95 of 103 Nuclear Power Plants Y2K Ready

greenspun.com : LUSENET : Electric Utilities and Y2K : One Thread

The full report is available at: http://www.nei.org/library/y2k_readinessreport.pdf
I am posting the first page below (please forgive any formatting problems).

_____________

Nuclear Utility Industry Year 2000 Readiness Status
Updated October 1, 1999
(Year 2000 Readiness Disclosure [1])

Each of the 103 commercial nuclear power reactors has reported the status of its Year 2000 readiness program, based on industry guidelines in Nuclear Utility Year 2000 Readiness. These programs apply to software, hardware and firmware in which failure due to a Y2K issue could interfere with performance of a safety function or impact continued safe operation of the nuclear facility.

To date, 95 reactors have completed all remediation and are Y2K ready. There are only ten open items at the eight remaining reactors. Remediation is in progress at the four reactors currently shut down for refueling outages. One reactor's refueling outage is planned for later this fall, and three reactors are remediating a site support system that does not impact reactor operations.

Over the past two years, the industry has tested approximately 200,000 items that could be susceptible to Y2K issues. Of these, approximately five percent, or 10,000 items, needed remediation. The industry has completed over 99 percent of the overall readiness program.

Each facility also prepared contingency plans for key Y2K rollover dates using guidance in Nuclear Utility Year 2000 Readiness Contingency Planning. These plans will reduce the impact of internal or external Y2K-induced failures. Both industry guidelines are publicly available at the Nuclear Energy Institute web site (http://www.nei.org).

The Nuclear Regulatory Commission (NRC), the federal government's nuclear safety regulator, has been directly involved in the industry's Y2K readiness activity for the past two years, including on-site program reviews. NRC audits and on-site reviews have confirmed that nuclear power plants will continue to generate electricity safely and reliably as we enter the year 2000. The agency also concurs that all safety systems will function if required to safely shut down a plant. Independent NRC and industry audits have concluded that Y2K readiness programs have been properly executed.

The nuclear industry's Y2K effort has been closely coordinated with the North American Electric Reliability Council (NERC), the organization managing the overall Y2K readiness effort of the electric industry. The current industry status leads to high confidence that nuclear generation plants will continue to reliably deliver 20 percent of the nation's electricity needs well into the next century.

[1] This Year 2000 readiness disclosure is made under the Year 2000 Information and Readiness Disclosure Act (Public Law 105-271).

-------------

Regards,

-- Anonymous, October 03, 1999

Answers

"We had the highest degree of confidence that this grand ship was unsinkable."

J. Bruce Ismay, President, White Star Line

(U.S. Senate Inquiry into the sinking of the Titanic)

-- Anonymous, October 03, 1999


Finally, a number we can trust!

"...approximately five percent or 10,000 itemsneeded remediation.The industry has completed over 99 percent of the overall readiness program."

One percent of the roughly 200,000 items tested would equal 2,000 items left to fix, which seems a bit different from "...only ten open items at the eight remaining reactors."
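(Just to spell the arithmetic out - a tiny Python sketch, taking the release's own numbers at face value:)

items_tested = 200000                 # "approximately 200,000 items" per the NEI release
one_percent_remaining = items_tested * 0.01
print(one_percent_remaining)          # 2000.0 -- versus the "ten open items" quoted above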

Who cares about these stupid press releases, the lawyers? There is no real information here. Just more of the same.

-- Anonymous, October 03, 1999


Rick, the Titanic as an analogy is quite apt.

Contrary to what the film showed -- a long gash being torn in the side of the hull by the iceberg -- what actually happened is that the impact caused some of the rivets in the ship's steel platework to pop out of their holes, opening up many small gaps in the platework. The gaps amounted to roughly 12 square feet of surface area, but were spread along 250 feet of hull length and five hull subdivisions.

A relatively small number of rivets were involved, and a relatively small amount of surface area was affected, when compared to the many hundreds of thousands of rivets in the hull and the many thousands of square feet of surface area below the waterline. But it was enough to take the ship down.

Now, there are theories that say that if the crew had been trained in damage control methods, using the same techniques employed in warships, they could have stemmed enough of the flow to keep the ship afloat until help arrived. However, it was inconceivable to most everyone involved in her design and operation that an event such as running into an iceberg could happen; or that even if such an event did happen, the ship would not be capable of surviving the damage.

-- Anonymous, October 04, 1999


Rick,

I second your comment on this matter, and that is where I am right now with the entire infrastructure. Let's not forget that what is said publicly and what is believed privately can be very different. The Challenger disaster proved that as well. We can only speculate on why some engineers and managers decided to launch and take the risk. In retrospect, there is no doubt that if they had known what was about to happen, they would have delayed it.

-- Anonymous, October 04, 1999


Or the latest Mars probe..

-- Anonymous, October 04, 1999


Sean,

I would love to comment on that Mars probe matter, but I cannot. I promised Rick that I wouldn't start any more postings on UFO's. ;-(

-- Anonymous, October 04, 1999


Now that we know from the analogy above that it was not a great big "GASH" from stem to stern, I guess you could say,

that the iceberg was "Just a bump in the Road"

........ so to speak.

-- Anonymous, October 04, 1999


Rick,

Someone should do (or maybe already has done) a serious study of the crowd psychology that leads to disasters that, in hindsight, seem as though they should have been obvious. Witness the Tulip Mania, the South Sea Bubble, various aspects of the French Revolution, lots of other examples, and now (probably) Y2K information system crashes. It would be funny if it weren't so serious. It is most maddening to attempt to argue against the "God Himself can't sink her" mentality. Can it be that "reality only exists in the mind" has taken over?

David

-- Anonymous, October 04, 1999


I posted this just as information regarding nuke status that I thought you guys/gals might appreciate, not to shout "we're almost done!" Just the same, it's amusing to see the reaction to "good news".

The Titanic analogy is interesting but extremely flawed. Y2K is more like many boats in a large ocean with a few big icebergs and lots and lots of ice......cubes. And as opposed to the Titanic, we have many lookouts keeping watch for Y2K bergs.....

Regards,

-- Anonymous, October 05, 1999


As long as you want to look at entities in isolation, the Titanic analogy does not seem to fit. But if you look at commerce and the economy in perspective, a few very important things going a little bit wrong can have dramatic effects, so the analogy fits better. As someone has said (Tom/Bonnie), we don't just need microscopes to look at Y2K, but also binoculars.

"Ice.... cubes"? Sheer assertion, based on preconceived notions, ignoring lots of evidence that unremediated systems will malfunction and that many businesses are doing little or no remediation.

But the Titanic analogy is particularly apropos when it comes to hubris. Fortunately, we have none of that here.

-- Anonymous, October 05, 1999



Helium has posted my Titanic comments on the TimeBomb forum, where they are generating similar discussion. I wish I had been able to think of the "bump in the road" correlation first, but I was riveted on rivets.

I should tell you folks that I am a strong advocate of nuclear power, believing that even with its risks, it is ultimately far less damaging to the environment -- actually or potentially -- than fossil fuel plants.

However, in saying that, I am also of the belief that nuclear will remain safe only if an informed public maintains eternal vigilance over the design, maintenance, and operation of nuclear plants. Y2K will supply the ultimate test of whether that belief is defensible.

-- Anonymous, October 05, 1999


Lane Core wrote, tongue firmly planted in cheek:

But the Titanic analogy is particularly apropos when it comes to hubris. Fortunately, we have none of that here.

That was my point, guv'ner.

-- Anonymous, October 05, 1999


Scott, I find myself in much agreement with your position on nuclear power. I live a dozen or so miles from a large coal plant and about the same distance from a nuclear plant, and I watch the stacks blow smoke from the coal plant and am thankful for the clean air at the nuclear plant. I would add that the vigilance must come from inside the nuclear industry as well.

Lane, "Ice.... cubes"? Sheer assertion, based on preconceived notions, ignoring lots of evidence that unremediated systems will malfunction and that many businesses are doing little or no remediation. "

In my analogy, the ice cubes were a symbolic representation of the minor nature of the majority of Y2K bugs; the icebergs represented the fewer, more serious ones. "Sheer assertion"? Nonsense. I based my representation on actual findings in my work on Y2K, and I have posted many industry links stating the same findings regarding embedded systems. What Y2K work have you performed? What evidence do you have that the prevalence and severity of Y2K bugs are more serious than what I or the electric utility industry have found? If you have evidence that Y2K bugs are largely serious as opposed to minor, please provide it - the industry needs your actual findings now.

"Based on preconceived notions." See the above; and I can't imagine how you would know whether any notion I have is preconceived without being me, which I am pretty darn glad you are not, lol!

"Lots of evidence that unremediated systems will malfunction." Please, Lane, give us your evidence - we need your help. I tell you what: just provide me with THREE manufacturer/model numbers of embedded systems that will malfunction. Three. That's not many out of LOTS; it should be an easy task, and it would demonstrate that you've done your homework (no helping, Rick, lol). Not links, Lane, but manufacturer and model numbers - just three little old systems that will malfunction and not be able to perform their functions.

Regards,

-- Anonymous, October 05, 1999


Yes, fossil fuel power plants are a nasty business. I heard on NPR the other day that coal-fired plants in the midwest put 50 tons of mercury into the atmosphere annually. Mercury as a poisonous substance has significantly greater toxicity to humans than does plutonium. But I will stop at that, before I engender an off-topic discussion of plutonium as "the world's most dangerous poison".

-- Anonymous, October 07, 1999

FactFinder,

Not only do embedded systems place utilities at risk; non-embedded systems do as well.

On the embedded systems side, please note that not all digital chips are created equal in form or function. If your plant has 20 systems and each system has 1,000 chips, you have 20,000 embedded chips to test. OK, let's say that 0.1 percent of the chips will malfunction on Y2K rollover (much less than some estimates). That is 1 chip in each system (1 in 1,000). Which chip is it? Is it the chip that integrates the reading of a sensor over time, or is it the chip that controls the timing of discrete events? If the former fails, you miss a reading that may or may not cause unforeseen events (opening or closing valves unexpectedly, etc.). If the latter fails, the system loses its heartbeat and dies.
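(For what it's worth, here is a back-of-the-envelope sketch of that arithmetic in Python. The 20 systems, 1,000 chips per system, and 0.1 percent rate are just the illustrative numbers above, and the independence assumption is mine; this is a sketch of scale, not a prediction.)

n_systems = 20
chips_per_system = 1000
p_fail = 0.001                           # the illustrative 0.1 percent failure rate

expected_bad_chips = n_systems * chips_per_system * p_fail
# Chance that any given system contains at least one failing chip,
# assuming (unrealistically) independent, equally likely failures:
p_system_affected = 1 - (1 - p_fail) ** chips_per_system

print(expected_bad_chips)                # 20.0 -> roughly one bad chip per system
print(round(p_system_affected, 2))       # ~0.63 under the independence assumption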

Do you see why estimating the number of failures is meaningless when separated from the context of the function of the chip? The message is that it must be completely and perfectly tested. As an example, I have shipped software composed of 100,000+ lines of code in which one extra period caused a mission-critical failure.

Please, even in the embedded chip realm, this isn't about engineers testing and poking around trying to see if they can make components fail! (BTW: I'm an engineer too.) Absent specific knowledge of the function of a chip in a particular system, you are helpless to certify a system as Y2K compliant. It's like trying to drive a car by Braille. You are going to miss some and hit others. Without chip-level system knowledge, no one can truly certify a system.

That's what makes this whole thing so blasted frustrating!

On the non-embedded side, anything goes. Non-embedded buggy software can make the power system unreliable and unstable. It can bankrupt the power company or cut off the supply of fuel. It can create hardships for employees that make them move or prevent them from coming to work. For the sake of intellectual honesty, please do not ignore this side of potential problems.

Regards,

-- Anonymous, October 07, 1999



David, when we fix an embedded system that has a Y2K bug, it's usually something like replacing the EPROM chip with a new one already programmed with the Y2K fix, or burning new firmware into the existing EPROM. We then test the DEVICE, not the "chip", for proper function and to ensure that it passes all date tests. I don't know of anyone who is testing just "chips" - what chips are you talking about, the PROM chips? The RTC? The microprocessor? I am a bit confused as to what you are saying here. Testing the individual chips that make up a system doesn't test the system as a whole, and while that is appropriate in a chip manufacturing environment, testing chips already installed in a system doesn't seem to make a lot of sense for a finished system... maybe I am misunderstanding what you are saying here.

As far as testing "completely", testing all functions and dates does this - this is "system" testing, not chip testing.

I agree that software is also important to the utility's function, and in my opinion it was always more of a potential threat (long term) to the company than embedded systems if not remediated. I tested a number of desktop applications, and in my opinion the best way to do this is to have, in addition to the documentation, an expert user involved to ensure all functions are identified so that they can be properly tested. For mainframe software, we were fortunate enough to have programmers around who were already supporting the various software programs.

I take exception to your "perfectly tested" statement, though, unless you mean properly tested for Y2K using the critical test dates. "Perfect" testing of complex systems and complex software was never the case in the past, and I doubt it will be in the future. Proper and adequate testing is the best we can hope for. The example you gave of an extra period in the code causing a mission-critical failure should have been detected by testing the functions, especially the mission-critical ones! This is a perfect example of why days spent looking at the code may be better spent running the program and testing the functions. The same goes for chips - the chip might be "perfect", but put it in the system and you HAVE to perform functional testing of the system as a whole; it's the only way to truly assure the system will function properly.
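(To make "testing all functions on the critical dates" concrete, here is a minimal sketch of that kind of functional test loop in Python. The device interface - set_clock and run_function_checks - is hypothetical and purely for illustration; actual remediation work follows the vendor's procedures and the site's approved test plan.)

from datetime import datetime

# Commonly cited Y2K critical test dates; a real test plan would use the
# site-approved list for the specific device.
CRITICAL_DATES = [
    datetime(1999, 9, 9),     # 9/9/99, sometimes used as an end-of-data flag
    datetime(2000, 1, 1),     # the rollover itself
    datetime(2000, 2, 29),    # 2000 is a leap year, unlike 1900
    datetime(2000, 12, 31),   # day 366
    datetime(2001, 1, 1),
]

def functional_date_test(device):
    """Exercise every documented device function at each critical date."""
    failures = []
    for test_date in CRITICAL_DATES:
        device.set_clock(test_date)                   # hypothetical device call
        for result in device.run_function_checks():   # hypothetical device call
            if not result.passed:
                failures.append((test_date, result))
    return failures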


Regards,

-- Anonymous, October 09, 1999


Hi FactFinder,

Thanks for your thoughtful response. I was a bit unclear on "systems" vs. "chips". My assumptions are that chips make up systems; systems are tested, but it is the individual chips that fail. If a failure is experienced, which chip failed is significant. That is the point that I was trying to get across before.

A chip can experience a software failure (corrected by updating the EEPROM) or a firmware failure (correctable only by replacing the chip). Maybe a large number of systems can be fixed with an EEPROM update; others may require a new chip or a new system. Either way, testing can be very difficult. I'm sure that you have been through this type of scenario, but for sake of discussion please consider the following.

For example, consider a date- or time-sensitive system (not a chip) that has 10 inputs - 8 digital (on/off) and 2 analog. If you know that each chip in the system does not have any firmware date-processing issues *and* you have the microcode that integrates the chips into a system, then you can examine the microcode to prove or disprove possible date-processing error conditions. The other choice is to test each critical date with every possible input condition. If you are testing 5 dates, the 8 digital inputs alone give 2^8 = 256 input combinations per date, or 1,280 tests, and each test must also exercise the full range of both analog inputs. The only way to reduce the number of tests is to know about the design of the system.
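(A quick sketch of that exhaustive-test arithmetic in Python, using the example's 8 digital inputs and 5 critical dates; the two analog inputs would still have to be swept across their ranges within each combination:)

from itertools import product

digital_inputs = 8
critical_dates = 5

combos_per_date = 2 ** digital_inputs        # 256 on/off patterns
total_tests = combos_per_date * critical_dates
print(total_tests)                            # 1280

# The patterns themselves, if a test rig were to drive them:
patterns = list(product([0, 1], repeat=digital_inputs))
assert len(patterns) == combos_per_date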

I have not seen data to support the conclusion that most devices are thoroughly tested as in this example.

"I agree that software is also important to the utility's function, and in my opinion it was always more of a potential threat (long term) to the company than embedded systems if not remediated."

I could not agree more.

For mainframe software, we were fortunate enough to have programmers around who were already supporting the various software programs.

I wish I could explain in a simple paragraph how difficult this really is. I don't even do this kind of work any more, but complex systems that took hundreds to thousands of man-years to construct are not easy to fix.

"I take exception to your 'perfectly tested' statement, though, unless you mean properly tested for Y2K using the critical test dates. 'Perfect' testing of complex systems and complex software was never the case in the past, and I doubt it will be in the future."

Any software need only be tested as well as it is expected to work. The average corporate "mission critical" business system costs about $3 to $7 per line of code. NASA spends about $1,000 per line on shuttle code. Medical devices fall somewhere in between. You are correct, even the NASA code isn't perfect, but the more perfect it must be, the more time and money it takes to reach completion. The difference in cost is driven by the testing. The question is, what kind of failure can the system tolerate?

(The hilarious ... tongue in cheek ... thing about the late testing of software-based systems is that at least one-third of the time required for full remediation is in testing. Many times, problems are found in testing that send the engineers back to the drawing board for another solution, which must then start the testing cycle all over again.)

My "period" failure should have been caught ... you are exactly right. However that project was understaffed, behind schedule, and customers were screaming for software (sound familar?). Human error.

"This is a perfect example of why days spent looking at the code may be better spent running the program and testing the functions. The same goes for chips - the chip might be 'perfect', but put it in the system and you HAVE to perform functional testing of the system as a whole; it's the only way to truly assure the system will function properly."

It is much more difficult to get correctly working systems by "running the program" or "using the system". I refer you to the example above and to the volumes of industry literature on testing software-controlled devices.

I would like to think that we have tested the bugs out of our systems. I really would. However, I know too much about the failures of "finished" software systems to have any degree of confidence that this is the case.

Regards,

-- Anonymous, October 11, 1999


P.S.

Successful software projects do become stable (not bug-free) products after a period of time. During that period, customers find problems that are reported and fixed. The length of the period varies with device complexity. As long as the device isn't too buggy (as defined in its context), it will "live" long enough to have its bugs discovered and fixed.

The problem is that about 35% or so of the projects are never completed. Somewhere between 40% and 50% of software projects and/or devices are not "finished" by the due date. The 15% to 25% of the projects that are finished typically ship with 15% of the bugs still in the product.

If we had another year, maybe we could find the 15% of the bugs in the completed projects, complete most of the other 40% to 50% of the projects, and make contingency plans for the remaining 35% of unfinished projects. Unfortunately, as a whole, we are at least a year behind at this point.

Regards,

-- Anonymous, October 11, 1999

