Oh, *now* I get it


After much thoughtful cogitation on this matter I finally get it. Utility companies can claim power will be up on 01/01/00 because it might. However, the next week or the next month or the next summer is anybody's guess. They are blathering on about 01/01/00 because that is the most obvious date.

Well I want to know that power will have no interruptions DUE TO Y2k ever. Does anyone anywhere want to pretend they can guarantee that?

-- Anonymous, May 12, 1999

Answers

No you don't get it at all. What you are saying is that if your power goes out for a day due to a storm or a transformer failure that is OK but if your power goes out for 10 minutes due to Y2K it's not? Or if a transformer blows up and your power goes out you will be "understanding" but if the power goes out because of Y2K you won't be? If your power goes out in 2005 will you blame that on Y2K? Maybe a delayed reaction?

BC Hydro has put out a statement saying they do not expect any outages due to Y2K. That is probably the best you can do.

No company will guarantee you power tomorrow, next week, next year, or whenever. Why do you demand a greater amount of security than you normally would?

I think the question shows you don't get it at all.

-- Anonymous, May 12, 1999


What is at issue here with Y2K, as everyone knows, is the risk of simultaneous failures: enough of them to cause instability of power availability over large areas for extended periods of time.

The secondary issue is degradation due to chronic problems in distribution, on the plant floor, or with unreliable vendor support.

So, in simplistic terms, we have two hurdles to deal with. Making sure power "can" stay on, and then working to ensure it "will" stay on. Obviously, the latter issue is moot without successfully addressing the first issue.

-- Anonymous, May 12, 1999


Actually, no.

First, there are over 200 plants now running with their clocks set to 2000+. Also, electricity is shipped across time zones. Add in the fact that changes to and from DST occur twice a year, and you can see that having everything set to exactly the same time isn't the greatest need.

As for your second comment: I am not sure what you mean by distribution, whether you mean it in the sense of electrical distribution or distribution of parts. In the first sense it's meaningless. In the second, the fallacy is to assume that there are things breaking down every day or hour that constantly have to be replaced. Just ain't so. Ditto with vendor support. Most (though I'll admit not all) companies have spare parts in stock.

What came out of the NERC report is that just about all of the items found that might fail due to the date change have to do with logging or reporting functions. Important, but not vital. Under normal circumstances it isn't even something you call out a tech to fix on overtime. You wait until they can get around to it.

-- Anonymous, May 12, 1999


The Engineer,

If "just about all of the items found that MIGHT FAIL due to the date change have to do with logging or reporting functions" why aren't the utilities finished with remediation and testing. Why are there even posts about mission critical issues. These should just be reported as y2k ready due to the fact they are not impacted at all by the date change.

I'm sure I'm missing something here. So, I am truly interested in trying to understand why we aren't done with the entire issue.

Thanks,

-- Anonymous, May 12, 1999


Posts from whom? The companies, people in the companies, or people wanting to sell Y2K books, freeze-dried food, guns & ammo, etc.? I suspect the latter.

It basically is over and done with at the (electricity) supply level. Some fixes won't be implemented until the standard spring/summer shutdown and maintenance cycle, but it's really over except for the echo of the shouting. Corporations also have business functions, and depending on the company these are either done or still being done. And to let you in on a dirty little secret: Y2K was and is a great way to get new stuff they wouldn't have bought otherwise. If you said you wanted a new computer the answer was no, but if you said you needed it for Y2K, well, that's a different story. Also, a lot of it now is just doing the paperwork to keep people happy who like to see reports and all the little squares with Xs in them.

This isn't just beating a dead horse, it's beating the ground it's buried in.

-- Anonymous, May 12, 1999



Engineer:

Could you enlighten us as to how you draw your conclusions about the utility industry as a whole? At least one other industry insider has shared similar optimism based on what he says were conversations with numerous colleagues at other electric companies. It would be helpful to know the extent of your personal knowledge concerning the progress of other Y2K projects within the industry.

-- Anonymous, May 12, 1999


As a small business in BC I've made it my business to assess BC Hydro's Y2K status. Based on the information below, I have concluded that full self-provision of my power needs is necessary.

From a 1998 BC Hydro letter: "BC Hydro is well under way in addressing Year 2000 computer-related issues. We began investigating the effect of Year 2000 on our systems three years ago. By the end of 1998 we expect to have examined all of our computer systems and to have completed the remedial work needed. We've recognized the importance of doing this early to produce the best possible solutions and to help us secure the human and computing resources required for such a large task."

From the BC Hydro web site, late 1998:

When will BC Hydro complete all its testing? Testing is absolutely pivotal to our Year 2000 preparations. Testing now makes up about 50 per cent of our Year 2000 activities, and the testing of critical devices will continue until June 1999, when we expect it to be completed. Testing of lower priority devices and systems will continue into 2000.

BC Hydro explaining testing on its web site, 1999:

* program code is not always accessible by BC Hydro users, thus making it difficult, if not impossible, to make changes in programming logic;
* program logic is not fully documented, thus requiring BC Hydro owners to regard that component as a "black box";
* inputs/outputs for the system or device are often digital or analog signals and cannot be visually inspected or interpreted, while inputs and outputs from business systems can be inspected;
* on-line systems testing may not be possible if the testing is likely to impact the production environment in any significant way (e.g. SCADA for the entire electricity grid);
* a few systems may not allow the rollback of the clock after advancing the date in tests;
* operation/control systems sometimes behave uniquely in each application, thus requiring testing of each physical occurrence

-- Anonymous, May 12, 1999


Linda,

>If "just about all of the items found that MIGHT FAIL due to the >date change have to do with logging or reporting functions" why >aren't the utilities finished with remediation and testing. Why are >there even posts about mission critical issues.

This is about the best question I've seen you ask. You have focused like a laser beam on a central point of this issue. Utilities are attempting to prove a negative. We hear from our management and from you: "Tell me there will be no problem." We suspect there is none, but we test anyway. We find none, but our conservative nature tells us to plan (contingency) for failures - even though we have no test data that supports such a need (plus our lawyers apparently will never allow that). We then have test drills to prove we can communicate even if the telecom industry fails - even though we have no data to support this thesis (in this regard, we are like you - preparing even in the absence of data proving this is necessary). We will have more drills. We are sharing data with other utilities via EPRI to confirm others have independently measured the same results. We will probably continue to worry - because it is impossible to prove a negative.

I will tell you what I tell my boss, Director, Manager and VP. I have tested and found NO problem in a T&D protection device that would cause a problem on Y2K and beyond. I believe that. I am staking my family's economic future on it. I am sure of it. Will I stop reviewing my test results and methods in my sleep at night - probably not. Not because I'm not sure of my results, but because I'm looking for even more stones to roll over and look under.

Your question is also a source of frustration for me. My peers at other utilities are in varying stages of testing the same families of devices that I tested. This is valuable because it corroborates and confirms the results of those that are completed. Does this mean that those who aren't done yet are Not Compliant? NO! It just means that they aren't finished corroborating the results of the completed utilities. (EMS and SCADA master applications are exceptions - to the extent that they can be user programmed and are more unique than protective relays and control devices)

Even after I have arrived early for my flight, confirmed my seat, and sat down to read a magazine - I still obsess about the location of my airline ticket and check every pocket of my carry-on bags every 5 minutes until I board the plane.

-- Anonymous, May 12, 1999


The Engineer,

If it is over, then someone should inform NERC. Everyone should be reporting "y2k ready". The utility reports should reflect the fact that it is over and there were no problems.

There seems to be something holding up the utilities from getting totally "y2k ready" reports out to NERC. That's all I'm saying.

-- Anonymous, May 12, 1999


cl--

I had already sent that question to Engineer before you posted your answer.

Thanks for that answer. You are saying that what the utilities are doing now is not FIXING PROBLEMS, but VERIFYING THERE ARE NO PROBLEMS. If I'm reading you right.

I think most people have the idea the utilities are busy right now FIXING areas that will be problematic in 2000. This is a different way of viewing the utilities' activities as reported to NERC.

Thanks,

-- Anonymous, May 12, 1999



Actually, before everyone decides I'm a total idiot, let me explain that I DO realize that companies go through inventory, assessment, remediation and testing... not necessarily in that order. I had the idea that the electric utilities were bogged down in the remediation phase. Do we all agree that they are bogged down in the testing phase? I know these are simplistic questions, but I can't get by here every day and I'm sure I'm behind the curve. Just a civvie who is trying to make fairly enlightened decisions.

Thanks AGAIN

-- Anonymous, May 12, 1999


engineer, i have two questions.

> there are over 200 plants now running with their clocks set to > 2000+.

how many plants are there in the united states? what percentage is 200 of the whole?

-- Anonymous, May 12, 1999


It's all over but the shouting? I think you still have cause for concern about reliability. First of all, self reporting and self testing are just that, an inherently biased source of information. While I don't doubt that what you have stated to your VP and family are true in your case, I am not so certain about the rest of your industry. I list as an example, the recent report compiled by the California Public Utilities Commission which details the problems with a single (albeit large) California Utility (PG&E).

The report is an investigation into a Dec '98 outage, which resulted from a rather simple problem being compounded by human error into a large outage that left half a million people in the Bay Area without power for almost 8 hours. The incident itself is an interesting event for all to study, as it is a good example of what I believe we will face on Jan 1. Hopefully we will not all respond or conduct ourselves in the same fashion as PG&E.

You can download the report here: ftp://ftp.cpuc.ca.gov/gopher-data/CSD/PII_FULL_REPORT.pdf

I've taken a few sections for you to peruse here. But read the whole report and tell me you're not concerned about the Western Interconnection's stability.
___________________________________________________________________
This investigation was initiated by the California Public Utilities Commission's (CPUC's) Order Instituting Investigation (OII), No. 98-12-013, issued on December 17, 1998. The purpose of the investigation was to understand the underlying causes of the December 8, 1998 San Francisco outage and to recommend cost-effective actions to prevent future recurrences, i.e., prolonged San Francisco outages. To avoid sub-optimization of future improvements, the team was asked by the Consumer Services Division (CSD) of the CPUC to examine all potential underlying causes involved in the outage, including human errors, equipment failures, process failures, and management system deficiencies.

The San Francisco outage occurred on December 8, 1998. It is estimated that more than one million people (or 456,000 customer accounts) were affected by the outage. The outage started at 8:15 a.m. and ended at 3:54 p.m. with a total duration of seven hours and thirty-nine minutes. About 85% of the customers had their power restored by 2:05 p.m.

Based on its investigation into PG&E's processes and management system, the investigation team found that the human errors resulting in the failure to remove grounds that initiated the December 8, 1998 outage are symptoms of several underlying causes (or OFIs). These underlying causes are: (1) inadequate management control of human performance in the field; (2) error-prone procedure (and switching log) preparation and development process; and, (3) vulnerability in the existing electrical protection systems, which make them less capable of preserving San Francisco's critical load in the event of faults with large voltage fluctuations.

The OFIs that can be corrected to reduce the probability of initiating electrical faults due to inadequate ground removal are:
* Inadequate supervisory skills to command and control field work (OFI-1)
* Error prone work culture for the involved personnel that tends to bypass procedures and work practice requirements (OFI-2)
* Lack of a positive means to track and count grounds installed and removed (OFI-3)
* Inadequate post-work testing procedure that allowed the electrical bus to return to service before finding unremoved grounds (OFI-4)
* Inadequate attention to critical operation and critical equipment (OFI-5)

CONSEQUENCE CONTAINMENT PHASE

This phase covers the period between the fault initiation at the San Mateo substation to the time when the San Francisco electrical load was tripped off. After the initiation of a fault, there are local and distant protection systems designed to contain the consequences of the electrical fault. If these local and distant protection systems fail, the San Francisco Operating Criteria (SFOC) is designed, as a last defense, to isolate the San Francisco electrical system and preserve its critical load. The isolation ensures that the critical load be served by Potrero and Hunters Point power plants, located within the San Francisco area, and that restoration can proceed expeditiously. As all of the investigations have demonstrated, the fault could have been confined and localized at SMS if the differential relay for Bus 2 Section D was cut in. PG&E reported that the main reason related to the failure to cut in the differential relay was that the operator at the San Mateo switching center missed the cut-in instructions in the switching log. Contrary to PG&E's report, the investigation team believes that the underlying cause of the failure to cut in the differential current relay was error proneness of the switching log preparation process, rather than operator errors in failing to follow the switching log that returned the Bus 2 Section D to service. The investigation team believes that the cut-in step noted in the switching log was not there during the switching operation and was falsified into the switching log after the switching error was made and after the initiation of the electrical faults. Therefore, the underlying cause for the failure to cut in the differential relay was related to inadequate switching log preparation, not an operator error.

The OFIs that can be corrected to contain consequences of a fault are:
* Error prone switching log preparation process (OFI-6)
* Error prone work culture that is not self critical or forthcoming with problems for the involved personnel (OFI-7)
* Inadequate protection system for local clearing (OFI-8)
* The protection system for distant clearing is not designed for fast clearing of bus faults (OFI-9)
* Current San Francisco Operating Criteria (SFOC) not designed to preserve critical load against disturbance of large voltage fluctuations or loss of generation after islanding (OFI-10)
____________________________________________________________

So there you have it. Tell me I should put my faith in a company that falsified a log to cover up gross errors. Should you? I encourage you to read the entire report, as it is quite thorough in its approach to the problems. It is my profound hope that there are not that many of these types of companies in the grid, but somehow, I doubt it. I doubt that the industry will find and correct every Y2K error possible. From a common sense standpoint, this is simply unbelievable and impractical. Hopefully, you and your associates will help to ensure that your industry brethren are not falsifying Y2K test results as well. Good luck. You're going to need it.

-- Anonymous, May 12, 1999


**Marianne, just in case Engineer is busy, here's the answer to those two questions:

-- Electric Utilities as of 1/1998 (Number of Units): 10,421

http://www.eia.doe.gov/cneaf/electricity/ipp/t1p01.txt

My calculator indicates that 200 is 1.9% of the total.
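For anyone who wants to check that arithmetic, here is a trivial sketch using only the two figures quoted above (the 200 plants and the 10,421-unit EIA count):

```python
# Rough check of the percentage quoted above: 200 plants running with
# clocks rolled forward, out of 10,421 electric utility units (EIA, 1/1998).
plants_rolled_forward = 200
total_units = 10_421

print(f"{plants_rolled_forward / total_units:.1%}")  # prints 1.9%
```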

**LindaO, either a lot of utilities actually have been and are replacing components and *fixing* problems or they're pulling the wool over their investors' eyes. Take Wisconsin Power & Light's 10K filing to the SEC, for instance:

" Inventoried devices and systems have been assessed and prioritized into three categories based on the relative critical nature of their business function: safety-related; critical-business-continuity-related; and non-critical."

"Remediation and Testing IEC's approach to remediation is to repair, replace or retire the affected devices and systems. Remediation and testing of safety-related and critical-business-continuity-related devices and systems is underway in all business units. In some cases IEC's ability to meet its target date for remediation is dependent upon the timely provision of necessary upgrades and modifications by its software vendors. As of December 31, 1998, IEC was expecting upgrades from 48 embedded system vendors and 14 information technology vendors. Should these upgrades be delayed it would impact IEC's ability to meet its target date. At this time, IEC does not expect that these upgrades will be delayed. As part of the testing process, client/server applications are being tested in an isolated test lab on Year 2000 compliant hardware and software. Also, IEC intends to implement a process to protect the integrity of the data once it is year 2000 compliant."

Since they were waiting on forty-eight embedded systems upgrades and fourteen software upgrades for their priority classifications, it certainly looks to me like there were some problems found.

-- Anonymous, May 12, 1999


Who is this John Hamre guy from the DOD? Why is he sending out memos on military support for Y2K civilian problems if everything's OK? Has he consulted The Engineer? I think once he reads that Engineer says everything's 2K-OK he'll back down on this silly military protocol for Y2K support. Man, those government guys are just waaaayyy paranoid. PS - I hear Jack Kevorkian has set up a Y2K survivor camp located under high tension wires. (sorry, it's been a long day and I'm

http://www2.army.mil/army-y2k/depsecdef_dod_civil_support.htm

-- Anonymous, May 12, 1999



CL,

I have always read (and continue to read) your posts with interest. One of the reasons that I am fascinated is that your "polly" view of the electrical grid remaining in service is based almost completely on your T&D testing results.

Walk with me a minute here ... and point out my logic errors, please.

What we are trying to prevent is unanticipated transients on the electrical grid. Those transients may be caused by generation, transmission, and distribution failures. They may also be caused by Acts of God, customer load variations, communication failures, and transients from interconnects. If I understand you correctly, you are attempting to argue "all is well" because T&D devices will not cause extreme transient events that lead to system failure. Ok fine. I'm not an expert in that area - I'll defer to you. However, to categorically state that "all is well" without addressing issues in generation (other than good people are working on it), system dispatching systems, communication systems, customer loads, and finally talking to God to see what he has up his sleeve, goes contrary to logic and deductive reasoning.

What would happen to your compliant T&D system if large industrial loads drop off due to Y2K problems in their systems? What happens if the power gets a little dirty due to small problems in the system and a large customer sheds load to prevent damage to equipment? I roughly know the answer - it depends on the dynamics of the system at the time. Electrical systems act a lot like a spring under tension and respond to changes in much the same way. Lots of issues would come into play. What is the power factor? How are the circuits loaded? How fast is the load shed? When the load is shed, is the system at steady state or is it already in flux? Do large inductive loads attempt to start up all at once?

What is my point? Bottom line. Allow me to hold to a rationally deduced viewpoint that says (1) CL is not lying, but nonetheless, (2) Electrical systems may fail due to Y2K consequences that occur both inside and outside power company control. (3) Even if Power companies have zero (0) Y2K failures, a statistically significant probability exists that local, regional, and maybe even national outages will occur due to loads that are attached to the electrical grid.

based on testing done on T&D devices. CL, these are only a small part of the total picture. Clearly

A system with small margins can be tripped much easie personally know of two separate instances in a 5000 MW regional system where the system almost completely dropped off line due to

-- Anonymous, May 13, 1999


Duh ... proof read first!!! Sorry about the draft notes ...

-- Anonymous, May 13, 1999

DSmith, we, the foolishly optimistic ones, are constantly telling you, the woefully pessimistic ones (at least I will have labelled one group right!), that it is impossible to guarantee power because of the points you made, and CL is simply saying that Y2K is OK. And you are correct that a largish power fluctuation will cause a grid to go offline. I work at a power company in Australia, and we have a large people-communications group contacting a large % of bigger companies and asking them what they 'plan' to do over Y2K regarding their power usage; I would hope and expect your local power company will do the same. Ciao, Graham

-- Anonymous, May 13, 1999

Is it just me, or does "The Engineer" sound an awful lot like "Fact Finder"?

Whatever happened to "Fact"Finder anyway?

--aj

-- Anonymous, May 13, 1999


Jim,

> It's all over but the shouting? I think you still have cause for concern about reliability. First of all, self reporting and self testing are just that, an inherently biased source of information.

Perhaps, but independent verification testing by utilities across the country getting the same results, plus NRC audits, all tend to mitigate a perceived bias. Some utilities (and I think the generation folks at my utility) are participating in voluntary peer reviews where testers from one utility audit another's program. If you truly don't doubt my truthfulness, then you must also trust the other utilities that have tested the same devices with similar results. Or, you must accuse me of lying about my tests.

As far as the San Francisco outage is concerned, it was triggered by human error and compounded by equipment failures. When this was discussed in this forum at the time, immediate speculation was certain that Y2K caused it. Flawed conclusion. So now you want to conclude that the SF utility is incapable of assessing and fixing Y2K. This conclusion may be accurate, or it may be flawed also. Is there a consistent trend of human-failure power outages there? A large number of employee deaths? (Not being a wise-a_ _, this is really a consequence of incompetence in this industry.) Are there a lot of reports of generating units experiencing unscheduled outages? If you can see these trends, then perhaps your conclusion is correct. You cannot hide incompetence in this industry very long. (That is why I think that no utility would lie about Y2K status.) Ask these questions about the general performance of your utility and apply them to your logical reasoning about the competence of Y2K prep. (If your utility is that incompetent, someone will be along soon to buy them up and correct things - bad performance = bad business.)

-- Anonymous, May 13, 1999


DSmith, your posts are also interesting.

>One of the reasons that I am fascinated is that your "polly" view of the electrical grid remaining in service is based almost completely on your T&D testing results.

What is a polly? My view is based on my test results, conversations with vendor design engineers, test results at other utilities (T&D AND generation), AND the test results of my company peers in generation. I am simply trying to delineate between my informed opinions and my quantifiable conclusions and experiences. I am informed on my utility's generation testing results; I can use them to color my view. I judge my peers' competence and can draw conclusions on the veracity of their claims. I see positive test results in all areas, some I performed, others I did not. What information do others (including national politicians) use to claim certain knowledge of outages? Many challenges by FactFinder, Engineer, Dan, Guru and others to bring evidence of embedded failures have gone unmet. At the same time they bring real data and report it to NERC/EPRI showing devices either passed, or had cosmetic problems, or were remediated (or will be at the next outage).

>What we are trying to prevent is unanticipated transients on the electrical grid. Those transients may be caused by generation, transmission, and distribution failures.

Gee, that sounds like what I was doing BEFORE Y2K!

> They may also be caused by Acts of God.

We will not be able to prevent Acts of God at the Y2K rollover. I doubt anyone will.

> If I understand you correctly, you are attempting to argue "all is well" because T&D devices will not cause extreme transient events that lead to system failure.

That is correct, and my observations and conversations with other utils lead me to conclude this is true grid-wide.

> However, to categorically state that "all is well" without addressing issues in generation (other than good people are working on it), system dispatching systems, communication systems, customer loads, and finally talking to God to see what he has up his sleeve, goes contrary to logic and deductive reasoning.

I have addressed generation, but only by observation, not a test I performed myself. The same applies to hearing other utils speak of their generation. I do think that all DCS and PLC systems at generating stations that have "home-grown" code need to be tested and perhaps remediated. NERC reports lead me to believe this will be done. System dispatch systems are partially my area. RTUs are OK. SCADA and EMS fall in the same category as DCS - they'd better test them all. This is being done (we replaced/upgraded rather than test). BUT the SCADA/EMS systems will not trip, the external communications will not cause a trip, and we CAN operate (and have operated) without them even if they failed (April drill).

> What would happen to your compliant T&D system if large industrial loads drop off due to Y2K problems in their systems? What happens if the power gets a little dirty due to small problems in the system and a large customer sheds load to prevent damage to equipment?

You say you know the answer to this - then you know this happens every day. Circuits trip, plants have large processes trip off, all the time. We can regulate unit outputs, switch inductors/cap banks, spin pumped storage units as motors, etc. You apparently have the education to know this. Some system margins are tighter than others, but remember, this is a weekend. Not many assembly-line workers will be on the job; failures will become apparent when the normal load upswing on Monday doesn't happen. That is not so transient now, is it?

> Allow me to hold to a rationally deduced viewpoint that says (1) CL is not lying

Thanks. Are you saying I'm an honest idiot? (grin)

> (2) Electrical systems may fail due to Y2K consequences that occur both inside and outside power company control.

Interesting viewpoint. QUANTIFY IT! Sorry for the frustration oozing out; let me rephrase that in a more charitable tone: can you please be more specific and include the failed devices that will be the catalyst of these consequences?

> (3) Even if Power companies have zero (0) Y2K failures, a statistically significant probability exists that local, regional, and maybe even national outages will occur due to loads that are attached to the electrical grid.

Define "statistically significant" and then cite your probability figures (please include how you derived the probability). Also refer to comment to (2) above - SHOW ME THE DEVICE FAILURE!

Thanks for your thoughtful post, we disagree but in a constructive way. Do/did you work for a utility or are you an engineering grad?



-- Anonymous, May 13, 1999


A lot of questions. I won't be able to answer them all (my fingers would be bloody from all of the typing), and it looks like CL has answered most of them, but I'll try to add some information.

First of all, the errors in DSmith's logic:

No, we don't try to prevent unanticipated transients in the grid. First, you can't prevent transients. They happen when breakers open due to a fault or just normal switching. They also occur due to lightning strikes, which you can shield for to a certain extent but never truly prevent. What you try to prevent are transients that grow out of control or exceed your system's limits. That's where the relays that CL was writing about come into play. Usually when you read about a blackout not caused by weather conditions, relay failure (or misoperation) is a contributing factor. Transients occur all the time on the system. Before the advent of so much digital equipment in the home, you just never noticed them most of the time. When digital clocks and VCRs appeared, people noticed that small disturbances would cause the dreaded blinking 12:00. You really didn't need surge suppressors at home until people started to use home computers. You've all probably had TVs in your homes for a long time. How many of them had surge suppressors on them? Any?
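To make the relay point concrete for non-EE readers, here is a rough sketch of the inverse-time overcurrent logic a protective relay implements: the heavier the current relative to the pickup setting, the faster the trip. The constants are the commonly published IEEE "moderately inverse" curve values, used purely for illustration, not anyone's actual settings.

```python
# Toy sketch of an inverse-time overcurrent relay characteristic: the larger
# the current relative to the pickup setting, the faster the trip. Constants
# follow the commonly published IEEE "moderately inverse" curve and are
# illustrative only, not any utility's real settings.
from typing import Optional

def trip_time(current_a: float, pickup_a: float, time_dial: float = 1.0) -> Optional[float]:
    """Return trip time in seconds, or None if the current is below pickup."""
    m = current_a / pickup_a
    if m <= 1.0:
        return None  # below pickup: the relay does not operate
    return time_dial * (0.0515 / (m ** 0.02 - 1.0) + 0.1140)

# A mild overload at 1.5x pickup is tolerated for several seconds, while a
# heavy fault at 10x pickup is cleared in roughly a second (time dial = 1).
for multiple in (1.5, 3.0, 10.0):
    print(f"{multiple:>4}x pickup -> {trip_time(600 * multiple, pickup_a=600):.2f} s")
```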

The power isn't anywhere near as clean as you seem to think. It's good enough, and you don't notice any harmonics in most household appliances unless they get really bad. Actually, half the problem with harmonics isn't the utility but industrial users injecting them back into the system.

DSmith wrote:

What would happen to your compliant T&D system if large industrial loads drop off due to Y2K problems in their systems? What happens if the power gets a little dirty due to small problems in the system and a large customer sheds load to prevent damage to equipment? I roughly know the answer - it depends on the dynamics of the system at the time. Electrical systems act a lot like a spring under tension and respond to changes in much the same way. Lots of issues would come into play. What is the power factor? How are the circuits loaded? How fast is the load shed? When the load is shed, is the system at steady state or is it already in flux? Do large inductive loads attempt to start up all at once?

First, if large industrial loads are dropping off the system, that is load shedding, so you don't have to worry about load shedding. You have the opposite problem: what you would have to worry about is reducing the generation fast enough. You have to worry about load shedding when you drop generation or a significant number of paths from the generation.

I should state here that there are differences in the eastern and western grid. The eastern grid is much stiffer, lines are shorter, and the generation is closer to the loads. The western grid has more long (100-200 miles or more) lines and the generation is more removed from the load. So the problems are different in each system.

Customers don't shed load to prevent damage to their equipment or because of dirty power in the sense that it is a conscious decision; preset relays would automatically disconnect them from the system if certain parameters are exceeded. If you are shedding load, the system would not be in a steady state, by definition. I'm not sure what your last question even means, since any plant that became disconnected from the system would have to have its own start-up procedure, and if the breaker that tied them to the system tripped, it would be operated by the utility and not the plant.

DSmith wrote:

What is my point? Bottom line. Allow me to hold to a rationally deduced viewpoint that says (1) CL is not lying, but nonetheless, (2) Electrical systems may fail due to Y2K consequences that occur both inside and outside power company control. (3) Even if Power companies have zero (0) Y2K failures, a statistically significant probability exists that local, regional, and maybe even national outages will occur due to loads that are attached to the electrical grid.

(2) And they may fail due to a giant meteor striking the earth. This is a rather meaningless catch-all phrase. They may not fail, too. The question is what is the most likely event backed up by the evidence at hand, and right now that points to not failing rather than failing. Fewer things showed up than were expected, and most had nothing to do with the actual generation and transmission of power. (3) Huh? Can you back this up with anything? What is the probability and how did you (or whoever) calculate it? Without anything to back it up this is just meaningless. Why would a national outage occur due to loads attached to the grid? Because it sounds so cool and neat? And exactly what kind of loads do you mean?

John Smith should check out this URL:

http://www.insidetheweb.com/messageboard/mbs.cgi?acct=mb237006&MyNum=926620603&P=No&TL=926620603

Which brings up the interesting question: if nothing much happens, what then? Let's face it, there is no downside to being a doom-and-gloomer over Y2K. No politician will be voted out of office. No media expert will be declared persona non grata. No one will really lose any credibility, and they will be able to point out any little thing as proving they are right. And if it wasn't due to Y2K, it's obviously a cover-up. So it makes a lot of sense to be on the negative side. You never really have to prove anything, and you suffer no negative consequences when you are wrong. Nice setup, if I do say so myself.

As for what Bonnie wrote:

True, the percentage is low compared to the total, but it proves that it can be done and without everything falling apart. The equipment isn't all specialized, and you don't have to test every single individual item. If a certain type of relay passes for Y2K you don't have to test every single relay of that same type in your system. Ditto most other equipment. There is nothing in what you posted that went to the heart of the matter. Would any equipment they are fixing or repairing have caused the lights to blink? We have found things wrong, but it's in the recording (time tagging) area. Should it be fixed? Yes. Is it vital? No.

Jim Smith: First, it's not been proved that the log was tampered with. I have some friends who go to meetings with people from PG&E, and they've said that report isn't as accurate as the paper would lead you to believe. I can tell you from personal experience that logs are frequently filled in after the fact, and sometimes you write in them the best you can remember. For instance, you are supposed to log in and out every time you go into a sub. I've been in a lot of subs where my name doesn't show up in the log. Conspiracy? No. Just forgot sometimes, was in a hurry, or someone else was writing in it and I didn't get back to putting my name down. Also, when a lot is going on, the operator's first job is to put the station back together. Filling out the paperwork comes in a distinct second and is always done after the fact. I'm not saying it may not have been tampered with, just don't make any rush to judgment either way. And did the company falsify the log or did a human do it (if it was done)? Personally I'd go with operator error, having seen it more than once. I've seen operators write down switching orders that are 100% correct and then go open the wrong breaker. I've seen them open the wrong disconnects under load. And they all have separate numbers, and you need a key with that number on it to open the lock so that you can open the switch. And it still happens. And yes, I've made my share (maybe a little more) of errors too.

Am I saying there will not ever be any outage again ever? No, of course not. In fact, I think I could almost guarantee you one in the next 5 to 10 years, and probably before that. You are dealing with an extremely complex system and humans; errors are going to happen. The question is how does this relate to Y2K? And the answer is not much. Everyone knows it's there, knows it's coming, and will be on their toes.

-- Anonymous, May 13, 1999


Engineer said...

Am I saying there will not ever be any outage again ever? No, of course not. In fact, I think I could almost guarantee you one in the next 5 to 10 years, and probably before that. You are dealing with an extremely complex system and humans; errors are going to happen. The question is how does this relate to Y2K? And the answer is not much. Everyone knows it's there, knows it's coming, and will be on their toes.

We've spent a lot of time debating logging and process control indication functions. Totally putting aside direct impact on process controls, the concern with bad data logging and process indication bothers me more than anything. Why? The human element. I've spent a lot of time in my professional career in control rooms, and I *know* how important proper indication and process parameter monitoring is when it comes to making split-second decisions.

And sometimes it's important to know what happened in *what sequence* to make casualty recovery decisions.

So, you're right - everyone will be on their toes. Does this mean (outside of perhaps the nuclear side of the business, where requal training is required every couple of months) that plant and system operators are going to receive the necessary response training in the next 6 months to update their skillsets in dealing with system anomalies potentially induced by Y2K? I can't answer this one. But that's one hell of a lot of training between now and then; I'm wondering how that's been budgeted for. (Training is usually the first thing to hit the bricks when budgets get tight.)

-- Anonymous, May 13, 1999


Rick,

We were discussing protective relays and other devices whose primary functions, once electro-mechanical, are now numeric, with value-added SOE (sequence of events) and/or time/date stamping. No big deal if SOE fails as long as protection/control is OK.

SCADA/EMS failure due to outside dependencies CAN be handled manually, as evidenced by the April drill. What SCADA decisions must be made in a split-second manner? What devices do you know of that will fail, causing this situation? Certainly not the battery alarm monitor (that is the only embedded device failure I can recall you mentioning). Shoot, our parameter monitoring will be transmitted once, then sampled locally and only reported back on a report-by-exception basis (reduced comm traffic).
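For readers who haven't run into the term, "report by exception" just means the remote end stays quiet until a value has moved outside a deadband since its last report. A minimal sketch, with the point name and deadband invented for illustration:

```python
# Minimal sketch of report-by-exception telemetry: a point is sent back to
# the SCADA master only when it has moved outside a deadband since the last
# reported value. The point name and deadband here are hypothetical.
from typing import Optional

class ExceptionReporter:
    def __init__(self, deadband: float):
        self.deadband = deadband
        self.last_reported: Optional[float] = None

    def sample(self, value: float) -> Optional[float]:
        """Return the value if it should be reported upstream, else None."""
        if self.last_reported is None or abs(value - self.last_reported) > self.deadband:
            self.last_reported = value
            return value
        return None

bus_voltage_kv = ExceptionReporter(deadband=0.5)
for reading in (229.8, 229.9, 230.1, 231.2, 231.3):
    report = bus_voltage_kv.sample(reading)
    if report is not None:
        print(f"report bus voltage: {report} kV")
# Only 229.8 and 231.2 go out over the comm link; the rest stay local.
```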

Waz up.

-- Anonymous, May 13, 1999


CL:

Thanks for your thoughtful post, we disagree but in a constructive way. Do/did you work for a utility or are you an engineering grad?

I am an engineer by education and have worked some in the utility industry. I have been responsible for commercial software development projects for quite some time.

It is nice to *constructively* disagree.

Thanks. Are you saying I'm an honest idiot? (grin)

Of course! Seriously, you are *not* an idiot. I just don't think that you are asking all the right questions.

My point (Engineer, listen here too) is that most loads are completely outside the control of the power companies, and many of these are operating based on suspect computer systems that may or may not fail due to Y2K. These failures will not necessarily occur on Jan 1, but may happen as bad data accumulates in the IT systems.

Several times you ask for non-compliant devices. From the context, it seems as though you mean non-compliant devices in power facilities. For the sake of the argument, I accept that power delivery systems have zero problems. However, as far as loads outside the system, many potential problems have been observed: for example, the statement by GM's CIO reflecting "catastrophic problems" in their manufacturing facilities, or Texas Instruments' test of a semiconductor fab in Dallas in which over 50% of the critical equipment was non-compliant. Other public examples exist in hardware, but many more exist in software. Remember, logic is implemented in software and hardware (a nice distinction for discussion, but a clean separation does not exist). It is failure in logic that will either directly or indirectly change the load on the electrical grid.

You say you know the answer to this - then you know this happens every day. Circuits trip, plants have large processes trip off, all the time. We can regulate unit outputs, switch inductors/cap banks, spin pumped storage units as motors, etc. You apparently have the education to know this. Some system margins are tighter than others, but remember, this is a weekend. Not many assembly-line workers will be on the job; failures will become apparent when the normal load upswing on Monday doesn't happen. That is not so transient now, is it?

The changing of the loads, or transient loads, also has a velocity and an acceleration component. True, every minute of every day, loads dynamically change. Our systems are designed to add hysteresis to the system, i.e. to control the acceleration and velocity of transient loads. However, it is my contention that the days following Jan 1, 2000 are no ordinary days in terms of external load management. We can't use a "normal" day to extrapolate possibilities for post-Y2K. We *hope* that we can, but the global evidence points to everything except business as normal for a while (days? weeks? months?).

With regard to dirty power, remember we are not talking about older analog devices that tend to be more voltage-tolerant than new devices. We are talking about manufacturing processes in which almost all equipment has embedded digital logic devices. I have burned up a fair number of chips either building or working on this type of equipment. What could easily happen is that plant managers worried about the effects of voltage fluctuations could shut down large loads to protect the *controllers* of the load. Y2K-compliant controllers will cease to function if destroyed by dirty power or turned off by worried managers. This is one type of device that the power companies have no control over.

Engineer ... you have a point about trying to *prevent* transients; but we do devote a huge amount of resources to trying to manage the transients that we experience.

(2) And they may fail due to a giant meteor striking the earth. This is a rather meaningless catch-all phrase. They may not fail, too. The question is what is the most likely event backed up by the evidence at hand, and right now that points to not failing rather than failing. Fewer things showed up than were expected, and most had nothing to do with the actual generation and transmission of power. (3) Huh? Can you back this up with anything? What is the probability and how did you (or whoever) calculate it? Without anything to back it up this is just meaningless. Why would a national outage occur due to loads attached to the grid? Because it sounds so cool and neat? And exactly what kind of loads do you mean?

Engineer, unfortunately, you can't calculate probabilities when the input data is unquantified. In general, widespread outages are possible under a number of conditions. Just because I cannot give you the probability of a transformer in a generation facility arcing to ground while being cleaned with isotonic water doesn't mean that it will not happen ... it has. And what kind of odds would you place on a fallen tree being the trigger for a western power grid failure? What kind of odds would you give for NYC to lose power due to the failure of some equipment to control lightning transients and the massive amount of operator error that subsequently occurred? How much money would you bet that a failed procedure in a substation would black out the San Francisco area - an area served by a system designed so that such widespread failures couldn't occur?

... and LISTEN ... there is NOTHING cool or neat about widespread power outages. People are exposed to danger when this happens and they may be seriously hurt or killed. Don't you EVER again accuse anyone on this forum of anticipating power outages because they are cool or neat! If you think for a moment that anyone here wants power outages, you are so very wrong.

Am I saying there will not ever be any outage again ever? No, of course not. In fact I think I could almost guarantee you one in the next 5 to 10 years and probably before that. You are dealing with an extremely complex system and humans. Errors are going to happen. The question is how does this relate to Y2K? And the answer is not much. Everyone knows its there, knows its coming and will be on their toes.

Will we have widespread power problems? Yes, you are probably right on that point. The when is the only question. However, to hold a view that Y2K has no potential to cause problems because everyone knows about Y2K is a myopic and silly statement. You simply do not know enough about the world outside of your industry.

-- Anonymous, May 14, 1999


Good Grief! I put a < big grin > in the above and made all the text bold. Sorry.

-- Anonymous, May 14, 1999

Now is bold turned off?

-- Anonymous, May 14, 1999

Rick, I think you are confusing recording incoming data in the proper sequence with tagging that data. I'll give you the following example:

A certain DFR (digital fault recorder, which we use) had a Y2K problem with its time code. If you advanced the system clock to 12/31/99 and let it roll over, the time went crazy and the hours started advancing in minutes. However, it had absolutely no effect on the recording of the waveforms. They were all correct and would reproduce properly. A thousand amps showed up as a thousand amps, and operations such as opening the poles of a breaker showed in their proper sequence. So the recording of the data was still correct. Only the tagging of it with a specific date, hour, minute, etc. was wrong.

When I came into this business, that wasn't available anyway. When we had the old Hathaway light-beam oscillographs, some people tried putting watches in them so the light would reflect the dial off the mirror and onto the paper. Sometimes it worked reasonably well, and sometimes people forgot to wind the watch. It didn't affect the ability to analyze the oscillogram. The watches would only give you the hour and minute, and when you are looking at an oscillogram you are looking at millisecond changes.

To give a more modern example: the digital relays that are now being installed can show a report when they operate. The report shows the sequence of when elements picked up and when the 52b switch(es) operated. This sequence has nothing to do with the time put into the relay. If you put in the wrong time, the relay will display the wrong time, but the sequence of operations (assuming the relay operated correctly) will be correct. Time tagging is a nice extra, but that's it.
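A toy illustration of the distinction being drawn here: the order of recorded operations comes from the recorder's own sample counter, so a bad wall-clock tag (say, a Y2K-confused date) mislabels the events without scrambling their sequence. The record layout below is invented for illustration.

```python
# Toy illustration: event records ordered by a monotonic sample counter.
# Even if the wall-clock tag is nonsense (a Y2K-confused date, say), the
# sequence of operations is still recoverable. Field names are invented.
from dataclasses import dataclass

@dataclass
class Event:
    sample_index: int   # monotonic counter from the recorder
    wall_clock: str     # possibly wrong date/time tag
    description: str

events = [
    Event(10482, "01/04/1980 00:03", "phase A overcurrent element picked up"),
    Event(10490, "01/04/1980 00:03", "trip output asserted"),
    Event(10541, "01/04/1980 00:04", "52b contact indicates breaker open"),
]

# Sorting by sample_index reproduces the true order of operations,
# no matter what the (wrong) wall-clock tags claim.
for e in sorted(events, key=lambda e: e.sample_index):
    print(e.sample_index, e.description)
```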

Different SCADAs have different scan rates. Most of those in use are only in the two to three second range.

DSmith:

When you say that the loads are outside the control of a utility, you are partly correct. As the load changes, the generation and transmission change to accommodate it. When you write about the velocity and acceleration of the changes, you are trying to describe what is called a system swing. Swings are about the worst thing that can happen and can really tear up a system. It was a system swing that tore up the western grid last time. But it takes special conditions to produce a large swing. Usually you need a system that is fully loaded and stretched to its limits. You also need a loss of, or insufficient, reactive power. Reactive power, unlike real power, can't be shipped to where it's needed and has to be supplied close to the generation or load. One of the causes of the breakup of the western grid was insufficient reactive at one of the major substations on the North-South Intertie. My understanding is that some of the generators at one of the large dams out there have since been modified so they can generate reactive power, and several hundred megavars of caps have been added. When you lose lines under those conditions (and a few others) you can initiate a system swing, and if it is not curtailed in time it will tear up a system and cause it to break apart.
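For the non-EE readers, here is a back-of-the-envelope sketch of the "swing" described above, using the textbook single-machine swing equation. Every number in it is an illustrative textbook-style value, not data from any real system.

```python
# Back-of-the-envelope single-machine swing equation, purely to illustrate
# what a "system swing" is: after losing a line, the rotor angle oscillates
# toward a new operating point; if the swing grew instead of damping out,
# the machine would lose synchronism. All constants are illustrative.
import math

H = 4.0           # inertia constant, seconds (on the machine base)
D = 0.5           # damping coefficient, per unit
F0 = 60.0         # nominal frequency, Hz
P_MECH = 0.8      # mechanical input power, per unit
P_MAX_PRE = 1.8   # transfer capability before the disturbance, per unit
P_MAX_POST = 1.1  # reduced capability after a line trip, per unit

delta = math.asin(P_MECH / P_MAX_PRE)  # pre-disturbance rotor angle, radians
omega = 0.0                            # per-unit speed deviation
dt = 0.001

for step in range(int(5.0 / dt)):      # five seconds after the line trip
    p_elec = P_MAX_POST * math.sin(delta)
    omega += (P_MECH - p_elec - D * omega) / (2.0 * H) * dt
    delta += omega * 2.0 * math.pi * F0 * dt
    if step % 500 == 0:
        print(f"t = {step * dt:4.1f} s   rotor angle = {math.degrees(delta):6.1f} deg")
```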

However, when the rollover occurs there is expected to be 25 to 50 percent excess generation over load. The maximum load conditions on the transmission lines won't be there because of the time of year, the fact that it's both a weekend and a holiday, and that most plants will probably be shut down due to worries about Y2K. I think a utility in Virginia even asked some of their customers not to shut down because they were worried about too little load being on their system at that time (VEPCO?).

Digital controllers are an old story. I've been to many a meeting where problems with them have come up again and again. The plant manager usually blames the utility. Nine times out of ten it's due to the fact that the controller is set too tight. I've read a lot of papers where plants were tripping off under circumstances that were not unusual, such as lightning strikes a bus or two away from the plant. When a DFR or PQ monitor was put in the plant, it showed voltage dips that were well within contractual limits as well as any state utility board mandates. The problem was, almost invariably, that the controllers came set to trip off even if the voltage dipped by only a percentage point or two off nominal. Yet the machinery itself could take a much larger voltage dip without problems. The solution was to reprogram the controllers to be closer to the machines' specs. I remember one paper where the plant owners were trying to get the power company to spend millions to prevent the voltage from sagging by more than 2% because of the problems. I think they wound up reprogramming all of their controllers for about $15K.
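A small sketch of the mismatch just described: a controller shipped with a hair-trigger undervoltage setting trips on a routine sag that the machinery itself could ride through. All of the numbers are made up for illustration.

```python
# Illustrative sketch of a controller "set too tight": the factory-default
# undervoltage trip fires on a modest sag that the driven machinery could
# actually ride through per its own spec. All numbers are made up.
NOMINAL_V = 480.0             # nominal plant bus voltage, volts

FACTORY_TRIP_DIP = 0.02       # default setting: trip on any dip beyond 2%
MACHINE_TOLERATED_DIP = 0.15  # the machine itself rides through a 15% sag

def trips(voltage: float, allowed_dip: float) -> bool:
    """True if an undervoltage element set for `allowed_dip` would trip."""
    return voltage < NOMINAL_V * (1.0 - allowed_dip)

sag = 0.92 * NOMINAL_V        # an 8% sag from a fault a bus or two away

print("factory-default setting trips:", trips(sag, FACTORY_TRIP_DIP))               # True
print("setting matched to machine spec trips:", trips(sag, MACHINE_TOLERATED_DIP))  # False
```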

This whole "dirty power" thing has become a catch-all phrase. It means far more than just having low voltage; harmonics and switching spikes (transients to you) all come into play. The joke is people think they will have cleaner power from their Onan or Honda generators than they get off the grid. One person was even thinking about using batteries and an inverter. Anyone who thinks that should use an oscilloscope to look at the waveform coming off the machine; I think you will be in for a surprise. The power is probably good enough for a motor (refrigerator, freezer), but unless you have a surge suppressor or a power conditioner I'd think twice about running any PC or similar device off of it.

Any plant manager who has been paying attention to what's been going on in the last few years would know about the above.

It wasn't a fallen tree; the line sagged (because of the load) into the tree. That was followed by a misoperation of a relay, and due to system conditions (see above) a breakup occurred. As for the NYC outage, that was back in '65, I believe. And we do try to learn from our mistakes and not repeat them. The equipment didn't fail to "control the lightning transients," whatever that means. The relay failed to trip the line for a lightning-caused fault.

I didn't accuse anyone of wanting a power outage. I accused YOU of using terms because they sounded cool and neat, and of not being able to back up what you wrote with any sort of actual knowledge. The fact is that you don't really know the terminology ("control the lightning transients"; I have to admit I LOL'd when I read that) and can't back up what you write about probabilities, etc.

I suspect I know more about the world outside my industry than you know about the utility industry. I never said there wouldn't be any power outage due to Y2K, but I don't believe any of the BS about outages for months or weeks.

-- Anonymous, May 14, 1999


Hi Engineer,

I appreciate your detailed explanation of "reactive power". Although I have not elaborated on this issue, I have never been employed as a power grid engineer. I do have some background in EE, but the majority of my industry information comes from untold lunch-hour discussions with power industry executives and system control engineers. I apologize for not using all the correct terms. On one hand, I am targeting my responses to non-EE readers, so I use "spring" analogies because everybody understands a spring. On the other hand, I don't have all the industry terminology at my fingertips, so I do make some missteps in communication.

However, when the rollover occurs there is expected to be 25 to 50 percent excess generation over load. The maximum load conditions on the transmission lines won't be there because of the time of year, the fact that it's both a weekend and a holiday, and that most plants will probably be shut down due to worries about Y2K.

Please remember that in many instances where small problems became big problems, the generation was supposed to be online and wasn't. Maybe some yahoo dug up a natural gas main, or some control systems malfunctioned and generation units were unexpectedly offline. I've seen both of these situations turn isolated failures into regional failures in the last three years.

This whole "dirty power" thing has become a catch-all phrase. It means far more than just having low voltage; harmonics and switching spikes (transients to you) all come into play. The joke is people think they will have cleaner power from their Onan or Honda generators than they get off the grid. One person was even thinking about using batteries and an inverter. Anyone who thinks that should use an oscilloscope to look at the waveform coming off the machine; I think you will be in for a surprise. The power is probably good enough for a motor (refrigerator, freezer), but unless you have a surge suppressor or a power conditioner I'd think twice about running any PC or similar device off of it.

Perhaps "dirty power" has become too much of a catchphrase, but it does do a good job of explaining the problem to non-EE types. Yes, I know that the grid waveform is typically better than a portable generator's, but you must know that such an answer is much too simplistic. The shape of grid waveforms is normally excellent. However, grid-supplied voltage can vary +/- 10%.

Sub-$500 generators produce mostly "motor grade" electricity. The waveform sometimes has lots of small variations or noise because the small gensets use brushes. Voltage will vary a lot depending on the load. Voltage and frequency dips are pretty bad when large loads are applied.

Honda gensets use brushes, but generally produce pretty clean waveforms. Their voltage and frequency regulation is specced to be less than +/- 5% variation. Voltage and frequency dips stay within the 5% spec.

Onan and better gensets do even better than Honda sets, especially the RV models.

Industrial gensets with electronic governors and brushless generators hold voltage and frequency variations to less than 0.2%. Our datacenter is backed up by such a unit. As long as the applied load is not a majority of the genset's capacity, these sets normally produce better power than comes off the grid.
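
To put those regulation figures side by side for the non-EE reader, here is a trivial bit of Python arithmetic. It assumes a 120 V nominal service; the cheap-genset number is my own rough guess, and the other percentages are just the ones quoted above.

# Voltage windows implied by the regulation figures discussed above,
# for a 120 V nominal service.  Purely illustrative arithmetic.
NOMINAL_V = 120.0

regulation = {
    "grid (+/- 10%, as claimed above)": 0.10,
    "cheap genset (rough guess)":       0.15,   # assumption, not a quoted spec
    "Honda genset (+/- 5%)":            0.05,
    "industrial genset (0.2%)":         0.002,
}

for name, tolerance in regulation.items():
    low, high = NOMINAL_V * (1 - tolerance), NOMINAL_V * (1 + tolerance)
    print(f"{name:34s} {low:6.1f} V to {high:6.1f} V")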

As for inverters, the output varies greatly depending on the unit. The inverters that we use average 40+ steps per waveform, and even the most sensitive electronic equipment has no problem with them. Granted, these are "sine wave" inverters, not step inverters.
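
If you want to see why the number of steps matters, here is a small, idealized sketch in Python (using numpy) that compares a 40-step staircase approximation of a sine wave against a plain square wave, which is roughly what the crudest inverters put out. It is a toy model for illustration, not a measurement of any particular inverter.

import numpy as np

SAMPLES = 4000                      # samples over exactly one AC cycle
t = np.arange(SAMPLES) / SAMPLES    # phase from 0 to 1 over that cycle

def thd(wave):
    # Total harmonic distortion: RMS of harmonics 2 and up over the fundamental.
    # Because the array spans exactly one cycle, FFT bin k is harmonic k.
    spectrum = np.abs(np.fft.rfft(wave))
    return np.sqrt(np.sum(spectrum[2:] ** 2)) / spectrum[1]

steps = 40
sine = np.sin(2 * np.pi * t)                                         # ideal waveform
staircase = np.sin(2 * np.pi * (np.floor(t * steps) + 0.5) / steps)  # 40-step "sine"
square = np.sign(sine)                                               # crude square-wave output

print(f"ideal sine THD:        {thd(sine):.1%}")
print(f"40-step staircase THD: {thd(staircase):.1%}")
print(f"square wave THD:       {thd(square):.1%}   (the classic ~48%)")

The staircase's leftover distortion sits way up around the 39th and 41st harmonics, which power supplies filter far more easily than the square wave's big 3rd and 5th harmonics.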

The Engineer wrote:

I didn't accuse anyone of wanting a power outage. I accused YOU of using terms because they sounded cool and neat, and of not being able to back up what you wrote with any sort of actual knowledge. The fact is that you don't really know the terminology ("control the lightning transients"; I have to admit I LOL'd when I read that) and can't back up what you write about probabilities, etc.

I suspect I know more about the world outside my industry than you know about the utility industry. I never said there wouldn't be any power outage due to Y2K but I don't believe any of the BS about outages for months or weeks.

Perhaps you wouldn't laugh so loud at "voltage and/or load variations in delivered power due to the inability of the distribution and/or transmission system to control load imbalances caused by unpredicted causal events". Now, which is more understandable to the non-EE?

I submit to you that computers and software have much more to do with the causal effect of Y2K success or failure than "reactive power". However, I don't push you around with computer geek terms that you probably cannot define - channel-connected DASD, 9-track, console ops, execution queues, CICS, MVS, VMS, ABEND, parallel processing, Wolfpack, cluster, RAID, kernel mode, user mode, IO ports, hooking, objects, exceptions, COM, CORBA, DLL, threads, windowing, ISAM, relational, SQL, navigational, and on and on and on ... So please forgive me if I misuse a term that might be common to you.

As to the weeks and months without power, I have never made such a statement. I did say that cascading power outages might cover a large area, maybe even the nation. I agree with you about weeks without power; I do not think that is a likely scenario at all. My personal guess is that power quality and reliability will suffer significantly for a short period of time in the winter, and then again in the summer as electrical demand increases. In the unlikely event that widespread areas are without electricity for more than about 3 to 5 days, social unrest will be a problem. God help us if the electricity is off longer than that.

-- Anonymous, May 15, 1999


Engineer,

Bonnie posted a link to a Y2K analysis by ABB on the thread below, and I copied a bit of it to that thread. ABB supports what I've said above; namely, 1) the failure probabilities you are asking about generally rise under certain circumstances because the potential number of failures increases, and 2) it is not clear whether our reliance on system interdependence will be an asset or a liability during 2000.

http://greenspun.com/bboard/q-and-a-fetch-msg.tcl?msg_id=000pp5

-- Anonymous, May 15, 1999


DSmith wrote:

Please remember that in many instances where small problems became big problems, generation that was supposed to be online wasn't. Maybe some yahoo dug up a natural gas main, or some control systems malfunctioned and generation units were unexpectedly offline. I've seen both of these situations turn isolated failures into regional failures in the last three years.

Can you please give me specific examples of where these became "regional failures"? Small-area failures happen all the time. I was out of power for six hours last fall because a driver hit a pole and the transformer on top of it caught fire. But it was a neighborhood blackout and that was it.

Actually, our grid-supplied voltage is + or - 5%, and it doesn't vary by that much under normal conditions. Those are the acceptable limits. It may spike more due to a lightning stroke or fault, but that should only be within a bus or two of the fault.

You said it yourself about gensets: you get good voltage regulation as long as you aren't running near capacity, so you have to way oversize the set. Also the load has to be fairly constant. You can do that in a business, but in a home it's another story. The power should be OK for your fridge or freezer, but I wouldn't hook my computer to it without some form of surge protector in between the computer and the line.

The inverters you use are also expensive ones. Not the ones that Joe Average is going to buy or even think about when he's panicked about Y2K and is clueless about electronics.

DSmith wrote:

Perhaps you wouldn't laugh so loud at "voltage and/or load variations in delivered power due to the inability of the distribution and/or transmission system to control load imbalances caused by unpredicted causal events". Now, which is more understandable to the non-EE?

No, sorry but that still doesn't make any sense.

I submit to you that computers and software have much more to do with the causal effect of Y2K success or failure than "reactive power". However, I don't push you around with computer geek terms that you probably cannot define - channel-connected DASD, 9-track, console ops, execution queues, CICS, MVS, VMS, ABEND, parallel processing, Wolfpack, cluster, RAID, kernel mode, user mode, IO ports, hooking, objects, exceptions, COM, CORBA, DLL, threads, windowing, ISAM, relational, SQL, navigational, and on and on and on ... So please forgive me if I misuse a term that might be common to you.

Actually, I do understand a lot (though not all) of those terms. Do you really think I am clueless about parallel processing or IO ports, SQL, etc.? Please don't be that naive.

I've also read the ABB statement. It was written in January, and ABB is trying to sell their expertise to various companies. We even hired them a year or so ago to review one of our stations where we had a lot of their equipment. Of course they are going to say "be concerned, and hire us." What is surprising about that?

-- Anonymous, May 20, 1999


Engineer,

Being out of the loop in the power industry, I don't know a lot of examples of small failures becoming regional. However, three instances come to mind. All happened in the Panhandle of Texas over the last three (?) years. One was caused by the cleaning of transformers; the other because some poor guy dug up an 8" gas main (and paid for his error with his life). Best I can tell, about 20,000 square miles of Texas and New Mexico lost power each time. Power was out from 4 to 16 hours, depending on your location.

Another incident this last fall: Lubbock, TX is a city with municipal power and a population of 200,000. They normally generate about 50% of their power and buy the rest from another source. One fine morning, they lost 85 MW of capacity from their supplier due to transmission problems. With no time to spin up generators, a large portion of the city lost power. About 2 hours later, the city was back up with an operating margin of only about 5 MW. About 1 (?) hour later, a little 10 MW cogen unit tripped. The subsequent load was transferred to the other units and caused cascading failures. Power was out for 8-12 hours in Lubbock. Ironic note: the mayor and his department heads, including fire, police, etc., were in a windowless city hall conference room discussing possible Y2K-induced outages!
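
To make the cascade mechanics concrete for the non-power-plant reader, here is a toy sketch in Python. Every unit name, rating, and overload tolerance below is invented for illustration; only the little 10 MW cogen and the roughly 5 MW operating margin echo what actually happened.

# Crude cascade model: online units share the load in proportion to their
# ratings, and a unit trips once its share exceeds rating * tolerance.
def run_cascade(units, load_mw):
    online = list(units)
    while online:
        loading = load_mw / sum(rating for _, rating, _ in online)
        tripped = [u for u in online if loading > u[2]]
        if not tripped:
            print(f"stable: {len(online)} unit(s) online at {loading:.0%} of rating")
            return online
        for name, _, _ in tripped:
            print(f"{name} trips (asked for {loading:.0%} of rating)")
        online = [u for u in online if u not in tripped]
    print("blackout: nothing left online")
    return []

city = [                             # (name, rating in MW, overload tolerance)
    ("steam unit A", 60, 1.03),
    ("steam unit B", 40, 1.06),
    ("gas turbine",  25, 1.12),
    ("little cogen", 10, 1.00),
]

run_cascade(city, load_mw=130)       # about 5 MW of headroom: stable
run_cascade(city[:-1], load_mw=130)  # the cogen trips on its own: dominoes fall

The point of the toy: with a healthy margin the cogen trip is a non-event; with a thin one it takes everything down.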

In Florida this winter, they *almost* had widespread blackouts during the freezing days. I don't have the link handy, but I remember the power company spokesman speaking of very tense moments when the system was on the brink. Evidently, they almost lost power in several major cities due to overloads caused by lots of resistive heating.

Have you ever been in the back of a Walmart SuperCenter (famous for no windows) when the power goes out? So you get out of Walmart, navigating by the emergency exit lights. Managers are running around prying cash registers open and cramming $ into the safe. Employees walk the store with flashlights from sporting goods, trying to clear people out before Walmart gets cleaned out. So you get to your car. Noticing that the gas gauge is a little low, you pull into a gas station and are halfway out of the car before you realize that no electricity means no gas. Hmm ... the KFC next door is locking their doors. A quick dash to their drive-in window (no ordering at the sign) nets you a free chicken dinner. Traffic lights are out. Any lights that were backed up by battery have already dimmed or quit working. Instead of heading home and dodging all the "idiots" on the street, you have a chance to sit in the parking lot for a while and reflect on how much you miss your wonderful servant, electricity. The police direct traffic at a few intersections, but most are chasing fire and burglar alarms. After you get home, the house is cold. The central heater is useless without electricity. So is the electric range ... and the microwave. You would drive to stay with family or friends, but ... you don't have enough gas to drive out of the blackout area! So you drag in a few of those hunks of wood from the tree that you cut down last year, throw them in the fireplace, and wait for the lights to come back on.

Life goes on because the electricity was only out for 8 hours or so. To the prepared, that is nothing. In fact, it happens all the time. But for city-dwellers who are used to 24x7 electricity and are not prepared, 8 hours is no fun - 48 hours is h*ll.

True story. No new "revelations" here. But a little taste of "regional" electrical outages can change your position on preparations.

-- Anonymous, May 22, 1999


Dsmith,

Free chicken? Regular or extra crispy? (grin)

-- Anonymous, May 24, 1999

