Power Grid Configuration


Could someone please comment on the following logic and check my thinking.

Back when I first started looking at this problem about 2 years ago, I figured if power generation or the grid was at risk, then all bets were off. There would be panic in places like New York and Los Angeles, not to mention that a lot of people would be w/o heat all over the place and basically everything would grind to a halt.

Then in May and again in September NERC ran a couple of drills that tested the communications system - the phone system - between all the power plants. One guy at plant or switching facility X could talk to another guy at plant or switching facility Y to get power redistributed manually. When this kind of testing was performed I wondered: what the heck are they testing that for? That's like testing a manual system of some kind, not the automated switchover via computer control when plant or switching facility Y goes down. It seemed like their testing was very shallow. So with that kind of due diligence it seemed to me that there was a very good possibility of at least a brownout if not a more permanent failure of a portion of the grid.

OK, so now comes 1/1/2000. CNN shows all countries have power, the lights stay on, and there is no panic in the streets. Y2k weekend is over and everyone goes to bed - kind of boring.

Hmm... What gives here? Other than a few isolated communities in Canada and possibly the US, power stays on. How is this possible? We know no one fixes code that well. Bugs are introduced even during fixes. Where are the bugs? They sure as hell didn't flush them out via testing - no amount of testing is that good. Besides, there's been a lot of lying going on in the government (slipped dates and such) and the industry, and the pressure was on to just get the fix out, but...

I began to postulate how this portion of the rollover could have been so successful. You have to figure that all those plants were not compliant. But then all over the world, even in countries that had spent little to get prepared? Sure made a nice show for CNN's millennium coverage with all the parties and all the LIGHTS!

I began to figure that maybe, just maybe, some or all of these power companies pulled something off in the way of a contingency plan that allowed them to supply power in a very reliable, safe manner. But how did they do this? The conclusion that I'm coming to is that maybe they ran a very different system than what is normally run. In the gravest case where compliance was not possible before 1/1/00, they just changed their clocks back to 01/1972. After all, they absolutely positively could not afford to have those cities w/o power; there would be PANIC and worse. I spotted this problem early on but really didn't consider the possible workarounds that would let the infrastructure stay in place. Big mistake on my part.
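
A side note on the 1972 trick: there is real arithmetic behind it. The Gregorian calendar repeats on a 28-year cycle (so long as no skipped century leap day falls in between), which makes 1972, like 2000, a leap year beginning on a Saturday. Here's a little C sketch just to verify the calendar math - hypothetical code to check the claim, and to be clear, nobody has confirmed any utility actually wound its clocks back:

#include <stdio.h>
#include <string.h>
#include <time.h>

/* Check that 1972 is a drop-in calendar substitute for 2000:
   same leap-year status, same weekday for Jan 1 (the 28-year
   Gregorian cycle). Illustrative only. */
static int is_leap(int year) {
    return (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
}

int main(void) {
    int years[2] = { 1972, 2000 };
    int i;
    for (i = 0; i < 2; i++) {
        struct tm t;
        char day[16];
        memset(&t, 0, sizeof t);
        t.tm_year  = years[i] - 1900; /* struct tm counts years from 1900 */
        t.tm_mon   = 0;               /* January */
        t.tm_mday  = 1;
        t.tm_hour  = 12;              /* noon, to dodge any DST edge cases */
        t.tm_isdst = -1;              /* let mktime sort out DST */
        mktime(&t);                   /* normalizes t and fills in tm_wday */
        strftime(day, sizeof day, "%A", &t);
        printf("Jan 1 %d is a %s; leap year: %s\n",
               years[i], day, is_leap(years[i]) ? "yes" : "no");
    }
    return 0; /* both lines report Saturday / yes */
}

If a box only cares that weekdays and leap days line up, a 28-year setback keeps everything consistent while dodging the 00 year entirely.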

Now you know a number of items came to light within the last two weeks before the rollover (here are a couple of key items):

1) Russia would run their nuclear power plants, and I believe their whole power grid, in a manual mode, nearly eliminating the possibility of computer failure from y2k.

2) Many manufacturing, oil and gas pipeline facilities, and chemical plants were shutting down over the rollover, if not the whole Christmas/New Year's holiday.

The Russia thing is interesting because, if they could move through this period using the old manual method, then, hey, what about other countries? What about the US? When I first heard about Russia, I really didn't consider anyone else, but now...

Item 2 is important because if the grid did fail, then there would be less chance of major disasters at these industrial facilities. Also, the power requirements would be very minimal at this time, with all the industrial users shut down and even some on holiday. Not only that, but it is winter after all. It's not the middle of summer with all those air conditioners running - summer is peak usage time, electrically speaking, I believe.

So, with all that in mind, I thought I better go to the NERC site and see for myself if they really said anything about what they planned on doing to solve this problem.

www.nerc.com/~y2k/y2kplan.html

Man, was that an eye opener. The site is real old, like '98 something, but the basics are there. The plan was, even back then, to run the grid(s) in a "precautionary configuration" or something like that. In other words, they would reconfigure the grid(s) to have as much analog equipment as possible, in a mixture that would lend itself to what sounded to me like manual switchover by humans. Additionally, they would reduce power generation so that plants could, as needed, gear up to handle a sudden load. Plus they would suspend interstate sale of generated power to neighboring facilities. It would appear that they did just that, as borne out by the shutdown of one of the mega lines between OR and CA when the central OR terrorist knocked out that tower on 12/31/99 (or thereabouts). This is documented on BPA's site, www.bpa.gov. They are very clear that automatic switching took care of the incident, but it really wasn't a problem anyway because they were not using the line with the current configuration demanded by y2k (we're not selling power right now).

The NERC site, although somewhat dated, goes on to say that they would roll in this planned configuration some weeks before y2k and then roll it back out some weeks afterward. In other words, we're not really running with the normal setup. Now that makes you wonder what other pieces of the power pie are abnormal. Did all the power companies get new software rolled in in time? Does this mean that on 1/1/00 the whole damn power grid, with all those entities, just switched over to generating power with release 01.01.2000 of their respective sw packages all at the same time? Would you run this thing this way w/o any real end-to-end testing? Did they phase in all new releases before 1/1/00?

I'll bet they didn't do it that way. I think that many if not most of these companies are not running the new code, but instead are going to bring portions if not all of their systems back on line w/ the new code in some kind of controlled sequence - as time permits. No big rush now, we have power. This makes more sense when you think of the probability that a failure discovered at one plant would otherwise hit many plants simultaneously - instant power outages all across the grid.

There are probably a lot of other good reasons to take this approach, but it is really interesting if this is what they did.

So to stave off panic and put on a real nice show on CNN (reassuring the public all over the world), they keep the lights on. Wow, it's so politically correct.

Now the real work begins, much of it behind the scenes. And if some problems do crop up and a power plant or two blows up and workers are killed, no big deal. They have the option of declaring it NOT y2k related - after all, we're already over that 1/1/00 milestone (by the end of this week no one will even remember what caused the y2k bug). Even if they want to blame an incident on y2k, it's OK, because they can just carry what they learned from that plant conversion on to the hundreds of others that still need to be brought up to speed. It's beautiful!

If true, I wonder if this approach is being taken with any other infrastructures? We believe that oil and gas are basically fix on failure.

Again the world reboots in sequence and we rebuild the infrastructure. It all starts over; may take a few reboots. Fascinating...

Am I losing my mind or does any of this make sense?

Warren



-- warren blim (mr_little@yahoo.com), January 04, 2000

Answers

warren, that's a very interesting theory. All we need now are some power industry people to come in with some facts. Anyone?

-- Dzog (dzog@plasticine.com), January 04, 2000.

Work Plan

4. Operate systems in a precautionary posture during critical Y2K transition periods.

NERC will coordinate efforts to operate transmission and generation facilities in precautionary configurations and loadings during critical Y2K periods. Examples of precautionary measures may include reducing the level of planned electricity transfers between utilities, placing all available transmission facilities into service, bringing additional generating units on-line, and rearranging the generation mix to include older units with analog controls. Another example is increased staffing at control centers, substations, and generating stations during critical periods. Fortunately, from an electric reliability perspective, New Year's Eve falls on Friday, December 31, 1999, and January 1 is a Saturday. Therefore, electric system conditions are likely to be favorable, with the level of electricity transfers at light levels and extra generating capacity available during the most critical period.

-- probably a bunch of BS, but interesting (_@_._), January 04, 2000.


Good post Warren. I think you are right on the money. It makes sense.

-- bailey (glbailey1@excite.com), January 04, 2000.

What really makes sense is that THE POWER IS ON and will likely STAY ON except for the usual odd glitches here and there......

Methinks that there are far too many of the doomer persuasion who are hell bent on trying to find non-existent problems or cover-ups to try and perpetuate this TEOTWAWKI myth..........

-- Craig (craig@ccinet.ab.ca), January 04, 2000.


One more snip

The critical Y2K operating period is likely to extend several weeks before and after midnight December 31, 1999.

-- probably a bunch of BS, but interesting (_@_._), January 04, 2000.



Do you have a background in electrical engineering??? I thought not!!

-- huh (huh@home.com), January 04, 2000.

I had the same thoughts about the FAA. FAA's equipment was already in bad need of upgrading -- big time. Their air traffic controllers union (I don't have the web site -- I've been looking for it lately) back in '98 listed a chronology of events surrounding FAA's lack of movement on equipment replacement (including documentation from IBM that after a certain date it could not guarantee parts for the old mainframes that they were using). Now comes Y2K and the need to make code and equipment changes fast. FAA brings on this new air traffic control system, only to have it fail in the heavy air traffic cities. So, they test it in places like Denver, and then they branded it a success. However, every time they tried to use it in Chicago, they had problems and had to take it off line to go back to the old system.

Now it's 12/99 and FAA proclaimed that it was Y2K ready and all that. However, I never heard anything else on how they got the new system up and running in Chicago. Furthermore, on Y2K Newswire, Mike Adams posted, verbatim, the runaround he got from the FAA public affairs person and from individuals with an independent contractor that had been used to audit their system. He was trying to get his hands on the report. The FAA spokesman told him that he had to get it from the vendor. The vendor told him that they were under a confidentiality agreement and could not release it without FAA's consent (which happens to be true -- when a vendor prepares reports for the gov't, those reports are considered property of the gov't). Finally, after several follow-up calls, Mike let it go.

It is this kind of thing that leads one to think and speculate on the true 'level of readiness'. I suspect that FAA is working off of a contingency plan also, and will continue to remediate -- gradually -- until they get it done.

-- Mello1 (Mello1@ix.netcom.com), January 04, 2000.


Warren,

I like the logical assessment. Although I work for an electric utility, I am a skeptic myself. No use taking chances with something as critical and important as power. I say that as an electricity consumer, not a provider.

However, I just checked with my inside guys here at the company, and they report everything is working normally. Yes, energy trading activity was way down during the rollover last weekend, as the various utilities ramped up their own power generation to ensure they could cover their own loads if need be. But from what our system operators are telling me just minutes ago, activities are essentially back to normal. No manual controls, no contingency plans in effect, nada.

What you probably should know is that "normal" these days is sometimes kind of shaky to start with. The trend toward wholesale energy marketing and energy trading keeps our transmission network loaded all year long anymore. You would be surprised how often, even in the winter, line load relief is called for (basically, someone is trying to ship too much power through our lines to somewhere else, so some transactions have to be cancelled or scaled back). It is all based on cost and economics, and the safety factors we took for granted in the past are being pushed closer and closer to the limits. My guys tell me that this is what concerns them most anymore - all of the power sales that are going on. When you have people selling a commodity that they are just buying from someone else, the potential to come up short increases dramatically. The deregulation of transmission is still in the early stages, and no doubt will be subject to a shakeout.

As far as Y2K is concerned, I did a check last summer amongst the various system operators, asking how many had generators. They all did! When asked if this was for Y2K, they all said no, it was because they know how fine a line we (the industry) walks every day.

Back to your question, my operators reported that they have had to reboot their workstations a time or two today, and have had a couple of communications problems that were dealt with. But in all actuality, this is just business as usual.

If I hear anything of note I'll be sure to post.

sparky (i'm more doomer than polly, just ask my spouse)

-- sparky (lights@re_my.business), January 04, 2000.


Warren, thank you. Would you have any objections to my posting verbatim your well-written question over on Rick Cowles' Energyland Public Discussion Forum, 'Electric Utilities and Y2K'?

If you post a reply that this is OK, I will be careful to copy/paste only from your Title through your name, "Warren" - *omitting your email address*. Mr. Cowles' Board is password protected, but posts require email addresses, so I would copy it as a "posted with permission from another board."

I am not a programmer, but I would be interested to read comments from those in the utility business.

If I don't read an OK from you here, I won't do it. But, in any case, it's an intriguing idea... although I would guess it's probably too big as an explanation to keep secret.

Respectfully, Jim

-- Jim Young (jyoung@famvid.com), January 04, 2000.


Well, per Sparky's (Thanks) reply, maybe we should just watch incoming responses here for a while, Warren, you think? -Jim

-- Jim Young (jyoung@famvid.com), January 04, 2000.


Warren, "Am I losing my mind or does any of this make sense?"

None of it makes sense and you are losing your mind, Warren. So are your like-minded buddies here.

Craig, Thanks for the voice of reason!!! Geez...

-- Jake (Jake@Reality.com), January 04, 2000.


To: huh@home.com

No, I don't have any background in electrical engineering; unless you count that EE101 course in college (just kidding).

Do you?

If so, do you care to comment on my theory?

I do have a career's worth of background in computer science. I'm retired now. Are you? :=)

Apart from any slamming, I'd really be interested in the particulars of your viewpoint if you have one.

From my knowledge, all this uptime with no bug reports from this single infrastructure is unlikely, considering what should have taken place after remediation and roll-in efforts were complete; that includes the 1/1/00 turnover. To put it another way, things just don't work that well with so many changes. Go to the bpa.gov web site. They make big noises about all the systems that were changed. Making changes like this w/o introducing errors is simply against all odds and flies in the face of everything that's been observed in the past about software projects. If there's anything that Ed is right about, it's this fact.

So here we are on 1/4/00 with everything related to power apparently functioning PERFECTLY. Perfectly? I can't even believe I'm using that word in relation to software, let alone any hardware that was changed.

Do you have another explanation which is possibly EE related?

Warren

-- warren blim (mr_little@yahoo.com), January 04, 2000.


This is a question for Sparky:

As a power industry person, do you or anyone you know have an explanation for why all the world's power systems stayed up? From the US to Australia to China to North Korea? Were the people you work with as surprised as most of us were? Did they think, as many did, that large chunks of the world's grids were going to be toast?

-- Dzog (dzog@plasticine.com), January 04, 2000.


Jim, you can repost this theory if you think it's appropriate.

Although from what Sparky says, everything is back to normal with a big green light. I'd have a tendency to put faith in his first hand knowledge.

Believe me, I'm not trying to perpetuate an end-of-the-world theory, but at least in this particular case the facts just don't seem to add up. I'm simply doing a little postmortem thinking about what's happened and what we've observed so far; no harm, just some extra consideration.

A couple of other possibilities are that 1) everything with power is simply fixed (all over the world, I might add), or 2) there were no major y2k bugs found in power generation. But then there is all the effort put forth by orgs like BPA.

Warren

-- warren blim (mr_little@yahoo.com), January 04, 2000.


Here's another thought. If it's in fact true that we are in a green condition w/ all remediated code in place and not even a single real glitch reported, then these guys deserve the biggest congratulations the world has to offer. Hey, time to raise rates :=)

In fact, NERC and others should be holding a press conference real soon to let us all know just how good the industry is, as shown by this month's excellent performance. It really would be quite an accomplishment.

Maybe we'll be seeing something like this on CSPAN or CNN soon.

Warren

-- warren blim (mr_little@yahoo.com), January 04, 2000.



Warren, thank you very much...I guess we'll let it go, for now. I DO think it is an excellent question, deserving respectful comments.

It's all so unbelievable, the utilities I mean, that I (also) keep thinking that a fast one was pulled somehow, but it would have to be such a large fast one...how could that not be leaked out?

Even Mark A. Frautschi, author of Embedded Systems and the Year 2000 Problem, seemed to believe a couple of years ago (see the section "Impact") that utilities were at risk due to embedded systems failing. How could he have been so wrong?

Oh, Ed Yardeni is coming on CNN's Moneyline, NOW.

Thanks again!

-- Jim Young (jyoung@famvid.com), January 04, 2000.


Warren:

Interesting theory, but I think what I posted in the following thread is what happened. Excerpts follow.

Why nothing was ever going to happen with the embeds

----- EXCERPTS -----

After the rollover I put my brain in gear, rather than relying on the "experts" as I have done up to now, about what goes on in the embeds. I'm not a hardware guy, but I went back and thought about my only hardware project from 20 years ago in university (back when memory was expensive).

Anything in hardware that deals in time is going to use counters to determine when time has elapsed. They are not going to use dates, because you have to use more memory to store a date, then convert it to a number to do the calculations, and then more memory to convert the number back to a date. So they'll count seconds or days. The point of a date calculation is to know when a certain amount of time has passed. If you use counters (even thousands of seconds across many days), that is the simplest, cheapest, and most bug-free way to do it - regardless of date. Now, some of the fancier, newer hardware may have some date functions for things like maintenance (since memory is not a problem now) that has been arbitrarily decided to be done at month ends rather than on a fixed interval, but my guess is those are few and far between.

Yourdon, I'm surprised you fell for this in such a grand way; you're supposed to be one of the "experts" who investigated all this. The reason Mr. CEO said all his teams were being sent home was that his clients, along with all the other companies with embeds, found out the above and realized that the "consultants" were swindling them by just investigating and investigating and investigating but actually doing very little else. I'm willing to bet that 99.999% of all embeds are like what I describe above. That's why the world could tolerate a 0.001% hiccup in the number of embeds out there and not blink at all.

I'm willing to bet that if you turn on 99.999% of the systems with embeds, there is no place to enter a date or even set one - after all, I don't see every one of these systems with a keypad or keyboard to enter a date if the current date is incorrect. These systems are black boxes, like your modem. Yes, they track time (with a counter), not by knowing what day of the year it is. They don't care about dates; they care about durations of time and days passing.
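
To make that concrete, here is the shape of the logic I'm describing - an invented C sketch (the names and the 30-day interval are made up), not code pulled from any actual device:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical embedded controller timing: a bare counter of
   seconds since power-up drives a periodic action. No calendar
   date exists anywhere in the logic, so no century rollover can
   touch it. */
#define MAINT_INTERVAL_SEC (30UL * 24UL * 60UL * 60UL) /* 30 days of runtime */

static uint32_t seconds_since_start; /* bumped by a 1 Hz hardware tick */

static void one_second_tick(void) {
    if (++seconds_since_start >= MAINT_INTERVAL_SEC) {
        seconds_since_start = 0;
        puts("maintenance due"); /* stand-in for whatever the device really does */
    }
}

int main(void) {
    uint32_t t;
    /* simulate 90 days of ticks: the action fires three times,
       no matter what the "date" is outside the box */
    for (t = 0; t < 3UL * MAINT_INTERVAL_SEC; t++)
        one_second_tick();
    return 0;
}

Compare that with storing a real date: you would need conversion routines both ways and a decision about two- versus four-digit years - exactly the baggage these boxes were built to avoid.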

----- SOME RESPONSES RECEIVED -----

Interested Spectator, regarding the last thing you wrote, about the systems that track time, but could care less about the date, well my husband had said the SAME thing to me no more than a couple of weeks ago. And He was working on hardware years before getting into software. And I figured he was right, but I Still worried about those nuclear embeddeds. Because the stakes Were so high. And that was my main concern. That and the electricity going out. So right now I feel pretty good, but won't use all the preps just yet. I've learned a lesson that Boyscouts follow as a matter of rule. Preparedness is a good lesson, pure and simple. What I Really wish is that solar power would become economically feasible. That's where we need to be going anyway.

-- DB (tomG@h.com), January 01, 2000.

IS,

I happen to be one of those utility company Y2K project team members. We examined and tested over 6500 systems with embedded chips. About 95% of those turned out to be exactly as you have speculated - no date/time function, or only elapsed time or day counting. However, until we did the testing, we had no way to know this. Most systems were put into service years ago with no documentation about how dates were calculated. We spent 80% of our money testing and documenting systems and only about 10% on remediation. As it turned out, there was only one critical system that would have failed, and that would have knocked 15 megawatts off-line. Since we produce about 3000 megawatts on average, we barely would have noticed it. If we had known this two years ago, we could have saved a lot of time and money, but we didn't, and there was no way anyone could have known.

We told everyone we could find that there would be no problems with power on Jan 1. We told everyone we were Y2K ready. We set up a monitoring center because, although the probability of problems was very low, it was not zero. As it turned out, we were right and I'm happy. What disturbs me is that some people either never listened to us or assumed we were lying. We worked hard, and that's one of the reasons we're all able to get on the net tonight and post messages.

Happy New Year to All.

-- (Someone@somewhere.com), January 03, 2000.

-- Interested Spectator (is@the_ring.side), January 04, 2000.


A little duct tape'll fix anything.

-- cin (cinlooo@aol.com), January 04, 2000.

Interested Spectator, thank you very much for that detailed post. That seems to put embedded systems to bed.

Does this also take into account software changes or should we simply assume that there were very few software changes in the first place?

Warren

-- warren blim (mr_little@yahoo.com), January 04, 2000.


Warren,

Good post; really enjoyed reading it.

The basic gist of it (and that is all I can get from this type of discussion; I'm not an electrical engineer) is intriguing, to say the least. Here is what I got from it, sort of along the lines of:

"If a tree falls in the woods and no one is around, does it make a sound when it hits the ground?"

Or:

"If the non-Y2K-compliant computerized power grid rolls over to 01/01/2000 and continues to work because it was set on manual override weeks before, do the sheeple know it is non-compliant?"

Is it non-compliant? If it looks like a duck, quacks like a duck, and walks like a duck, it probably is a duck.

If it works before the rollover, during the rollover, and after the rollover, does it really matter HOW it works, as long as it continues to work? This is the one that has been getting me. So the power grid was non-compliant, it got switched over to manual, and workarounds/computer fixes continued until the system could be fully automated again by, say, February of 2002. Nobody finds out; it all goes on behind the scenes. Y2K-compliant system = the power stays on. Non-Y2K-compliant system + sleight of hand + smoke and mirrors = the power stays on. Either scenario, the end result is the same.

Isn't that the important thing here? Making sure this mess we call the infrastructure holds together, even if it is propped up on ugly old boards with a coat of dime-store paint to pretty it up while the major overhaul is done? Do we really care how it is fixed, so long as it is fixed?

Granted, I would like to know that the remediation process is being done in a logical, rational, professional manner. Then again, I honestly don't think it is, or has been. As with any large project where piles of money are thrown around, there will be graft and corruption, con games galore. Shoddy workmanship, outright lies (yeah, we fixed it - no problem!); I'm sure that this went on and continues to go on in many forms (there are NO Y2K problems - those 12 refineries were gonna blow simultaneously anyway).

I guess the bigger question is this: If the power grid is being propped up manually right now, can it hold out until the automated system is back online and functioning properly? This is the one that worries me a bit. If it can't hold up, if power goes off before "the fix is in" because we just can't keep things running manually anymore because it is just too damn big, remediation then becomes a LOT harder (try rewriting code on a computer without power; it's a bitch). We will then have our Y2K rollover meltdown, just later than we the people (and the sheeple, let's not forget them) thought.

Of course, the manual override idea works for any of the other systems (water, oil/gas, financial, you pick), and the same rules hold: How long will they hold out, and will it be long enough?

Holding onto my preps for a while longer...

Peace,

Don

-- Shimoda (Enlighten@me.com), January 04, 2000.


Warren:

WRT software systems: these are a different animal.

I gave my views in this thread on Jan 1. It seems that the overnights have also gone much better than I would expect (and I'm a software guy, so I didn't need any experts to explain this to me). However, let's wait until each of the time periods I describe below (and Feb 29) has gone by before we see just how bad it will be. Although overnights are OK, as I say, most of the errors will happen when dates are used the most. That will be on the first payroll, and at the end of the first month.

Yes embeds are "non-issue" BUT IT will have its real rollover test TONIGHT here's why

----- EXCERPTS FROM ABOVE FOLLOW ---------

But with respect to IT/database systems etc., the real test for them will be TONIGHT when the first day of overnight processing happens. You see, if you've done any programming, you know that most of the bugs occur at what are known as boundaries. Now 1/1/2000 is a boundary. It is a boundary when you roll over, and it remains a boundary until you use it. WE HAVE NOT FINISHED THE FIRST DAY SO THE BOUNDARY IS STILL LIVE AS IT HAS NOT BEEN CROSSED YET. IT will use it very heavily tonight when they process today's transactions. Yesterday night they processed transactions that were checking dates for work done between 30.12.99 and 31.12.99. Today they will be processing dates for work done between 31.12.99 and 01.01.00 and will use the new date seriously for the first time.

Boundaries cause errors just prior to the boundary, at the boundary and just after the boundary. That is the rule of programming.

So you can expect the potential for the most errors at the following times (amongst other times) for IT systems:

Beginning of first month before boundary.
Beginning of first payroll (etc.) before boundary.
Beginning of first week before boundary.
Beginning of first day before boundary.
After the boundary has been crossed:
Beginning of first day, week, month, payroll, etc. after boundary.
End of first day after boundary.
End of first week after boundary.
End of first payroll after boundary.
End of first month after boundary.
End of first quarter after boundary.
End of first year after boundary.

The vast majority of potential errors come where the vast majority of the date calculations occur. Those are the "after" periods, when we reach the end-of-xxx times.
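
A made-up illustration of why the danger sits in the use of the boundary, not the midnight tick itself - a classic two-digit-year span calculation (hypothetical C, not from any real shop):

#include <stdio.h>

/* Hypothetical two-digit-year bug of the kind that end-of-period
   processing flushes out. Nothing breaks at the stroke of midnight;
   it breaks the first time a date on the far side of the boundary
   is fed into a calculation. */
static int years_between(int yy_from, int yy_to) {
    return yy_to - yy_from; /* buggy: assumes the century never changes */
}

int main(void) {
    /* fine all through the 1900s... */
    printf("97 -> 99: %d\n", years_between(97, 99)); /* 2, as expected */
    /* ...and wrong the first time year 00 is actually used,
       e.g. in a month-end batch run */
    printf("99 -> 00: %d\n", years_between(99, 0));  /* -99, not 1 */
    return 0;
}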

So hang on, folks - the ride in IT BEGINS TONIGHT.

-- Interested Spectator (is@the_ring.side), January 04, 2000.


Well Don, you've done it again...spoken like a true master.

-- Dee (T1Colt556@aol.com), January 04, 2000.

Warren,

Very good ideas. I like your logic.

And no, you are not losing your mind.......

Paul Grasha

-- Paul Grasha (lightningcomp@hotmail.com), January 04, 2000.


Mello1,

I see here that you believe the FAA has mainframes.... WRONG... they did a very large project in the early 90's using AIX RS6000 systems. Now they are still running AIX 3.2.5 on a lot of them, which is circa '94 and is not y2k compliant. However, there are patches, and the problems are with the 'at' command and 'touch -t', as well as some diag and errpt commands. It is all pretty obscure stuff. I have two of these systems in my environment; one is my mail server and the other is a backup DNS server. They ARE FINE, I haven't even applied the patches.

You should make sure that you know what you are saying before posting.

-- William R. Sullivan (wrs@wham.com), January 04, 2000.


William,

The FAA certainly has mainframe computers. There are twenty located in regional air traffic control centers. There was a rather large controversy over a year ago when IBM warned the FAA that the special model mainframes they were using (over ten years old) could no longer be supported by IBM, and that IBM felt the hardware was not Y2K compliant and could not be remediated. The FAA has replaced those with less ancient mainframes (over five years old, I believe).

The FAA also has a large number of even more ancient Apollo workstations running a Unix-like operating system called Domain. These are used for some radar displays. They have a 68030 or 68040 CPU (depending on model), 32 MB of RAM, and 320 or 650 MB hard drives, and use a proprietary token-ring type of networking. These are also more than ten years old, and it is almost impossible to get parts for them. HP, who bought Apollo a long time ago, will no longer support these systems. By the way, I have three of these, which anyone is welcome to for the cost of shipping.

I don't doubt that the FAA has a large number of RS/6000's. They have a large number of many different kinds of systems, many of which are obsolete, and all of which are difficult or impossible to integrate. Even without Y2K, the FAA is in deep trouble.

-- Jerry Heidtke (jheidtke@email.com), January 04, 2000.


Thank you Warren, well said....and may I add...

Feb. 29 May Pose Next Y2K Hurdle 02:49EST 01/04/00

WASHINGTON (AP) -- February 29 could pose the next problem for computers not programmed to recognize the first "extra" leap year in 400 years. There's typically -- but not always -- an extra day in February every four years, and this leap year is particularly unusual. Leap day 2000 is "the exception to the exception," explained Rick Weirich, the Postal Service's vice president for information technology. Some computers may not expect a leap day this year, and thus skip ahead to March 1, he explained. Because the actual year is slightly longer than 365 days, an extra "leap" day is added every fourth year. But that still doesn't make things come out quite even over time, so leap days normally are skipped in years ending in 00. Except -- and here's the problem -- if the year ending in 00 can be divided evenly by 400, it still is a leap year. Thus 1600 was a leap year and 2000 will be too, but 1700, 1800 and 1900 were not and neither will be 2100.

----------------------------------------------------------------------

Warren, How many times did you write that code? I bet you used to cut and paste it and could write it in your sleep. Divide by.....check remainder.....if......then......
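
Spelled out, it's just this - the standard Gregorian rule, written from memory rather than lifted from anybody's remediated source:

#include <stdio.h>

/* Divide by... check remainder... if... then... The full rule,
   including the "exception to the exception" that makes 2000 a
   leap year. A lazy version that stops after the %100 test calls
   2000 a non-leap year and jumps from Feb 28 straight to Mar 1. */
static int is_leap_year(int year) {
    if (year % 400 == 0) return 1; /* 1600, 2000: leap */
    if (year % 100 == 0) return 0; /* 1700, 1800, 1900, 2100: not leap */
    return year % 4 == 0;          /* the ordinary four-year rule */
}

int main(void) {
    int years[] = { 1900, 1996, 2000, 2100 };
    int i;
    for (i = 0; i < 4; i++)
        printf("%d: %s\n", years[i], is_leap_year(years[i]) ? "leap" : "not leap");
    return 0;
}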

I'm with you on this one.

-- thingsthatmakeyougohmm (oldcoderslikewarren@know.it), January 04, 2000.


My wife is a long distance telephone operator. Based on what she hears from other countries, there are a lot of sporadic power outages. Some last a day or two. Our media is ignoring them, but they are happening. Power is out in India and Japan, and it's unusual enough for their telephone operators to comment about it.

-- for protection (notreal@this.time), January 04, 2000.

Dzog,

Sorry for the long delay in replying; I was late leaving work tonight due to Y2K (surfing the web, of course ;-) and just now got back to the forum.

Was I surprised the grid stayed up? No, not from everything I'd been hearing. Was I taking it with a grain of salt? Yes, only because I feel loss of electricity would have been the most immediately noticed, and potentially one of the biggest, of the Y2K problems. IMHO, if the grid went down, even for a short time, things would get dicey real fast.

The only explanation I can offer for the lack of problems would be to relate what I heard from various members of our Y2K team dedicated to power generation / distribution / transmission issues. Early on a big concern was embedded technology. It was a big unknown, and should there have been problems found, time was so short that it would have been very difficult to fix in time. As time went on during the inventory and testing phases, the perception changed to one where the embedded issue really did not turn out to be a problem for us. We have many different types of technology in place, some of it very old and some that is cutting edge, and our biggest task was determining what we had. The only problems I heard about involved logging/record-keeping type functionality, which even if we'd left as it was would not have caused any problems as far as keeping the lights on. Might have been tough for the auditors to document energy transfers (the electricity flows based on the laws of physics, not accounting, so the network didn't care) but that was about it.

Late '98 or early '99 (can't remember exactly when) our EMS team (Energy Management System) did some Y2K testing. For us, our EMS is used mostly to monitor our distribution and transmission networks, control generation, and do some limited switching on the networks. There are utilities that do a large part of their system control using their EMS/SCADA systems, but so far we're not one of them. Our team used the backup system for testing, and rolled it over to 2000. All went well until it hit 30 Feb 2000, and then 31 Feb 2000, etc. :-) Turns out it was a known problem that the system vendor had also discovered, and was working on a fix for at the time we were testing.

All known problems were fixed at our utility by late summer '99. Still cutting it too close for my comfort, but that is the nature of the business. Deregulation has been a bit of a distraction for the industry. Of the problems that were discovered, none were considered to be capable of causing an outage. In terms of remediation, there really wasn't much to speak of. We've had more (non-Y2k related) problems with our new ERP system than anything experienced in terms of the network. We can deliver the power, but we might not be able to bill you for it. :-) In terms of the grid itself, it was for the most part "compliant" before we even looked at it. It was just an unknown until we did look into it.

Were we surprised the grid stayed up? Those who actually do the work would definitely say no (I'm involved with the Engineering branch). I'd heard rumors that there was some concern about China and Russia, interestingly enough sourced to US "intelligence". But our people were confident that the US grid would stay up without a problem. Did we have a lot of people working overtime that night? Yes, but they were spread out in the distribution network to take care of remote switching in case there were *communications* problems. We expected no problems with our equipment. And the extra personnel would only be needed if something caused an outage, such as a tree, drunk, or other mechanical damage, and if the communication network was out.

Many of our specialty/craft people were on call, and we did staff a special corporate Y2K command center, but I can honestly say that might have been to prove "due diligence" as much as anything else. (A big buzzword in Y2K was showing due diligence, essentially a cover your butt in case you get sued issue. Protection against potential of litigation, right or wrong, consumed as much of our Y2K efforts as actual useful testing and remediation. You know the drill, spend 1 hour testing and 1 week documenting.) Our actual Dispatching/Control center was staffed normally, except for a supervisor and a few higher ups who normally aren't there at night. I believe the generating stations were staffed in a similar manner. So I would have to say that confidence was high about the US grid. Of course the underlying issue with Y2K is that since it has never occurred before it was a great unknown, and even with a low probability of problems, the smart thing to do is try to be ready for anything. That we did.

Our Y2K command center was watching all of the other rolls in other parts of the country, and I would guess that if there had been trouble in any of the other similar countries (western Europe, Australia), electrical trouble or otherwise, our people would probably have been called in just to make sure they were there if needed. In our system, just about everything, with the exception of generation and main substation switching, operates autonomously, based on pre-defined parameters, and much of it is not digital in nature. That which is digital cares only about number of operations and other electrical stuff, and most of it doesn't care what day it is. In terms of running the electrical network, our computers are used mostly for keeping an eye on things, controlling generation, and scheduling energy transfers. I have asked several times, in several different ways, and I cannot get anyone to even suggest that we went on "manual" for anything. I have no reason to doubt them.

Another thing that gave us confidence is that most of our generation had already been moved into 2000 last fall. And their system clocks were also staggered, so that if something did crop up it wouldn't affect all of the units at the same time, since none of them are running with the same date. None of them experienced any problems rolling over to 2000. Our peaking units were also on standby if needed, and our available generation greatly exceeded our expected load, something we normally would not do (it takes time to get a station up and running, and it is more economical and easier on the equipment to run fewer stations at higher loads than many stations with low loading).
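
If the staggering idea sounds abstract, it amounts to something like this - a C sketch with invented offsets, just to show the principle, not our actual configuration:

#include <stdio.h>
#include <time.h>

/* Give each unit's control clock a different offset so no two
   units cross a risky boundary (like 1/1/2000) at the same real
   moment. Offsets here are made up for illustration. */
int main(void) {
    struct tm base = {0};
    time_t real_now, virt;
    long offsets_days[4] = { 0, 60, 120, 180 }; /* hypothetical per-unit offsets */
    int i;

    /* pretend "now" is noon on Oct 1, 1999 */
    base.tm_year = 99; base.tm_mon = 9; base.tm_mday = 1;
    base.tm_hour = 12; base.tm_isdst = -1;
    real_now = mktime(&base);

    for (i = 0; i < 4; i++) {
        char buf[32];
        virt = real_now + offsets_days[i] * 86400L;
        strftime(buf, sizeof buf, "%Y-%m-%d", localtime(&virt));
        printf("unit %d thinks today is %s\n", i + 1, buf);
    }
    return 0; /* the last two units are already past 1/1/2000 */
}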

Why were the other countries able to stay up? My guess is that much of the electrical system is not so much controlled as it is protected - devices are for the most part in place to protect equipment from damage when something happens to the system (i.e. drunk hits pole, squirrel commits suicide).

As I said before, I'm more doomer than polly, but I haven't been able to dig up any dirt anywhere. If there was a fast one pulled, it is well hidden, even from the people doing the work. Just no evidence to support this theory that I can find, and I am truly a skeptic by nature. With the exception of our Y2K command center and the extra personnel on hand, we have operated normally throughout the roll.

With the lights still on, we might want to turn our attention to something that doesn't follow the rules of physics: the Dow Jones!

sparky (more doomer than polly, just ask my spouse)

-- sparky (lights@re_my.business), January 05, 2000.


Thank you Sparky and all others who replied.

Little or no remediation necessary for the grid explains a lot. That was one of my theories: namely, no remediation work performed, just a lot of inventory checking and assessment.

Clocks being staggered on some systems and set forward on others into 2000 before 1/1/00? Man, that sounds weird, like no computer network I've ever seen. But hey, things sound real different in the power world.

With these thoughts in mind, it appears my question is answered: not running in manual or any different configuration; just business as normal. Hence the reason for everything staying up.

Again, thanks to all who responded!

Warren

-- warren blim (mr_little@yahoo.com), January 05, 2000.


Warren,

Just as an interesting note, I found this as I was "cleaning house" in email today... Take a look at this URL (have to admit, I was a bit skeptical myself when I first read it back in August...)

http://www.egroups.com/group/roleigh_for_web/1133.html

(sorry for the lack of a hot link - haven't taken the time to figure out how to do that yet...:-)

sparky

-- sparky (lights@re_my.business), January 06, 2000.


Sparky:

That link does provide some good news about readiness and manual procedures. I get a little skeptical when on the one hand folks like Jim Cooke say 'manual? can't be done' and on the other you see staffing to handle either real or unreal manual operations. Why staff the rollover for emergency manual operations if you can't do it anyway? It shows some due diligence, but seems a little weak unless there's something really to it. Is it that no one knows, including the utilities and their spokespeople, what's really going on and why? Very smokey.

BTW, to set up the link, just type in the HTML like this (with your real URL in place of the placeholder):

<a href="http://your.url.here/">reader sees this text</a>

Thanks.

-- warren blim (mr_little@yahoo.com), January 07, 2000.


For the link html, 'view' the source I typed using your browser.

-- warren blim (mr_little@yahoo.com), January 07, 2000.
