You are NOT JUST ANOTHER ENGINEER - 60,000,000,000 embedded chips?

greenspun.com : LUSENET : TimeBomb 2000 (Y2000) : One Thread

Dear Mr. Engineer,

I've been following you around for the last week, and I think if anyone can give me an answer - it's you.

Last week I went to a meeting, and the moderator said:

"Michael: Laura, there are sixty billion embedded chips in the world. Of that, the number that we expect, like 3%

L. are critical. I know.

M. That's number 1. Number 2: Of that, there's only 1/10th of 1% that will have any problems. You're talking about an infinitesimal amount."

As I think you know, I require verifiable information on which to base my judgments. Unfortunately, I am having nightmares because I have to suspend judgment, since accurate data just is not available. (Or, I haven't found it.) I'd like to get a couple of restful hours' sleep tonight, so:

Do you know how I can determine the veracity of the "60,000,000,000" chips figure?

The reason I ask is: this may be a number that someone came up with for the sake of discussion. Maybe an urban myth? It seems to me that the number must be substantially lower. To conceptualize 60 billion, I compared it to the number of people in the world. My buddy Banks told me that there are 6,000,000,000 people in the world, so that would mean there are 10 chips per person. Since we are more industrialized/computerized than most countries, I'm assuming most of those chips are here. (THAT gives me nightmares.)

However, 60 billion seems much too large.

-- Laura (Ladylogic46@aol.com), November 21, 1999

Answers

>M. That's number 1. Number 2: Of that, there's only 1/10th of 1% that will have any problems. You're talking about an infinitesimal amount."

Bull; that conflicts with all information given by pertinent authorities before the Fed spin and its disinformation operatives went into effect. It certainly conflicts with what the Marine Corps itself had posted on a site last year.

Ooh my, how it all is just so itty bitty and teeny as we approach the fragile window. Witches twitch their noses and everything is all better now. 1/10th of 1%? Ho, ho, yeah, right, sure; never mind what all the pertinent authorities had discovered pre-Fed spin.

I've seen outright lying but by gosh that may be the trophy winner.

-- Paula (chowbabe@pacbell.net), November 21, 1999.


Mistress Laura

John Eva of Foxboro Automated Systems stated that, based on his company's on-site checking of their equipment, they estimate a 15% failure rate (the company estimates it has 15,000 sites worldwide). (Want to bet on Honeywell, Siemens, etc.?) As to the 60 billion number, that is about right.

~~~~~~~~~~~~~~~~~~~~~~~~Shakey~~~~~~~~~~~~~~~~~

-- Shakey (in_a_bunker@forty.feet), November 21, 1999.


Did you realize that even 1/10th of 1% of 60 billion is still 300,000 chips? (Did I do that right? Majored in English, not Math.) That is still a considerable amount, even just considering how they would affect us in a vacuum. Throw in the domino effect and you've got FUBAR.

I feel the need to go buy some more stuff.

-- preparing (preparing@home.com), November 21, 1999.


I didn't major in maths either, and don't take this as a criticism, but I'm struggling to see how a large number beginning with a 6, when divided by 1/10 of 1/100, can have a result that starts with a 3.

Surely there'd have to be a 2 in there somewhere.

This might be a decent place to start.

-- big numbers are just small numbers (with@scary.words.attached.com), November 21, 1999.


Paula,

I don't think the guy lied, I think he's lazy and repeating what he's heard.

~~~~~Shakey~~~~~

Mistress Laura - I like that!

Preparing,

That is 1.8 million chips! I graduated summa cum laude with a communication degree. (Someone else did the math for me, lol. However, my emphasis was in empirical research.)

-- Laura (Ladylogic46@aol.com), November 21, 1999.



3% of 60 billion is 1.8 billion. 1 tenth of 1% is 0.001. 1.8 billion times .001 = 1,800,000.
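Pinkrock's chain of percentages can be checked with a few lines (a sketch; the 60 billion figure is the thread's disputed estimate):

```python
# Checking the arithmetic: 3% of 60 billion, then 1/10th of 1% of that.
total_chips = 60_000_000_000        # the disputed 60 billion estimate

critical = total_chips * 3 // 100   # "like 3% are critical"
problematic = critical // 1000      # 1/10th of 1% of the critical ones

print(f"{critical:,}")     # 1,800,000,000
print(f"{problematic:,}")  # 1,800,000
```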

Godspeed = 186,000 miles per second

-- Pinkrock (aphotonboy@aol.com), November 21, 1999.


Sorry, I think it is 60 million? I don't know how I got 300,000! I got down to 1% then divided by half or something! (My calculator won't do numbers this big, btw)

Lessee:

60,000,000,000 <----60 billion

1% of that is :

600,000,000 <----600 million

1/10th of 600 million:

60,000,000 <------60 million

Sheesh, still quite a few little chips there!

-- preparing (preparing@home.com), November 21, 1999.


Preparing,

Darlin' they are saying THREE percent.

60,000,000,000 <----60 billion

1% of that is :

600,000,000 <----600 million

-- Laura (Ladylogic@aol.com), November 21, 1999.


600 million,that's even worse.

-- preparing (preparing@home.com), November 21, 1999.

Thank you Pinkrock,

But, I still think 1.8 million is a moot point. We are basing this number on 60,000,000,000, and I don't know if that is a credible number.

WHERE CAN I FIND A CREDIBLE SOURCE?

-- Laura (Ladylogic@aol.com), November 21, 1999.



No! Preparing, I took that off of your post to show you the mistake in your math.

The number is 1.8 million. I promise.

-- Laura (Ladylogic@aol.com), November 21, 1999.


A basic article on embedded technology:

http://www.jsonline.com/bym/tech/0214chips.asp

-- Linkmeister (link@librarian.edu), November 21, 1999.


Now, just for fun, let's look at Shakey's numbers: 15% of 60 billion is:

60,000,000,000 x 0.15 = 9,000,000,000.

That's 9 billion. Now for some perspective: let's say you can count (to yourself, please) really fast, say to twelve in one second. At that rate you could count to roughly one million in one day (24 hours). Oooh, that's a lot. Now keep counting at that rate, 24/7, and it will take you ALMOST TWENTY-FOUR YEARS to reach 9 billion.

Now, for the pop quiz. How long would it take to fix all those 9 billion chips, at the blistering rate of twelve per second, every second of every day?

The test will be graded on the curve. Shhh! No talking, please.

Go
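Pinkrock's counting exercise can be redone in a few lines (a sketch, using the thread's assumed rate of twelve counts per second):

```python
# How long to count to 9 billion at twelve counts per second?
RATE = 12                        # counts per second (assumed above)
SECONDS_PER_DAY = 24 * 60 * 60

per_day = RATE * SECONDS_PER_DAY     # 1,036,800 -- roughly a million a day
total_counts = 9_000_000_000         # 15% of 60 billion

years = total_counts / per_day / 365.25
print(round(years, 1))  # 23.8
```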

-- Pinkrock (aphotonboy@aol.com), November 21, 1999.


I have always thought that the .001% figure was the number of embeds that would have a meaningful failure. I have read that some industries show a 3%-5% failure rate. Only .001% of 60 billion would be in places that would create a meaningful problem: .001% of 60,000,000,000 = 600,000 meaningful problems. No toasters. That's only embeds; now take into consideration all the software and hardware problems. I could be wrong.

-- Gambler (scotanna@arosnet.com), November 21, 1999.

Or is that 60,000,000 meaningful problems?

-- Gambler (scotanna@arosnet.com), November 21, 1999.


Laura --

Sorry, was out doing some chores and just got back.

As to the 'real' numbers, I couldn't tell you. I've heard numbers ranging from a low of 30 billion to a high of 165 billion. And it really depends on how you count them. Is a microwave an 'embedded system'? How about a coffee maker? The chips in your car?

If those are all counted then I would believe the higher numbers. (As an example, I believe my car has about 30 chips in it. How many millions of cars in the U.S. alone?)

Sorry I cannot provide a soporific for you. I don't sleep all that well some nights either. There are a whole lot of variables involved in this.

First, we don't know how many of these things there are. It can be argued that 'Well, nobody actually has to know this. Each individual company or organization knows what *they* have, and why should they care about what somebody else has?' This argument fails due to the fact that

a). Not every organization *does* know what they have. The thing that people forget is that a lot of this stuff was designed to be 'fire and forget'. You turn it on and forget about it until it either fails, is replaced, or requires maintenance. And there are some that were procured, installed, maintained, etc., by people who are no longer with the organization. There may or may not have been records, but who thinks to look at them?

b). They may or may not know what *other* systems the given one interacts with. That is, the organization may know what systems *they* have, but not what systems are *also* required for their systems to work.

Second, with respect to 'embedded' chips, there is a marked tendency to forget about the *software* (or firmware, for the purists), and concentrate on the *hardware*. This was one of the main thrusts of the Dale Way essay (Critique of Ed Yourdon's Y2K End Game Essay). As soon as you concentrate on the *hardware* you are, indeed, probably looking at something like a 3% or lower failure incidence. But the *real* issue is the code that resides in these things. As an interesting example of this, there was a thread three or four days ago in which I was arguing with Paul Davis about this sort of thing, (the one where he was pontificating about 'hand waving', 'magic', and the lack of chips in cars), and he stated something like 'Of course, it is possible to put non-compliant code in a compliant chip...'. I restrained myself, with difficulty, from pointing out the obvious, which is that *THAT IS EXACTLY WHAT THE PROBLEM IS!* I mean, the hardware itself *almost* never cares about the date. This is the basis for all of the arguments about the 'hardware only cares about the "tick" of the real-time clock'. Which is true, but tells one ABSOLUTELY NOTHING about whether the software in there cares.

Third, a good many of these things were written in the late seventies and early eighties. Almost nobody gave a thought to Y2K back then, and even those who did usually got overruled because there was "no way this system will survive till then." After all, the rated life of the chips themselves was only about 5 years. Unfortunately, this isn't the way it worked out. An awful lot of those systems are still in place. There typically isn't any source code for them. There isn't any documentation on them. (Frequently, not even a requirements document. This was one reason why a lot of them have survived. Nobody knows or remembers what all they were supposed to do, or what sort of constraints were required, so nobody wants to replace them, not knowing what will be overlooked.) Systems like this are *extremely* hard to remediate. Shoot, they are hard to inventory or assess.

Fourth, an awful lot of the 'speculation' concerns individual chips. This is probably a mistake. It would probably be a lot more intelligent to concentrate on the number of *systems* which contain these chips. I have no feel for this at all. There has been virtually no discussion of this point that I have found, it has all concentrated on the chips.

Looking at systems would be easier, as I suspect that the numbers would be much more manageable. Instead of 30-165 billion chips, you would probably be looking at 1-2 billion systems. The *downside* of this is that you would probably be looking at *MUCH* more significant failure rates. (I believe the Gartner Group posited 30% to as much as 60% 'systemic' failure rates, but I didn't read the whole article, just the part that was 'cut and pasted'.) I do suspect that a lot of systems are vulnerable to failures of the *system* due to failures of small proportions (possibly as few as *one*) of the chips.

I don't know if this helps or not. It is about the best I can do.

-- just another (another@engineer.com), November 21, 1999.


The number grows - was 40 billion, then 50 billion, now 60 billion.

FYI, those who talk about billions of chips "failing" are clueless as to the technical aspects of y2k. The chips don't fail. EQUIPMENT ('systems') "fails" to show the date properly (minor y2k bugs), or, worst case, has a functional failure (a true failure).

* Microprocessor - doesn't fail
* RTC - doesn't fail
* PROMs - don't fail

Put them all together, program the PROM with firmware to handle the dates improperly in the year 2000, and then you have EQUIPMENT (or an "embedded system") that fails to perform properly in the year 2000. But none of the chips fail.
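FactFinder's point can be illustrated with a toy example (hypothetical firmware, not from any real device): the clock hardware keeps perfect time, but a hard-coded century prefix in the display code misbehaves in 2000.

```python
import datetime

def buggy_display_year(now):
    # Hypothetical firmware bug: century hard-coded as "19",
    # with only the last two digits taken from the (correct) clock.
    return "19" + f"{now.year % 100:02d}"

print(buggy_display_year(datetime.datetime(1999, 12, 31)))  # 1999
print(buggy_display_year(datetime.datetime(2000, 1, 1)))    # 1900
```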

Chips don't fail on y2k. Chips don't fail on y2k. Keep repeating class....

And please, don't bring me an RTC chip - it keeps on ticking through 00...;)

Regards,

-- FactFinder (FactFinder@bzn.com), November 21, 1999.


Thank you Linkmeister,

" But there are literally tens of billions of these dedicated processors out there in everything from microwave ovens to airliner cockpit controls (a Boeing 777 has 1,000)."

According to this source, there are "tens of billions". So the figure could be 20,000,000,000 or 90,000,000,000 according to them.

Unfortunately, I am still not comfortable with the veracity of that range of numbers. The source you gave me is not juried (reviewed by multiple sources within a specific industry or field), so I question this "evidence".

I suppose at some point I am going to have to quit questioning, and resign myself to tears and sleepless nights. (sniffle) I give up. I'm going to go have a drink.

-- Laura (Ladylogic46@aol.com), November 21, 1999.


FactFinder --

Uh, I believe that you may find you are mistaken. (Take a look at the thread a while back that linked to an article by one of the senior engineers at Dallas Semiconductor. This discusses the 'internal clock' issues in depth.)

In general, the statements you make are true. But, particularly older chips, say, the TI9900 or the RCA1802, or the Z80, have internal clocks and may or may not be compliant, depending on the particular version of the microcode.

Although, I will agree that generally the problem is not the hardware, but the software. This is the problem with all of the 'type testing' that has been going on. It is concerned with the hardware compliance, rather than concentrating on the *important* stuff, like the internal software.

-- just another (another@engineer.com), November 21, 1999.


Just another, I am guilty of oversimplification in this thread - in others I have addressed the internal clock of the microprocessor - i.e., it provides clock "ticks" but not dates. You can use the clock to build a date with software, but this is rare to the best of my knowledge, and not at all typical. The typical embedded system does use an RTC. Now there may be newer chips I am not familiar with, but I speak of systems commonly installed.

I haven't seen the thread you are speaking of, but is this concerning microprocessor chips?

Regards,

-- FactFinder (FactFinder@bzn.com), November 21, 1999.


Gambler --

I believe that the '.001%' number probably winds up referring to the number of *chips* that have *physical* or *internal microcode* problems with the date. That is the *ONLY* way such a low number makes sense. (But that number *does* make sense if it only refers to the chips that fail without respect to the internal software.)

Most of the problems would be expected to come from the *logic*, the firmware or software that is what makes the chip *do* whatever it is that it is supposed to do. And here is the rub I see with this. Much of the 'testing' I have read about is 'type testing' which concentrates on the piece of hardware. A lot of the rest *appears* to have been done using a methodology called 'functional analysis' which basically determines which ones must be tested by first attempting to divine by analysis which ones have functions which 'need' dates. The fallacy with this comes from what is called "feaping creaturism". This is the tendency to load up a system with all sorts of 'features' which have nothing whatsoever to do with the *functionality* of the device. These would not be caught by 'functional analysis'.

Given all of the above, it is a little hard to come up with any sort of 'reasonable' number for these things. As someone else has also alluded to, it is really the number of 'systems' that counts, and I have seen no numbers on this.

-- just another (another@engineer.com), November 21, 1999.


NOT just another engineer,

Since I should be considering "systems" rather than chips, the numbers I'm panicking over are greatly reduced. I think I will skip that drink, take odds that I am going to be fine next year, but - just in case - get back to my water purification experiments.

Muchas gracias, mi amigo.

-- Laura (Ladylogic@aol.com), November 21, 1999.


just another engineer, just out of curiosity, when do you expect to see failures of these "embedded systems"? Will they start to fail before January 1st? Thanks for any answer you can provide.

-- Boy Scout (boyscout@prepared.com), November 21, 1999.

FactFinder --

FOUND IT! (Lord there has been an awful lot of activity here lately, this was only a couple of days ago, and it was clear down 3/4 of the way through the 'New Answers'.) http://www.greenspun.com/bboard/q-and-a-fetch-msg.tcl?msg_id=001nNH

Enjoy.

Laura --

Well, if that is all it took... (Actually, it concerns me more, as I believe that the systems are at *greater* risk, and I don't see any attention being paid at the same level as, say, the 'chip' issue.)

As for 'water purification', check out the prep forum. There are articles in there on building solar stills, comparisons of various types of filters (for my money, the British Berkefeld is GREAT; it improves our local water immeasurably, and that is BEFORE anything has gone wrong!), and various purification techniques.

-- just another (another@engineer.com), November 21, 1999.


NOT just another Engineer,

That's just great. I thought I could go dancing tonight.

I have thought about buying the Berkefeld, but I've been concentrating on long-term purification processes.

http://www.cyberfind.com/Y2K/water.html

I have found methods to recycle activated carbon, and now I am studying methods to reduce VOC's and Total Tri-halomethanes.

Furthermore, I have already built solar ovens out of everything I could think of...boxes, fish tanks, bread boxes, etc.

I am as prepared as I can get. Unfortunately, the magnitude of the situation is just starting to sink in, so I am searching for answers to gauge my panic meter. Today, my Likert-type scale projection is a 6. I sure hope it doesn't go any higher.

-- Laura (Ladylogic46@aol.com), November 21, 1999.


just another, can you give me some idea of when the embedded systems might begin to fail?

-- Boy Scout (boyscout@beprepared.com), November 21, 1999.

They are only guessing anyway.

Chips will fail (nobody has disputed this); the questions are when, where, and what.

The big one is what the date does in the system. Some of the failures will be no big deal, if even noticed; not every chip uses the date function. The ones that do will be missed by us all.

-- squid (Itsdark@down.here), November 21, 1999.


The problem here is one of definition. A typical PC might have 200 total chips on the motherboard, counting such things as capacitors, resistors, ferrite beads, etc. If you stuff the slots full of peripheral cards (modem, video etc.) you might double that total.

Now, this isn't 200 *different* chips. The number of different chips might be 75 or so (double that for the peripherals, at most).

NONE of these chips has a built-in date problem in a PC, per se. The BIOS is CODE, burned (or usually flashed) into one of those chips. The code contains the date error (if there is one) -- nothing is wrong with the flash part. In fact, a modern BIOS probably has 400- 500 known errors at any given time -- new errors are written about as fast as old errors are fixed. That's how software works.

So in practice, we couldn't care less about the total number of chips. We care urgently about the total number of chips containing code, since this is where date errors lie. Essentially, this means various flavors of ROM chips, and microcontrollers.

Yes, it's true that most RTC chips only support a 2-digit year, which must perforce be properly windowed by the software. But replacing a 2-digit-year RTC with a 4-digit-year RTC will NOT correct any date problem by itself. If windowing is considered unsuitable (I know of no such instances), then the code must ALSO be changed to USE the now-available 4-digit year of the replacement RTC.
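The 'windowing' described above can be sketched as follows (the pivot value 70 is an assumption for illustration; real firmware used application-specific pivots):

```python
PIVOT = 70  # assumed pivot: 70-99 -> 19xx, 00-69 -> 20xx

def window_year(yy):
    # Interpret a 2-digit year from an RTC as a 4-digit year.
    return 1900 + yy if yy >= PIVOT else 2000 + yy

print(window_year(99))  # 1999
print(window_year(0))   # 2000
```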

How many chips out there contain code? Nobody can guess within a factor of three (or more). How many chips contain code that uses the year? The percentage is extremely small, but nonrandom. How many chips contain code that misuses the year? As a wild guess (I haven't seen anyone who claims to even have a feel for this), I'd estimate half the code that uses the year doesn't handle rollover properly.

Finally, how many of these date errors cause functional failures? We know it's a minority, but can't get much closer than that. We can only test the systems (however defined) that are critical, and deal on a case-by-case basis with the problems we find. Like everyone else, I'd dearly love better data -- it can't get much worse! The worst such problems will be newsworthy at the least, and only our imaginations limit the most.

-- Flint (flintc@mindspring.com), November 21, 1999.


That was great Flint!

Clear...logical...easy to understand.

Thanks for the education. I took CIS 101 in college, so I came to this board without any knowledge of windowing, patches, chips, time and date functions, etc. Over the last week, I've had a crash course in petroleum industry 101 and computer programming 101. I'm dizzy from the pace, but I appreciate the opportunity to learn.

-- Laura (Ladylogic46@aol.com), November 21, 1999.


Laura:

I would commend to your attention:

www.pwgazzette.com

the spelling is variable in the z's and t's. He has the insert cartridge for the Berkey for LOTS LESS than Berkefeld (they are the 9" Doulton's in the Berkey EX LG model). He has a siphon which just might have better throughput than the Berkey using only one filter cartridge. Even if you buy a Berkey, get your cartridges from PW (I am about to order mine this week).

Chuck

-- Chuck, a night driver (rienzoo@en.com), November 21, 1999.


Flint --

An EXCELLENT synopsis of the problem. (General, but then he appears to be speaking to the 'non-tech' types for purposes of 'flavor', which is to be recommended.) The positions of the tech types have basically hardened into 'There isn't going to be a problem. End of story.', 'I don't know what is going to happen, and I can get no reliable, verifiable information.', and 'Oh My God, It's Going To Be TEOTWAWKI!' (again, typically without information attached to explain). It is the non-techs that need what little information is out there so that they can make up their minds.

Boy Scout --

There are a *LOT* of possible failure modes. It depends on the software. (In my humble opinion.) Oh, there will be some things that fail due to the actual microprocessor or microcontroller 'BIOS' going flaky, but I am more concerned by all the proprietary stuff out there. These are the ones where the 'operating system' is a control loop written by the system programmers for the specific application. These are typically process controllers: for example, an HVAC (Heating, Ventilation, Air Conditioning) system, or a helium liquefaction plant. That sort of thing. Usually, these kinds of applications require some pretty tight timing to make sure that they get back around to service input, output, and control queues in a timely fashion. The 'interrupt' structure that most of the commercial OS's support was frequently not sufficient to handle the level of timing required. (At least, this was true as recently as 5 years ago.)

In these cases, I would expect to see failure modes of the following types.

1). Prior to rollover -- Systems which have 'look ahead' date functionality (that is, they have date-based arithmetic which looks at dates in the future and does some sort of comparison function with either the current date or a recently past date.) I would think that these would be the *least* of the problems. I can only think of one application that I have worked on that had this situation.

2). At rollover -- Systems which look at 'current date' and make decisions based on it, or do arithmetic based on it. (These are the ones which make people queasy about the evening of the 31st.) And a lot depends on how the stuff was programmed. I can think of an application where the date gets an additional 'century' 2 digit field at that point. The problem is, the field it gets put into is a 'union' which never got its boundaries updated. In other words, it got compiled with a union size of 'x' bytes, and as soon as rollover comes along, that size is going to be 'x+2' bytes. I am not sure how this will fail, exactly, but I suspect that it will be fairly spectacular. (I am betting that it will pick up the value in the union, but that the call will bring the entire union, and two bytes will get left on the stack. These happen to be where the thing will pick up its next jump address. If I recall right, that ought to put it into the data space, which will result in the thing beginning to 'execute the data'. So the results are likely to be different everywhere that particular chip resides. But I can't remember if the thing is put into the union yycc or ccyy. If yycc then the jump address will be 20xx which will react as I stated. If ccyy, then it is going to have a 00xx address, which will jump into the 'limited' interrupt table. Not sure how that will work. Depends on the value of 'xx' where it falls in the table. ) One thing about this mode of failure, though, if the programmers who wrote the application had sense, they wrote their stuff such that if it gets to an 'if' statement, or a 'case' statement or some such compare operator, the result is likely to be 'undefined' or 'default' and such results *should* shut things down, hopefully, before any damage to anything occurs. (Of course, shutting some things down also does damage.)

3). A third type of failure mode is related to 'look-back' functions. This comes into play after the rollover, when things that look at current date with arithmetic relating it to dates in the past. Again, what is likely to happen involves shutdowns, assuming that the devices were programmed to 'fail safe', which means any unexpected failure, or unknown failure mode, should cause things to stop running before they do damage.

(The Dale Way essay, on a thread from about a week or two ago, goes into this in much more detail.)

So I would expect to see some systems containing embedded controllers fail before rollover, although not many; some to fail at rollover; and many more to fail on some condition after rollover. (There is also a failure mode due to things which were done in an effort to verify chip 'sanity' on power-up, some of which might involve 'date verification routines', which occasionally were something as simple as 'make sure the first two digits of the year are '19'.)
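The 'look-back' failure mode (type 3 above) can be sketched with a hypothetical maintenance check; the naive 2-digit-year subtraction goes wildly negative after rollover:

```python
def days_since_service(current_yy, service_yy):
    # Naive 2-digit-year arithmetic, as in much late-70s/80s firmware.
    # Hypothetical example, not from any real controller.
    return (current_yy - service_yy) * 365

print(days_since_service(99, 97))  # 730 -- sensible in 1999
print(days_since_service(0, 97))   # -35405 -- nonsense after rollover
```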



-- just another (another@engineer.com), November 21, 1999.


Thank you, NOT just an Engineer,

I'm new, I'm stupid, and I need information: and you have provided it.

-- Laura (Ladylogic46@aol.com), November 21, 1999.


Laura --

You're welcome. (And not stupid. Perhaps ignorant of the problem, but that is curable, and you seem to be taking the antidote. In large doses. :-))

Anyone else with questions, I'll check back tomorrow night.

-- just another (another@engineer.com), November 22, 1999.


Laura, you naughty girl. I know where you came up with that 60 billion figure. The guy in the movie said 60 billion tuna are now having their New Year's Eve party in the Pacific Ocean. Pretty clever. :-)

-- Hawk (flyin@high.again), November 22, 1999.

The original number was given by Dave Hall, who propagated the embedded-chips numbers in the first place. He now admits he guessed, and has lowered the numbers for chips to next to nothing, and only talks of "impacts" (not failures) of embedded systems. He had no background and did not know what he was talking about when he started the whole thing.

But unfortunately it spread like wildfire and has been plastered all over the web for years.

You might say he started that particular urban legend about billions of chips failing.

-- Cherri (sams@brigadoon.com), November 22, 1999.


Good morning Cherri,

I suspect that must be the case. Can you give me a little background on Dave Hall?

-- Laura (Ladylogic46@aol.com), November 22, 1999.


Dear Hawk,

I went to bed about half an hour into the dumb movie. I heard them say there were 30 billion chips in the world and thought, "this is crazy". Everyone's estimate differs by billions, so this subject is sounding more and more "fishy" to me (Pacific or otherwise).

(I don't really have a hummer darlin'. Just an odd sense of humor.)

-- Laura (Ladylogic46@aol.com), November 22, 1999.


Laura: As with all things Y2K, "nobody knows". (Least of all, Cherri.) The important thing is that they are out there, they govern very critical life-sustaining systems, and they are SUBJECT to failing or causing undesirable events. They have not all been located, checked, fixed/replaced.

The odds are low. The stakes are high. Same old, same old.

-- King of Spain (madrid@aol.cum), November 22, 1999.

Have to agree with Flint on this one (gasp!).... GIGO.

-- R. Wright (blaklodg@hotmail.com), November 22, 1999.

Check out this thread. It has an interesting number...

http://www.greenspun.com/bboard/q-and-a-fetch-msg.tcl?msg_id=001Knn

-- Brooklyn (MSIS@cyberdude.com), November 22, 1999.


My Lord - Thank you for reminding me.

R. - Flint did a great job, didn't he?

Brooklyn - Thank you very much! I went to the site, and followed a couple of links from there. I am going to take some time tonight to read them, and generally, I synthesize information when I first wake up. If this thread is still up tomorrow, I'll tell you what I found.

Thanks for taking the time.

-- Laura (Ladylight46@aol.com), November 22, 1999.

