Secondary Clocks Question : LUSENET : TimeBomb 2000 (Y2000) : One Thread

OK folks, I'm dizzy trying to make heads or tails from the embedded chip/secondary clock debate that has been brewing here and elsewhere for the past few weeks.

The only things I'm sure of at this point are that it seems to be a critical issue better resolved now than by waiting 'til 1/1/00, and that we have completely opposite opinions from folks with various backgrounds/experiences.

Now maybe I've missed something, which makes this a very stupid question, but I was wondering in light of the back and forth arguments over the existence and functionality of secondary clocks, who actually knows the definitive answer and why can't we just ask them? Who knows enough about this stuff to get us beyond theory?

Surely this issue is of sufficient import to warrant a little footwork.

Can someone who knows what they are talking about simply ask a chip manufacturer (or whomever) what the actual answer is? Do we need to enlist one of our resident reporter/journalists to take up the task?

Declan? Drew? Anyone?

Diane, do you live close enough to Silicon Valley to give it a shot??

I apologize for inserting my non-techie foot into this mess and I will respectfully bow out at this point.

-- David (, April 22, 1999.


David, why don't you head over to and get your info from the horse's mouth, if you haven't already.

Good luck.

-- don't know (, April 22, 1999.

No, don't do that, David. Bruce Beach is the source of the confusion.

-- Doomslayer (1@2.3), April 22, 1999.

Sorry, but Mr. Beach's credentials, or rather lack thereof, are not what I'm looking for. Besides, my point isn't trying to figure out which side to pick. It's a question of finding a more authoritative source. As far as I know, no one who has hands-on experience in chip design and manufacturing has yet weighed in on the subject. An alternative would be to locate a testing methodology published by a chip design company -- that would represent a sufficiently authoritative source. Surely some utility has consulted with chip manufacturers in the course of spending countless billions of dollars on Y2K remediation and testing.

-- David (, April 22, 1999.

Here is what Gary North has to say:

Subject: The #1 Issue in Beach's Essay Is the 25% Failure Rate

Comment: In my original posting on Bruce Beach's essay, I labeled it "25% Systems Failure Rate: The End of the Case for Y2K Optimism." I thought this indicated why I thought it was important.

In a letter to his list, Mr. Beach recently wrote:

"5. In my article I also described a theoretical possibility of a Y2K related type of bug that I now call the Beach Bug and which can occur anytime after Y2K.

"I estimate that it may be present in perhaps less than 1% of embedded processor situations.

"TOTALLY UNREALISTICALLY this 1% theoretical possibility has generated over 99% of the subsequent discussion and the 25% objectively verified reality has generated practically none."

The programmers have ignored this. All they want to talk about is Beach's two-clock thesis. This is another piece of evidence that programmers are lost in the y2k woods because all they care to acknowledge is the existence of lots of trees. They are tree-focused people. Their unwillingness to remove their blinders, beginning in the 1950's, will cost us dearly.

and here is what a supposed "embedded systems expert" (RMS) has to say:

"[Y2K] has gained fame and notoriety SOLEY [sic] because it coincides with the Spooky and Mystical Dawn of the New Millennium"

You be the judge.

-- a (a@a.a), April 22, 1999.

Hey aaaaaahhh!

I am flattered that you consider me an embedded systems expert although I have never made such a claim. You may want to remove your head from the dark place that it seems to be so deeply lodged in and re-read the source of the quote you attributed to me -- that was written by the esteemed Mr. Poole, not I. But then, why should facts get in the way of what you post?

You be the judge indeed, David. One does not need to be an expert in any field to see Bruce Beach's theory for what it is -- unsubstantiated, illogical thoughts based on bits and pieces of technical knowledge, some of which he understands and some of which he does not. When someone gives a theory that has no scientific basis and no supporting evidence, misrepresents his credentials for putting forth said theory, and his best defense when called on it is "you can't prove that the possibility doesn't exist," then even someone as dimwitted as aaaaahhhhh! can see through it -- if they want to. If you have a pessimistic viewpoint about Y2K and are not interested in weighing rational plausibilities against ridiculous possibilities, then you will probably accept his theory.

-- RMS (, April 22, 1999.


I believe that statement was made by our other "embedded systems expert", Stephen Poole, and not RMS. <:)=

-- Sysman (, April 22, 1999.

Please, let's not turn this into another argument over who's more qualified. Quite frankly, I don't have the expertise to judge between them, but I do know enough to know that a true "expert" doesn't have to deal in theories and speculation à la Bruce Beach. Someone who knows what s/he is talking about will state facts from first-hand experience in chip design and from first-hand, published test results conducted by or in conjunction with chip manufacturers. There are probably thousands of such engineers in Silicon Valley and elsewhere capable of addressing the facts. We just need to get hold of one and stop the unproductive arguing.

If Beach is so authoritative, why doesn't he go directly to chip manufacturers and get them to publicly comment on his theory?

-- David (, April 22, 1999.


I am not a hardware guy. I have been a programmer for 31 years. Mostly assembly language on the IBM System/360/370/390 mainframe. I also have many years of assembly experience on microprocessors, including the Z-80, 6502, and most of the x86 line. I have written programs that run from ROM.

I will be preparing a somewhat lengthy opinion on the Beach issue in the next day or two, definitely by this weekend. IMHO it is not all hype, as some here would lead you to believe. I'll try and remember to post a link in this thread when I am done. <:)=

-- Sysman (, April 22, 1999.

Sorry RMS...I guess I meant Stephen Poole, CET. The essence of the post is the same.

-- a (a@a.a), April 22, 1999.


Your questions are precisely on target. Stay with it. You're asking on behalf of a lot of us, I suspect.


-- Phil Zachary (, April 22, 1999.

OK, in an earlier post, I asked if any Y2K testing methodologies or utilities had been published by a chip manufacturer that might address the issues at hand, namely, how does one test an embedded chip for compliance. I located an example of what I'm talking about from the software angle. Here's a link to a white paper put out by Unix to assist their clients in Y2K testing.

And here's another link that offers Unix users testing utilities and download tools for use in their remediation efforts.

Surely chip manufacturers offer similar fare for their users. No? Has anyone run across anything?

To me, the only ones who can address the chip issue are the chip makers. Are any of the combatants willing to go fishing for supportive documentation from the true "horse's mouth"?

-- David (, April 22, 1999.

I guess the links would work better if I named them huh?

The first one was:

The second one was:


-- David (, April 22, 1999.


In my opinion, Beach is talking about a software problem involving long-term "counters", and what happens when these counters "overflow".

Blank chips are sent to a "box" manufacturer, where a custom written program for that box is "burned" into the chip. More to come, stay tuned. <:)=

-- Sysman (, April 22, 1999.


You asked if any Y2K testing methodologies or utilities had been published by a chip manufacturer that might address the issues at hand, namely, how does one test an embedded chip for compliance.

This is really the whole crux of the matter, and the simple answer is "No, because it is not necessary." Take a look at Intel's or Motorola's Year 2000 pages and you will find all of their processors, PROMs, microcontrollers, and other embedded "products" listed along with their Y2K compliance levels. I did not find any that were listed as non-compliant, but I did not spend much time looking through all of the tables, as both have thousands of products in their databases.

An example is probably the best way to illustrate this. Let's say you have an automatic control valve from Vendor A. You find out that inside the valve is a processor from Motorola and a PROM from Intel. So, the first thing you do is call Intel and Motorola and ask for the test protocols for their devices, right? WRONG! The first thing you do is call Vendor A (or check their website) to determine whether that model valve has been tested and what its Y2K compliance level is, and then follow their recommended procedures for upgrading or replacing if necessary. The firmware running in the control valve is identical to what it was when it left the factory, so if it was compliant then, it is compliant now. You don't NEED to test the chips or even the valve itself.

Now, let's say the valve is part of a large distributed control system. Each of the controllers and I/O modules also has many chips, so you'd better identify those chips and call the vendors again, right? Wrong again. You call your control system vendor and determine the Y2K compliance level of the products in your system. Many will have had various revisions over the years, but all of the vendors include several different versions of each product in their databases. So, now you are done, right? Not in this case. The DCS is a configurable system, and the vendor's Y2K compliance information covers only the basic hardware, operating system, and any pre-packaged application software that comes with it. So, you must test your system's applications to verify that everything is compliant. But there is no need to check the chips themselves -- your vendor has done that for you.

Bottom line: this embedded systems thing has become a red herring and is diverting people's attention from areas they should be looking at. Unless a system can be modified after it leaves the factory, you should rely on the vendor's Y2K compliance statements for those products. If they have not tested it, then you will need to find the information you are looking for, but always start at the highest level. For example, the cable converter with your television may be of concern. Start with your cable supplier. If that does not satisfy you, go to the converter manufacturer. If that does not satisfy you, go to the internal component manufacturers.

The biggest issue with all of this secondary clock BS is that it is all hypothetical and there is no evidence whatsoever of its existence. Take a look at the Intel database and you will find some truly ancient products (in electronics years), such as the 4004 and 8008 series processors, which are Year 2000 compliant. If you think you have to test every chip in every device in your plant, you will never get done. If you do a little bit of homework and work with your suppliers, you will find there are very few real Y2K issues to worry about, and they are of such a number that they can be replaced/remediated well before 1/1/00.

-- RMS (, April 22, 1999.

From IBM

Major information technology consultants agree. Year 2000 testing will take 40 to 60 percent of the total Year 2000 transition effort. Yet customers are leaving far less time and resource to adequately test the readiness of their systems. And many small businesses are unaware of the need to test at all.

Customers must test not only remediated applications and new packaged applications, but also the interaction between applications and the supply chain. If they do not, they run the risk of IT system failure when the century turns.

Year 2000 testing is NOT accomplished merely by trying several dates after Jan. 1, 2000 in major applications. Testing is NOT accomplished by obtaining the assurance of hardware, system software and application software providers that their offers are "Year 2000 ready."

-- Don't (trust@the.vendor), April 22, 1999.

Out of Context Alert!!

Look at the paragraph following the ones you posted from IBM:

While they are critical steps, these items are components of a comprehensive Year 2000 test plan which should also include infrastructure tests, testing non-IT assets and supply chain testing outside the enterprise.

Obviously, they are talking about Year 2000 compliance of a complete facility, not an individual system or component which is what this thread is dealing with. If you don't trust what your vendors are telling you, you did a pretty poor job of selecting vendors in the first place!

-- RMS (, April 22, 1999.


If everything you say is true, then how do we get 2 models of the same PC, coming off the same assembly line, 1 serial number apart, and one is compliant and the other isn't? It would seem that the maker should know exactly which chips were being put in there, but that is not always the case, is it? You are ignoring the reality that vendors bought and used chips from the spot market, the gray market, and the black market. Sad but true. Pure capitalism in action.


The reason this is so difficult to pin down is that there is no *one* source for the answer to this problem. The chip you are interested in finding the source data for was assembled by a committee, and you know how hard it is to find the exact answer from a committee report.


Hope you will be clearing this up once and for all.

-- Gordon (, April 22, 1999.


This sounds like the same type of urban legend that has been bandied about before but never verified. Please provide a source for your statement:

... how do we get 2 models of the same PC, coming off the same assembly line, 1 serial number apart, and one is compliant and the other isn't?

Company? Model? Year? etc.???

-- RMS (, April 22, 1999.


I don't want to turn this into another debate thread as David has requested, but I gotta point this one out:

"If you don't trust what your vendors are telling you, you did a pretty poor job of selecting vendors in the first place!"

So you're telling us that it isn't only Beach that doesn't know what he is talking about, but now IBM also? Give me a break. RMS credibility: -1.

Also, the IBM post is no more out of context than you are my friend:

"Now, lets say the valve is part of a large distributed control system." bla bla bla.

I'll be back. <:)=

-- Sysman (, April 22, 1999.


You are a pettyfogger of the first order. Are you really saying that you have seen no analytical testing of products that show the lack of standardization that I mention? Because if that's the case you just are far, far behind in your research on this matter. Get clicking!

One of the biggest mistakes being made in this matter, as I see it, is that there seems to be a lot of strangled attempts to look at this whole field as a science, but it isn't. It is much closer to an art form, where creative license rules. Just look at the giants, like Microsoft, who put out a product then patch, patch, patch it for years. And always there is someone who finds another crack in the system. This is not science. This is shade tree mechanics.

And as far as not understanding the underlying creative work that was done, just look at the Great Pyramid. It sits there in plain sight, a wonder to behold. Strong and useful. Yet we don't have a clue as to how it was constructed. The documentation is long gone, and reverse engineering is just not possible. Same as the old legacy codes in the DOD and other complex departments. Same as some of these chip codes.

-- Gordon (, April 22, 1999.


I might be able to shed some light on your manufacturing question.

1) You need to be careful about the word 'model' in this context. The system designation (sometimes called the model) can refer to quite a few different units. A manufacturer might keep this same designation for several years (like the A1731 model, for example). Over time, this designation is applied to each newer generation of CPU, chipset, motherboard, peripherals (video, audio, etc.) and so on. These can hardly be described as 'identical' PCs.

2) Of necessity, every manufacturer uses as few sole-source parts as possible. Sole-source parts have many disadvantages -- the supplier might get backlogged, or jack up the price, or have a run of bad parts. During the time while the nominally identical systems are being manufactured, numerous parts might indeed come from different manufacturers for any of those reasons. These are mostly passive components, and rarely apply to the PC board, the chipset, or the CPU. In other words, they'll rarely affect compliance.

3) The BIOS does indeed get slipstreamed -- these are flashed (today) right on the assembly line, and bug fixes (and new features) get put in and new images get flashed along the way. So nominal compliance can well be affected here between otherwise 'identical' units.

4) Software downloads (the software that comes on your hard disk when you buy it) are really wildly variable. Especially with the you-name-it, you-got-it feature selections you can get. There might be many dozens of different hard disk 'images' available. And of course, this means that you might get Linux (compliant) or Windows (has issues), and so on through many applications.

5) Finally, 'compliant' in practice means a unit that produces a certain result based on a certain test. There are by now hundreds of tests out there for PC compliance, and these can produce very different results. Unit A can be compliant according to test X and noncompliant according to test Y, while unit B is just the reverse! The point Mr. Poole raised earlier in another thread applies here -- 'compliant' has various definitions, almost always defined in terms of the results of some test or procedure. Change the test or procedure, and the unit (or system) under test reverses its compliance status. Very confusing.

I hope this helps.

-- Flint (, April 22, 1999.


Thanks, that was an excellent explanation of how these things can get mixed up a little bit. Your comment on sole source supplier is good. This is the reason, I have read, that there are sometimes different chips on the motherboard. That is to say, chips that do the same job, but actually come from different sources, some of which can be black market. Now I'm not suggesting that Intel would do this, but others...?

BTW, I have seen references on Roleigh Martin's site a couple of times now to your statement about *stakes*. It seems that a lot of people have picked up on that and like it a lot. But they don't know where it came from. Why not drop Roleigh a line? I'm sure he'd appreciate it.

-- Gordon (, April 22, 1999.

I believe the distinction between "odds" and "stakes" and their implications for preparing in the face of uncertainty, if that's what you are referring to, was argued on this forum by Hardliner.

-- BigDog (, April 22, 1999.

Big Dog is correct. I summarized part of Hardliner's excellent argument into a short sentence, and that sentence got picked up. But neither the idea nor the argument was mine.

-- Flint (, April 22, 1999.

Flint and Big Dog,

Yes, that is the commentary I was referring to. Since there has been a pick up on this by many others now, and since no one knows where it came from, perhaps you could drop Roleigh a note and let him know the details. They knew it came from this forum, but couldn't pin it down any better than that.

-- Gordon (, April 22, 1999.


So you're telling us that it isn't only Beach that doesn't know what he is talking about, but now IBM also? Give me a break.

No, that is not what I said. The statement by IBM is a generic blurb talking about how a company should go about becoming Y2K compliant. And they correctly state that just getting compliance information from component vendors is not sufficient to prove that your entire facility is compliant, which I also pointed out above. Nowhere did I ever say that trusting what your vendors say is all you have to do. That is necessary but not sufficient. Nowhere does IBM ever say "Do not trust your vendor," so my saying you need to trust your vendor in no way can be translated to mean that IBM does not know what they are talking about. The only reason I stated that it was out of context is because the excerpt that was posted had a very different meaning by itself than when read in context with the rest of the statement as intended by IBM. The meaning of my statement about the DCS is not ambiguous and does not change the meaning of the previous discussion, so it is not out of context. Superfluous, perhaps, but not out of context.

Sysman reading comprehension: -10.

I'll be back. <:)=

OK, I'll trust you on that. Just so you know, I am going to an out-of-town wedding tomorrow and will not be back online until at least Sunday night, so if I fail to respond to your new thread immediately, that is the reason.


You are a pettyfogger of the first order. Are you really saying that you have seen no analytical testing of products that show the lack of standardization that I mention?

Wow, I've never been called a pettyfogger before! I assume that is some type of insult, so please explain in case I want to use that in the future! And I did not say anything about lack of standardization. You made a statement, I said it sounded like an urban legend, and I asked for some source that supported your statement. Pretty simple request, no hidden meaning. Your response to me was just more rhetoric and reinforces my opinion that the original statement had no factual basis.

-- RMS (, April 23, 1999.


So now you don't even know what a pettyfogger is? I am not going to do your research for you! As I said above, you need to get a lot more information into your portfolio before you go off the deep end. And then you need to learn to prioritize. And then you need to study the KISS principle.

-- Gordon (, April 23, 1999.
