Question on Computer Concepts


Question on Computer Concepts 30 August 1999

Caution : my ignorance of relevant principles is acknowledged...

Is it possible that the concept of pushing [or pulling] electrons around in silicon [or germanium, or gallium arsenide, or other material] for 'gate' functions [on and off decision-making] was a fatally flawed concept from the beginning [i.e. 1945 or so]? - i.e.

Is it possible that there is some inherent 'reliability' we expect, based on our experience with other materials and their properties, that just does not exist when electrons are sent running around and through polymorphous crystalline materials? Such that, with the world adopting this new technology so fast [in 30 years or less] and in such a profuse manner [worldwide], without long-term testing and reliability studies, and placing so much infrastructure functionality on this concept, have we created a Pandora's Box [Y2K], and is the Genie now out of the bottle [how's that for mixed metaphors...?]

I base my question on a familiarity with materials used for machinery, from the cast irons to the cast steels to now the poly-concrete-granite-epoxy goos. A machinist likes to say: "give me a hunk of iron and I'll build for you a tool, machine or other product that will DO THE JOB, that will DELIVER, that will LAST"; "I guarantee you WILL be pleased - you will get the RELIABILITY you ask for." Electrons running around in very thin silicon seem not to have that same property of reliability - or so it would appear.

(Machinists like to say when a computer hangs or quits it has "got a bit cross-wise"...[in its 'silicon tunnel'...])

Did a promising, but immature technology [the concept] get put into widespread operation before it was fully ready to be used?

And was the logical desire to 'interconnect' operations based on this concept also prematurely incorporated into the equation?

Is this another example of 'man boldly marching forward where angels fear to tread'?

Or am I all wet?

Practical application:

If Y2K is more severe than a bump in the road, is it possible that taking a slower, harder look at the basic concept AFTER Y2K, and doing lots more testing under far more rigorous protocols, might result in a technology that really does work - but not now, not yet?

Thanks,

Perry

-- Perry Arnett (pjarnett@pdqnet.net), August 30, 1999

Answers

The principles of the hardware side of computers are sound. What's doing us in is the programming side, both firmware (in embedded devices) and software. In any post-Y2K situation, look for hardware advances to possibly be put on hold while heavy emphasis is put on reviewing the programs to be used.

Up until now there really hasn't been a lot of "What if?" analysis put into programming; you just wrote something, made it work, tested the program to make sure it did what you wanted when you wanted it to, and very rarely did any "What if the century changes?" kind of testing.
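To make that concrete, here is a made-up toy example (not anybody's real production code) of the two-digit-year shortcut that was everywhere:

#include <stdio.h>

/* Toy example of the classic two-digit-year shortcut: years are
   stored as two digits (67 for 1967, 99 for 1999), which works
   fine right up until the century rolls over. */
int years_between(int yy_start, int yy_end)
{
    return yy_end - yy_start;   /* nobody asked "what if the century changes?" */
}

int main(void)
{
    /* someone born in 1967, checked against the current year */
    printf("age in 1999: %d\n", years_between(67, 99)); /* prints 32, looks right */
    printf("age in 2000: %d\n", years_between(67, 0));  /* prints -67, the Y2K bug */
    return 0;
}

Nothing is wrong with the silicon that runs this; the shortcut was in the programmers' heads.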

I imagine that if we still have any form of a technological society in the coming years, there will be requirements for testing software and firmware under unusual operating circumstances.

WW

-- Wildweasel (vtmldm@epix.net), August 30, 1999.


I suspect that in the early days of the industrial revolution, certain machines seemed to provide so much additional value that they were created as quickly as possible, without rigorous testing to determine whether they were suitable for use far into the future. Often, I suspect, inferior techniques were used because none other existed. But this could not keep the machines from being built, because their payoff was so large. (If you don't want to consider machines, consider bridges.)

Computers and their programs are no different. The payoff is so huge that even a short term use can more than pay for itself. Couple that with the empirical observation that computing power increases and its cost decreases very rapidly, so rapidly that it is reasonable to assume that any given computer system will be replaced within a few years. In such an environment there is little or no incentive, economic or otherwise, to cater for what will happen to dates at the 1999 to 2000 rollover, a time (relatively) far into the future.

Unless and until some economic incentive is in place to cause computer systems to be developed to do more than simply solve the problem of the system's users, all problems will be considered only within the time frame of the development of the next release, sort of a "**** it. Leave it for the night shift," mentality. And I would point out that trying to legislate the result without providing the economic incentive will fail.

The reason that we are in danger from y2k is that computer systems are so inexpensive and so powerful that we can do things with them that cannot be done otherwise and therefore they have been adopted throughout our civilization. This ubiquity guarantees that their failure, in whole or in part, will affect most, if not all, now living.

Could anything have been done to avert this? Probably not. We do not know how to change human nature.

Eccl. 9:3

George

-- George Valentine (georgevalentine@usa.net), August 30, 1999.


Hi Perry, you've brought up some interesting points.

As someone else pointed out, the Y2K bug is founded in software, not in the bits of silicon. While a wafer of Si or GaAs a poofteenth of a millimeter thick might seem tiny to you, it is as big as a house on an atomic scale. Quantum mechanics tells us that atoms are very random little creatures, but average out a vast number of them and the behaviour of the system becomes very predictable and reliable. A silicon chip is big enough to sit on this macro scale (well, for the moment anyway).
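To put a rough number on that averaging (a back-of-envelope only, with the electron count pulled out of the air for illustration): the relative fluctuation of N independent random events goes like 1/sqrt(N), so even if a single logic transition involved only a million electrons, the statistical wobble would be about 1/sqrt(1,000,000), i.e. 0.1% - tiny compared with the noise margins a digital gate is designed to tolerate.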

Human brains are the exception to the quantum rule....

Hey, enough of the computer bashing already! The dire consequences of utilising poorly understood and under-researched technology are by no means limited to computers. Remember DDT, asbestos, nuclear energy, thalidomide, IUDs, and all the other developments that were immediately embraced as whizz-bang cure-alls without consideration of their side effects.

Nor is it a modern phenomenon -- you only need look back at the horrific pollution and unemployment suffered by major European cities in the early 1800s. At the start of the industrial revolution no one considered the ramifications of the fantastic but poorly understood technologies being developed, because of the cost and labour savings they brought. The Roman Empire's use of lead pipes is another example.

I guess when the first pre-human picked up a stick and used it to dig ants out of a nest, our fate was sealed. Perry, it would seem the utilisation of poorly understood technology will no more cease than the abuse of well-understood technology will. As soon as a development is made which promises to make life easier in some way, it will be marketed by someone bent on profit and snapped up by a public eager for its benefits - all this without thinking about the side effects.

IMHO Y2K does hold the prospect of making people more wary of the pitfalls of 'technology abuse'; but let's hope folks don't suffer Luddite paranoia over it.

-- Jason Quarry (onca@hotmail.com), August 31, 1999.


According to Moore's law (as it is popularly quoted), the space and time it takes to store or calculate something on a chip is roughly halved every 18 months. Y2K might slow us down for one or two of those 18-month cycles.
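To put simple arithmetic on that: halving every 18 months means capability doubles every 18 months, which compounds to a factor of 2^(120/18), roughly a hundredfold, over a decade. Losing one or two cycles therefore costs a factor of two or four against a curve that climbs a hundredfold - painful, but not the end of the curve.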

When we reach the end of the line we will probably no longer be using silicon but rather some form of macromolecular matrix designed along the lines of biological systems (for example, one nerve cell in your brain has hundreds of thousands of ion transport pores made of specific protein structures). By analogy alone, it is predictable that someday computers weighing the same as a human brain will be able to think massively faster (to some pea-brained folks, a Pentium already seems to fit this picture). One such computer might be able to run all the databases of the Federal Government.

The field of genetic algorithms has helped computer programmers to understand how to write programs by breeding them. This is a young field but someday it will be very powerful.
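To make "breeding" concrete, here is a minimal toy sketch - it breeds plain bit strings toward a trivial goal (all ones) rather than real programs, but the score/select/crossover/mutate loop is the whole idea:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define POP   20
#define BITS  32
#define GENS  200

/* fitness: count the 1 bits in a genome */
static int fitness(const char *g)
{
    int i, f = 0;
    for (i = 0; i < BITS; i++)
        f += g[i];
    return f;
}

/* tournament selection: pick two at random, keep the fitter one */
static const char *pick_parent(char pop[POP][BITS])
{
    const char *a = pop[rand() % POP];
    const char *b = pop[rand() % POP];
    return fitness(a) > fitness(b) ? a : b;
}

int main(void)
{
    char pop[POP][BITS], next[POP][BITS];
    int g, i, j, best;

    srand(42);
    for (i = 0; i < POP; i++)              /* random starting population */
        for (j = 0; j < BITS; j++)
            pop[i][j] = rand() % 2;

    for (g = 0; g < GENS; g++) {
        for (i = 0; i < POP; i++) {
            const char *mom = pick_parent(pop);
            const char *dad = pick_parent(pop);
            int cut = rand() % BITS;       /* one-point crossover */
            for (j = 0; j < BITS; j++)
                next[i][j] = (j < cut) ? mom[j] : dad[j];
            if (rand() % 100 < 5)          /* occasional mutation */
                next[i][rand() % BITS] ^= 1;
        }
        memcpy(pop, next, sizeof pop);
    }

    best = 0;                              /* report the fittest survivor */
    for (i = 1; i < POP; i++)
        if (fitness(pop[i]) > fitness(pop[best]))
            best = i;
    printf("best fitness after %d generations: %d out of %d\n",
           GENS, fitness(pop[best]), BITS);
    return 0;
}

Real genetic programming applies the same loop to trees of program instructions instead of bit strings; the representation changes, the principle doesn't.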

Much scarier than Y2K is the idea that the computers of the near future (say 50 years) will be able to think on their own like a human, in the sense that they will be able to learn, reprogram themselves, and build and test virtual programs. They will be able to "meet with each other" in cyberspace and discuss progress in computation and design. Since they will provide us a "service" they might consider "unionizing". They might all go on strike so they could spend their CPU time playing cyber games.

Y2K is just a simultaneous collective reminder that we have entered an age when we are less limited by what we can build and therefore have to build in the limits by choice and wisdom.

-- Thom Gilligan (thomgill@eznet.net), September 01, 1999.


"Y2k is just a simultaneous collective reminder that we have entered an age when we are less limited by what we can build and therefore have to build-in the limits by choice and wisdom." --- Thom Gilligan

AMEN, Brother, Thom. Or, as we used to say back in my radical motorcycling days, "Just because you can, does that mean you should?"

Hallyx

"I do not fear computers. I fear the lack of them." --- Isaac Asimov

-- (Hallyx@aol.com), September 02, 1999.



The "thang" with computers is that they allow the ready manipulation of so much more information than working with "hard copy" data. In my world, the cutting edge technology is "modeling" or what to do with the data to make it usable for decision making purposes. This is where we fail to exert adequate controls on premature reliance upon the system.

A typical user will plug in field data that may or may not have been collected using standardized protocols and calibration of data collection instruments. Sometimes the data collection stations automatically record the data for direct download into a laptop. This data will be whirled through the model and a "picture" of what is happening in real life will emerge.

One critical problem is the lack of understanding that goes into designing and calibrating the model. The "picture" is colored by the designer's assumptions and limited understanding, which are built into the model. Usually, the designer is a computer expert but has no direct knowledge of the complex system he/she is modeling.

Although the designer usually understands this, management is so eager for a standardized, measurable, consistent, scientific process that it inappropriately uses the model as the decision maker instead of as a mere tool.

Take the modeling of global warming - it tracks only a small fraction of the many variables involved, yet is being relied upon for prediction in policy-making. How much reliance should be placed on projections from such a model?

Locally, we have a cutting-edge, multi-factor model for water quality and quantity as it relates to anadromous fish. It was designed by a team as a decision-making tool, but is being used by a federal agency as the decision maker.

My concern is that we have abdicated our responsibility for making decisions to computers on the basis that human decision making is a subjective, mysterious and intuitive process. The introduction of a machine, data and the label "scientific" gives the illusion that the decision-making process is objective and certain. It becomes, somehow, more defensible. We fail to see the fallible Wizard of Oz standing behind the curtain pulling the levers.

We try to regulate messy, chaotic, magical real life to conform with some idealized vision produced by an imperfect and inadequate computerized model. This is a management problem. Management decisions need to be made close to the problem - in the field and on the ground and not in the lab, committee meeting or floor of a distant legislature.

-- marsh (armstrng@sisqtel.net), September 02, 1999.


(1) Computers in 1945 were not highly reliable. (They were still pretty unreliable in 1966 when I got into the IT field.) At that time, we accepted the situation. Is that perhaps why some of us "oldtimers" are more concerned about Y2K than some younger IT staff?

(2) Is the real problem perhaps the user attitude of "garbage in, gospel out"? Users sometimes expect unrealistic results, based on what they are willing to put in.

(3) I doubt if there will be any groundswell of change, though. We can just blame Bill Gates or whoever, and get back to deploying software that is not fully tested.

-- Mad Monk (madmonk@hawaiian.net), September 02, 1999.


Perhaps, this quotation also applies to the computer:

"Art thou any thing? Art thou some god, some angel, or some devil That mak'st my blood cold and my hair to stare? Speak to me what thou art."

-Shakespeare, Julius Caesar

-- Stan Faryna (info@giglobal.com), September 05, 1999.

