Who are the veteran computer folks on this board?

greenspun.com : LUSENET : TimeBomb 2000 (Y2000) : One Thread

On March 6, in response to a thread I posted wondering why not all computer techs understand the potential seriousness of Y2K, Greybear posted the following:

".. aren't we about ass deep around here in people who are in the computer business with ..oh...say 20- 25 - 30 years experience? Hell, they're thicker'n fleas on a lazy dog. (No offense to Geeks. The above reference was not in any way meant to associate Highly Experienced Computer Professionals with paracytic insects. Only to comment on the numerical frequency of their appearence here as it bears on the statistical analysis of the standard mean deviation while weighing the overall input of non-trivial data) Or as we say in Texas, "Somebody ask that guy with the pocket protector" -- Greybear (greybear@home.com

Spending considerable time on this board, I'm learning to discern those with worthwhile input (i.e., "That's a thread I HAVE to read," etc.).

My question is this: Who among you fits Greybear's description above? I also monitor Cory Hamasaki and Rick Cowles. I'd really like to know who the computer folks are on this board. (The question I'd like to post to Cory is how many of those old IBM mainframes are currently in use, and where? Also--how many of them are being replaced, and where? If any of you know the answer to that, it would be helpful to know.)

Thanks!

-- FM (vidprof@aol.com), March 08, 1999

Answers

You need to define "computer folks". The biggest scam to date is the idea that an IT has any knowledge outside of how to manipulate programs to do certain things - basically about as well as any 15-year-old with six months of experience on a home computer. Check out the requirements needed to get a degree in IT. The only programming requirement is an intro to XXX programming. The majority of classes are based on how to manipulate the company you work for and on make-work projects: identifying what can be done, surveys, identifying the "needs," etc., before the actual work gets started. Since management types have been getting downsized, those requirements have been transferred to IT. If ITs had more than a passing glance at real computing - as in programming - they would have been aware of the danger posed by two-digit years. The concept certainly was not taught to them.

Now programmers, especially experienced ones, understood the workings of software and are the ones who prevented a lot of possible Y2K problems in the first place by doing it correctly. There are people who actually take pride in their work and take responsibility for what they do. So a programmer would have the knowledge required to define problems and fix them. The good programmers never allowed the problem to exist in the first place. And YES, there are many areas where no Y2K problems ever existed in the first place, because these people used their brains and knew what they were doing. There are also programmers who cannot grasp programming and work "scared" that they will mess up. They usually do, and tend to be downsized or left with simple tasks which take little ability.

Then you get the people who actually run the mainframes: the hardware techs. There are good and bad here; fortunately it is not an area where a person can bluff their way through for long. They know the hardware, and they know the software - they have to know the software to understand how it affects the hardware. They can read a schematic and determine if there is a Y2K problem. So much for having to test "embedded chips and systems." Only people who do not have this ability would claim that hardware had to be tested. That shows that the majority of so-called "embedded" experts don't know what they are talking about. With the proper training and experience, a hardware person can trace any fault down to its exact location and fix it. It does not matter if the fault is a software or a hardware one. The fault could be in the programming, in the integrated circuits, or electromechanical.

Unfortunately, these days people take the easiest road and become ITs - so-called computer experts. It is a farce, and society fell for it. Paid big bucks to those with the least ability. And now society is paying for it *grin*. The biggest joke of all is when a company puts its IT to the task of fixing the Y2K problem. Who were the ones who actually fixed the problems on mainframes? The COBOL programmers and other so-called ancient workers who had been let go because it was believed that ITs were the experts. So do you mean a real expert, or an expert by what is considered one due to popular belief?

There are also people who know computers in all of these areas - hardware, software, data processing, hell, even wiring the darn thing from its power source. I know; I am one of them. You don't want to get me started on the so-called "embedded" subject, which is more hype than fact. So far, all of the "embedded" faults found have been due to the software that runs through them, not anything "programmed" into them.

Basically the faults lie in the PC-based RTC-BIOS problem. In communication with others like myself, I have found they have given up trying to explain the facts to people who have preconceived ideas of failures, drawn from what they have read and from self-proclaimed experts who in many cases have never had hands-on experience with the subjects they speak of. Ever notice it is usually the ITs who head for the hills, and not the experienced programmers or hardware techs?
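
For anyone who hasn't run into the RTC-BIOS issue mentioned above, here is a minimal C sketch of the two-digit-year expansion involved. It is only an illustration - the pivot year, function names and layout are assumptions, not any particular vendor's BIOS code.

#include <stdio.h>

/* The CMOS real-time clock in a classic PC keeps the year as two
 * digits; the century has to be supplied by the BIOS or the OS.
 * This sketch contrasts a naive expansion with a windowed one.
 * (Illustrative only - real BIOSes differ in where the century
 * byte lives and in what pivot, if any, they apply.) */

static int expand_naive(int rtc_year)      /* assumes 19xx forever */
{
    return 1900 + rtc_year;
}

static int expand_windowed(int rtc_year)   /* pivot of 70 is an assumption */
{
    return (rtc_year < 70 ? 2000 : 1900) + rtc_year;
}

int main(void)
{
    int samples[4] = { 98, 99, 0, 1 };     /* two-digit years around rollover */
    int i;

    for (i = 0; i < 4; i++)
        printf("RTC year %02d -> naive %d, windowed %d\n",
               samples[i], expand_naive(samples[i]),
               expand_windowed(samples[i]));
    return 0;
}

The naive routine is what turns January 1, 2000 into January 1, 1900 on an older PC; a windowed expansion like the second routine (or an operating-system-level correction) is the usual repair.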

Oh and to answer how many IBM mainframes are out there? What the heck difference does it make? You assume all have the 2 digit year problem. Think again, not everybody was stupid.

-- Cherri Stewart (sams@brigadoon.com), March 08, 1999.


Cherri,

Oh My!!

'Guess I asked a loaded question, so to further define, in answer to your following question:

*So do you mean a real expert or an expert by what is considered one due to popular belief?*

I'm referring to *real experts.*

By the way--what's your take on what Cory Hamasaki thinks about the status (i.e., they're doomed) of most mainframes? (As in--the work hasn't been done, there's not time to finish it, etc.)

Thanks to all for helping me educate myself on this issue.

-- FM (vidprof@aol.com), March 08, 1999.


Cherri commented:

"Now programmers, especially experienced ones, understood the working of software and are the ones who prevented a lot of possible Y2K problems in the first place by doing it correctly."

Cherri, I must strongly disagree with you here. When I started programming IBM mainframes (the IBM 7080) back in 1964, the die was cast. We used two-digit dates because EVERYTHING before us used them. The largest commercial mainframe had 80K of core memory, and that was it.

Ray

-- Ray (ray@totacc.com), March 08, 1999.


Cherri hit the nail on the head with her answer.

The best comment I'll make concerning 'experts' is this:

If someone says they are a computer expert and they know all there is to know, the suggested response is to nod knowingly while slowly backing away...

j (who has been in the business 20+ years)

-- j (justpassing@thru.com), March 08, 1999.


Cheri,

What the hell is an IT?

What the hell are you talking about?

-- Cheri is (certainly@not.expert), March 08, 1999.



Me's think we have a DISINFORMATION thread a brewing here !!

Ray

-- Ray (ray@totacc.com), March 08, 1999.


Not from me, Ray! Sorry if I opened the door to that!

-- FM (vidprof@aol.com), March 08, 1999.

Note to Cherri - Up until just a few years ago, all IBM mainframes had a 2-digit year in the hardware. A post from "John Doe" a few days ago told the story of his company, 26 years ago, not using the "system date" but obtaining date information, including a 4-digit year, from "control cards", so his company is having few Y2K problems, except for data from "other people" that has 2-digit years. His programmers are working on that problem. This example is the exception. Most programmers 10, 20, 30, or 40 years ago just used the system date, figuring their programs would be obsolete by 2000. They were wrong, and that's why we're in this mess.

I've been programming for 31 years. Started on an IBM/360 doing FORTRAN for a short while, COBOL, and ASSEMBLY (I LOVE assembly, both mainframe and PC!).

Mainframes are still in use just about everywhere, some old and some new like the G5 line. Their ability to move huge amounts of data isn't easily replaced. They are enjoying a rebirth these days as giant web servers.

<:)=

-- Sysman (y2kboard@yahoo.com), March 08, 1999.


Sysman, we had a punched card reader as part of the console. This is where the processing dates were entered, through the JCL or sometimes as initial records on the input tape. This was serial-processing mode only, with 18 tape drives for sorting, input and output. We did not have random processing until we received our IBM 360/50 in 1966.

To my knowledge there was no ability to access a date from the hardware.

Ray

-- Ray (ray@totacc.com), March 08, 1999.


Sysman

Ok - for us dumb newbies who are trying to figure out what all this means - what exactly is the so-called "embedded" chip and what is its basic function? Are they as big a deal as they are made out to be? I read a "White Paper" by someone named Jordon (I can't find the link now) that said that they should be more of a focus than the software and hardware that everyone is working on, because if you don't fix the embedded chips, the software and hardware fixes are useless. Are there a lot of them around or just a few? I have also read that some are reprogrammable and some are "burned" - not reprogrammable. Thanks for the light you can shed!

-- Valkyrie (anon@please.net), March 08, 1999.



Hi Valkyrie. This is somewhat outside my area, but I'll give it a shot. Embedded systems are small computers, very similar to a stripped-down PC. They have a processor, memory, and some sort of input/output. Most have a program in a ROM, similar to the BIOS on a PC. Rumor has it that there may be as many as 50 BILLION of them in use today, used for all kinds of things, from appliances, cars, building controls, and security systems, to some more serious things like controlling actions of refineries, the power grid, manufacturing robots, airplanes, ships and weapons. Opinions vary widely as to how many of these systems are date sensitive, with estimates ranging from .5 to 3 percent. But only 1% of 50 billion is still 500 MILLION. Fixing these systems isn't easy for several reasons. Sometimes an "identical" pair of boxes may be completely different on the inside, but perform the same function.
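
To make "date sensitive" a little more concrete, here is a minimal C sketch of the kind of logic that can misbehave: a controller that schedules an action by differencing dates built from a two-digit year. The device, the names, and the 30-day interval are invented for illustration; no particular product works exactly this way.

#include <stdio.h>

/* Many control programs only need elapsed time ("service the line
 * every N days"), but if the day count is built from a two-digit
 * year it collapses when 99 rolls over to 00.
 * All names and numbers here are made up. */

static long day_number(int yy, int day_of_year)
{
    /* Crude day count - ignores leap days, fine for an interval check. */
    return (long)yy * 365L + day_of_year;
}

static int maintenance_due(int last_yy, int last_doy, int now_yy, int now_doy)
{
    long elapsed = day_number(now_yy, now_doy) - day_number(last_yy, last_doy);
    return elapsed >= 30;                  /* service every 30 days */
}

int main(void)
{
    /* Last serviced 20 Dec 1999 (yy 99, day 354); "now" is 15 Jan 2000
     * (yy 00, day 15).  26 real days have passed, but the two-digit
     * arithmetic sees -36474 days, so the check never fires again. */
    printf("elapsed = %ld days, due = %d\n",
           day_number(0, 15) - day_number(99, 354),
           maintenance_due(99, 354, 0, 15));
    return 0;
}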

I hope this is enough to get you started. Some of the regulars here can give you more info, and we have an embedded systems category at the bottom of the questions page. <:)=

-- Sysman (y2kboard@yahoo.com), March 08, 1999.


Ray - what model computer did you have before the 360/50? <:)=

-- Sysman (y2kboard@yahoo.com), March 08, 1999.

sysman asked:

"Ray - what model computer did you have before the 360/50?"

We went directly from the IBM 7080 to the 360/50. This was a GIANT leap since it involved moving from serial processing to random processing. It was scary back then. When we became comfortable with random processing it was great, no more tape sorts that seemed to take days.

Ray

-- Ray (ray@totacc.com), March 08, 1999.


Thanks Ray. The 7080 is before my time. Don't know if it even had a built in clock/calendar? <:)=

-- Sysman (y2kboard@yahoo.com), March 08, 1999.

Valkyrie, here's a good paper by Mark A. Frautschi, Ph.D. explaining the problem with embedded systems:

http://www.tmn.com/~frautsch/y2k2.html

Here's another one by Tava Technologies, this one is in pdf format though, you'll need Acrobat Reader:

http://www.tavatech.com/files/TAVA3_0.pdf


-- Chris (catsy@pond.com), March 08, 1999.


Fortunately, the real world of embedded systems isn't that bad. Enumerating all the things that *could* go wrong is fine, but it's a lot like listing all the things you *could* run into, if you fall asleep at the wheel.

The embedded numbers game has by now become a joke. Nobody knows how many total chips are currently in use, and estimates vary by up to 40 billion (a factor of two). Nobody distinguishes among generic 'chips', processors, RTCs, etc. Few distinguish between chips and systems. Nobody bothers to define what an embedded system consists of (but for practical purposes, companies think in terms of the replaceable unit).

Indications are that there are only a few thousand embedded systems that potentially pose real hazards (explosions, etc.), and maybe 10 times that many that, if not fixed, will force temporary shutdowns (mostly manufacturing environments).

Embedded systems fail pretty regularly for a variety of reasons, and we get by fine anyway. PC BIOS problems represent a trivial problem in all but some rare circumstances. So I'd expect some explosions, some breakdowns, some manufacturers experiencing line shutdowns for a while. Not infrastructure collapse.

-- Flint (flintc@mindspring.com), March 08, 1999.


One real example of an embedded chip reading a date - "at the minor but doesn't matter very much level" - is my home thermostat: if it's out, I can't control the furnace or the A/C. You can "program" it to read hours and minutes, day of week, heating or cooling, AM/PM, and desired temperature. It figures out (during each weekly cycle) whether the house is too hot or too cold, then trips on the heater or A/C.

Simple little process - but if it fails, or any of a series of other controllers and embedded chips fail, or the 24v power to it fails because the 120v house power fails, or the natural gas fails, or the house power to the A/C compressor fails, or the heater trips on sensing "too hot" (another embedded chip), or the temperature probe on the heater outlet trips (another embedded chip), or the pilot light trips on sensing low temperature (no chip - that's mechanical/thermometallic) - then I get no heat.

An embedded chip is "behind the scenes" in almost every electronic device made today. Many will be okay - or just cause minor problems if they fail. The thermostat - for example - can't be tested for Y2K compliance - there is no way to set the year. Will it work? Maybe. maybe not. I've kept my old manual thermostat just in case. That one works on thermally expanding a spring - rotating a mercury bulb, thus "mechanically" transmiting electricity through the mercury to the outlet.

Figure any device you can reset or enter numbers into has a chip. Anything that displays numbers or letters or lights has a chip. Anything operated by a mechanical switch may or may not have a chip. Anything not battery powered or plugged in probably doesn't have an embedded chip.

However, be careful testing any appliance or electronic device; you may not like the outcome. One forum reader a while back "set ahead" their VCR; it promptly failed, and could not be reset at all.

Commercially - everything nowadays from the french fry machine at McDonald's, to the gas pump, to the credit card scanner, to the theft-detector magnetic stands, to the Coke machine has embedded chips. The postage scale and meter. The copier, the fax machine, and the electronic door lock. And any given one could fail. And all will fail if there is no power.

-- Robert A. Cook, P.E. (Kennesaw, GA) (cook.r@csaatl.com), March 08, 1999.


Every time an argument begins about how or why the two digit representation of the year came about, or about whose fault it was/is, or, most frequently, when I hear the usually incorrect explanation of some journalist who doesn't know a byte from a bite, the following words from a song run through my head:

In a cavern, in a canyon, excavating for a mine, lived a miner, forty-niner, and his daughter Clementine.

I have read letters written by folks during the 16th, 17th, 18th and 19th centuries, and while not all of them did so, many of the authors used a two digit date.

I remember well the date representation in punched card decks that preceded and coexisted with magnetic media. It usually consisted of one digit to represent the year. (In that case, it never became a problem, simply because the media typically physically wore out long before the decade ended.)

The bottom line for me, though, is that it really doesn't matter whose fault it is/was, or even why it came about.

I have other tasks that require my energy, and nearly all of them have a higher, personal priority.

For Ray and Sysman,

Most System 360 models had a TOD (Time of Day Clock) in hardware that consisted of a 32 bit binary number and in which all zeros represented the midnight between Dec. 31, 1899 and Jan. 1, 1900 (some of the very late models and the "special editions" like the 9020 incorporated some 370 architecture, so this may not be an absolute). It was incremented every 1.048576 seconds, and had a cycle of approximately 143 years (which makes it Y2K compliant since the end date is about 2043). When System 370 expanded the architecture, the TOD clock was enlarged to 64 bits, with the low order bits used only for high resolution timing. Bit 51, for example, changed state every microsecond. The date/time was stored in the hardware clock with the assembler instruction, "SET CLOCK" and was examined with the instruction, "STORE CLOCK" which put this 32 or 64 bit binary number into a specified location in main storage ('B205', base, displacement). The binary clock value that would correspond to "New Year's Evil" is (in hex) 'B361 183F 4800 0000'.

The operating system (be it DOS, VS, MFT, MVT, VM, MVS, etc.) simply accessed the TOD clock and put it into a unique storage location for the date/time. How and in what format it was used, was specific to the operating system, and was always determined by what the software did to the 32 or 64 bit binary number from the clock location in main store.
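
As a back-of-the-envelope check on the layout just described (this is not IBM reference code), the clock value at the start of 2000 can be rebuilt in a few lines of C from the stated facts: epoch at 1900-01-01, bit 51 advancing once per microsecond, so the full doubleword advances by 4096 per microsecond. Leap seconds are ignored.

#include <stdio.h>

typedef unsigned long long u64;            /* need 64-bit arithmetic */

int main(void)
{
    u64 days    = 100ULL * 365ULL + 24ULL; /* 1900..1999 has 24 leap years */
    u64 seconds = days * 86400ULL;         /* 3,155,673,600 seconds        */
    u64 micros  = seconds * 1000000ULL;
    u64 tod     = micros << 12;            /* bit 51 = 1 microsecond       */

    printf("TOD at 2000-01-01 00:00 = %016llX\n", tod);
    return 0;
}

Run as written, this prints B361183F48000000, which agrees with the "New Year's Evil" value quoted above.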

-- Hardliner (searcher@internet.com), March 08, 1999.


For Ray and Sysman,

Never mind! I see that you were asking about the 7080, not the 360. Sorry about that!

-- Hardliner (searcher@internet.com), March 08, 1999.


Some ASS said:

"Cheri,

What the hell is an IT?

What the hell are you talking about?

-- Cheri is (certainly@not.expert), March 08, 1999."

She certainly *IS* an expert...

YOU would do well to listen.

Don't know what IT is?...why are you bothering with following this forum, then?...

-- Mutha Nachu (---@paleblueskies.com), March 08, 1999.


Hi Hardliner. Actually, the SET/STORE CLOCK instructions were introduced with the 370. The 360 had a simple interval timer at X'80'. It is true that few if any applications used either method, because of the logic needed to convert the timer, and relied instead on the OS for date/time services. The date in DOS(VS/E) for example was stored in COMRG in an MMDDYYJJJ format, with JJJ being the julian day. I guess we covered this topic, so let's not beat a dead horse! See ya. <:)=

-- Sysman (y2kboard@yahoo.com), March 08, 1999.

"So I'd expect some explosions, some breakdowns, some manufacturers experiencing line shutdowns for a while. Not infrastructure collapse.", said Flint, explaining the Y2K impact on embedded systems.

Well, you never know, Flint. Loss of power, clean water, sewage, and All That Other Stuff (can you say Peach Bottom?) can do a real number on infrastructures, especially in urban areas.

-- Jack (jsprat@eld.net), March 08, 1999.

At the risk of mortal injury, I throw my own credentials into the fray, and try to explain why most of my colleagues are DGI's and DWGI's.

I started out in 1982 working with FORTRAN on an IBM 3xx (not sure exact model) using those cursed punch cards. Later same year I helped install and program a Sperry UNIVAC 90/60.

Went to PASCAL in 1983 and "C" in 1984. C++ in 1989 (Before the spec was locked down) I currently program primarily PC's - Windows and Macintosh. Currently I'm working on an embedded system for a Kodak Camera.

Other pieces parts:

8080 Assembly (1983), 68000 Assembly (1990), UNIX [HP, Solaris, LINUX], MFC, TCL, RDBMS, Java, MacApp, PowerPlant, SWING, ATL, ODBC [Server], ActiveX, OLE, CORBA, etc., etc.

Most of my experience is obviously in the Personal Computer realm. The IBM 3xx, UNIVAC 90/60, and HP-9000 / 9845 were all in the Navy through 1985.

Various positions held include (in increasing order of seniority) Senior Engineer, Lead Engineer, Technical Lead, Systems Analyst, Architect, Engineering Project Manager, Project Manager. Other [game industry] positions include Producer, and Designer.

Basically, I've done virtually all aspects of software engineering, with a heavy bias toward Applications. My systems engineering experience is as a "client", not in development (excepting 90/60 - but I was very junior at the time).

That's enough for my "qualifications".

I've told over 100 of my Silicon Valley colleagues about the Y2K issues. Only 2 have taken me seriously enough to do any preparation themselves. And only 1 is preparing "in earnest".

Having run a half dozen projects, and over 40 programmers at one time or another, some things become clear:

Programmers aren't liars - they're optimists. This means they can't predict [very well] their productivity. They always assume "maximum" productivity at all times. Scheduling relies on these predictions.

Second, most engineers have a very small world in which they work for each project. If they have a good specification, they never get a "gestalt" view of the project they're working on. This gives them tunnel vision and an inability to "connect the dots". And believe me, there are LOTS of dots to connect for Y2K. Many of the dots are chimera, and may or may not exist. These have to be extrapolated or deduced from insufficient and sometimes contradictory data.

That's why most are not worried.

Also remember - most of the engineers are young (35 and under) and haven't even seen mainframe programming.

Finally, most of my fellow engineers work for private industry on personal computers. The real problems are not there. The real problems are in government and large [old] computers. So they really don't have a valid perspective in that area.

The engineers that have the most likelihood of "Getting It" will be systems analysts and Software Architects. These people by definition have to look at the "big picture".

Hope this post wasn't too long..

Jolly

-- Jollyprez (Jolly@prez.com), March 08, 1999.


It wasn't an embedded systems problem, and it wasn't Y2K related. This little news item shows, though, how just one small date error can cause a big problem...

http://y2k.dia.govt.nz/whatis.htm

[snip]

The Million-dollar glitch

("The Dominion" -- Wellington, New Zealand, 8 Jan 1997) via NZPA [New Zealand Press Assoc.]

A computer glitch at the Tiwai Point [place in South Island of New Zealand] aluminium smelter at midnight on New Year's Eve has left a repair bill of more than $1 million [New Zealand Dollars]. Production in all the smelting pot lines ground to a halt at the stroke of midnight when the computers shut down simultaneously and without warning. New Zealand Aluminium Smelters' general manager David Brewer said the failure was traced to a faulty computer software program, which failed to account for 1996 being a leap year. The computer was not programmed to handle the 366th day of the year, he said. "Each of the 660 process control computers hung up simultaneously at midnight," Mr. Brewer said.

The same problem occurred two hours later at Comalco's Bell Bay smelter, in Tasmania [Australia]. New Zealand is two hours ahead of Tasmania. Both smelters use the same program, which was written by Comalco computer staff.

Mr. Brewer said the cause was difficult to trace and it was not till a telephone call in the morning from Bell Bay that the leap year link was made. "It was a complicated problem and it took quite some time to find out just what caused it."

Tiwai staff rallied through the night to operate the pot lines manually and try to find the cause. The glitch was fixed and normal production restored by mid-afternoon. However, by then, the damage had been done. Without the computers to regulate temperatures inside the pot cells, five cells over-heated and were damaged beyond repair. Mr. Brewer said they would have to be replaced at a cost of more than $1 million.

This illustrates a major point, it is the unexpected which is most likely to cause problems.

[snip]
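
The smelter's code has never been published, so the following C fragment is only a guess at the general shape of such a fault - a fixed 365-entry table indexed by day of year - with every name and number made up for illustration.

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical sketch of a "day 366" fault.  An "every year has 365
 * days" assumption halts the controller on 31 December of a leap
 * year - and every node running the same code halts at once. */

#define DAYS_PER_YEAR 365                  /* the fatal assumption */

static double setpoint[DAYS_PER_YEAR];     /* one setpoint per day */

static double setpoint_for(int day_of_year)
{
    if (day_of_year < 1 || day_of_year > DAYS_PER_YEAR) {
        fprintf(stderr, "day %d out of range - controller halted\n",
                day_of_year);
        exit(EXIT_FAILURE);
    }
    return setpoint[day_of_year - 1];
}

int main(void)
{
    int i;
    for (i = 0; i < DAYS_PER_YEAR; i++)
        setpoint[i] = 960.0;               /* placeholder value */

    printf("30 Dec 1996 (day 365): %.1f\n", setpoint_for(365));
    printf("31 Dec 1996 (day 366): ...\n");
    setpoint_for(366);                     /* never returns */
    return 0;
}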

-- Kevin (mixesmusic@worldnet.att.net), March 08, 1999.


Mutha Nachu said:

_________________________

Some ASS said:

"Cheri,

What the hell is an IT?

What the hell are you talking about?

-- Cheri is (certainly@not.expert), March 08, 1999."

She certainly *IS* an expert...

YOU would do well to listen.

Don't know what IT is?...why are you bothering with following this forum, then?...

______________________________________

Relax, Mutha, I'm on your side. Cherri's rambling here makes all techies look bad. There is no person called an IT. Of course, I know what it means. But when reading Cherri's rambling it's hard to tell which side of the argument she is on. She either only half-way knows what she is talking about or needs to slow down and make her points more clear.

-- Mutha Nachu's Ass (ass@polly.anna), March 08, 1999.


Sysman,

You're right about the set/store clock instructions; before that it was just loads and stores and comparing your location to X'80' (and I thought my explanation was so clear!). Anyway, the logic drawings (ALDs) for the updating of X'80' and the diagnostic programs to test the circuitry were labeled "TOD Clock" on some pages and "timer" on others, and the "clock" (in 360s) didn't have a dedicated register in hardware except for the word at X'80'.

I also remember a number of customers who got quite irate if the "Disable Interval Timer" lever switch on the console was operated since it made their time stamps wrong!

The point I was trying to get at actually was that it was the software (and I consider here that the OS is such) and not the hardware that determined whether the year was a two or four digit date.

-- Hardliner (searcher@internet.com), March 08, 1999.


I'll own up to a very dusty Basic manual circa 1970 and some programs punched in paper tape. (That box should have been trashed years ago...) Anyone got a paper tape reader?

jh

-- john hebert (jt_hebert@hotmail.com), March 08, 1999.


Hi FM, Sysman, Cherri and All,

It's easy to see who has been involved with mainframes from way back and who came into computers during the era of PCs (roughly 1978 on). There's a big gap in understanding how dates were used and why. Prior to 1970, only small, special-purpose computers had built-in capability to keep track of the time, and any date processing was purely software.

Early computers had no concept of what we know as an interrupt, and most didn't have built-in capability for subroutine calls and returns. All of the hardware needed for those capabilities was far too expensive, and failure-prone, to use in production machines.

It's no stretch to say that almost all of today's programmers simply can't conceive the limitations that were imposed by hardware available in the late 50's and early 60's.

The most common computers had memories of 2000 to 4000 words/characters. Input was processed at 8000 characters per MINUTE from a 402/403, or 12000 characters per minute from a 407. Printing was at the same rate. (These rates are dictated by the reading and printing speeds of the 402/403/407 tab machines.)
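
To give a feel for what those rates meant, here is a small worked example in C. The 50,000-card file size and the three-dates-per-record figure are assumptions of mine, not Dean's.

#include <stdio.h>

int main(void)
{
    double chars_per_min = 8000.0;         /* 402/403 reading rate (above) */
    double card_cols     = 80.0;
    double cards         = 50000.0;        /* assumed master file size     */

    double cards_per_min  = chars_per_min / card_cols;        /* 100/min  */
    double hours_per_pass = cards / cards_per_min / 60.0;     /* ~8.3 hrs */

    /* Three date fields per record at two extra digits each is 6 of the
     * 80 columns: 7.5% of every card spent on a redundant "19". */
    double wasted_pct = 3.0 * 2.0 / card_cols * 100.0;

    printf("One pass over the file: about %.1f hours\n", hours_per_pass);
    printf("Cost of four-digit years: %.1f%% of each card\n", wasted_pct);
    return 0;
}

At those speeds a single pass over a modest master file ran most of a shift, and two extra columns in every date field were two columns you simply did not have.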

Since most current programmers don't have the 'built in' feeling for the conditions present in the early days of commercial computing and Data Processing, they can't really believe the reasons given for using a 2 digit (or even one digit) year representation. Because they can't really believe the reasons, they don't understand how Y2K could be such a big problem.

I have many friends and business associates who say that Y2K can't be a problem. All of them have come into IT work since the PC era began, through VisiCalc or early Novell networks (before IBM PC's). Now they're working on enterprise-sized WANs and production systems. They still don't understand.

Dean (Certificate in Data Processing from the DPMA, 1963)

-- Dean -- from (almost) Duh Moines (dtmiller@nevia.net), March 08, 1999.


The thread at this link has some info on veteran computer folks:

http://www.greenspun.com/bboard/q-and-a-fetch-msg.tcl?msg_id=000Pn2

-- Kevin (mixesmusic@worldnet.att.net), March 08, 1999.


Dean,

I'll bet you know what a "programming wire" is!

Kids look at me like I've got three heads when I tell them that the original Voyager spacecraft (a billion dollars or so) had only 8K of onboard storage! And they simply refuse to believe that there was ever a time when you figured out what "number" was in "memory" by poking a little stick with gradations on it into the guts of the accumulator. . .

There was a time in my life when I thought that a 557 (with proof feature) was the hardest machine in the world to fix. . .

-- Hardliner (searcher@internet.com), March 08, 1999.


And if you haven't had to re-sort a card deck after the rubber band breaks at the bus stop - you don't realize why you put line numbers in.

And if you have never had to un-spaghetti some long-departed "friend's" coding, you never understand the reason "comments" were invented.

-- Robert A. Cook, P.E. (Kennesaw, GA) (cook.r@csaatl.com), March 08, 1999.


Hi. This is my first time to post. I've been a lurker for a while and have really enjoyed the depth of the discussions from people who are experienced in this arena. I started working with computers in 1969, so I guess that makes 30 years now. Scientific computers at the EPA, and then on to a triplex in Washington - 360s. It was a fun time to learn computers - we did everything - cards, paper tapes, transportable disks. The EPA was one of the few times that I was in a computer shop where we were starting from scratch, and even there we were interacting with others. I can remember, in subsequent computer shops, discussions almost from the very beginning about what would happen when the year turned 2000. Once the systems had been started and standards had been established, programmers had to program within the constraints of the individual shops. At any time that they might have wanted to "buck" the system there would have needed to be rewrites, and they would have had to justify their position - they would not have had the final decision on that kind of departure from the standards established in their departments. So even if the programmer were an expert, very well-versed in the ramifications of what lay ahead in the area of Y2K, he would not have been in a position to significantly impact the systems on which he worked.

I have been working in this field since 1969. Most of the work has been on mainframes (Cobol, CICS, DB2, IMS - a lot of older languages and newer languages), although I dabble in the PC area - Visual Basic and Web stuff for my personal PC. I have been a Computer Consultant since 1980. My most recent contracts have been Y2K remediations - one for a large automobile insurance company - the last for the State of Georgia, Individual Income Tax.

I consider myself a GI.

-- Jean MacManus (jmacmanu@bellsouth.net), March 08, 1999.


Hardliner - I'll give you the point on the OS doing the date. However, do you know of any IBM OS that offered 4 digit support before a few years ago?

Dean - Ah yes, the old 402 days! The plug-in boards with a zillion wires - true hardware programming. If I remember, out of the 120 print positions on a 402, something like the first 40 (?) could be alphabetic, and the rest of the print line was numeric only. This was fixed on the 407, and maybe the 403?

<:)=

-- Sysman (y2kboard@yahoo.com), March 08, 1999.


-- FM,

As you may have deduced from above posts in this thread, the age of the mainframe is less likely to be a problem than the ancestry of the programs. BTW, I am not sure if the number of IBM mainframes is growing or shrinking, but the amount of processing done on them is growing. Many companies that are using them are getting bigger and faster ones each year.

Regarding date control "cards" (which now are usually one record files on disk), they are quite common, since the "as of" date of the data is often not the same as the date on which it is processed. However, due to precedent, the date control commonly has a two digit year!

Regarding the interval timer on the 360 series, I seem to recall that it ticked once each .0166667 seconds, i.e. 60 cycles per second. I distinctly recall that it was too slow to accurately measure time slices among multitasking regions on the higher end 360s, but that was the breaks until the 370 series came along with the 1 microsecond per tick TOD clock.

Jerry

-- Jerry B (skeptic76@erols.com), March 08, 1999.


Memories - I had forgotten about paper tapes. Were yours the 1/2 yellow ones? Came on a spool that made the "new" magnetic tape drives a dream to work with.......and the "hard" disks that were about 5 and 6 stacks high, 24" in diameter - holding less than a floppy (much less a zip drive) does now in your pocket.

-- Robert A. Cook, P.E. (Kennesaw, GA) (cook.r@csaatl.com), March 08, 1999.

Sysman,

As I've indicated, I'm mostly a hardware guy. Assembler is usually all that's necessary to put the circuit in question into a loop, although some of the "hairier" bugs I've worked on would only fail for the OS. MVS was the first OS that I got any formal training on (early XA course with SP "differences" explained) and, to tell the truth, I don't even remember if those guys did 4 digits. I was really only impressed with my newly acquired ability to use the OS as a true diagnostic tool rather than a bludgeon.

I'd forgotten that all the print wheels on the 402 weren't alpha/numeric, but what I'll never forget is replacing type slugs on the 407. As to be expected, the zero slugs wore out first. When you removed the type bar, it had a sliding cover on one side that covered all the spring-loaded slugs and held them in the hollow bar. You had to get a spare cover, and "follow" the original one with it so that you could create a "slot" over the worn slug and only the worn slug. More often than not, it would get away from you and you'd either spend "hands and knees time" looking for the spring or (as I took to doing) you carried a lot of spare springs around with you!

-- Hardliner (searcher@internet.com), March 08, 1999.


Jerry,

Initially, the Interval timer was driven off the power line frequency as you remember, but the problem you cited caused IBM to come out with the High Resolution Timer Feature. That was definitely a 360 feature, although not available universally, and I can't remember anymore just which models could get it and which not. . .

-- Hardliner (searcher@internet.com), March 08, 1999.


OK Hardliner, just one more point before we bore everybody to death. MICROCODE, something I know very little about. Some machines like the 4341, which is a full blown 370, can't run the ESA operating systems, and hence can't support 4 digit years. I guess it's only a question of support from IBM, since this machine reads the microcode from a floppy at IML time during power-on. Any comments? Anyone? <:)=

-- Sysman (y2kboard@yahoo.com), March 08, 1999.

Sysman,

First off, µcode is just like any other code, except the instruction set is different. It's unique to the circuitry it drives but otherwise works the same way. For example, a single 360/50 µcode instruction, D->MLJK, would take the contents of the "D" Register (the storage data register) and gate it to the "M" register (another 32 bit register), the "L" register (still another 32 bit register), four bits (I think 16-19) to the "J" register (a four bit register) and four more (28-31 maybe?) to the "K" register (another four bit register). Each CPU clock cycle, a single µword is executed, which contains many µinstructions.

Now obviously the µcode to drive the instruction set requires a certain amount of memory to live in. 360 kept the µcode in ROM of one sort or another (and even in the case of the Model 75 and up, in hardware). When 370 came along, they started putting it into RAM, and here is where I suspect part of the rub lies with 4341s and ESA. I'd bet a lot that the available storage for µcode simply isn't big enough.

The suspicious looking bit to me is that while ESA will support either copper or glass channels (for instance), it seems unlikely that the µcode to do so could be crammed into the space allotted for copper support alone.

There's probably a zillion other reasons why it won't run, but that looks like a showstopper to me.

-- Hardliner (searcher@internet.com), March 08, 1999.


Thanks for that info Hardliner. Been fun chattin' with ya!

FM - Sorry if we got a little off topic on your thread. As you can see, a few of us out here have been playing with the boxes for decades. I can think of at least a few more that haven't answered yet, and I'd bet a few more like Jean are just lurking. Birds of a feather I guess. I just want to point out that you shouldn't judge a post by this though. Many other sharp people here with lots of valuable info to offer, and I'm sure we'll attract more as the year counts down. Just ignore the trolls, try to enlighten the DGIs, and most important, make sure you and yours are prepared. May we all have a Happy New Year! <:)=

-- Sysman (y2kboard@yahoo.com), March 08, 1999.


I hate to drag this discussion back to Cherri's post for a bit (NOT - I have enjoyed it), but (s)he referred to reading the schematic as the best way to handle hardware, and said that testing was for the less capable (paraphrase mine) and that embedded systems/chips were truly a strawman. If memory serves, we have had a few of the folks who wrote firmware for embeddeds, as well as a few folks who designed operations which included embeddeds, and all of them have suggested that reading the specs etc. will NOT give a usable answer, as the chip actually used may vary from batch to batch and the schematics/specs may not match this batch.

Comments from folks as erudite as the above, on mainframes??

foggy but gettin clearer (hunt-n-peck still a bit of an adventure though)

-- Chuck, night driver (rienzoo@en.com), March 08, 1999.


Testing - by the way - is the most difficult part of debugging - and the slowest, and the part requiring the most expertise.

She didn't indicate her background or what processes she has used in the past for configuration control, but I'm skeptical about the QA she's seen employed - or negated, as the case may be. Her company may be one of those that "lets the customer" do the debugging.....

-- Robert A. Cook, P.E. (Kennesaw, GA) (cook.r@csaatl.com), March 08, 1999.


FM,

I started in 1963, used IBM 1620, 7094, 704, 360s, 1800, 370s, XDS and other minor brands, DEC PDP-11 and VAX-11/7x0, IBM 30xx and 43xx, Tandem various, & PCs. Did systems programming on some of those, installation and tuning on others, applications on the rest for companies in the oil, real estate, and financial services industries.

-- No Spam Please (No_Spam_Please@anon_ymous.com), March 08, 1999.


Damn No Spam, you are an old timer! 1620, 7094 (wasn't that "stretch"?), no 1401 though? Well sir, you've got me by five years, and my hat is off to ya! <:)=

-- Sysman (y2kboard@yahoo.com), March 08, 1999.

I can't remember when I've been with a group that would appreciate this story as much. To give you an idea of how old it is, I first saved it on a 5 1/4" SSSD hard sectored floppy! I got it off of some bulletin board, and have no idea who wrote it, but I knew guys "like" Mel, and I'm sure that some of you did too.

* * * * * * * * * * * * * * * * * * * * * * * * * * *

If you like assembly language, you'll *love* this story.

Real Programmers Don't Use FORTRAN, Either!

A recent article devoted to the *macho* side of programming ("Real Programmers Don't Use Pascal," by ucbvax!G:tut) made the bald and unvarnished statement: Real Programmers write in FORTRAN.

Maybe they do now, in this decadent era of Lite beer, hand calculators and "user-friendly" software, but back in the Good Old Days, when the term "software" sounded funny and Real Computers were made out of drums and vacuum tubes, Real Programmers wrote in machine code. Not FORTRAN, not RATFOR, not even assembly language. Machine code. Raw, unadorned, inscrutable hexadecimal numbers. Directly.

Lest a whole new generation of programmers grow up in ignorance of this glorious past, I feel duty-bound to describe, as best I can through the generation gap, how a Real Programmer wrote code. I'll call him Mel, because that was his name.

I first met Mel when I went to work for Royal McBee Computer Corp., a now-defunct subsidiary of the typewriter company. The firm manufactured the LGP-30, a small, cheap (by the standards of the day) drum-memory computer, and had just started to manufacture the RPC-4000, a much-improved, bigger, better, faster -- drum-memory computer. Cores cost too much, and weren't here to stay, anyway. (That's why you haven't heard of the company or the computer.)

I had been hired to write a FORTRAN compiler for this new marvel and Mel was my guide to its wonders. Mel didn't approve of compilers. "If a program can't rewrite its own code," he asked, "what good is it?"

Mel had written, in hexadecimal, the most popular computer program the company owned. It ran on the LGP-30 and played blackjack with potential customers at the computer shows. Its effect was always dramatic. The LGP-30 booth was packed at every show, and the IBM salesmen stood around talking to each other. Whether or not this actually sold computers was a question we never discussed.

Mel's job was to re-write the blackjack program for the RPC-4000. (Port? What does that mean?) The new computer had a one-plus-one addressing scheme, in which each machine instruction, in addition to the operation code and the address of the needed operand, had a second address that indicated where, on the revolving drum, the next instruction was located. In modern parlance, every single instruction was followed by a GO TO! Put *that* in Pascal's pipe and smoke it!

Mel loved the RPC-4000 because he could optimize his code: that is, locate instructions on the drum so that just as one finished its job, the next would be just arriving at the "read head" and available for immediate execution. There was a program to do that job, an "optimizing assembler," but Mel refused to use it. "You never know where it's going to put things," he explained, "so you'd have to use separate constants."

It was a long time before I understood that remark. Since Mel knew the numerical value of every operation code, and assigned his own drum addresses, every instruction he wrote could also be considered a numerical constant. He could pick up an earlier "add" instruction, say, and multiply by it, if it had the right numeric value. His code was not easy for someone else to modify.

I compared Mel's hand-optimized programs with the same code massaged by the optimizing assembly program and Mel's always ran faster. That was because the "top-down" method of program design hadn't been invented yet, and Mel wouldn't have used it anyway. He wrote the innermost parts of his program loops first, so they would get first choice of the optimum address locations on the drum. The optimizing assembler wasn't smart enough to do it that way.

Mel never wrote time-delay loops, either, even when the balky Flexowriter required a delay between output characters to work right. He just located instructions on the drum so each successive one was just *past* the read head when it was needed; the drum had to execute another complete revolution to find the next instruction. He coined an unforgettable term for this procedure. Although "optimum" is an absolute term, like "unique", it became common verbal practice to make it relative: "not quite optimum" or "not very optimum." Mel called the maximum time-delay locations the "most pessimum."

After he finished the blackjack program and got it to run ("Even the initializer is optimized," he said proudly), he got a Change Request from the sales department. The program used an elegant (optimized) random number generator to shuffle the "cards" and deal from the "deck," and some of the salesmen felt it was too fair, since sometimes the customers lost. They wanted Mel to modify the program so, at the setting of a sense switch on the console, they could change the odds and let the customer win.

Mel balked. He felt this was patently dishonest, which it was, and that it impinged on his personal integrity as a programmer, which it did, so he refused to do it. The Head Salesman talked to Mel, as did the Big Boss and, at the boss's urging, a few Fellow Programmers. Mel finally gave in and wrote the code, but he got the test backwards and, when the sense switch was turned on, the program would cheat, winning every time. Mel was delighted with this, claiming his subconscious was uncontrollably ethical, and adamantly refused to fix it.

After Mel had left for greener pas$ture$, Big Boss asked me to look at the code and see if I could find the test and reverse it. Somewhat reluctantly, I agreed to look. Tracking Mel's code was a real adventure.

I have often felt that programming is an art form, whose real value can only be appreciated by another versed in the same arcane art; there are lovely gems and brilliant coups hidden from human view and admiration, sometimes forever, by the very nature of the process. You can learn a lot about an individual just by reading through his code, even in hexadecimal. Mel was, I think, an unsung genius.

Perhaps my greatest shock came when I found an innocent loop that had no test in it. No test. *None*. Common sense said it had to be a closed loop, where the program would circle, forever, endlessly. Program control passed right through it, however, and safely out the other side. It took me two weeks to figure it out.

The RPC-4000 computer had a really modern facility called an index register. It allowed the programmer to write a program loop that used an indexed instruction inside; each time through, the number in the index register was added to the address of that instruction, so it would refer to the next datum in a series. He had only to increment the index register each time through. Mel never used it.

Instead, he would pull the instruction into a machine register, add one to its address, and store it back. He would then execute the modified instruction right from the register. The loop was written so this additional execution time was taken into account -- just as this instruction finished, the next one was right under the drum's read head, ready to go. But the loop had no test in it.

The vital clue came when I noticed the index register bit, the bit that lay between the address and the operation code in the instruction word, was turned on -- yet Mel never used the index register, leaving it zero all the time. When the light went on, it nearly blinded me.

He had located the data he was working on near the top of memory - the largest locations the instructions could address - so, after the last datum was handled, incrementing the instruction address would make it overflow. The carry would add one to the operation code, changing it to the next one in the instruction set: a jump instruction. Sure enough, the next program instruction was in address location zero, and the program went happily on its way.

I haven't kept in touch with Mel, so I don't know if he ever gave in to the flood of change in programming techniques since those long-gone days. I like to think he didn't. In any event, I was impressed enough that I quit looking for the offending test, telling the Big Boss I couldn't find it. He didn't seem surprised.

When I left the company, the blackjack program would still cheat if you turned on the right sense switch, and I think that's how it should be. I didn't feel comfortable hacking up the code of a Real Programmer.


-- Hardliner (searcher@internet.com), March 08, 1999.

"His code was not easy for someone else to modify"

Heeheehee.

I think I know what happened to Mel. He wrote several of the ROMs I've been handed, along with an opcode map, and told to disassemble, modify, and reassemble. Sorry, no assembler or disassembler for this microcontroller anymore. We need it tomorrow.

-- Flint (flintc@mindspring.com), March 08, 1999.


Very kool Hardliner (I'm stealing it!). After countless hours with my head buried in core dumps, I've developed a great respect for the true pioneers of this business. Guys like Mel that wrote the first assemblers in machine language, before a "cross assembler" was even a dream. <:)=

-- Sysman (y2kboard@yahoo.com), March 08, 1999.

Geeze, what a bunch of old geeks inhabit this place. :)

The LGP-30 used a sealed, single platter disk (addressed the same as a drum). It was a business mini-computer in the late 60's (and was sold in competition with the Friden-Singer machines).

I notice that nobody's mentioned using the Burroughs computers. They had the most advanced tape OS for their time. (I have no experience on them, either, but I thought many of the OS ideas were great.)

-- Dean -- from (almost) Duh Moines (dtmiller@nevia.net), March 08, 1999.


How come all the computers in the science fiction stories I grew up with worked better than the real ones I'm using now? Even I, Robot didn't fail in the software programming itself - only in intent or by design accident from the effect of the programming.

It ain't fair I tell ya' - they used up all the bug-free programs in the SF stories. It's a conspiracy. Call HAL. Call one of the Berserkers. At least they ran forever.

-- Robert A. Cook, P.E. (Kennesaw, GA) (cook.r@csaatl.com), March 08, 1999.


I suppose you could call me a veteran. . .although I wouldn't go as far as calling me an expert. These days I'm an applications manager at a large bank. . .I have the dubious honor of ensuring the day-to-day operation of 20-year-old COBOL/CICS applications with countless BAL modules in the mix.

In the past few years, three actually, I've been up to my ass in alligators, remediating these old monoliths.

As far as replacing mainframes goes, there are projects afoot making such attempts. They usually fail, because some genius computer expert type believes that an NT server application is just what the doctor ordered to replace the old dinosaur mainframe application. Of course the genius fails to understand that the old dinosaur application crunches 2 million transactions a day, and needs 300 3090 DASD packs to hold all the data. I've seen a few failures in my day. Then the kid claims victory after implementing phase I of the project (which incidentally had little to do with the business requirement), is 3 million over budget and two years behind schedule, and moves on by getting promoted to develop his next server application. Being a legacy applications manager, I naturally get to clean up his mess for him. Sometimes I feel like a father endlessly changing dirty diapers.

-- Mark Lurtsema (lurtsemgm@aol.com), March 09, 1999.


1) Hardliner - picking my chin up off the floor over Mel (as spouse will attest) OOOOOOO!!! I am truly impressed,

2) Aren't you the guy a few thread miles ago who tried to plead ignorance?? Or was that Greybear?? Not only my eyes get foggy.

Mark - I TRULY feel for you. I've had to trace a few core dumps without the "GENIUS" programmer available myself. Fun it AIN'T!!!

foggy

-- Chuck, night driver (rienzoo@en.com), March 09, 1999.


To quote Jolly, "at the risk of mortal injury," I will post again to this thread.

I think I'm convinced now that Greybear was right when he said: ".. aren't we about ass deep around here in people who are in the computer business with ..oh...say 20- 25 - 30 years experience? Hell, they're thicker'n fleas on a lazy dog."

I'm guessing this will be a great archival thread. 'Trouble is--when I read a lot of your comments I felt like I was in an uninhabited area of Siberia without a road map, i.e., I obviously don't speak your language.

So here's a simple follow up question for all who have posted here:

Are you preparing your families for potential problems that may result from Y2K? If so, are you preparing beyond the Red Cross guidelines? If so, to what degree?

Thanks so much for your responses. I never had any idea there would be so many.

God bless all of you for your hard work over the years.

-- FM (vidprof@aol.com), March 09, 1999.


Real Programmers don't write specs -- users should consider themselves lucky to get any programs at all and take what they get.

Real Programmers don't comment their code. If it was hard to write, it should be hard to understand and even harder to modify.

Real Programmers don't write application programs; they program right down on the bare metal. Application programming is for feebs who can't do systems programming.

Real Programmers don't eat quiche. In fact, real programmers don't know how to SPELL quiche. They eat Twinkies, and Szechwan food.

Real Programmers don't write in COBOL. COBOL is for wimpy applications programmers.

Real Programmers' programs never work right the first time. But if you throw them on the machine they can be patched into working in "only a few" 30-hour debugging sessions.

Real Programmers don't write in FORTRAN. FORTRAN is for pipe stress freaks and crystallography weenies.

Real Programmers never work 9 to 5. If any real programmers are around at 9 AM, it's because they were up all night.

Real Programmers don't write in BASIC. Actually, no programmers write in BASIC, after the age of 12.

Real Programmers don't write in PL/I. PL/I is for programmers who can't decide whether to write in COBOL or FORTRAN.

Real Programmers don't play tennis, or any other sport that requires you to change clothes. Mountain climbing is OK, and real programmers wear their climbing boots to work in case a mountain should suddenly spring up in the middle of the machine room.

Real Programmers don't document. Documentation is for simps who can't read the listings or the object deck.

Real Programmers don't write in PASCAL, or BLISS, or ADA, or any of those pinko computer science languages. Strong typing is for people with weak memories.

Real Programmers only write specs for languages that might run on future hardware. Nobody trusts them to write specs for anything homo sapiens will ever be able to fit on a single planet.

Real Programmers don't play tennis or any other sport which requires a change of clothes. Mountain climbing is ok, and real programmers often wear climbing boots to work in case a mountain should suddenly spring up in the middle of the machine room.

Real Programmers spend 70% of their work day fiddling around and then get more done in the other 30% than a user could get done in a week.

Real Programmers are surprised when the odometers in their cars don't turn from 99999 to 9999A.

Real Programmers are concerned with the aesthetics of their craft; they will writhe in pain at shabby workmanship in a piece of code.

Real Programmers will defend to the death the virtues of a certain piece of peripheral equipment, especially their lifeline, the terminal.

Real Programmers never use hard copy terminals, they never use terminals that run at less than 9600 baud, they never use a terminal at less than its maximum practical speed.

Real Programmers think they know the answers to your problems, and will happily tell them to you rather than answer your questions.

Real Programmers never program in COBOL, money is no object.

Real Programmers never right justify text that will be read on a fixed-character-width medium.

Real Programmers wear hiking boots only when it's much too cold to wear sandals. When it's only too cold, they wear socks with their sandals.

Real Programmers don't think that they should get paid at all for their work, but they know that they're worth every penny that they do make.

Real Programmers log in first thing in the morning, last thing before they go to sleep, and stay logged in for lots of time in between.

Real programmers don't draw flowcharts. Flowcharts are, after all, the illiterate's form of documentation.

Real Programmers don't use Macs. Computers which draw cute little pictures are for wimps.

Real Programmers don't read manuals. Reliance on a reference is the hallmark of a novice and a coward.

Real Programmers don't write in COBOL. COBOL is for gum chewing twits who maintain ancient payroll programs.

Real Programmers don't write in FORTRAN. FORTRAN is for wimpy engineers who wear white socks. They get excited over finite state analysis and nuclear reactor simulations.

Real Programmers don't write in Modula-2. Modula-2 is for insecure anal-retentives who can't choose between Pascal and COBOL.

Real Programmers don't write in APL, unless the whole program can be written on one line.

Real Programmers don't write in Lisp. Only effeminate programmers use more parentheses than actual code.

Real Programmers don't write in Pascal, Ada or any of those other pinko computer science languages. Strong variable typing is for people with weak memories.

Real Programmers disdain structured programming. Structured programming is for compulsive neurotics who were prematurely toilet trained. They wear neckties and carefully line up sharp pencils on an otherwise clear desk.

Real Programmers scorn floating point arithmetic. The decimal point was invented for pansy bedwetters who are unable to think big.

Real Programmers know every nuance of every instruction and use them all in every Real Program. Some candyass architectures won't allow EXECUTE instructions to address another EXECUTE instruction as the target instruction. Real Programmers despise petty restrictions.

Real Programmers don't use PL/I. PL/I is for insecure momma's boys who can't choose between COBOL and FORTRAN.

Real Programmers don't like the team programming concept. Unless, of course, they are the Chief Programmer.

Real Programmers have no use for managers. Managers are sometimes a necessary evil. Managers are good for dealing with personnel bozos, bean counters, senior planners and other mental defectives.

Real Programmers ignore schedules.

Real Programmers don't bring brown bag lunches to work. If the vending machine sells it, they eat it. If the vending machine doesn't sell it, they don't eat it.

Real Programmers think better when playing Adventure or Rogue.

Real Programmers use C since it's the easiest language to spell.

Real Programmers don't use symbolic debuggers -- who needs symbols?

Real Programmers only curse at inanimate objects.

-- Hoffmeister (hoff_meister@my-dejanews.com), March 09, 1999.


Hoffmeister - I've seen most of these, but will "borrow" the few originals and add to my list!

FM - To answer your new question, you better believe it! I consider myself very fortunate for several reasons. For one, I'm single at this point in my life. Serious girlfriend but no permanent plans at this time. I share the rent with two big time GI bachelor friends (one licensed to carry) on a 150+ acre farm. We only rent the house; the fields are rented to a farming company. We have a genset (not yet connected), and will be replacing an older second fuel oil tank this summer. Three 55 gal drums for kero/gas (in a shed FAR from the house), more to come. Full size freezer waiting to be plugged in. Almost full pantry now. Will be expanding the already good-sized garden this spring. I think you get the idea. The nice thing is that splitting it three ways makes it almost painless.

I have elderly parents in Nevada, but also have a good friend nearby. Have been sending extra cash to help get them ready. Plan on going there later this year to make sure they're covered.

This is a good question, FM. It's getting down the list, so you may want to put it back on top. <:)=

-- Sysman (y2kboard@yahoo.com), March 09, 1999.


I'm wearing hiking boots, got to work at 12:30 - does that count?

-- Robert A. Cook, P.E. (Kennesaw, GA) (cook.r@csaatl.com), March 09, 1999.

To answer the query about preparations.

Yes, we're currently at the 18 month level. Our goal is to be completely ready for 3 years. We're already there with food, and kerosene, and some other items. Main worry is we're low on propane.

Jolly

-- Jollyprez (Jolly@prez.com), March 09, 1999.


To continue his comment:

"Most" "mainframe" programmers I'm aware of are preparing for far greater timeframes of the potential troubles than most engineers I'm aware of. The vast majority of PC-level programmers I'm aware of are not preparing for anything but New Year's Eve.

That's only one observation -- please, you've got to decide for yourself what your own family's "comfort" level is.

-- Robert A. Cook, P.E. (Kennesaw, GA) (cook.r@csaatl.com), March 09, 1999.


Robert,

Re: paper tape. No, wasn't the 1/2 inch stuff, closer to an inch. May have been yellow, honestly don't recall - but definitely yellow with age. (Used it as kindling last night.) As I recall, a teletype-style unit with attached tape punch/reader. Think you had to physically change the track for punch vs read - maybe it was the same track and you just had to reinsert to get to the start - a few too many years to recall clearly. Phone coupler to U. Mass screamed at what I believe was 200 baud. Those were the days?!

jh

-- John Hebert (jt_hebert@hotmail.com), March 09, 1999.


Have I worked with any of you people? Sure sounds like it.

Anyone else remember what a 12AT7 is? Do you remember when you had to have the plates on for a while before powering the computer up? I wish I could remember the names of all the computers I have worked on, but that would be going back to 1972. Now understand, I started in computer repair and maintenance on non-information computers. They ran hardware systems. I know the first was a relay logic computer, analog of course, then the monster with tubes and big servos in it. Same as ATC is still using in a lot of places today. All logic was hardwired in. When the air conditioner went out I had to get an air cart off of the flightline to cool the monster down. Good old USAF.

I worked on two with drum memory and learned to tell when it was ready by the sound of it winding up. Let's see, a couple of PDP-11-somethings and one PDP-11/24? Core memory, and it took three cards to be checked for each memory bit problem. Had to use paper tape to bootstrap them for a while, then punch binary codes to bring them up. (OK, octal, but I remembered the instructions in binary and troubleshot the things that way.) Used to program little diagnostic programs using this method. Ever have to find a broken wire in core memory? Anyone remember the upper-upper, upper-lower, lower-upper and lower-lower memories? BASIC programming.. I taught it to my daughter at age 6 on a (now) old TI computer.

I worked on Sperry-Univac V-76's, went to Orange County for computer architecture, where I learned how to input an instruction and trace it through every piece of the computer (from 16 bit to 64) until it finished doing whatever it was supposed to. I loved it! I got to program by punch cards, then was allowed to use the dumb terminal. That was a mistake on their part.. I went through every deck of punch cards and created databases of everything from test equipment to inventories. We rotated shift every 4 weeks, so while on days when the equipment was being used the software programmers put me to work. I was lucky. I ended up going from computer to computer -- SEL, Gould, V-76 and I have no idea of all the others in engineering -- and started programming on them. After learning a few, the rest were basically easy to learn. I remember learning assembly by dissecting a "dice" game and reprogramming it into a mice game. I would study the assembled program to see what I was changing where. The first game I played (besides pong and pacman? at the bar where we lunched) was one where we were told where we were in a cavern or maze and had to move around. I believe it used a random number generator that used the date. Has anyone thought about what will happen to all the random number generators that use dates to generate the numbers when the dates go to 00?

I cannot tell you which computers I worked on maintaining after that, but it got real boring when we just pulled boards, replaced them and sent them out for repair. I also remember when programming chips was new and we ended up with the PROM burners and erasers, UV-erasing them till we got them right and used them. We documented everything and had all records and changes from the beginning of the comps to the present time. I don't know if other businesses did that, but mine did. I got pretty suspicious when the program I made that had rooms within rooms showed up from Microsoft as Windows. One of the guys I worked with went over to Microsoft soon after I wrote it *evil grin* who knows.

Maybe I should have gone over too. I was such a pain in the butt about 4-digit years that maybe Microsoft wouldn't have ended up with their Y2K problems. One thing I know where I made a BIG mistake was when, years earlier, this same guy had come into work and told of these two guys working in their garage who needed money for stock in their project. I was single and had plenty of money I could have invested, but I didn't like this guy. Oh well.......guess I'll go beat myself up about it once again.........

Cherri

-- Cherri (sams@brigadoon.com), March 10, 1999.
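
[Editor's note on Cherri's question above about random number generators seeded from the date: here is a minimal sketch, my own illustration and not code from any particular system, assuming the once-common habit of packing a two-digit YYMMDD date into the seed. The function name seed_from_date is hypothetical. It shows why such streams quietly repeat at 00 rather than crash.]

    /* Sketch: a generator seeded from a two-digit date at the 00 rollover. */
    #include <stdio.h>
    #include <stdlib.h>

    /* Pack two-digit year, month, day into a seed the old YYMMDD way. */
    static unsigned int seed_from_date(int yy, int mm, int dd)
    {
        return (unsigned int)(yy * 10000 + mm * 100 + dd);
    }

    int main(void)
    {
        /* 31 Dec 1999 vs 1 Jan 2000 with a two-digit year. */
        unsigned int before = seed_from_date(99, 12, 31);  /* 991231 */
        unsigned int after  = seed_from_date( 0,  1,  1);  /* 101 -- same seed as 1 Jan 1900 */

        srand(before);
        printf("seed %6u -> first draw %d\n", before, rand());

        srand(after);
        printf("seed %6u -> first draw %d\n", after, rand());

        /* Nothing aborts: after the rollover the seeds simply restart from
           tiny values the program may have used long ago, so the "random"
           sequences begin to repeat -- a quiet failure, not a crash. */
        return 0;
    }
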


Ummm. was it something I said?

-- Cherri (sams@brigadoon.com), March 11, 1999.

Cherri, I think it's just that you were a late answerer. (like me)

-- Tricia the Canuck (jayles@telusplanet.net), March 12, 1999.

Still here Cherri? Threads are only "hot" for a very short time here these days, a day or 2 at most. I've started 3 "MAN-YEAR" threads in as many days (inspired by this thread) about computer pros, just trying to get a feel for programmer experience here. The number of new posts here is going up quickly, and it's next to impossible to keep up. Nothing you said, just so many people saying things! If you don't answer this by later today, I'll "page" you, so you feel better about yourself! <:)=

-- Sysman (y2kboard@yahoo.com), March 13, 1999.

Cherri, do me a favor, OK? Empty the chit box on the interpreter. When you finish, come back and join the discussion.

MoVe Immediate

-- MVI (MVI @407.com), March 13, 1999.


Just thought some of you might like to see this:

http://cnn.com/TECH/computing/9903/11/itcontract.idg/index.html

"IT shops still can't live without contractors"

-- Kevin (mixesmusic@worldnet.att.net), March 13, 1999.


Which are the threads that were generated from this thread? Aren't there any other people out there who have "grown up" in both the hardware and the software of computers?

The first Computer "programmers" were women.

-- Cherri (sams@brigadoon.com), March 13, 1999.


Cherri, and your point is?

Move Immediate

-- MVI (MVI@yepimhere.com), March 13, 1999.


Point? Just a little history info.

-- Cherri (sams@brigadoon.com), March 13, 1999.

Clarification of my bio:

I have only about 30 years' professional programming experience, not 36. Though I started programming in 1963, it was at first as an extracurricular hobby, not a paid job.

-- No Spam Please (nos_pam_please@hotmail.com), June 06, 1999.


Hoffmeister,

Let me suggest a modification for the "Real Programmers scorn floating point arithmetic" paragraph:

Real Programmers scorn floating point arithmetic in general, but they know how to do fast integer arithmetic in the FPU when it is pipelined better than the integer unit.

-- No Spam Please (nos_pam_please@hotmail.com), June 06, 1999.
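
[Editor's note, for anyone who hasn't seen that trick: a minimal sketch of the idea, my own illustration and not No Spam's code. An IEEE-754 double represents every integer up to 2^53 exactly, so an integer multiply-accumulate can be pushed through the FPU and still come out bit-exact. Whether it is actually faster depends on how well the FPU is pipelined on a given chip; this only demonstrates the exactness.]

    /* Sketch: exact integer arithmetic carried out in floating point. */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* Sum of i*i for i = 1..100000, once in the integer unit and once
           in the FPU. Every term and every partial sum stays far below
           2^53, so the double-precision result is exact and matches. */
        uint64_t isum = 0;
        double   fsum = 0.0;

        for (int64_t i = 1; i <= 100000; i++) {
            isum += (uint64_t)(i * i);
            fsum += (double)i * (double)i;
        }

        printf("integer unit: %llu\n", (unsigned long long)isum);
        printf("FPU:          %.0f\n", fsum);   /* identical value */
        return 0;
    }
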

