David Eddy posts some excellent examples of fudging with numbers

greenspun.com : LUSENET : TimeBomb 2000 (Y2000) : One Thread

I'm in too big a rush to hotlink to this column (imagine that), but I didn't see it posted here elsewhere, and I really do think it's worth reading. The cut&paste earl (that's URL to you :) is:


Please note these examples. These are the types of things someone who has spent time with Y2K (like moi & many here in this forum) is already familiar with, but most people are not. To wit:

**The cold, hard fact about measuring Y2K progress by percentages is that you can't add systems up like boxes of candy in the local 7-11 store. One system may be 10 million lines of code (LoC), while another system may be 50,000 LoC, or worse yet a series of spreadsheets, which simply cannot be counted in LoC at all. Although it's somewhat logical to conclude that the 10 million LoC will be more expensive to repair than the 50,000 LoC, this is not necessarily so. Perhaps the 10 million LoC can be replaced with a standard package. Perhaps the 50,000 LoC system is in a particularly opaque language like APL (A Programming Language), which is virtually impossible for anyone other than the original author to decipher, but is crucial for some mysterious financial portfolio risk-analysis calculations...

**The ultimate example of such lunacy is when the Air Force blandly tallies the B-2 bomber as one system and the F-16 fighter as 82 systems. Let's do the math...fix the B-2 and we're 1.2 percent done, fix the F-16 and we're 98.8 percent done.
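To make the "box counting" problem concrete, here's a small sketch in Python contrasting a by-the-system tally with an effort-weighted one. The system names and effort figures are invented for illustration, not real Air Force inventory data:

```python
# Two tallies of the same work, counted differently.  "subsystems" is how
# the inventory lists them; "effort_hours" is an invented repair estimate.
systems = {
    "B-2 avionics":  {"subsystems": 1,  "effort_hours": 40000, "done": False},
    "F-16 avionics": {"subsystems": 82, "effort_hours": 40000, "done": True},
}

# Counting subsystems like boxes of candy:
total = sum(s["subsystems"] for s in systems.values())
done = sum(s["subsystems"] for s in systems.values() if s["done"])
print(f"By system count: {100 * done / total:.1f}% done")   # 98.8% done

# Weighting by estimated repair effort instead:
total_h = sum(s["effort_hours"] for s in systems.values())
done_h = sum(s["effort_hours"] for s in systems.values() if s["done"])
print(f"By effort: {100 * done_h / total_h:.1f}% done")     # 50.0% done
```

Same inventory, same work remaining, and the headline percentage swings from 98.8 to 50 depending on what you count.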

I think I've indirectly referred to examples like that last one here in the forum myself in the past; I didn't know about this one, although I've heard of similar ones from the military.

That other one, about LoC, is right on the money.

-- Drew Parkhill/CBN News (y2k@cbn.org), April 14, 1999



Excellent point. Experienced programmers know that just saying LOC has virtually no meaning re: difficulty of changing. I will take small issue with your reference to APL as "opaque". It's a language which demands eloquence or you can't code it.

-- RD. ->H (drherr@erols.com), April 14, 1999.

RD -

Take it up with Porlier; that was his comment. Drew's more yer ink-stained wretch than yer code-cranker, methinks.

-- Mac (sneak@lurk.hid), April 14, 1999.

Sorry! That was David Eddy's comment about APL, not Victor Porlier's! I read both the articles and transposed the authors...

God loves us humble, and often gives us opportunities to become much more lovable... 8-}]

-- Mac (sneak@lurk.hid), April 14, 1999.

All true, but now flip it around. Let's say you have all these different kinds of systems (planes, COBOL, APL, spreadsheets, distributed processing, you name it). Your task is to find an honest, accurate way to best describe your *overall* progress. No fudging at all. How do you do it?

-- Flint (flintc@mindspring.com), April 14, 1999.

mac (you humble one),

i am not a full-blown code-cranker, but i do understand things like this code is more difficult to deal with than that code :)

-- Drew Parkhill/CBN News (y2k@cbn.org), April 14, 1999.


off the top of my head- invent a degree of difficulty scale, assuming man-hours per project (ie, the easy code project can be largely automated, so we'll give it x hours, while the hard code project must be manual, let's estimate y hours). i don't know, i'm just making things up while i hallucinate... seriously, i'm sure a better metric could be devised than simple percentages.

-- Drew Parkhill/CBN News (y2k@cbn.org), April 14, 1999.

Drew and Flint:

There is a way to measure progress that is used all the time. It has to do with manhours (or personhours for the PC-minded). When a project schedule is developed, the first step is to develop the work scope. The work scope relates to what work items are being scheduled. The next step is to break down the work scope into discrete, meaningful activities. The next step is to develop a logical plan. This is done by linking the activities and showing their interrelationships.

After the above steps are done, manhours are assigned to each discrete activity. Manhours are assigned according to the complexity of the activity; simple activities are assigned fewer manhours than complex ones. The activities are then scheduled and an "original schedule" is established. After the project has started, performance measurement begins. This is done by creating a duplicate of the original schedule and naming it the "planned schedule". The planned schedule is monitored periodically (weekly, bi-weekly or whatever) and responsible individuals status the schedules. They are asked to report the real-world status on the "planned schedule". For example, if an activity was originally planned to consume 50 manhours on the "original schedule" and now it is expected to take 500 manhours, the 500 manhours would show on the "planned schedule".

After the status is completed, the original schedule is compared to the planned schedule. Let's say the original schedule showed that as of a certain date 10% of the schedule should have been completed; if the total project is 500 manhours, then by that date 50 manhours were scheduled to be complete. However, analysis of the planned schedule shows that only 5 manhours have been expended, so instead of being 10% complete they are only 1% complete. Further analysis will show the forecasted completion date and forecasted percentages.
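A minimal sketch of that original-vs-planned comparison, with invented activity names and hour figures:

```python
# Compare the "original schedule" to actual expenditure, in manhours.
# Activity names and all figures below are made up for illustration.
original = {"convert dates": 50, "test module A": 200, "test module B": 250}
expended = {"convert dates": 5, "test module A": 0, "test module B": 0}

total = sum(original.values())                 # 500 manhours in the work scope
scheduled_by_now = original["convert dates"]   # 50 manhours due by the status date
actual = sum(expended.values())                # only 5 manhours actually expended

print(f"Scheduled: {100 * scheduled_by_now / total:.0f}% complete")  # 10%
print(f"Actual:    {100 * actual / total:.0f}% complete")            # 1%
```

The gap between the two percentages, not either number alone, is what tells you the project is slipping.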

Hope this helps. Didn't mean to do a Planning and Scheduling 101 lecture.

-- Watcher (anon@anon.com), April 14, 1999.

Flint makes a good point, especially if we consider the need to translate into terms that the citizenry can respond to. Unfortunately, he used words like "honest and accurate", which is precisely what the powers that be don't want to provide.

Honesty with percentages, IMO, would dictate taking mission-critical systems as a portion of the whole and counting Y2K project completion accordingly as, for instance, "25% of critical, 10% of overall systems".

Next step: take generally agreed measures of start-to-finish task weighting for mixed develop-maintenance projects (which is the way most of us view Y2K stuff) and count against that: 25% of critical through remediation; another 10% of critical through remediation/system test, etc.
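A rough sketch of that two-part reporting idea. The system counts and stage weights below are invented placeholders, not the "generally agreed measures" mentioned above:

```python
# Report critical and overall completion separately, and give a system
# credit only for the project stages it has actually finished.
critical_done, critical_total = 50, 200   # mission-critical systems finished
overall_done, overall_total = 90, 900     # all systems finished

print(f"{100 * critical_done / critical_total:.0f}% of critical, "
      f"{100 * overall_done / overall_total:.0f}% of overall systems")

# Invented start-to-finish task weights for a mixed develop/maintenance project.
weights = {"assessment": 0.15, "remediation": 0.45, "system test": 0.40}

def credit(stages_done):
    """Earned fraction for a system that has finished only some stages."""
    return sum(weights[s] for s in stages_done)

# A system "through remediation" earns 60% credit, not 100%.
print(f"Through remediation: {100 * credit(['assessment', 'remediation']):.0f}%")
```

The point of the weighting is that "remediated" headlines stop counting as "done" before system test has even started.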

It's still LARGELY bogus for all the good technical reasons adduced above (and I could give you a hugely long list) but it would be far less bogus than what we currently see and would help fellow citizens get a much better idea of where we stand.

-- BigDog (BigDog@duffer.com), April 14, 1999.


makes sense. i was thinking of something along those general lines (though i hadn't thought it through very much)

-- Drew Parkhill/CBN News (y2k@cbn.org), April 15, 1999.

My 2¢ worth here. The reporting issues are all political, not a matter of doing a 'right' or 'good' job. Buy time, make it look good so that people stay away from you and your group, as you will need all the breathing room you can get. Down and dirty. You figure if everyone fails fairly equally then you are 'within the norm' and therefore will not be singled out for punishment.

-- David (C.D@I.N), April 15, 1999.

drew, have you seen the april 15 washington post article, "a new center for getting coordinated" by stephen barr? koskinen is setting up a new fed center for collecting y2k data because analysts complain there isn't enough data to make accurate predictions. maybe they are also getting tired of being kept in the dark. not a good basis for optimism yet, is it, that the feds don't know? if they don't know, who the devil does?

-- jocelyne slough (jonslough@tln.net), April 15, 1999.


yes, i did see it. i think the purpose will be disaster relief as well as information gathering.

actually, i don't think anybody really knows :)

-- Drew Parkhill/CBN News (y2k@cbn.org), April 18, 1999.

Interesting - very interesting.

I like the combination (for tracking progress) of what Watcher and BigDog discussed.

On the other hand, if Mr K. is only now getting enough feedback from "his analysts" to find out that they are not getting reliable information of high enough quality, he is about 7 months behind my somewhat amateur level of analysis and predictions. Way, way too late if they only now figured out the current reporting process is inadequate for determining the country's progress.

By last September, last November at the latest, they should have had a list of every federal agency (except DOD, FBI, and CIA if security classification is a concern), and every major division in each department - broken those down into total systems, person responsible, total critical systems, what has to be done in each critical system, what is complete, what is left to do. Concerns, long lead items, and contingency plans for each. This doesn't need to be publicly disseminated, maybe - but they need to know it.

Repeat for each state - but tell the states to give them a monthly update (6 weeks periodicity would have been okay too - at that time) of every state system the same way.

States would be responsible for counties, local governments, school districts, special municipal districts, special regional bodies, and utilities. This would cover city services, hospitals, emergency care, the elderly, and the like; distribution to the poor, needy, and ill would be covered in these to ensure there are no "safety net" holes. The master list would only list every county, and the top 200 cities - as to expected completion date, percent complete, most troublesome issue.

Anybody failing to report or update gets publicly listed.

They should have had international "order of magnitude" impacts ready by December, updated quarterly.

They should have had a central government "bug report" for every IT manager and city manager - whether library or hospital, traffic light system or prison - to be able to look in the database for what the problems are, what other people did, what didn't work, where to look for problems, and even where "no problems" have been found yet. That's equally important - if every fire chief knows that a certain maintenance program fails, each can fix that program and skip looking for false failures.

They should have had each major industrial group's reports ready each month - percent complete for those making progress, and "who won't make it" reports ready privately to allow pressure to be applied in private - publicly if needed, if the SEC reports differ.

All Fortune 500 by percent complete, expected due date, next schedule date, percent ahead and percent behind.

All distributed utilities (power, water, natural gas) by map coordinates and region covered: by percent complete, due date, percent ahead or behind, test date(s) and trouble spots.

If they were looking for real data, this is "some" of the minimum data needed to actually know whether or not the country - as a whole - would be ready for next January. If any part of this data is missing - and by the looks of it, perhaps most is missing, not available, not provided, not requested (or requested but misrepresented) - nobody in the administration can say what systems in what places might be ready by next January, what areas will be affected in what ways, and where help might be needed.

If this story is correct: that they are only now realizing they needed to know this data to manage the crisis - we are in world of trouble. Because the "they" in "they will take care of it" won't even know what "it" is.

Assuming they know what "is" is.

-- Robert A Cook, PE (Kennesaw, GA) (Cook.R@csaatl.com), April 18, 1999.

Big "thank you", Watcher!
Thank you, Robert!

I've been waiting to read (in print) what Eddy said for about a year now. Yay. It's already forwarded to my Y2K correspondents.

-- Grrr (grrr@grrr.net), April 19, 1999.

Well, Robert, as you no doubt know, they don't know what "is" is (that is, we in IT don't). It is instructive and enormously discouraging to do major audits of serious corporations and discover the near (not total, just near) impossibility of building "common denominator" bases for collecting even the simplest metrics, as you justly propose. The Keystone Kops are FAR more advanced than most Fortune 1000 corp(ses).

With CEOs/CIOs who truly UNDERSTOOD the business relevance and were willing to give us the authority we needed to bust heads (politically and technically), it all became possible. Sadly, there weren't (aren't?) too many of them when we were plying that gig.

I darn betcha Ed Y knows whereof I speak on this score and then some.

Anticipating the flames, let me say that it is indeed remarkable how much our technology accomplishes every day in this state. Folks will look back a century from now with a mix of awe and disbelief.

But will the Rube Goldberg structure tolerate the coming Y2K noise overload .... ?

-- BigDog (BigDog@duffer.com), April 19, 1999.

BigDog, I'm not sure they know what "it" (IT) is either.


-- Diane J. Squire (sacredspaces@yahoo.com), April 19, 1999.

(also feeling braindead ... just skimmed the upper "it" responses ... sorry for the redundancy)

Missed that Washington Post article ... thanks.


A New Center for Getting Coordinated

http://search.washingtonpost.com/wp-srv/WPlate/1999-04/15/115l-041599-idx.html

-- Diane J. Squire (sacredspaces@yahoo.com), April 19, 1999.
