Lane Core Does It Again


Lane Core, my favorite programmer/investigative reporter, has hit another homer. He must have bought one of those new Hughes Diamond Point Drill Bits, because he goes through the resistance like sandstone. His next Westergaard essay, to run on Tuesday, June 22, is available now:

www.y2ktimebomb.com/Media/Articles/lcore9925.htm

See Ratcliffe "get it" right between the eyes!

-- Gordon (gpconnolly@aol.com), June 20, 1999

Answers

BULLSEYE!!!

-- Andy (2000EOD@prodigy.net), June 20, 1999.

"We found the that companies that started their remediation work the earliest were the most pessimistic about completing." - Cap Gemini, Y2K consulting firm

"The companies that have been working the longest and hardest seem to be the ones most afraid of suffering catastrophic failures" - Cory Hamasaki, Y2K consultant

"We've gotten classified reports [on the Federal government and Y2K] that are so disturbing they had to be classified." - Fred Thompson, Rep, Tenn.

-- a (a@a.a), June 20, 1999.


Let's make a couple of assumptions here (since we don't know), and see where it leads.

Assumption #1: Companies actually understand their own operations, and have a handle on their exposure based on their own inside knowledge.

If this assumption is true, it's only reasonable that those with the greatest exposure would be most concerned. You would naturally expect such companies to start earlier, work harder, spend more, etc. They've geared their efforts to the size of the task facing them. Those who started later and/or aren't working so hard know that they have less to do.

Assumption #2: Companies don't know what they do for a living, and those working with the code base don't understand it or know what it does.

If this assumption is true, we have a learning curve. Those who started earliest did so purely from random chance. The deeper they got into it, the better they realized how difficult it was. They became more pessimistic and started working harder once they recognized what their code was actually doing. Those who started later and/or aren't pessimistic simply don't yet realize how much trouble they're in.

It seems most likely that the first assumption is more generally applicable than the second, although certainly both hold true in particular situations. The degree and timing of awareness on the part of companies probably span a broad spectrum. The conclusion that *everyone* is clumped at the most desperate far end of the spectrum is simply not justified.

-- Flint (flintc@mindspring.com), June 20, 1999.


Assumption #3: Most managers assume systems complexity grows linearly when it actually grows exponentially. Only when they task programmers with peeling away the layers of their software onions do they realize that a) they underestimated the cost, b) they overestimated their ability, and c) they will not make their deadline.
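
As a minimal sketch of the gap being described here, assuming a COCOMO-style model in which effort grows as size raised to a power greater than one (the coefficients below are the textbook basic-COCOMO "embedded" values, used purely as placeholders, and the project sizes are invented for illustration):

# Illustrative only: a toy comparison of a linear effort estimate with a
# superlinear (COCOMO-style) model, showing why a manager who extrapolates
# linearly from a small pilot underestimates a large job.

def superlinear_effort(kloc: float, a: float = 3.6, b: float = 1.20) -> float:
    """Estimated effort in person-months for a project of `kloc` thousand lines."""
    return a * kloc ** b

def linear_effort(kloc: float, pilot_kloc: float = 10.0) -> float:
    """Naive estimate: scale the pilot project's effort proportionally to size."""
    rate = superlinear_effort(pilot_kloc) / pilot_kloc  # person-months per KLOC at pilot size
    return rate * kloc

for size in (10, 100, 1000):  # KLOC
    naive = linear_effort(size)
    model = superlinear_effort(size)
    print(f"{size:>5} KLOC: linear guess {naive:7.0f} pm, "
          f"superlinear model {model:7.0f} pm, shortfall {model / naive:4.1f}x")

The shortfall factor grows with project size, which is the "software onion" effect: the linear guess looks fine on the pilot and falls further behind the bigger the system gets.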

As Jones and Yourdon and Milne have said, 30 years of software metrics will not "fly out of the window" just because this is a job that "has to get done."

-- a (a@a.a), June 20, 1999.


Assumption #0: Different people in each company have different degrees of understanding of different portions of the company's operations, different perceptions of the relative importance of the subcomponents of each of the company's operations, different amounts of experience in their current positions, different paths by which they reached their current positions, and different budget and planning approval limits, among various other differences.

If this assumption is true, different people in the same company may have very different perceptions of that company's Y2K readiness.

Jerry

-- Jerry B (skeptic76@erols.com), June 20, 1999.



'a':

You raise an interesting point here. Capers Jones has said that his metrics cannot be applied to remediation, which is largely a maintenance task. No metrics have ever been defined for such tasks. Jones admits that his 'function point' analysis is strictly for new development. And without any existing yardstick, there is no quantitative way to measure maintenance task performance.
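
For readers unfamiliar with the measure, here is a rough sketch of how unadjusted function points are counted; the weights are the standard IFPUG "average complexity" values, and the component counts are invented for illustration. The point above stands: there is no comparable count for date fixes buried inside an existing system, which is why the metric maps poorly onto remediation.

# A rough illustration of unadjusted function point counting, the measure
# Capers Jones's productivity data is built on. Weights are the standard
# IFPUG "average complexity" values; the component counts are made up.

AVERAGE_WEIGHTS = {
    "external_inputs": 4,          # screens/transactions that bring data in
    "external_outputs": 5,         # reports, files, messages sent out
    "external_inquiries": 4,       # simple request/response lookups
    "internal_logical_files": 10,  # data maintained inside the application
    "external_interface_files": 7, # data referenced from other systems
}

def unadjusted_function_points(counts: dict) -> int:
    """Sum each component count times its complexity weight."""
    return sum(AVERAGE_WEIGHTS[kind] * n for kind, n in counts.items())

example_system = {
    "external_inputs": 20,
    "external_outputs": 15,
    "external_inquiries": 10,
    "internal_logical_files": 8,
    "external_interface_files": 4,
}

print(unadjusted_function_points(example_system))  # 80 + 75 + 40 + 80 + 28 = 303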

Yourdon's assessment is based on his experience, and is probably pretty good (but it's not a metric). Yourdon recognizes that many remediation projects are 'big', at least measured in terms of man months. His experience (and mine!) is that doubling the size of the task (even measured in terms of appearance to the experienced eyeball) more than doubles the time required to accomplish it.

What we lack is any method of quantifying large maintenance projects not fully completed (except using rough-and-ready measures like bug counts and rate of introduced errors). And for all of Yourdon's vast experience, little if any of it is in software maintenance. We simply don't have 30 years of metrics for such projects, or anything beyond very primitive measurements that aren't very descriptive or predictive. Nonetheless, Yourdon's concerns must be taken seriously.
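
As a hypothetical example of the kind of rough-and-ready measure that is available, here is a small tracker for defects introduced per thousand remediated lines. Every name and figure below is invented for illustration; nothing in this thread reports real numbers at this granularity.

# A crude maintenance-project measure: defects introduced per KLOC of
# remediated code. All names and figures are hypothetical.

from dataclasses import dataclass

@dataclass
class RemediationBatch:
    name: str
    kloc_changed: float      # thousands of lines touched in this batch
    defects_introduced: int  # new bugs traced back to the remediation itself

def introduced_defect_rate(batches: list) -> float:
    """Defects introduced per KLOC changed, across all batches."""
    total_kloc = sum(b.kloc_changed for b in batches)
    total_defects = sum(b.defects_introduced for b in batches)
    return total_defects / total_kloc if total_kloc else 0.0

batches = [
    RemediationBatch("billing", 120.0, 18),
    RemediationBatch("inventory", 45.0, 9),
    RemediationBatch("payroll", 80.0, 21),
]
print(f"{introduced_defect_rate(batches):.2f} introduced defects per KLOC")

A number like this says something about error injection, but nothing about how much of the real work remains, which is the gap being described above.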

Milne, of course, cites these false 'metrics' whenever it supports his argument, and then turns around and claims y2k is unique and can't be compared to *anything* else when *that* supports his argument. Since he doesn't know what he's talking about in either case, citing him as an 'authority' is nugatory.

-- Flint (flintc@mindspring.com), June 20, 1999.


Flint: Milne is a very intelligent fellow, and I'm sure a fast learner. I don't think that your saying "he doesn't know what he is talking about" negates the months, if not years, of research old Paul has put in on this topic. In fact, as we have absorbed more about y2k than 99.9999% of the population, I consider him, myself, you, Gary North, and a bunch of our respective compatriots to be the experts on y2k.

And if I'm not mistaken, I believe there is evidence now that the metrics being produced for y2k remediation to date closely resemble development rates.

-- a (a@a.a), June 20, 1999.


'a':

Yes, I agree that those aspects of remediation we think we know how to measure closely resemble development metrics. Certainly the error rates are similar, and the emerging curve of progress rates against task size looks disturbingly familiar.

What's different is that a lot of remediated code is being returned to production, which follows quite a different pattern than new development (and tends to uncover introduced errors as well, as we've seen all too often). And of course, there's good reason to believe that production code that's been bungled will cause a lot more problems than development code that was canceled before completion. There's plenty of reason to worry.

For many reasons (contingent on the project), remediation tasks face a very wide range of difficulty. Remediation shouldn't be regarded as one single, massive global project. Even within organizations, it's broken up into lots of little projects of varying size and importance. And I don't think we have a good handle on this aspect.

As for Milne's expertise, I'll meet you halfway here. With enough time and effort, you become very good at what you practice. Milne's time hasn't been spent ferreting out the truth, but rather in *creating* a truth using every means of propaganda known. And without doubt, he's become quite expert at this. You can't deny that the technique I mentioned (claiming we have 30 years of metrics and know all about y2k when it suits him, and claiming it's unique and can't be compared to anything else when it suits him) is one he relies on fairly often.

Techniques of effective argument are completely different from techniques of investigation. Milne and North are effective advocates, because their arena is political rather than technical. Their goal isn't to inform opinion, but rather to sway it.

-- Flint (flintc@mindspring.com), June 20, 1999.


Thank you *ALL* for your contribution to this thread, very helpful.

-- Will (sibola@hotmail.com), June 20, 1999.

a,

Is there a URL for the Thompson quote?

-- Linda A. (adahi@muhlon.com), June 20, 1999.



Nugatory?

-- ariZONEa (que_es@nugatory.com), June 20, 1999.

I believe it was just slightly misspelled and should be nougatory. Now, a nougat is defined as "nuts or fruit pieces in a sugar paste." So what he is saying, in a sly way I might add, is that Milne is either nuts or fruity or both, even when he is sugar coating the facts. This is not the way that *I* see Milne, but Flint sometimes has a different way of looking at things than I do. :-)

-- Gordon (gpconnolly@aol.com), June 22, 1999.

Nugatory (adj) (1) Of little or no importance; trifling (2) Having no force, invalid.

From the American Heritage Dictionary.

-- Flint (flintc@mindspring.com), June 22, 1999.

