50% of companies won't be ready (analysis of Yardeni data)

greenspun.com : LUSENET : TimeBomb 2000 (Y2000) : One Thread

Ok, I admit I'm posting this here in a shameless plug to sell copies of my novel ( Noontide Night - A Y2K Novel ), but since I'm donating all the profits to the American Red Cross, I hope y'all don't mind. :-)

Anyway, I've been doing some curve-fitting analysis of data from Dr. Ed Yardeni's Y2K polls (the ones in conjunction with CIO magazine), which have gathered ~1000 responses from folks purportedly intimately familiar with remediation efforts in their (mostly very large) companies.

What disturbed me is that -- despite the usual managerial optimism -- only about 50% of organizations will actually complete their Y2K remediation work by January 1st.

The data for actually-done-by dates very, very nicely fit a normal ("bell") curve, as common sense would indicate they should. With the mean sitting right on January 1st. That puts half finishing afterwards. Urk.
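The curve fit described above can be sketched in a few lines of Python. The cumulative "done by" figures below are illustrative stand-ins, not Yardeni's actual tabulations, and the brute-force grid search is just one simple way to estimate the mean and spread of a normal curve from cumulative data:

```python
import math

def norm_cdf(x, mu, sigma):
    """Cumulative distribution function of a normal ("bell") curve."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Months relative to January 1, 2000, paired with the cumulative
# fraction of companies reporting remediation complete by that point.
# Illustrative stand-ins, NOT Yardeni's actual tabulations.
data = [(-4, 0.09), (-2, 0.25), (-1, 0.37), (0, 0.50), (3, 0.84)]

# Brute-force least-squares fit of (mu, sigma) over a coarse grid.
best = None
for mu_tenths in range(-30, 31):        # mean: -3.0 .. +3.0 months
    for sd_tenths in range(5, 61):      # spread: 0.5 .. 6.0 months
        mu, sigma = mu_tenths / 10.0, sd_tenths / 10.0
        err = sum((norm_cdf(t, mu, sigma) - frac) ** 2 for t, frac in data)
        if best is None or err < best[0]:
            best = (err, mu, sigma)

_, mu, sigma = best
late = 1.0 - norm_cdf(0.0, mu, sigma)   # fraction finishing after Jan 1
print(f"fitted mean {mu:+.1f} months, sigma {sigma:.1f}; late share {late:.0%}")
```

With a fitted mean sitting at the rollover date, the late share comes out at half, which is the whole point: wherever the mean of the curve lands, that's the date by which only 50% have finished.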

If you look at his poll data, it's unnerving how the percentages for future "will finish by" dates keep jumping up. Yet compare the previous "will finish by" projections to the later tabulated "did finish by" results, and they fall well short -- the projections prove extremely optimistic. Applying this correction to each of Yardeni's data sets shows the same unfortunate result: only about half will actually make it by January 1st.

(Now, what impact 50% of all organizations in the US not being Y2K compliant will have, I don't know. I know there's a 30-year low unemployment rate, which is basically as low as it can get, so there's no labor pool out there to hire help to "do things manually" as most contingency plans call for. And with even "compliant" systems failing, it just doesn't bode well that so many unfortunate events will coincide.)

For more details on the math, etc., please see the assorted pages linked to the above, notably the www.noontidenight.com/news2.html, news.html, and ready.html pages.

For background, I might mention that I've been a comp. sci. professor for a dozen years and a programmer for 20+ years (UNIX kernel hacker sort, data warehouses, AIs, etc.). I've educated thousands of students, and know what sort of programmers and managers we've got out there, and I think I have a feel for how overly optimistic they can be -- they're the same guys & gals who'd come into office hours minutes before an assignment was due and tell me the program was almost perfect, 95% done, it just needed a little tiny fix, could I help? It would turn out what they meant by "almost perfect" was that it finally had compiled without syntax errors. I really hated explaining to them that meant they were actually about 1/3 done. I see that same incredible optimism in Yardeni's data, where 55% of companies say they're "91-99% complete". Pardon me if I still don't believe my former students and their kin. :-)

While I'm not predicting the end of civilization as we know it, I do think we're in for a rocky bit of time, and want to share what I believe are fairly unbiased bits of analysis.

I also think that if you're interested in the topic, you'll enjoy my novel. :-) It's fiction, so I took the liberty of playing with what I think are the *reasonable* worst case scenarios. I think everything in there *could* happen, though I really truly hope everything sails smoothly as my optimistic former students say. But it's an entertaining story (so say the many and reputable reviewers). And I'm not keeping a dime of the money, since I believe the Red Cross needs it more, and I want to encourage preparedness to reduce panic. Not to mention I've included some speculations on how the government might handle a cyberwar that, even if never carried out (let's hope!), should both chill your spine & tickle your funny bone.

I'd be glad to elaborate here or in email on any questions folks might have. (And I'd be even more delighted to send you autographed books. :-) Links to Amazon, Barnes & Noble, etc. are on the page for your convenience as well.)

Regards, -- Andrew Burt

-- Andrew Burt (y2k@noontidenight.com), December 08, 1999

Answers

Andrew,

Is there any chance you can post the charts you developed based on Dr. Ed Yardeni's Y2K polls? It would be a service to this forum to see the poll results plotted as a function of time.

Thanks!

-- Brian E. Smith (besmith@mail.arc.nasa.gov), December 08, 1999.


Well noticed. "Compliance" now just seems to mean "will finish by end of 1999". If that were acceptable, why wasn't every utility and company declared compliant as soon as they started remediation? When did the rules change?

Oh, silly me, I forgot, there are no rules. We're making this up as we go. :(

-- Servant (public_service@yahoo.com), December 08, 1999.


Andrew:

I'm not a comp. sci. person in any way, but my original guesstimate, based on how companies fudge things to themselves, was that we would be very lucky if 40% of the Fortune 1000 were "ready" by 01/01/2000. And these are the companies that are supposed to be leading the way.

Please, I would love to see the graphs.

-- mushroom (mushroom_bs_too_long@yahoo.com), December 08, 1999.


Unbiased analysis is a very tricky and subtle thing to do.

Let's say (for example) that all companies have reduced their exposure to a practical minimum -- capable of being handled quickly and easily by the normal maintenance staff working normal hours. Percentage of companies completely ready - ZERO! Oh my, that's awful.

Now, let's say half the companies are 100% complete, and the other half never even started and will become nonfunctional quickly. Percentage of companies completely ready - 50%. Ah, that's MUCH better. Isn't it?

I've noticed a very consistent pattern here. As organizations get closer and closer, and the dangers each faces get smaller and smaller, the "unbiased" calculations tend to move away from percentage of BUGS fixed, and in the direction of percentage of companies complete. The latter is ALWAYS worse than the former, and a favorable picture becomes bleak again.

NOBODY will complete their remediation. Finding every last date bug isn't feasible, and testing to "prove" this isn't possible. This analysis is a case of GIGO. "Complete" isn't required. "Close enough" is required. Which includes everyone, whether they "claim" they'll be complete or not.

-- Flint (flintc@mindspring.com), December 08, 1999.


Sure, I've whipped up a more detailed analysis, with charts & numbers. It's at: www.noontidenight.com/slippage.html

Have a look, and I'll be glad to discuss here.

As to the reply above about the rest of the companies getting closer, and perhaps being better able to manage their (fewer) failures... yeah, but have a look at that data. It shows that the 50% who aren't done aren't likely to *be* done for many more months. That suggests to me that they're not as likely to be able to manage those failures as one might hope for. (I mean, if a company has six months more work to do, they can't squeeze that into the first week of January if things go south.) But yes, the point is well taken that those 50% who aren't ready will be in various stages of "almost ready" -- and the curve fit demonstrates just how (un)ready they are (how many months away what percent are from completion). I can't say I find it a reassuring picture!

As for the definition of "completion", and whether the respondents are going with the sense of the reply above, that they'll never finish, well, if that were so, then I'd expect a higher percent of the poll respondents to have said they'd finish after 2000. But... no... only 8.1% answered the question saying they'd finish after 2000. The vast majority seem to have understood the question as one in which they want to give a pre-2000 answer. What I'm saying, with my curve fit, is that the 28.4% who said they'd be done in November and the 22.1% who said they'd finish in December, are very, very ummmmm, "optimistic". (As was demonstrated from the September poll vs. the November poll -- only about half the 31% who said they'd finish by the end of September actually did.)
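The discounting argument in that parenthetical can be made concrete. Here's a minimal sketch using the figures quoted above; note that treating the September slippage ratio as applying uniformly to the later months is an assumption of the sketch, not something the poll data establishes:

```python
# Of the 31% who said they'd finish by end of September, only about
# half actually did (figures as quoted in the thread).
claimed_by_sep, actual_by_sep = 0.31, 0.155
slippage = actual_by_sep / claimed_by_sep          # ~0.5

# Apply the same discount to the later "will finish by" claims.
claims = {"November": 0.284, "December": 0.221}    # poll percentages
realistic = {month: share * slippage for month, share in claims.items()}

print(f"slippage ratio: {slippage:.2f}")
for month, share in realistic.items():
    print(f"discounted {month} share: {share:.1%}")
print(f"claimed Nov+Dec: {sum(claims.values()):.1%}, "
      f"discounted: {sum(realistic.values()):.1%}")
```

Roughly 50% of respondents claim November or December completion, but discounted by the observed slippage that shrinks to about 25% -- which is how the overall "only half make it" picture emerges once the earlier finishers are added back in.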

I know it _is_ hard to be unbiased (and hey, I'm trying to get you to buy a novel here, so I'm a little biased too :-), but since I'm donating the profits to charity I hope that makes it seem a little less likely that I'm just a carpetbagger!). I really have nothing to gain by my analysis. I'm just trying to shed light on a fuzzy subject that the media seem bored with and a lot of other folks seem incredibly optimistic about yet lack data to back up that optimism.

Few people in mid-October of 1929 thought anything like Black Tuesday or the Great Depression were just around the corner, either, yet in retrospect the warning signs were there. I look around and see warning signs _now_, and just hope I'm wrong, and nothing much happens in January. I really don't want to read about how all the warning signs were there in 1999 but folks didn't see them...

-- Andrew Burt (y2k@noontidenight.com), December 08, 1999.



Andrew:

I suspect you're running afoul of the fog of interpretation here. In order for your curve fitting to *mean* something, you must impose a clear interpretation of what "ready" means, *from the perspective of the respondents*. As many have (IMO correctly) pointed out on this forum, the term "ready" is extremely hazy. We've moved away from the term "compliant" since by (usual) definition "compliant" is a binary condition -- you are either compliant or you are not. However, "ready" is a spectrum.

If one of your students studies hard for an exam, is that student "ready" for the exam? Presumably so. Now, let's say that student spends one extra full day studying just to be on the safe side. That student is still "ready", right? But is it the same "ready"? Ready is a matter of degree.

From that same CIO poll, you find that nearly all of those who do NOT expect to be ready, nonetheless expect no "significant" business problems as a result. The clear implication is that there is no clear definition of readiness -- the question "compared to what?" is never answered.

It's entirely likely that date bugs will continue to be encountered by every large enterprise for at least a decade, at a declining rate asymptotic to zero. Every one of them now finds itself *somewhere* on this bug-incidence-rate curve (probably every large enterprise has already suffered date handling errors, so the curve is active as we speak).
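A declining rate asymptotic to zero is often modeled as exponential decay. As a sketch only -- the initial rate and the decay constant below are invented numbers, not anything derived from the surveys:

```python
import math

# Hypothetical date-bug incidence curve: r0 bugs/month at rollover,
# decaying with time constant tau (both values are invented).
r0, tau = 100.0, 6.0

def rate(t_months):
    """Expected date bugs encountered per month, t_months after rollover."""
    return r0 * math.exp(-t_months / tau)

# The rate never reaches zero, but a decade out it is negligible.
print(f"month 0: {rate(0):.1f}, month 12: {rate(12):.1f}, "
      f"month 120: {rate(120):.2e}")
```

The shape, not the particular numbers, is the point: each organization sits somewhere on such a curve, and "complete" just means being far enough down it that the residual rate is manageable.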

I agree we're in for a rocky bit of time, but I don't think your analysis contributes usefully towards quantifying the rocks. Your data simply lack sufficient definition and detail to do so.

As for indications of coming problems, these always exist. Whole books were written about how to survive the second Great Depression that would plunge the decade of the 1990's into misery. These books contained hundreds of pages of signs, omens and portents, all carefully interpreted and analyzed. As predictors of the future, what makes us so inaccurate is that the world is just filled with unmistakeable signs pointing in ALL directions all the time. Pick a direction, and you can make a solid case. ANY direction, it hardly matters.

I wish you the best of luck with your book, though.

-- Flint (flintc@mindspring.com), December 08, 1999.


Dear Professor,

Thank you for describing how they **unreasonably** botch code at the last second before show time.

Now, please explain how that justifies your (drum roll) **reasonable** worst case scenario.

I do believe **reasonable** and **unreasonable** don't live together in the same space, unless, of course, you are talking pure fiction.

-- paul leblanc (bronyaur@gis.net), December 08, 1999.


Flint... I agree that the 1200 folks who answered Dr. Yardeni's poll may have an assortment of thoughts in mind as they answer it. However, I think we can shed some light on it, and thus that my analysis has some merit. For starters, Dr. Yardeni's question was, "When did you or do you expect to finish all phases of the Y2K project, including testing." That's fairly specific. The fact that so few gave answers after January makes it clear, I think, that they were answering a question about those rocks you mention. :-) What also is indicative of the unjustifiable optimism I'm trying to alert folks to is that the numbers kept slipping from one of Yardeni's polls to the next. They've been personally optimistic the whole time; yet their "will be done by" date keeps getting later and later, and when you compare the results of how many _did_ finish vs. when they previously _thought_ they'd finish, they didn't finish when they said they would.

I think it's an interesting demonstration, mathematically, of how managers and the like remain optimistic even in the face of danger. There are so many precedents for human hubris... 1929, the Titanic, etc. Even if I'm all wet (and I *want* to be wrong!), I figure it doesn't hurt to sound a mathematical/logical alarm, and it might well help. I think the definition of "ready" is sufficiently useful within the scope of the analysis that the analysis is pointing out *something*. If I turn out to be wrong -- yippee! But I do hope nobody is _assuming_ I'm wrong and not planning accordingly. My primary message at all times has been one of "just in case" preparedness to reduce panic.

To paul leblanc.... I confess and apologize that I haven't a clue what point you're trying to make! :-) The only "worst case scenario" I talked about was in regard to my novel. The data analysis is all "real world" stuff, and has nothing to do with "cases", best or worst. As for "botch code at the last second before show time" -- huh? Who are we talking about here? I *think* you mean the students I was talking about, coming in for help right before an assignment was due, and finding out the bad news from me that they were much further from done than they thought. But they didn't "botch code at the last second" -- they'd generally been working diligently (as diligently as student programmers, or paid programmers, do; procrastination is a common human trait...!). They thought they'd been working as hard as they could on the assignments and for a long time. They always said so, anyway. :-) My point was that their concept of "ready" was terribly flawed -- the common new-student assumption that once it compiles cleanly it must be nearly done. :-) I'm not saying professional programmers or managers suffer from this exact same misconception, but that they (based on my data) appear to be suffering a similar *kind* of delusion about how their code will work when put to the ultimate test.

-- Andrew Burt (y2k@noontidenight.com), December 08, 1999.


Andrew:

I think those surveyed are talking about two different forms of readiness here, but I'm not quite sure. The responses tend to get the two confused.

I absolutely agree with you that those surveyed are running out of runway, and are admitting this indirectly by cramming their projected completion dates into a shrinking arbitrary window. In Yardeni's sense, they are not going to be ready. That is, they won't have completed all phases of the remediation project (except the phase where they find and correct errors later). Altogether too much testing will be done live, ready or not.

The other sense of "ready" is both critical and problematical, since it addresses likely impacts rather than projected dates of completion. Just how debilitating will these date bugs prove to be anyway? As I'm sure you're aware, a good programmer can introduce thousands of bugs into a large system, and if it's done carefully, the effect will be barely noticeable. That same programmer can introduce a single bug that will bring down the entire system for some period of time. Bug effects are wildly variable.

That brings us to the important practical issue of projecting the degree of success at dealing with problems in real time. One organization can have remediated a much higher percentage of date bugs than another and still be more crippled by the remainder, depending on the exact nature of the bugs remaining.

We have very little good data to go by here. Surveys indicate that nearly all large organizations have been encountering date bugs during the past year, at some unknown rate. Forward-looking code (almost anything with an expiration date or renewal date, and most delivery or performance dates by now) isn't that uncommon. What *is* uncommon is any noticeable degradation of performance as a result. So as opposed to problems with whole new software systems being implemented, problems dealing with mishandled dates per se appear to be (on the whole) extremely tractable.

I think the optimism of those surveyed with respect to the effect of date bugs on the business itself could well be justified. Most such bugs are easy to find and quick to repair correctly. I think even those who don't expect to complete all phases until months into the new century understand that the resulting loss of efficiency within the organization as a whole will be minimal because the problems that arise will be manageable (given long hours, temporary workers implementing manual but temporary workarounds, etc.)

I suspect few of those organizations in the "unready" category will experience problems as serious as Hershey's, yet Hershey is well into recovery and financially healthy despite suffering big problems during the worst possible season to have them.

I can't guarantee any of this, of course, but I think that's the way to bet.

-- Flint (flintc@mindspring.com), December 08, 1999.

