Programmers sabotaging code?


Don't know if this has been covered in an earlier thread, but I've been reading lately that there inevitably will be (and currently is) a percentage of programmers out there - the spectrum ranging from disgruntled employees through to US and foreign-national cyber-terrorists (the US market has been absolutely flooded with expat contractors, myself included) - who are actively sabotaging code, planting time bombs, etc.

The possibilities for exacerbating the core remediation problems have got to be immense.

Anyone care to comment on a) how widespread this might be, b) the likelihood of success, and c) what order of magnitude of damage successful sabotage could do, given the mainframe and commercial interconnectivity factors of the (ahem) amended code?

Thanks, Andy

-- Andy (andy_rowland@msn.com), November 27, 1998

Answers

You won't be able to distinguish the impact of any sabotage from the impact of unremediated and accidental Y2K bugs.

Most employees, foreign or otherwise, are honest (whether competent or not). Also, if the organisation tests, any deliberate Y2K sabotage is likely to become apparent, and the perpetrator is then in big trouble. If it doesn't test, there's no need for a saboteur; Y2K will get it anyway!

The one exception may be that dishonest employees may insert "back doors" into systems, to allow break-ins later (like maybe 2008, when the trail has gone cold). Such exploits are predicated on the employer surviving Y2K, so the overall effects on you and me of such a hypothetical dishonest programmer are actually better than those of not employing him. Where this is a serious risk, I'd expect the employer to keep a computerised audit trail of who changed what, and to audit the changes made after the immediate panic has subsided. On the other hand, an employer who thought ahead like this would have finished Y2K remediation a year ago!
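
To make the "back door" concrete, here's a minimal hypothetical sketch (Python, purely for illustration - every name in it is invented, not from any real system) of the sort of line a change-by-change audit would be hunting for:

    REAL_USERS = {"alice": "s3cret", "bob": "hunter2"}

    def login(user: str, password: str) -> bool:
        # The legitimate check.
        if REAL_USERS.get(user) == password:
            return True
        # The back door: an undocumented account quietly added during
        # "remediation", ready for a break-in years later.
        if user == "maint" and password == "y2k-fixer":
            return True
        return False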

Just my opinions.

-- Nigel Arnot (nra@maxwell.ph.kcl.ac.uk), November 27, 1998.


This is one of the pieces I read:-

TIME magazine

http://cgi.pathfinder.com/time/digital/daily/0,2822,13799,00.html

When they showed up before the Senate's Government Affairs Committee yesterday, CIA director George Tenet and NSA chief Kenneth Minihan planned to startle their audience with a few scare stories about electronic warfare. But the panel was more curious about the Year 2000 problem. "Is there a national security or military relevance to this computer 2000 problem?" asked committee chairman Fred Thompson. Definitely, replied NSA's Minihan: "There's no question."

What spooks the spooks is the possibility that malicious programmers -- perhaps employed by a hostile nation -- will promise to remove Y2K bugs but will quietly insert new ones. Or that malcontents will leak proprietary information about a company whose system they've been hired to fix. "We're watching it very, very carefully. We're working with the [FBI] to understand whether anybody's organizing a threat," CIA's Tenet said.

What about nuclear weapons? Won't they be at risk, Thompson wondered, if they're not repaired in time? The CIA dodged the question. "We can talk about that in a classified session," Tenet replied. Hmm...

Minihan was more downbeat ("It would be illusory of anybody to tell you we're going to get our hands around this, it will be OK and we'll guide you through it") than his colleague. Still, Tenet admitted, "we've got to be careful not to construct catastrophic scenarios. But the fact is that a bank that is unable to transact business in a country that's experiencing financial difficulties in 2000 creates greater problems."

-- Andy (andy_rowland@msn.com), November 27, 1998.


Surely the issue of sabotaging code has little connection with y2k, except that more maintenance is being done, probably at the expense of other projects. You have the same risk of sabotage no matter what - or do you think there is somehow a greater risk on a y2k project? There would have to be some degree of collusion for sabotage to succeed; it depends on whether there is a procedure for desk-checking code.

-- Richard Dale (rdale@figroup.co.uk), November 27, 1998.

I think this is a serious issue. AND, I expect the Government to use a few valid cases of sabotage to justify a much broader Y2K failure. That way, when the IRS or HCFA, etc. crash, it's not "their" fault. It's the fault of unnamed "cyber-terrorists".

A lot of coding changes are being done in cyberspace by foreign nationals. I can't remember the name of the bank, but they are exporting about half of their Y2K work to India. As for how easy it would be to insert a bug that would get past the casual "testing" now being done - very, very easy. Most testing procedures use certain key dates: 12/31/1999, 01/01/2000, 02/29/2000, etc. These dates are well known within a given organization. Given that, can you think of an easy way to insert a problem?
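
To illustrate - a purely hypothetical Python sketch, not drawn from any real remediation project - a trigger keyed to a date outside the famous list sails straight through a test plan built around those key dates:

    import datetime

    # The well-known test dates all pass; the bomb fires on an obscure
    # date nobody tests. The trigger date here is arbitrary and invented.
    TRIGGER = datetime.date(2000, 3, 15)

    def expand_year(yy: int, today: datetime.date) -> int:
        """Two-digit-year windowing: 00-49 -> 2000s, 50-99 -> 1900s."""
        century = 2000 if yy < 50 else 1900
        if today >= TRIGGER:  # the sabotage: silently flip the window
            century = 1900 if yy < 50 else 2000
        return century + yy

    # Passes the usual key-date tests...
    assert expand_year(0, datetime.date(2000, 1, 1)) == 2000
    assert expand_year(99, datetime.date(1999, 12, 31)) == 1999
    # ...but corrupts every expanded date once 03/15/2000 rolls around.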

-- R. D..H (drherr@erols.com), November 27, 1998.


Yes, wouldn't the US feds just love to blame Y2K on cyber-terrorism? They've been gearing up to do just that, showing of course that they have been aware of the scapegoat potential for a long time.

Quis custodiet ipsos custodes?

Who watches the watchers?

-- Donna Barthuley (moment@pacbell.net), November 27, 1998.



Assuming "it can't be fixed in time," it's pretty easy to figure out, globally, what needs to be done next. Sure hope Big Brother is watching Big Brother too.

-- Diane J. Squire (sacredspaces@yahoo.com), November 27, 1998.

Read Debt of Honor by Tom Clancy. Have to go pee.

-- fly . (.@...), November 27, 1998.

I've mentioned this before on other threads, but it seems apt for the topic here -- last March the New York Times did a major article on computer security. Although not specifically related to y2k, the reporter mentioned that there had already been several cases where overseas y2k work -- he mentioned Russia and India specifically -- had come back to the United States with illicit back doors added. In the Russian cases, the assumption was that the Russian successor to the KGB was using y2k work to install the back doors to allow future economic and intelligence snooping -- i.e. using a back door into a defense contractor's computer to not only browse the contractor's files but also use that access to get into DOD files. Just another example of the Law of Unintended Consequences, I guess.

-- JDClark (yankeejdc@aol.com), November 27, 1998.

Just a quick note: backdoors are not usually for espionage or sabotage. Programmers often leave a few debugging facilities in production code to facilitate unexpected problem resolution. Could they be used for nefarious purposes? Sure. But they usually aren't. However, it now occurs to me that the Feds could use these "backdoors" as "proof" of cyber-terrorism!!
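
For what a benign leftover debug facility might look like - again a hypothetical Python sketch with invented names, not any real system - consider:

    import os

    def process_payment(record: dict) -> None:
        # Leftover debug switch: benign in intent, but to an outside
        # auditor it looks exactly like a back door into internal data.
        if os.environ.get("ACME_DEBUG_TRACE") == "1":
            print(f"DEBUG: raw record = {record}")
        # ... normal payment processing would go here ...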

-- R. D..Herring (drherr@erols.com), November 27, 1998.

In order of likelihood of occurring:

1) Plain errors, caught by initial or secondary testing

2) Plain errors affecting system interfaces, caught during system testing or during actual use, whichever comes first.

3) Not-so-plain errors, caught during integrated system testing or (more likely) during actual use post-2000.

4) Deliberate "testing shortcuts" or coding simulations (backdoors, sometimes) accidentally left in the code to let system testing begin - found during testing, during use, or during post-2000 operation (a sketch of this follows the list).

5) Incomplete coding left in place, or left "hidden" in the original program, that becomes "exposed" by Y2K changes.

6) Actual sabotage, including trapdoors, that doesn't work, or that is discovered during code reviews or testing but not recognized as potential sabotage.

7) Actual sabotage that works and remains undiscovered.
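
Item 4 deserves a picture. A purely hypothetical Python sketch (invented names, no real system) of a testing shortcut that escapes into production:

    import datetime

    FORCE_TEST_DATE = True  # the shortcut that should have been removed

    def current_date() -> datetime.date:
        if FORCE_TEST_DATE:                   # the forgotten override,
            return datetime.date(2000, 1, 1)  # pinning "today" to rollover day
        return datetime.date.today()

    def days_until_due(due: datetime.date) -> int:
        return (due - current_date()).days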

-- Robert A. Cook, P.E. (Kennesaw, GA) (cook.r@csaatl.com), November 27, 1998.


