Scientist Is Fearful of Computer Mutiny

greenspun.com : LUSENET : TB2K spinoff uncensored : One Thread

Scientist Is Fearful of Computer Mutiny
Sun Micro co-founder says replicating robots could replace humans

Joel Garreau, Washington Post Monday, March 13, 2000

WASHINGTON -- A respected creator of the Information Age has written an extraordinary critique of accelerating technological change in which he suggests that new technologies could cause "something like extinction" of humankind within the next two generations.

The alarming prediction, intended to be provocative, is striking because it comes not from a critic of technology but rather from a man who invented much of it: Bill Joy, chief scientist and co-founder of Sun Microsystems Inc., the leading Web technology manufacturer.

Joy was an original co-chairman of a presidential commission on the future of information technology. His warning, he said in a telephone interview, is meant to be reminiscent of Albert Einstein's famous 1939 letter to President Franklin Delano Roosevelt alerting him to the possibility of an atomic bomb.

In a 24-page article in Wired magazine that will appear on the Web tomorrow, Joy says he finds himself essentially agreeing, to his horror, with a core argument of the Unabomber, Theodore Kaczynski -- that advanced technology poses a threat to the human species.

"I have always believed that making software more reliable, given its many uses, will make the world a safer and better place," Joy wrote in the article, which he worked on for six months. "If I were to come to believe the opposite, then I would be morally obligated to stop this work. I can now imagine that such a day may come."

Joy enjoys a level-headed reputation in the industry. "Nobody is more phlegmatic than Bill," said Stewart Brand, an Internet pioneer. "He is the adult in the room."

Joy is disturbed by a suite of advances. He views as credible the prediction that by 2030, computers will be a million times more powerful than they are today. He respects the possibility that robots may exceed humans in intelligence, while being able to replicate themselves.

INEXPENSIVE SMART MACHINES

He points to nanotechnology -- the emerging science that attempts to create any desired object on an atom-by-atom basis -- and agrees that it has the potential to allow inexpensive production of smart machines so small they could fit inside a blood vessel. Genetic technology, meanwhile, is inexorably generating the power to create new forms of life that could reproduce.

What deeply worries him is that these technologies collectively create the ability to unleash self-replicating, mutating, mechanical or biological plagues. These would be "a replication attack in the physical world" comparable to the replication attack in the virtual world that recently caused the shutdowns of major commercial Web sites.

"If you can let something loose that can make more copies of itself," Joy said in a telephone interview, "it is very difficult to recall. It is as hard as eradicating all the mosquitoes: They are everywhere and make more of themselves. If attacked, they mutate and become immune. . . . That creates the possibility of empowering individuals for extreme evil. If we don't do anything, the risk is very high of one crazy person doing something very bad."

What further concerns him is the huge profit to be made from each single advance, which may seem beneficial in itself.

"It is always hard to see the bigger impact while you are in the vortex of a change," Joy wrote. "We have long been driven by the overarching desire to know -- that is the nature of science's quest -- not stopping to notice that the progress to newer and more powerful technologies can take on a life of its own."

Finally, he argues, this threat to humanity is much greater than that of nuclear weapons because those are hard to build. By contrast, he says, these new technologies are not hard to come by. Therefore, he reasons, the problem will not be "rogue states, but rogue individuals."

Joy acknowledges that to some people, this may all sound like science fiction. "After Y2K didn't happen," he said, "some people will feel free to dismiss this, saying everything will work out."

Joy is less clear on how such a scenario could be prevented. When asked how he personally would stop this progression, he stumbled. "Sun has always struggled with being an ethical innovator," he said. "We are tool builders. I'm trailing off here."

http://www.sfgate.com/cgi-bin/article.cgi?file=/chronicle/archive/2000/03/13/MN108057.DTL

-- Lynn Ratcliffe (mcgrew@ntr.net), March 14, 2000

Answers

Earlier thread on this topic:

OT: New Technologies Imperil Humanity - U.S. Scientist at http://www.greenspun.com/bboard/q-and-a-fetch-msg.tcl?msg_id=002lkO

-- No Spam Please (nos_pam_please@hotmail.com), March 14, 2000.


Computer "mutiny" is not so far-fetched. I have to admit, I have a forum-induced bee in my bonnet (in professional parlance, an obsession), that bee being LASCO C-3 (http://lasco-www.nrl.navy.mil/java/lastC3.html).

That century and time display is congruent with the old "public" time/date display (courtesy NRL) which existed prior to correction of the 19100 date expression of GMT, and was based on the location of the inquiring entity. On "Old LASCO," my GMT display was about six hours different from "real" GMT; post-19100 correction, my time and GMT time are the same (and I did a number of print-screen backups for reference). I DON'T THINK SO.

If I'm a little p*****d at the substitution, then imagine how a machine will react when its proper function relies on congruence of input and base structure and unmodified data expression. As I posted above, GICC has a number of postings today (031300) that send up red flags. Something's "coming down," as we said in the olden days... the question is, how significantly will we be affected? Might be time to refocus our gaze from delightful "powdered" bottoms and start getting serious again.
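[For readers puzzled by the "19100" reference: that was a widespread Y2K display bug. Many date APIs (C's struct tm, old JavaScript getYear()) report the year as an offset from 1900, and pages that pasted "19" in front of that offset showed "19100" once the offset hit 100 in the year 2000. A minimal Python sketch of the pattern -- illustrative only, not the actual NRL code:]

```python
# The classic "19100" bug: the API hands back years-since-1900 (100 in the
# year 2000), and the display code glues "19" onto it as a string instead
# of adding 1900 arithmetically.

def buggy_year_display(years_since_1900: int) -> str:
    # Wrong: string concatenation, so 100 becomes "19100"
    return "19" + str(years_since_1900)

def fixed_year_display(years_since_1900: int) -> str:
    # Right: arithmetic, so 100 becomes "2000"
    return str(1900 + years_since_1900)

print(buggy_year_display(100))  # -> 19100
print(fixed_year_display(100))  # -> 2000
```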

-- mike in houston (mmorris67@hotmail.com), March 14, 2000.

mike in houston,

Do me a favor: expand on your post and explain it in such a way that I might understand and comprehend what you are trying to say.

Let's assume I am a layperson (oops, guess I am) and "dumb it down a bit." You may be on to something; help me get a better view of what it is. Thanks.

-- Michael (michaelteever@buffalo.com), March 14, 2000.


And it appears that Steven Spielberg will soon be bringing one version of this apocalyptic tale to a theatre near you: Spielberg Takes Over Kubrick's A.I.

March 15 -- Memoirs of a Geisha. Minority Report. Harry Potter. Indiana Jones. Speculation about which of all the possible projects before him Steven Spielberg would next take on has run rampant in Hollywood for months. On Tuesday, the Oscar-winning filmmaker laid all the rumors to rest with a prepared statement announcing his decision: A.I.

Let it never be said that he lacks for ambition.

A.I. -- short for "artificial intelligence" -- is a project that had long held the interest of the late Stanley Kubrick, a friend and mentor to Spielberg. The DreamWorks magnate, who hasn't directed a film since completing work on Saving Private Ryan, for which he won a Best Director Oscar, says Kubrick spent nearly two decades ruminating on how best to commit the project to film.

"Stanley had a vision for this project that was evolving over 18 years," says Spielberg. "I am intent on bringing to the screen as much of that vision as possible, along with elements of my own."

[snip]

Little of substance is known about A.I.'s plot beyond the fact that it deals with robots who develop self-awareness and the capacity for independent thought. (We can tell you right now not to expect Bicentennial Man II.) The tale is reportedly set in a post-apocalyptic future in which the polar icecaps have melted, submerging much of the planetary landmass. (We can tell you right now not to expect Waterworld II.)...

-- DeeEmBee (macbeth1@pacbell.net), March 15, 2000.

