Comments on Vinge's 'singularity' from the digiterati
greenspun.com : LUSENET : Human-Machine Assimilation : One Thread
Discussion of Singularity
Here are 15 selected posts from the Vinge singularity discussion, which took place on the Extropians email list 6-19 Sep 1998.
The full discussion can be found in the list archives.
- Hal Finney, 7 Sep
- Hal Finney, 7 Sep
- Robin Hanson, 7 Sep
- Robin Hanson, 11 Sep
- Vernor Vinge, 16 Sep
- Robin Hanson, 16 Sep
- Damien Sullivan, 13 Sep
- Damien Broderick, 17 Sep
- Eliezer S. Yudkowsky, 18 Sep
- Michael Lorrey, 16 Sep
- Vernor Vinge, 19 Sep
- Robin Hanson, 18 Sep
- Eliezer S. Yudkowsky, 18 Sep
- Damien Sullivan, 18 Sep
- Mitchell Porter, 20 Sep
Date: Mon, 7 Sep 1998 18:02:39 -0700
From: Hal Finney
Message-Id: <199809080102.SAA02658@hal.sb.rain.org>
To: firstname.lastname@example.org
Subject: Singularity: Are posthumans understandable?

[This is a repost of an article I sent to the list July 21.]

It's an attractive analogy that a posthuman will be to a human as a human is to an insect. This suggests that any attempt to analyze or understand the behavior of post-singularity intelligence is as hopeless as it would be for an insect to understand human society. Since insects clearly have essentially no understanding of humans, it would follow by analogy that we can have no understanding of posthumans.

On reflection, though, it may be an oversimplification to say that insects have no understanding of humans. The issue is complicated by the fact that insects probably have no "understanding" at all, as we use the term. They may not even be conscious, and may be better thought of as nature's robots, of a similar level of complexity to our own industrial machines.

Since insects do not have understanding, the analogy to humans does not work very well. If we want to say that our facility for understanding will not carry over into the posthuman era, we need to be able to say why the insects' corresponding facility would not work when applied to humans. What we need to do is to translate the notion of "understanding" into something that insects can do. That makes the analogy more precise and improves the quality of the conclusions it suggests.

It seems to me that while insects do not have "understanding" as we do, they nevertheless have a relatively detailed model of the world with which they interact. Even if they are robots, programmed by evolution and driven by unthinking instinct, their programming still embodies a model of the world. A butterfly makes its way to flowers, avoids predators, knows when it is hungry or needs to rest. These decisions may be made unconsciously, like a robot's, but they do represent a true model of itself and of the world.
What we should ask, then, is whether insects' models of the world can be successfully used to predict the behavior of humans, in the terms captured by the models themselves. Humans are part of the world that insects must deal with. Are they able to model human behavior as successfully as they model other aspects of the world, so that they can thrive alongside humanity?

Obviously insects do not predict many aspects of human behavior. Still, in terms of the level of detail they attempt to capture, I'd say they are reasonably effective. Butterflies avoid large animals, including humans. Some percentage of human-butterfly interactions involve attempts by the humans to capture the butterflies, and so the butterflies' avoidance instinct represents a success of their model. The same goes for many other insects for whom the extent of their model of humans is "possible threat, to be avoided". Other insects have historically thrived in close association with humans, such as lice, fleas, ants, roaches, etc. Again, without attempting to predict the full richness of human behavior, their models successfully express those aspects they care about, so that they have been able to survive, often to the detriment of the human race.

If we look at the analogy in this way, it suggests that we may expect to be able to understand some aspects of posthuman behavior, without coming anywhere close to truly understanding and appreciating the full power of their thoughts. Their mental life may be far beyond anything we can imagine, but we could still expect to draw some simple conclusions about how they will behave, at the level we can understand. Perhaps Robin's reasoning based on fundamental principles of selection and evolution falls into this category.
We may be as ants to the post-singularity intelligences, but even so, we may be able to successfully predict some aspects of their behavior, just as ants are able to do with humans.

Hal
Date: Mon, 7 Sep 1998 18:00:49 -0700
From: Hal Finney
Message-Id: <199809080100.SAA02641@hal.sb.rain.org>
To: email@example.com
Subject: Re: Singularity - Clarifying Timing Claims

Robin Hanson writes:
> Max More and I both took issue with Vinge's timing claim, but Vinge
> just refers Max More to his reply to Nick Bostrom, which is:
> [...]
> o We humans now are developing devices which can run simulations
>   faster than our internal, biological "hardware" can do. I think
>   it's plausible that the accompanying speedup will have the
>   appearance of "Verticality" over the human phase.

I would add to this that the simulations are not only faster, but potentially larger, more complex, more detailed, and more realistic than the level of modelling which can be done by a human mind. Consider Deep Blue, capable of looking at billions of chess positions with perfect accuracy.

> Before this discussion can proceed further, I think we need to get clear
> on what exactly Vinge is claiming in these two passages. I'd be most
> interested in how others translate these, but my reading is:
>
> "Progress" rates increase with the speed of the processors involved.

I find that this phrasing invites the assumption of a simple relationship between the two, which is probably not what Vinge has in mind. What I take him to be saying is that this new resource will be able to greatly increase the rate of progress by facilitating simulations at an unprecedented level. However, there is presumably a threshold effect, where primitive simulation tools do not play a significant role.

> Now it's not clear what "progress" metrics are valid here, but if
> economists' usual measures are valid, such as growth rate in world
> product, there are two immediate problems with this theory:
>
> 1) Progress rates increased greatly over the last hundred thousand
>    years until a century ago without any change in the cycle speed of
>    the processors involved.

No doubt there are many factors influencing the rate of progress.
It is still possible that adding powerful computers to the mix will make a difference.

> 2) Computer processor speeds have increased greatly over the last
>    century without much increase in rates of progress.

It could be that the amount of computer power available is still too small to make a significant contribution. One metric sometimes used is total brainpower vs total computer power. If we assume that the latter continues to grow as the product of Moore's law and economic growth rates, then the total human+computer power can be expected to be dominated by computers in a few decades. If computers can be given problem-solving heuristics comparable in power to those used by humans, and if their total computational power becomes thousands, then millions, then billions of times greater than that of humanity, then it is plausible that problem-solving abilities will increase by a similar factor.

It may be that economic growth rates won't be the most appropriate metric for progress. The large installed base of existing technologies and natural human conservatism may put us into a new kind of economy. New ideas will be coming out of the labs far faster than they can be incorporated into society as a whole. We might see a form of "social shear" where some people are pushing forward as fast as they can while others are hanging back, and still others are trying to hold society together. (Unfortunately shear strain often resolves itself catastrophically.)

Hal
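[Editorial aside: Hal's "total brainpower vs total computer power" metric can be made concrete with a back-of-the-envelope sketch. Every constant below is an illustrative assumption, not a figure from the post; the point is only the shape of the calculation, compounding Moore's law with economic growth until installed computer power passes an assumed total of human brainpower.]

```python
# Sketch of the brainpower-vs-computer-power crossover metric.
# All constants are illustrative assumptions.

HUMAN_POPULATION = 6e9       # people, ca. 1998
OPS_PER_BRAIN = 1e16         # assumed equivalent ops/sec per human brain
TOTAL_BRAINPOWER = HUMAN_POPULATION * OPS_PER_BRAIN

computer_power = 1e15        # assumed total installed ops/sec in 1998
MOORE_DOUBLING_YEARS = 1.5   # assumed years per doubling of price-performance
ECONOMIC_GROWTH = 1.03       # assumed annual growth in computing spending

# Compound both effects year by year until computers dominate.
year = 1998
while computer_power < TOTAL_BRAINPOWER:
    computer_power *= 2 ** (1 / MOORE_DOUBLING_YEARS) * ECONOMIC_GROWTH
    year += 1

print(f"Under these assumptions, total computer power passes "
      f"total human brainpower around {year}.")
```

With these particular numbers the crossover lands a few decades out, consistent with Hal's claim; changing the assumed per-brain figure by several orders of magnitude shifts the date by only a decade or two, since growth is exponential.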
Date: Mon, 7 Sep 1998 21:00:43 -0700 (PDT)
Message-Id:
In-Reply-To: <199809080100.SAA02641@hal.sb.rain.org>
To: firstname.lastname@example.org
From: Robin Hanson
Subject: Re: Singularity - Clarifying Timing Claims

Hal Finney writes:
>> "Progress" rates increase with the speed of the processors involved.
>> ... there are two immediate problems with this theory:
>>
>> 1) Progress rates increased greatly over the last hundred thousand
>>    years until a century ago without any change in the cycle speed of
>>    the processors involved.
>
>No doubt there are many factors influencing the rate of progress.
>It is still possible that adding powerful computers to the mix will make
>a difference.

A difference yes. But without a relatively direct relationship between growth rates and processor speeds, it's not at all obvious that the difference is enough to induce the very rapid growth scenario Vinge describes.

>One metric sometimes used is total brainpower vs total computer power.
>If we assume that the latter continues to grow as the product of Moore's
>law and economic growth rates then the total human+computer power can
>be expected to be dominated by computers in a few decades. If computers
>can be given problem-solving heuristics comparable in power to those used
>by humans, and if their total computational power becomes thousands,
>then millions, then billions of times greater than that of humanity,
>then it is plausible that problem-solving abilities will increase by a
>similar factor.

This may be a plausible way to think about growth, though I'm not sure it is what Vinge has in mind. Even granting that the important parameter is the total computer power, and assuming it will continue to grow at present rates even when it dominates the economy, you get maybe a doubling time of two years, which I'm not sure is enough to be Vinge's fast growth scenario.

>It may be that economic growth rates won't be the most appropriate
>metric for progress.
>The large installed base of existing technologies
>and natural human conservatism may put us into a new kind of economy.
>New ideas will be coming out of the labs far faster than they can
>be incorporated into society as a whole. We might see a form of
>"social shear" where some people are pushing forward as fast as they
>can while others are hanging back, and still others are trying to hold
>society together.

But this is the way economic growth has been for a long time. There have always been lots more ideas than we can implement, and the rate at which changes can be absorbed has long been a limiting factor.

Robin Hanson  email@example.com  http://hanson.berkeley.edu/
RWJF Health Policy Scholar, Sch. of Public Health  510-643-1884
140 Warren Hall, UC Berkeley, CA 94720-7360  FAX: 510-643-8614
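[Editorial aside: Robin's "doubling time of two years" point can be quantified. The sketch below compares compound growth at an assumed historical world-product rate against growth at a two-year doubling time; the rates are illustrative assumptions, chosen only to show why fast-but-steady doubling differs from a "vertical" takeoff.]

```python
# Compare an assumed historical growth rate with a two-year doubling
# time over three decades. All rates are illustrative assumptions.

historical_growth = 1.04            # ~4%/yr world-product growth (assumed)
doubling_time_years = 2.0           # "maybe a doubling time of two years"
fast_growth = 2 ** (1 / doubling_time_years)  # ~41%/yr equivalent

years = 30
historical_factor = historical_growth ** years  # total growth, slow path
fast_factor = fast_growth ** years              # total growth, fast path

print(f"Over {years} years: about {historical_factor:.1f}x at historical "
      f"rates, versus {fast_factor:.0f}x at a two-year doubling time.")
```

A two-year doubling compounds to roughly 2^15 over thirty years, dramatic compared with historical growth, yet still a smooth exponential rather than the qualitative discontinuity Vinge's "verticality" suggests, which is the crux of Robin's doubt.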
Message-Id: <firstname.lastname@example.org>
Date: Fri, 11 Sep 1998 12:36:49 -0700
To: email@example.com
From: Robin Hanson
Subject: Re: Singularity: Human AI to superhuman
In-Reply-To: <001901bddd40$b1aa10c0$13934d0c@flrjs>

John Clark writes:
>>... AI progress ... suggesting the importance of lots of little
>>insights which require years of reading and experience to accumulate.
>
>That's a good point but the question remains, even if a billion small
>insights are needed what happens if their rate of discovery increases
>astronomically? ... artificial intelligence is only one path toward the
>singularity, another is Nanotechnology, perhaps another is Quantum
>Computers, and you only need one path to go somewhere.

The basic problem with singularity discussions is that lots of people see big fast change coming, but few seem to agree on what that is or why they think that. Discussions quickly fragment into an enumeration of possibilities, and no one view is subject to enough critical analysis to really make progress. I've tried to deal with this by focusing everyone's attention on the opinions of the one person most associated with the word "singularity." But success has been limited, as many prefer to talk about their own concepts of, and analyses in support of, "singularity".

In the above I was responding to Eliezer Yudkowsky's analysis, which is based on his concept of a few big wins. To respond to your question, I'd have to hear your analysis of why we might see an astronomical increase in the rate of insights. Right now, though, I'd really rather draw folks' attention to Vinge's concept and analysis. So I haven't responded to Nick Bostrom's nanotech/upload analysis, since Vinge explicitly disavows it. And I guess I should stop responding to Eliezer. (Be happy to discuss your other singularity concepts in a few weeks.)

>>There has been *cultural* evolution, but cultural evolution is Lamarckian.
>
>Yes but the distinction between physical and cultural evolution would
>evaporate for an AI, it would all be Lamarckian and that's why it would
>change so fast.
Well, this suggests it would be *faster*, all else equal; but how fast, and is all else really equal? That's what's at issue.

Robin Hanson
Message-Id: <firstname.lastname@example.org>
Date: Wed, 16 Sep 1998 09:38:11 -0700
To: email@example.com
From: Vernor Vinge via Robin Hanson
Subject: Singularity: Vinge responds

Notions of great change raise the vision of all sorts of things that might be called a singularity. In the past, discussions of what I've written about have more often than not spread out to things quite different. Early on in this discussion I got my point distilled down to:

1. The creation of superhuman intelligence appears to be a plausible eventuality if our technical progress proceeds for another few years.
2. The existence of superhuman intelligence would yield forms of progress that are qualitatively less understandable than advances of the past.

Given that, however, the form of the post-human environment is not at all specified or restricted! I like speculation about it, and I like to speculate about it (usually after acknowledging that I shouldn't have any business doing so :-). The speculation often leads to conflicting scenarios; some I regard as more likely than others. But if they arise from the original point, I feel they are relevant.

For planning purposes, my vision of the high-level taxonomy is:

o (the null): The singularity doesn't happen (or is not recognizable).
o We have a hard takeoff.
o We have a soft takeoff.

There is a large variety of mechanisms for each of these. (Some, such as bio-tech advances, might be only indirectly connected with Moore's Law.) Thus, I don't consider that I have written off Nick's upload scenario. (Actually, Robin may have a better handle on what I've said on this than I do, so maybe I have words to eat.) Uploading has a special virtue in that it sidesteps most people's impossibility arguments. Of course, in its most conservative form it gives only weak superhumanity (as the clock rate is increased for the uploads). If that were all we had, then the participants would not become much greater within themselves. Even the consequences of immortality would be limited to the compass of the original blueprint. But to the outside world, a form of superhumanity would exist.
In my 1993 essay, I cited several mechanisms:

o AI [PS: high possibility of a hard takeoff]
o IA [PS: I agree this might be slow in developing. Once it happened, it might be explosive. (Also, it can sneak up on us, out of research that might not seem relevant to the public.)]
o Growing out of the Internet
o Biological

Since then:

o The evolutionary path of fine-grained distributed systems has impressed me a lot, and I see some very interesting singularity scenarios arising from it.
o Greg Stockman (and ~Marvin) have made the Metaman scenario much more plausible to me: probably a very "gentle" takeoff. (At AAAI-82 one of the people in the audience said he figured this version had happened centuries ago.)

Discussion of others is of interest, too. If I were betting (really foolish now :-), as of Tue Sep 15 10:58:51 PDT 1998 I would rate likelihoods (from most to least probable):

1. very hard takeoff with fine-grained distribution;
2. no Singularity, because we never figure out how to get beyond software engineering and we fail to manage complexity;
3. IA (tied in likelihood with:)
4. Metaman
5. ...

-- Vernor

PS: I like Doug Bailey's postscript:
>[Note: My apologies for the less-than-stellar organization but I wrote this in
>one pass. If I had waited and posted it after I had time to optimize it, it
>would have never seen the light of day.]
I find myself near-frozen by the compulsion to optimize :-)
Message-Id: <firstname.lastname@example.org>
Date: Wed, 16 Sep 1998 10:11:06 -0700
To: email@example.com
From: Robin Hanson
Subject: Singularity: Vinge responds

Vernor Vinge writes:
>Notions of great change raise the vision of all sorts of things that
>might be called a singularity. In the past, discussions of what I've
>written about have more often than not spread out to things quite
>different.
>
>Early on in this discussion I got my point distilled down to:
>1. The creation of superhuman intelligence appears to be a plausible
>   eventuality if our technical progress proceeds for another few
>   years.
>2. The existence of superhuman intelligence would yield forms of
>   progress that are qualitatively less understandable than advances
>   of the past.
>
>Given that, however, the form of the post-human environment is
>not at all specified or restricted! I like speculation about it,
>and I like to speculate about it (usually after acknowledging
>that I shouldn't have any business doing so :-). The speculation
>often leads to conflicting scenarios; some I regard as more
>likely than others. But if they arise from the original point,
>I feel they are relevant. ...

O.K. Uncle. It seems I was mistaken in my attempt to create a focused discussion on singularity by focusing on Vinge's concept and analysis. I incorrectly assumed that Vinge had in mind a specific enough concept and analysis of singularity to hold discussants' attention. In fact, by "singularity" Vinge seems to just mean "big changes will come when we get superhumans." And while Vinge has dramatic opinions about how soon this will happen and how fast those changes will come afterward, these opinions are not part of his concept of "singularity", and he is not willing to elaborate on or defend them.

This seems analogous to Eric Drexler, who has written extensively on nanotech, and has privately expressed dramatic opinions about how soon nanotech will come and how fast change will then be, but who has not to my knowledge publicly defended these opinions.

Robin Hanson
From: firstname.lastname@example.org (Damien R. Sullivan)
Date: Sun, 13 Sep 1998 20:54:59 -0700 (PDT)
Message-Id: <199809140354.UAA27848@sloth.ugcs.caltech.edu>
To: email@example.com
Subject: Singularity: been there, done that

I was struck by Vinge's referring to bacterial conjugation and the nature of corporations. Particularly the latter, when I recalled Sasha's liquid intelligence ideas. And by his response to Robin, and mention of _Metaman_ (which I've read.) Most of this will be recap, but perhaps presented differently.

First, some literary abuse:

Civilizations in the High Beyond produce artifacts not producible in the Lower Beyond, even when understood there, which are sold in exchange for raw materials and art. They can do this because physical conditions in their Zone allow higher bandwidth, and thus more complex forms of organization. The individuals are similar to those Below, but organized better, and probably educated better through those organizations. Even when understood, the High Beyond is not imitable due to conditions below. Individuals from Below trickle on High, but the High Beyond itself extends only slowly Below.

Civilizations in the West make products not producible in the Rest of the world, even when understood there, which are sold in exchange for raw materials and art. They can do this because cultural conditions in their Zone allow more trust and reliable contracts, and thus more complex forms of organization. The individuals are similar to those elsewhere, but organized better, as well as educated better through those organizations. Even when understood, the West is not imitable due to conditions elsewhere. Individuals from the Rest trickle to the West, but the West itself extends only slowly elsewhere.

Analysis: in the High Beyond authorial magic allows greater linking of computers and people which can run more complex factories and design the products for those factories.
In the West (northwestern Europe, North America, Australia/New Zealand, Japan) mass literacy, general reliability of contract, general trustworthiness, reliability of property, a common bourgeois ethic, and a working price system allow greater linking of people in companies (and service-providing governments) which can flexibly organize people for projects exceeding anyone's grasp, and do so for long periods of time.

More abuse:

The Great Link of the changelings of Star Trek's Dominion is a massive ocean of minds, freely exchanging thoughts and experience. Changelings should be capable of forming organisms and structures large and small to handle a huge diversity of tasks for varying lengths of time. We haven't actually seen much, but it got me thinking:

The Great Market of western civilization is a massive ocean of human minds, using language, writing, and prices to exchange thoughts, experience, and desires. In it people are capable of forming structures and superorganisms large and small to handle a huge diversity of tasks for varying lengths of time. In Silicon Valley companies rise and dissolve and have their components rise again in some other combination over and over, merging and splitting and recycling. Liquid intelligence? Look there. Elsewhere the huge structures of Motorola and others are coming together to erect Iridium and Teledesic; presumably they'll link less closely after those are built. 30 years ago a huge structure crystallized out of the Great Market, when enough components agreed to send a bit to the Moon. When that was found to be unsupportable it dissolved, and companies and factories turned to other purposes. 55 years ago the Market responded to threat by coalescing into a huge war machine, producing tons of materiel and soldiers until the threat was vanquished. Then that machine mostly died and dissolved as pieces broke off and reformed civilian firms.

Conclusion?
Much of what we anticipate has already happened, at fast rates, and with the creation of sharp dichotomies. I already hear that no one person can fully understand a Boeing 747, or MS Excel. We can already produce superintelligences capable of producing things our minds aren't big enough to grasp. The consciousness isn't superhuman, but a human CEO makes decisions based on superhuman levels of prior processing, and with superhuman (in complexity, not just gross scale) consequences.

So, what would change if our dreams came true? Everything, and not very much. Direct neural connections and liquid intelligence at the subhuman level would be revolutionary for individuals, redefining (or throwing away) what it means to be human. But above the human level not much would change. Efficiency could increase a great deal -- lower training costs, being able to get more precisely the thinker you need, perhaps greater reliability and honesty, since there would be less individual self-interest. But the type of process would be exactly the same as the West has today.

One might think that with direct links posthumans couldn't make the stupidities a corporation can, if it has a stupid CEO or a smart CEO gets wrong data. And possibly it wouldn't make the same errors -- I really can't model that. But consideration of the existence of malapropisms, and people who seem to speak faster than they think, and all sorts of cognitive slips and errors and seeming contradictions in individual people, should quickly remove any conviction that direct neural links are any guarantee of posthuman pure sanity and rationality.

So from the human point of view we can change things a lot by changing our species. But that's not the traditional Singularity. As far as superhuman accomplishments and beings go, the Singularity happened already. Or we're in it: a product of language, money, writing, and law to allow cooperation to form large and long-term but flexible (and self-modifying!) firms.
(Which explains why Iain Banks' Culture has never Transcended: no money.)

Of course, if we take the Singularity at its most basic level, an inability of an SF writer to imagine stories after a certain level of progress, then the dissolution of human beings into an inhuman soup of thinking abilities would qualify, even while an economist was observing nothing more than a modest increase in GNP.

-xx- Damien R. Sullivan X-) Evolution: "uber alleles"
Date: Thu, 17 Sep 1998 12:18:12 +0000
From: Damien Broderick
Subject: Re: Singularity: Vinge responds
In-reply-to: <36003B42.4A977C4A@together.net>
To: firstname.lastname@example.org
Message-id: <email@example.com>

At 06:27 PM 9/16/98 -0400, Mikey wrote:
> I am sure that to at least some humans on this planet, our
>internet civilization we are building here is superhumanly
>incomprehensible, while we would look at those people as savages, living a
>near-animal existence.

Undoubtedly this is why they refer to us by such awe-struck terms of worship as `geek' and `nerd'. :)

Damien Broderick
Message-ID: <3602E81F.216B4C59@pobox.com>
Date: Fri, 18 Sep 1998 18:09:28 -0500
From: "Eliezer S. Yudkowsky"
To: "firstname.lastname@example.org"
Subject: Singularity debate: King James version

ONE:

1. In the Beginning was the Singularity, but it was known to none, for men lived in fear and squalor.
2. Then there did come a great wave of technology, bearing man up towards the heavens and into the heart of matter, even into the brain.
3. And from afar off did the Singularity make itself known to man.
4. And the Singularity did make itself known after the profession of he who saw it.
5. Yea, each man saw it as if he himself had invented it, and understood it with the intuitions of long experience, and spoke of it with the words he loved.

TWO:

6. And the Singularity did make itself known to Vinge, a mathematician, who said: "Verily it is self-referential, and thus unknowable; yea, it causes our models to break down; it alters the rules so that our old theories do not apply."
7. And the Singularity did make itself known to Drexler, a nanotechnologist, who said: "Material omnipotence shall be granted us; yea, even complete control over the structure of matter, and also much computing power." And because Drexler worked on nanosecond and picosecond time scales, he did speak of high-speed intelligence.
8. And the Singularity did make itself known to Hanson, an economist, who said: "Verily this foolishness is unlike all the laws of economics which I know; yea, the analogies and perceptions which I have learned dictate that it shall proceed at a rapid but knowable pace, like all other revolutions."
9. And the Singularity did make itself known to Yudkowsky, a cognitive engineer, who said: "Surely intelligence is the source of all potency and power; verily intelligence is the breaker of rules, the wild magic. Yea, the Singularity shall tear down the foundations of the world." And because he loved intelligence, he exalted the Singularity above all other things.
10.
And the Singularity did make itself known to science fiction authors, who had watched their work crowded off the shelves by garbage, verily dreck which was an abomination unto the Lord; who had seen the government screw up the space program, and great things laid low by inadequate funding.
11. And they said: "Surely the Singularity shall be entered by but a few, and others shall impede them; the superintelligent shall be laid low by lack of infrastructure."
12. And the Singularity did make itself known to the watchers of memes, who said: "Verily this is like many other memes which I have known, and all of them stupid; yea, it seemeth but another apocalypse meme." And they waxed mightily suspicious.
13. And the Singularity did make itself known to Nielsen, a quantum theorist, who did ask of many probabilities which interacted in nonlinear ways, saying: "The chance of a given outcome can be estimated even though it is nondeterministic."
14. And many murmured against the writer, verily the writer of this message, saying that he was torturing the analogy.

THREE:

15. So the Singularity was known to many after their own prejudices, but the whole truth was known to none.
16. And they debated, hither and yon, saying yea and nay.
17. Verily many had an amateur grasp of the others' fields, but each of them placed confidence in their own field above all others.
18. And their hearts could not be swayed with words they loved not.
19. And they are still debating.

--
email@example.com  Eliezer S. Yudkowsky
http://pobox.com/~sentience/AI_design.temp.html
http://pobox.com/~sentience/sing_analysis.html
Disclaimer: Unless otherwise specified, I'm not telling you everything I think I know.
Message-ID: <36003B42.4A977C4A@together.net>
Date: Wed, 16 Sep 1998 18:27:15 -0400
From: Michael Lorrey
To: firstname.lastname@example.org
Subject: Re: Singularity: Vinge responds

Robin Hanson wrote:
> Vernor Vinge writes:
> >Notions of great change raise the vision of all sorts of things that
> >might be called a singularity. In the past, discussions of what I've
> >written about have more often than not spread out to things quite
> >different.
> >(snip)
>
> O.K. Uncle. It seems I was mistaken in my attempt to create a
> focused discussion on singularity by focusing on Vinge's concept
> and analysis. I incorrectly assumed that Vinge had in mind a specific
> enough concept of and analysis of singularity to hold discussants'
> attention. In fact, by "singularity" Vinge seems to just mean
> "big changes will come when we get superhumans." And while Vinge
> has dramatic opinions about how soon this will happen and how fast
> those changes will come afterward, these opinions are not part of
> his concept of "singularity", and he is not willing to elaborate
> on or defend them.

One concept that I developed in my mind's eye upon reading much of Vinge's work is that a 'singularity' is not a destination, but the horizon of any man's vision of the future. While extrapolating from Moore's Law, we think that the event horizon of the 'singularity' is going to come closer and closer as we approach a given point that seems now to be incomprehensible. However, given that amplification of our own intelligence is an effect of approaching the singularity, our own ability to imagine future possibilities should also expand with our own intelligence. For example, I and others on this list can imagine greater things in store for humanity in the future than the average broom pusher or the average tribal bushman. I am sure that to at least some humans on this planet, our internet civilization we are building here is superhumanly incomprehensible, while we would look at those people as savages, living a near-animal existence.
To them, we are already beyond the singularity event horizon as they see it. To those individuals participating in the bleeding edge techno-culture, anyone less technological is a savage, while the 'superhuman' is never in existence, as it is always 'today's' ideal for improvement.

I'll make a stab at a chart to illustrate this:

[ASCII chart: an S-curve of advancement over time, with "singularity" labeling the y axis, "savage", "human", and "animal" below "THE NOW", and "superhuman" above it]

where the x axis is time relative to the observer's present and the y axis is advancement. As the absolute level of advancement of the observer rises, the 's' curve contracts, but the observer always considers themselves to be 'human', and their own ability to see further up the curve expands to counteract the contraction of the 's' curve.

This might also help in the opposite direction. As we can see further up the curve and still consider occupants of that advanced level of the curve to be 'human' compared to us, we might also expand our vision DOWN the curve to consider even more primitive forms of life as 'human', provided there is some means of communication that can be attained.

The singularity is ALWAYS in the future. It will NEVER be reached. And just as today there are savages and spacemen living on the same planet, there will also be a whole panoply of civilizations within each nation or ethnic group that at some point will be incomprehensible to each other. This is the 'generation gap' in the extreme, though it is not necessarily a matter of the age of the participants, but of the version numbers of the participants' operating systems. As Moore's Law begins to contract, anyone more than x number of software generations behind the curve will find themselves in a state of career and cultural obsolescence, as far as the bleeding edge is concerned.
Do not feel bad, though, as there will always be niches and reservations and uses for individuals of all levels, just as there are niches for savages within our own culture. Mike Lorrey
Date: Sat, 19 Sep 1998 10:21:01 -0700 (PDT) Message-Id:
To: email@example.com
From: Vernor Vinge via Robin Hanson
Subject: Singularity: Vinge on "horizon" v "singularity"

I agree that a person who becomes greater than human would regard the singularity more properly as a horizon. I prefer the term singularity for several reasons (and I think all or almost all of these points have been made by other posters):

o Very likely the degree of change is going to be on the order of the difference between a chimpanzee (or a goldfish :-) and a human. "Horizon" sounds more like past tech horizons, which could be accommodated by the innate flexibility of the human mind. Very likely, the upcoming change will be qualitatively greater -- though augmented participants might take it in stride.

o The notion of "horizon" has the attendant notion of persistence of identity. When change is large enough or fast enough, this is a problem. I see two analogies on this:

  + The change that leads from a zygote (or 4-cell embryo) to a human child. It might be argued that the early embryo has simply been enhanced. Certainly the child encompasses the embryo.... And yet the change is large enough that the horizon metaphor does not seem appropriate (to me) to describe the change. (I do see at least one advantage to the horizon metaphor, however; it implies that change is ongoing.)

  + When I think about the labile nature of processes in the distributed systems that are being designed these days, and then imagine what it would be like if such systems could be scaled up so that some of the processes were of human power or greater -- then I see weird things being done to the notion of persistent identity and the underlying notion of self. Again, this makes it hard for me to see the process as one of "us" just moving along to better things.

-- Vernor Vinge
Date: Fri, 18 Sep 1998 11:04:44 -0700 To: firstname.lastname@example.org From: Robin Hanson
Subject: Singularity: The Moore's Law Argument

The closest I've seen to a coherent argument for explosive economic growth is offered by Eliezer Yudkowsky, in http://www.tezcat.com/~eliezer/singularity.html

>Every couple of years, computer performance doubles. ... That is the
>proven rate of improvement as overseen by constant, unenhanced minds,
>progress according to mortals. Right now the amount of computing power
>on the planet is ... operations per second times the number of humans.
>The amount of artificial computing power is so small as to be
>irrelevant, ... At the old rate of progress, computers reach human-
>equivalence levels ... at around 2035. Once we have human-equivalent
>computers, the amount of computing power on the planet is equal to the
>number of humans plus the number of computers. The amount of
>intelligence available takes a huge jump. Ten years later, humans
>become a vanishing quantity in the equation. ... That is actually a
>very pessimistic projection. Computer speeds don't double due to some
>inexorable physical law, but because researchers and technicians find
>ways to make them faster. If some of the scientists and technicians
>are computers - well, a group of human-equivalent computers spends 2
>years to double computer speeds. Then they spend another 2 subjective
>years, or 1 year in human terms, to double it again. ... six months,
>to double it again. Six months later, the computing power goes to
>infinity. ... This function is known mathematically as a singularity.
>... a fairly pessimistic projection, ... because it assumes that only
>speed is enhanced. What if the quality of thought was enhanced? ...

Eliezer's model seems to be that the doubling time of computer hardware efficiency is inversely proportional to the computer operations per second devoted to R&D in computer hardware, or within all of computer-aided "humanity." (I'm not clear which Eliezer intends.)
The "all of humanity" model has the problem that computer hardware wasn't improving at all for a long time while humanity increased greatly in size. And while humanity has tripled since 1930, computer hardware doubling times have not tracked this increase. It is also not clear why animal and other biological computation is excluded from this model. The "within computer hardware R&D" model has the problem that over the last half century doubling times have not shortened in proportion to the number of people doing computer hardware R&D. (Anyone have figures on this?) Furthermore, is it plausible that the very long doubling times of ~1700 could have been much improved by putting lots of people into computer R&D then? And if we doubled the number of people doing computer R&D next year, I don't think we'd expect a halving of the hardware doubling time. It is not at all clear what Eliezer thinks "quality" improvements scale as, so there isn't much on this issue to discuss.

Robin Hanson
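[Editor's note: the arithmetic behind the "recursive Moore's Law" scenario quoted above is easy to make explicit. The sketch below is an illustration of that quoted scenario only, not a model anyone on the thread endorses: each doubling of machine speed halves the wall-clock time needed for the next doubling, so the doubling intervals form a geometric series that converges to a finite date.]

```python
# "Recursive Moore's Law" arithmetic from the quoted scenario:
# human-equivalent computers take 2 years to double speeds, the doubled
# machines take 1 wall-clock year to double them again, then 6 months,
# and so on.  The intervals 2 + 1 + 0.5 + ... are a geometric series
# summing to 4, so speed diverges after a finite 4 years rather than
# merely growing exponentially forever.

def recursive_moore(first_doubling_years=2.0, doublings=50):
    """Total wall-clock years elapsed after `doublings` doublings."""
    elapsed = 0.0
    interval = first_doubling_years
    for _ in range(doublings):
        elapsed += interval
        interval /= 2.0  # double the speed -> half the wall-clock time
    return elapsed

print(recursive_moore(doublings=10))  # 3.99609375 -- already near the limit
print(recursive_moore(doublings=50))  # ~4.0: the finite-time "singularity"
```

Hanson's objection above is to the premise rather than the arithmetic: the series only converges if the doubling time really is inversely proportional to the computing power applied to R&D, which he argues the historical record does not support.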
Message-ID: <3602C7E4.C0F4C5B8@pobox.com> Date: Fri, 18 Sep 1998 15:51:51 -0500 From: "Eliezer S. Yudkowsky"
To: "email@example.com" Subject: Re: Singularity: The Moore's Law Argument That section of "Staring Into the Singularity" (which Hanson quoted) was intended as an introduction/illustration of explosive growth and positive feedback, not a technical argument in favor of it. As an actual scenario, "Recursive Moore's Law" does not describe a plausible situation, because the infinite continuation of the eighteen-month doubling time is not justified, nor is the assumption that AIs have exactly the same abilities and the same doubling time. Above all, the scenario completely ignores three issues: Nanotechnology, quantum computing, and increases in intelligence rather than mere speed. This is what I meant by a "pessimistic projection". Actual analysis of the trajectory suggests that there are several sharp spikes (nanotechnology, quantum computing, self-optimization curves), more than sufficient to disrupt the world in which Moore's Law is grounded. So what good was the scenario? Like the nanotech/uploading argument, it's a least-case argument. Would you accept that in a million years, or a billion years, the world would look to _us_ like it'd gone through a Strong Singularity? Just in terms of unknowability, not in terms of speed? Well, in that case, you're saying: "I believe it's possible, but I think it will happen at a speed I'm comfortable with, one that fits my visualization of the human progress curve." The scenario above points out (via the old standby of "It's a billion years of subjective time!") that once you have AIs that can influence the speed of progress (or uploaded humans, or neurotech Specialists, or any other improvement to intelligence or speed) you are no longer dealing with the _human_ progress curve. More than that, you're dealing with positive feedback. Once intelligence can be enhanced by technology, the rate of progress in technology is a function of how far you've already gone. 
Any person whose field deals with positive feedback in any shape or form, from sexual-selection evolutionary biologists to marketers of competing standards, will tell you that positive feedback vastly speeds things up, and tends to cause them to swing to "unreasonable" extremes. Some of the things Hanson challenges me to support/define have already been defined/supported in the sections of "Human AI to transhuman" which I have posted to this mailing list - as stated, a "supporting increment" of progress is one which supports further progress, both in terms of self-optimization freeing up power for additional optimizing ability, and in terms of new CPU technologies creating the intelligence to design new CPU technologies. The rest of the assertions I can defend or define are also in that thread (including "short time", "rapidly", and "self-sustaining"). But I can't tell you anything about nanotechnology or quantum computing - not more than the good amateur's grasp we all have. I do not consider myself an authority on these areas. I concern myself strictly with the achievement of transhuman or nonhuman intelligence, with the major technical background in computer programming and a secondary background in cognitive science. I am assured of the existence of fast infrastructures by Dr. Drexler, who has a Ph.D. in nanotechnology and understands all the molecular physics. If in turn Dr. Drexler should happen to worry about our ability to program such fast computers, I would assure him that such things seem more probable to me than nanotechnology. I don't know of anyone who Knows It All well enough to project the full power/intelligence/speed curve, but the technical experts seem to think that their particular discipline will not fail in its task. 
What I can tell you is this: Given an amount of computing power equivalent to the human brain, I think that I - with the assistance of a Manhattan Project - could have an intelligence of transhuman technological and scientific capabilities running on that power inside of five years. (This is not to say that _only_ I could do such a thing, simply that I am speaking for myself and of my own knowledge - my projection is not based on a hopeful assessment of someone else's abilities.) I can also tell you, from my amateur's grasp of fast infrastructures, that the Singularity would occur somewhere between one hour and one year later. Computing power substantially less than that of the brain would probably double the time to ten years, but I still think I could do it given a substantial fraction of the current Internet. In other words, given no technical improvement whatsoever in any field outside my own, humanity's resources still suffice for a Singularity. We passed the point of no return in 1996. Why do I believe the Singularity will happen? Because I, personally, think I can do it. Again, not necessarily _only_ me, or even _mostly_ me - but I can speak for myself and of my own knowledge. -- firstname.lastname@example.org Eliezer S. Yudkowsky http://pobox.com/~sentience/AI_design.temp.html http://pobox.com/~sentience/sing_analysis.html Disclaimer: Unless otherwise specified, I'm not telling you everything I think I know.
From: email@example.com (Damien R. Sullivan)
Date: Fri, 18 Sep 1998 12:44:25 -0700 (PDT)
To: firstname.lastname@example.org
Subject: Re: Singularity: The Moore's Law Argument

On Sep 18, 11:13am, Robin Hanson [quotes Eliezer Yudkowsky]:
> >Every couple of years, computer performance doubles. ... That is the
> >proven rate of improvement as overseen by constant, unenhanced minds,

We've been here before. "Constant, unenhanced minds"? Programs which can help lay out a chip's design, and simulate the operation of that design before it is built, are a big enhancement. I build chip C0 and use it to design the faster C1, which helps me design the faster C2, and so on...

> >progress according to mortals. Right now the amount of computing power
> >on the planet is ... operations per second times the number of humans.
> >The amount of artificial computing power is so small as to be
> >irrelevant, ... At the old rate of progress, computers reach human-

Irrelevant to the grand total of intelligence, perhaps. Not irrelevant to the above loop; I don't think we could build current chips without their predecessors. Not in two years at least, and possibly not at all if keeping track of a PPro's design would exceed a human's attention span, even with the help of lots of paper. (An important cognitive enhancement.) The thing about all that human intelligence is that most of it is not being applied to the specific fields where computers are relevant, so comparing total computing to total human thought is misleading. Most human intelligence is supporting the overhead of being human -- visual and auditory processing, including peripheral and background processing looking out for the unexpected; language processing; the ability to toss crumpled paper into the wastebasket from odd angles; surviving office politics; and so on.
I don't know how to do it myself, but it'd be interesting to estimate how the thought an Intel engineer can actually commit to chip design compares with the power of the chip (or chips) he is currently using. Crude model: the mortal brain comes up with scenarios, evaluates them, and remembers them. The paper-enhanced brain uses external memory, freeing part of itself for more creativity and criticism, and allowing larger and more accurate storage than would otherwise be possible. The modern brain uses computers to evaluate more complex scenarios than otherwise possible, with superhuman speed and accuracy (and similar advances in memory), freeing the brain to just come up with ideas. The next step is obvious. But would it be explosive, as opposed to exponential? I don't see why. -xx- Damien R. Sullivan X-)
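[Editor's note: Sullivan's closing question -- explosive, as opposed to exponential -- has a crisp mathematical reading. The toy sketch below is an illustration with assumed functional forms, not anything posted on the thread: if the rate of progress is proportional to current capability, growth is exponential and finite at every date; if positive feedback makes the rate proportional to the square of capability, the solution diverges at a finite time.]

```python
from math import exp

# Exponential growth: dx/dt = k*x  ->  x(t) = x0 * e^(k*t).
# Fast, but defined (finite) for every t.
def exponential(x0, k, t):
    return x0 * exp(k * t)

# "Explosive" (hyperbolic) growth: dx/dt = k*x**2
#   ->  x(t) = x0 / (1 - k*x0*t),
# which diverges at the finite time t* = 1/(k*x0).
def hyperbolic(x0, k, t):
    return x0 / (1.0 - k * x0 * t)  # valid only for t < 1/(k*x0)

x0, k = 1.0, 0.5
blowup = 1.0 / (k * x0)          # t* = 2.0: the hyperbolic curve diverges here
print(exponential(x0, k, 1.9))   # ~2.59: large but tame
print(hyperbolic(x0, k, 1.9))    # ~20: unbounded as t approaches 2.0
```

On this reading, Moore's Law as observed is the exponential case; the "singularity" arguments in this thread turn on whether feedback from machine intelligence pushes the dynamics into the second regime.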
From: Mitchell Porter
Message-Id: <199809191913.FAA16463@smople.thehub.com.au>
Subject: Singularity secrets revealed
To: email@example.com
Date: Sun, 20 Sep 1998 05:13:30 +1000 (EST)

I'm happy to announce that I have discovered what will actually happen at the Singularity, by the expedient of listing various possibilities and flipping a coin. I can confirm, for example, that an SI really would be fundamentally incomprehensible to human beings, and that it would not uplift them. Whether it would leave them entirely alone, however, remains uncertain. Other findings:

i) Change does not stop after the Singularity, but continues to *accelerate* forever;

ii) There is a "big win" in the intelligence-increase stakes waiting for the first uploads or human-level AIs;

iii) An SI would *not* start growing in all directions at just below lightspeed;

iv) We or something we make can (and/or probably will) achieve superintelligence within decades;

v) Nick Bostrom's "singleton thesis" is correct.

I hope that clarifies everything.
-- scott (firstname.lastname@example.org), February 21, 2000