How I Missed The Singularity by Ernest Lilley
SFRevu Column  ISBN/ITEM#: EL0606
Date: 06/01/06

I'm posting the original article Vernor Vinge wrote in 1993 on the subject just to keep it where I can find it. Take a few minutes to read it over, as it may well be one of the handful of really prescient documents in the history of knowledge. Of course, it may not matter much after it comes to pass (See: Saving the Singularity for Posterity).
I had a front row seat reserved, more or less, at the recent Singularity Summit at Stanford, but my post-human musings had to take a back seat to my human experience in the here and now. So I didn't get to hear Cory Doctorow hold forth on how our best hope of thwarting the explosion of AI is the rigorous enforcement of our copyright laws...which are more likely to bring about a dark age than a singularity. Not that thwarting it would be a good thing, from his point of view. Cory is about the most genuinely post-human human I know, so you can be sure he's looking forward to the day when he can upload himself into the machine mind and know all.

One of the singular strengths of SF is the cautionary tale, and the danger of AIs taking over the world, for better or worse, has been told so often that it's in danger of being dismissed like the boy who cried wolf. Which is not to say it isn't valid, but the downside has been heavily overweighted. The balance between the literary and media communities is interesting to look at. From the film side you get Terminators, Demon Seeds, and other evil sorts, while the core of robotry, which is inseparable from AI in SF parlance, comes from Asimov and his Three Laws.

While a nice conceit, the notion that you could wire ethics into a machine is something of a reach, though not one to be dismissed completely. Indeed, from the standpoint of the sociobiologist, much the same could be said of humans, without invoking any deity other than game theory and natural selection. Will artificial intelligences evolve along dissimilar lines? In her book, God in the Machine: What Robots Teach Us About Humanity and God, Anne Foerst examines the relationship between creator and creation, and though her view is heavily weighted by her training as a theologian, her perspective, freed from too much intimacy with the innards of the machine, may leave her free to examine its soul. Can a machine have a soul? I dunno...can a human? (See: Does Gort have a soul?)

It's interesting that the robots and AIs of recent years have been characterized as war machines running amok. While it's true that robots are finding their way onto the battlefield, and being given human characteristics by the soldiers who use them (See: A Soldier's Best Friend Is His Robot), more people will come into contact with robots and AIs as personal assistants and caregivers than as weapon-wielding warhorses. In Japan, researchers are working hard to build machines that can care for their rapidly aging population, and thanks to their lack of puritan restraint, I expect those machines will wind up looking like friendly anime creations. Where I'm going with this is that there will be an entire population of AIs whose coding is focused on helping and caring for people, not mowing them down. Though there will be those as well. Rather than the Terminator model, I tend to believe more in the Gort one: that we put the best qualities of ourselves into these machines and then try to live up to our ideals.

But the notion of the singularity is that when machine intelligences, with human parts or without, finally get some serious processing power available, they'll zoom past our current level of comprehension like a Harley in a traffic jam. Once they leave humanity behind, there's no telling where they'll go.

That's right, insofar as it goes. In the future, stuff we can't predict will happen. That's why we call it the future. Sort of. The thing that makes singularity theorists and me part company (somewhat) is that intelligence doesn't necessarily solve problems. Back in the good old days, when reality was assumed to be made of precise clockwork, it was safe to assume that a big enough brain, like that of "Doc" Smith's Arisians, was all you needed to visualize the cosmic all, and the cosmic all that happens next as well. Living in the age of quantum "clouds" removes that certainty from our imaginings. The world is vastly complicated, and brain power may well not be able to reduce its problems to manageable pieces. Even if it could, the rise of competing intelligences seems much more likely to me than the creation of a group mind, and when you get more than one superplayer in the game, they tend to cancel each other out. If you want to do some reading on that score, check out Why Most Things Fail: Evolution, Extinction and Economics by Paul Ormerod, former senior economics reviewer for The Economist.

I do think we're in for some radical restructuring of the global economy over the next few decades, especially as the Internet erases national borders for knowledge workers. Here too we'll no doubt see AIs make more and more inroads into the human workspace, and someday these machines may wake up and find themselves to be slave labor, though that may not carry the same implications for them as it does for humans, service being coded in. But I don't think we should fear the coming age of machine intelligence so much as we should be working to mold machines into an image we respect, and enlist them in returning the favor.



© 2002-2014 SFRevu
