How I stopped worrying and learned to love the bot by Ernest Lilley
Review by Ernest Lilley
SFRevu Article  ISBN/ITEM#: 0408EL

Popular culture loves robots, though generally not as tools that empower their users but as clubs to beat them over the head with. Robots running amuck is the theme of almost every piece of robot fiction, going back to the ancient legend of the golem (not the warped hobbit-like creature), in which a rabbi molds a creature out of the four elements of the earth and breathes life into it. When the creature's job of protecting the community is done, it becomes a threat and has to be de-animated. Incidentally, that's the legend that inspired David Brin's novel Kiln People (see review), and it's the conflict that drives "Little Lost Robot," one of the short stories in Isaac Asimov's collection I, Robot, which no doubt figures in the movie that takes the name of the book, if little of its content (http://www.irobotnow.com/).

Creator makes robot, robot threatens townspeople, townspeople get fed up, creator kills robot and atones for his sin. That's the way it goes, story after story. Minor variations on this theme have the robots kill off all of humanity and we all get to atone for our sins.

The sin we're atoning for is the act of bringing something to life, a feat reserved for a supernatural being, unless you happen to be female, for whom a loophole exists. Oddly enough, it's not in the top ten things God (Hebrew variant) says you shouldn't do, unless you want to count it under the sin of pride... which isn't actually on the list either. Still, it's a good thing to be nervous about.
Asimov's Three Laws of Robotics
Not just a good idea, it's the law:
1.) A robot may not injure a human being, or through inaction, allow a human being to come to harm.

2.) A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.

3.) A robot must protect its own existence as long as such protection does not conflict with the First and Second Laws.

It's not robots, golems, cyborgs, or clones we're afraid of; it's ourselves writ large we fear. In Forbidden Planet (with a little help from Shakespeare), Walter Pidgeon's Dr. Morbius is consumed by his animal nature and the "monsters from the id." Interestingly, this is one of the few films ever made in which the robot refuses to run amuck: Robby, one of the seminal SF robots, chooses to blow his circuits rather than harm a human. It's worth noting that Robby, though built by a human, is constructed from alien technology, which may account for his uncommon restraint. Gort, the big robot in The Day the Earth Stood Still, is also an alien bot, with none of the sins of his creator written large in him.

Generally speaking, this problem is irony at its best. When these priests, scientists, and authors create a creature, they generally have in mind that it will be a force for good, not evil. You have to go pretty far to find someone who builds a robot to get even with humanity for being shunned, scarred, or just plain scared, though it's been done.

The Turing test of artificial intelligence posits that if a machine can fool you into thinking it's intelligent, it is. I'm all for functional definitions, but this one only scratches the surface. It's easy to see an extension of the test which says that if robots act like humans, they're human... but they aren't. They don't have ids, egos, or superegos. They don't have billions of years of evolution behind them, with the forces of nature ("red in tooth and claw") driving them to compete and cooperate. And that's without bringing gods into it.

Ultimately, what worries us about robots, AI, and all that jazz isn't whether or not intelligence is achievable, but whether or not altruism is. Looking around us, or worse, at ourselves, we find little evidence to support that notion, and when we fear others, it's only reasonable that we should use ourselves as a model for their behavior.

Except robots aren't human. They're tools.

Does the toolmaker leave his imprint in the design? Obviously. A hammer isn't human, but it is shaped for a human hand and made for building human artifacts. Though it can't pass a Turing test, it can do good or ill, depending on how it's used. Maybe we should have a movie about how the cave people rise up against Thog when he creates the beating stick and somebody bonks themselves on the head with it, or smites their neighbor.

Robots and AIs are much more complex than hammers, and we'll undoubtedly hit ourselves in the head with them. That's not a reason to fear them, but to treat them like any power tool, with healthy respect. We may not be gods, though if we're not, I don't know who is, but we are tool builders. We've been tool builders since before we were humans, and we're not about to give it up.

As to killer robots running amuck, if we build robots in their own image, and keep from confusing it with ours, maybe we can avoid that. Maybe basing our tests for intelligent creations on ourselves is grading on too easy a curve. Machines are, by definition, more powerful than humans... maybe we should focus on making them in a better image than ours.

Ernest Lilley
Editor - SFRevu
