Friday, 28 November 2008

The Ethical Nature of Robots

In my opinion, the greatest examination of the theoretical ethics of robots and artificial intelligence was conducted in fiction by Isaac Asimov. His Three Laws of Robotics, hard-wired into the positronic brain of just about every one of his mechanical creations (there were quite a few fine-tuned to place more or less importance on one law or another), cover most of the situations that could arise. He looked into what could happen when something goes wrong, either within the robot or in the robot's environment, and considered more real-world (and absolutely fantastic) situations than just about any other writer.

And let's not forget the thousands of other stories, including fan fiction, written by others within the canon.

Unfortunately, the real development in autonomous robot intelligence is happening in the military. And the First Law - "A robot may not injure a human being or, through inaction, allow a human being to come to harm." - the most important one - is hardly going to work when a military robot is designed to kill people. There are plenty of robots doing relatively peaceable things like bomb disposal, reconnaissance and decoding, but these are steps towards developing machines that will do the killing so that soldiers don't have to go in and risk themselves.

It's a real pity that such an important treatise on the morals of artificial intelligence has to be wasted because the most significant work in robotics is being done by the war industry. The technology will trickle down to the rest of us eventually, and some amazing robots are being developed in the civilian world, but it seems that just as we're finally moving towards a reality that includes true AI, they're going to be built for war - and that's not an ethic I can trust.

It's very important that the issues of robot ethics are examined and considered before we have the technology to build them - there will always be those who want to defeat their enemies no matter the human cost - and I think Asimov is the best place to start when drawing up a moral code for robots.
