The rise of robot ethics


Hi there,

Although bioethics deals with living beings and robot ethics deals with machines, their destinies are intertwined. As you can read in this week’s article about the development of Lethal Autonomous Robots (LARs), scientists, militaries and politicians need to create ethical “codes” for killing machines. And quickly.

At the moment, a human operator instructs a drone to release its missiles. But the time is not far off when drones will “decide” for themselves. And since their decision-making power needs to be programmed by someone, who better than bioethicists?

The great science fiction writer Isaac Asimov created his famous Three Laws of Robotics as long ago as 1942. These are useful, but they really don’t apply to what currently worries the United Nations about these new weapons. The First Law is that no robot may harm a human being – but several countries are designing robots whose main function is to kill.

The task of creating a robot ethics will be harder than it seems. The fractured history of bioethics offers a cautionary tale: are they to be utilitarian robots, or deontological robots, or principlist robots, or feminist robots, or what? Anyhow, I suspect that there will be job opportunities for unemployed bioethicists with programming skills in the near future…

Cheers,
