How can we stop AI robots from becoming killing machines? Ethicists are offering various answers.
One option is for scientists to train AI robots to make the right decisions in complex ethical scenarios. When an autonomous machine is confronted with a difficult situation -- on the road, in the air, or in our homes and hospitals -- it could be “taught” by human beings to choose the right course of action.
Just as researchers have trained AI systems to play Pong and Space Invaders, so too could scientists train self-driving cars to handle “trolley problems”. Researchers at MIT have set up a website to poll public intuitions about ethical dilemmas facing self-driving cars, with a view to building those judgements into the vehicles’ software.
Similarly, two US-based scholars have recently argued that the doctrine of double effect should be built into the computational system of autonomous machines.
Yet perhaps we need to do more to fully address the problem.
One scholar, Australian computer scientist James Harland, argued in The Conversation this week that machine learning researchers should receive formal training in moral philosophy:
“As in most areas of science, acquiring the necessary depth to make contributions to the world’s knowledge requires focusing on a specific topic. Often researchers are experts in relatively narrow areas, and may lack any formal training in ethics or moral reasoning. It is precisely this kind of reasoning that is increasingly required.”
This article is published by BioEdge under a Creative Commons licence. You may republish it or translate it free of charge with attribution for non-commercial purposes following these guidelines. If you teach at a university we ask that your department make a donation. Commercial media must contact us for permission and fees. Some articles on this site are published under different terms.