April 20, 2024

The ethical imperative for AI researchers

How can we stop AI robots from becoming killing machines? Ethicists are offering various answers.

One option is to train AI robots to make the right decisions in complex ethical scenarios. When an autonomous machine is confronted with a difficult situation — on the road, in the air, or in our homes and hospitals — it could be “taught” by human beings to choose the right course of action.

Just as researchers have trained AI systems to play Pong and Space Invaders, so too could scientists train self-driving cars to handle “trolley problem” dilemmas. Researchers from MIT have set up a website, the Moral Machine, to poll public intuitions about ethical dilemmas for self-driving cars, with a view to building those intuitions into the software of vehicles.
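To make the idea concrete, here is a minimal sketch in Python of how crowd-sourced verdicts of this kind might be distilled into a decision rule. The scenario features, vote data, and function names are invented for illustration; this is one simple way the general approach could work, not the MIT researchers' actual method.

from collections import Counter

# Hypothetical crowd votes on trolley-style dilemmas for a self-driving car.
# Each scenario is reduced to a few illustrative features; each vote is the
# action ("swerve" or "stay") that a respondent judged to be right.
votes = [
    ({"pedestrians_ahead": 3, "passengers": 1}, "swerve"),
    ({"pedestrians_ahead": 3, "passengers": 1}, "swerve"),
    ({"pedestrians_ahead": 1, "passengers": 4}, "stay"),
    ({"pedestrians_ahead": 1, "passengers": 4}, "swerve"),
    ({"pedestrians_ahead": 1, "passengers": 4}, "stay"),
]

def majority_policy(votes):
    """Map each scenario (as a frozen feature set) to its majority-voted action."""
    tallies = {}
    for features, action in votes:
        key = frozenset(features.items())
        tallies.setdefault(key, Counter())[action] += 1
    return {key: counter.most_common(1)[0][0] for key, counter in tallies.items()}

policy = majority_policy(votes)

def decide(features):
    """Look up the crowd's majority answer for a known scenario."""
    return policy.get(frozenset(features.items()), "no crowd data")

print(decide({"pedestrians_ahead": 3, "passengers": 1}))  # -> "swerve"

A real system would, of course, have to generalise from polled scenarios to unseen ones rather than look them up, which is where the harder machine learning questions begin.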

Similarly, two US-based scholars have recently argued that the doctrine of double effect should be built into the computational systems of autonomous machines.
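The doctrine lends itself to this treatment because it is, in effect, a checklist of conditions. The sketch below encodes its four classical conditions as a permissibility test in Python; the Action fields and the numeric proportionality measure are simplifying assumptions for illustration, not the scholars' actual formalism.

from dataclasses import dataclass

@dataclass
class Action:
    """A candidate action, annotated with the judgments the doctrine needs.
    In a real system these fields would have to be computed, not hand-supplied."""
    intrinsically_wrong: bool   # is the act itself impermissible in kind?
    harm_intended: bool         # is the harm the agent's aim?
    harm_is_means: bool         # does the good effect come about via the harm?
    good_effect: float          # magnitude of the good outcome
    harmful_effect: float       # magnitude of the harm caused

def permitted_by_double_effect(a: Action) -> bool:
    """The four classical conditions of the doctrine of double effect."""
    return (not a.intrinsically_wrong                 # 1. act is not wrong in itself
            and not a.harm_intended                   # 2. harm is foreseen, not intended
            and not a.harm_is_means                   # 3. harm is a side effect, not the means
            and a.good_effect >= a.harmful_effect)    # 4. proportionality

# Example: swerving to avoid three pedestrians, foreseeably endangering one.
swerve = Action(intrinsically_wrong=False, harm_intended=False,
                harm_is_means=False, good_effect=3.0, harmful_effect=1.0)
print(permitted_by_double_effect(swerve))  # -> True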

Yet perhaps we need to do more to fully address the problem.

One scholar, Australian computational scientist James Harland, argued in The Conversation this week that machine learning researchers should receive formal training in moral philosophy.

“As in most areas of science, acquiring the necessary depth to make contributions to the world’s knowledge requires focusing on a specific topic. Often researchers are experts in relatively narrow areas, and may lack any formal training in ethics or moral reasoning. It is precisely this kind of reasoning that is increasingly required.”

Xavier Symons