Nuffield Council releases study of AI in healthcare


AI in healthcare is developing rapidly, with many applications currently in use or in development in the UK and worldwide. The Nuffield Council on Bioethics examines the current and potential applications of AI in healthcare, and the ethical issues arising from its use, in a new briefing note, Artificial Intelligence (AI) in healthcare and research.

There is much hope and excitement surrounding the use of AI in healthcare. It has the potential to make healthcare more efficient and patient-friendly; speed up and reduce errors in diagnosis; help patients manage symptoms or cope with chronic illness; and help avoid human bias and error. A number of AI applications are already in use:

  • Early detection of infectious disease outbreaks and sources of epidemics, such as water contamination.
  • Prediction of adverse drug reactions, which are estimated to cause up to 6.5% of hospital admissions in the UK.
  • AI-driven information tools and chatbots that help patients manage chronic medical conditions.
  • AI-controlled robotic tools used in research to carry out specific tasks in keyhole surgery, such as tying knots to close wounds.
  • Analysis of speech patterns to predict psychotic episodes and to identify and monitor symptoms of neurological conditions such as Parkinson’s disease.
  • Analysis of medical scans.

But there are some important questions to consider: who is responsible for the decisions made by AI systems? Will increasing use of AI lead to a loss of human contact in care? What happens if AI systems are hacked? The Nuffield Council briefing note outlines the ethical issues raised by the use of AI in healthcare, such as:

  • the potential for AI to make erroneous decisions;
  • uncertainty about who is responsible when AI is used to support decision-making;
  • difficulties in validating the outputs of AI systems;
  • the risk of inherent bias in the data used to train AI systems;
  • ensuring the security and privacy of potentially sensitive data;
  • securing public trust in the development and use of AI technology;
  • effects on people's sense of dignity and social isolation in care situations;
  • effects on the roles and skill-requirements of healthcare professionals; and
  • the potential for AI to be used for malicious purposes.

The briefing note outlines some of the key ethical issues that need to be considered if the benefits of AI technology are to be realised, and public trust maintained. The challenge, says the Nuffield Council, will be to ensure that innovation in AI is developed and used in ways that are transparent, that address societal needs, and that are consistent with public values.


This article is published by Michael Cook and BioEdge under a Creative Commons licence. You may republish it or translate it free of charge with attribution for non-commercial purposes following these guidelines. If you teach at a university we ask that your department make a donation. Commercial media must contact us for permission and fees. Some articles on this site are published under different terms.
