“Universal bereavement, an inspiring achievement”

You have to be a baby boomer or an aficionado of black humour to remember the cynical lyrics of mathematics professor and cabaret performer Tom Lehrer. One of his ditties, “We will all go together when we go”, dealt with the mega-risk of the destruction of humanity by our own technology. Or, as Lehrer put it, “When the air becomes uranious, We will all go simultaneous”.

The pessimism without the humour is more or less the theme of The Cambridge Project for Existential Risk — a joint initiative of the distinguished philosopher Huw Price, the astronomer royal, Lord Martin Rees, and Skype co-founder Jaan Tallinn. Since November they have been banging the drum about technological developments that could pose extinction-level risks to humanity.

Whether humanity is worth saving, and what precautions we are prepared to take to save it, could be the biggest bioethics questions of all. Oxford bioethicist Julian Savulescu, for instance, has argued that humanity needs to be morally enhanced with drugs or genetic engineering or face extinction.

“Such dangers,” says the Cambridge group, “have been suggested from progress in AI, from developments in biotechnology and artificial life, from nanotechnology, and from possible extreme effects of anthropogenic climate change. The seriousness of these risks is difficult to assess, but that in itself seems a cause for concern, given how much is at stake.”

In a recent column in the New York Times, Professor Price warns that artificial intelligence could someday exceed natural intelligence. And if machines are more competent than people, people may become irrelevant. "We need to take seriously the possibility that there might be a 'Pandora's box' moment with artificial intelligence that, if missed, could be catastrophic," he told New Scientist.

Improbable? Fear-mongering? Perhaps, but Price says that if extinction is even a remote possibility, caution is warranted. "It tends to be regarded as a flaky concern, but given that we don't know how serious the risks are, that we don't know the time scale, dismissing the concerns is dangerous. What we're trying to do is to push it forward in the respectable scientific community," he told AP.

This article is published by Michael Cook and BioEdge.org under a Creative Commons licence. You may republish it or translate it free of charge with attribution for non-commercial purposes following these guidelines. If you teach at a university we ask that your department make a donation. Commercial media must contact us for permission and fees. Some articles on this site are published under different terms.
