Microsoft, IBM, Google and Amazon have all hit a “stop” or “pause” button on their facial recognition technology because of fears that it discriminates against people of colour.
IBM will no longer offer the technology. “IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values,” said CEO Arvind Krishna in a letter to Congress this week.
Similarly, Microsoft president Brad Smith said that his company “will not sell facial-recognition technology to police departments in the United States until we have a national law in place, grounded in human rights, that will govern this technology.”
Timnit Gebru, a leader of Google’s ethical artificial intelligence team, told the New York Times: “It should be banned at the moment. I don’t know about the future.”
And Amazon, which has been slow to react to criticism, is implementing “a one-year moratorium on police use of Amazon’s facial recognition technology”. It added that “We hope this one-year moratorium might give Congress enough time to implement appropriate rules, and we stand ready to help if requested.”
But the digital rights group Fight for the Future complained that these avowals are just “a public relations stunt”. It argues that “the reality is that facial recognition technology is too dangerous to be used at all. Like nuclear or biological weapons, it poses such a profound threat to the future of humanity that it should be banned outright. Lawmakers need to stop pandering to Big Tech companies and corrupt law enforcement agencies and do their jobs. Congress should act immediately to ban facial recognition for all surveillance purposes.”
Meanwhile, Ronald Bailey points out in an article at Reason that another company, Clearview AI, is forging ahead with its technology. He writes: “it is jettisoning all of its private business clients and will sell its facial recognition services only to law enforcement agencies. Clearview AI has created an app that enables police to match photographs to its database of over 3 billion photos scraped from millions of public websites including Facebook, YouTube, Twitter, Instagram, and Venmo.”
Bailey quotes academics who are horrified at this development. "Facial recognition is the perfect tool for oppression," warn Woodrow Hartzog, a professor of law and computer science at Northeastern University, and Evan Selinger, a philosopher at the Rochester Institute of Technology. It is "the most uniquely dangerous surveillance mechanism ever invented."
Michael Cook is editor of BioEdge