IBM Will No Longer Develop Facial Recognition Tech

IBM CEO Arvind Krishna said on Monday, June 8, that the company will no longer develop or offer general-purpose facial recognition or analysis software. In a letter to Congress written in support of the Justice in Policing Act of 2020, Krishna advocated new reforms to promote the responsible use of technology and to combat systemic racial injustice and police misconduct.

“IBM strongly opposes and will not tolerate the use of technologies, including facial recognition technologies from other providers, for mass surveillance, to create racial profiles, to violate fundamental human rights and freedoms, or for purposes that do not correspond to our values and principles,” Krishna wrote in the letter.

Krishna, who assumed the role of chief executive in early April this year, added that it was time for Congress to begin a national dialogue on the effects of facial recognition technology and whether and how it "should be used by national law enforcement agencies."

The CEO also expressed concern about the racial bias often found in today's artificial intelligence systems. Krishna called for increased oversight and testing of AI tools, especially when they are used in law enforcement, and for national policies that "give the police greater transparency and accountability, such as body cameras and modern data analysis techniques."

People familiar with the matter told CNBC that the death of George Floyd, and the renewed national focus on police reform and racial inequality that followed, had convinced IBM to discontinue its facial recognition products.

In recent years, facial recognition systems have advanced dramatically thanks to developments in areas such as machine learning. Without official oversight, however, they have been allowed to operate largely unregulated, often at the expense of user privacy. The technology was thrust into the spotlight by a startup called Clearview AI, which built a database of more than 3 billion images, mainly by scraping social media websites. Since then, Clearview has faced backlash from companies like Twitter and is currently contending with a variety of privacy lawsuits.

Clearview AI has also reportedly been used by law enforcement agencies during the ongoing Black Lives Matter protests in the United States. Experts have argued that these systems can misidentify people because they are mostly trained on white male faces.

Krishna did not say whether the company would reconsider its decision if Congress introduces new laws scrutinizing technologies like facial recognition. We have contacted IBM and will update this story as soon as we hear back.
