According to a warning from a top Google executive, the technology behind facial recognition does not yet have “the diversity it needs” and has “inherent biases”.
The director of Google's cloud-computing division made the remark following a debacle in which Amazon's software incorrectly identified 28 members of Congress, disproportionately people of colour, as police suspects.
Greene said Google is still working to gather huge pools of data to improve reliability, and the firm has therefore not yet opened up its facial recognition technology for public use.
However, she declined to discuss the company's controversial work with the military.
“Bad things happen when I talk about Maven,” Greene said, referring to a soon-to-be-abandoned project drawn up with the US military to develop artificial intelligence technology for drones.
Following significant pressure from its employees, including a host of resignations, Google announced that it would let the Pentagon contract lapse when it expires in 2019.
Silicon Valley workers and civil rights groups have expressed considerable concern over facial recognition technology and its usage – specifically in law enforcement. At least two police forces in the US use Amazon's ‘Rekognition’ software, which lets clients power facial recognition with Amazon's AI technology.
There are major concerns about the readiness and accuracy of the technology, which has been used – controversially – in China.
The American Civil Liberties Union (ACLU) discovered the misidentification of members of Congress and published its findings on Thursday. Amazon, however, disputed the group's claims, saying the ACLU had used the wrong settings.
While Google uses the technology to help users identify friends in pictures, the underlying technology is not open to the public, Greene said.
“We need to be really careful about how we use this kind of technology,” she told the BBC.
“We’re thinking really deeply. The humanistic side of AI – it doesn’t have the diversity it needs and the data itself will have some inherent biases, so everybody’s working to understand that.”
She added: “I think everybody wants to do the right thing. I’m sure Amazon wants to do the right thing too. But it’s a new technology, it’s a very powerful technology.”
Google's image recognition software has been inaccurate in the past. In 2015 the company had to apologize after the software labelled a black couple as “gorillas”.
Speaking of facial recognition more widely, the ACLU said: “Congress should enact a federal moratorium on law enforcement use of this technology until there can be a full debate on what – if any – uses should be permitted.”
(Adapted from Forbes.com)