Published 16:27 IST, April 4th 2019
Artificial intelligence (AI) bias: Face recognition researcher fights Amazon
Facial recognition technology was already seeping into everyday life — from your photos on Facebook to police scans of mugshots — when Joy Buolamwini noticed a serious glitch: Some of the software couldn’t detect dark-skinned faces like hers.
That revelation sparked the Massachusetts Institute of Technology researcher to launch a project that’s having an outsize influence on the debate over how artificial intelligence should be deployed in the real world.
Her tests on software created by brand-name tech firms such as Amazon uncovered much higher error rates in classifying the gender of darker-skinned women than for lighter-skinned men.
Along the way, Buolamwini has spurred Microsoft and IBM to improve their systems and irked Amazon, which publicly attacked her research methods. On Wednesday, a group of AI scholars, including a winner of computer science’s top prize, launched a spirited defence of her work and called on Amazon to stop selling its facial recognition software to police.
Her work has also caught the attention of political leaders in statehouses and Congress and led some to seek limits on the use of computer vision tools to analyze human faces.
“There needs to be a choice,” said Buolamwini, a graduate student and researcher at MIT’s Media Lab. “Right now, what’s happening is these technologies are being deployed widely without oversight, oftentimes covertly, so that by the time we wake up, it’s almost too late.”
Buolamwini is hardly alone in expressing caution about the fast-moving adoption of facial recognition by police, government agencies and businesses from stores to apartment complexes. Many other researchers have shown how AI systems, which look for patterns in huge troves of data, will mimic the institutional biases embedded in the data they are learning from. For instance, if AI systems are developed using images of mostly white men, the systems will work best in recognizing white men.
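That kind of skew is measurable before a model is ever trained. As a toy illustration (the labeling scheme below is hypothetical, not drawn from any real training set):

```python
# Toy sketch of the imbalance described above: auditing a training corpus
# by subgroup makes the skew visible before any model is trained.
# The (skin tone, gender) labels are hypothetical, for illustration only.
from collections import Counter

def composition(training_labels):
    """training_labels: iterable of (skin_tone_group, gender) pairs, one per image."""
    counts = Counter(training_labels)
    total = sum(counts.values())
    return {group: counts[group] / total for group in counts}

# e.g. composition(labels) might return {("lighter", "male"): 0.62, ...};
# a model fit to such a corpus will tend to do best on the overrepresented group.
```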
Those disparities can sometimes be a matter of life or death: One recent study of the computer vision systems that enable self-driving cars to “see” the road shows they have a harder time detecting pedestrians with darker skin tones.
What’s struck a chord about Buolamwini’s work is her method of testing the systems created by well-known companies. She applies such systems to a skin-tone scale used by dermatologists, then names and shames those that show racial and gender bias. Buolamwini, who’s also founded a coalition of scholars, activists and others called the Algorithmic Justice League, has blended her scholarly investigations with activism.
“It adds to a growing body of evidence that facial recognition affects different groups differently,” said Shankar Narayan, of the American Civil Liberties Union of Washington state, where the group has sought restrictions on the technology. “Joy’s work has been part of building that awareness.”
Amazon, whose CEO, Jeff Bezos, she emailed directly last summer, has responded by aggressively taking aim at her research methods.
A Buolamwini-led study published just over a year ago found disparities in how facial-analysis systems built by IBM, Microsoft and the Chinese company Face Plus Plus classified people by gender. Darker-skinned women were the most misclassified group, with error rates of up to 34.7%. By contrast, the maximum error rate for lighter-skinned males was less than 1%.
The study called for “urgent attention” to address the bias.
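The core of such a disaggregated audit is straightforward to express. Below is a minimal sketch, assuming a benchmark labeled on the dermatologists’ skin-type scale and a vendor gender classifier; both `benchmark` and `classify` are hypothetical stand-ins, not any company’s actual API.

```python
# Minimal sketch of a disaggregated audit in the style of the study.
# `benchmark` and `classify` are hypothetical stand-ins, not a real dataset
# or a real vendor endpoint.
from collections import defaultdict

def audit(benchmark, classify):
    """Return gender-classification error rates per (skin tone, gender) group.

    benchmark: iterable of (image, skin_type, true_gender) tuples, where
               skin_type is a dermatologist rating from 1 (lightest) to 6 (darkest).
    classify:  function mapping an image to a predicted gender label.
    """
    errors, totals = defaultdict(int), defaultdict(int)
    for image, skin_type, true_gender in benchmark:
        # Bucket results by subgroup so disparities that a single aggregate
        # accuracy number would hide become visible.
        group = ("darker" if skin_type >= 4 else "lighter", true_gender)
        totals[group] += 1
        if classify(image) != true_gender:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}
```

Reported this way, systems that look near-perfect in aggregate can still show the 34.7%-versus-under-1% gap described above.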
“I responded pretty much right away,” said Ruchir Puri, chief scientist of IBM Research, describing an email he received from Buolamwini last year.
Since then, he said, “it’s been a very fruitful relationship” that informed IBM’s unveiling this year of a new 1 million-image database for better analyzing the diversity of human faces. Previous systems have been overly reliant on what Buolamwini calls “pale male” image repositories.
Microsoft, which had the lowest error rates, declined comment. Messages left with Megvii, which owns Face Plus Plus, weren’t immediately returned.
Months after her first study, when Buolamwini worked with University of Toronto researcher Inioluwa Deborah Raji on a follow-up test, all three companies showed major improvements.
But this time they also added Amazon, which has sold the system it calls Rekognition to law enforcement agencies. The results, published in late January, showed Amazon badly misidentifying darker-hued women.
“We were surprised to see that Amazon was where their competitors were a year ago,” Buolamwini said.
Amazon dismissed what it called Buolamwini’s “erroneous claims” and said the study confused facial analysis with facial recognition, improperly measuring the former with techniques for evaluating the latter.
“The answer to anxieties over new technology is not to run ‘tests’ inconsistent with how the service is designed to be used, and to amplify the test’s false and misleading conclusions through the news media,” Matt Wood, general manager of artificial intelligence for Amazon’s cloud-computing division, wrote in a January blog post. Amazon declined requests for an interview.
“I didn’t know their reaction would be quite so hostile,” Buolamwini said recently in an interview at her MIT lab.
Coming to her defense Wednesday was a coalition of researchers, including AI pioneer Yoshua Bengio, a recent winner of the Turing Award, considered the tech field’s version of the Nobel Prize.
They criticized Amazon’s response, especially its distinction between facial recognition and analysis.
“In contrast to Dr. Wood’s claims, bias found in one system is cause for concern in the other, particularly in use cases that could severely impact people’s lives, such as law enforcement applications,” they wrote.
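The distinction at issue is easy to state in code. Below is a schematic sketch, with all function names hypothetical; it assumes, as is common in practice (though neither side confirms it here), that both tasks build on one shared face-embedding model, which is the scholars’ reason for treating bias in one as a warning sign for the other.

```python
# Schematic contrast between the two tasks in dispute. All names are
# hypothetical; this is not any vendor's actual API.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def facial_analysis(embed, attribute_head, image):
    """Facial ANALYSIS: predict attributes (e.g., gender) of a single face.
    This is the task the MIT studies measured."""
    return attribute_head(embed(image))

def facial_recognition(embed, gallery, image, threshold=0.8):
    """Facial RECOGNITION: match one face against enrolled identities.
    This is the task sold to law enforcement."""
    probe = embed(image)
    scores = {identity: cosine_similarity(probe, enrolled)
              for identity, enrolled in gallery.items()}
    best = max(scores, key=scores.get)
    # If the shared `embed` model represents some faces poorly, both the
    # analysis and the recognition built on it inherit that weakness.
    return best if scores[best] >= threshold else None
```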
Its few publicly known clients have defended Amazon’s system.
Chris Adzima, senior information systems analyst for the Washington County Sheriff’s Office in Oregon, said the agency uses Amazon’s Rekognition to identify the most likely matches among its collection of roughly 350,000 mug shots. But because a human makes the final decision, “the bias of that computer system is not transferred over into any results or any action taken,” Adzima said.
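That candidates-plus-human-review workflow maps onto Rekognition’s face-search API. A sketch using the boto3 client follows; `search_faces_by_image` is a real Rekognition call, but the collection name and threshold below are illustrative assumptions, not the agency’s actual configuration.

```python
# Sketch of the workflow Adzima describes: ask Rekognition for a ranked list
# of candidate matches and leave the final identification to a human.
# The collection ID and threshold are illustrative, not real settings.
import boto3

rekognition = boto3.client("rekognition")

def candidate_matches(probe_image_bytes, collection_id="mugshot-gallery"):
    response = rekognition.search_faces_by_image(
        CollectionId=collection_id,      # gallery of pre-enrolled mug shots
        Image={"Bytes": probe_image_bytes},
        MaxFaces=5,                      # a short candidate list, not one "answer"
        FaceMatchThreshold=80,           # similarity cutoff on a 0-100 scale
    )
    # Each match carries a similarity score; ExternalImageId is whatever
    # label (e.g., a booking number) was attached when the face was enrolled.
    return [(m["Face"].get("ExternalImageId"), m["Similarity"])
            for m in response["FaceMatches"]]
```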
But increasingly, regulators and legislators are having their doubts. A bipartisan bill in Congress seeks limits on facial recognition. Legislatures in Washington and Massachusetts are considering laws of their own.
Buolamwini said a major message of her research is that AI systems need to be carefully reviewed and consistently monitored if they’re going to be used on the public. Not just to audit for accuracy, she said, but to ensure face recognition isn’t abused to violate privacy or cause other harms.
“We can’t just leave it to companies alone to do these kinds of checks,” she said.