Published 14:56 IST, April 9th 2019

Ethical AI - Why Special Care Is Required For a True Next-Gen Technical Experience

The biggest tech companies want you to know that they’re taking special care to ensure that their use of artificial intelligence to sift through mountains of data, analyze faces or build virtual assistants doesn’t spill over to the dark side


The biggest tech companies want you to know that they’re taking special care to ensure that their use of artificial intelligence to sift through mountains of data, analyze faces or build virtual assistants doesn’t spill over to the dark side.
 

But their efforts to assuage concerns that their machines may be used for nefarious ends have not been universally embraced. Some skeptics see it as mere window dressing by corporations more interested in profit than in society’s best interests.
 


“Ethical AI” has become a new corporate buzz phrase, slapped on internal review committees, fancy job titles, research projects and philanthropic initiatives. The moves are meant to address concerns over racial and gender bias emerging in facial recognition and other AI systems, as well as address anxieties about job losses to the technology and its use by law enforcement and the military.
 

But how much substance lies behind the increasingly public ethics campaigns? And who gets to decide which technological pursuits do harm?
 


Google was hit with both questions when it formed a new board of outside advisers in late March to help guide how it uses AI in products. But instead of winning over potential critics, it sparked internal rancor. A little more than a week later, Google bowed to pressure from the backlash and dissolved the council.
 

The outside board fell apart in stages. One of the board’s eight inaugural members quit within days, and another quickly became the target of protests from Google employees who said her conservative views don’t align with the company’s professed values.


 

As thousands of employees called for the removal of Heritage Foundation President Kay Coles James, Google disbanded the board last week.
 


“It’s become clear that in the current environment, (the council) can’t function as we wanted,” the company said in a statement.
 

That environment is one of increasing concern that corporate AI ethics campaigns lack teeth.
 


“I think (Google’s decision) reflects a broader public understanding that ethics involves more than just creating an ethics board without an institutional framework to provide for accountability,” AI researcher Ben Wagner said.
 

Google’s original initiative fell into a tech industry trend that Wagner calls “ethics-washing,” which he describes as a superficial effort that’s mostly a show for the public or lawmakers.
 

“It’s basically an attempt to pretend like you’re doing ethical things and using ethics as a tool to reach an end, like avoiding regulation,” said Wagner, an assistant professor at the Vienna University of Economics and Business. “It’s a new form of self-regulation without calling it that by name.”
 

Big companies have made an increasingly visible effort to discuss their AI efforts in recent years.
 

Microsoft, which often tries to position itself as an industry leader on ethics and privacy issues, published its principles around developing AI, released a short book that discussed the societal implications of the technology and has called for some government regulation of AI technologies.
 

The company’s president even met with Pope Francis earlier this year to discuss industry ethics. Amazon recently announced it is helping fund federal research into “algorithmic fairness,” and Salesforce employs an “architect” for ethical AI practice, as well as a “chief ethical and human use” officer. It’s hard to find a brand-name tech firm without similar initiatives.
 

It’s a good thing that companies are studying the issue and seeking perspectives on industry ethics, said Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, a research organization. But ultimately, he said, a company’s CEO is tasked with deciding what suggestions on AI ethics to incorporate into business decisions.
 

“I think overall it’s a positive step rather than a fig leaf,” he said. “That said, the proof is in successful implementation. I think the jury is still out on that.”
 

The impact artificial intelligence can have on society has never been more clear, Etzioni said, and companies are reacting to studies about the power of recommendation algorithms and bias in AI.
 

But as Google’s attempt shows, discussing the issues in the public eye also invites public scrutiny.
 

Google employees have had more success than other tech workers at demanding change at their company. The internet search behemoth dropped a contract with the Pentagon after employees pushed back on the ethical implications of using the company’s AI technology to analyze drone video.
 

And after more than 2,400 Google employees signed a petition calling for James to be taken off the board, Google scrapped the board altogether. Employees said James has made past comments that were anti-trans and anti-immigrant and should not be on an ethics panel. The Heritage Foundation did not respond to a request for comment.
 

Google had also faced dissent from its chosen council members. Alessandro Acquisti, a professor at Carnegie Mellon University, announced on Twitter that he was declining the invitation. He wrote that he is devoted to grappling with fairness and inclusion in AI, but this was not “the right forum for me to engage in this important work.” He declined to comment further.
 

One expert who had committed to staying on the council is Joanna Bryson, associate professor in computing at the University of Bath. A self-described liberal, she said before the dissolution that it makes sense to have political diversity on the panel, and she didn’t agree with those who think it’s just for show.
 

“I just don’t think Google is that stupid,” Bryson said. “I don’t think they’re there just to have a poster on a wall.”
 

She said, however, that companies like Google and Microsoft do have a real concern about liability — meaning they want to make sure they show themselves, and the public, that they’ve tried their best to build products the right way before releasing them.
 

“It’s not just the right thing to do, it’s the thing they need to do,” she said. Bryson said she was hopeful Google actually wanted to brainstorm hard problems and should find another way to do so after the council dissolved.
 

It’s unclear what Google will do next. The company said it’s “going back to the drawing board” and would find other ways of getting outside opinions.
 

Wagner said now would be the time for Google to set up ethics principles that include commitments it must stick to, external oversight and other checkpoints to hold it accountable.
 

Even if companies keep setting up external boards to oversee AI responsibility, government regulation will still be needed, said Liz O’Sullivan, a tech worker who left the AI company Clarifai over the company’s work in the Pentagon’s Project Maven, the same project that Google dropped after its employees protested.
 

O’Sullivan is wary of boards that can make suggestions that companies are under no legal obligation to stick to.


“Every company of that size that states they’re interested in having some sort of oversight that has no ability or authority to restrict or restrain company behavior seems like they’re doing it for the press of it all,” she said.

