Google Wades Into Controversy with Dismissal of AI Ethicist Timnit Gebru

December 30, 2020

By John P. Desmond

Google ignited a firestorm around its AI ethics program last week when it dismissed prominent AI ethicist Timnit Gebru, apparently over the contents of an email in which she expressed her frustration after Google asked her to withdraw a paper on large language models she had submitted to an industry conference.

Gebru had sent an email saying she felt “constantly dehumanized” at the company, according to an account in The Washington Post. She had been the co-leader of Google’s Ethical AI Team, where she was researching the fairness and risks associated with Google’s technology.

Of Ethiopian descent, Gebru was a rarity in a Silicon Valley culture known for its racial homogeneity. She became known in a senior role at Google for critically examining bias in the company’s technology and its repercussions. She co-founded the Black in AI advocacy group, which has pushed for greater Black representation in AI development and research.

Gebru’s team had been researching large language models, such as OpenAI’s GPT-3 system, which has been used to generate seemingly human-created news reports, poems and computer code. Google may be researching consumer-facing products that would generate convincing passages of text difficult to distinguish from human writing, the Post reported.

In an interview with Bloomberg, Gebru said she was asked to remove the names of other Google employees from the paper, which was to be submitted to the ACM Conference on Fairness, Accountability, and Transparency, a conference Gebru co-founded that is to be held in March.

Google managers told Gebru the paper had to go through an approval process and that not enough time had been allowed for the review. Gebru called it a “fundamental silencing.”


Thousands Sign Petition Supporting Gebru

The Google protest group Google Walkout For Real Change posted a petition in support of Gebru on Medium, according to the Bloomberg account. The petition was initially signed by more than 400 company employees and more than 500 academic and industry figures; it was later signed by 2,278 Google employees and 3,114 industry allies.

The paper, according to Bloomberg, called out the dangers of using large language models to train algorithms that could, for example, write tweets, answer trivia and translate poetry. The models are essentially trained by analyzing language from the internet, which runs the risk that much of the world’s population would not be reflected in the training data.

This week Google CEO Sundar Pichai issued an apology to Gebru for the way the company handled her departure, according to an account in Axios. He said the company would look at all aspects of the situation.

"I’ve heard the reaction to Dr. Gebru’s departure loud and clear: it seeded doubts and led some in our community to question their place at Google," stated Pichai in the memo he emailed to Google employees. "I want to say how sorry I am for that, and I accept the responsibility of working to restore your trust."

Responding on Twitter, Gebru stated Pichai’s memo did not address the core issues around her departure. “I see no plans for accountability,” she stated.


Paper Focused on Ethical Implications of GPT-3 Large Language Model

A report from NPR, which had reviewed Gebru’s paper, said it explored the potential pitfalls of relying on the GPT-3 tool, which scans massive amounts of information on the internet and produces text as if written by a human. The paper argued the tool could end up mimicking hate speech and other types of derogatory and biased language found online, and it cautioned about the energy cost of using such large-scale AI models.

Gebru told NPR she was given an insufficient explanation of why Google objected to her research paper. “Instead of being like, ‘OK let’s talk,’ they’re like, ‘You know what? Nope, bye,’” Gebru told NPR. “I don’t feel like they thought it through. They could have had a much better outcome through dialogue.”

In 2018, Gebru co-authored a study with Joy Buolamwini, a computer scientist at the MIT Media Lab and founder of the Algorithmic Justice League, an organization that challenges bias in decision-making software. The study showed facial recognition software was much more likely to misidentify people of color, particularly women, than white men. IBM, Amazon, and Microsoft subsequently rolled back their facial recognition product lines, moves that came during national protests over the death of George Floyd. (See AI Trends.)

Reached by NPR, Buolamwini was critical of the Google move. “Ousting Timnit for having the audacity to demand intellectual integrity severely undermines Google’s credibility for supporting rigorous research on AI ethics and algorithmic auditing,” Buolamwini stated. “She deserves more than Google knew how to give, and now she is an all-star free agent who will continue to transform the tech industry.”

Gebru’s 12-page paper was “uncontroversial,” according to an account in Wired from writers who had reviewed it. “This article is a very solid and well-researched piece of work,” stated Julien Cornebise, an honorary associate professor at University College London who had seen a draft of the paper. “It is hard to see what could trigger an uproar in any lab, let alone lead to someone losing their job over it.”

In their open letter, the Google employees asked that senior leadership meet with the artificial intelligence team Gebru helped lead to explain how and why the paper she co-authored was “unilaterally rejected” by management at the company.

Efforts to obtain a statement from Google beyond CEO Pichai’s email message were unsuccessful by our deadline.


This summary article originally ran in AI Trends.