AI and ML systems have advanced in both sophistication and capability at a staggering rate in recent years. They can now model protein structures based only on the molecule’s amino-acid sequence, create poetry and text on par with human writers — even spot specific individuals in a crowd (assuming their complexion is sufficiently light). But as impressive as these feats of computational prowess are, the field continues to struggle with a number of fundamental moral and ethical issues. A facial recognition system designed to identify terrorists can just as easily be leveraged to monitor peaceful protesters or suppress ethnic minorities, depending on how it is deployed.
What’s more, the development of AI to date has been largely concentrated in the hands of just a few large companies such as IBM, Google, Amazon and Facebook, as they’re among the few with sufficient resources to pour into its development. These companies aren’t doing this out of the goodness of their hearts or for bragging rights; they’re doing it to create and sell desirable products. And we’ve seen what happens when improving a company’s bottom line comes at the cost of societal harm. That’s why when Alphabet coalesced its various AI endeavors under the Google.AI banner in 2017, the company also created an ethics team to monitor those projects, ensuring that they are being used for the betterment of society, not simply to increase profits.
That team was co-led by Timnit Gebru, a leading researcher on racial discrepancies in facial recognition systems and one of only a handful of Black women in the field of AI, and Margaret Mitchell, a computer scientist specializing in the study of algorithmic bias. Both women were staunch advocates for increasing diversity in what has historically been an overwhelmingly white and male field. Their team was among the most diverse in the entire company and regularly produced groundbreaking studies that challenged traditionally held views of AI research. They had also raised concerns that Google was censoring research critical of its most ambitious (and profitable) AI programs. In recent months, both Gebru and Mitchell have been summarily fired.
Gebru’s dismissal came in December after she co-authored a research paper criticizing large-scale AI systems. In it, Gebru and her team argued that AI systems, such as Google’s trillion-parameter AI language model, that are designed to mimic language could harm minority groups. The paper’s introduction reads, “we ask whether enough thought has been put into the potential risks associated with developing them and strategies to mitigate these risks.”
According to Gebru, the company fired her after she questioned a directive from Jeff Dean, head of Google AI, to withdraw the paper from the ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT). Dean has since countered that the paper did not “meet our bar for publication” and that Gebru had threatened to resign unless Google met her list of specific conditions, which the company declined to do.
“Apparently my manager’s manager sent an email [to] my direct reports saying she accepted my resignation,” Gebru tweeted in December. “I hadn’t resigned—I had asked for simple conditions first and said I would respond when I’m back from vacation. But I guess she decided for me 🙂 that’s the lawyer speak.”
“I said here are the conditions. If you can meet them great I’ll take my name off this paper, if not then I can work on a last date,” she continued. “Then she sent an email to my direct reports saying she has accepted my resignation. So that is Google for you folks. You saw it happen right here.”
Gebru’s corporate email access was cut off before she had returned from her vacation, but she nonetheless published excerpts of her manager’s manager’s response on Twitter.
Gebru’s termination, especially the way in which Dean handled the situation, set off a firestorm of criticism both inside and outside the company. More than 1,400 Google employees as well as 1,900 other supporters signed a letter of protest while numerous leaders in the AI field expressed their outrage online, arguing that she had been terminated for speaking truth to power. They also questioned whether the company was actually committed to promoting diversity within its ranks and wondered aloud why Google would even bother employing ethicists if they were not free to challenge the company’s actions.
“With Gebru’s firing, the civility politics that yoked the young effort to construct the necessary guardrails around AI have been torn apart, bringing questions about the racial homogeneity of the AI workforce and the inefficacy of corporate diversity programs to the center of the discourse,” Alex Hanna and Meredith Whittaker wrote in a Wired op-ed. “But this situation has also made clear that—however sincere a company like Google’s promises may seem—corporate-funded research can never be divorced from the realities of power, and the flows of revenue and capital.”
Mitchell subsequently penned an open letter in support of Gebru, stating:
The firing of Dr. Timnit Gebru is not okay, and the way it was done is not okay. It appears to stem from the same lack of foresight that is at the core of modern technology, and so itself serves as an example of the problem. The firing seems to have been fueled by the same underpinnings of racism and sexism that our AI systems, when in the wrong hands, tend to soak up. How Dr. Gebru was fired is not okay, what was said about it is not okay, and the environment leading up to it was — and is — not okay. Every moment where Jeff Dean and Megan Kacholia do not take responsibility for their actions is another moment where the company as a whole stands by silently as if to intentionally send the horrifying message that Dr. Gebru deserves to be treated this way. Treated as if she were inferior to her peers. Caricatured as irrational (and worse). Her research writing publicly defined as below the bar. Her scholarship publicly declared to be insufficient. For the record: Dr. Gebru has been treated completely inappropriately, with intense disrespect, and she deserves an apology.
After this public criticism of her employer, Google locked Mitchell’s email account and on January 19th opened an investigation into Mitchell’s actions, accusing her of downloading a large number of internal documents and sharing them with outsiders.
“Our security systems automatically lock an employee’s corporate account when they detect that the account is at risk of compromise due to credential problems or when an automated rule involving the handling of sensitive data has been triggered,” Google said in a January statement. “In this instance, yesterday our systems detected that an account had exfiltrated thousands of files and shared them with multiple external accounts. We explained this to the employee earlier today.”
According to an unnamed Axios source, “Mitchell had been using automated scripts to look through her messages to find examples showing discriminatory treatment of Gebru before her account was locked.” Mitchell’s account remained locked for five weeks until her employment was terminated in February, further exacerbating tensions between the Ethics AI team and management.
“After conducting a review of this manager’s conduct, we confirmed that there were multiple violations of our code of conduct, as well as of our security policies, which included the exfiltration of confidential business-sensitive documents and private data of other employees,” a Google representative told Engadget. Alongside Mitchell’s firing, the company announced that Marian Croak would be taking over the reins of the Ethical AI team, despite her having no direct experience with AI development.
“I heard and acknowledge what Dr. Gebru’s exit signified to female technologists, to those in the Black community and other underrepresented groups who are pursuing careers in tech, and to many who care deeply about Google’s responsible use of AI,” Dean said in an internal memo published in February and obtained by Axios. “It led some to question their place here, which I regret.”
“I understand we could have and should have handled this situation with more sensitivity,” he continued. “And for that, I am sorry.”
The company has also pledged to make changes with regard to its diversity efforts moving forward. Those changes include tying pay for VPs and higher senior management partly to reaching diversity and inclusion goals, streamlining its research publishing process, increasing its employee retention staff, and enacting new procedures regarding potentially problematic employee exits. Additionally, Google shuffled its AI teams so that the ethical AI researchers would no longer report to Megan Kacholia. However, the company managed to step on one last rake by failing to notify the Ethical AI team of the changes until after Croak had been hired.