Stanford’s new AI institute is inadvertently showcasing one of tech’s biggest problems

The artificial intelligence industry is often criticized for failing to think through the social repercussions of its technology: think of the gender and racial bias built into everything from facial-recognition software to hiring algorithms.

On Monday (March 18), Stanford University launched a new institute meant to show its commitment to addressing concerns over the industry’s lack of diversity and intersectional thinking. The Institute for Human-Centered Artificial Intelligence (HAI), which plans to raise $1 billion from donors to fund its initiatives, aims to give voice to professionals from fields ranging from the humanities and the arts to education, business, engineering, and medicine, allowing them to weigh in on the future of AI. “Now is our opportunity to shape that future by putting humanists and social scientists alongside people who are developing artificial intelligence,” Stanford president Marc Tessier-Lavigne declared in a press release.

It’s a laudable goal. But in trying to address AI’s blind spots, the institute has been accused of replicating its biases. Of the 121 faculty members initially announced as part of the institute, more than 100 appeared to be white, and a majority were male.

AI’s “sea of dudes” or “white guy problem” has been well-documented, and awareness of the topic is becoming more and more mainstream. Diversity and inclusion have become boilerplate language for any major industry event, including Stanford’s own literature on the launch of HAI, and the institute was quick to acknowledge the problems with its faculty makeup.

“We know we still have a long way to go to reach everyone who can contribute to HAI’s mission, and it is our top priority,” a Stanford HAI spokesperson said in a statement to Quartz, noting that the institute will be hiring 20 more faculty members soon. “We know this will be challenging based on the statistics and existing systemic issues, and we know it is critical to the long-term success of HAI and indeed, AI itself. This will take years to fix but we have to start somewhere—and urgently. It’s a fundamental aim of the educational component of our program and of our outreach.”

The most visible work to address bias within the AI industry is being undertaken by women. Joy Buolamwini, a researcher at MIT Media Lab, has released a number of reports showing that facial recognition algorithms are markedly worse at identifying people of color, a problem that affects everything from consumer technology to the facial recognition systems potentially used by police to identify suspects. Joanna Bryson, a professor at the University of Bath, has published numerous research papers on AI ethics, and how algorithms that try to understand human language pick up unconscious bias.

Virginia Eubanks, author of the book Automating Inequality, investigated the foster-care system in Pennsylvania’s Allegheny County, which uses automated screening tools for reports of child endangerment. People of color are far more likely to be investigated because of this tool, she explained, since there is a racial disparity in calls reporting child endangerment. Mathematician Cathy O’Neil, who writes frequently on data science, wrote on similar issues with car insurance and lending algorithms in her book Weapons of Math Destruction.

AI Now, an organization focused on researching the societal implications of AI, was founded by Kate Crawford and Meredith Whittaker. Research institute Data & Society was founded by danah boyd. Advocacy group Black in AI was co-founded by Google’s Timnit Gebru and Cornell’s Rediet Abebe, and Latinx in AI‘s president is Laura Montoya. The list goes on—and it all points to the powerful contributions of women and people of color in AI, despite their minority status.

“The history of computing is the history of the utilization of women’s labor, because it was not understood as being a profitable industry,” said Rumman Chowdhury, Accenture’s responsible artificial intelligence lead. “Computing was typing, so women must do it. And then they realized there was money involved, and they realize there’s power involved, and suddenly the contributions of all the women and women of color got completely erased from the narrative. And the same thing with ethics and AI.”

The fact that much of today’s AI industry is white and male reflects the demographics of the people who are educating technologists. Statistics released earlier this year in the AI Index report detail an industry where fewer than 20% of AI professors are female. (A Stanford spokesperson points out that the HAI leadership team is 30% female, and that the institute was co-founded by a woman.) A tour through the faculty pages of other top AI universities like Carnegie Mellon, the University of Illinois Urbana-Champaign, and MIT CSAIL illustrates how few people of color or women there are in roles of academic power. These are arguably the people who would have the most experience navigating the conscious and unconscious biases of society, as well as the technical knowledge of how to avoid passing those biases on to our algorithmic progeny.

Clearly, there is more work to be done by the universities that support AI research and education if they want to act on the tenets which they claim to uphold. But as more institutions like Stanford tout their investment in a more principled approach to AI, it’s important to acknowledge that women and people of color have been having that conversation for years.

Chowdhury spoke to Quartz from Washington, DC, where she said she was meeting with lawmakers who are looking into the regulation of artificial intelligence.

“Now that it’s starting to hit people’s pockets in Silicon Valley, suddenly there’s this movement for human-centric AI,” she said. “Five years ago, this would never have happened. And yet, five years ago, in some circles this was already a narrative. So I do genuinely worry about the erasure of all the women and women of color who have built this industry.”
