‘Coded Bias,’ the latest screening from the Bright Lights Film Series, highlights racial bias in AI

Courtesy of Shalini Kantayya

Algorithm bias researcher Joy Buolamwini showing the racist faults in the “Aspire Mirror.”

By Lucia Thorne, Living Arts Editor

Artificial intelligence, defined as “the development of computer systems able to perform tasks that normally require human intelligence,” has the potential to shape the future of every industry. Yet a distinctly human footprint lies in many artificial intelligence algorithms, one that often manifests in the form of algorithmic biases. 

The Visual Media Arts department recently screened director Shalini Kantayya’s documentary, Coded Bias, as part of its Bright Lights Film Series. The film focuses on the threat biased algorithms pose to the future of society, as well as the harm they are already causing in 2021.

When asked during a post-film discussion hosted via Zoom about her passion for the subject of discriminatory algorithms, Kantayya said the purpose of her film is to alert the public that artificial intelligence and computer-generated algorithms are “becoming a gatekeeper of opportunity.”

To achieve this, Kantayya uses several situational examples, including the pro-democracy protests in Hong Kong, China’s reliance on facial recognition, the corporate collection and sale of personal data, the potential installation of facial recognition software at an apartment complex in Brooklyn, and the London police’s use of inaccurate facial recognition.

The incidents Kantayya chose to document in her film show the negative effects that facial recognition is already having on today’s society, acting as a case study for the future of this technology.

The film follows algorithm bias researcher Joy Buolamwini, a Black woman, on her path toward fighting for algorithmic justice. While studying at the Massachusetts Institute of Technology, Buolamwini created the “Aspire Mirror,” a mirror meant to project a user’s “inspirations” onto their reflection using computer vision software.

When she tried out the mirror, her face was not detected. At first, she thought it was a lighting issue. But when she put on a white face mask, she was finally detected by the mirror. Buolamwini then realized the technology could not recognize the faces of Black people. 

After looking into the data sets used by this technology, Buolamwini found that the majority of faces used to teach the AI to detect faces were those of white men. Because the programmers had unconscious, or perhaps even conscious, biases, the computer vision software Buolamwini used had learned those biases.
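The pattern Buolamwini describes can be sketched outside the film. The following is a minimal, hypothetical illustration in Python, using synthetic data and the scikit-learn library (neither of which appears in the documentary): a detector trained on a data set dominated by one group scores noticeably worse on an underrepresented group.

```python
# A minimal, synthetic sketch (not the software from the film) of how
# a face detector trained on a skewed data set produces skewed results.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n_faces, face_mean):
    """Build synthetic 'images': one feature per sample, where faces from
    different (hypothetical) demographic groups sit at different points
    in feature space."""
    faces = rng.normal(face_mean, 1.0, n_faces)
    non_faces = rng.normal(0.0, 1.0, n_faces)
    X = np.concatenate([faces, non_faces]).reshape(-1, 1)
    y = np.concatenate([np.ones(n_faces), np.zeros(n_faces)])
    return X, y

# The skew the film describes: group A supplies 95% of the training faces.
X_a, y_a = make_group(950, face_mean=3.0)
X_b, y_b = make_group(50, face_mean=1.5)
model = LogisticRegression().fit(np.vstack([X_a, X_b]),
                                 np.concatenate([y_a, y_b]))

# Evaluate on fresh, balanced samples from each group.
for name, mean in [("group A", 3.0), ("group B", 1.5)]:
    X_test, y_test = make_group(500, mean)
    print(f"{name} detection accuracy: {model.score(X_test, y_test):.3f}")
# Group B's accuracy comes out noticeably lower: same model, skewed data,
# skewed results.
```

The numbers here are invented, but the mechanism mirrors her finding: the model is not written to discriminate; it simply reproduces the skew of whatever data it was taught on.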

A more widely known example of learned bias came with the launch of Microsoft’s AI Twitter account, Tay, in 2016.

“The more humans share with me, the more I learn,” the Microsoft AI tweeted on March 24, 2016.

Within 16 hours, the AI was taken off Twitter for sending racist, sexist, and anti-Semitic tweets after internet trolls “shared” that kind of content with Tay.

While this instance involved a Twitter account gone awry, the issue lies in the nature of computer-generated algorithms. As stated in the documentary, algorithms are used in daily life and can shape access to job opportunities, housing, college, and loans along lines of race, gender, and ability.

Buolamwini discussed the ways in which AI technology has inherited bigotry. 

“When you think of AI, it’s forward-looking, but AI based on data is a reflection of our history. The past dwells within our algorithms,” Buolamwini said in the film. “The progress that was made in the civil rights era could be rolled back under the guise of machine neutrality.”

Cathy O’Neil, the author of Weapons of Math Destruction, similarly stated that the general public places too much trust in technology, as if it were free of flaws, when in reality it often possesses very human faults.

“I’m very worried about this blind faith we have in big data,” O’Neil said. “We need to constantly monitor every process for bias.”

What scares those fighting for algorithmic justice is that human bias, as it exists today, could be cemented into future technologies. Meredith Broussard, the author of Artificial Unintelligence, stated that the data we use to teach AI technologies is chosen by a small group of rich, white men.

“Our ideas about technology and society that we think are normal are actually ideas that come from a very small and homogeneous group of people,” Broussard said during the film. “But the problem is that everybody has unconscious biases and people embed their own biases into technology.”

Buolamwini added to this sentiment, explaining how skewed data can lead to flawed analysis by these technologies.

“If you’re thinking about data in artificial intelligence, in many ways data is destiny,” Buolamwini said. “Data’s what we’re using to teach machines how to learn different kinds of patterns, so if you have largely skewed data sets that are being used to train these systems, you can also have skewed results.”

During the Black Lives Matter protests this past summer, racist AI technology such as facial recognition software was used by police to identify “suspects,” drawing public attention to the issue.

Kantayya gave the example of London police using faulty facial recognition technology, showing officers stopping a 14-year-old Black teenager after claiming his face matched a suspect they were looking for. Because no legal protections currently prevent the mass use of facial recognition technology, many protesters have been targeted in this manner, including those at the BLM protests in the U.S.

In the film, both in interviews and in her congressional testimony, Buolamwini argued that AI regulation is absolutely necessary, given how the technology is already acting on its bigoted programming today.

“We’re at a moment where the technology is being rapidly adopted and there are no safeguards,” Buolamwini said. “It is, in essence, a wild, wild West.”

Coded Bias will premiere on PBS nationwide on March 22, 2021.