How might we uncover technological bias and prejudice, and design anti-racist interventions to fight against discriminatory systems?

Technology is biased. From infrared soap dispensers that are unable to detect darker skin, to digital cameras that ask “did someone blink?” of smiling Asian faces, to facial recognition algorithms that fare better with middle-aged white men’s faces than with those of people of color, women, children, and the elderly, society has continued to design and deploy tools that perpetuate different kinds of bias. In Race After Technology, Ruha Benjamin, Associate Professor of African American Studies at Princeton University, explains how the good intentions behind technology not only fail to stem bias and prejudice, but often exacerbate pre-existing inequalities. In June 2020, IBM, Amazon, and Microsoft halted sales of their facial recognition technology to U.S. police. Many saw this as a concession made by the tech giants in light of the national reckoning over anti-Black police violence and systemic racism. Yet even in everyday uses such as content filtering and content suggestion, discriminatory algorithms, under the guise of neutrality and objectivity, continue to put users at risk of segregation within their own echo chambers, diminishing their ability to empathize with others in a myriad of ways.

As technological bias has become more widely recognized, the tech industry and society at large have launched justice movements to dismantle these prejudices. As Benjamin argues in her book, hiding behind the neutrality and objectivity of technology are the humans who created it. Because humans are inherently biased, artificial intelligence inherits the biases contained in the data on which it is trained. Even Facebook has admitted that its algorithms “exploit the human brain’s attraction to divisiveness” and could make the social network a more polarized place. Working together with Janet Vertesi, as an extension of or in reference to her course in Spring 2021*, this team of Tiger Challenge students will investigate, design, build, and test sociotechnical systems that embrace anti-racism as a core value.

Recognizing that technology is not a panacea and cannot solve social problems by itself, students will treat sociotechnological bias as a consequence rather than a cause of those problems, and develop innovative anti-racist interventions in our technological world.

* Students interested in this challenge need not be enrolled in SOC 414 / COS 415, but the eventual team may include students from the Spring course. 


Faculty Advisor:

Janet Vertesi, Associate Professor of Sociology at Princeton University

Community Partner: