How might we uncover technological bias and prejudice, and design interventions to dismantle racist systems?


Technology is biased. From infrared soap dispensers that fail to detect darker skin, to digital cameras that ask “did someone blink?” of smiling Asian faces, to facial recognition algorithms that fare better with middle-aged white men’s faces than with those of people of color, women, children, and the elderly, society has continued to design and deploy tools that perpetuate different kinds of bias. In Race After Technology, Ruha Benjamin, Associate Professor of African American Studies at Princeton University, explains how the good intentions behind technology not only fail to stem bias and prejudice but can often exacerbate pre-existing inequalities. In June 2020, IBM, Amazon, and Microsoft halted sales of their facial recognition technology to U.S. police. Many saw it as a concession by the tech giants in light of the national reckoning over anti-Black police violence and systemic racism. Yet even in everyday uses such as content filtering and content suggestion, discriminatory algorithms, under the guise of neutrality and objectivity, continue to put the people on the receiving end at risk of segregation within their own echo chambers, diminishing their ability to empathize with others in a myriad of ways.


As technological bias comes to be recognized, individuals, the tech industry, and society at large have launched justice movements to dismantle these prejudices. As Benjamin argues in her book, hiding behind the supposed neutrality and objectivity of technology are the humans who created it. Because humans are inherently biased, artificial intelligence inherits those biases through the data on which it is trained. Even Facebook has admitted that its algorithms “exploit the human brain’s attraction to divisiveness” and could make the social network a more polarized place. Working with Janet Vertesi, as an extension of or in reference to her course in Spring 2021*, this team of Tiger Challenge students will investigate, design, build, and test sociotechnical systems that embrace anti-racism as a core value.

Recognizing that technology is not a panacea and cannot solve social problems by itself, students will treat sociotechnical bias as a consequence rather than a cause of these problems, and develop innovative anti-racist interventions in our technological world.


* Students interested in this challenge need not be enrolled in SOC 414 / COS 415, but the eventual team may include students from the Spring course. 


Faculty Advisor:

Janet Vertesi, Associate Professor of Sociology at Princeton University


Community Partner:

TBD

What is the team working on now? In the team's own words:

Our project is “Everyday Racist Technologies”, but since this theme is so broad—spanning so many potential directions and industries—we decided to look through the specific lens of affordable housing to get started. We landed on affordable housing after looking at a previous student group’s project on education. Their investigation of the inequality of school district funding in NJ led us to consider how communities are historically segregated both racially and socioeconomically, and the role that housing has played in doing so. 

We began by scheduling and conducting interviews with professors from various academic backgrounds, such as architecture and computer science, as well as real estate developers, social workers, and other relevant people throughout the Princeton area, to better understand the affordable housing industry within New Jersey. Through this process, we learned about the rules and regulations surrounding the industry, the lack of legislation at the state level, the lack of information resources for people seeking affordable housing, and how racial discrimination permeates the industry. For a while, we examined the various factors (including credit scoring) that go into an affordable housing application and how they might disproportionately affect communities of color, later shifting our focus to the lives of people who have already been accepted into affordable housing communities. In doing so, we were advised to research ways to reduce living costs for people experiencing poverty and to build community for those in need.

One of our biggest struggles was talking to affected people themselves—people who have gone through the application process or who are currently living in affordable housing. On top of this, the lack of statewide legislation has made it extremely difficult for many social workers to make substantial improvements, since they cannot strongly regulate the decisions of the private developers who build affordable housing.

Due to these challenges, as well as our collective interest in exploring more explicit forms of technology that our team could address, we have revisited our affinity maps to look for ways to pivot to a different but related topic more rooted in technology. While we are still investigating potential topics, some ideas that have been brought to our attention are algorithmic bias in the medical industry and medical devices, technologies used in policing (facial recognition, body cameras, etc.), and products or experiences that counter discrimination in order to cultivate empathy.


 
