Date Thesis Awarded

5-2022

Access Type

Honors Thesis -- Open Access

Degree Name

Bachelor of Science (BS)

Department

Psychology

Advisor

Cheryl Dickter

Committee Members

Reya Farber

Jessica Stephens

Ashleigh Everhardt Queen

Paul Davies

Abstract

More than 180 cognitive biases have been identified in humans, and these biases relate to feelings towards a person or a group based on perceived group membership (Dilmegani, 2020). The development of artificial intelligence has fallen largely to engineers and statisticians, who work in fields with well-established racial and gender diversity disparities (Panch et al., 2019). It is therefore unsurprising that these biases have made their way into the algorithms behind artificial intelligence. The current study explored how participants’ pre-existing biases and level of outgroup contact may affect their decision-making in contexts relevant to the development of artificial intelligence algorithms. College student participants viewed pictures of faces varying in racial identity (i.e., Black and White) on a computer screen and were asked to make decisions relevant to tasks that artificial intelligence algorithms are being programmed to perform. Eye movements were recorded to investigate implicit attention to the faces. Results indicated that eye-tracking patterns differed as a function of race when participants made decisions about hitting pedestrians while driving and during facial recognition. Outgroup contact did not moderate these effects. This study has implications for how implicit patterns of attention may manifest in human decision-making in the context of programming artificial intelligence.
