Seeing What We Can't: Evaluating implicit biases in deep learning satellite imagery models trained for poverty prediction
Honors Thesis -- Open Access
Bachelor of Science (BS)
Previous studies have used Convolutional Neural Networks to estimate regional poverty levels; however, little research has examined possible implicit biases in deep neural networks applied to satellite imagery. In this work, we develop a deep learning model that predicts the tertile of per-capita asset consumption, trained on satellite imagery and World Bank Living Standards Measurement Study data. Using satellite images collected at survey locations as inputs, we apply transfer learning to train a VGG-16 Convolutional Neural Network to classify images by per-capita consumption. The model achieves an $R^2$ of 0.74 across thousands of observations from Ethiopia, Malawi, and Nigeria. Using a variety of interpretability techniques, we qualitatively analyze images to evaluate implicit biases in the model. Our results indicate that roads, urban infrastructure, and coastlines are the three human-interpretable features that most influence the predicted consumption level for a given image.
O'Brien, Joseph, "Seeing What We Can't: Evaluating implicit biases in deep learning satellite imagery models trained for poverty prediction" (2023). Undergraduate Honors Theses. William & Mary. Paper 2002.
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.