Fairness through Adversarially Reweighted Learning

As machine learning becomes ubiquitous in society, a tension has formed between the need for privacy over sensitive demographic information and the need for demonstrably fair and unbiased models. For this project, we implement a fair recidivism prediction model using Adversarially Reweighted Learning (ARL), which attempts to achieve fairness outcomes comparable to those of similar models without using protected labels. Following the implementation, we compare fairness metrics across techniques to determine whether the same level of fairness can be achieved without collecting sensitive data. This project consists of researching fairness and bias measures, reassessing the assumptions behind existing models, and building on a proposed ARL model to address fairness concerns in a novel way; a minimal sketch of the ARL objective appears below. The technical portion consists of our implementation of this approach as well as the methods developed for comparing results. Finally, we analyze the approaches and discuss potential future research directions on this topic.
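At the heart of ARL is a two-player min-max game: a learner minimizes a reweighted training loss, while an adversary, which sees only the non-protected features and the label, shifts example weights toward regions where the learner's errors are computationally identifiable. The sketch below illustrates that objective in PyTorch under stated assumptions: the layer sizes, learning-rate choices, and names such as `Learner`, `Adversary`, and `train_step` are illustrative and are not the configuration used in our repository.

```python
# Minimal sketch of the ARL min-max objective, assuming tabular features `x`
# (float tensor, shape [n, d]) and binary labels `y` (float tensor of 0./1.).
# Architecture and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

class Learner(nn.Module):
    """Predicts recidivism from non-protected features only."""
    def __init__(self, n_features):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x):
        return self.net(x).squeeze(-1)  # logits

class Adversary(nn.Module):
    """Assigns per-example weights from (features, label); no protected attributes."""
    def __init__(self, n_features):
        super().__init__()
        self.net = nn.Linear(n_features + 1, 1)

    def forward(self, x, y):
        score = torch.sigmoid(self.net(torch.cat([x, y.unsqueeze(-1)], dim=-1)))
        score = score.squeeze(-1)
        # Normalize so every example keeps a baseline weight of 1 and the
        # adversary redistributes an extra unit of mass per example.
        return 1.0 + len(x) * score / score.sum()

def train_step(learner, adversary, opt_l, opt_a, x, y):
    bce = nn.BCEWithLogitsLoss(reduction="none")

    # Learner step: minimize the adversarially weighted loss
    # (weights detached so gradients flow only to the learner).
    weights = adversary(x, y).detach()
    loss_l = (weights * bce(learner(x), y)).mean()
    opt_l.zero_grad(); loss_l.backward(); opt_l.step()

    # Adversary step: maximize the same weighted loss, i.e. gradient
    # ascent via minimizing its negation (learner loss detached).
    loss_a = -(adversary(x, y) * bce(learner(x), y).detach()).mean()
    opt_a.zero_grad(); loss_a.backward(); opt_a.step()
    return loss_l.item()
```

Because the adversary never receives protected attributes, any group it upweights must be identifiable from the remaining features alone; this is what lets ARL pursue fairness without collecting sensitive demographic data.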

Github Repository: https://github.com/aderbique/cse598-ai-ethics
Overleaf LaTeX Document: https://www.overleaf.com/read/nsqfzyszhjkt
