As black box algorithms continue to penetrate an increasing number of fields, individuals place significant trust in models whose specific decision-making processes they cannot examine. Mortgage applicants nationwide receive decisions that drastically affect their financial futures from algorithms that remain confidential. Recent work in machine learning emphasizes explainability and interpretability as methods for gaining insight into black box models. Applying concepts of both explainability and interpretability to the Home Mortgage Disclosure Act (HMDA) dataset, we explore the algorithms currently making mortgage lending decisions by creating counterfactual explanations for denied applicants and decision tree classifiers that reverse-engineer the models. After analyzing results on interpretability and explainability, we endorse building interpretable models over depending on post hoc explainability techniques to audit an algorithm’s decision-making process. Additionally, this investigation of interpretability and explainability highlights inconsistent lending decisions from the mortgage algorithms that judge applicants across the United States.
Analytics, Mortgages, Explainability, Interpretability
Honors Thesis (Bucknell Access Only)
Bachelor of Science in Business Administration
Analytics and Operations Management
Pesacreta, Mark, "Breaking Into the Black Box: Investigating Proprietary Machine Learning Mortgage Algorithms" (2022). Honors Theses. 612.