Date of Thesis

Spring 2022


Abstract

As black box algorithms continue to penetrate an increasing number of fields, individuals place significant trust in models whose decision-making processes they do not understand. Mortgage applicants nationwide receive decisions that drastically affect their financial futures from algorithms that remain confidential. Recent work in machine learning emphasizes explainability and interpretability as methods for gaining insight into black box models. Applying both concepts to the Home Mortgage Disclosure Act (HMDA) dataset, we explore the algorithms currently making mortgage lending decisions by creating counterfactual explanations for denied applicants and by training decision tree classifiers that reverse-engineer the models. After analyzing the results on interpretability and explainability, we endorse building interpretable models over depending on post hoc explainability techniques to audit an algorithm's decision-making process. This investigation also highlights inconsistent lending decisions from the mortgage algorithms that judge applicants across the United States.


Keywords

Analytics, Mortgages, Explainability, Interpretability

Access Type

Honors Thesis (Bucknell Access Only)

Degree Type

Bachelor of Science in Business Administration


Department

Analytics and Operations Management

First Advisor

Thiago Serra

Second Advisor

Mihai Banciu