Breaking or Reinforcing the Cycle? Longitudinal Impacts of Bias-Correction Techniques on Feedback Loops and Sustained Financial Inclusion in Machine Learning Credit Scoring

Authors

  • Rajitha Gentyala, Frisco, Texas, USA

DOI:

https://doi.org/10.63282/3117-5481/AIJCST-V6I5P105

Keywords:

Machine Learning, Credit Scoring, Feedback Loops, Bias Correction, Financial Inclusion, Performative Prediction

Abstract

In machine learning–driven credit scoring, fairness interventions can produce unintended long-term effects because models reshape borrower behavior and data over time. Building on Juan C. Perdomo et al.’s performative prediction framework and Pagan et al.’s classification of feedback loops (sampling, feature, outcome, and model loops), this study examines how bias-correction techniques interact with dynamic lending environments. Using 1.2 million U.S. loan applications (2018–2024) plus synthetic emerging-market simulations, we modeled multi-cycle credit systems where decisions feed back into future borrower data. We evaluated adversarial debiasing, pre-processing reweighting, causal proxy mitigation, and threshold adjustments across metrics such as demographic parity, AUC-ROC, Brier scores, credit score progression, and inclusion indices. Results show stark trade-offs. Simple threshold adjustments increased initial approvals for Black and Hispanic applicants by 12–15%, but by later cycles, feedback effects widened disparities by 22% due to proxy discrimination and degraded alternative data. In contrast, dynamic resampling aligned with feedback-aware modeling sustained an 18% equity uplift with less than 3% rise in default rates, even under downturn simulations. Overall, static fairness fixes can backfire. Longitudinal, system-level design—combined with multi-horizon stress testing and richer data-sharing—is essential to achieve durable financial inclusion rather than short-term fairness gains.

References

[1] J. Perdomo, T. Zrnic, C. Mendler-Dünner, and M. Hardt, "Performative prediction," in Proc. 37th Int. Conf. Mach. Learn. (ICML), virtual, Jul. 2020, pp. 7599–7609.

[2] N. Pagan, J. Baumann, E. Elokda, G. De Pasquale, S. Bolognani, and A. Hannák, "A classification of feedback loops and their relation to biases in automated decision-making systems," in Proc. 2023 AAAI/ACM Conf. AI, Ethics, Society (AIES), Montréal, QC, Canada, Aug. 2023, pp. 1–12.

[3] M. Hardt, E. Price, and N. Srebro, "Equality of opportunity in supervised learning," in Adv. Neural Inf. Process. Syst. (NeurIPS), 2016, pp. 3315–3323.

[4] J. Kleinberg, S. Mullainathan, and M. Raghavan, "Inherent trade-offs in the fair determination of risk scores," in Proc. Innov. Theor. Comput. Sci. Conf. (ITCS), 2017, pp. 43:1–43:23.

[5] A. Chouldechova, "Fair prediction with disparate impact: A study of bias in recidivism prediction instruments," Big Data, vol. 5, no. 2, pp. 153–163, 2017.

[6] S. Barocas, M. Hardt, and A. Narayanan, Fairness and Machine Learning: Limitations and Opportunities. FairML Book, 2019.

[7] S. Corbett-Davies and S. Goel, "The measure and mismeasure of fairness: A critical review of fair machine learning," arXiv preprint arXiv:1808.00023, 2018.

[8] R. Berk, H. Heidari, S. Jabbari, M. Joseph, and M. Kearns, "Fairness in criminal justice risk assessments: The state of the art," Sociol. Methods Res., vol. 50, no. 1, pp. 3–44, 2018.

[9] L. Liu, S. Dean, E. Rolf, M. Simchowitz, and M. Hardt, "Delayed impact of fair machine learning," in Proc. 35th Int. Conf. Mach. Learn. (ICML), 2018, pp. 3150–3158.

[10] N. Kallus, X. Mao, and A. Zhou, "Assessing algorithmic fairness with unobserved protected class using data combination," Manage. Sci., vol. 66, no. 9, pp. 3779–3796, 2020.

[11] A. Fuster, P. Goldsmith-Pinkham, T. Ramadorai, and A. Walther, "Predictably unequal? The effects of machine learning on credit markets," J. Finance, vol. 77, no. 1, pp. 5–47, 2022.

[12] M. Hurley and J. Adebayo, "Credit scoring in the era of big data," Yale J. Law Technol., vol. 18, no. 1, pp. 148–216, 2017.

Published

2024-09-17

Section

Articles

How to Cite

[1]
R. Gentyala, “Breaking or Reinforcing the Cycle? Longitudinal Impacts of Bias-Correction Techniques on Feedback Loops and Sustained Financial Inclusion in Machine Learning Credit Scoring”, AIJCST, vol. 6, no. 5, pp. 44–56, Sep. 2024, doi: 10.63282/3117-5481/AIJCST-V6I5P105.
