Reference : Building up Explainability in Multi-layer Perceptrons for Credit Risk Modeling
Scientific congresses, symposiums and conference proceedings : Paper published in a book
Engineering, computing & technology : Computer science
http://hdl.handle.net/10993/44277
Building up Explainability in Multi-layer Perceptrons for Credit Risk Modeling
English
Sharma, Rudrani mailto [University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Computer Science]
Schommer, Christoph mailto [University of Luxembourg > Faculty of Science, Technology and Communication (FSTC) > Computer Science and Communications Research Unit (CSC) >]
Vivarelli, Nicolas mailto [POST Luxembourg]
9-Oct-2020
Building up Explainability in Multi-layer Perceptrons for Credit Risk Modeling
Sharma, Rudrani mailto
2
Yes
No
International
7th IEEE International Conference on Data Science and Advanced Analytics, Student Poster Workshop
6 - 9 October 2020
IEEE DSAA
Sydney
Australia
[en] Explainability ; Deep Learning ; Artificial Neural Networks
[en] Granting loans is one of the major concerns of financial institutions due to the risk of borrower default. Default prediction by neural networks is a popular technique for credit risk modeling. Neural networks generally offer accurate predictions that help banks prevent financial losses and grow their business by approving more creditworthy borrowers. Although neural networks are capable of capturing the complex, non-linear relationships between a large number of features and the output, these models act as black boxes. This graduation project paper focuses on loan default risk prediction with a multi-layer perceptron neural network and on building up explainability to some degree in the trained network through sensitivity analysis. The architecture of the multi-layer perceptron with the best result is used to help the credit-risk manager explain why an applicant is a defaulter or non-defaulter. The prediction of a trained multi-layer perceptron is explained by mapping input features to the target variable directly, using both a model-agnostic explanation and a model-specific explanation. Lastly, the two explanation methods are compared.
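The sensitivity analysis described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the synthetic data, feature construction, and the finite-difference `sensitivity` helper below are all assumptions, shown only to indicate how perturbing each input of a trained multi-layer perceptron reveals which features drive the predicted default probability.

```python
# Hypothetical sketch: perturbation-based sensitivity analysis on an MLP
# trained for default prediction. All data and names here are synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                    # 4 synthetic applicant features
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)    # synthetic default label

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, y)

def sensitivity(model, X, eps=1e-2):
    """Mean absolute change in predicted default probability when each
    feature is nudged by eps (a simple finite-difference estimate)."""
    base = model.predict_proba(X)[:, 1]
    scores = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] += eps
        scores.append(np.mean(np.abs(model.predict_proba(Xp)[:, 1] - base)) / eps)
    return np.array(scores)

scores = sensitivity(clf, X)
print(scores)  # a larger score marks a feature the prediction is more sensitive to
```

A credit-risk manager could then rank features by these scores for a single applicant (passing a one-row `X`) to sketch why that applicant was classified as a defaulter or non-defaulter.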
Researchers

File(s) associated to this reference

Fulltext file(s):

File: Rudrani-Sharma_IEEE_DSAA2020_Conference.pdf — Publisher postprint — 76.64 kB — Limited access (request a copy)


All documents in ORBilu are protected by a user license.