On the accuracy of loss-given-default prediction intervals
Purpose – The purpose of this paper is to critically analyze the common assumption, made by many credit risk models such as the Moody's KMV LossCalc model, of a beta distribution for loss given default (LGD). The paper shows that this assumption performs poorly when used to construct analytic prediction intervals for LGD.
Design/methodology/approach – Simulation experiments were conducted to highlight the potential problems associated with this distributional assumption when constructing prediction intervals for LGD.
Findings – The simulation experiments show that, when the population is generated under a different assumption about the shape of its distribution, the beta distribution does not perform well in constructing prediction intervals for LGD.
Originality/value – The analysis performed in this study addresses a relevant subject. Indeed, a correct estimate of a credit exposure's LGD is particularly relevant not only for internal risk management purposes, but also for regulatory reasons within the context of the internal ratings-based approach of the recently approved capital regulation framework (Basel II).
Volume (Year): 10 (2009)
Issue (Month): 2 (March)
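The experiment the abstract describes can be sketched in a few lines. The following is a hedged illustration, not the paper's actual simulation design: the mixture population (point masses at 0 and 1 plus a Beta(2, 2) interior), the method-of-moments fit, and the 90% nominal level are all assumptions chosen to show why a beta fit can produce poorly calibrated analytic prediction intervals for LGD.

```python
# Illustrative sketch (assumed setup, not the paper's experiment):
# empirical LGD samples often pile up at 0 (full recovery) and
# 1 (total loss).  A fitted beta distribution has no point masses,
# so its analytic prediction interval can badly under-cover.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 20_000

# Assumed "true" LGD population: 30% full recoveries (LGD = 0),
# 20% total losses (LGD = 1), 50% interior losses ~ Beta(2, 2).
u = rng.random(n)
lgd = np.where(u < 0.3, 0.0,
               np.where(u < 0.5, 1.0, rng.beta(2.0, 2.0, n)))

# Fit a beta distribution by the method of moments,
# mirroring the common modelling assumption.
m, v = lgd.mean(), lgd.var()
k = m * (1.0 - m) / v - 1.0
a, b = m * k, (1.0 - m) * k

# Nominal 90% analytic prediction interval from the fitted beta.
lo, hi = stats.beta.ppf([0.05, 0.95], a, b)

# Empirical coverage: the interval lies strictly inside (0, 1), so it
# excludes the atoms at 0 and 1 and falls well short of the nominal 90%.
coverage = float(np.mean((lgd >= lo) & (lgd <= hi)))
print(f"nominal 0.90, empirical {coverage:.3f}")
```

Under this assumed population, half the probability mass sits exactly at 0 or 1, which the fitted beta interval cannot contain, so the empirical coverage is far below the nominal level, which is the kind of breakdown the paper's simulation experiments document.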
Publisher: Emerald Group Publishing, Howard House, Wagon Lane, Bingley, BD16 1WA, UK (http://www.emeraldinsight.com)
Handle: RePEc:eme:jrfpps:v:10:y:2009:i:2:p:131-141