On 24 March 2016, the Basel Committee on Banking Supervision issued a controversial proposal to limit and, in some cases, remove the use of internal models to calculate capital requirements for credit risk in the banking book.
The Committee proposed to adopt a “Standardised” framework for Low-Default portfolios - such as Banks, Insurance Companies, Other Financial Institutions, and Large Companies - instead of opting for an Internal Ratings-Based (IRB) approach. This latter method is currently adopted by many banks in Europe and Asia, and by the largest banks in the US and Canada.
For other categories of obligors, the Foundation IRB approach (where only the Probability of Default needs to be estimated) would be allowed for Mid-Corporates, while the Advanced IRB method (which also requires estimation of Recovery Rates, Exposure-at-Default and Maturity) would be retained for Small and Medium Enterprises and Retail exposures. However, under these internal model methods, the Committee proposed more stringent data requirements: parameter estimates would have to cover more years of history and refer exclusively to internal default experience.
In line with other recent Committee proposals aimed at reducing variability in banks’ risk-weighted assets (under which internal models for Market Risk and Operational Risk have been partially and fully constrained, respectively), this proposal attracts attention due to the materiality of loan book exposures in banks’ portfolios. In fact, according to recent evidence1, around 75% of banks’ risk-weighted assets relate to Credit Risk in the Banking Book.
Not surprisingly, major international banks and industry associations from all over the world reacted fiercely during the consultation period2, defending the use of internal models for capital requirements purposes. Banks would like to preserve the risk sensitivity of the regulatory capital framework (after all, this was the original goal of the Basel Accord), and also their current regulatory capital levels.
Their main argument is that, thanks to the Basel 2 requirements introduced in 20063, internal default risk models, combined with fundamental analysis of large corporate exposures, are now deeply embedded in many banks’ decision-making processes: they are used to support not only lending origination and risk monitoring and management activities, but also capital planning and financial reporting under prevailing accounting standards. Internal default risk models have contributed to an enterprise risk culture that permeates the entire organization, across functions and seniority levels. It is fair to say that CEOs, CROs and CFOs currently use risk model outputs on a daily basis to inform their short-term (lending and financial reporting) and long-term (capital planning and strategy) decision making.

The practical relevance of default risk models has also been confirmed recently by the newly proposed accounting standards on credit impairments, IFRS 9 and the Current Expected Credit Loss (CECL) model, where credit risk models are contemplated for estimating expected loss provisions on performing loans and debt instruments. According to Mark Carey of the Federal Reserve Board of Governors4, “in the last 20 years models have crept into accounting”, and the only way to get a forward-looking measure of credit losses is via the use of credit models. Currently, several sophisticated banks rely on a “centralized” credit risk modelling and data framework, which is then adjusted to serve specific capital requirements, stress testing and accounting needs. Removing internal models for capital requirements would undermine these banks’ economies of scale and scope in this area.
Banks, while recognizing some data limitations affecting their credit risk model outputs, have put forward a counterproposal: a new class of regulatory capital models, the “Constrained Internal Ratings-Based” approach5. Essentially, they propose to split the default risk assessment process into two phases: 1) Risk Rank Ordering; 2) Calibration of Probabilities of Default.
In the first phase, banks would be allowed to use internal models to assess the relative riskiness of obligors (i.e., the rank ordering of risk, from lower-risk obligors to higher-risk ones), provided there is a minimum number of defaults sufficient to develop a statistically sound approach. This approach, for example, is currently adopted in the UK, where the Prudential Regulation Authority (PRA) requires banks to have a minimum of twenty (20) defaults for an Internal Ratings-Based approach6.
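To make the rank-ordering idea concrete, the discriminatory power of an internal rating model is commonly summarized by the Area Under the ROC Curve (AUC): the probability that a randomly chosen defaulter was assigned a higher risk score than a randomly chosen non-defaulter. A minimal Python sketch, using purely hypothetical scores and default flags (not drawn from any actual bank portfolio):

```python
def auc_rank_ordering(scores, defaults):
    """Probability that a randomly chosen defaulter is scored riskier
    than a randomly chosen non-defaulter (ties count as 0.5)."""
    defaulter_scores = [s for s, d in zip(scores, defaults) if d == 1]
    survivor_scores = [s for s, d in zip(scores, defaults) if d == 0]
    concordant, pairs = 0.0, 0
    for sd in defaulter_scores:
        for sn in survivor_scores:
            pairs += 1
            if sd > sn:
                concordant += 1.0   # defaulter correctly ranked riskier
            elif sd == sn:
                concordant += 0.5   # tie: no discrimination either way
    return concordant / pairs

# Hypothetical internal risk scores (higher = riskier) and default flags
scores   = [0.92, 0.85, 0.71, 0.64, 0.40, 0.33, 0.21, 0.10]
defaults = [1,    1,    0,    1,    0,    0,    1,    0]
print(auc_rank_ordering(scores, defaults))  # -> 0.75
```

An AUC of 0.5 means the model ranks no better than chance, while 1.0 means perfect separation of defaulters from survivors; the related Gini coefficient (often quoted in validation reports) is simply 2 × AUC − 1.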
In the second phase, the Regulators would carry out a “calibration” process, mapping the banks’ internal rating outputs to a “Master Scale” of Probabilities of Default - possibly split by region/industry/counterparty - using “Through-The-Cycle” data. This default experience could be based on external sources (for example, from rating agencies) or on banks’ pooled data (for example, from the European Central Bank’s upcoming AnaCredit7 Central Credit Register, or from other private default data consortiums).

Our view is that quantitative credit modelling can still be very useful for assessing and understanding the drivers of default risk, not least because empirical research has shown that relatively few data points can suffice to provide sound estimates of the probability of financial distress. For example, in 1968 Ed Altman, the pioneer of quantitative credit risk modelling, developed the first statistical default risk model, the Z-Score, for public manufacturing companies using only 33 defaults (and 33 non-defaults) and their related financial statements. Almost 50 years later, this simple and parsimonious statistical credit risk model is still widely used by market participants for trading, investment, loan origination and risk management purposes in many regions of the world.
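The 1968 Z-Score itself is just a linear combination of five financial ratios, with the published coefficients (the sales/assets coefficient of 0.999 is rounded to 1.0, as commonly quoted). A minimal Python sketch; the input figures in the example are illustrative numbers, not from any real company:

```python
def altman_z_score(working_capital, retained_earnings, ebit,
                   market_value_equity, sales,
                   total_assets, total_liabilities):
    """Altman (1968) Z-Score for public manufacturing firms.
    Conventional zones: Z < 1.81 distress, 1.81-2.99 grey, Z > 2.99 safe."""
    x1 = working_capital / total_assets       # liquidity
    x2 = retained_earnings / total_assets     # cumulative profitability
    x3 = ebit / total_assets                  # operating profitability
    x4 = market_value_equity / total_liabilities  # leverage
    x5 = sales / total_assets                 # asset turnover
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

# Illustrative balance-sheet figures (same currency units throughout)
z = altman_z_score(working_capital=50, retained_earnings=100, ebit=30,
                   market_value_equity=200, sales=300,
                   total_assets=400, total_liabilities=150)
print(round(z, 4))  # -> 2.2975 (grey zone)
```

A score in the grey zone, as in this example, signals a firm that is neither clearly safe nor clearly distressed, which is precisely why practitioners combine the Z-Score with fundamental analysis rather than using it mechanically.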
Looking at default data from S&P Global Ratings (see Figure 1), and at the ample availability in the market of financial statements for financial and non-financial firms8, it appears there is enough data to provide company-specific estimates of the Probability of Default over different time horizons. A quantitative default risk model can be a more risk-sensitive approach than a “Standardised” look-up risk weight table for regulatory capital (and also accounting) purposes. As the statistician George Box put it in 19769: “all models are wrong, some models are useful”. Perhaps a clearer view on this issue could be achieved by standing on the shoulders of these giants of quantitative modelling.
If you would like information about S&P Global Market Intelligence’s default risk models, please contact us at email@example.com.
2 In July, the Basel Committee published on its website the 74 comments received.
4 Remarks given during a panel discussion at the Risk Conference co-organized by the NYU Stern School of Business and S&P Global Market Intelligence, held in New York on 24 May 2016. For more details, see the conference takeaways report by Ed Altman and Cristiano Zazzara (2016).
5 This proposal comes from the Global Financial Markets Association (GFMA), International Swaps and Derivatives Association (ISDA), International Association of Credit Portfolio Managers (IACPM), Japan Financial Markets Council (JFMC), British Bankers Association (BBA), and Institute of International Finance (IIF).
6 See PRA (2015), “Internal Ratings Based (IRB) approaches”, Supervisory Statement, SS11/13, November.
7 The European Central Bank (ECB) adopted the AnaCredit regulation on 18 May 2016. The aim of AnaCredit (“Analytical Credit Datasets”) is to set up a dataset containing detailed information on individual loans larger than €25,000 to companies in the euro area, harmonized across all Member States. Data collection is scheduled to start in September 2018.
8 S&P Global Market Intelligence provides time series of financial statements for around 600,000 legal entities, covering public and private companies via Compustat, Capital IQ, and SNL datasets.