
Comparing Apples With Oranges: Probability Of Default vs. Credit Scoring Model Outputs

Fundamentals-based credit risk models usually come in two flavors, depending on the asset class they aim to cover:

  • Probability of Default (PD) models, widely used for small and medium enterprises, which are trained and calibrated on default flags.
  • Scoring models, which typically exploit the ranking power of an established rating agency to estimate the credit score of low-default asset classes, such as high-revenue corporations.


At S&P Global Market Intelligence we offer both types of statistical models:

  • PD Model Fundamentals covers publicly-listed and privately-owned corporations and banks, with no revenue and asset size limitation;
  • CreditModel™, a scoring model, covers medium and large corporations (with total revenue above $25M), and banks and insurance companies (with $100M or more in total assets).1


CreditModel and PD Model Fundamentals overlap on medium and large companies. In most instances, they will produce the same or very comparable assessments; however, at times, they can (and will) provide divergent credit risk assessments for the same companies, with a difference of several credit score notches.

This should come as no surprise, given that the models belong to different families of analytics (PD model vs. scoring model), were trained on different data sets (default flags vs. S&P Global Ratings), and are characterized by a different “DNA” (medium-term risk assessment for PD models, long-term assessment for scoring models trained on ratings).

At S&P Global Market Intelligence, as part of the Credit Analytics suite, we offer a powerful analytic tool to identify the drivers of the model output differences: the absolute contribution.
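To make the idea concrete, here is a minimal sketch of how a per-factor “absolute contribution” could be computed for a simple linear scorecard. This is an illustration under our own assumptions, not the Credit Analytics methodology: the factors, coefficients, portfolio averages, and company figures are all hypothetical.

```python
# Minimal sketch of a per-factor "absolute contribution" for a linear scorecard.
# Purely illustrative: factors, coefficients, and numbers are hypothetical and
# do not reflect the Credit Analytics methodology.

# Hypothetical coefficients (in score/log-odds space) and portfolio averages.
COEFFICIENTS = {
    "log_total_assets": -0.80,   # larger firms -> lower risk
    "ebitda_margin": -1.50,      # higher profitability -> lower risk
    "debt_to_capital": 2.10,     # higher leverage -> higher risk
}
PORTFOLIO_MEANS = {
    "log_total_assets": 8.5,
    "ebitda_margin": 0.12,
    "debt_to_capital": 0.45,
}

def absolute_contributions(inputs):
    """Split a company's score shift versus the portfolio average into
    per-factor pieces: coefficient * (input value - portfolio mean)."""
    return {
        name: COEFFICIENTS[name] * (inputs[name] - PORTFOLIO_MEANS[name])
        for name in COEFFICIENTS
    }

# A hypothetical weak company: small, unprofitable, highly levered.
company = {"log_total_assets": 5.0, "ebitda_margin": -0.05, "debt_to_capital": 0.90}

# Rank factors by the size of their contribution to the weak score.
for factor, contrib in sorted(
    absolute_contributions(company).items(), key=lambda kv: abs(kv[1]), reverse=True
):
    print(f"{factor:>18}: {contrib:+.2f}")
```

Ranking the factors by the magnitude of their contribution is what lets us single out the main drivers behind a weak output, which is exactly how the analysis below proceeds.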

Employing the absolute contribution information, we are able to identify the main drivers of weak credit risk assessment under each family of models. As shown in the table below, for non-financial corporate companies domiciled in North America, the three major common drivers of a score “worse than b-” are as follows:

[Table 1: Major common drivers of a “worse than b-” score, by model]

In all cases, we recognize a size variable, a profitability measure, and a leverage/financial flexibility measure. Yet, there are subtle differences that play an important role in driving model outputs, as discussed below.

For the same company, both models will generate the same weak output, within one notch a majority of the time,2 whenever the financial statement contains items that are weak “across the board."

On the other hand, in limited cases there will be significant differences for the same company: one model may assign a “worse than b-” score, while the other may assign a much higher score, by six or more notches. Again, looking at both the actual input values and their absolute contributions, it is easy to see why this happens; since each model looks at a limited subset of financial ratios, it is possible that a company files a financial statement with a mixed profile, one that is partly “good” and partly “bad.”

Let us look in detail at one of those cases: New York-based company Contour Global Bonaire (CGB), an operating subsidiary of Contour Power Global Holdings, which was founded in 2005.3 CGB develops, acquires, and operates electric power and district-heating resources in Africa, Europe, the Caribbean, and North and South America.

Looking at Table 2, CreditModel assigns CGB a bb+ score (top panel), while PD Model Fundamentals assigns a ccc+ score (bottom panel), six notches worse.

[Table 2: CreditModel (top panel) and PD Model Fundamentals (bottom panel) outputs for Contour Global Bonaire]

As we know from our analysis, the major and recurring drivers of a “worse than b-” score in CreditModel are Total Assets, Operating Income (before D&A) / Revenues, and Debt/Capital, which overall are not so bad for this company, as shown by their zero contribution. In this case, the absolute contribution instead highlights the other factors that the statistical model considers important for producing an even better score than the current bb+.

Conversely, the company shows relatively low Total Revenues, negative Net Income / Total Revenues, and high Current Liabilities / Net Worth, in addition to a very high Corporate Industry Risk Score and negative Return on Net Capital. Thus, PD Model Fundamentals assigns it a very high PD, which is mapped to a ccc+ score. The “devil” is in the details…

[Table 3]

So, why did we not choose the exact same set of financial ratios when training these two models? There are two main reasons:

  1. Data availability: companies may not consistently report all items in the financial statement. For example, private companies do not usually report “cash flow” items, so we cannot include these in PD Model Fundamentals – Private Companies.
  2. Model’s DNA: as mentioned at the beginning, each model is trained on a different dataset (external ratings or default flags) and optimized accordingly, by choosing the set of variables that maximizes model performance (usually ratings agreement for scoring models, and discriminatory power for probability of default models).

Thus, it is natural that we select and employ different inputs in each model.
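As a purely illustrative sketch of the two optimization criteria mentioned above, the snippet below contrasts a notch-agreement rate (the kind of criterion used for scoring models) with a basic measure of discriminatory power on default flags, here a ROC AUC computed by ranking PDs. The data, tolerance, and function names are hypothetical.

```python
# Sketch of the two different performance criteria: notch agreement against a
# benchmark rating (scoring models) versus discriminatory power on default
# flags (PD models). All data below are purely illustrative.

def notch_agreement(model_notches, benchmark_notches, tolerance=1):
    """Share of companies whose model score falls within +/- `tolerance`
    notches of the benchmark, with both scales expressed as notch indices."""
    hits = sum(
        abs(m - b) <= tolerance for m, b in zip(model_notches, benchmark_notches)
    )
    return hits / len(model_notches)

def roc_auc(pds, default_flags):
    """Basic ROC AUC: the probability that a defaulted company received a
    higher PD than a surviving one (ties count as half)."""
    pairs = [
        (pd_d > pd_s) + 0.5 * (pd_d == pd_s)
        for pd_d, d in zip(pds, default_flags) if d
        for pd_s, s in zip(pds, default_flags) if not s
    ]
    return sum(pairs) / len(pairs)

# Hypothetical example values.
print(notch_agreement([10, 12, 7, 15], [11, 12, 9, 15]))   # 0.75
print(roc_auc([0.20, 0.02, 0.15, 0.01], [1, 0, 1, 0]))     # 1.0
```

Because the two criteria reward different behavior, the variable sets that come out on top during training naturally differ as well.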

However, since each model looks at a specific set of financials, we advocate using both models when looking at the same company, in order to get a more holistic and more accurate view of credit risk:

  • When both models assign a weak credit risk profile to the same company, the company commonly has weak financials overall, and it is therefore reasonably safe to assume that doing business with that counterparty may mean “looking for trouble,” unless it matches your risk profile and appetite.
  • When model outputs diverge, it is useful to remember that CreditModel was trained on credit ratings from S&P Global Ratings, and as such its scores retain similar dynamics, being relatively static and providing a long-term view of credit risk; conversely, PD Model Fundamentals was trained on default flags, so its PD values and mapped scores are more dynamic, providing a more responsive view of changing company financials (see the simplified mapping sketch after this list).4
  • In cases of marked divergence, it may be worth complementing the analysis with additional information available on the S&P Capital IQ Platform: analyzing financials, reviewing news and key developments, looking at complementary market signals (where available), considering the debt structure and the maturity schedule of all liabilities, adding a peer comparison analysis via Credit Health Panel,5 and keeping in mind the time horizon of the intended business deal.
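To illustrate what “PD values mapped to scores” means in practice, here is a simplified sketch that maps a PD to a lowercase score band using an invented, monotonic set of PD thresholds. The bands and cut-offs are hypothetical and are not the mapping used by PD Model Fundamentals.

```python
# Sketch of mapping a PD value to a lowercase score band via PD thresholds.
# The bands and cut-offs below are invented for illustration only.
import bisect

# Upper PD bound for each score band (hypothetical, strictly increasing).
PD_BOUNDS = [0.0003, 0.001, 0.004, 0.01, 0.03, 0.08, 0.20, 1.00]
SCORES = ["aa", "a", "bbb", "bb", "b", "b-", "ccc+", "ccc and below"]

def pd_to_score(pd: float) -> str:
    """Return the lowercase score band whose PD range contains `pd`."""
    return SCORES[bisect.bisect_left(PD_BOUNDS, pd)]

print(pd_to_score(0.002))   # "bbb"  (hypothetical)
print(pd_to_score(0.15))    # "ccc+" (hypothetical)
```

Because a small move in PD can cross a band boundary, mapped scores react more quickly to changing financials than ratings-trained scores typically do.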

Learn more about S&P Global Market Intelligence’s Credit Analytics models.


1 S&P Global Ratings does not contribute to or participate in the creation of credit scores generated by S&P Global Market Intelligence. Lowercase nomenclature is used to differentiate S&P Global Market Intelligence PD credit model scores from the credit ratings issued by S&P Global Ratings.

2 By weak, we refer to a “worse than b-” score. CreditModel / PD Model Fundamentals agreement for companies with CM score “worse than b-” is circa 70%, within 1 notch. PD Model Fundamentals / CreditModel agreement for companies with PD Model Fundamentals score “worse than b-” is circa 50%, within 1 notch.

3 Source: S&P Capital IQ Platform, as of 1 November 2016.

4 This is also reflected in the choice of financials. For example, in PD Model Fundamentals – Private Companies, short-term liabilities are included in one of the inputs.

5 Credit Health Panel is available on the S&P Capital IQ Platform.
